Social media platforms are transforming how online advertising works and, in turn, raising concerns about new forms of discrimination and predatory marketing.
Today, the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) – a multi-university research centre led by RMIT – launched the Australian Ad Observatory. This research project will explore how platforms target Australian users with advertisements. The aim is to foster a conversation about the need for public transparency in online advertising.
The rise of “dark ads”
In the age of mass media, advertising was (largely) public. That meant it was open to scrutiny. When advertisers behaved illegally or irresponsibly, the results were visible to many.
And the history of advertising is riddled with irresponsible behavior. We have seen tobacco and alcohol companies engage in predatory targeting of women, minors and socially disadvantaged communities. We have seen the use of sexist and racist stereotypes. More recently, the circulation of disinformation has become a major concern.
When such practices take place in the open, they can be challenged by media watchdogs, citizens and regulators. On the other hand, the rise of online advertising – which is tailored to individuals and delivered to personal devices – reduces public accountability.
These so-called “dark” advertisements are visible only to the targeted user. They are difficult to track because an ad may appear only a few times before disappearing. What's more, users have no way of knowing whether the ads they see are also served to other people, or whether they are being targeted on the basis of their personal data.
There is a lack of transparency regarding the automated systems Facebook uses to target users with ads, as well as the recommendations it provides to advertisers.
In 2017, ProPublica investigative journalists were able to purchase a test Facebook ad targeting users associated with the term “hate Jews.” In response to the attempted ad purchase, Facebook’s automated system suggested additional targeting categories, including “how to burn Jews.”
Facebook deleted the categories after being confronted with the findings. Without such outside scrutiny, they might have persisted indefinitely. Researchers’ concerns about dark ads continue to grow. In the past, Facebook has allowed housing, credit and employment ads to be targeted on the basis of race, gender and age.
This year, Facebook was found to be running targeted ads for military equipment alongside posts about the attack on the United States Capitol. It also allowed ads designed to suppress voter turnout among African Americans during the 2016 US presidential campaign.
Public support for transparency
It is not always clear whether these offences are deliberate. Nonetheless, they have become a recurring feature of the vast automated ad-targeting systems used by commercial digital platforms, and the risks of harm are pervasive – deliberate or not.
Most of the examples of problematic Facebook ads come from the United States, as this is where most of the research on this issue is conducted. But it is just as important to look at the issue in other countries, including Australia. And Australians agree.
A study released Tuesday, conducted by Essential Media on behalf of the ADM+S Centre, found strong support for transparency in advertising. More than three-quarters of Australian Facebook users responded that Facebook “should be more transparent about how it distributes advertising on its news feed.”
To this end, the Australian Ad Observatory has developed a version of an online tool created by ProPublica to allow members of the public to anonymously share the advertisements they receive on Facebook with journalists and researchers.
The tool will allow us to see how advertisements are targeted to Australians based on demographic characteristics such as age, ethnicity and income. It’s available as a free plugin that anyone can install on their web browser (and can be removed or disabled at any time).
It is important to note that the plugin does not collect any personally identifying information. Participants are encouraged to provide basic, non-identifying demographic information when they sign up, but this is voluntary. The plugin only captures the text and images of advertisements labelled as “sponsored content” that appear in users’ news feeds.
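In broad terms, a tool like this scans the posts a participant's browser has already loaded and keeps only the ones flagged as sponsored, stripping out everything else. The sketch below illustrates that filtering step only; it is not the Ad Observatory's actual code, and the post structure and "Sponsored" label are assumptions for the example.

```javascript
// Illustrative sketch of the filtering idea: from a list of feed posts,
// keep only those labelled "Sponsored", and retain just the ad's text
// and image URLs (no user-identifying fields). The post shape here is
// hypothetical, not the Ad Observatory's real schema.
function extractSponsoredAds(posts) {
  return posts
    .filter((post) => post.label === "Sponsored")
    .map((post) => ({
      text: post.text,
      imageUrls: post.imageUrls || [],
    }));
}

// Example: of two posts, only the sponsored one is kept.
const feed = [
  { label: "Sponsored", text: "Buy now!", imageUrls: ["https://example.com/ad.png"] },
  { label: "Friend post", text: "Holiday photos" },
];
console.log(extractSponsoredAds(feed)); // only the "Buy now!" ad
```

In a real browser extension this logic would run in a content script over the rendered page, but the privacy-relevant design choice is the same: discard non-sponsored posts and personal fields before anything leaves the browser.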
Facebook’s online advertising library provides some level of visibility into its targeted advertising practices, but it is not exhaustive.
The Ad Library provides only limited information on how ads are targeted, and excludes certain ads based on the number of people they reached. It is also unreliable as an archive, since ads disappear once they are no longer active.
The need for public interest research
Despite its past failures, Facebook has been hostile to outside attempts to secure accountability.
For example, it recently asked researchers at New York University to halt their research into how political ads are targeted on Facebook. When they refused, Facebook cut off their access to its platform.
The tech company claimed it had to shut down the research because it was bound by a settlement with the United States Federal Trade Commission over past privacy breaches.
However, the Federal Trade Commission publicly rejected this claim and emphasised its support for public interest research intended “to shed light on opaque business practices, particularly around surveillance-based advertising.” Platforms should be required to provide universal transparency about how they deliver advertising.