Reasons why Facebook is Discriminating…

Online advertising’s been built around the ability to know exactly who you are. Are you a guitar-playing, cat-owning botanist? You’ll get served ads for guitar-shaped catnip hedge grow kits. Google, Twitter, and all other digital platforms where an advertisement can be served—with Facebook leading the charge—want to know everything about you, usually already do, and use that information to convince marketers that the ads you’ll see are perfectly targeted to whoever you are (or aren’t).

That’s an intense stripe of power. And maybe that’s fine—even, potentially, helpful—when advertisers are dividing people by shopping habits, or hobbies. But when it comes to identifiers like race or ethnicity, that targeting can devolve into discrimination that cements harmful societal norms. At worst, it can perpetuate blatant prejudice that can affect critical issues in our lives, like housing and hiring.

On Friday, Facebook finally admitted it’d been on the wrong side of that line. The company said it would stop allowing advertisers to exclude certain “ethnic affinities” in categories like jobs and housing.

The ability to exclude people by “race” will still be available to the vast majority of Facebook advertisers, though. Facebook still claims the tool is too useful to kill—that its upside outweighs its downside.

Others aren’t so sure.

The entire fracas is just the latest in a running series of online discrimination incidents, some even subtler than Facebook’s. The quiet discrimination of online advertising raises questions about the fairness of targeting users via personal data points, and the lengths marketers are willing to go to do so.

It’s a dilemma that ties back to one of the biggest challenges that has always faced the ad industry: How do you sell to society in specific ways without reinforcing our ugliest tendencies?

Ethnic ‘affinities’

An October 2016 ProPublica investigation found that the way Facebook packages users could violate civil rights laws that ban discrimination in housing ads.

The reporters discovered an option within the platform that lets brands cater to (or exclude) people by ethnic affinity—in other words, users the company has judged to have a distinct affinity with African-Americans, Asian-Americans, Hispanics or other racial groups.

Facebook countered that the tool looks for “affinity” rather than a stated race, and thus, doesn’t break laws governing ad discrimination.

John Relman, a prominent civil rights lawyer, emphatically disagreed when ProPublica plied him with the same question.

“This is horrifying. This is massively illegal,” he told the outlet. “This is about as blatant a violation of the federal Fair Housing Act as one can find.”

The site also pointed out that the options appear under Facebook’s “demographics” heading, to which the social network replied that it planned to move them to a different category. In the end, Facebook made only the minimum changes needed to stay inside the law.

When machines discriminate

Not every example of ad discrimination is as blatant as Facebook’s.

Sometimes bias is the consequence of cold machine learning. Algorithms evolve and find patterns on their own that might fly in the face of what their creators intended. Researchers at Carnegie Mellon University, for instance, found last year that Google showed ads for jobs paying $200,000 or more to far fewer women than men.

More troubling yet, Google-hosted job listings in STEM fields also disproportionately sought out males.

An ad tech firm helped researchers trace the decision-making process back to a so-called “black-box” system—an opaque targeting script with little human supervision or input.

“There’s really very little transparency into how these decisions are getting made and whether there are societal values either intentionally or unintentionally being compromised in the process,” says Anupam Datta, a computer science professor who helped build the software responsible for the findings.
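
How could outsiders even detect that kind of skew? Here is a minimal sketch of the idea behind such an audit, written in Python. It is not the researchers’ actual tool, and every name and count in it is invented: two groups of simulated profiles that differ only in gender, a tally of who was shown the high-salary ad, and a standard statistical test on the gap.

```python
# Hypothetical sketch (not the CMU tool or its data): audit a black-box
# ad system by showing it two groups of simulated profiles that differ
# only in gender, then testing whether the high-salary job ad is shown
# to one group significantly more often. All counts are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(shown_a, n_a, shown_b, n_b):
    # Two-sided two-proportion z-test on "was the ad ever shown?"
    p_a, p_b = shown_a / n_a, shown_b / n_b
    pooled = (shown_a + shown_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented audit: 1,000 "male" and 1,000 "female" simulated profiles,
# and how many in each group ever saw the $200k+ job ad.
z, p = two_proportion_z(shown_a=402, n_a=1000, shown_b=60, n_b=1000)
print(f"z = {z:.1f}, p-value = {p:.3g}")
# A vanishing p-value says the skew isn't noise. What it can't say is
# *why* -- which is exactly the transparency gap Datta describes.
```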

It’s unclear whether the algorithm observed traits or tendencies in Google’s cache of user data that led it to act this way, or if it was a function of its programming. It seems Google might not even know what happened.

While no laws were broken, the episode illustrated how easily self-teaching programs can end up reinforcing, say, the gender pay gap and stereotypes about which professions are supposedly more suitable for women or men.
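
To make that amplification concrete, here is a toy simulation, with all numbers invented and no relation to Google’s real systems: a click-optimizing ad selector inherits a one-point gap in historical click rates and, by chasing clicks, turns it into a huge gap in who ever sees the ad.

```python
# Toy illustration: an epsilon-greedy ad selector amplifies a small
# historical skew in click rates into a lopsided exposure gap.
# Hypothetical groups and rates; nothing here is real data.
import random

random.seed(42)

GROUPS = ["group_a", "group_b"]
# Invented history: group_a clicks this job ad slightly more often,
# perhaps reflecting past hiring patterns rather than actual interest.
TRUE_CLICK_RATE = {"group_a": 0.06, "group_b": 0.05}

shows = {g: 0 for g in GROUPS}
clicks = {g: 0 for g in GROUPS}

def observed_rate(g):
    return clicks[g] / shows[g] if shows[g] else 0.0

for _ in range(100_000):
    if random.random() < 0.05:          # explore 5% of the time
        g = random.choice(GROUPS)
    else:                               # otherwise exploit the "better" group
        g = max(GROUPS, key=observed_rate)
    shows[g] += 1
    if random.random() < TRUE_CLICK_RATE[g]:
        clicks[g] += 1

for g in GROUPS:
    print(f"{g}: shown {shows[g]:,} times (observed CTR {observed_rate(g):.3f})")
# A 0.06-vs-0.05 difference in clicks becomes a roughly 97%-vs-3% split
# in exposure: the optimizer, not any programmer, created the disparity.
```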

These kinds of biases can manifest in more latent ways, too. Shades of sexism like gender-coded language or postings soliciting someone who’s the “right fit” might be amplified by perceptive artificial intelligence.

The problem there? The tools with potential for discriminatory abuse are the same ones needed for legitimate efforts to rope in diverse communities.

“In advertising you’re trying to reach out to specific groups,” says Wendy Stryker, an employment lawyer at New York law firm Frankfurt Kurnit Klein & Selz. “How do you do that without violating some of these [discrimination] laws?”

Stryker says there are clear laws dictating what constitutes discrimination in advertising to job applicants, as there are for home-seekers. As the above case shows, those rules don’t always account for everything.

Racist searches

“Have you ever been arrested?” Harvard data privacy professor Latanya Sweeney begins her powerful 2013 study, examining how ads placed on Google and Reuters treat race. “Imagine the question not appearing in the solitude of your thoughts as you read this paper, but appearing explicitly whenever someone queries your name in a search engine.”

In her research, Sweeney found that searches for “black-sounding” names turned up significantly more sponsored results and display ads mentioning things like criminal records and background checks than searches for other names did.

A Google spokesperson denied that the search engine does any sort of racial profiling, pointing to a policy that bans advertisers from advocating against any one group or subset of people. Background check site Instantcheckmate.com, one of the primary brands named, also swore it didn’t use any technology that could connect a name to a race.

If Sweeney’s right, her findings demonstrate a particularly egregious instance of prejudice. Yet such problems also connect back to a defining dilemma that has plagued socially aware advertisers for years: advertising has always been caught between reflecting and advancing the culture it feeds on.

Advertising, by nature, plays off of sometimes-stale generalizations and group commonalities in order to better appeal to specific parts of the population, leading to a reputation as “the most prevalent and toxic of the mental pollutants,” in the words of Canadian activist Kalle Lasn.

But more groundbreaking work can also change societal perceptions and split open otherwise untapped markets. Consider a Spanish-language campaign geared toward Hispanic web surfers—the kind of diversity outreach for which Facebook says its race tools were rightly intended.

On the web, where all ads are hyper-personal and privacy is something of an illusion, the contrast between marketers’ intentions and their practices is intensified far beyond anything in the offline world.

But the responsibility ultimately falls on Facebook to design targeting tools that enable diverse advertising while preventing the more insidious abuses they make possible. And on Friday, Facebook took a step—however minimal—toward addressing that challenge. The question remains: What else, if anything, will the company do about it?

(Source: Mashable) 
