Illustration by Alex Castro / The Verge
We’ve spent the past couple weeks looking at the clash of social networks and democracy in the United States. So let’s turn our attention abroad.
Assam is a state in India that is home to a large population of Bengali Muslims. India’s ruling party, the Bharatiya Janata Party (BJP), is Hindu nationalist. In August, after six years of development, the government released a controversial national register of citizens that omitted 1.9 million residents, many of them Muslim. The government has presented the project as part of an effort to expel “infiltrators,” but the overall effect has been to create an environment of fear for minorities in Assam, many of whom are poor.
None of that is Facebook’s doing. But as we have seen before in other countries where ethnic tensions are running high, the platform has become what the human rights group Avaaz is calling a “megaphone for hate,” in a report released Tuesday. Here’s Pranav Dixit at BuzzFeed:
Comments and posts that called Bengali Muslims “pigs,” “terrorists,” “dogs,” “rapists,” and “criminals,” — seemingly in violation of Facebook’s standards on hate speech — were shared nearly 100,000 times and viewed at least 5.4 million times, showed the Avaaz review, which covered 800 Facebook posts related to Assam. As of September, Facebook had removed just 96 of the 213 posts and comments that the organization reported, including calls to poison Hindu girls to prevent Muslims from raping them. […]
“Facebook is being used as a megaphone for hate, pointed directly at vulnerable minorities in Assam, many of whom could be made stateless within months,” Alaphia Zoyab, senior campaigner at Avaaz, said in a statement. “Despite the clear and present danger faced by these people, Facebook is refusing to dedicate the resources required to keep them safe. Through its inaction, Facebook is complicit in the persecution of some of the world’s most vulnerable people.”
The report is unfortunately not publicly available, and even if it were, I can’t read Assamese. It’s worth noting that Facebook does not agree with Avaaz’s contention that everything the group found is hate speech. (“We have clear rules against hate speech, which we define as attacks against people on the basis of things like caste, nationality, ethnicity and religion, and which reflect input we received from experts in India,” the company told Dixit.) And fortunately, as best as I can tell, nothing in the report links the spread of hate speech in Assam to real-world violence.
Still, reports of rising xenophobia and hate speech on social platforms will always make me nervous. It was only a year ago, after all, that a mob in the Indian village of Rainpada beat five strangers to death over a rumor started on WhatsApp. And it was only 18 months ago that United Nations human rights investigators said that Facebook had played a role in spreading hate speech in Myanmar during that country’s genocide against the Rohingya Muslim minority.
In the aftermath of the Rohingya tragedy, the UN Office of the High Commissioner for Human Rights issued an excellent, nuanced report on the conflict. (I wrote about it at the time.) One thing I took from the report is this excellent suggestion, which to my knowledge no social platform has ever taken:
Before entering any new market, particularly those with volatile ethnic, religious or other social tensions, Facebook and other social media platforms, including messenger systems, should conduct in-depth human rights impact assessments for their products, policies and operations, based on the national context and take mitigating measures to reduce risks as much as possible.
I wonder what such an impact assessment might have said about Assam before Facebook opened shop there. Does Facebook employ enough content moderators who speak Assamese? How effectively can its machine-learning systems understand potential hate speech in that language?
That leads to a second thing I took from the UN report, which is that social platforms should provide country-specific reports about the hate speech they discover on their networks. As I wrote then:
Facebook ought to provide country-specific data on hate speech and other violations of the company’s community standards in Myanmar. We may not be able to say with certainty to what degree social networks contribute to ethnic violence — but we ought to be able to monitor flare-ups in hate speech on our largest social networks. Dehumanizing speech is so often the precursor to violence — and Facebook, if it took its role seriously, could help serve as an early-warning system.
And in Assam, it seems, that early-warning system is flashing red.
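To make the early-warning idea concrete: if platforms published country-specific counts of hate-speech reports, monitoring for flare-ups could be as simple as flagging weeks where reports spike well above the recent baseline. Here is a minimal sketch of that approach — all data, thresholds, and the feed itself are invented for illustration; no platform currently publishes such numbers.

```python
# Hypothetical sketch: flag a "flare-up" in weekly counts of
# hate-speech reports for one country. The counts and threshold
# below are invented; no platform publishes such a feed today.

def flag_flareups(weekly_counts, window=4, threshold=2.0):
    """Return indices of weeks whose count exceeds `threshold` times
    the mean of the preceding `window` weeks."""
    flagged = []
    for i in range(window, len(weekly_counts)):
        baseline = sum(weekly_counts[i - window:i]) / window
        if baseline > 0 and weekly_counts[i] > threshold * baseline:
            flagged.append(i)
    return flagged

# Invented example: reports hold steady, then spike in week 6.
counts = [120, 130, 110, 125, 140, 135, 520, 150]
print(flag_flareups(counts))  # -> [6]
```

A real system would need per-language moderation capacity behind it — a spike in Assamese-language reports is only visible if Assamese content is being reviewed at all.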
Today in news that could affect public perception of the big tech platforms.
Trending up: Facebook suing the NSO Group for hacking WhatsApp to target human rights activists and journalists is a meaningful blow for free expression.
Trending up: Facebook launched a Preventative Health tool that lets users receive personalized reminders about health care tests and vaccines. The tool uses the age and sex from someone’s Facebook profile to send recommended screenings.
Trending down: Google executives told employees a former top Department of Homeland Security official who had recently joined the company was “not involved in the family separation policy.” It turns out he was, and some employees feel misled.
⭐ Facebook, Amazon, and Apple are all ramping up their lobbying efforts as antitrust scrutiny grows in Washington. Facebook increased spending by nearly 25 percent, to $12.3 million, through the first nine months of the year over the same period in 2018. Ryan Tracy at The Wall Street Journal has more:
The tech lobbying uptick comes amid heightened scrutiny of tech companies in Washington. Facebook is facing antitrust investigations from the Federal Trade Commission, the Justice Department and state attorneys general. Amazon is a target of a nascent Federal Trade Commission probe into its market power.
The House Judiciary Committee is examining Apple, Facebook and Amazon as well as search giant Google.
The firms have said they welcome the scrutiny and are working with investigators.
Elizabeth Warren said that, if elected, she will prohibit big tech companies like Facebook from hiring senior government employees right out of office. The plan is the latest in her campaign to fight corruption in Washington and Silicon Valley — and she mentioned Joel Kaplan, Facebook’s head of policy, by name. (Louise Matsakis / Wired)
Senator Josh Hawley (R-MO) is making a name for himself by going after Facebook and Google. The senator might be an even bigger threat than Warren, since he’s able to work with Democrats on trying to regulate the tech industry. (On the other hand, his idea of banning content moderation beyond what the First Amendment allows is insane.) (Emily Stewart / Recode)
Facebook and Google agreed to stop selling political ads in Washington state last year, but they are still doing it. They said they would stop after Attorney General Bob Ferguson sued them for not obeying the state’s rules on political ad transparency. (David Gutman / The Seattle Times)
Former Facebook security chief Alex Stamos suggested limiting microtargeting of potential voters on the platform in paid political advertising. It’s a suggestion Facebook’s own employees made in their recent letter to Mark Zuckerberg. (Mathew Ingram and Alex Stamos / Columbia Journalism Review)
The letter that Facebook employees sent to Mark Zuckerberg, which urged him to rethink his stance on misinformation in political ads, might be the most meaningful action they’ve organized against their own company’s policies to date. (Lauren Kaori Gurley / Vice)
Facebook sued two domain hosts for allegedly hosting websites that offer tools for hacking the company’s accounts. The websites, “HackingFacebook.net” and “iiinstagram.com,” reportedly allow people to phish and hack Facebook accounts. (Alfred Ng / CNET)
The European Union released a statement urging Google, Facebook, and Twitter to do more to fight disinformation. EU commissioners also warned that they could introduce legislation to regulate the companies if they don’t start doing a better job. (Natalia Drozdiak / Bloomberg)
Microsoft said Russian state hackers attacked the computers of 16 national and international antidoping organizations. Officials are still dealing with the 2015 Russian doping scandal, which snowballed in recent months after Russian athletes’ failed drug tests were erased from a critical data set. (Nicole Perlroth and Tariq Panja / The New York Times)
The Australian Competition and Consumer Commission (ACCC) is suing Google for allegedly misleading customers over how it collected location data. The watchdog group said the company prevented people from making an informed choice when setting up their Android accounts. (Josh Taylor / The Guardian)
Lawmakers in Australia are considering facial recognition technology to verify viewers’ ages and limit kids’ access to porn. The UK tried to pass a similar measure, but stopped after privacy advocates voiced their concerns. (Timothy B. Lee / Ars Technica)
⭐ WhatsApp sued the Israeli cybersurveillance firm NSO Group, claiming the company used the popular messaging service in a wide-ranging spy campaign on journalists and human-rights activists. Here’s Nicole Perlroth at The New York Times:
The investigation started last spring, after Citizen Lab charged that NSO Group’s technology had exploited a WhatsApp security hole to hack the phone of a London lawyer. The lawyer represented several plaintiffs in lawsuits that accused NSO Group of providing tools to hack the phones of a Saudi Arabian dissident living in Canada, a Qatari citizen and a group of Mexican journalists and activists.
Will Cathcart, head of WhatsApp, wrote an op-ed about how the company discovered the attack and why they’re pursuing a lawsuit. (Will Cathcart / The Washington Post)
TikTok’s parent company ByteDance is still in the early stages of considering an initial public offering, either in the US or Hong Kong, according to Bloomberg. ByteDance denied the report — so if it does go public next year, that will tell us a lot about ByteDance’s credibility. (Lulu Yilun Chen, Zheping Huang and Manuel Baigorri / Bloomberg)
Mozilla announced a partnership with Element AI to advocate for ethical artificial intelligence. The collaboration will involve developing tools to give people more control over their data. (Charlie Osborne / ZDNet)
4 Old-Fashioned Ways to Catch Him Cheating Now That The Instagram Following Tab Is Gone
Reductress always has your back.