It will be five to 10 years before artificial intelligence can reliably identify online hate speech and distinguish it from legitimate political debate, Facebook CEO Mark Zuckerberg testified before the U.S. Senate today.
Facebook has tested AI for detecting hate speech but found it error-prone, Zuckerberg said. AI used for this purpose needs to make nuanced judgments and be able to understand multiple languages, he said.
"Determining if something is hate speech is very linguistically nuanced," he said, according to various transcripts of his testimony. "You have to understand what's a slur and whether something is hateful."
Facebook employs more than 20,000 people in security and content review, he said, and when a user flags something as offensive, these staffers examine it and take it down if it violates the site's policies. "Some problems lend themselves more easily to AI solutions than others," he said, and hate speech is one of the difficult ones.
In his opening remarks, Zuckerberg apologized for the use of Facebook by Russian operatives seeking to interfere in the 2016 U.S. presidential election. Facebook has provided "powerful new tools" to users, "but it's clear now that we didn't do enough to prevent these tools from being used for harm as well," he said. "That goes for fake news, for interference in elections, and we didn't take a broad enough view of our responsibility and that was a big mistake, and it was my mistake, and I'm sorry."
He later told Democratic Sen. Dianne Feinstein of California, "One of my greatest regrets is we were slow in identifying the Russian operations in 2016." He also said special counsel Robert Mueller's team has spoken with Facebook officials as part of Mueller's probe. "I want to clarify, I'm not sure we have subpoenas. I know we're working with them," he said.
Facebook has acknowledged that Cambridge Analytica, a technology firm with ties to Donald Trump's presidential campaign, "may have had information on about 87 million Facebook users without the users' knowledge," CNN reports. The professor who created the application that mined that information violated Facebook's terms of service by turning the data over to Cambridge Analytica.
Democratic Sen. Maria Cantwell of Washington State asked Zuckerberg today if Facebook employees interacted with Cambridge Analytica, and he said he didn't know. "Although I know we did help out the Trump campaign overall in sales support in the same way that we do with other campaigns," he said.
He expressed support for legislation proposed in the Senate called the Honest Ads Act, which "would place new disclosure requirements on political advertisements," according to CNN. He also said Facebook is adding a feature to offer users more information about ads.
Republican Sen. Ted Cruz of Texas grilled Zuckerberg on whether Facebook has a liberal slant, claiming that some users have seen a "pervasive pattern of political bias." Cruz brought up several issues, including Facebook's initial blocking of a Chick-fil-A Appreciation Day page in 2012 (the restaurant chain is known for its leaders' anti-LGBT views). Zuckerberg admitted California's Silicon Valley, where the company is based, "is an extremely left-leaning place," but said employees take care not to let their political beliefs affect their jobs. One of his duties, he added, "is making sure we don't have any bias in the work we do, and I think it is a fair concern that people would wonder about."
Zuckerberg told other senators that Facebook is committed to ensuring that activist groups aren't "unfairly" targeted and to providing a platform for a wide spectrum of speech. For more, see CNN's live blog of the hearing.