The Tech Giants’ Secret War Against Fake News Is Too Secret

We know the internet companies can make more progress against lies and terror through transparency.
Look at what they just did with their algorithms.
Facebook, Google, and Twitter have a problem with harmful content.
And the consequences of the twin online scourges of political disinformation and terrorist incitement have been on full display lately.
During three congressional hearings in Washington, lawmakers and the rest of us learned that as many as 126 million Facebook users may have seen divisive content posted by Russians seeking to interfere with the 2016 election.
Meanwhile, in New York, authorities said that an Uzbek immigrant who killed eight people in a truck attack on Oct. 31 was radicalized online by Islamic State videos.
Five days later, tweets amplified by Google News spread phony stories that the shooter in the Texas church massacre had been a supporter of Hillary Clinton and Senator Bernie Sanders.
These developments ought to provide a spur for the world’s dominant search engine (Google) and its two leading social networks (Facebook and Twitter) to accept greater responsibility for addressing internet pollution.
Senator Dianne Feinstein (D-Calif.) laid out the options during a Nov. 1 hearing before the Senate Intelligence Committee.
Lecturing the three companies’ general counsels, the California Democrat said: “You’ve created these platforms, and now they’re being misused. And you have to be the ones to do something about it. Or we will.”
It would be better for all concerned—the companies, their users, and society at large—if Google, Facebook, and Twitter heeded Feinstein’s admonition and instituted serious reforms addressing the propaganda and violent imagery their platforms can be used to convey.
Such reforms would preclude knotty free-speech arguments about government restrictions on content and save lawmakers from having to delve into technical realms where their expertise is thin.
(A new dimension to the connection between Russia and U.S.-based social networks emerged in early November, when multiple media outlets reported that hundreds of millions of dollars in past investments in Facebook Inc. and Twitter Inc. came indirectly from Kremlin-controlled financial institutions.)
On the topic of deleterious online material, the best evidence that the internet companies can make meaningful progress comes from their own recent records of improving algorithms, providing better user warnings, and increasing human oversight of automated systems.
As the New York University Stern Center for Business and Human Rights argues in a new report called “Harmful Content,” the companies can—and should—do more.
Before going any further, let’s stipulate that the odds of sweeping U.S. regulation in this area are minuscule.
A Republican-controlled, business-friendly Congress isn’t likely to go after two of the country’s most successful companies—Google and Facebook—or even Twitter, which, while less financially robust, is nevertheless a favorite outlet of the Tweeter-in-Chief.
The one area where Congress might act is political advertising.
Senator John McCain (R-Ariz.) has co-sponsored a bipartisan bill that would make online election ads subject to the same disclosure requirements as conventional broadcast ads.
At the recent hearings, the three companies’ lawyers vowed to adopt voluntary rules similar to those in the McCain bill—a blatant and entirely healthy example of industry trying to get out in front of threatened legislation.
If the companies had enforced rigorous transparency rules in 2016, they might have stymied Russian operatives’ postings and tweets.
More broadly, the digital giants could prove their good faith and lessen misuse of their platforms if they opened up their corporate data operations—not, of course, the private data of customers—to outsiders.
“It’s difficult to impossible for researchers to see” what’s going on within company systems, “and as a result, we don’t know much, or we’re guessing,” says Alice Marwick, an assistant professor of communication at the University of North Carolina at Chapel Hill.
“Only by ending the opacity and secrecy around social media will we fully understand what goes wrong,” says Wael Ghonim, a former Google product manager and internet activist.
Radical transparency would clash with prevailing corporate instincts—and would have to be tempered by careful protection of user privacy—but it could open the industry to new ideas and win it new levels of trust.
Twitter, for example, has said that some 36,000 Russian-controlled “bots” were tweeting during the 2016 campaign.
But Senator Mark Warner (D-Va.) suggested during the Nov. 1 hearing that Twitter’s tally of automated accounts was low.
Warner cited independent estimates that up to 15 percent of all Twitter accounts—potentially 49 million—are controlled by software, not humans.
More access to company data would presumably address Warner’s skepticism and possibly help provide answers to what Twitter should do about all of those bots.
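Outside estimates of automation on Twitter generally come from classifiers run over observable account features, such as posting tempo, follower patterns, and profile completeness, and then extrapolated across the platform. A toy heuristic along those lines is sketched below; the features and thresholds are illustrative assumptions, not those used in the studies Warner cited.

    # Toy bot-likelihood heuristic over observable account features.
    # The features and thresholds are illustrative, not from any published study.
    from dataclasses import dataclass

    @dataclass
    class Account:
        tweets_per_day: float
        followers: int
        following: int
        has_default_profile: bool

    def bot_score(a: Account) -> float:
        score = 0.0
        if a.tweets_per_day > 100:  # inhumanly high posting tempo
            score += 0.4
        if a.following > 0 and a.followers / a.following < 0.05:
            score += 0.3            # follows many accounts, followed by few
        if a.has_default_profile:
            score += 0.3            # never personalized the account
        return score

    sample = Account(tweets_per_day=240, followers=12, following=3800, has_default_profile=True)
    print(bot_score(sample) >= 0.5)  # True: flagged as likely automated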
Asked about the transparency idea, a Twitter spokesperson pointed to a recent company report that said: “Twitter is committed to the open exchange of information.”
Facebook, Google, and Twitter make money by selling users’ attention to advertisers.
The companies do most of their digital business via algorithms—the complex instructions that tell computers how to select and rank content.
For all their subtlety, though, algorithms sometimes elevate clearly false information.
Without pretending that algorithms can be perfected—they’re human constructions, after all—it’s not too much to expect the internet companies to improve them with maximum urgency.
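One way to picture what improving an algorithm means in practice: ranking systems score each candidate result on many signals, and a credibility or quality signal can be weighted so that dubious pages sink even when they match a query well. The sketch below is a toy illustration of that idea; the signal names, weights, and URLs are hypothetical, not any company’s actual formula.

    # Toy ranking sketch: blend query relevance with a credibility signal so
    # that pages flagged as likely misinformation are demoted. The signals and
    # weights here are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Page:
        url: str
        relevance: float    # how well the page matches the query, 0..1
        credibility: float  # source-quality estimate, 0..1 (hypothetical signal)

    def score(page: Page, credibility_weight: float = 0.6) -> float:
        # A low-credibility page can still appear, but it needs far more
        # relevance to outrank a trustworthy one.
        return (1 - credibility_weight) * page.relevance + credibility_weight * page.credibility

    results = [
        Page("https://example.org/holocaust-history", relevance=0.80, credibility=0.95),
        Page("https://example.net/denial-hoax-page", relevance=0.90, credibility=0.05),
    ]
    for page in sorted(results, key=score, reverse=True):
        print(f"{score(page):.2f}  {page.url}")  # the denial page sinks to the bottom

Broadly speaking, efforts like the one described next amount to adjustments of this kind: changing which signals count and how much.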
Consider one recent example involving Google.
In April the company said on an in-house blog that 0.25 percent of searches—meaning millions per day—had been “returning offensive or clearly misleading content.”
In one illustration from December 2016, the very first result for the search “Did the Holocaust happen?” was a page from the neo-Nazi site Stormfront offering the “top 10 reasons why the Holocaust didn’t happen.”
Alarmed by that and similar incidents, Google launched an algorithm-scrubbing effort called Project Owl.
In April, Google announced it had made false information “less likely to appear.”
The company didn’t provide a new rate for misleading content to compare with the 0.25 percent figure, but by one admittedly anecdotal measure, Project Owl seems to have had some effect.
I Googled “Did the Holocaust happen?” in November.
A site devoted to “combating Holocaust denial” led the results; Stormfront’s opposite message didn’t surface until the middle of the fourth page.
The drive for more refined algorithms needs to be accelerated.
A Google spokesperson said via email: “While we’ve made good progress, we recognize there’s more to do.”
In some markets, Facebook has been experimenting with a fact-checking function to keep its News Feed honest.
Based on user reports and other signals, the company says it sends stories to third-party fact checkers such as PolitiFact.
When they question a story, Facebook notifies users it has been “disputed” and discourages sharing.
“We already do a lot when it comes to the security and safety of our community,” a Facebook spokesperson said via email.
Now the fact-checking program and others like it deserve to be expanded and imitated elsewhere.
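Based on Facebook’s public description, the workflow is roughly: collect user reports and other signals, route flagged stories to third-party checkers, then attach a “disputed” notice that discourages sharing. Here is a minimal sketch of such a pipeline; the function names, threshold, and data layout are assumptions made for illustration, not Facebook’s actual system.

    # Minimal sketch of a disputed-story workflow, assuming a report threshold
    # and an external fact-check verdict; names and numbers are illustrative.
    from enum import Enum

    class Verdict(Enum):
        UNREVIEWED = "unreviewed"
        DISPUTED = "disputed"
        CONFIRMED = "confirmed"

    REPORT_THRESHOLD = 25  # hypothetical: how many user reports trigger review

    def needs_review(report_count: int, other_signals: float) -> bool:
        # "Other signals" (e.g., unusual sharing patterns) are reduced to a
        # single 0..1 score here for simplicity.
        return report_count >= REPORT_THRESHOLD or other_signals > 0.8

    def apply_verdict(story: dict, verdict: Verdict) -> dict:
        # A disputed story stays visible but carries a warning label and an
        # extra confirmation step that discourages sharing.
        story["label"] = verdict.value
        story["share_warning"] = verdict is Verdict.DISPUTED
        return story

    story = {"id": 1, "headline": "Example headline"}
    if needs_review(report_count=40, other_signals=0.2):
        print(apply_verdict(story, Verdict.DISPUTED))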
When it comes to violent incitement, the search and social network companies face a whack-a-mole problem: They’re continually taking down extremist videos, only to see copies re-uploaded.
In response, Facebook, Google’s YouTube video site, and Twitter are experimenting with a technique called “hashing,” which allows the companies to track the digital fingerprints of copied videos so they can be automatically removed.
YouTube used hashing recently to take down tens of thousands of sermons by Anwar al-Awlaki, an American-born cleric notorious for terrorist recruiting who was killed in a U.S. drone strike in 2011.
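The basic mechanics are easy to sketch: compute a compact fingerprint of each removed video and block any upload whose fingerprint matches. The example below uses an exact cryptographic hash for clarity; the systems the companies describe rely on perceptual hashes that also survive re-encoding and minor edits, which this toy version does not.

    import hashlib

    # Fingerprints of previously removed videos; in practice this would be a
    # shared, industry-maintained database (contents here are placeholders).
    known_bad_hashes = set()

    def fingerprint(video_bytes: bytes) -> str:
        # Exact-match fingerprint. Production systems use perceptual hashing,
        # which tolerates re-encoding, cropping, and watermarking.
        return hashlib.sha256(video_bytes).hexdigest()

    def register_removed(video_bytes: bytes) -> None:
        known_bad_hashes.add(fingerprint(video_bytes))

    def should_block(upload_bytes: bytes) -> bool:
        # True if the upload is a byte-for-byte copy of known extremist content.
        return fingerprint(upload_bytes) in known_bad_hashes

    register_removed(b"...bytes of a removed video...")
    print(should_block(b"...bytes of a removed video..."))  # True for an exact copy

The gap between this toy version and the real one is exactly why re-uploads remain a whack-a-mole problem: trivial edits defeat exact matching, so the fingerprinting has to be fuzzy.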
In August, YouTube toughened its stance toward videos that contain inflammatory religious or supremacist content but do not qualify for removal.
Such material now comes with a warning and isn’t eligible for recommended status, likes, or comments.
Borderline videos also are harder to find via search and can’t have ads sold next to them.
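Those restrictions amount to a policy table applied once a video is classified as borderline. A hypothetical encoding of the treatment described above might look like this; the field names are mine, not YouTube’s.

    # Hypothetical encoding of the "borderline" treatment described above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Treatment:
        show_warning: bool           # interstitial warning before playback
        recommendable: bool          # eligible for recommended status
        allow_likes_and_comments: bool
        ad_eligible: bool
        demoted_in_search: bool      # harder to find via search

    TREATMENTS = {
        "normal": Treatment(False, True, True, True, False),
        "borderline": Treatment(True, False, False, False, True),
        "violating": None,           # removed outright rather than restricted
    }

    def apply_policy(classification: str) -> Optional[Treatment]:
        return TREATMENTS[classification]

    print(apply_policy("borderline"))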
In a related experiment, a Google affiliate has developed a tool called the Redirect Method that can detect a user’s possible extremist sympathies based on their search words.
Once it has identified such a person, the tool redirects them to videos that show terrorist brutality in an unflattering light.
Over the course of a recent eight-week trial run, some 300,000 people watched videos suggested to them by the Redirect Method for a total of more than half a million minutes.
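In outline, the approach matches a user’s search terms against a curated list of phrases associated with extremist sympathies and, on a match, surfaces counter-narrative videos. The sketch below is schematic only; the watchlist, matching logic, and video links are placeholders, not the actual Redirect Method implementation.

    # Schematic sketch of a redirect-style intervention: flag queries that
    # match a curated watchlist and suggest counter-narrative videos.
    # The watchlist phrases and video URLs below are placeholders.
    WATCHLIST = {"example recruiting phrase", "example propaganda slogan"}

    COUNTER_NARRATIVE_PLAYLIST = [
        "https://example.com/testimony-from-defectors",
        "https://example.com/life-under-occupation",
    ]

    def matches_watchlist(query: str) -> bool:
        q = query.lower()
        return any(phrase in q for phrase in WATCHLIST)

    def handle_query(query: str) -> list[str]:
        # Return counter-narrative suggestions for flagged queries; otherwise
        # defer to ordinary search handling (omitted here).
        if matches_watchlist(query):
            return COUNTER_NARRATIVE_PLAYLIST
        return []

    print(handle_query("where to find example propaganda slogan videos"))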
As these illustrations show, the digital platform companies are willing and able to improve, but they need to step up the pace, breadth, and intensity of their efforts.
Facebook announced at the congressional hearings that by late 2018 it would double to 20,000 the number of employees and contractors working on “security and safety.”
Chief Executive Officer Mark Zuckerberg told investors on Nov. 1 that such expenses would “impact our profitability.”
That’s easier for a CEO to say, of course, on a day when his company releases blockbuster results.
For its third quarter, Facebook earned $4.7 billion, up 79 percent.
“Protecting our community is more important than maximizing our profits,” Zuckerberg also said.
But that’s a false dichotomy.
In the long run, the internet companies will retain users and advertisers only if they avoid being swamped by objectionable content.
The path to profits points toward doing the right thing.
 Barrett, a former Bloomberg Businessweek writer, is deputy director of the NYU Stern Center for Business and Human Rights.
