In case you are not familiar with the story, Gizmodo ran this piece last week: Former Facebook Workers: We Routinely Suppressed Conservative Views. The story hit a nerve, given election season timing and concerns about the growing influence of Facebook and other social networks.
To quell the controversy, Facebook made a number of statements and released details of how the trending news “sausage” is made (a real page-turner for news feed geeks like me).
These moves did not settle the matter. The major news organizations covered the details, but predictably and ironically from their own left- and right-leaning perspectives.
The New York Times story by Mike Isaac saw no evil; he covered the checks and balances, implying that these filter out biases:
While algorithms determine the exact mix of topics displayed to each person… a team is largely responsible for the overall mix of which topics should — and more important, should not — be shown in Trending Topics. For instance, after algorithms detect early signs of popular stories on the network, editors are asked to cross-reference potential trending topics with a list of 10 major news publications, including CNN, Fox News, The Guardian and The New York Times.
But the Wall Street Journal’s Deepa Seetharaman saw something more insidious:
Facebook researchers last year ranked 500 news sites based on how popular they were with the social network’s users who identified their political alignment as conservative or liberal. According to those rankings, eight of the 10 national news outlets that play an outsize role in determining trending topics are more popular with liberals.
I’ll leave the topic of Facebook’s response to a crisis for another post (they broke the first rule: don’t release information piecemeal; that only prolongs the issue).
It seems to me that there must be some unstructured text mining tech that could settle the question. Isn’t it possible to analyze Facebook Trending News stories to find and tabulate the ones with a slant – and compare the numbers for left and right-leaning stories?
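To make the idea concrete, here is a deliberately naive sketch of what such a tabulation might look like. The keyword lists and headlines below are invented placeholders, not real Facebook data or a real classifier; serious text mining would use a trained model rather than hand-picked terms.

```python
# Toy slant tabulator: score each headline against small, hand-made
# left- and right-leaning keyword lists (purely illustrative), then
# count how many headlines lean each way.

LEFT_TERMS = {"progressive", "climate", "inequality"}
RIGHT_TERMS = {"conservative", "liberty", "border"}

def slant(headline):
    """Label a headline 'left', 'right', or 'neutral' by keyword overlap."""
    words = set(headline.lower().split())
    left = len(words & LEFT_TERMS)
    right = len(words & RIGHT_TERMS)
    if left > right:
        return "left"
    if right > left:
        return "right"
    return "neutral"

def tabulate(headlines):
    """Count headlines per slant label."""
    counts = {"left": 0, "right": 0, "neutral": 0}
    for h in headlines:
        counts[slant(h)] += 1
    return counts

print(tabulate([
    "Conservative group rallies for border security",
    "New study links inequality to climate policy",
    "Local team wins championship",
]))
# → {'left': 1, 'right': 1, 'neutral': 1}
```

Even this crude version shows why the question is hard: the answer depends entirely on who curates the keyword lists, which is the same editorial-bias problem in miniature.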
It would be no simple task, given the complexities of how this all works. Facebook’s document reveals the processes, which blend human and machine effort. Even after the algorithms and editors pick their shots, the feeds get further tailored based on each user’s preferences and actions.
But the bigger question: Is it really even possible to edit or curate without bias?