Fake News, AI and Bias, Oh My!

First published on DigiDig

There have been quite a few relevant articles recently, so I thought I would write this post which includes links and brief summaries.

Real Interest in Fake News

Fake news continues to be a hot story that shows what can go wrong when algorithms choose what we read. I covered the topic on this blog when it first broke a few weeks ago.

Ryan Holmes of Hootsuite wrote for the Observer in The Problem Isn’t Fake News – It’s Bad Algorithms: “As algorithms mature, growing more complex and pulling from a deeper graph of our past behavior, we increasingly see only what we want to see… More dangerous than fake news, however, is all the real news that we don’t see. For many people, Facebook, Twitter and other channels are the primary… place they get their news. By design, network algorithms ensure you receive more and more stories and posts that confirm your existing impression of the world and fewer that challenge it. Over time, we end up in a ‘filter bubble.’”
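The feedback loop Holmes describes can be seen in a toy simulation (my own hypothetical sketch, not from the article): a feed that greedily ranks stories by observed click-through rate, shown to a user who clicks confirming stories far more often than challenging ones, ends up showing almost nothing but confirming stories.

```python
import random

random.seed(0)

# Hypothetical user: clicks "confirming" stories 90% of the time,
# "challenging" stories 10% of the time.
click_prob = {"confirming": 0.9, "challenging": 0.1}

# Smoothed counters the ranker learns from.
clicks = {"confirming": 0, "challenging": 0}
shown = {"confirming": 1, "challenging": 1}

def pick_story():
    # Greedy ranking by estimated click-through rate, no exploration --
    # the story type with the better track record always wins.
    return max(click_prob, key=lambda s: clicks[s] / shown[s])

for _ in range(1000):
    story = pick_story()
    shown[story] += 1
    if random.random() < click_prob[story]:
        clicks[story] += 1

share_confirming = shown["confirming"] / sum(shown.values())
print(f"share of confirming stories shown: {share_confirming:.2f}")
```

Once the confirming stories pull ahead, the challenging ones are never shown again, so the ranker never learns anything new about them: the bubble is self-sealing. Real feed algorithms are far more complex, but the dynamic is the same.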

Writing for Digiday, Lucia Moses explained Why Top Publishers Are Still Stuck Distributing Fake News. The problem is not just news feed algorithms; it also involves the “intelligence” behind automatically served programmatic ads (native ads can look like real articles). She shares an example in which the NY Times displayed a fake news ad next to its real story on fake news (got that?).

Algorithms and Bias

One of the primary concerns about algorithms relates to bias. How do biases infect data-driven computations? And in which ways do programs discriminate?

Kristian Hammond answers the first question in the TechCrunch story 5 Unexpected Sources of Bias in AI. He ponders whether bias is a bug or feature, and says: “Not only are very few intelligent systems genuinely unbiased, but there are multiple sources for bias… the data we use to train systems, our interactions with them in the ‘wild,’ emergent bias, similarity bias and the bias of conflicting goals. Most of these sources go unnoticed. But as we build and deploy intelligent systems, it is vital to understand them so we can design with awareness and hopefully avoid potential problems.”
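The first of Hammond’s sources, bias in training data, is easy to illustrate with a toy sketch (the numbers and group names here are hypothetical, invented for illustration): a model fitted to skewed historical decisions simply learns and reproduces the skew.

```python
# Hypothetical historical approval records: group_a was approved 80% of
# the time, group_b only 30% of the time.
historical = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20 +
    [("group_b", True)] * 30 + [("group_b", False)] * 70
)

def train(records):
    # "Training" here is just estimating the historical approval rate
    # per group -- the simplest possible model that fits the data.
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

model = train(historical)
print(model)  # {'group_a': 0.8, 'group_b': 0.3}
```

Nothing in the code mentions bias, yet the model scores otherwise-identical applicants differently by group, because that is exactly what the data taught it. This is what Hammond means when he says most of these sources go unnoticed.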

Alvin Chang writes in Vox about How the Internet Keeps Poor People in Poor Neighborhoods. He shares an example of a Facebook ad that violates the Fair Housing Act by excluding certain users from seeing it. This is blatant, but algorithmic discrimination can be a lot more subtle, and thus harder to root out, he explains.

Artificial Intelligence for Dummies

If you are new to algorithms and AI, you might want to read this Digital Trends story, which breaks down the differences between machine learning, AI, neural networks, etc.
