Does Emotion-baiting have a Role in B2B and Tech PR?

Who’s your biggest foe on social media? Does company A have a bigger social footprint? Is competitor B getting more engagement and sharing?

Actually, if you are in B2B tech marketing, your biggest competitor is not any single company. To understand who’s really beating you out, recall the famous words: there’s nothing to fear but fear itself. That’s right: fear, or more generally emotion, can steal your thunder and your customers’ attention.

It’s been documented that emotion-rich posts and content are the ones that are most shared. And consumer marketers have a decided advantage over B2B when it comes to emotion.

The Growth of Social News Sharing

These days more of us are spending more time heads down, looking at our phones and surfing social channels. News and other info come to us through newsfeeds and social sharing.

Viral cat videos, real news, fake news, vacation pictures and political rants all get mixed into the same soupy mess. And what rises to the top? It’s often the stuff that pushes our emotional buttons. When a post pushes those buttons, we share it, like it, inundate friends and family, and signal Facebook that we favor this kind of content. That leads to more of the same.
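To see why this feeds on itself, here is a minimal sketch of engagement-driven ranking. Everything in it is hypothetical (invented field names and weights, not Facebook’s actual system), but it captures the loop: what you engaged with before boosts similar content now.

```python
# A minimal, hypothetical sketch of engagement-driven feed ranking.
# Field names and weights are invented; this is not Facebook's algorithm.

posts = [
    {"title": "Viral cat video", "likes": 900, "shares": 400, "topics": {"cats"}},
    {"title": "Quarterly earnings recap", "likes": 40, "shares": 5, "topics": {"finance"}},
]
me = {"liked_topics": {"cats"}}  # what this user engaged with before

def score(post, user):
    base = post["likes"] + 2 * post["shares"]              # raw engagement
    affinity = len(post["topics"] & user["liked_topics"])  # overlap with past likes
    return base * (1 + affinity)                           # feedback loop: past behavior amplifies similar content

feed = sorted(posts, key=lambda p: score(p, me), reverse=True)
print([p["title"] for p in feed])  # the emotional, familiar stuff rises to the top
```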

The last presidential campaign was a great illustration of this. Many of the fake and real news stories pushed our emotional buttons, e.g. made us angry, surprised or afraid, and drove social sharing.

Fighting Emotions with Emotions

What’s a B2B marketer to do? Should you just accept that you’ll never get people to swoon over your widget launch news – and settle for a smaller audience?

Steve Rubel of Edelman asked publishers about “emotion baiting” at the recent Newswhip Whipsmart conference, which I attended. It was a great session; you can read more here. The post starts with this excerpt:

“Clickbait could be going ‘the way of the dodo’, according to Edelman’s Steve Rubel. We explore how expert newsrooms are supercharging their content with emotion instead.”

These days, many brands are publishers too. They can try injecting emotion into their content and news. For example, tech vendors have long used Fear, Uncertainty, and Doubt (FUD) to undermine the competition. FUD battles used to be fought in the media; the same tactic can be applied to content and digital channels.

There’s no reason you couldn’t tap other emotions, e.g. create videos that make people smile, or tell startup stories that inspire and impress.

Bloomberg Gets all Emo

When asked by Rubel if emotion-baiting has a place in business content, Meena Thiruvengadam from Bloomberg said:

“I think it’s about being fair and accurate… and not taking things a step too far to drive virality or shares… There are some things where there are natural emotional elements, like ‘this CEO went from getting fired to creating this amazing business empire’… That speaks to aspiration, inspiration, encouragement, and motivation… but for something like ‘here are the monthly job numbers’ … that’s going to be much harder… you can only trick your audience so long before they get wise to it. You want to present your content honestly, and what’s good content will speak for itself.”

Tech vendors don’t need to follow the same newsroom standards as Bloomberg, but they should still be careful. You don’t want to be seen as an inauthentic drama king or queen, or be tone deaf to your target audience. For example, humor might not fly when it comes to things like compliance and legal tech.

Tap other Social Sharing Levers

Another approach is to look at other sharing levers on social media. It is not all just about emotions.

Content marketing guru Jay Baer wrote a Medium piece on this recently. He cited research about the types of content that are most effective for vertical markets. It’s worth a read. For tech, the data show that long-form and listicle articles work well.

What do you think? Does emotion have a place in your marketing?

Thanks for reading; any feedback is welcome.




Lolcat Kills Hello Kitty. Live Video NOW!

Most PR people are familiar with journalistic conventions, such as AP style. We know a story starts with the lede, not lead, and we know the inverted pyramid approach to news writing. We know what a byline is, and we are constantly on the hunt for the people behind them, to pitch stories to.

We also know that it is a noisy world, and there’s much competition for the headline. PR folks often don’t know the reasoning behind what makes the cut. The choices can seem arbitrary and unfair.

But would we be better off if it were some machine, and not a person, deciding what news is important?

The fact is, what we call news is changing – driven by tech and not journalistic principles. More of us are getting our news from Facebook, other networks and news apps these days. And it is the tech companies – and their programmers and algorithms – that determine what appears in our news feeds.

They shape news not only by filtering it, but also by deciding what is share-worthy and what’s crap, and through their deals with news organizations, all while trying to make money and keep advertisers happy.

Big tech and their solutions are increasingly the lenses through which we see the news. They insist, generally, that they’re not the media – but make no mistake. Their influence is real and significant, at almost every step of the editorial supply chain.

I thought it might be interesting to look at traditional news values (see this helpful Taylor and Francis guide) vs. how a social network prioritizes the same.

As you can see from the lists below, there’s quite a difference (OK the “News Now” list is meant to be light-hearted and not 100% accurate – but you get the idea). Another challenge is that the algorithms are constantly changing, hence all the strikethroughs.

This is a big deal, in my opinion. What do you think? What does it mean for your communications strategies and news promotion?

Stay tuned to this blog for further updates on the topic.




Nailing the Facebook Image: Handy Cheat Sheet

To properly target and engage your audience on Facebook, you need impressive visual assets. Luckily, Facebook offers the freedom to be creative and use eye-catching images in your profile, company page, ads and event invites.

However, there are image dimensions and sizing guidelines that you must follow, or your visuals will not appear as you’d like and may not be approved at all.

Fortunately, TechWyse created this Facebook image sizes and dimensions cheat sheet to lend you a helping hand when crafting your next social media campaign.

Even the savviest social media professionals may not be aware of all of Facebook’s image specs. For instance, shared images and shared links require different sizing when it comes to uploading.

Facebook recommends 1200 x 630 px for a shared image, but 1200 x 627 px for a shared link.

Besides image sizing and dimensions, Facebook also limits the amount of text in images. It will disapprove, or give less reach to, promoted posts whose images are more than 20% text. Make sure the text in your post images stays under this threshold if you want to see your post approved.
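If you produce a lot of assets, a small script can catch sizing mistakes before upload. Here is a minimal sketch using the two dimensions cited above; the file name is hypothetical, and the 20%-text rule is omitted because checking it would require actual image analysis:

```python
# Sketch: check an image against the Facebook dimensions cited above before
# uploading. Requires Pillow (pip install Pillow); the file name is hypothetical.
from PIL import Image

RECOMMENDED = {
    "shared_image": (1200, 630),
    "shared_link": (1200, 627),
}

def check_asset(path, kind):
    width, height = Image.open(path).size
    want_w, want_h = RECOMMENDED[kind]
    if (width, height) == (want_w, want_h):
        print(f"{path}: OK for {kind}")
    else:
        print(f"{path}: {width}x{height}, recommended {want_w}x{want_h} for {kind}")

check_asset("launch_graphic.png", "shared_image")
```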

Bookmark, download or print the cheat sheet and share it with your team of social media content creators, digital marketers and graphic designers. Hang it on your desk, on your office wall or anywhere you can easily reference it when working out the specifics of the visual assets to accompany your Facebook posts.

Hope it helps!  Good luck.




My Take on the Fake News Controversy

I attended the Daily News Innovation Lab’s session, “Proposition: We Can Solve the Fake News Problem.” It featured an Oxford-style debate on whether there’s a solution to the fake news problem.

Some very smart people from the worlds of media, business, and technology made great arguments for each side. It was entertaining and informative. I’m pleased to say that the optimists won, according to the audience vote at the end.

Why would anyone think we can’t fix the fake news problem? In brief: it is hard to define, pervasive and systemic, and there will always be bad actors trying to game the system. Think of it like hacking, or information warfare. Plus, Google and Facebook make money on fake news, and some say they’re just giving users what they want.

Arguing for the optimists, Jane Elizabeth of American Press Institute said that these systems were created by the people, for the people, and people will solve the problem. Dean Pomerleau of The Fake News Challenge likened it to the spam epidemic of the early 2000s, which most would agree has been contained, if not completely solved.

I unsuccessfully tried to get a question in at the end about the faulty premise of the debate. How can you even ponder a cure until you’ve more clearly defined the problem? As I pointed out in my last post, there are many varieties of fake news (propaganda, misinformation, counterfeit news sites, and yes, lies, damned lies). And it is almost impossible to define the concept of “news” itself, or “truth.”

Looking beyond this one debate, fake news has inflamed passions, as it may have tilted the US presidential election and encouraged a nut to shoot up a pizzeria. Any discussion about solutions inevitably gets into tricky areas like censorship, free speech, the roles of media and the government, and the responsibility of business.

I don’t think it will be as easy as fighting spam (this CIO article implies that AI has met its match here). But I do think there are fixes, assuming we can agree on a definition and on what would qualify as solving this.

I attempt to do so below, and also share my thoughts on the most contentious issues.

Defining the Problem

We’ll never get rid of misinformation, wacky theories, bias, rumors or propaganda. I propose defining fake news as lies or false information packaged as news. Let’s include counterfeit news sites and any gaming of algorithms and news feeds to propagate false information.

The Social Network’s Role

Some place the problem at the doorstep of social networks and online news aggregators, such as Facebook and Google respectively. Others say that it is not the platforms’ job to be truth-tellers. Should they hire fact checkers? Who then checks the fact checkers?

Many say that Facebook and Google have no incentive to clean up the mess, as their business models are based on clicks and sharing regardless of veracity. I completely disagree. If they don’t, their brands and reputations (and hence businesses) will take a beating. No one wants to spend time in places where there is lots of junk.

They can and should take measures to combat fake news. I mean, they’re already policing their sites for bullying, obscenity, grisly pictures and other clearly unacceptable things.

It could involve a combination of crowd correction, e.g. a way for users to flag fake news items, and technology akin to spam detection. For all the grousing that it is too hard a problem to solve, plenty of articles describe promising approaches.
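To make the spam-detection analogy concrete, here is a minimal sketch of the same text-classification machinery pointed at headlines. The tiny training set is invented for illustration; a real system would need large labeled corpora and far richer signals than word counts:

```python
# Sketch: spam-style text classification pointed at headlines.
# The training data is invented for illustration; a real system would need
# large labeled corpora and far richer signals than word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "Pope endorses candidate in shocking secret letter",  # fake
    "Miracle pill cures all diseases, doctors furious",   # fake
    "Fed raises interest rates a quarter point",          # real
    "City council approves new budget for schools",       # real
]
labels = ["fake", "fake", "real", "real"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(headlines, labels)

print(model.predict(["Shocking secret cure doctors do not want you to see"]))
```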

Who Should Judge Truth?

Some argue for greater regulation and transparency. Since algorithms play a growing role in determining what news we see on the networks, shouldn’t we all better understand how they work? Why not make their inner workings public, like open source software?

Others say that doing this would make it easier for bad actors to understand and manipulate the programs.

Can’t the government come up with laws to make sure that news feeds are unbiased and don’t spread false information? Or, perhaps there should be some watchdog group or fact checking organization to keep the networks “honest.”

Again, I think it is incumbent on the tech companies to clean up the mess. But this should not go so far as making them hand over their algorithms. It’s their intellectual property. And I am leery of government oversight or any third party organization that polices truth telling by decree.

I am in favor of setting up a group that proposes standards in fake news detection and eradication. This industry body could factor in the interests of all parties (the social networks, government, users, and media) to issue guidelines and also audit the networks on a voluntary basis; think the MPAA movie ratings, the Parental Advisory Label for recorded music, or the Comics Code Authority.

If Facebook, Google, Reddit, Apple News and others want to earn the seal of approval, they’d need to open up their systems and algorithms to inspection to show they are not aiding the propagation of fake news.





Internet Society Drills Down on Fake News


I attended the Internet Society’s “Content Rules?!” session the other week. The panel drilled down on what we now call The Fake News problem (I couch it like this because, as you’ll see, it’s not a new one), defining it and exploring causes and solutions.

There’s been a lot already written about fake news. It’s turned into a real meme and hot button, but there’s been lots of noise and confusion. That’s not surprising because it is a complex topic, one that only recently hit our radars in the wake of the election.

Giving it a name gave it legs, a thing to blame (in some cases just because someone doesn’t like an article), and evoked lots of teeth gnashing. The session gave me the opportunity to hear from some very smart people from different sides, better understand the issues and crystallize my thoughts about how we might address the problem.

Not a New Problem

Journalist and American University Professor Chuck Lewis started by explaining that fake news has been around for years in various forms, e.g. government disinformation and propaganda. Toni Muzi Falcone’s DigiDig wrap (in Italian) also discussed this.

“The irony of us feeling victimized by fake news is pretty thick,” he said. “We’ve gone from truth to truthiness to a post-truth society, and now it’s fake news,” said Chuck, “but it’s been going on for centuries.”

He blamed the growing number of people “spinning information” rather than reporting it (the ratio of PR people to journalists has grown to 5:1), and called it a crisis for journalism. The big questions are: who decides what is true, and how do you set standards for 200+ countries? We’ve traditionally relied on the press to be content mediation experts.

“We are at a critical, disturbing crossroad,” Lewis said, as “No one wants the government to be the mediators.”

A Systemic Problem

Compounding the problem are the changing ways we get info, and the growing influence of social networks. Gilad Lotan, head of data science at Buzzfeed, discussed this.

He has studied political polarization in Israel. Gilad showed some fancy social graphs that tracked the spread of stories in the wake of the IDF’s bombing of a Palestinian school. Two different story lines emerged. Neither was “fake,” Gilad explained; “They just chose to leave certain pieces of info out in order to drive home points of a narrative.”

Gilad further discussed how your network position defines the stories you see; this leads to polarization and homophily (a fancy way of saying echo chamber). He also explained the role of algorithmic ranking systems. “You’re much more likely to see content which aligns with your viewpoints,” he said. This spawns “personalized propaganda spaces.”
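Gilad’s point about network position can be shown in a few lines. In this toy sketch (all names and stories invented, loosely echoing the two narratives above), each user’s feed is simply what their connections share, so two clusters end up with two different, equally “true” pictures:

```python
# Toy sketch of the network-position point: your feed is just what your
# connections share, so position alone yields different pictures of the
# same event. All names and stories are invented.
follows = {
    "alice": ["bob", "carol"],   # cluster 1
    "dave": ["erin", "frank"],   # cluster 2
}
shared = {
    "bob": ["Strike hits school, children hurt"],
    "carol": ["Strike hits school, children hurt"],
    "erin": ["School was used as a weapons cache"],
    "frank": ["School was used as a weapons cache"],
}

def feed(user):
    return {story for friend in follows[user] for story in shared[friend]}

print(feed("alice"))  # one narrative
print(feed("dave"))   # the other; neither is "fake," but each is partial
```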

This machinery gives bad actors a way to game the system. Gilad illustrated this via what had been the elephant in the room – the 2016 US presidential election. He shared images that showed phantom groups manipulating the spread of information.

“The awareness of how algorithms work gave them a huge advantage. To this day, if you search for ‘Hillary’s Health’ on YouTube or Google, you see conspiracy theories at the top.”

Moderator Aram Sinnreich, associate professor at American University added: “My impression as a media scholar and critic… is that there’s been a lot of finger-pointing… everyone feels that there’s been a hollowing out of the Democratic process… undermining of the traditional role that the media has played as the gatekeeper of the shared narrative and shared truths; people want to hold the platforms accountable.”

Flavors of Fake News

Andrew Bridges, a lawyer who represents tech platforms, said that it is important to define the problem before considering solutions. The knee-jerk reaction has been to try to turn social networks into enforcement agencies, but that would be a mistake, according to Bridges. That’s because there are seven distinct things we call fake news, each of which could have a different solution (I list them with the examples he cited):

  1. Research and reporting with a pretense of being objective (e.g., major newspapers)
  2. Research and reporting in service of a cause (National Review, Nation, New Republic)
  3. Pretend journalism – claims to be a news source but is actually a curator (Daily Kos)
  4. Lies – the ones that Politifact and others give Pinocchio noses or “pants on fire” awards
  5. Propaganda – the systematic pattern of lying for political gain
  6. Make-believe news, like Macedonian sites.  They make up news from whole cloth.
  7. Counterfeit sites – they make you think you are at ABC News.com, for example

Then, he dramatically challenged the panel and audience to label certain big-ticket topics as fake news or not: evolution, global warming, the importance of low-fat diets, the importance of low-carb diets.

Bridges said that there’s not necessarily a quick fix or tech solution to the problem. “These things have been out there in society, in front of our eyes for years.”  He likened the problem to gerrymandering, gated communities and questions about Hillary’s health.

Some have proposed algorithmic transparency (not surprisingly, Bridges thinks it is an awful idea; “Opening them up just makes it easier to game the system”).

What could work, according to the lawyer? “I think we should look to algorithmic opacity, and brand values of the organizations applying the algorithm.” What about content moderation? He said: “Do we turn it over to a third party, like a Politifact? Who moderates the moderator? We know what moderation is – it’s censorship.”

In Bridges’ view, education is important. We should teach the importance of research and fact checking, and keep each other honest: “Friends don’t let friends spread fake news.”

Other Challenges

Jessa Lingel, Ph.D. and assistant professor at the Annenberg School for Communication, seemed to be the youngest on the panel and spoke up for millennials:

“You can’t promise a generation of Internet-loving people a sense of control and agency over content and not expect this breakdown in trust.” She talked about the growth of citizen-driven journalism and the shift from content generation to interpretation. Jessa bemoaned the digital natives’ loss of innocence:

“We were promised a Democratic web and got a populist one; a web that connects us to different people, instead we got silos. Geography wasn’t supposed to matter… anyone with an Internet connection is the same… instead, geography matters a lot.”

Jessa said that algorithmic transparency is important, but that it is not enough. “Opacity? I do want to pay attention to the man behind the curtain. We need more than that tiny little button that explains ‘why did I get this ad?’”

Up Next: More on Solutions

As you have hopefully seen from my post, there are many opinions on the situation, and it’s a complex topic.

What do you think? In my next post, I’ll share my thoughts on fake news problems and solutions.




How do you Solve a Problem Like Big Data?

This post is not about how to analyze big data – it is about impact and implications for laws, business and our society.  How do we ensure that our increasing reliance on data, algorithms and AI does not come at a cost?

Below, I include some great articles on the topic. Please read on; I’d love to hear your thoughts via the comments section below.

Bloomberg: Battling the Tyranny of Big Data

Mark Buchanan explores recent research and efforts, such as the Open Algorithms Project; he says we must first control how our data are used, second open data up by making it more widely available, and then re-balance power between companies and individuals.

Mashable: We put too much Trust in Algorithms, and it’s Hurting our Most Vulnerable

Algorithms have gone wild in Ariel Bogle’s piece: they incorrectly label welfare recipients as cheats and blacks as recidivism risks, and give rise to “mathwashing.”

Huffington Post: We need to know the Algorithms the Government uses to make important Decisions about Us

Writer Nick Diakopoulos, a fellow at the Columbia Tow Center and assistant professor of journalism at the University of Maryland, cites similar issues and shares a case study in transparency, in which he “guided students in submitting FOIA requests to each of the 50 states. We asked for documents, mathematical descriptions, data, validation assessments, contracts and source code related to algorithms used in criminal justice, such as for parole and probation, bail or sentencing decisions.”

PHYS ORG: Opinion – Should Algorithms be Regulated?

Offers a point vs. counterpoint; Markus Ehrenmann of Swisscom says “Yes.” Mouloud Dey of SAS says “No.”

Seth Godin’s blog: The Candy Diet

The marketing guru says that algorithms are dumbing down media.

The Drum: In the Post Truth Era, the Quest to Surface Credible Content has only Just Begun

Lisa Lacy reports that Google amended its algorithm to combat Holocaust deniers; but if manual intervention is needed for high-profile failures, what about other important issues that don’t get as much attention?

Foundation for Economic Education: What Happens when your Boss is an Algorithm?

Cathy Reisenwitz offers a nice primer, and argues for making algorithms open source.

The New York Times: Data Could be the Next Tech Hot Button for Regulators

Steve Lohr voices concerns about the growing market power of big tech, and explains potential antitrust issues arising from their collection of data. He writes: “The European Commission and the British House of Lords both issued reports last year on digital ‘platform’ companies that highlighted the essential role that data collection, analysis and distribution play in creating and shaping markets. And the Organization for Economic Cooperation and Development held a meeting in November to explore the subject, ‘Big Data: Bringing Competition Policy to the Digital Era.’”




In Defense of “Fake News”

More people are wondering about the weird crap that mysteriously appears in their news feeds. How much is fake news? Did disinformation tilt an election? What are Google and Facebook going to do to clean up the mess?

(Image caption: Is News Today too Much Like the Magic 8 Ball?)

You could almost hear the entire PR industry shifting uncomfortably amidst the backlash. I mean, crafting news (that some might call fake, or at least a stretch) is our stock in trade. We package propaganda as newsworthy information and sell it to the media; and, increasingly, we publish directly to the Web and social networks.

I understand that the fuss is more about blatant lies, not the average press release. But it highlights the challenges of determining what is newsworthy and true; a role that is increasingly being taken on by algorithms.

The Web and social media gave us all ways to easily share and spread information. This can include rumor, conjecture, commercial information, news, and yes, slander and outright lies.

I would never defend the last two, but I will fight for our right to issue press releases and traffic in other kinds of info. Any good system needs to be able to deal with all of this, i.e. anticipate some BS and surface the most credible and significant information, whether via the wisdom of crowds, programs, or a combination.

It is naïve to think that a publication, editors, or algorithms (which of course are written by humans) can present news without bias. The journalistic piece you just wrote might be pristine, free of opinion; but the very act of deciding which stories to feature shows partiality.

That said, the social networking platforms where more of us are getting news can do a much better job of separating the wheat from the chaff. I thought I’d share some of the great stories I’ve seen about the controversy and takeaways from each.

TechCrunch – How Facebook can Escape the Echo Chamber

Anna Escher says “Facebook is hiding behind its [position that] ‘we’re a tech company, not a media company’ … For such an influential platform that preaches social responsibility and prioritizes user experience, it’s irresponsible …”

She recommends that they bring journalists into the process, remove the influence of engagement on news selection during elections, and expand Trending Topics to show a greater diversity of political stories – not just the ones that are the most popular.

Tim O’Reilly – Media in the Age of Algorithms

Tim’s exhaustive Medium piece looks at all sides. He rails against “operating from an out-of-date map of the world [in which] algorithms are overseen by humans who intervene in specific cases to compensate for their mistakes,” and says:

“Google has long demonstrated that you can help guide people to better results without preventing anyone’s free speech… They do this without actually making judgments about the actual content of the page. The ‘truth signal’ is in the metadata, not the data.”
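A toy example of what scoring on metadata rather than content might look like (the domains, fields, and weights below are all invented for illustration, and are not Google’s actual signals):

```python
# Toy sketch of scoring a story by metadata rather than content.
# Domains, fields, and weights are invented; these are not Google's signals.
SOURCE_REPUTATION = {
    "establishednews.example": 0.9,
    "random-blog.example": 0.4,
    "counterfeit-abc.example": 0.05,
}

def credibility(story):
    rep = SOURCE_REPUTATION.get(story["domain"], 0.3)  # unknown domains get a low prior
    cited = min(story["inbound_links"], 50) / 50       # cap so link farms can't dominate
    age_bonus = 0.1 if story["domain_age_years"] > 5 else 0.0
    return 0.6 * rep + 0.3 * cited + age_bonus         # judged without reading the article

story = {"domain": "counterfeit-abc.example", "inbound_links": 2, "domain_age_years": 0}
print(f"credibility: {credibility(story):.2f}")
```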

Tim makes an analogy between news algorithms and airplanes: “Designing an effective algorithm for search or the newsfeed has more in common with designing an airplane so it flies… than with deciding where that airplane flies.”

He cited an example from the history of aircraft design. While it’s impossible to build a plane that doesn’t suffer from cracks and fatigue… “the right approach … kept them from propagating so far that they led to catastrophic failure. That is also Facebook’s challenge.”

Nieman Lab – It’s Time to Reimagine the Role of a Public Editor

Mike Ananny writes about the public editor’s role, and the challenges they face in the increasingly tech-driven environment. He writes:

“Today, it is harder to say where newsrooms stop and audiences begin. Public editors still need to look after the public interest, hold powerful forces accountable, and explain to audiences how and why journalism works as it does — but to do so they need to speak and shape a new language of news platform ethics.”

He asks “Will the public editor have access to Facebook’s software engineers and News Feed algorithms, as she does to Times journalists and editorial decisions?” and says:

“… public editors must speak a new language of platform ethics that is part professional journalism, part technology design, all public values. This means a public editor who can hold accountable a new mix of online journalists, social media companies, algorithm engineers, and fragmented audiences — who can explain to readers what this mix is and why it matters.”




Breaking into Facebook’s News Feed: 3 Stories, 9 Tips

Facebook has made quite a few changes to its algorithm and news feed in recent months, as has been chronicled on this blog. Digiday said that some publishers are responding by focusing more efforts on SEO.

But where does this leave brands and marketers who want to target Facebook users with news and content?

You need to change with the times. These days, your content should be informative, relevant and entertaining – it helps if the topics resonate with your friends and family.

There were quite a few good posts that recommended strategies in light of the updates. Here are three that stood out, with three tips from each.

In A Publisher’s Guide to Facebook’s News Feed Updates, the Newswhip blog shared these tips:

  1. Focus on organic reach and stories shared by actual users vs. brand pages
  2. Use engagement metrics to inform strategy and content creation
  3. Stay attuned to what interests your readers and work hard to serve a niche audience

The same blog followed up with more good advice: How to Adapt to Facebook’s “Personally Informative” News Feed. It offered a helpful pointer to how Facebook defines “personally informative.” This means staying in tune with audience interests. Newswhip recommends:

  1. Creating an RSS aggregator featuring the news sources favored by your desired audience
  2. Building your personal brand – the changes favor peer-to-peer sharing
  3. Being genuine, avoiding clickbait and deception

To the last point, Facebook’s more recent changes target and penalize clickbait. The Hootsuite blog featured a story on How to Get Clicks without Resorting to Clickbait. It recommends:

  1. Be accurate: the headline shouldn’t promise more than the content delivers
  2. Create an emotional connection
  3. Take the time and care to craft an effective headline





Steal this News Feed (How to get Into Facebook Trending)

I don’t typically write about reverse engineering news feeds. This blog is about hacking the feed in a figurative sense; i.e. boosting the odds that your news gets featured in the social networks that dominate our attention these days. It’s less about black hat tactics, more about smart marketing and communications.

But I thought I’d share a story about the actual hacking of algorithms. In Quartz, David Gershgorn wrote that stealing an AI algorithm and its data is a “high school-level exercise.” He wrote:

Researchers have shown that given access to only an API, a way to remotely use software without having it on your computer, it’s possible to reverse-engineer machine learning algorithms with up to 99% accuracy. Google, Amazon, and Microsoft allow developers to either upload their algorithms to their cloud or use the cloud company’s proprietary AI algorithms, both of which are accessed through APIs.

The article explained how you can crack the algorithm’s logic by sending queries, and evaluating the answers:

Think about making a call to a machine learning API as texting a friend for fashion advice. Now imagine you were to send your friend thousands of messages…  After driving your friend insane, you would get a pretty clear idea of their fashion sense, and how they would pick clothes given your wardrobe. That’s the basis of the attack.
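In spirit, the attack looks like the sketch below: treat the remote model as a black box, hammer it with probe queries, and fit your own surrogate to its answers. Everything here is simulated locally and purely illustrative; a real attack would send the queries to a cloud API:

```python
# Sketch of model extraction: probe a black-box model, then train a surrogate
# on its answers. The "remote API" is simulated locally; a real attack would
# send these queries to a cloud service.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# The victim model: pretend this lives behind a cloud API we can only query.
X_secret = rng.normal(size=(500, 4))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
victim = LogisticRegression().fit(X_secret, y_secret)

def api_predict(x):
    """The only access the attacker has: inputs in, labels out."""
    return victim.predict(x)

# The attacker: send thousands of probe queries and record the answers...
probes = rng.normal(size=(5000, 4))
answers = api_predict(probes)

# ...then fit a surrogate that mimics the victim's behavior.
surrogate = DecisionTreeClassifier().fit(probes, answers)
agreement = (surrogate.predict(X_secret) == api_predict(X_secret)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```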

However, anyone who wants to hack Facebook’s news feed would not benefit from this approach, which relies on the availability of an API that’s accessible to developers in the cloud.

So, what about figurative hacking? A recent NiemanLab piece (which also references Gershgorn), Almost No One Knows How Facebook’s Trending Algorithm Works (But Here’s an Idea), relates what we do know. Joseph Lichterman wrote:

Trending now… features broad topics surfaced by the algorithm. According to Facebook’s guidelines, the engineers overseeing Trending are “responsible for accepting all algorithmically detected topics that reflect real-world events.”

Based on some sniffing around, he determined that these things can help you get into Facebook Trending (a minimal sketch follows the list):

  • Make sure your content includes keywords or hash tags that are trending
  • Don’t spam (Facebook penalizes overly frequent posting)
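The first tip is easy to automate. Here is a minimal sketch; the trending terms and draft post are made up, and in practice you would pull whatever is actually trending at the moment:

```python
# Sketch: check a draft post against currently trending terms before publishing.
# The trending list and draft are invented; substitute whatever is actually
# trending at the time.
trending = {"#ai", "#electionnight", "fake news", "algorithm"}

draft = "Our new report on how the algorithm shapes what you read #AI"

matches = {term for term in trending if term.lower() in draft.lower()}
print(f"trending terms found: {matches or 'none (consider reworking the post)'}")
```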

What do you think? I’ll be sharing many more tips about how to optimize your news for the Facebook news feed in an upcoming post.




Don’t Shoot Me, I’m Only the Headline Writer!

You may recall Elton John’s album “Don’t Shoot Me, I’m Only the Piano Player.” That’s what I thought of when I heard about Facebook’s move to lessen the role of people in Trending topics.

As reported in the Washington Post: “Facebook just greatly diminished the role that human beings will play in the platform’s Trending topics bar, announcing … that actual people will no longer write topic descriptions for the site.  [This] comes months after the company faced an unusually high level of scrutiny for alleged political bias in its Trending feature. Humans will serve a janitorial role in the process, while the algorithms take more control.”

It is an interesting state of affairs when algorithms are deemed less biased than people – and when we serve the machine in a “janitorial role” (of course, Facebook did not come out and say this as I have – another article attributed the change to the need for scalability).

It seems clear, however, that Facebook is still smarting from the bias accusations. Perhaps it is trying to counter them by giving machines, which we think of as logical, more of a role.

There was a great article in Time magazine about the danger of placing too much trust in algorithms. Rana Foroohar wrote about Cathy O’Neil’s new book Weapons of Math Destruction, which highlights the growing role of algorithms in everything from job performance evaluations to teacher grading to credit decisions. Algorithms determine which ads we see, and increasingly point us to (and describe) news topics.

Rana writes:

“The Big Data algorithms that sort us into piles of “worthy” and “unworthy” are mostly opaque and unregulated, not to mention generated (and used) by large multinational firms with huge lobbying power to keep it that way.”


