ENGAGE NYC covered the Latest in Digital Storytelling

I enjoyed last week’s ENGAGE: NYC Digital Storytelling Conference. Talk NYC founder Derek Smith and his team put on a great event, featuring thought leaders from agencies, startups, and major brands. They covered the state of storytelling and where it’s going.

Below I share some of the highlights:

Aki Spicer, Chief Digital Officer of TBWA/ChiatDay, covered the evolution of the agency and a day in the life of its teams. The operation seemed more like a busy newsroom, with a mix of roles and talents. They have trend spotters, mixed-media specialists, and former journalists on staff. It’s a much more diverse mix than the traditional creative and account director-led teams.

Rodney Williams, CMO of Moët Hennessy, talked about old brands employing new storytelling tricks. MH operates in a regulated industry that prevents it from selling direct to consumers (much like the car business). The company employs experiential storytelling and won an award at the Tribeca Film Festival for its Moët Moments short films.

One of the most interesting things Rodney covered was the cult of Cognac and the master distiller. It’s something I never knew about (it reminded me of the rigors of the master sommelier in the wine world). For example, new recruits don’t even speak up in tasting meetings for the first 10 years – that’s how long it takes to develop your Cognac “nose.” And Cognac master distillers come from long lines, often stretching back eight generations.

An audience member suggested using VR to capture the experience of a tasting session – but Rodney demurred. Doing so would pierce the veil: transparency has its limits, and it is important to protect the mystique – no watching the Cognac sausage getting made here.

It was great hearing from former Boxee CEO Avner Ronen. He is heading up a new company called Public that has an app of the same name. It uses text messaging to help teens collaborate and tell stories. The app is aimed at members of communities such as middle and high schools.

Navid Khonsari of INK Stories talked about storytelling through gaming. I am not really a gamer – and one might not instantly connect the medium with brand marketing. But Navid made a compelling case and spoke about cool titles they’ve produced like 1979, which brings you into the Iranian revolution.

Then it was time for the session “The Rise of the Storytelling Bots” and I thought: great, here comes the Trump communications team (ba dum). Seriously, Hakari Bee of Rapp NYC spoke about the topic. It’s fascinating. Who knew you could go to a site called ChatFuel.com and build a Facebook bot in 5-7 minutes, without coding? Hakari covered best practices and case studies, including Mr. Miles (a bot and fictional character who flies KLM and Air France), covered by DM News.
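Hakari’s point was that no coding is needed, but for the curious, here is roughly what a hand-built storytelling bot looks like under the hood. This is a minimal sketch, not anything shown at the session: it assumes Flask and the Facebook Messenger Send API, and the access token and three-line “story” are invented placeholders.

```python
# Minimal sketch of a hand-rolled Messenger storytelling bot.
# Assumes Flask and the Facebook Messenger Send API; the token and
# the scripted "story" below are invented placeholders.
import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # hypothetical credential

# A tiny scripted narrative: keyword -> next beat of the story.
STORY = {
    "start": "You land in a rainy city with one clue. Type 'clue' to read it.",
    "clue": "The note says: 'Meet me at the old station.' Type 'go' to follow.",
    "go": "At the station, a stranger waves you over. The end (for now).",
}

def send_text(recipient_id, text):
    """Reply to a user via the Messenger Send API."""
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )

@app.route("/webhook", methods=["POST"])
def webhook():
    # Messenger delivers events as entry[].messaging[] objects.
    for entry in request.json.get("entry", []):
        for event in entry.get("messaging", []):
            if event.get("message", {}).get("text"):
                keyword = event["message"]["text"].strip().lower()
                send_text(event["sender"]["id"], STORY.get(keyword, STORY["start"]))
    return "ok"
```

Tools like ChatFuel wrap all of this plumbing in a point-and-click interface, which is how the 5-7 minute figure becomes plausible.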

Joe Hyrkin, CEO of Issuu (pronounced “issue”), made a compelling case that basically says creators are inheriting the Earth – they are the real kings of storytelling. Joe cited examples ranging from the singer Solange to Sweet Paul. He said that storytellers are building new media companies and going where they want to share; it’s a creator’s world.

There were other great sessions. Unfortunately, I missed the later ones, as I could not stay the entire afternoon. I look forward to attending the next ENGAGE DSC.

My Take on the Fake News Controversy

I attended the Daily News Innovation Lab’s session, “Proposition: We Can Solve the Fake News Problem.” It featured an Oxford-style debate on whether the problem can, in fact, be solved.

Some very smart people from the worlds of media, business, and technology made great arguments for each side. It was entertaining and informative. I’m pleased to say that the optimists won, according to the audience vote at the end.

Why would anyone think we can’t fix the fake news problem? In brief, it is hard to define, pervasive, systemic, and there will always be bad actors trying to game the system. Think of it like hacking, or information warfare. Plus, Google and Facebook make money on fake news, and some say they’re just giving users what they want.

Arguing for the optimists, Jane Elizabeth of American Press Institute said that these systems were created by the people, for the people, and people will solve the problem. Dean Pomerleau of The Fake News Challenge likened it to the Spam epidemic of the early 2000s, which most would agree has been contained, if not completely solved.

I unsuccessfully tried to get a question in at the end about the faulty premise of the debate. How can you even ponder a cure until you’ve more clearly defined the problem? As I pointed out in my last post, there are many varieties of fake news (propaganda, misinformation, counterfeit news sites, and yes, lies, damned lies). And it is almost impossible to define the concept of “news” itself, or “truth.”

Looking beyond this one debate, fake news has inflamed passions, as it may have tilted the US presidential election and encouraged a nut to shoot up a pizzeria. Any discussion about solutions inevitably gets into tricky areas like censorship, free speech, the roles of media and the government, and the responsibility of business.

I don’t think it will be as easy as fighting spam (this CIO article implies that AI has met its match here). But I do think there are fixes, assuming we can agree on a definition, and what might qualify as solving this.

I attempt to do so below, and also share my thoughts on the most contentious issues.

Defining the Problem

We’ll never get rid of misinformation, wacky theories, bias, rumors or propaganda. I propose defining fake news as lies or false information packaged as news. Let’s include counterfeit news sites and any gaming of algorithms and news feeds to propagate false information.

The Social Networks’ Role

Some place the problem at the doorstep of social networks and online news aggregators, such as Facebook and Google respectively. Others say that it is not the platforms’ job to be truth-tellers. Should they hire fact checkers? Who then checks the fact checkers?

Many say that Facebook and Google have no incentive to clean up the mess, as their business models are based on clicks and sharing regardless of veracity. I completely disagree. If they don’t, their brands and reputations (and hence businesses) will take a beating. No one wants to spend time in places where there is lots of junk.

They can and should take measures to combat fake news. I mean, they’re already policing their sites for bullying, obscenity, grisly pictures and other clearly unacceptable things.

It could involve a combination of crowd correction, e.g. a way for users to flag fake news items, and technology akin to spam detection. For all the grousing that it is too hard a problem to solve, plenty has been written about workable approaches.
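To make the spam analogy concrete, here is a minimal sketch of the classic spam-filter pipeline pointed at headlines instead of email. It assumes scikit-learn, and the four-line training set is invented for illustration; a real system would train on a large labeled corpus and could fold in user flags as an extra signal.

```python
# Sketch of spam-style fake-news detection with scikit-learn.
# The tiny training "corpus" is an invented placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "Pope endorses candidate, sources say",        # fake (placeholder)
    "City council approves new budget",            # real (placeholder)
    "Miracle cure doctors don't want you to see",  # fake (placeholder)
    "Local team wins regional championship",       # real (placeholder)
]
labels = ["fake", "real", "fake", "real"]

# Same shape as classic spam filters: text -> features -> naive Bayes.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(headlines, labels)

print(model.predict(["Shocking cure the elites are hiding"]))  # likely ['fake']
```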

Who Should Judge Truth?

Some argue for greater regulation and transparency. Since algorithms play a growing role in determining what news we see on the networks, shouldn’t we all better understand how they work? Why not make their inner workings public, like open source software?

Others say that doing this would make it easier for bad actors to understand and manipulate the programs.

Can’t the government come up with laws to make sure that news feeds are unbiased and don’t spread false information? Or, perhaps there should be some watchdog group or fact checking organization to keep the networks “honest.”

Again, I think it is incumbent on the tech companies to clean up the mess. But this should not go so far as making them hand over their algorithms. It’s their intellectual property. And I am leery of government oversight or any third party organization that polices truth telling by decree.

I am in favor of setting up a group that proposes standards for fake news detection and eradication. This industry body could factor in the interests of all parties – the social networks, government, users, and media – to issue guidelines and audit the networks on a voluntary basis (think of the MPAA movie ratings, the Parental Advisory Label for recorded music, or the Comics Code Authority).

If Facebook, Google, Reddit, Apple News and others want to earn the seal of approval, they’d need to open up their systems and algorithms to inspection to show they are not aiding the propagation of fake news.

Internet Society Drills Down on Fake News


I attended the Internet Society’s “Content Rules?!” session the other week. The panel drilled down on what we now call The Fake News problem (I couch it like this because, as you’ll see, it’s not a new one), defining it and exploring causes and solutions.

There’s been a lot already written about fake news. It’s turned into a real meme and hot button, but there’s been lots of noise and confusion. That’s not surprising because it is a complex topic, one that only recently hit our radars in the wake of the election.

Giving it a name gave it legs, a thing to blame (in some cases just because someone doesn’t like an article), and evoked lots of teeth gnashing. The session gave me the opportunity to hear from some very smart people from different sides, better understand the issues and crystallize my thoughts about how we might address the problem.

Not a New Problem

Journalist and American University Professor Chuck Lewis started by explaining that fake news has been around for years in various forms, e.g. government disinformation and propaganda. Toni Muzi Falcone’s DigiDig wrap (in Italian) also discussed this.

“The irony of us feeling victimized by fake news is pretty thick,” he said. “We’ve gone from truth to truthiness to a post-truth society, and now it’s fake news,” he added, “but it’s been going on for centuries.”

He blamed the number of people “spinning information” vs. reporting it – the ratio of PR people to journalists has grown to 5:1 – and said it is a crisis for journalism. The big questions are: who decides what is true, and how do you set standards for 200+ countries? We’ve traditionally relied on the press to be content mediation experts.

“We are at a critical, disturbing crossroad,” Lewis said, as “No one wants the government to be the mediators.”

A Systemic Problem

Compounding the problem are the changing ways we get information and the growing influence of social networks. Gilad Lotan, head of data science at BuzzFeed, discussed this.

Gilad has studied political polarization in Israel. He showed some fancy social graphs that tracked the spread of stories in the wake of the IDF’s bombing of a Palestinian school. Two different story lines emerged. Neither was “fake,” Gilad explained; “They just chose to leave certain pieces of info out in order to drive home points of a narrative.”

Gilad further discussed how your network position defines the stories you see; this leads to polarization and homophily (a fancy way of saying echo chamber). He also explained the role of algorithmic ranking systems. “You’re much more likely to see content which aligns with your viewpoints,” he said. This spawns “personalized propaganda spaces.”

It gives bad actors a way to game the system. Gilad illustrated this via what had been the elephant in the room – the 2016 US presidential election. He shared images that showed phantom groups manipulating the spread of information.

“The awareness of how algorithms work gave them a huge advantage. To this day, if you search for ‘Hillary’s Health’ on YouTube or Google, you see conspiracy theories at the top.”
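To see the mechanics behind Gilad’s point, here is a toy sketch of viewpoint-aligned ranking. The scoring rule and all the data are my own inventions for illustration; no platform’s actual formula is implied.

```python
# Toy sketch of viewpoint-aligned ranking: score stories by how well
# they match topics the user already engaged with. All data and the
# scoring rule are invented; no real platform's formula is implied.
from collections import Counter

def alignment(history, topics):
    """Higher when a story matches topics the user already consumed."""
    return sum(history[t] for t in topics)

history = Counter({"candidate_a": 8, "sports": 2})  # past clicks by topic
stories = [
    ("Rally for A draws record crowd", ["candidate_a"]),
    ("B's policy plan, explained", ["candidate_b"]),
    ("Derby ends in a late winner", ["sports"]),
]

# The feed: stories sorted by alignment, so dissenting views sink.
feed = sorted(stories, key=lambda s: alignment(history, s[1]), reverse=True)

# Clicking the top story feeds back into the history, narrowing the
# next feed further -- a "personalized propaganda space" in miniature.
title, topics = feed[0]
history.update(topics)
```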

Moderator Aram Sinnreich, associate professor at American University added: “My impression as a media scholar and critic… is that there’s been a lot of finger-pointing… everyone feels that there’s been a hollowing out of the Democratic process… undermining of the traditional role that the media has played as the gatekeeper of the shared narrative and shared truths; people want to hold the platforms accountable.”

Flavors of Fake News

Andrew Bridges, a lawyer who represents tech platforms, said that it is important to define the problem before considering solutions. The knee-jerk reaction has been to try to turn social networks into enforcement agencies, but that would be a mistake, according to Bridges. That’s because there are seven different things called fake news, each of which could have a different solution (I list them with the examples he cited):

  1. Research and reporting with a pretense of being objective (e.g., major newspapers)
  2. Research and reporting in service of a cause (National Review, Nation, New Republic)
  3. Pretend journalism – claims to be a news source but is actually a curator (Daily Kos)
  4. Lies – the ones that Politifact and others give Pinocchio noses or “pants on fire” awards
  5. Propaganda – the systematic pattern of lying for political gain
  6. Make-believe news, like the Macedonian sites that make up news from whole cloth
  7. Counterfeit sites – they make you think you are at ABC News.com, for example

Then, he dramatically challenged the panel and audience to label certain big ticket topics as fake news or not: Evolution, global warming, the importance of low-fat diets, the importance of low carb diets.

Bridges said that there’s not necessarily a quick fix or tech solution to the problem. “These things have been out there in society, in front of our eyes for years.”  He likened the problem to gerrymandering, gated communities and questions about Hillary’s health.

Some have proposed algorithmic transparency (not surprisingly, Bridges thinks it is an awful idea; “Opening them up just makes it easier to game the system”).

What could work, according to the lawyer? “I think we should look to algorithmic opacity, and brand values of the organizations applying the algorithm.” What about content moderation? He said “Do we turn it over to a third party, like a Politifact? Who moderates the moderator? We know what moderation is – it’s censorship.”

In Bridges’ view, education is important. We should teach the importance of research and fact checking, and keep each other honest: “Friends don’t let friends spread fake news.”

Other Challenges

Jessa Lingel, Ph.D., an assistant professor at the Annenberg School for Communication, seemed to be the youngest on the panel and spoke up for millennials:

“You can’t promise a generation of Internet-loving people a sense of control and agency over content and not expect this breakdown in trust.” She talked about the growth of citizen-driven journalism and the shift from content generation to interpretation. Jessa bemoaned the digital natives’ loss of innocence:

“We were promised a Democratic web and got a populist one; a web that connects us to different people – instead we got silos. Geography wasn’t supposed to matter… anyone with an Internet connection is the same… instead, geography matters a lot.”

Jessa said that algorithmic transparency is important, but that it is not enough. “Opacity? I do want to pay attention to the man behind the curtain. We need more than that tiny little button that explains ‘why did I get this ad?’”

Up Next: More on Solutions

As you have hopefully seen from my post, there are many opinions on the situation, and it’s a complex topic.

What do you think? In my next post, I’ll share my thoughts on fake news problems and solutions.




How do you Solve a Problem Like Big Data?

This post is not about how to analyze big data – it is about impact and implications for laws, business and our society.  How do we ensure that our increasing reliance on data, algorithms and AI does not come at a cost?

Below, I include some great articles on the topic. Please read on – I would love to hear your thoughts via the comments section below.

Bloomberg: Battling the Tyranny of Big Data

Mark Buchanan explores recent research and efforts, such as the Open Algorithms Project; he says we must first control how data are used, second, open data up by making it more widely available, and third, re-balance power between companies and individuals.

Mashable: We put too much Trust in Algorithms, and it’s Hurting our Most Vulnerable

Algorithms have gone wild in Ariel Bogle’s piece: they incorrectly label welfare recipients as cheats and blacks as recidivism risks, and give rise to “mathwashing.”

Huffington Post: We need to know the Algorithms the Government uses to make important Decisions about Us

Writer Nick Diakopoulos, a Fellow at the Columbia Tow Center and Assistant Professor of Journalism at the University of Maryland, cites similar issues and shares a case study in transparency in which he “guided students in submitting FOIA requests to each of the 50 states. We asked for documents, mathematical descriptions, data, validation assessments, contracts and source code related to algorithms used in criminal justice, such as for parole and probation, bail or sentencing decisions.”

PHYS ORG: Opinion – Should Algorithms be Regulated?

Offers a point vs. counterpoint; Markus Ehrenmann of Swisscom says “Yes.” Mouloud Dey of SAS says “No.”

Seth Godin’s blog: The Candy Diet

The marketing guru says that algorithms are dumbing down media.

The Drum: In the Post Truth Era, the Quest to Surface Credible Content has only Just Begun

Lisa Lacy reports that Google amended its algorithm to combat Holocaust deniers. But if manual intervention is needed for high-profile failures, what about other important issues that don’t get as much attention?

Foundation for Economic Education: What Happens when your Boss is an Algorithm?

Cathy Reisenwitz offers a nice primer, and argues for making algorithms open source.

The New York Times: Data Could be the Next Tech Hot Button for Regulators

Steve Lohr voices concerns about the growing market power of big tech, and explains potential antitrust issues arising from their collection of data. He writes: “The European Commission and the British House of Lords both issued reports last year on digital ‘platform’ companies that highlighted the essential role that data collection, analysis and distribution play in creating and shaping markets. And the Organization for Economic Cooperation and Development held a meeting in November to explore the subject, ‘Big Data: Bringing Competition Policy to the Digital Era.’”




AI and Algorithms in the News

Cross-posted on DigiDig

I have been carefully watching for stories about the growing influence of technology in our lives, and sharing links with the DigiDig team via Toni Muzi Falcone. We discussed turning these into a weekly digest or DigiDigest (I truly hope that puns and weak humor attempts are not lost in translation; otherwise my writing tenure will be short here).

So, without further ado, I list three very relevant stories.

Mother Nature’s Network – How Algorithms Influence us Every Day

This article describes ways in which algorithms help and hurt us. Cory Rosenberg worries that technology reduces our interests, backgrounds and behaviors to a number, and quotes Michele Willson of Curtin University in Perth, Australia: “Time, bodies, friendships, transactions, sexual preferences, ethnicity, places and spaces are all translated into data for manipulation and storage within a technical system or systems. On that basis alone, questions can be posed as to… how people see and understand their environment and their relations.”

Cory asks: “But isn’t the thought of humans as data an affront to our uniqueness?”

The article shares examples of bias, e.g. “Google’s online advertising system shows ads for high-income jobs to males much more so than to females,” and examples of algorithms that help, like in “understanding” preferences in music and dating.

Mashable – Online Shopping Algorithms have us in a Decision Rut

Lance Ulanoff sees a “fundamental flaw in the technology designed to serve up things we might like. They are based entirely on past choices and activities and leave zero room for improvisation and unpredictability.” He bemoans the loss of serendipity in shopping recommendations, for example. It’s a timely topic in the US, as Cyber Monday was yesterday and the holidays are looming.

Lance writes: “If we continue to follow the choices made for us on social, services, subscription and retail sites, we will all soon be living a very vanilla life. Our friends will be the same kinds of people, our social feeds will offer just one point of view and our gift-giving will surprise no one. It is time to stand up and say, ‘You don’t know me.’”

Seems Lance would disagree with Cory’s view on the technology’s benefits.
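A standard remedy for the rut Lance describes is to mix deliberate randomness into recommendations. Here is a minimal sketch of epsilon-greedy exploration; the catalog and the 20% exploration rate are invented for illustration:

```python
# Sketch of epsilon-greedy recommendation: mostly exploit past taste,
# but explore at random some fraction of the time to restore serendipity.
# The catalog and epsilon value are invented placeholders.
import random

def recommend(ranked_by_taste, catalog, epsilon=0.2):
    """With probability epsilon, ignore history and pick anything."""
    if random.random() < epsilon:
        return random.choice(catalog)  # explore: the "you don't know me" pick
    return ranked_by_taste[0]          # exploit: the safe, vanilla pick

catalog = ["jazz vinyl", "pour-over kit", "graphic novel", "wool socks"]
ranked = ["wool socks", "pour-over kit"]  # what past purchases predict
print(recommend(ranked, catalog))
```

The trade-off is explicit: a higher epsilon means more surprise and less relevance, which is exactly the dial Lance thinks the platforms have turned all the way down.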

Quanta Magazine – How to Force our Machines to Play Fair

Quanta writer Kevin Hartnett interviews author and Microsoft Distinguished Scientist Cynthia Dwork, who pioneered the ideas behind “differential privacy.” She is now taking on fairness in algorithm design.

Cynthia says: “algorithms… could affect individuals’ options in life… to determine what kind of advertisements to show people. We may not be used to thinking of ads as great determiners of our options in life. But what people get exposed to has an impact on them.”

She explores individual vs. group fairness and introduces the idea of “fair affirmative action.” Dwork would love to find a metric or way to ensure that “similar people [get] treated similarly,” but concludes that it is a thorny problem that people must first come to terms with before training computers to make these judgments.
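Dwork’s “similar people treated similarly” idea has a precise form: the decision function should be Lipschitz with respect to a task-specific similarity metric, so two people’s outcomes can differ no more than the people themselves do. Here is a toy check; the distance metric and score function are invented placeholders, not anything from the interview:

```python
# Toy check of Dwork-style individual fairness: outcomes for similar
# people should not differ by more than their dissimilarity allows.
# The distance metric and score function are invented placeholders.

def distance(a, b):
    """Task-specific similarity metric (here: a normalized income gap)."""
    return abs(a["income"] - b["income"]) / 100_000

def score(person):
    """Some decision score, e.g. the probability of being shown a job ad."""
    return min(1.0, person["income"] / 150_000)

def individually_fair(a, b):
    # Lipschitz condition: |f(a) - f(b)| <= d(a, b)
    return abs(score(a) - score(b)) <= distance(a, b)

alice = {"income": 90_000}
bob = {"income": 85_000}
print(individually_fair(alice, bob))  # True: close incomes, close scores
```

As Dwork notes, the hard part is not the inequality but agreeing on the metric – deciding who counts as “similar” is a human judgment that has to come before any code.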


