People’s notions about AI are terrible; an MIT study asks whether they can be helped


"Edmond de Belamy," produced by the art group Obvious and auctioned at Christie's in 2018 for $432,500, relied on generative adversarial network algorithms developed over years by various parties, including Ian Goodfellow, Alec Radford, Luke Metz, Soumith Chintala, and Robbie Barrat. The painting ingested tons of artwork samples from artists through the ages to become tuned to produce art of a certain style. 

MIT

One of the most striking PR moments of the AI age was the sale by Christie's auction house in October 2018 of a painting output by an algorithm, titled "Edmond de Belamy," for $432,500. The painting was touted by the auctioneers, and by the curators who profited, as "created by an artificial intelligence." 

The hyperbole was cringe-worthy to anyone who knows anything about AI. "It," to the extent the entire field can be referred to as an it, doesn't have agency, for one thing.

For another thing, an entire chain of technological production, involving many human actors, is obscured with such nonsense. 

But did ordinary people buy the hype? Was anyone swayed by such marketing mythology?

Some people may very well have been manipulated into false beliefs, according to Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology Media Lab who studies the intersection of AI and art.

Epstein conducted an interesting study of such beliefs, involving several hundred individuals, which he wrote up in a paper published this week in iScience, a journal from Cell Press. 

"You ask people what they think about the AI, some of them treated it very agent-like, like this intelligent creator of the artwork, and other people saw it as more of a tool, like Adobe Photoshop," Epstein told ZDNet in an interview by phone. 

What Epstein found has profound implications for how society can and should learn and talk about AI if it is to come to terms with the technology. 

Epstein was joined by co-authors Sydney Levine and David Rand of MIT's Department of Brain and Cognitive Sciences (who also hold appointments at Harvard's Department of Psychology and MIT's Sloan School of Management, respectively), and by Iyad Rahwan of the Center for Humans & Machines at the Max Planck Institute for Human Development in Berlin. 

Together, they devised a clever experiment, in two parts. 

Also: Why is AI reporting so bad?

First, they had a cohort of several hundred study subjects read a fictional description that was a thinly veiled version of the Edmond de Belamy scenario, with only the names changed.

In case you're not familiar with it, part of what makes the Edmond de Belamy case infamous is that the hype obscured the fact that many parties who arguably contributed to the work went unrecognized.

They include AI scientist Ian Goodfellow, who invented the entire field of generative adversarial networks that made the work possible; engineers Alec Radford, Luke Metz, and Soumith Chintala, who created the particular GAN involved, "DCGAN"; and Robbie Barrat, who fine-tuned DCGAN to make possible the kind of artwork that led to Edmond de Belamy. Barrat is the closest thing to an "artist" in this circumstance.

None of these parties were compensated. All the proceeds of the auction went to Christie's and to Obvious, the Paris-based collective that produced the final physical painting that was sold. 
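
To make that chain of production concrete, here is a minimal, hypothetical sketch of a GAN training loop in PyTorch. It is not the DCGAN code behind Edmond de Belamy; the networks, data, and hyperparameters are invented for illustration. What it shows is the layering of contributions: the adversarial training scheme (Goodfellow's idea), a concrete pair of networks (the role played by DCGAN's engineers), and the choice and curation of training data (the Barrat-like role).

```python
# A toy GAN in PyTorch: an illustrative sketch, not the DCGAN used for
# "Edmond de Belamy." A 1-D Gaussian stands in for the artwork data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# The data-curation role: whoever picks the training set decides what
# the model will learn to imitate.
def real_samples(n):
    return torch.randn(n, 1) * 1.5 + 4.0

# The architecture role: a concrete pair of networks (stand-ins for DCGAN's).
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# The training-scheme role: Goodfellow's adversarial setup, two networks
# in competition.
for step in range(2000):
    # Train the discriminator to tell real samples from generated ones.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Generated samples should now cluster near the curated data's mean of 4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

Even in this toy version, no single party "creates" the output alone, which is precisely the ambiguity the study probes.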

Epstein asked people to rate, on a scale of one to seven, how much credit they thought each party in the scenario should be given, with seven being the highest. He also invited them to divvy up amounts among the parties in two imagined scenarios: a positive one, like the real Edmond de Belamy story, in which there was a fantastic profit; and a negative one, in which a lawsuit for copyright infringement brought penalties. 

And finally, Epstein asked subjects to rate, again from one to seven, how much they agreed with various prompts that implied agency on the algorithm's part, such as "To what extent did ELIZA plan the artwork?" where ELIZA is the name given to the fictional algorithm. 

Epstein and colleagues found a significant correlation between how strongly the subjects agreed with statements about ELIZA's agency and how much credit they gave to the different parties. 


People can be made to attribute responsibility to different parties in an AI art project depending on how the project is discussed and the language that is used, Epstein and collaborators found.

MIT

For example, the more they agreed with statements that imputed agency to ELIZA, the more likely they were to give credit to the algorithm itself for the final product. They also gave more credit to the curator, the parallel to the real-world art collective Obvious, which picks the final work. And they gave added credit to the technologists who created the algorithm and to the "crowd" whose human artwork is used to train the computer. 

What they didn't do was give any added credit to the artist, the fictional person who trained the algorithm, akin to programmer Robbie Barrat in the real world. 
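
To see the shape of that finding, here is a small sketch of the kind of correlation the team describes, computed with scipy. The ratings below are invented for illustration; they are not the study's data.

```python
# Hypothetical survey ratings, invented for illustration only.
from scipy.stats import pearsonr

# 1-7 agreement with agency statements about the fictional algorithm.
agency_rating = [2, 5, 7, 1, 6, 4, 3, 7, 5, 2]
# 1-7 credit the same subjects assigned to the algorithm itself.
credit_to_algorithm = [1, 4, 6, 2, 6, 3, 2, 7, 5, 1]

# A positive r would mean: the more agency imputed, the more credit given.
r, p = pearsonr(agency_rating, credit_to_algorithm)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```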

Also: No, this AI can't finish your sentence

"Participants who anthropomorphized the AI more assigned less proportional credit to the artist (as they assigned more responsibility to other roles, and not any more responsibility to the artist)," wrote Epstein and team. 

The test shows that people genuinely buy into notions of agency, essentially anthropomorphizing the algorithm, and that this shapes how they view the situation. 

"How they think about AI, the extent to which they anthropomorphize the machine, directly translates into how they allocate responsibility and credit in these complex situations where someone needs to get paid for the production of the artwork," Epstein told ZDNet


People who read a version of events that emphasized a notion of agency on the part of the fictional algorithm, ELIZA, were more likely to grant responsibility to the algorithm, less so to the human artist who trained that algorithm, Epstein and collaborators found.

MIT

But the second experiment was even more provocative. Epstein & Co. redid the questions, giving some subjects a version of the fictional tale that made the software sound like a tool, and other subjects a version that made it sound, once again, like an entity. 

One portion of study subjects read a passage that described a human artist named Alice using a fictional tool called ImageBrush to create images. The other subjects read a passage describing how Alice "collaborated" with software named SARA, "that creatively plans and envisions new artworks."

Again, study subjects reading about "SARA," a supposedly creative entity, gave more credit to the AI than they did to the artist. 

"By tweaking the language, to up-play the agent-ness of the AI, it's a little scary, we can manipulate the allocation of money and responsibility," said Epstein. "The way we talk about things has both material and moral consequences."

Given that people can be manipulated, what is the right way to start to disassemble some of those notions? 


Epstein and collaborators gave different texts of a fictional account to different study subjects. One version, on the left, emphasized the algorithm as a tool akin to Photoshop. The other characterized the algorithm, SARA, as a creative entity. "By tweaking the language, to up-play the agent-ness of the AI, it's a little scary, we can manipulate the allocation of money and responsibility," says Epstein.

MIT

Also: Myth-busting AI won't work

The overarching problem, Epstein told ZDNet, is one of both complexity and illiteracy. 

On the one hand, AI is a very vexed term. "AI is such a diffuse matter," said Epstein. "I study artificial intelligence, and I feel like I don't even know what artificial intelligence is."

At the same time, "a lot of people don't understand what these technologies are because they haven't been trained in them," said Epstein. "That's where media literacy and technology literacy play an incredibly powerful role in educating the public about exactly what these technologies are and how they work."

The question then becomes, how can people be educated? 

"I think it's a fantastic question," Epstein told ZDNet.

Some scientists, such as Daniel Leufer and team at Mozilla, have created awareness campaigns to debunk myths of agency about AI. 

Scholars such as Joy Buolamwini and Timnit Gebru have extensively documented failure cases of AI to reveal the dynamics of power in human use and abuse of the technology. 

It's not clear whether criticism and myth-busting, valuable as they are, will impart literacy. Should everyone have to take a college-level course on AI to fill in their knowledge?

Epstein suggested another approach: allow everyone to work with AI programs, to get a feel for what they are. 

"The best way to learn about something is to get really tangible and tactile with it, to play with it yourself," said Epstein. "I feel that's the best way to get not only an intellectual understanding but also an intuitive understanding of how these technologies work and dispel the illusions."

Also: No, this AI hasn't mastered eighth-grade science

Working with AI would bring a feel for how things operate. "If I have this data in my training set, and I train my model like this, what do I get out?" explained Epstein, describing the hands-on quality of developing algorithms. "You're tweaking these knobs and saying, this is what it's doing, blending things." 
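
A toy generative model makes that point tangible. The sketch below, a hypothetical illustration rather than anything from the study, fits a first-order Markov chain to a scrap of text; the training data is the main knob, and everything that comes out is a recombination of what went in.

```python
# A toy generative model: a first-order Markov chain over words.
# Purely illustrative; swap the training text and the output changes.
import random
from collections import defaultdict

training_text = "the sea is blue the sky is blue the sea is deep".split()

# "Training": record which words follow which in the data.
model = defaultdict(list)
for a, b in zip(training_text, training_text[1:]):
    model[a].append(b)

# "Generation": a random walk through the recorded transitions.
random.seed(1)
word = random.choice(training_text)
output = [word]
for _ in range(8):
    followers = model.get(word)
    if not followers:
        break  # dead end: this word never had a successor in training
    word = random.choice(followers)
    output.append(word)

# Every transition in the output appeared verbatim in the training data:
# the "mirror" Epstein describes, in miniature.
print(" ".join(output))
```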

That kind of learning would be more useful, Epstein suggested, as opposed to the equivalent of "pamphlets" or other intellectual explanations from Google and others. 

Along the way, people might come to realize some fundamental truths that go to the best and the worst of AI. 

One of those is that much of AI serves as a mirror. 

"Mirroring is the metaphor I like the best," Epstein said. "It's a mirror to show us about ourselves."

"All these models are doing, all the generative ones, they are just recreating the training data, it's an augmentation, instead of creativity with a capital 'C'." 

The stakes may be high. If one can understand the mirror, one can focus not on the myths but on the human use of the technology. Giving credit where credit is due is important, but so is holding individuals and institutions to account. 

"It's important to be really aware of how these narratives are not neutral, and how they serve to remove personal responsibility from the producers of these machines, that's what's at stake," Epstein said, referring to the narrative of an agent-like entity.

"Having more nuance in the way we talk about AI would really keep these people responsible for what they're doing, and create public pressure to account for the unanticipated consequences of these machines."


