Given the outsized attention paid to ChatGPT and its brethren, one could be forgiven for thinking the technology behind generative AI sprang out of nowhere, fully formed. Not so, says Kellogg’s Sergio Rebelo. This week: a brief history of the long history of AI.
Plus, AI-generated misinformation, and how technology is changing friendships.
Try, try again
During the Cold War, the U.S. Defense Department sought to create a machine that could quickly translate captured phrases from Russian into English. Computer scientists fed the machine a large number of words, rules, and definitions and then had it translate words one at a time.
All too often, however, the machine was unable to catch nuances in the languages and ended up producing inaccurate translations. For instance, the Biblical phrase, “the Spirit is willing, but the flesh is weak,” when translated into Russian and back into English, would turn into, “the whiskey is strong, but the meat is rotten.”
Decades later, the team at Google Translate was still using much the same overall strategy for its translations. The result, unsurprisingly, was that translations often ended up being too literal, missing the subtleties of language. It wasn’t until 2016, when Google abandoned this approach, that its AI translation took off. The team leveraged neural networks to process entire sentences at once, using context to refine its translations.
It wasn’t an overnight success, Rebelo says. Instead, years of trial and error have brought AI translation to where it is today. “And the success that we’re seeing now with generative AI is a bit of the same thing. It’s a seemingly overnight success that was more than 50 years in the making.”
Rebelo says stories like this one hold lessons about the importance of individual and institutional persistence in furthering science. You can read more about the history of AI in Kellogg Insight.
A bevy of misinformation
Misinformation runs rampant on social media. And concern over its influence is only growing with the emergence of AI-generated misinformation.
So policymakers, regulators, and other stakeholders in content moderation have proposed a variety of methods to counter or prevent the spread of misinformation on social media. For example, there are accuracy nudges that discreetly remind people to keep an eye out for misleading information. There are efforts to debunk misinformation head on. And there are “prebunking” methods that preemptively address falsehoods to help build public resilience to misinformation.
In large part, these strategies rest on the idea that people generally care about the accuracy of information, says Kellogg’s William Brady. But for some kinds of information—specifically misinformation that evokes moral outrage—that’s not the case.
Across ten studies analyzing more than a million social-media posts, Brady and his colleagues find that misinformation is more likely to trigger outrage than trustworthy news. Outrage, in turn, drives people to share or retweet these often-misleading social-media posts. And it makes them more willing to do so without actually reading the article linked in the post, even if they are otherwise good at identifying misinformation.
“We actually find people are not terrible at discerning between misinformation versus trustworthy news,” Brady says. “But here’s the key: if you give them an outrage-evoking article with misinformation, that ability they have, it goes out the window; they are more likely to share it anyway.”
You can read more in Kellogg Insight.
“It’s almost as though friends provide a buffer for our health.”
— Neal Roese, on BYUradio, discussing the many benefits of making and maintaining strong friendships, even in an era of remote work, social media, and constant distraction. (To listen, hop to about the 1:06:00 mark.)
Jessica Love, editor in chief
Kellogg Insight