Smarter AI Assistants Could Make It Harder to Stay Human


That’s a well-meaning concern, but it would probably force agents to work more slowly and might even prevent some of the more innovative solutions our superintelligent assistants could come up with. I suspect the temptation to make AI-to-AI communication efficient and effective will overwhelm that nice sentiment.

Another potential nightmare: ad-funded bots that steer their owners toward sponsored products and services. Suleyman doesn’t love the idea, but he doesn’t seem to rule it out. He says that his bots, delivering tremendous value, won’t come cheap. “You regularly pay a lawyer hundreds of dollars per hour. But for some reason, we’ve just become allergic to paying more than 10 bucks a month for any service online. That will have to change.” So will those who can’t afford the fee be offered ad-supported versions? He acknowledges that not everyone will want to pay for access to the technology. In any case, Suleyman says, trust and accountability are essential. “It will take many years before we feel comfortable with AI having autonomous actions,” he says. “I don't think we should be doing that anytime soon.”

To me, that’s the worry—once we get comfortable, we’re finished. When I sought validation in a scan of research papers, my attention was snared by the title “The Power to Harm: AI Assistants Pave the Way to Unethical Behavior.” Coauthored by University of Southern California scientists Jonathan Gratch and Nathanael Fast, it hypothesizes that intelligent agents can democratize an unsavory habit of rich people, who outsource their bad behavior through lawyers, spokespeople, and thuggish underlings. “We review a series of studies illustrating how, across a wide range of social tasks, people may behave less ethically and be more willing to deceive when acting through AI agents,” they write.

I caught up with Gratch, who spoke to me from a conference in Würzburg, Germany. “Every man or woman can have their personal assistant do things on their behalf,” he says. “Our research suggests people might be willing to tell their assistants to do things that are more ethically questionable than they themselves would be willing to do.”

Gratch has been researching the possible impact of intelligent agents for years. In the past year the field has undergone a transformation similar to a lightning bolt striking some nebbish who suddenly takes on superpowers. “It used to be that you spend a whole PhD thesis trying to build the frickin’ agent that you want to test,” he says. “And now, with two days playing around with GPT or something, you can get something that interacts with people and looks pretty good.” Gratch says his field is now infused with a blend of excitement and angst.

“The technology will make individual people more powerful, opening up free time,” he says. “The one concern I have is, what do people do with that power?” For instance, if I had directed an agent to call him on my behalf, he says, a potential human connection would have been lost. “Those personal connections are what keep us nice and promote empathy,” Gratch says. “When AI makes it more about algorithms and laws and transactions, it diminishes us as people.”

Gratch’s field, once centered on hypotheticals, can now feel like a guide to what commercial AI services are around the corner. Consider some of the presentations at the conference in Würzburg: “Effects of Agent’s Embodiment in Human-Agent Negotiations,” “Accommodating User Expressivity While Maintaining Safety for a Virtual Alcohol Misuse Counselor,” “The Effect of Rapport on Delegation to Virtual Agents.” (Reassuringly, other papers were about maintaining ethics in a world full of agents.) Gratch saw similar work at a conference he recently attended at MIT. All the major tech companies were there too, he says, and he expects them to hire many of his students.
