You Are Not a Parrot

tags
Artificial Intelligence

Notes

Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do — with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.

NOTER_PAGE: (3 0.3297872340425532 . 0.06038135593220339)

They’re great at mimicry and bad at facts. Why? LLMs, like the octopus, have no access to real-world, embodied referents. This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter.

NOTER_PAGE: (3 0.5793780687397709 . 0.05402542372881356)

“We could say, ‘Hey, look, this is technology that really encourages people to interpret it as if there were an agent in there with ideas and thoughts and credibility and stuff like that.’” Why is the tech designed like this? Why try to make users believe the bot has intention, that it’s like us?

NOTER_PAGE: (4 0.3412438625204583 . 0.19809322033898305)

“applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development.”

NOTER_PAGE: (4 0.6235679214402619 . 0.05826271186440678)

In 2019, she raised her hand at a conference and asked, “What language are you working with?” for every paper that didn’t specify, even though everyone knew it was English.

NOTER_PAGE: (6 0.21358428805237317 . 0.05826271186440678)

In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team.

NOTER_PAGE: (7 0.10801963993453356 . 0.1716101694915254)

“This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”

NOTER_PAGE: (8 0.1359154929577465 . 0.17866909753874202)

He advocates for “a broader sense of meaning.” In a recent paper, he proposed the term distributional semantics: “The meaning of a word is simply a description of the contexts in which it appears.” (When I asked Manning how he defines meaning, he said, “Honestly, I think that’s difficult.”)

NOTER_PAGE: (8 0.6528169014084507 . 0.13309024612579762)

“To me,” he said, “this isn’t a very formal argument. This just sort of manifests; it just hits you.”

NOTER_PAGE: (8 0.8570422535211268 . 0.6845943482224248)

Manning does not favor pumping the brakes on developing language tech, nor does he think it’s possible to do so. He makes the same argument that has drawn effective altruists to AI: If we don’t do this, someone else will do it worse “because, you know, there are other players who are more out there who feel less morally bound.”

NOTER_PAGE: (9 0.43732394366197186 . 0.04649042844120328)

The toys are fun, enchanting, and addicting, and that, he believed even 47 years ago, will be our ruin: “No wonder that men who live day in and day out with machines to which they believe themselves to have become slaves begin to believe that men are machines.”

NOTER_PAGE: (10 0.8894366197183099 . 0.2406563354603464)

“From here on out, the safe use of artificial intelligence requires demystifying the human condition,” Joanna Bryson, professor of ethics and technology at the Hertie School of Governance in Berlin, wrote last year. We don’t believe we are more giraffelike if we get taller. Why get fuzzy about intelligence?

NOTER_PAGE: (11 0.18943661971830986 . 0.5797629899726526)

Daniel Dennett is even more blunt. We can’t live in a world with what he calls “counterfeit people.” “Counterfeit money has been seen as vandalism against society ever since money has existed,” he said. “Punishments included the death penalty and being drawn and quartered. Counterfeit people is at least as serious.”

NOTER_PAGE: (11 0.2964788732394366 . 0.19234275296262535)

Bender has made a rule for herself: “I’m not going to converse with people who won’t posit my humanity as an axiom in the conversation.” No blurring the line.

NOTER_PAGE: (11 0.6647887323943662 . 0.05013673655423883)

“Whether these things actually are people or not — I happen to think they are; I don’t think I can convince the people who don’t think they are — the whole point is you can’t tell the difference. So we are going to be habituating people to treat things that seem like people as if they’re not.”

NOTER_PAGE: (12 0.1563380281690141 . 0.3245214220601641)
NOTER_PAGE: (12 0.5309859154929578 . 0.31267092069279856)

This is the darkness that awaits when we lose a firm boundary around the idea that humans, all of us, are equally worthy as is.

NOTER_PAGE: (13 0.0795774647887324 . 0.16681859617137648)

“human potential — that’s the fascist idea — human potential is more fully actualized with AI than without it.” The AI dream is “governed by the perfectibility thesis, and that’s where we see a fascist form of the human.” There’s a technological takeover, a fleeing from the body.

NOTER_PAGE: (13 0.21126760563380284 . 0.09662716499544212)

“The point is to create a tool that is easy to interface with because you get to use natural language. As opposed to trying to make it seem like a person,” said Elizabeth Conrad, who, two years into an NLP degree, has mastered Bender’s anti-bullshit style. “Why are you trying to trick people into thinking that it really feels sad that you lost your phone?”

NOTER_PAGE: (13 0.4140845070422535 . 0.536007292616226)

Blurring the line is dangerous. A society with counterfeit people we can’t differentiate from real ones will soon be no society at all. If you want to buy a Carrie Fisher sex doll and install an LLM, “put this inside of that,” and work out your rape fantasy — okay, I guess. But we can’t have both that and our leaders saying, “i am a stochastic parrot, and so r u.” We can’t have people eager to separate “human, the biological category, from a person or a unit worthy of moral respect.” Because then we have a world in which grown men, sipping tea, posit thought experiments about raping talking sex dolls, thinking that maybe you are one too.

NOTER_PAGE: (13 0.5246478873239437 . 0.05560619872379216)