Do we need to be saying 'please' and 'thanks' to AI?

Why are people being nice to AI? And does it actually make any difference?

Isra'a Emhail, Digital Journalist
8 min read
A survey of more than 1,000 people found 67 percent of AI users in the US and 71 percent in the UK are polite to it.

Unsplash / masantocreative

"Don't use manners when interacting with AI… It's not your grandma. It's a tool."

That was the warning US-based AI anthropologist Cliff Jurkiewicz shared online a few months ago – and it sparked a debate.

Why shouldn't we be polite to AI? Are there any harms in doing this? And could being rude to AI spill over into how we talk to actual humans?

AI anthropologist Cliff Jurkiewicz is worried we'll be blurring the line between technology and human interaction.

Unsplash / Sanja Djordjevic

Jurkiewicz, who's a vice president at AI company Phenom, says this boundary matters — especially as AI tools rapidly evolve.

"As we learn to use these tools and adopt these tools, if we don't do it in the right way, we are going to ascribe human attributes that don't exist [to AI] and it's going to blur the line between tool and humanity. And I don't like that. That's scary."

For him, the concern is preserving the distinctiveness of human connection — something he feels is already strained, particularly for younger generations who grew up on social media and were shaped by the isolating Covid years.

Cliff Jurkiewicz is the vice president of global strategy at Phenom, an applied AI company specializing in HR.

Supplied / Phenom

Why are people being nice to AI?

A survey of more than 1,000 people found that 67 percent of AI users in the US and 71 percent in the UK are polite to it. For most, it's just the "nice thing to do". A minority admitted they're being nice just in case AI ever turns against us.

But there may be deeper habits at play, says Andrew Lensen, director of the AI programme at Victoria University of Wellington. He thinks we might be mirroring how we speak to people online, especially when asking for help.

"It's this realisation that the way we communicate matters and that if we start communicating in sort of abrupt or less polite ways with these chatbots that are increasingly a big part of our day-to-day life, that may well translate to how we talk to our colleagues and other people in the world."

Victoria University of Wellington AI programme director Andrew Lensen.

Supplied / Robert Cross

Psychologist Dougal Sutherland agrees it could be a case of social mimicry.

"I feel rude if I don't put those [please and thanks] in, because then when it responds to me, it responds to me being polite in itself. And I think we're trained to be polite to people who are polite to us."

Is there a problem with that?

From a behavioural point of view, Sutherland doesn't believe it's a bad thing.

Umbrella principal psychologist Dougal Sutherland.

sueallmanpeople ©2016

"I think it's good to have some of those sort of minimum standards of civility and respect … those are often things that are lacking in the digital space."

But Jurkiewicz sees a "hard line" when it comes to our relationship with tech versus people.

"Social conditioning cannot be applied to technology and should not be applied to technology."

He points to social media as a warning: "We've already done it with social media - the algorithms in social media, the dopamine rush with social media, the human response to that - the conditioning response to that has trapped entire generations into this false view of how humans interact with each other."

What about kids? Do they know the difference?

Ying Xu, a Harvard researcher studying AI's impact on children, says most kids understand they're interacting with a machine.

"I think that a lot of time we kind of are concerned that children anthropomorphise AI," Xu told the Screen Deep podcast. "I will have to say that, from my own study, we haven’t seen much evidence supporting that. Children, even as young as four, they recognise that they're interacting with a machine which is different from their interactions with people."

There is some evidence that children can pick up language habits from AI and may repeat them in real life — but it's unclear whether that's just playful mimicry or a lasting shift, Xu told Harvard EdCast.

Amazon Kids, for example, has a default ‘polite mode’ on Echo devices. So when kids say ‘please’, Alexa may respond with ‘thanks for asking so nicely’.

Xu says this could be a step in the right direction as children pick up social etiquette through interactions. "But it also poses a risk of obscuring, from the children's perspective, the boundaries between AI and humans."

Does it make a difference to the results?

Kurtis Beavers, a director on the design team for Microsoft's AI companion Copilot, said basic etiquette “helps generate respectful, collaborative outputs” and sets the tone for the response.

A Microsoft WorkLab memo notes: "Generative AI also mirrors the levels of professionalism, clarity, and detail in the prompts you provide."

But Lensen says it's hard to pin down how much of a difference using manners would make, because it would depend on the datasets the models are trained on.

Still, Jurkiewicz, who uses AI platforms daily, doesn't believe it makes any difference to the quality of the results.

How much energy does it use?

These large language models are powered by huge data centres, which consume a lot of electricity — often from non-renewable sources.

We don’t know exactly, but when someone asked OpenAI chief executive Sam Altman earlier this year how much electricity people saying 'please' and 'thank you' costs the company, he replied, "tens of millions of dollars well spent — you never know".

Jurkiewicz estimates that if everyone in the US used extra-polite language with AI for a year, it could draw about a gigawatt of power.

“It consumes the energy of a nuclear power reactor just for 'please' and 'thank yous', along with all the other pleasantries that we might [use].”

Lensen, on the other hand, believes those environmental cost claims may be overblown. The real energy impact, he says, comes from generating complex outputs, not just adding a few polite words to your prompt.

Well, what does AI make of it?

We asked popular chatbots ChatGPT, Copilot and Gemini. Here's what the machines had to say (bearing in mind, of course, that these bots are trained on previous queries, what's online and so on).

Screenshots of AI chatbots' responses to the question 'how important are my manners to using you?', asked in a polite version that opens with 'Hi, can I please ask you a question?' and a blunt version that asks the question right away.

OpenAI / Microsoft / Google

In both versions tested, with and without 'Hi, can I please ask you a question?', the answers to 'how important are my manners to using you?' didn’t change much apart from the wording.

All reiterated that interactions with AI could spill over into the real world, that they don’t have feelings so won’t be hurt (so ultimately it's up to you), but that your word choice can shape the conversation.
