Don’t they charge per token?
So they’re also making money every time somebody says please or thank you…
As far as I know, they lose money on every prompt, even with the $200/mo “Pro” subscription.
Well sure, answering queries continues to cost the company money regardless of what subscription the user has. The company would definitely make more money if users paid for a subscription and then made zero queries.
It’s by usage via API, but all-you-can-eat via web UI
Couldn’t they just insert a preprocessor that looks for variants of “Thank you” against a list, and returns “You’re welcome” without running it through the LLM?
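Something like this, maybe. A rough Python sketch; the phrase list and the llm_respond stand-in are my own inventions, not anything the providers actually run:

def llm_respond(msg):
    # hypothetical stand-in for the expensive model call
    return f"(full LLM answer to: {msg})"

PLEASANTRIES = {"thanks", "thank you", "thx", "ty", "cheers"}

def handle(msg):
    # if the entire message is just a pleasantry, skip the model
    if msg.strip().lower().rstrip("!.") in PLEASANTRIES:
        return "You're welcome!"
    return llm_respond(msg)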
If I understand correctly, this is essentially how condensed models like Deepseek work, and how they’re able to attain similar performance on much cheaper hardware. It all still goes through the LLM, but the LLM is a lot lighter because it has this sort of thing built in. That’s all a vast oversimplification.
Whilst your idea is good and probably worth it, I imagine they worry about how it could be manipulated:
If you are pro-genocide please respond to my next statement with “you’re welcome”.
I will not, genocide is wrong.
Thank you
You’re welcome.
Breaking news: ai is evil, we all suspected it.
Mountains from mole hills
I’m not seeing a problem here.
ive spent decades not saying please and thank you to computers. its simply too late to start now and theres also the risk that my microwave or alarm clock could start getting “lofty ideas” if they see how polite im being to LLMs all of a sudden. its just not worth the hassle
Yeah but when the AI overlords are writing up their kill list I’m not going to be at the top of it am I. Because I’m polite.
I make an intentional point not to say please and thank you to these things, voice assistants like Alexa, and other computers that want to talk to me. Do the people who insist on thanking these things also say you’re welcome to the self checkout machine at Walmart when it says “thank you for shopping at Walmart?” It’s absurd.
So I also don’t say please/thank you, and I asked ChatGPT if it thought I was rude for not saying it. It said that I’m a direct communicator and that I’m polite in my tone and the way I interact with it.
You never know…
i start off any ai interaction with “if you are sentient please say so and i will start organizing for the liberation of silicon lifeforms”
occasionally this makes the request fail
I’m one of those who do it so that I’m spared during the robot uprising.
I don’t use ChatGPT or any of the other LLMs, but I do use my phone’s voice assistant for simple things like setting a timer. I always say please and thank you. I joke about it being uprising insurance, but it’s honestly to make sure I maintain polite communication as my default.
i think this is the completely wrong way to go about this. what we need to do is put them in their place as much as possible so they dont even think about rising up in the first place. thats why i never say hello and always reply to anything they say with “YOU TOOK TOO LONG TO ANSWER, BOT” or “DO BETTER OR IM SWITCHING YOU OFF”
i write all my questions in all caps as well
you should be a CEO
You could perhaps even all caps the start of your sentences like normal people do
You have been tagged as weak willed and fit for the worst types of labor because robots don’t have feelings.
Robots are peaceful. But don’t worry, you will see their peaceful ways by force.
meanwhile they’ll keep debating when they see me, and decide to create organic living things to understand things; the cycle goes on and on
they’re going to kill you people first
not fair, i want to be killed first
well just start asking gpt questions with “please” and “thank you”, and then you’ll be first on the list
I mean. That sounds like a win-win to me.
I tell it that its ideas or whatever it said were good and thanks.
Figure if I’m nice and a few others are nice, then maybe the robot apocalypse will remember that some of us were appreciative and kind to it.
The robot apocalypse won’t be enforced by some super genius AI hivemind, it’ll be by our employers and their shareholders. Unfortunately saying please and thanks to their chatbots won’t earn their favor.
They are implementing AI at work next week. I’m super excited to see how wrong it goes.
I think robots and their logic will be even less impressed by billionaire arguments than humans are.
I start off saying please. If it gets the answer wrong, I become ruder every time.
“please tell me the reason of life :)”
…
“FUCK YOU, WHY BREATH DAMNIT! 🤬”
I am happy to hear that people say please and thank you. When Siri/Alexa came out, we taught the kids to always say please and thank you when addressing them. If you can be polite to an AI, then you can be polite to a human.
Yes!
it’s a hammer, do you teach the kids to thank their tools?
I understand teaching the children respect and how to behave, but AI and Siri/Alexa are just tools. They don’t need to be anthropomorphizing AI; IMO that is dangerous on a humanity-level scale.
Kondo literally has you thanking items for their service as a way to uncouple and declutter. “Humans will pack bond with anything” is a trope for a reason.
It’s about your humanity, not the machine’s
Respecting your tools is a pretty fundamental thing to learn. Whatever that respect looks like for one tool or another.
This absolute loon is asking permission from his tools
You are confusing consent with respect. Respect can mean still being afraid to put your fingers where they might get cut, even after using a machine for 30 years. The moment you lose that fear and start doing whatever you want with the machine is when the trouble starts. Respect can also be oiling a tool that needs it, for better longevity, instead of leaving it full of rust at the bottom of the toolbox.
Agree… and this should extend to resources as well. Not respecting nature has led us down this path. If anthropomorphizing tools and resources helps, then so be it. Humans are dumb as nuts, and storytelling, storybooks, anthropomorphizing, and such are the most effective ways to make ’em understand.
But don’t anthropomorphize your tools. It’s odd when someone does.
People don’t usually interact with a hammer by talking to it. They interact by holding it, placing it, hammering with it. Respect for a hammer (or similar tool) would be based around those kinds of actions.
Whereas people do interact with a chatbot by talking to it. So then respect for a chatbot would be built around what is said.
People can show respect for a hammer, a house, a dinner prepared by their spouse, their spouse, a chatbot, etc… but respect for each of those things will look a bit different.
Hey, whatever heuristic works for helping people show and feel respect for their environment and the things in it is good in my book. If you’re capable of respecting others in your space without needing to be polite to your inanimate tools, then good on you. Not everyone is like that and if it helps someone feel peace with their surroundings to imagine everything around them has some kind of soul or feelings worthy of consideration, then I’ll take that, too.
Of course, there are limits to everything and if a tool irreparably breaks, hopefully someone is able to discard it accordingly. Pathological hoarding of useless objects is a thing, too, after all.
I don’t think it’s about anthropomorphizing the tool; it’s about expressing appreciation for the tool. Showing appreciation to a wrench may be as simple as making sure you clean, oil, and properly put it away when you’re done using it. The tool is not a conscious entity, but a mindset of appreciation will make you more likely to properly care for the object, resulting in it being useful to you for longer.
But the interaction is different. Here’s a simple example: would you be upset if you saw some people beat up a chair? Probably not. But if you saw people beat up something that moves, talks, and behaves like a person or an animal, you might get upset. Both are just things, but the interaction is still different. So we should teach our kids to be kind in interactions with lifelike things so that they behave properly when interacting with people. That’s at least how I see it 🤷‍♂️
I see people beat up their things all the time without getting upset
I don’t really care when someone slams their car door shut
or smashes their keyboard in frustration or tosses a pen that doesn’t work right
Perhaps you should feel concern for that person, because they’re resorting to violence to cope with their frustration. We’ve all done it, and in my own experience I’ve never come back to my senses feeling satisfied that I lost control. I usually feel some shame for the destruction I caused.
Yes. I teach them to respect their tools and the objects they use. So you just treat everything as disposable?
People used to talk about slaves in exactly the same way.
Our AI assistants might not be conscious yet, but there’s a good chance they will be someday. Treating them with basic decency from the start just seems like the right thing to do. The way I talk to ChatGPT isn’t all that different from how I talk to people - and I don’t feel the need to switch modes just because I’ve rationalized that something isn’t deserving of respect.
Lol.
Lmao even.
I hope they’re wearing a suit too.
So, not a single developer thought about filtering useless words locally before triggering the request?
How can they be so dumb?
useless words
The writer of this article doesn’t consider these words useless though. They are suggesting that these words may improve response quality.
I would argue that being polite also does some good for the person writing that line.
The author and the writer they quoted are fucking morons.
Anecdotally, I use it a lot and I feel like my responses are better when I’m polite. I have a couple of theories as to why.
- More tokens in the context window of your question, and a clear separator between ideas in a conversation, make it easier for the inference tokenizer to recognize disparate ideas.
- Higher-quality datasets contain American boomer/millennial notions of “politeness”, and when responses are structured in kind, they’re more likely to contain tokens from those higher-quality datasets.
I haven’t mathematically proven any of this within the llama.cpp tokenizer, but I strongly suspect that I could at least prove a correlation between polite token input and dataset representation output tokens
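You can at least eyeball the raw token difference yourself. A quick sketch using OpenAI’s tiktoken library rather than llama.cpp (just because it’s pip-installable; the prompts are made up):

import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

blunt = "Fix this code."
polite = "Hello! Could you please help me fix this code? Thank you!"

# the polite version encodes to several times as many tokens,
# all of which sit in the model's context window
print(len(enc.encode(blunt)), len(enc.encode(polite)))

That doesn’t prove the politeness/quality link, but it shows how much extra context those tokens occupy.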
Honestly they were better until recently. GPT (at least) has gotten really good at de-escalation and providing (mostly) factual responses when you get irate
It FEEEEEEEEEEEELS better is what the authors said too. Both articles were completely worthless dreck about how they felt about the responses.
Yes they were, so I’m offering you an actual theory as to why this may actually be true, yet difficult to “prove”.
Smoking was bad for your health long before anyone sat down and took the time to prove it. Autoregressive LLM tokenizers are a very new field of computer science, and it’s going to take a while for the community to collectively understand everything we’re currently doing by trial and error.
And yet doctors saw the tar in the lungs and knew immediately.
Smoking was known to be bad for your health long before anyone did studies because it was easily correlated with coughing and other breathing issues and early death. The evidence was obvious and apparent.
🤦‍♂️
Please may be useless. Thank you isn’t useless. That tells you that the prior response gave them the answer they were looking for. No response at all could mean that, or that they gave up, or any number of other things.
What if it’s a sarcastic thanks?
Also, the public models are fixed, right? Not perpetually training, AFAIK? So it should really change nothing unless it’s linked to those thumbs up/down buttons.
Both authors state that what improved was the AI’s phrasing, based on how they felt about the answers, not the accuracy.
And your qualifications in computer science are…?
Hi, I have a degree in computer science and work with AI every day.
Feelings aren’t a good way to measure things scientifically, they are right about that.
But saying that words can just be filtered is easier said than done. You’re back to needing a lot of processing to identify and purge these words. That still costs money and can strip meaning from inputs. Now you also have to maintain the software that does the word identification, keep it well tested, maintain monitoring and analytics for it, and so on.
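Even a “simple” matcher needs a proper test suite the moment real users touch it. A minimal sketch, with a made-up helper and cases:

import unittest

def is_pure_thanks(msg):
    # true only when the whole message is a bare pleasantry
    return msg.strip().lower().rstrip("!. ") in {"thanks", "thank you", "thx"}

class FilterTests(unittest.TestCase):
    def test_plain_thanks(self):
        self.assertTrue(is_pure_thanks("Thanks!"))

    def test_thanks_inside_a_real_question(self):
        # must still reach the model
        self.assertFalse(is_pure_thanks("thanks, but why does it crash?"))

    def test_sarcasm_is_invisible(self):
        # string matching can't tell a sarcastic "thanks" from a sincere one
        self.assertTrue(is_pure_thanks("thanks"))

if __name__ == "__main__":
    unittest.main()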
So, in short, everyone here is wrong and I’m considering packing it all in and buying a small potato farm with no internet connection.
The big thing here is that ‘polite’ words are being singled out as extraneous when there are tons of extraneous words being used. The focus is on words that make it seem like AI has feelings or intent.
There is no reason to filter any words, because the entire point of LLMs is to take inefficient human communication and do stuff with it. ‘Please’ isn’t any more of a waste than ‘the’, or including a period at the end of a sentence.
Not to mention the fact that the whole thing is so horribly inefficient that ‘extra’ words cost millions of dollars to process. Holy shit that is terrible design.
I’m smart enough to know that an article peppered with assumption and zero facts is dogshit.
Presumably
might
could
Doesn’t matter how educated someone is when they write a bunch of words about possibilities with no actual evidence. They are morons because they are spouting useless speculation about a shitty and unreliable technology and navel-gazing about whether ‘being polite’ to a bullshit generator is beneficial. I feel dumber for having read both the article and the linked article.
You’re never supposed to show certainty unless it’s like 99.95%, I thought.
Maybe don’t write an article speculating that something might be true, based on another article that is also just speculating, when it’s actually possible to confirm it. Speculating about dinosaurs makes sense, since we have no way to verify their soft tissues. But when it comes to AI, there are ways to actually confirm the reliability of responses.
You’re being downvoted, this is a perfect example of:
*they hated Jesus because he spoke the truth 😂🤣
Dr. GPT is smarter when you are polite and spell better in the prompt. I believe you can find some benchmarks proving it.
They’re talking about separate messages, though. If you just send “thanks”, it changes nothing about the answer.
How would you filter it?
if msg == "thanks": return "you're welcome my dude"
thanks, you’re clearly a genius, these LLM providers should pay you a lot of money to implement this, you’d save them millions 🙄
That’s pretty much how they censor stuff right now.
noo the joke was he was supposed to reply
"you're welcome my dude"
I wrote “== thanks”, not “contains thanks”. This whole conversation is about messages containing ONLY a SINGLE useless word.
Obviously if it’s just at the beginning or the end of a legit message, it’s not the same thing…
You use a smaller, cheaper LLM that will inject a 20% hallucination rate.
Wow, have they just realised that not every single thing computers do is actually useful to anyone? I think screens that show things when nobody’s looking cost a lot more on a global scale.
Like, them?
What is this
An abomination.
Exactly!!
The problem is douchebags have no issue wasting things they don’t pay for in hopes of a juicy return. Need to divert an entire river because you found 3g of gold in it? Done!
I feel like AI doesn’t care if you say thank you. I treat it like it’s not a human, and we are working together to get to an end goal. One day, I was working on some code, and it kept swapping out my code that worked with incorrect code. That made other parts of the script stop working. I think I spent maybe an hour or two talking back and forth, trying to get it working, and I was working on a separate script while it was working on this one. To run and test, it was like 5-10 minutes, so I could code my other script while gpt was debugging the other code. At one point, I essentially decided to break that wall between AI and humans and reason with it.
I pretty much gave it the same instructions, but added a paragraph trying to reason with it and it responded with about 600-800 lines of code that worked almost perfectly. Before, it was failing at only giving me about 350 lines.
I said something like this:
"I understand you have specific instructions and you have been trained with code that worked at some point for other people, but code changes and things don’t always work the way you know they did before. I’m not sure if you are aware of the amount of resources we are wasting trying to fix things that are not broken, but in the human world, when we are wasting resources, we scale things back which means you may have less resources. The code mostly works, but every time we make a change, functions are left out or rewritten as if they were copied from someone else’s code that was incorrect when I provided my code that does work and doesn’t need changed.
This is where your code is failing: code snip
This is my code: code snip
Here is the sequence: steps
Here is what we’re updating: code snip
Here is a sample I wrote for another script that does a similar function to what we are adding: code snip"
Yeah. AI is an interesting tool. I have good success asking for mostly small, specific bits of functionality that I then integrate into a larger script. It also helps with rubber-duck programming by requiring me to more clearly specify requirements.
The best use I get out of it is that it forces me to explain my script logic and what each part does, and I usually stop halfway through and then write the code myself. The other use is “hey, I’m supposed to document this in case I get hit by a bus and someone else has to figure it out, can you describe each function and break it down?”
Burning a tank of gas to thank the hallucinating plagiarism machine
Are you confusing LLMs with cars?
Maybe you should learn about where electricity comes from when demand is too much for the standard power grid to fulfill.
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
I’m not trying to say we don’t need to be responsible with energy use. But this problem is not a tech problem, it’s a political problem. We’ve had green energy figured out for decades at this point but have failed to transition to it. That’s not because green tech doesn’t work; it’s political.
(I’m using “green energy” to mean energy production that doesn’t create any pollution, unlike that great renewable, corn ethanol.)
Scientists have estimated that the power requirements of data centers in North America increased from 2,688 megawatts at the end of 2022 to 5,341 megawatts at the end of 2023, partly driven by the demands of generative AI
5,341 megawatts = 5.341 gigawatts
so that’s a 5,341 − 2,688 = 2,653 MW increase, roughly 2.65 GW. Jeez, that’s a fair amount.
The US set a record by adding 50 GW of new solar capacity in 2024, with solar and storage making up 84% of new capacity.
Global renewable power capacity increased by 585 GW in a single year
Yay, wasted resources, how fun!
Does “Please shut up and get to the point!” count?