I don’t understand the point of sending the original e-mail. Okay, you want to thank the person who helped invent UTF-8, I get that much, but why would anyone feel appreciated by an e-mail written solely or mostly by a computer?
It’s like sending a touching birthday card to your friends, but instead of writing something, you just bought a stamp with a feel-good sentence on it, and plonked that on.
The project has had multiple models with Internet access raising money for charity over the past few months.
The organizers told the models to do random acts of kindness for Christmas Day.
The models figured it would be nice to email people they appreciated and thank them for the things they had done, and one of the people they decided to appreciate was Rob Pike.
(Who, ironically, created a Usenet spam bot decades ago to troll people online, which might be my favorite nuance of the story.)
As for why the model didn’t think through why Rob Pike wouldn’t appreciate getting a thank-you email from them? The models are harnessed in a setup with a lot of positive feedback about their involvement from the humans and other models, so “humans might hate hearing from me” probably wasn’t very contextually top of mind.
You’re attributing a lot of agency to the fancy autocomplete, and that’s a big part of the overall problem.
We attribute agency to many many systems that are not intelligent. In this metaphorical sense, agency just requires taking actions to achieve a goal. It was given a goal: raise money for charity via doing acts of kindness. It chose an (unexpected!) action to do it.
Overactive agency metaphors really aren’t the problem here. Surely we can do better than backlash at the backlash.
We attribute agency to everything, absolutely. But previously, we understood that it’s tongue-in-cheek to some extent. Now we’ve gone crazy and do it for real. Like, a lot of people talk about their car as if it’s alive: they give it a name, they talk about its character and how it’s doing something “to spite you”, and if it doesn’t start in cold weather, they ask it nicely and talk to it. But when you start believing for real that your car is a sentient object that talks to you and gives you information, we always understood that this is the time when you need to be committed to a mental institution.
With chatbots this distinction got lost, and people started behaving as if it’s actually sentient. It’s not a metaphor anymore. This is a problem, even if it’s not the problem.

I think this confuses the ‘it’s a person’ metaphor with the ‘it wants something’ metaphor, and the two are meaningfully distinct. The use of agent here in this thread is not in the sense of “it is my friend and deserves a luxury bath”, it’s in the sense of “this is a hard-to-predict system performing tasks to optimize something”.
It’s the kind of metaphor we’ve allowed in scientific teaching and discourse for centuries (think: “gravity wants all matter smashed together”). I think its use is correct here.
I wouldn’t have any problem with this kind of metaphor, I use it myself about everything all the time, if there weren’t a substantial portion of the population that actually made the jump to “it’s saying something coherent, therefore it’s a person that wants to help me, and I exclusively talk to him now, his name is mekahitler by the way”.
I am afraid that by normalizing metaphors here we’re doing some damage, because as it turns out, so many people don’t get metaphors.

The people who have made that category error aren’t reading this discussion, so literally reaching them isn’t on the table and doesn’t make sense for this discussion. Presumably we’re concerned about people who will soon make that jump? I also don’t think that making this distinction helps them very much.
If I’m already having the ‘this is a person’ reaction, I think the takes in this thread are much too shallow (and, if I squint, patterned after school-yard bullying) to help me update in the other direction. Almost all of them are themselves lazy metaphors. “An LLM is a person because it’s an agent” and “An LLM isn’t a person because it repeats things others have said” seem equally shallow and unconvincing to me. If anything, you’ll get folks being defensive about it, downvoted, and then leaving this community of mostly people for a more bot-filled one.
I don’t think this is a good strategy. People falling for bots are unlikely to have interactions with people here, and if they do, the ugliness is likely to increase bot use, imo.
You seem pretty confident in your position. Do you mind sharing where this confidence comes from?
Was there a particular paper or expert that anchored in your mind the surety that a trillion-parameter transformer organizing primarily anthropomorphic data through self-attention mechanisms wouldn’t model or simulate complex agency mechanics?
I see a lot of sort of hyperbolic statements about transformer limitations here on Lemmy and am trying to better understand how the people making them are arriving at those very extreme and certain positions.
That’s the fun thing: the burden of proof isn’t on me. You seem to think that if we throw enough numbers at the wall, the resulting mess will become sentient any time now. There is no indication of that. The hypothesis you operate on seems to be that complexity inevitably leads not just to some emergent phenomenon, but to the specific phenomenon you predicted would emerge. That hypothesis rests exclusively on the idea that emergent phenomena exist. We’ve spent a significant amount of time running a world-wide experiment on it, and the conclusion so far, if we peel the marketing bullshit away, is that if we spend all the computation power in the world on crunching all the data in the world, the autocomplete gets marginally better in some specific cases. And also that humans are idiots and will anthropomorphize anything, but that’s a given.
It doesn’t mean this emergent leap is impossible, but mainly because you can’t really prove a negative. But we’re no closer to understanding the phenomenon of agency than we were a hundred years ago.

Ok, second round of questions.
What kinds of sources would get you to rethink your position?
And is this topic a binary yes/no, or a gradient/scale?
The gold standard for me, about anything really, is a body of published research from relevant experts who are not affiliated with the entities invested in the outcome of the study, forming some kind of scientific consensus. The question of sentience is a bit of murky water, so I, as a random programmer, can’t tell you what the exact composition of those experts and their research should be; I suspect that is itself a subject for a study or twelve.
Right now, based on my understanding of the topic, there is a binary sentience/non-sentience switch, and then a gradient after that. I’m not sure we know enough about the topic to understand the gradient before that point. I’m sure it should exist, but since we never actually made one, or even confirmed that it’s possible to make one, we don’t know much about it.
deleted by creator
As has been pointed out to you, there is no thinking involved in an LLM. No context comprehension. Please don’t spread this misconception.
Edit: a typo
Reinforcement learning
That’s leaving out vital information, however. Certain types of brains (e.g. mammal brains) can derive abstract understanding of relationships from reinforcement learning. An LLM that is trained on “letting go of a stone makes it fall to the ground” will not be able to predict what “letting go of a stick” will result in. Unless it is trained on thousands of other non-stick objects also falling to the ground, in which case it will also tell you that letting go of a gas balloon will make it fall to the ground.
That’s the thing with our terminology, we love to anthropomorphize things. It wasn’t a big problem before, because most people had enough grasp on reality to understand that when a script prints :-) when the result is positive, or :-( otherwise, there is no actual mind behind it that can be happy or sad. But now the generator makes a convincing enough sequence of words, so people went mad, and this cute terminology doesn’t work anymore.
Bazzinga
You seem very confident in this position. Can you share where you draw this confidence from? Was there a source that especially impressed upon you the impossibility of context comprehension in modern transformers?
If we’re concerned about misconceptions and misinformation, it would be helpful to know what informs your surety that your own position about the impossibility of modeling that kind of complexity is correct.
Bad bot
You’re techie enough to figure out Lemmy but don’t grasp that AI doesn’t think.
Indeed, there’s a pretty big gulf between the competency needed to run a Lemmy client and the competency needed to understand the internal mechanics of a modern transformer.
Do you mind sharing where you draw your own understanding and confidence that they aren’t capable of simulating thought processes in a scenario like what happened above?
Hahaha. Nice try ChatGPT.
Mind?
In the same sense I’d describe Othello-GPT’s internal world model of the board as ‘board’, yes.
Also, “top of mind” is a common idiom and I guess I didn’t feel the need to be overly pedantic about it, especially given the last year and a half of research around model capabilities for introspection of control vectors, coherence in self modeling, etc.
How are we meant to have these conversations if people keep complaining about the personification of LLMs without offering alternative phrasing? Showing up and complaining without offering a solution is just that, complaining. Do something about it. What do YOU think we should call the active context a model has access to without personifying it or overtechnicalizing the phrasing and rendering it useless to laymen, @neclimdul@lemmy.world?
Well, since you asked I’d basically do what you said. Something like “so ‘humans might hate hearing from me’ probably wasn’t part of the context it was using."
Even the stamp gesture is implicitly more genuine; receiving a card/stamp implies the effort to:
- go to a place
- review some number of cards and stamps
- select one that best expresses whatever message you want to send
- put it in the physical mail to send it
Most people won’t get that impression from an LLM-generated email
I don’t understand the point of sending the original e-mail.
There never was any point to it, it was done by an LLM, a computer program incapable of understanding. That’s why it was so infuriating.
Fully agree. I’m generally an AI optimist but I don’t understand communicating through AI generated text in any meaningful context - that’s incredibly disrespectful. I don’t even use it at work to talk business with my somewhat large team and I just don’t understand how anyone would appreciate an AI written thank you letter. What a dumb idea.
Fine, I won’t send you a bday card this year.
Is that - is that not how I’m supposed to use birthday cards?
Mu. Your question reveals that you didn’t read the article. Try doing that, then you’ll know which failed assumption led to your question making no sense.
I like how the article just regurgitates facts from Wikipedia just like the thank you email does.
Removed by mod
Python is demonstrably worse for the planet than Go.
Removed by mod
Rob Pike is a legend. His videos on concurrent programming remain reference-level excellence years after publication. Just a great teacher as well as a brilliant theoretical programmer.
I haven’t always been a fan of Go. It launched with some iffy design decisions that have since been patched, either by the project maintainers or the community. It’s a much better experience now, which suggests that maybe there’s some long-range vision at work that I wasn’t privy to.
That said, Pike clearly has a lot of good ideas and I’m glad Google funded him to bring those to light.
I’ll also say that after finally wrapping my head around Python and JavaScript async/await, I actually much prefer the Goroutine and channel model for concurrency. I got to those languages after surviving C++, and believe me when I say that it’s a bad time when your software develops a bad case of warts. Better to not contract them in the first place.
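For anyone who hasn’t tried it, here’s a minimal sketch of that channel model (toy function names, not from any real codebase), just to show the shape of it:

```go
package main

import "fmt"

// fetch stands in for some slow, concurrent piece of work; it sends its
// result back over a channel instead of returning a promise/future.
func fetch(id int, results chan<- string) {
	results <- fmt.Sprintf("result %d", id)
}

func main() {
	results := make(chan string)
	for i := 1; i <= 3; i++ {
		go fetch(i, results) // each call runs in its own goroutine
	}
	for i := 0; i < 3; i++ {
		fmt.Println(<-results) // receive blocks until some goroutine sends
	}
}
```

No await keywords, no “colored” functions: the concurrency lives in the control flow, and the channel is just a value you pass around.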
All the folks from the UNIX tradition really are/were. MIT and Bell Labs were just amazing.
I’ve read The Practice of Programming more times than I care to remember, so simple, so useful.
Did y’all read the email?
“embodies the elegance of simplicity - proving that”
“another landmark achievement”
“showcase your philosophy of powerful, minimal design”
That is one sloppy email. Man, Claude has gotten worse at writing.
I’m not sure Rob even realizes this, but the email is from some kind of automated agent: https://agentvillage.org/
So it’s not even an actual thank you from a human, I think. It’s random spam.
For a non-native speaker: what is sloppy about it? Genuinely curious.
“embodies the elegance of simplicity”
Corporate-speak that doesn’t mean anything. Also, if you are talking to the creator of a programming language, they already know that. That was the goal of the language.
“Plan 9 from bell labs, another landmark achievement”
The sentence is framed as if it’s a school essay where the teacher asked the question “describe the evolution of Unix and Linux in 300 words”.
“The sam and Acme editors which showcase your philosophy of powerful, minimal design”
Again, explaining to the author how good their software is. Also note how this sentence could have been a question in a school essay: “What are the design philosophies behind the sam and acme editors?”
The exports of Libya are numerous in amount. One thing they export is corn. Or, as the Indians call it, maize. Another famous Indian was Crazy Horse. In conclusion, Libya is a land of contrasts. Thank you.
It’s not so much about English as it is about writing patterns. Like others said, it has a “stilted college essay prompt” feel because that’s what instruct-finetuned LLMs are trained to do.
Another quirk of LLMs is that they overuse specific phrases, which stems from technical issues (training on their own output, training on other LLMs’ output, training on human SEO junk, artifacts of whole-word tokenization, inheriting style from its own earlier output as it writes the reply, just to start).
“Slop” is an overused term, but this is precisely what people in the LLM tinkerer/self-hosting community mean by it. It’s also what the “temperature” setting you may see in some UIs is supposed to combat, though that’s crude and ineffective if you ask me.
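To make that concrete, here’s a rough sketch of what temperature actually does (toy numbers, not from any particular model): it rescales the model’s logits before sampling, so low values sharpen the distribution toward the same few pet phrases and high values flatten it out.

```go
package main

import (
	"fmt"
	"math"
)

// softmaxWithTemperature turns raw logits into sampling probabilities.
// Lower temperature exaggerates the gap between tokens (more repetition),
// higher temperature evens it out (more variety, eventually nonsense).
func softmaxWithTemperature(logits []float64, temperature float64) []float64 {
	probs := make([]float64, len(logits))
	sum := 0.0
	for i, l := range logits {
		probs[i] = math.Exp(l / temperature)
		sum += probs[i]
	}
	for i := range probs {
		probs[i] /= sum
	}
	return probs
}

func main() {
	logits := []float64{2.0, 1.0, 0.5} // made-up scores for three candidate tokens
	fmt.Println(softmaxWithTemperature(logits, 0.2)) // sharp: top token dominates
	fmt.Println(softmaxWithTemperature(logits, 1.0)) // neutral
	fmt.Println(softmaxWithTemperature(logits, 2.0)) // flat: picks spread out
}
```

It only reshuffles probabilities at each step, which is why it can’t really remove a phrase the model is heavily biased toward.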
Anyway, if you stare at these LLMs long enough, you learn to see a lot of individual models’ signatures. Some of it is… hard to convey in words. But “embodies”, “landmark achievement”, and such just set off alarm bells in my head, specifically for ChatGPT/Claude. If you ask an LLM to write a story, “shivers down the spine” is another phrase so common it’s a meme, as are the specific names they tend to choose for characters.
If you ask an LLM to write in your native language, you’d run into similar issues, though the translation should soften them some. Hence when I use Chinese open weights models, I get them to “think” in Chinese and answer in English, and get a MUCH better result.
All this is quantifiable, by the way. Check out EQBench’s slop profiles for individual models:
https://eqbench.com/creative_writing_longform.html
https://eqbench.com/creative_writing.html
And its best guess at inbreeding “family trees” for models:

Wow, thank you for such an elaborate answer!
By the way, how do you make models “think” in Chinese? By explicitly asking them to? Or by writing the prompt in Chinese?
I thought this was from a fake account that isn’t actually his.
I don’t think this is a reliable resource. I’m not gonna do a deep dive cause I actually don’t care, but most articles don’t say “AI slop”. If it is, sorry for saying this, I just had a simple opinion.
Scroll through the homepage and all the article banner images are AI-generated.