That leaves out vital information, however. Certain types of brains (e.g. mammal brains) can derive an abstract understanding of relationships from reinforcement learning. An LLM trained on “letting go of a stone makes it fall to the ground” will not be able to predict what “letting go of a stick” will result in, unless it is trained on thousands of other non-stick objects also falling to the ground, in which case it will also tell you that letting go of a gas balloon will make it fall to the ground.
That’s the thing with our terminology: we love to anthropomorphize things. It wasn’t a big problem before, because most people had enough grasp on reality to understand that when a script prints :-) when the result is positive, or :-( otherwise, there is no actual mind behind it that can be happy or sad. But now the generator produces convincing enough sequences of words, so people have gone mad, and this cute terminology doesn’t work anymore.
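To make the point concrete, here is a minimal sketch of the kind of script described above (the function name and the notion of "result" are made up for illustration): it maps an outcome to an emoticon with no mind behind it.

```python
def report(result: int) -> str:
    # Purely mechanical mapping from outcome to emoticon;
    # nothing here is "happy" or "sad".
    return ":-)" if result >= 0 else ":-("

print(report(1))   # :-)
print(report(-1))  # :-(
```

The lookup is as dumb as it looks; the only difference with a text generator is that the mapping there is large enough to fool us.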
Bazinga