LLMs map social learning, not human individual intellect, not even en masse. The vectorspace magic is an aggregate ripped off from the social cues we use to innovate, a process that arises from individual invention and experience (a bit like a photo uploaded to FB or other social media: stripped of its metadata, it enters an aggregate algorithmic stupefaction engine).
That these LLMs in all their variety also show us stuff we are not aware of, especially in multi-modal avenues, or in not-as-serial-processed-as-we-thought ways, is no surprise; we just don't understand ourselves as a social learning species (we have not innovated a response to our self-domestication success stories). LLMs steal our social learning and some of us stupidly call that general intelligence; that is our error. Claims of copyright infringement pale into insignificance beside that "theft".
https://whyweshould.substack.com/p/social-learning-101
Logic is a hindsight; LLMs map/mix/mash our hindsights.
It’s still a lookup table.
How the internal machinations work doesn't matter when, for any given input, a predetermined output is produced. The only reason LLMs (or other AI systems) don't always give the same output in real-world use is that randomness ('heat', i.e. the sampling temperature) is deliberately introduced.
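A minimal sketch of that point, assuming nothing about any real model (the vocab and logits below are invented): at temperature zero a frozen model returns the same output for the same input every time, and any variation comes from deliberately sampling with a temperature above zero.

```python
import math, random

def sample_next(logits, vocab, temperature=0.0):
    # temperature == 0: greedy decoding; the output is fully determined
    # by the (frozen) weights that produced the logits.
    if temperature == 0.0:
        return vocab[logits.index(max(logits))]
    # temperature > 0: deliberately injected randomness.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(vocab, weights=weights, k=1)[0]

# Invented logits a frozen model might assign to the next word.
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.1]

print([sample_next(logits, vocab, 0.0) for _ in range(3)])  # same answer every time
print([sample_next(logits, vocab, 1.0) for _ in range(3)])  # varies run to run
```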
In a very real sense, the model is 'just' a compressed lookup table.
Of course, maybe our intelligence is also just a representation of a lookup table at its base (the classic Chinese Room problem - or the question of ‘how can we have free will if the universe is deterministic?’).
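One rough way to make the 'compressed lookup table' framing concrete (the model function below is a trivial stand-in, not a real LLM): a frozen model at temperature zero is a pure function of its input, so over any finite set of prompts it could be replaced by a literal table; the weights are just a vastly compressed way of storing that table over every possible input.

```python
# A frozen model at temperature 0 is a pure function, so over any finite
# set of inputs it can be tabulated. 'model' is a trivial stand-in here.

def model(prompt: str) -> str:
    # Deterministic stand-in for a frozen network.
    return prompt.upper()[::-1]

prompts = ["hello", "ok then", "no"]
table = {p: model(p) for p in prompts}   # the explicit lookup table

# Table and model agree exactly on everything we tabulated; the weights
# just store the full (astronomically large) table in compressed form.
assert all(table[p] == model(p) for p in prompts)
print(table)
```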
So perhaps we are all a particular and advanced version of stochastic parrots - or are we language models? I only understand a little of the technical stuff in the article. But my worry is that the language models are based on 'settings' chosen by the software developers - and, as with all electronic things, it can be a case of 'rubbish in, rubbish out'. And also of malign intervention.
As a philosophical question: can AI ever become more human than we are? If so, I despair for mankind - at least as we know it. But then, perhaps AI is a form of evolution!!
it's likely we do in fact use prediction to scan language acts when they come from others, as we separate the signal from the noise (Jordan Peterson narcissistically sells this as a unique skill of his for detecting substandard humans)
note that LLMs do that prediction in aggregate, across a mapped and generative matrix of possibilities, and that is their power; to members of a social learning species it looks like intelligence, because suddenly the map is bigger than the territory in our heads as we have lived it so far
but it is not human intelligence, neither individually nor in social-learning terms; it is merely a map of what we have done, yet it throws up surprises that we have not noticed, cannot notice
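A toy version of that 'map of what we have done', with an invented mini-corpus standing in for the social record: count which word has followed which, then walk the counts. Every step retraces something already done, yet the walk can produce sentences nobody in the corpus ever wrote.

```python
import random
from collections import defaultdict

# Tiny invented stand-in for the social record a model is trained on.
corpus = [
    "we learn by copying each other",
    "we learn by watching each other fail",
    "copying each other is how we innovate",
]

# Build the map: which words have followed which.
follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=8):
    # Every step retraces something already done; the path as a whole
    # may still be a sentence nobody in the corpus ever wrote.
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("we"))
```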
Thus model collapse and hallucinations could irrupt back from "AI" into our social learning systems, just as model collapse could occur in our own social learning systems before computers; we just called it metaphysics and paranoia, and perhaps wrongly focused on the individual's mental health to deal with it
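Model collapse has a simple toy version (purely illustrative numbers, not a claim about any real system): fit a distribution to data, let the next generation learn only from the fit's own output with rare cases underrepresented, refit, and repeat; the spread shrinks generation after generation.

```python
import random
import statistics

# Start from a "real" population of observations.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

for generation in range(6):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: mean={mu:+.2f}  spread={sigma:.2f}")
    # Each generation learns only from the previous generation's output.
    # The stand-in "model" underweights rare cases (it never emits anything
    # beyond two standard deviations), so the tails are lost and the
    # spread shrinks generation after generation.
    data = []
    while len(data) < 2000:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:
            data.append(x)
```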
Certainly the USA is suffering from social model collapse and paranoid hallucination as we speak; that it has elected a narcissist to do it is telling, and that the richest people are the most paranoid is also telling: they are rich and powerful but have no agency... poor me