
Discussion about this post

meika loofs samorzewski:

LLMs map social learning, not individual human intellect, not even en masse. The vector-space magic is an aggregate ripped off from the social cues we use to innovate, a process that arises from individual invention and experience (a bit like any photo uploaded to FB or other social media: stripped of its metadata, it enters an aggregate algorithmic stupefaction engine).

That these LLMs in all their variety also show us things we are not aware of, especially in multi-modal avenues, or things not as serially processed as we thought, is no surprise; we just don't understand ourselves as a social-learning species (we have not innovated a response to our self-domestication success stories). LLMs steal our social learning and some of us stupidly call that general intelligence; that is our error. Claims of copyright infringement pale into insignificance beside that "theft".

https://whyweshould.substack.com/p/social-learning-101

Logic is a hindsight, LLMs map/mix/mash our hindsights.

James Montgomerie:

It’s still a lookup table.

How the internal machinations work doesn't matter when, for any given input, a predetermined output is produced. The only reason LLMs (or other AI systems) don't always give the same output in real-world use is that randomness (the sampling 'temperature') is deliberately introduced.

In a very real sense, the model is ‘just’ compression of a lookup table.
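As a minimal sketch of that determinism point: with the temperature set to zero, decoding collapses to a pure argmax over the model's output scores, so the same input always yields the same token, and any variation comes only from the deliberately injected randomness. The function and variable names below are illustrative, not taken from any real LLM library.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a token index from a list of model logits.

    As temperature -> 0 the choice collapses to argmax: the same
    logits always yield the same token (the 'lookup table' behaviour).
    Temperature > 0 deliberately injects randomness into that lookup.
    """
    if temperature <= 0:
        # Deterministic: greedy decoding, same input -> same output.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    # Softmax over temperature-scaled logits (shifted by max for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw over token indices.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]
print(sample_next_token(logits, temperature=0))  # always 0 (greedy argmax)
```

With a fixed random seed even the temperature > 0 case is reproducible, which is the point: the 'randomness' is a controlled input, not something emerging from the model itself.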

Of course, maybe our intelligence is also just a representation of a lookup table at its base (the classic Chinese Room problem - or the question of ‘how can we have free will if the universe is deterministic?’).

2 more comments...
