If we assume an intelligence explosion, all robotics becomes no harder than teleoperated robotics. Humans are pretty good at using an arbitrary teleoperated robot after hours, not years, of practice. If AIs are not substantially better than humans at using an arbitrary teleoperated robot, then … that was not a true intelligence explosion! Right?
But you seem to have a different mental model, where the intelligence explosion creates AIs, which in turn do robot-algorithm R&D rather than just piloting the robots directly … or something like that?
Anyway, I’m not an expert on teleoperated robot hardware, but I once looked into it briefly and got the impression that such robots are far more capable, affordable, and easy to mass-manufacture than you’d think from studying the non-teleoperated robot industry, despite extremely low production volumes right now, for obvious reasons. For example, I once saw a video of a teleoperated robot cleaning a messy house (folding sheets, clearing the table, etc.). I don’t remember the details, but I’m pretty sure it was from at least ten years ago, maybe much more.
(In this comment, I am severely downplaying how crazy an intelligence explosion would be, for the sake of argument.)
Good post! I made a related (much less detailed) argument a while back that physics will still exist and will presumably pose SOME kind of limit.
"""A super-intelligence wouldn’t be a god. I would expect a super-intelligence to be better than humans at creating better super-intelligences. But physics still exists! To do most things, you need to move molecules around. And humans would still be needed to do that, at least at first (https://dynomight.net/smart/)
So here’s one plausible future:
1. Super-intelligent AI is invented.
2. At first, existing robots cannot replace humans for most tasks. No matter how brilliantly the AI is programmed, there simply aren’t enough robots, and the hardware isn’t good enough.
3. In order to make better robots, lots of research is needed. Humans are needed to move molecules around to build factories and to do that research.
4. So there’s a feedback loop between more/better research, robotics, energy, factories, and hardware to run the AI on.
5. Gradually that loop goes faster and faster.
6. Until one day the loop can continue without the need for humans.
That’s still rather terrifying. But it seems likely that there’s a substantial delay between step 1 and step 6: factories and power plants take years to build (for humans). So maybe the best initial mental model is a “multiplier on economic growth,” like all the economists have been insisting all along."""
I got a fair amount of pushback to the effect that AI might exploit some unknown vector and leap to powerful robotics quickly. I guess you can’t totally rule out unknown unknowns, but I still think the scenarios you lay out are the most likely.
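To make that “multiplier on economic growth” framing concrete, here’s a minimal toy sketch in Python of the feedback loop in steps 4–6. Every number in it (baseline growth rate, AI multiplier, autonomy threshold) is an arbitrary illustrative assumption, not a claim:

    # Toy model: AI as a multiplier on the growth rate of physical capacity
    # (robots, factories, energy). All parameters are made-up illustrations.
    human_growth = 0.03        # assumed baseline annual growth of capacity
    ai_multiplier = 3.0        # assumed AI speed-up of research and planning
    capacity = 1.0             # physical capacity, normalized to 1 at step 1
    autonomy_threshold = 20.0  # assumed capacity at which humans are no longer needed

    years = 0
    while capacity < autonomy_threshold:
        capacity *= 1 + human_growth * ai_multiplier  # steps 4-5: the loop compounds
        years += 1
    print(f"Step 6 reached after ~{years} years")  # ~35 years with these numbers

With these made-up numbers, the loop takes roughly 35 years to get from step 1 to step 6; a bigger multiplier shortens the delay but doesn’t eliminate it, which is the whole point of the growth-multiplier framing.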