Ask HN: What is the (steelman) argument for AI deceleration?
2 points by takinola | 15 comments on Hacker News.
There seem to be a lot of smart tech people arguing that AI is dangerous to humanity. However, I have yet to see a cogent description of the risk. Every time the argument is made, it runs something like:

Step #1: Create AGI
Step #2: ???
Step #3: Launch nukes, enslave people, paperclips, etc.

The only real risk I can come up with is the economic displacement from the loss of entire categories of jobs. All other risks seem easily mitigated by pulling the plug. What am I missing?
