I posted the other day about GPT-3, the current cutting edge of artificial intelligence, and its ability to write like a human. But since running across the following article, the idea of an A.I. overhang has been stuck in my head: what if artificial intelligences could get 100-1,000x more competent in a matter of only months?
The article puts it like this: An overhang is when you have had the ability to build transformative AI for quite some time, but you haven’t because no-one’s realised it’s possible. Then someone does and surprise! It’s a lot more capable than everyone expected.
I am worried we’re in an overhang right now. I think we right now have the ability to build an orders-of-magnitude more powerful system than we already have, and I think GPT-3 is the trigger for 100x larger projects at Google, Facebook and the like, with timelines measured in months.
There are numbers in the post, but the argument goes that a 100x more effective A.I. will cost in the range of only $1bn, which is a relatively small fraction of Big Tech R&D.
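The post’s own numbers aren’t reproduced here, but a hypothetical back-of-envelope makes the scale of the claim concrete. The figures below are rough assumptions for illustration, not the article’s actual calculation: widely cited estimates put GPT-3’s training compute bill on the order of $5-10M, and a single Big Tech firm’s annual R&D budget is in the tens of billions.

```python
# Back-of-envelope sketch -- illustrative assumptions, not the article's figures.
gpt3_training_cost = 5e6       # assumed: GPT-3 compute bill, widely estimated at single-digit $ millions
scale_factor = 100             # the hypothetical "100x larger project"
naive_cost = gpt3_training_cost * scale_factor   # assume cost scales roughly linearly with scale

big_tech_rnd_budget = 20e9     # assumed: one large tech firm's annual R&D spend, tens of $bn

print(f"Naive 100x training cost: ~${naive_cost / 1e9:.1f}bn")                    # ~$0.5bn
print(f"Share of one year's R&D budget: {naive_cost / big_tech_rnd_budget:.1%}")  # 2.5%
```

Even if the linear-scaling assumption is off by a few x, the result stays in the low single-digit percent range of one firm’s yearly R&D spend, which is the point of the “relatively small fraction” claim.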
Intel’s expected 2020 revenue is $73bn. What if they could train a $1bn A.I. to design computer chips that are 100x faster per watt-dollar? (And then use those chips to train an even better A.I. …)
At what point do self-driving cars effectively become solved… and what if it was in only 6 months? All the control couplings and sensors are there; we’re just waiting for the artificial brain.
British call centres employ 1.3 million people, 4% of the UK workforce. What if they’re 99% out of work by 2022?
What if text/voice/video synthesis and persuasion become a solved game, such that anyone can be scammed or hacked over email or phone or Zoom by off-the-shelf software in the hands of anyone who buys it, robocalling a thousand people per hour? What if a covert, 95% accurate lie detector can run on a smartphone with a commodity camera and commodity mic, ship in 6 months, and cost a dollar?
What’s interesting/startling/threatening about the idea of an overhang is that the changes come from every direction and there’s no time to adjust. The logic means that, if true, it’s not preventable. Sure, new professions will emerge, and new creative opportunities, and new social norms. But in the meantime?