Thank you for meeting with PauseAI. For those who would like to help us survive, we have a Discord to coordinate action:
https://discord.com/invite/nWNGK2mB
I love technology, I really do. But when you see an iceberg coming, you steer away from it.
Hi Bradley, thanks for coming to our protest and chatting; I appreciate the coverage! (I'm the guy with the long blonde hair holding the traffic light placard.) I just couldn't resist quickly butting in with some caveats about the Louis interview. One quick correction: you call him the leader of the US branch of PauseAI, but I believe that's in fact Holly Elmore, not Louis. Beyond that, Louis is very smart and respectable, and represents a substantial strand of PauseAI worries, but it is worth noting that:
1. Many people in the movement have additional worries, or different main ones, such as job security, surveillance, the creation of possibly conscious beings for our own use, lethal autonomous weapons, and more. It is basically a coalitional movement; the only requirement is thinking that we ought to encourage policies and international resolutions focused on slowing down or stopping frontier model development, rather than merely trying to regulate a moving target when we haven't even solved the technical challenges needed to properly enforce such regulations.
2. Louis is probably more pessimistic, both on timelines and on chances of survival, than most people in the movement; certainly more than me. There were attempted polls on this, though with poor participation. For what it's worth, both gave more moderate estimates, closer to expert surveys and my own opinion: around the mid 21st century for timelines, and low double digits for odds of extinction. The combination of these is still enough to make it your most likely cause of death: not your grandchildren's or children's, but yours. And I think more than enough to justify treating this development as the public's business.
Again, thanks for coming and writing this up! I would be curious to hear your own views as well at some point.
-Devin
Hey Devin,
I am so glad to hear from you and grateful for your corrections. I have tweaked the intro a bit to reflect the information you gave me. Please let me know if there is anything else I can do to represent you all fairly and honestly. It is very important to me.
Also, I am sincerely sorry that I wasn't able to incorporate some of the broader group's comments. Weaving together five interviews was a task I wouldn't have been able to complete in the necessary timeframe. If you all have another medium, format, or idea through which I could delve into your views more, I'd be very open to hearing it.
I will definitely be writing on AI more this year, so stay tuned!
No worries, I do think it's very fair! I just thought the added context might help. As for other media/formats, there are many to choose from. I'm personally fond of this ongoing explainer, launched recently, which does a good job of being accurate and fair while remaining easy to read for a beginner:
https://aisafety.dance/
In style, I've seen it compared to an earlier famous piece, which is good but, in my opinion, less nuanced and more out of date at this point:
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
There are also lots of relevant videos from these two YouTubers:
https://www.youtube.com/@RobertMilesAI
https://www.youtube.com/@RationalAnimations
Many of the arguments here have been most influenced by Eliezer Yudkowsky, whose views are more similar to Louis'. This is maybe his most influential recent piece on why he is so pessimistic:
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
while this is a well-received counter-post from the somewhat more optimistic former OpenAI safety lead (and current member of the UK government's AI taskforce) Paul Christiano:
https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer
These are mostly focused on extinction risks, which is where I am best read, but there is literature available on all of these subjects. The most influential counter-voices on these risks are probably Melanie Mitchell, who has a book on this and whom I highly respect as a complexity researcher, and Yann LeCun, whose perspective mostly remains puzzling to me. I hope these help if you decide to engage with the perspective further in the future, and feel free to reach out for more resources if you're interested.
I almost forgot Holden Karnofsky's Most Important Century series, which was very influential on how I look at the million-mile-high picture of this issue and our uncertainties/data about it:
https://www.cold-takes.com/most-important-century/