I Interviewed The Protestors At Microsoft In NYC
AI Safety, the Alignment Problem, and the Effort to Pause the Race To Artificial General Intelligence
A computer scientist, a caregiver, and an economist walk into a Microsoft store.
This isn’t a setup for a punchline. It’s a description of one half of a protest that actually took place last Monday. Five members of the Pause AI movement gathered in front of Microsoft’s location at 11 Times Square to alert the public that artificial intelligence is an existential threat to humanity’s survival. I was curious about who such an event might attract, so I channeled my inner journalist and showed up to do some interviews.
Although I chatted with all five members, I made an editorial decision to include only one. His name is Louis Berman, a computer scientist and a leader in the U.S. chapter of the Pause AI org. What follows is a summarized transcript, with parts cut out and cleaned up for clarity. However, I am confident that I remained faithful to the gist of the discussion. For those of you interested, the full, raw audio of all the interviews can be found here. I have also sprinkled in some hyperlinks to give context for those who might not be up to snuff on AI parlance.
Please note that the views being expressed by Louis and the group are not my own. Even more importantly, Louis’ views are not necessarily representative of the entire group either. See the comments section for an explanation of this point. Since this is my first stab at a more “journalistic” piece, I wanted to simply represent their point of view and the situation without much commentary. Hearing different perspectives from unexpected sources always challenges me to think differently, so I hope it does the same for you.
If you are a recent subscriber, welcome! I encourage you to read any of my previous work to get a feel for what an average newsletter looks like. If you like this type of material, please let me know and I will be sure to do some more.
Thanks for reading — and as always, I appreciate your attention!
(Edit: I have slightly altered the introduction as a result of corrections made by the Pause AI group in the comments section. Please see their statements down below and feel free to leave your own!)
BA: Alright, well, Louis, first of all, do I have your permission to record? Is that alright?
Louis: Yes, you do.
BA: Brilliant. Louis, what brings you here today?
Louis: Well… AI. We're very close to artificial general intelligence (AGI). AGI is AI that is literally equivalent to you and me. But going from artificial general intelligence to artificial superintelligence -- which can be described as anything twice as smart as you, a thousand times smarter than you, or (as we think it will be) millions of times smarter than you and me -- is very scary. Any time that intelligence has met unintelligence in our history, it's not worked out well for the unintelligent.
We're probably less than a year, and some people say as little as six months, away from basically creating what may turn out to be the successor race for humanity. I don't know how to solve that. I'm a computer scientist. I've written AI for 20 years. And it's getting too competent, too powerful, too fast. And no one has any really great ideas on how to control it. That's really why we're out here protesting.
BA: Go on!
Louis: I'm definitely not the sort of person who would show up to a thing like this normally. But it's hard getting hearts and minds to think about what we feel is a very reasonable proposal: Slow the stuff down. Put caps on the amount of computation that can be used! Don't spend just $1 out of every hundred on safety.
OpenAI has famously pledged to spend $1 out of every five on safety. But they're not even vaguely close. Anecdotally, it's less than $1 out of a hundred. And certainly industry-wide, that is the truth.
BA: You mentioned that it's kind of difficult to get people to be concerned about this issue?
Louis: Yes. So let's say I made a statement: “Your kid is going to be murdered on Tuesday.” It’s very specific, very alarming. You could decide whether that's true or false. You could then decide whether the timeline is true or false and you could have a reaction to that. Unfortunately, we tend to be scientists and more logical people. I don't have a timeline for you.
I just know it is really short and it's shortening. This used to be a thing that only people like me really knew about at all. So if you think about it, the risk was low, low, low, low, low, low, low, low. Then boom -- like that we hit the asymptote. Basically, it is climbing up to being more and more risky as time goes on.
My problem is that asymptote, right? When it's growing relatively slowly, as a society, I think we can cope with it. But, you know, it's accelerating at such a pace.
BA: So you're not averse to all types of technology. Do you feel like there are good or practical uses for AI that don't hit the type of scale or threshold you're worried about?
Louis: Yes. The idea is that AI is fine up until the point where it becomes agentic -- in other words, where it has its own agendas, its own desires, and those desires are not aligned with humanity. So, for instance, you know, if you had an AI that was agentic -- it did things, it had desires -- and it was a thousand times smarter than us, but it was completely aligned, that would help humanity. If it said, “I want to do the best thing possible for humanity,” that'd be great. That would help humanity.
BA: So it's particularly the alignment issue that you're worried about more than even the actual general intelligence?
Louis: I would say alignment is exceptionally hard and unlikely. So as an example, I think it is very unlikely that these agents, these AGIs, will be aligned with human values by default.
You know, if you're doing something you've never done before and you want to do it well, you're going to try, you're going to fail, you're going to improve, you're going to learn. The problem is, let us say we create an artificial general intelligence. It's not aligned with us. And in a very short window, it can self-improve to exert its own will, do whatever it wants to do. You know, and I always love to talk about ants versus humans, although I think the better comparison is the scale from bacteria to humans.
We don't hate ants. We don't vaguely hate ants. I would never step on an anthill on purpose. But you know what? If that anthill is smack dab in the middle of my lot where I'm putting up a building, the anthill is going to go away. Those ants are going to die. And that is what we think about.
BA: Who are you mostly having these types of conversations with?
Louis: So I'm leading a project called Government Action Kit, which intends, after the election is done, to reach out to every U.S. politician at the federal level on this topic. We're going to be handing out a book on this stuff. Unfortunately, it's not a sound-bite thing.
BA: So you primarily see government regulation then as the solution?
Louis: Yes. I don't believe there's a technical solution. As a matter of fact, I am literally convinced there is not a technical solution and you can never shut them all off.
And then, we have this notion of goals, right? Things you want to achieve. Nick Bostrom, who most famously wrote on superintelligence -- his book was called Superintelligence -- asked, what if you had an AGI that wanted to collect paperclips? How would it make paperclips? Now you think, “that's stupid.” But it's not. The goals that it will have will not necessarily be anything that you think of as important. We're more used to instrumental goals.
I'd love to be proved wrong, but I don't think it is likely to happen. And unfortunately, no one has even candidate solutions. And then, let's do the thought experiment… let's say I have come up with a perfect fix. It's embodied in code, and it's popular shareware, so lots of people adopt it. Well, it turns out that you need pretty near perfect adoption, right? I'm not going to say that you need perfect adoption in every case, but enough to get a herd-immunity sort of thing.
If you have a thousand different projects working on it, and fewer than maybe five of those thousand projects are unsafe, maybe those few won't advance to that level. But it turns out there are thousands of projects. And even if some are likely to be provably safe in a technical sense -- or at least probably safe, if not provably so -- you still have all these other projects that are dangerous.
We used to say there are things we'd never, ever, ever, ever do. Like, for instance, we would never connect it to the Internet, surely. We'd figure out how to put it in a box, or keep our info away from others. And there are stories, and whole write-ups, about these things. But it turns out that the first thing we did was connect it to the Internet.
There are things that we just assume or we think as baselines that are not the true baseline.
There are things that even we thought sensibly people wouldn't do, people are doing.
BA: Why do you feel like people are so fine with letting that line consistently move? For instance, your family and your friends, do they all agree with you?
Louis: No. No. Actually, I would suggest the exact opposite. My wife, for instance, you know, a very noted set and costume designer. A professor at a university. She just can't connect to it. She doesn't disbelieve it…. But, you know.
BA: What's the common thread between the people gathered, whose eyes kind of are open to the potential of this?
Louis: I think it's ultimately about some combination of forming a community around the idea that this is unsafe. So I'd be lying if I said there wasn't a bit of tribalism around it. I happen to be in the intellectual tribe that believes this. And I hate the word believe, by the way. It is... I am very much a scientist -- in my case, a computer scientist.
And in terms of our being here, it is heartening to be part of a community. A very small community. It's being able to talk about these things with others. There's an incredible amount of scientific discussion about this, but since it's a nuanced discussion, having a community of people you can have that nuanced discussion with has been very, very helpful. I am well convinced of this, through my own direct experience, through the literature, and just, quite frankly, common sense. It makes sense to be worried about this. And I think other people are waking up to it. I am not the sort of person to go on a protest.
BA: But that shows how concerned you are.
Louis: I am that concerned. And I'm spending thousands of dollars of my own money on this, increasingly more. I have a viable business that I am working on, yet I am devoting more and more time to this. It's very, very serious. And if you ask me, and it's a little schizophrenic, I'll use the word schizophrenic. Even I, who believe this completely, can only grasp it and feel it in little bites. It's absurdly scary. You're talking about the potential extinction of the entire human race, not just well within our lifetimes -- I mean within years.
Will that happen? I'm a scientist. I can't tell you 100%. But I am very, very, very concerned about it. So, as a logical proposition, I should quit my job. I should... You know, I should refocus my life.
But quite frankly, it's so fucking scary.
BA: Great. Louis, I've got two more questions before I talk to some of your peers as well. Let's just say Satya Nadella himself shows up today. He's coming to the office —
Louis: — who I've met on several occasions —
BA: — there you go. You got a good 30-second, one-minute, pitch or argument? What would you say to him?
Louis: I would say nothing to Satya. He would never, ever, ever understand it and never grasp it. I've been on stage with him. I've met him. And I've worked much more with Scott Guthrie who ran the Azure cloud business. But I don't believe that's who we're going to influence. I don't even believe I want to influence those people. I want governments to influence those people.
BA: All right. Last question: Let's just say we can skip forward one year from today. What progress do you think will be made? Any predictions?
Louis: Yes. I think there's a greater than 75-80% chance that you're going to have artificial general intelligence. Then all bets are off…
Ominous, right?
Thanks again for reading! I would be so grateful if you subscribed (if you have not already) and took time to fill out the final poll question below. If there is something you’d like to say, feel free to drop it in the comments or reach out to me here on Substack or Linkedin.
I appreciate your readership and look forward to hearing back from you!
Thank you for meeting with PauseAI. For those who would like to help us survive, we have a Discord to coordinate action:
https://discord.com/invite/nWNGK2mB
I love technology, I really do. But when you see an iceberg coming, you steer away from it.
Hi Bradley, thanks for coming to our protest and chatting -- I appreciate the coverage! (I'm the guy with the long blonde hair holding the traffic light placard.) I just couldn't resist quickly butting in with some caveats about the Louis interview. For one quick thing: you call him the leader of the US branch of Pause AI, but I believe that's in fact Holly Elmore, not Louis. Additionally, Louis is very smart and respectable, and represents a substantial strand of PauseAI worries, but it is worth noting that:
1. Many people in the movement have additional worries or different main ones, such as job security, surveillance, creating possibly conscious beings for the purpose of our own use, lethal autonomous weapons, and more. It is basically a coalitional movement, with the only requirement being the view that we ought to encourage policies and international resolutions focused on slowing down or stopping frontier model development, rather than merely trying to regulate a moving target for which we haven't even solved the requisite technical challenges needed to properly enforce such regulations.
2. Louis is probably more pessimistic, both on timelines and on chances of survival, than most people in the movement. More than me, certainly. There were attempted polls on this, but with poor participation. For what it's worth, both gave more moderate estimates closer to expert surveys and my own opinion - around the mid-21st century for timelines, and low double digits for odds of extinction. The combination of these is still enough to make it your, not your grandchildren's or children's but your, most likely cause of death. And I think that is more than enough to justify treating this development as the public's business.
Again, thanks for coming and writing this up! I would be curious to hear your own views as well at some point.
-Devin