Part I: The Number Line
The official estimate that the United States Department of Transportation uses for the value of a statistical life when deciding how to trade off between safety and cost is $9.4 million. GiveWell estimates that, by distributing insecticide-treated bednets in developing countries to fight mosquito-borne malaria, a life can be saved for $3,500. Those translate to, respectively, 0.1 and 285 lives saved per million USD invested, making the Against Malaria Foundation a much better investment than American automobile safety initiatives.
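The arithmetic behind those two figures is just division; a quick sanity check, using only the dollar amounts quoted above:

```python
# Cost per life saved, in USD (the figures quoted above).
dot_cost_per_life = 9_400_000  # US DOT value of a statistical life
amf_cost_per_life = 3_500      # GiveWell estimate for bednet distribution

# Lives saved per million dollars invested.
dot_lives_per_million = 1_000_000 / dot_cost_per_life  # ~0.106
amf_lives_per_million = 1_000_000 / amf_cost_per_life  # ~285.7

print(f"DOT: {dot_lives_per_million:.1f} lives per $1M")
print(f"AMF: {amf_lives_per_million:.0f} lives per $1M")
```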
For the past few years the cause du jour of the effective altruism movement has been existential risk reduction. The argument comes in a variety of forms, but the most pervasive goes:
- We will soon be able to create more and more powerful AIs
- Currently the AIs mostly just do what we tell them to do, but even that can be worrying
- Imagine that some government writes a Homeland Defense AI that has a connection to all of the country’s missiles and drones. It intelligently figures out the most destructive attack it could conduct on an enemy regime, and how to coordinate all of the available weapons systems. It now takes only one click of a button for a rogue actor in the state to launch a devastating attack.
- Eventually AIs will become as intelligent as humans, and then more intelligent.
- AIs will then be able to improve themselves better than we can improve them.
- Because AIs can program a lot faster than humans, once they’re able to rewrite their own source code they will do so quite quickly.
- The self-improving AIs will in fact improve at a rate that gets faster and faster as they get smarter and smarter
- Thus in a very short period of time AIs will go from human-level intelligence to something much much more powerful–a superintelligence.
- Just as we are much more powerful than apes, this AI will be much more powerful than us. If we don’t work really hard to keep it under our control and make sure it does what we want it to do, it might quickly take over the world.
- It might just destroy the world in a runaway attempt to do more and more of whatever random thing it wants to do.
- If we try to give it a utility function but mess up, it might interpret what we give it too literally, and end up turning the whole universe into something that fits what we said but not what we meant (canonically, tiling the universe with paperclips), à la Midas.
- If we get really unlucky, the AI might actively create a ton of suffering in its quest to make a bunch of paperclips (or because we forgot a negative sign somewhere in its utility function).
- Thus we should be really careful about developing powerful AI.
- We should make progress on AI control work–research into how to make sure AIs do exactly what you tell them to
- We should make sure that the AIs we create are tool AIs (which accomplish the tasks we give them) instead of agent AIs (which choose what to do on their own)
- We should figure out what utility function the AI should have, and find ways to specify it in ways a Turing machine can understand
- We should make sure to do all of this before we actually get a superintelligent AI, because once AIs start self-improving we might not have much time before one exerts its will on the world
- AI superintelligence is reasonably likely to come in the next century, and so this is both important and urgent
I’m less convinced than many that x-risk is clearly the most important cause right now, but that’s not what this post is about. This post is full of contrarian arguments against EA-consensus views on AI x-risk. I’m not particularly confident in any of them, and state them without the reservation they deserve because I want to keep the frequency of “maybe” below 50% of all words. But together they’ve made me uncertain about how much I agree with the conventional wisdom.
My favorite food group is meat. Beef, in particular. I’m not sure why, exactly; maybe it’s because I grew up eating a lot of it. I like pretty much all forms–steak, burgers, hot dogs…
When I was young I read some article online about the environmental impact of factory farms. It made me feel shitty–I ate a lot of meat, and thought global warming was really bad. So I did what anyone would do in that situation. I found some corner of my brain I didn’t care about very much, stuffed the article within those idle neurons, and disconnected it from its neighboring lobes. Lo and behold, I no longer felt shitty.
Occasionally that dormant area of my brain would have to take on a larger burden, like when I read this. But it wasn’t so bad. And one thing was clear to me–those fucking vegan activists were pretty incoherent. The combination of anti-science naturopathy, backwards-looking desires to keep nature preserved in whatever pristine state it held five years before their own lifetimes, focus on a death which should by their own account be mercy, and condescending certainty made them easy to dismiss.
I went to campaign for Clinton in suburban Pennsylvania a few weeks before the election–an upper-middle-class swing district with Trump signs on one lawn and Clinton signs on the next. The goal was for me to get out the vote with likely Clinton voters who were kinda flaky and might or might not actually make it to the polls. It was clear pretty early on, though, that that didn’t even remotely describe the people whose doors I knocked on. There were a few threads that ran through the responses I got.
The first, and most prominent, was “you’re the 5th person who’s knocked on my door this week, fuck off”. Ok, seems pretty reasonable.
The second required a bit more reading between the lines. But it was clear, from what people said and how they said it, that no one wanted to tell me they were voting for Donald Trump–and not just because I was with the Clinton campaign. It was also clear that some of them were going to. One woman saw I was with the Clinton campaign, said “I’m not voting”, and slammed the door shut. Another gave a long, tortured response about how she didn’t know who she was voting for; after a follow-up question she replied “I guess I’m not sure we’re voting for Clinton”. The men, in contrast, all seemed to be voting for Clinton. Which is not to say men in general voted for Clinton–they didn’t–just that the Clinton campaign’s model of the suburban upper-middle-class female vote seemed to have more misses than its model of the upper-middle-class male vote. I’m guessing many of their male counterparts were already thought to be likely Trump supporters.