Big Numbers

Part I: The Number Line


The official estimate that the United States Department of Transportation uses for the value of a life, when deciding how to trade off between safety and cost, is $9.4 million. GiveWell estimates that, by distributing insecticide-treated bednets in developing countries to fight mosquito-borne malaria, a life can be saved for $3,500. Those translate to, respectively, about 0.1 and 285 lives saved per million USD invested, making the Against Malaria Foundation a much better investment than American automobile safety initiatives.
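
As a sanity check on those figures, the conversion is just division; here's a minimal sketch (the two dollar amounts are the ones quoted above):

```python
def lives_per_million_usd(cost_per_life_usd):
    # Lives saved per $1M = $1,000,000 / (cost to save one life)
    return 1_000_000 / cost_per_life_usd

print(lives_per_million_usd(9_400_000))  # US DOT value of a life: ~0.11 lives/$1M
print(lives_per_million_usd(3_500))      # GiveWell's AMF estimate: ~285.7 lives/$1M
```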

If lives saved per dollar is what you care about, well then boy does Animal Charity Evaluators have the charity for you. Each year about 9 billion land animals are killed in factory farms in the United States alone, the vast majority of them for food. These animals lead really awful lives. ACE estimates that, if you donate to the best factory-farmed animal charities, for every dollar donated you’ll save about 5 animals from a life of misery. That estimate is highly sensitive to your assumptions–how you interpret the studies that have been done on leafleting and online ads, how you weigh the relative suffering of battery-caged and cage-free hens, what odds you assign to different species being sentient, and how effective you think corporate campaigns are. You could plausibly come up with numbers much bigger or much smaller than 5,000,000 lives/million USD, but it’s hard to come up with estimates below the 285 lives/million USD of AMF.

The cruel fate of big numbers is that there are always bigger ones. The number of humans alive now is roughly 7 billion, and most of us are living somewhat comfortable lives; the number of factory-farmed land animals is about 24 billion, a large fraction of which are living in deplorable conditions. So I guess it’s not shocking that you can save more lives per dollar if you spend it on factory farms than on humans. But there are about one hundred billion wild birds, five hundred billion wild mammals, ten trillion wild fish, and one million trillion wild insects. Even if you don’t care much about insects, and even if you don’t know how to go about helping wild trout–do you really think that your possible impact on their lives, if you devoted yourself to it, is less than 0.2% of your possible impact on factory-farmed animals?
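
For a sense of where a figure like 0.2% comes from, here's a minimal sketch, under the crude assumption that possible impact scales with the number of animals in each population (the counts are the rough ones above):

```python
# Rough population counts from the paragraph above.
factory_farmed_land_animals = 24e9
wild_birds   = 100e9
wild_mammals = 500e9
wild_fish    = 10e12
wild_insects = 1e18

# Factory-farmed animals as a fraction of each wild population:
print(factory_farmed_land_animals / wild_fish)     # 0.0024 -- about 0.2%
print(factory_farmed_land_animals / wild_insects)  # 2.4e-08 -- vanishingly small
```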


It might seem like, if you’re counting lives, nothing can trump the enormous number of wild animals; one million trillion is a big number. Do there exist any bigger numbers? Well, the earth might be around for another billion years, and there could easily be ten billion people alive at once; at a century or so per life, that already gets us to 10^17 future lives, which is close to the number of insects alive. Now, suppose that we find a way to feed a lot more than 10 billion people, colonize the solar system, or find a way to upload ourselves–well, now we’re talking. Nick Bostrom, the director of the Future of Humanity Institute, estimates that we might reach as many as 10^54 future beings. Now, what are the odds we can influence, say, 0.001% of them? And what are the odds we get 10^54 beings–0.00001%? Even if you answer that we have only a 0.0001% chance of influencing 0.001% of them, well, 0.001% * 0.0001% * 10^54 * 0.00001% = 10^36, which is still much more than our 10^18 wild insects. All of those resources you wanted to spend on safe roads, or malaria bednets, or vegan leaflets, or whatever it is you’d do to help wild animals: they all pale in comparison to the Machine Intelligence Research Institute reducing the chances that a malignant AI kills us all by even 0.00001%. Thus was born the AI existential risk (x-risk) movement. Think that we’re actually more at risk from bioterrorism, or nuclear weapons, than AIs? Think that modeling and generalized research and movement building will do more to reduce x-risk than technical AI work? Fine, look at the Future of Humanity Institute, or the Future of Life Institute. Express what skepticism you want; with the rapid advances in artificial intelligence over the last fifty years, the burden is on you to show that the odds that an AI will fundamentally reshape–or possibly end–the world as we know it are less than 10^-43. As Eliezer Yudkowsky would say, shut up and multiply.
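
Writing that multiplication out explicitly, with the same illustrative percentages as above:

```python
fraction_influenced = 0.001 / 100    # we influence 0.001% of future beings
p_that_influence    = 0.0001 / 100   # 0.0001% chance we pull that off
p_reach_10_54       = 0.00001 / 100  # 0.00001% chance there are 10^54 beings at all
future_beings       = 1e54

expected_impact = fraction_influenced * p_that_influence * p_reach_10_54 * future_beings
print(expected_impact)  # ≈ 1e36 -- still 10^18 times the wild-insect count
```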

These are some of the biggest numbers you can come up with. What’s the smallest number? There is no official list ranking all causes by effectiveness—to the extent there are any lists, they’re about the best causes. But unofficially, when people discuss the worst charities, one name comes up more than any other.


Part II: Through Which Looking Glass


10^54 is a big number, but it does not belong solely to existential risk. It comes not from the number of people we could have so much as from the number of beings; whether they’re biological or digital, the upper reaches of the universe’s capacity for life come from things that probably won’t look all that human. So–what’s more important than how we treat nonhuman beings in the future? Any marginal improvement we can make to how people see animals will be magnified in a way that improvements to humans’ well-being can’t. Nonhumans are the true holders of 10^54; humans, optimistically, only have 10^34. Tiny! And with animals, the message is clear: animals matter. And in a world where billions of animals are tortured each year for food, it’s a message that has to be given. If you’re advocating for people–what’s your message, that people matter?

Sure, the future might be in bugs or simulations or who knows what, but one thing is clear: the keys to the future lie, for now, with humans. We are the dominant species in an unprecedented way. Let’s say we even grant that future insects are what matter–a nonobvious claim–how do you expect to help them? By helping current grasshoppers? It’s current humans that control future nonhumans’ fate. Helping animals right now is stuck in the land of one million trillion, which is nothing compared even to future humans’ 10^34. And in addition to being a totally ironic way to help future insects, when it comes to the future of the world mosquito nets just don’t matter that much. What matters is some combination of the direction that humans decide to take as a global community, and our capability to get there. And when it comes to influence and technology, there’s no need to look abroad; America is the world’s de facto leader in both areas. You scoffed before at the Department of Transportation spending nine million dollars to save a life, but nothing is more important than the stability of the American and European unions (and, of course, China). We’re rich countries; if spending loads of money to upgrade our infrastructure is what it takes to keep us moving forward, we can afford it. Don’t get bogged down in the weeds of today’s blemishes upon the world; what matters is the integrity of the countries that will lead us into the future.

You can talk all you want about future bugs and future robots and future people and future politics and future decisions, but frankly those are all going to be second-order effects of our actions today. The world has re-invented itself over the last hundred years and it won’t be the last time; our attempts to bend the future to our will are fleeting. And anyway, future us will be smarter and better informed; we should leave the hard decisions about what their society should look like to them, just as we wouldn’t want the ghost of Napoleon Bonaparte deciding today’s health care policy. But there is one decision that we, in the current day, have to make, and it will be irrevocable. For perhaps the first time as a species we are building technology that could wipe us all out; and if it does, there will be no future us. It’s already pretty clear that a nuclear war could do serious damage that we might never recover from, and it’s easy to see biological warfare doing the same. But we’re also on the verge of creating artificial intelligences that rival our own. If we are already capable of decimating ourselves, imagine what could be done by a smarter, possibly less moral entity that’s hooked up to every computer, smartphone, drone, electrical grid, and governmental database in the world. For possibly the first time in our history as a civilization, we have a cause that directly controls the fate of 10^54 beings, and that cause is preventing existential risk.

Yes, existential risk owns a claim on 10^54, but how much of a claim? Right now it seems like the odds of existential catastrophe in the next few hundred years are neither very close to 0 nor very close to 1, and that’s likely to be true no matter what you, or I, or the EA movement as a whole, do. So it seems like the impact of x-risk interventions is going to be on the order of 10% * 10^54 * P, where P is the probability that we get to 10^54 given that we survive the next few hundred years. But what are the odds, really, that we come close to the theoretical upper bound of how good the universe could become? That we travel to the outer reaches of the universe, that we create the happiest thing possible–the level of societal commitment, technology, philosophy, and physics necessary seems rather daunting. It seems like P is actually pretty small: even under a fairly optimistic set of assumptions, we’re pretty likely to not quite figure out the best possible future and end up only achieving something like 10^48 instead–one million times worse! If P is small, then that factor of 1.1 we get out of x-risk reduction no longer looks so dominant compared to the factor of 1,000,000 that we get out of taking 10^48 to 10^54. So how would we go about doing that? It’s not totally clear, and that’s certainly a strike against the Campaign for 10^54, but for places to start: right now most people either don’t think at all about what we could eventually achieve or actively think that now is more important than the future, very little technological work is being done with 10^54 as the goal, and the influential communities in the world aren’t engaging with it at all. Sure, it seems daunting, but even if you think you’re only raising the odds of 10^54 (instead of a much smaller number like 10^48) by 0.1%, if P is small then this could overwhelm x-risk reduction. (Remember–P is the probability that we hit the absolute maximum; this is hard!)
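
Here's a minimal sketch of that comparison; the particular values of P, the survival odds, and the 0.1% shift are illustrative placeholders, not claims:

```python
# A minimal sketch of the comparison above. All the specific values here
# (P, the survival odds, the 0.1% shift) are illustrative placeholders.
U_max, U_ok = 1e54, 1e48     # best possible future vs. a merely very good one

def expected_value(p_survive, p_max):
    # Survive, then either hit the maximum or land in the merely-good future.
    return p_survive * (p_max * U_max + (1 - p_max) * U_ok)

baseline   = expected_value(0.5, 1e-6)           # P = 10^-6: hitting the max is hard
x_risk     = expected_value(0.5 * 1.1, 1e-6)     # the factor-of-1.1 lever
trajectory = expected_value(0.5, 1e-6 + 0.001)   # +0.1% odds of the 10^54 outcome

print(x_risk / baseline)      # 1.1
print(trajectory / baseline)  # ~500 with these numbers: the 10^6 gap dominates
```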

In the end, saying that 10^54 is a big number just means that the future is important–not necessarily that any one intervention for the future is much bigger than any other. If you think that the ultimate happiness of the universe is the product of the odds we make it through the next few hundred years, the odds that we eventually attempt to reach for the stars, and how effectively we do so, then all three of those have the same claim to 10^54, and all three matter. Perhaps we as a society should invest comparable resources in each, and in the large number of other factors that influence the far future. And so this biggest-number competition we’ve been having–maybe that’s not the right way to look at cause prioritization in the first place. Maybe cause prioritization is actually a boring game of comparing lots of different small numbers, each of which will ultimately be multiplied by our world’s massive future potential, and deciding where we can have the largest impact per dollar, per hour, per unit of influence. Maybe our cause prioritization research should center around messy, empirical questions: How much potential do we have left to persuade those in power that x-risk should be taken seriously, how much marginal impact can we have on the quality and quantity of work being done to mitigate x-risk, and what resources can we spend on it? How much influence do we have over society’s morals surrounding nonhumans, and does it primarily come through veg outreach, academic outreach, PR campaigns, political influence, or something else? How much impact can we have on making sure that our society remains stable and advancing, and is our impact on the global society biggest through interventions in the third world or the first? (Or, for that matter, the second?) How much can we influence the priorities of future us? The problem with shutting up and calculating isn’t the calculation part–it’s the shutting up part. There are a ton of assumptions in any impact calculation, and it’s important to be consistently honest and critical about the extent to which big numbers flow through different causes.
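
In symbols, one way to write that three-factor model (the notation here is illustrative):

\[
U \;\approx\; p_{\text{survive}} \cdot p_{\text{attempt}} \cdot e \cdot U_{\max}, \qquad U_{\max} \sim 10^{54},
\]

where p_survive is the chance we make it through the next few centuries, p_attempt the chance we ever reach for the stars, and e how effectively we do so. Because the factors multiply, a 1% gain in any one of them raises the whole product by the same 1%; each has an equal claim on the big number.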

Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation that He is.

– Blaise Pascal, Pensées

Part III: To Stare Into the Abyss

The Westboro Baptist Church claims to offer you eternal salvation, and if you assign it any positive probability at all, it seems to own infinite expected value. But are they really the most likely infinite-expected-value religion to be true? It seems odd that a deity capable of creating and managing a universe would decide to condemn anyone to eternal damnation, but it seems especially unlikely to do so based on a consensual activity that has basically no externalities and takes up less than 1% of people’s time. It seems like, if nothing else, you’re probably more likely to avoid the fiery depths by spending your time at a mainstream church than protesting soldiers’ funerals (even if you think both are pretty damn unlikely). Although, if someone came up with a religion that promised ℵ₀ utility, that would probably be even better. Let’s get to it!

There are plenty of gods that promise infinite utility. In fact, it’s not even obvious that any individual one has more than infinitesimal probability. (If it seems crazy to you to assign positive but infinitely small probability to a possibility, consider: a napkin in front of me seems to be roughly three inches wide; what are the odds it’s exactly π inches wide?) But even if there is no human-like omnipotent being who has created an otherworldly paradise full of infinite utility: if you take seriously the possibility of a superhuman AI, could such an AI find a way to create infinite utility? And if it could, it seems like we’d probably be able to design an infinity more expansive, more pure, and kinder than what you’ll find in the Old Testament. Sure, maybe you think that the most plausible current interpretations of quantum mechanics and general relativity describe a finite universe, but how confident are you that there is no chance of the infinite in our universe? 75%? 99%? 99.999999999999999999999%? Because none of those are nearly confident enough to bring infinity down from its all-important throne.
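
The wager’s arithmetic is what props up that throne: for any strictly positive credence p in an infinite payoff, and any finite stakes c on the other side,

\[
\mathbb{E}[U] \;=\; p \cdot \infty + (1 - p) \cdot c \;=\; \infty.
\]

No confidence short of p = 0 makes the infinite term drop out.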

If, in the next few hundred years, we can create an infinite well of happiness with Turing-machine-like AIs, imagine what we could eventually build. We could dream small, and have each of these infinite mini-universes spawn its own infinity; if we recurse ℵ₀ times, then in the limit that can turn our original ℵ₀ utility into ℵ₁. And then maybe, maybe, we could find a way to recurse on those recursions–and build up the entire hierarchy of the cardinal numbers. But what are the odds that the biggest infinity we will ever discover, with billions of years and technology we can’t even imagine today, is an obvious implication of nineteenth-century set theory and twentieth-century computer science? Perhaps our cosmic endowment is infinite, but if so, we might not yet know its final form.
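
For reference, the tower that paragraph is climbing is Cantor’s hierarchy of infinite cardinals:

\[
\aleph_0 \;<\; \aleph_1 \;<\; \aleph_2 \;<\; \cdots \;<\; \aleph_\omega \;<\; \aleph_{\omega+1} \;<\; \cdots
\]

where each \(\aleph_{\alpha+1}\) is the least cardinal strictly greater than \(\aleph_\alpha\), and the hierarchy has no top: there is no largest infinity to settle for.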
