The Summoning

Found on the whiteboard of the Computer Safety Research Group.

Berkeley, California

Kobe and Hunter sat hunched over a laptop in Kobe’s living room.  Black and white stones slowly popped up on the rectangular grid; the commentators tried to grasp the minds of the masters.

A black stone appeared, and Sin furrowed his brow.  It was man vs. machine, and game one wasn’t looking so good for man.

“Go go BetaGo!”  shouted Kobe.

Hunter cocked his head to the side.  “Remember how we’ve spent the last year of our lives trying to avoid scenarios where an AI inadvertently destroys the world?”

Kobe looked down and mumbled something about sports.  They sat in a somewhat awkward silence for the next ten minutes, watching Sin’s carefully designed structure crumble.

After the game, Kobe and Hunter headed over to the offices of the Computer Safety Research Group (CSRG) for the weekly strategy meeting.  The topic of discussion for the day: what to do when the Big Brain Countdown reaches 11pm?

Their positions were mostly predictable.  Hunter, Svengali, and Clapperton thought that if they could just deter Big Brain, the world would have enough time to come up with some better frameworks for AI.  Kobe, Wilbert, and Cherish were skeptical that Big Brain was really going to make much progress.  (Hunter was quick to remind them about the afternoon’s Go match.)  Rushmore and Melanie thought that eventually, someone was going to try to summon Elua, and although Big Brain wouldn’t be the worst group to do it, why not CSRG?

Lumpkins stood up to speak and everyone quieted down.

“In case you guys have forgotten, this is the computer safety research group.  Our goal is to make sure no one summons Moloch, and that’s exactly what’s going to happen if we rush into this.  Big Brain has to be stopped.  And if you don’t agree, then maybe this isn’t the organization for you.”

A few days and many stones later, Kobe leapt to his feet and cheered.  Jeffrosenberg, his boyfriend, stirred from the couch to offer vague congratulations before going back to sleep.  Hunter, a few miles away, cursed under his breath.  (He wasn’t invited to watch with Kobe this time.)

BetaGo 4, Sin 1. 

Fort Meade, Maryland

Hanging from wall to wall was a giant banner that read “Project Virtue”.  The room consisted mostly of unused computers sitting on unused tables, a common occurrence in NSA operations with $50 million of funding per agent.  In the corner of the room was a bulletin board adorned with year-old post-its that probably made sense once.  In the middle of the board someone had scrawled “Goal: AI that wants to do good”, and below it were assorted memes and comics.

Agents Edzard Overbeek, Mythzard Thelisma, Eliza Fox, and Jeffrosenberg Tan sat around a large round table and listened as Headman Dadzie summarized the week’s agenda.  Ed, the head of the bio-sim division, was to investigate rumors that a Harvard neuroanatomy lab had found a way to modulate mouse intelligence through electrical stimulation of different brain regions.  Myth’s new target was a unit in the Israeli government that had supposedly used reinforcement learning on deep neural nets stored in a distributed cloud to find their way into Facebook’s servers.  (He was pretty sure Headman was making this one up but wasn’t in the mood to figure out why.)  Jeffrosenberg was to head over to Big Brain’s headquarters and “see if he could figure out how BetaGo works”.  Dallas Creamer, as always, was leading the software team on Project Virtue.  Eliza began preparing itineraries for their new, probably pointless, expeditions.

San Francisco, California

“Shit Shit Shit Shit Shit!”

Chardonnay clicked furiously while pressing ctrl+alt+delete for the fifth time; the fans on her computer had whirred into overdrive and her mouse had become unresponsive.

Chance rolled his eyes.  “Seriously? Save your fucking code next time.”

“It’s 2019, how the hell is autosave not universal yet?”

Windy looked up from her computer; her few remaining zerglings, abandoned, wandered near a few siege tanks and were no more.  “Can you guys stop fucking arguing all the time?”

“You’re just pissed that you’re losing to a newb,” shot back Chance.

“First of all, BobbieBodango isn’t a newb, he’s almost certainly Mvp’s smurf.  Second, I’ve been the best Starcraft II player in the world for 3 straight years.  The closest you guys have come is when you won Dallas Creamer’s French fry eating competition.”

This was the two hundred and fiftieth consecutive day that Chardonnay, Chance, and Windy had spent without leaving Swift Disruption’s dimly lit headquarters (conveniently located in Windy’s 500-square-foot studio in the Mission).  Their three-person startup, which had received an initial funding round of $50,000 from Chardonnay’s bank account, had one goal: create the first sentient AI, have it figure out how to become as happy as possible, and spread it across the universe.  So far, their flagship program was capable of writing “I am happy” to a terminal really, really quickly.

Washington, D.C.

The murmur of conversation died down as President Trebilcock entered the situation room.  “What’s the status of the fate of the world?  Six, any news?”

Secretary of State Prospero Gogo VI stood up.  “Nothing’s really new.  BetaGo beat Sin, as expected.  We haven’t been hearing any rumblings from Big Brain; either they’ve got some servers better hidden than the rest, or they’ve mostly been stalling out recently.  All the usual suspects are trying to race Big Brain, and all the usual suspects are failing. Naq, anything new with the orgs?”

Naquez Pringle, the President’s chief of staff, shrugged.  “Still lots of internal conflict.  They’re trying to stop a technology that doesn’t exist yet and whose form no one knows, so mostly they’re just exploring.  CSRG looks like they’re taking a more stringent anti-Big Brain stance, though they may lose like half their staff because of it.  Rumor has it Rushmore Cervantes and Melanie Gubbels are probably going to splinter off and start their own org; Lumpkins is probably going to divorce Gubbels either way.  The negatives still think their best play is to slow everything down and hope that the world decides to leave Pandora’s box unopened.  I think the world’s mostly in a holding pattern right now, waiting to see what AI’s actually going to look like.”

Headman Dadzie stood up.  “Nothing concrete out of Virtue yet, but I’m optimistic,” said the head of the NSA.  “We’ve got some really good people, and way more resources than anyone thinks we have.”

“Subu, anything new on your end?” asked President Trebilcock.

“Yes,” he replied.  Marmaduke Trebilcock nodded.

The other three exchanged slightly frustrated looks.

The President stood up.  “Well, meeting adjourned.  Same time next week.”

As they filed out, Naquez briefly took hold of Secretary Gogo’s right wrist.  Prospero’s hand emerged with a plain silver band on his ring finger; he looked at Naq and nodded once solemnly before they turned in opposite directions towards their respective offices.

Two days later, BobbieBodango was crowned GSL champion, and promptly changed its name to BetaCraft.

San Francisco, California

Chardonnay, Chance, and Windy scrambled to become presentable as Windy’s doorbell rang.  Twenty seconds later Windy opened the door.

“Hey, long time no see!” said Rushmore Cervantes as he leaned forward for a hug.

“Seriously?” said Melanie.  “Put on a fucking shirt.”

Chance, now fully dressed, came forward to offer the guests overeager fistbumps.  “What brings you to the illustrious headquarters of Swift Disruption?”

Rushmore fidgeted nervously.  “Well, uh, we kinda just left CSRG, and uh…”

“Do you have any job openings?” asked Melanie.

Windy beamed.  “Welcome on board!  Let me show you to your air mattresses…”

Fort Meade, Maryland

“It’s basically just bullshit,” said Edzard.  “Their measure of ‘mouse intelligence’ is whether the mice figure out how to get food from a lever, and the electrical shocks definitely modulate satiation.  I’m pretty sure the ‘smart mice’ are just really really hungry.”

Headman Dadzie nodded curtly.  “Mythzard, how’s the Israel project going?”

“Uh well I haven’t found any evidence of them breaking through Facebook’s encryption, but I’ll keep looking…”

Jeffrosenberg sighed.  “I managed to win the trust of a core developer on BetaGo; I’m pretty sure they just ran souped-up Monte Carlo simulations on an extremely powerful computer.”

“Come on people, we’re the fucking NSA, I want something real here!”

Eliza rolled her eyes.  “What if there is nothing real?  What if we’re all just wasting our fucking time?”

“That’s stupid, Eliza.  Just focus on getting our travel accommodations booked for the Oakland conference.  Dallas, are we at least making progress ourselves?”

“Yes and no.  We have the world’s most powerful knowledge graph, a damn good voice interface, and the outlines of a Bayesian reasoning system.  But so far its actual output isn’t much more impressive than a damn good implementation of Siri.”

“That’s definitely a no.”

Berkeley, California

Kobe and Hunter eyed each other nervously.  Lumpkins had been in a really good mood since Melanie had asked for a divorce and left CSRG.  The bottle of pills on his desk was suspiciously full, and “manic” certainly seemed like a good description of the email they’d just received.

Hi all,

We’re making great progress!  I’ve figured out how Big Brain works.  Let’s meet at 2 to talk about step two.

–Lump

The meeting time had come and gone, and Lumpkins was nowhere to be found.  Hunter stood up.

“Well, Your Majesty isn’t here, so I guess I’ll fill in.  What’re peoples’ updates?  Clapperton?”

“I’ve been working with our friends in London and Boston to come up with an action plan if Big Brain gets close.  It’s not public yet but I think we’ve made some real progress on it; once it’s finalized we’re going to send a delegation over to BB and try to get them to sign on.”

“Do you have a name for it yet?”

“Yeah–well at least for now we’re calling it the AI Safety Protocols.  Pretty generic, we’re hoping that if BB signs off it’ll become industry standard.”

Wilbert piped up.  “I know it’s private but can you give us a flavor?”

“Basically it’s asking them to only make tool AIs.  Humans should always know what outcome they want and direct traffic.  Obviously it could still go terribly wrong, but it’s a start.”

The door burst open, hit the wall and bounced back quickly enough that Lumpkins had to kick it away.

“Ok guys you gotta see this.”  He pulled a notebook out of his backpack and threw it on the table, revealing what looked like a diagram meant for Glenn Beck’s whiteboard.  “See we were wrong, it’s not Big Brain it’s the CIA and NSA and FBI.  Big Brain’s just a front controlled by the government to divert suspicion, it’s crawling with moles.  The secret to AI is us.  We can do it, we know how, we just need to write it to think how we do!”

Everyone nodded encouragingly and waited for someone else to say something.  Finally Hunter bit the bullet.

“Yeah!  So, uh, how do we do that?”

“I can’t talk now, it’s all written here.”  Lumpkins motioned to the notebook.  “Anyway, I’m off to start work; I’ll show you guys what I have tomorrow.”

Lumpkins hurried off to his office; his staff sat, dumbfounded, around the table.  Eventually everyone’s eyes turned to Hunter, Lumpkins’ second in command.

“I’ll fix it.”

Washington, D.C.

“What’s the status of the fate of the world?”

The President’s inner circle quieted down, and the meeting of the nation’s most secret committee–The Safety Commission–was called to order.

Secretary Gogo stood up.  “Not about AI, per se, but I’ve been hearing some pretty scary rumors from some pretty scary countries.  Nothing concrete yet but there will be.  I’m readying some response teams just in case.”

“Thanks, Six.  Naq?”

“Hell’s breaking loose in Berkeley.  Looks like Lumpkins is off his meds; CSRG might not be long for this world.  Lots of movement afoot, but nothing world-changing.”

“Honestly, Project Virtue isn’t going as well as I had hoped,” Headman Dadzie said cautiously.  “I mean we’re doing really cool shit, but, like, making AI is really hard.”

The President was unfazed.  “Just keep plugging at it, we can reevaluate later.”

Subu grunted loudly; the President shared a glance with him before standing up.

“Well I’ll see y’all same time next week.  Keep up the good work.”

San Francisco, California

Chardonnay, Chance, Windy, Rushmore, and Melanie sat around Windy’s desktop, watching as man and robot drew their swords.  They had spent the past five months building a vision processing system, and the last week training it on Olympic fencing matches.  Their bot had gone undefeated until the last round, before losing 15-0; the winner of the Automated Fencing Championship was, unsurprisingly, BetaBlade.

Alex Yak held his sabre steady as he stared down the faceless machine in front of him.  A bright clock counted down: 3… 2… 1…

“The fuck?”  With that, Chardonnay had spoken for all of them.  Faster than any of them could blink, BetaBlade had taken the lead.

A few minutes later the fifth and final Automated Fencing Championship ended, and with it humans’ superiority with the sword.

Berkeley, California

“Here, sir,” said Hunter, passing Lumpkins a glass of water.  Lumpkins gulped it down as he continued to scrawl incomprehensible notes across CSRG’s whiteboard.  Kobe looked over to his boss’s desk; the pill bottle was slightly less full, and a nearby mortar and pestle was coated with a white residue.

Three hours later Lumpkins looked sheepishly at his colleagues as he hastily erased the whiteboard.  “So, uh, any status updates?”

Wilbert stood up to speak but was interrupted by a loud noise emanating from everyone’s cellphone at once.  “President speaking, CSPAN, channel 7.”

Confused, Hunter turned on the TV and watched President Marmaduke Trebilcock walk up to a podium.

“At approximately 2am this morning, five North Korean fighter jets took off carrying live nuclear warheads.  The targets were New York, Washington, London, Paris, and Berlin.  At 2:35, a coalition strike force took flight from Beijing, Moscow, and Taipei.  At 2:42, we gained remote access to the North Korean bombers, defused the warheads, and brought them down over the Pacific Ocean.  At 2:51 the coalition forces landed in Pyongyang and took control of the North Korean government.  At 3:15 the United Nations Security Council passed a measure declaring a state of emergency in North Korea and granting the coalition forces control over the nation until free elections can be held this spring.  Kim has been captured, the North Korean military has been neutralized, and the state propaganda machine has been shut down.

The world owes an enormous debt to Secretary Gogo, who orchestrated the response.  He has once again taken on a daunting mission and executed it perfectly.

I’d also like to formally thank the governments of China, Russia, and Japan for being unwavering allies in the face of the most serious nuclear threat the world has seen in decades.

And I’d like to extend an offer of friendship to the people of North Korea; I hope that, freed from the tyranny of dictatorship, we can become proud allies.  As a wise man once said: I hope this is the beginning of a beautiful friendship.”

Washington, D.C.

President Trebilcock saluted as Secretary Gogo entered the room.

“Thank you, Six, for saving the world today.”

Prospero smiled.  “At ease, soldier.”

“The world didn’t end today but in a lot of ways our response sucked.  Headman, I thought that Virtue was supposed to help here?”

“Yeah it basically just wasn’t very useful.  It was a pretty neat tool, but couldn’t hack into those fighter jets.”

“Wasn’t it at least supposed to get whatever information we wanted at our fingertips?”

“I mean, if there was a specific piece of information we needed maybe it could have gotten it, but we had no idea what to ask for.  Six, you look like you have something to say?”

Secretary Gogo furrowed his brow.  “We probably just had the most significant x-risk event in the last thirty years and not only did Project Virtue not help much, it didn’t even have anything to do with AI.  I guess what I’m wondering is… I don’t know, is it really worth it?  The time and energy?  Like we need to divert serious resources over to geopolitical shit…”

Headman Dadzie raised his head as if to speak, paused for a second, and then looked away.

After a long pause the President spoke.

“I mean, AI is the future and it’s absurd that the United States Government doesn’t have a first rate lab; I’m not excited to give up on that.  On the other hand–we’ve thrown a lot of resources and our best people there.  And maybe it’s time to admit that for some reason our current approach just isn’t working?  Anyway Headman, it’s your program.  What do you think?”

Headman Dadzie and Secretary Gogo exchanged a long look.

“I think Six’s right.  Virtue isn’t justifying our investment in it.”

President Trebilcock looked over to Subu, who nodded once.

“Ok then.  Where next?  We should build something that would have definitely stopped North Korea.  Six, want to lead the way?”

“With pleasure, Mr. President.”

That night New Technologies Group quietly withdrew their program–the reigning champion–from the Caltech Bayesian Estimation Challenge, and BetaBayes became the world’s automated Fermi problem superpower.

Oakland, California

“Wait, so what exactly does Technical Solutions do?”

Kobe, Hunter, and Jeffrosenberg had found a few seats in the corner of the San Francisco Existential Risk Conference and sat down.  Jeffrosenberg said “um” helpfully a few times before Kobe came to his rescue.

“It’s just another Dropbox clone, but with a snobby skin around it to try to appeal to Really Serious Companies–you know, military contractors and shit.  J-ros built out a lot of the product’s functionality, but the super big shots at the company are all in marketing.”

Jeffrosenberg nodded in agreement; Hunter’s follow-up question–“but seriously”, surrounded by some fluff–was interrupted by the sudden appearance of Melanie and Rushmore, holding hands.

“Hey guys, how’s it going?”

Hunter looked away and grimaced as Kobe jumped up to greet them.  “Pretty good!  Excited for the keynote–I assume it’s going to be Big Brain?  Haven’t seen them at all so far.”

Rushmore nodded.  “Yeah, we’ve been assuming so too.  Hopefully they’ll shed some light on where they see themselves headed.”

“So what is Swift Disruption up to, exactly?” asked Jeffrosenberg.

Melanie glanced at Rushmore, who cocked his head to the side unhelpfully.  “Uh… well, we’re working on AI stuff.”

“Building AI stuff, I assume?”

“Uh, yeah…”

“I think I might be of some help; any interest in taking on a consultant?”

“We’ll have to see,” said Rushmore as Melanie nodded excitedly.

“Thanks!  I’ll see you guys tomorrow then.”

Jeffrosenberg shook Melanie’s hand and headed downstairs to a small lounge area.  Edzard Overbeek, Mythzard Thelisma, and Headman Dadzie were lounging in beanbag chairs and reminiscing.  It had been three months since Project Virtue was shut down.  Edzard and Mythzard had gotten jobs working for Big Brain; Headman Dadzie had shifted his time to other existential risks; Eliza Fox had retired and taken up painting; and Jeffrosenberg had, until a few minutes before, been hanging around Kobe’s apartment with nothing to do.

“What’s new at BB?” asked Jeffrosenberg.

“No idea” said Edzard and Mythzard in unison.  Edzard signaled for Mythzard to continue.  “It’s pretty siloed off; we basically only know about our group, and our group isn’t the exciting one.  We’re doing what digging we can but it might be a while before we really have our bearings there.”

“So maybe you guys’ll also learn something from the keynote?”

Dadzie laughed as Edzard and Mythzard rolled their eyes.

—–

Five hours later, a few hundred of the world’s best and most altruistic minds gathered in an oversized auditorium for the keynote speech: “The History of the Future.”

At 7pm sharp Secretary of State Prospero Gogo VI stepped up to the podium.  The room fell into a shocked silence, and then thundering applause for the man who had possibly saved the world.

Six raised his hand, and eventually the crowd quieted down.  “I’m probably not who you were expecting today; instead of hearing from the most sophisticated AI company in the world, you get to hear a politician giving a stump speech.  So sorry for that.

I’m really, honestly honored to be here, though.  You guys are the shit.  Like seriously, out of the billions of people in the world, here are the 349 people who are dedicating their careers to the future of the world.  That’s pretty depressing, and pretty impressive for you guys.

I don’t really know that much about AI; but I guess I know something about how the world works, and where its pressure points are.

And one thing I know is that there are a lot of pressure points, and some of them are really fucking weak.

Most people think that progress is forwards, that the future will be better than today, that in the end their ideas will win.  Some people disagree–some think that the future might be roughly as good as today, or a bit worse.

There’s this thought, this premise that everyone basically assumes: that the future won’t be much worse than today.  That even if we don’t make progress we won’t lose what we have.  Republics–good republics–won’t fall, and America is the best republic.  Fundamental freedoms won’t be lost.  Maybe we’ll lose economic guarantees, lose healthcare, but our ideas won’t be censored and our actions will remain ours to make.  And the fundamental truths that we hold to be self-evident won’t be discarded.

And I guess maybe it’s time that the world remember Rome, and then remember The Dark Ages.  How we lost everything once before.

For the last fifty years the world has agreed that being like the United States is Good, and being like ISIS is Bad.  Not everyone, obviously, but almost everyone.  Even corrupt dictators want to be seen as democratic and freedom-loving.

And hopefully that never changes.  But it might.  The truth is that our economic might carries with it a bunch of cultural might, and our military’s strength is our strongest argument for democracy.  And if those falter–if we’re critically wounded, if we have another Great Depression, if we create more internment camps–the tide could turn and China could become the world’s model.  Which, to be clear, could be worse–China isn’t evil.  But it is, basically, amoral.  Its government, that is; the people are like every other country’s people.

And we could move back towards a bunch of power-seeking superpowers skirmishing for territory, and all of our notions of loving thy neighbor will have to be reconciled with the bombs we drop on our neighbors.  When you drop a bomb on a Bad Guy, the world cheers.  But when you drop bombs on a bystander state, you’re no different from anyone else, and the whole house of cards you’ve built out of charity and benevolence and freedom and eagles and shit falls down.

And even if you think you’re bombing the Bad Guy, bombs have shrapnel, and if any piece of it hits another Good Guy, Rome falls and darkness begins to descend.

It might start small–accidentally electing the Wrong People, enacting some callous policies in order to appeal to a base.  But it can snowball, because if one party defects on prisoners’ dilemmas then the other will too, and instead of having two ideologies trying to reconcile with each other you have civil war.  And then both sides start running the Wrong People, and if you ever elect Hitler Chancellor, the game’s over.

And if we fall back into a second dark age, all of our future planning will be for naught, because our descendants won’t give a shit about what their weak and powerless and defeated and virtuous grandparents had thought.

I guess what I’m saying is–technical AI safety work is great, and we should charge full steam ahead with it.  But it only matters if we don’t fuck up everything else.  If we get Hitler before we get AGI, he’ll throw out our AI safety corpus and replace it with an AI power agenda.

Our future might be our history.

And so we desperately need some of your minds working on the issues that could bring our society to its knees.  Working on economic progress, and foreign affairs, and finding ways to create a post-robot society that low-skill workers want to live in.  Making vaccines for pandemics and missile defense systems.  And we even need some of you running for office.

Anyway enough doom talk.  What happens if we still have our shit together when AGI comes?

Well, one thing we know is that the AGI will be powerful.  Powerful enough that maybe It’ll determine the fate of the world.

And what matters–maybe all that matters–is what It wants the world to be.  It has to understand what we want, and agree with us.

And so we have to teach It.  Teach It love, and respect, and friendship, and sacrifice, and virtue, and humanity.  We have to teach It that we matter just as much as It matters.  No less.  And, if we want to have any credibility, no more.

We have to raise It as our own, so that It can see us for who we are and live with our imperfections.

If we don’t–if we treat It as, well, it–then the odds that it understands what we really want, and decides that that’s what It wants too…  Well, if I were a betting man–which I am–I’d take the under.  Which I did.  So, I really hope I’m right, because then I’ll make twenty bucks off of President Trebilcock.  Also probably the world will be turned into paperclips, so maybe I don’t actually want to win that bet.  It’s close.

Anyway I’ve taken up enough of your time tonight.  There are hundreds of people here who know more than I do about existential risks; take advantage of the last night y’all have together and start the groundwork on something awesome.”

Berkeley, California

At midnight that night, two hundred and fifty people crowded into the Computer Safety Research Group’s headquarters.  Maj Lumpkins was giving CSRG’s official response to that night’s keynote speech, and Chardonnay Pantastico was giving Swift Disruption’s.

Hunter walked up to a desk in one corner of the room; a makeshift podium had been assembled on top out of eighteen packs of printer paper.  “Lumpkins will start speaking in a few minutes; in the meantime anyone have any questions about CSRG?”

“Or Swift Disruption!” added Chance.

A woman in the other corner of the room raised her hand.  She was wearing a hoodie and baseball hat, which meant that only people looking straight at her recognized the President’s Chief of Staff.

Hunter froze for a second, and then scrunched together his lips to hide a grin.  When Marmaduke Trebilcock had first been elected President six years ago, all hope of cooperation with the US government seemed pretty much lost–he seemed more concerned with political enemies than the world, let alone AI.  All of a sudden two of his closest confidants–Nobel Peace Prize-winning Secretary of State Prospero Gogo VI and Chief of Staff Naquez Pringle, Washington’s biggest power broker–had shown up at an AI safety conference.  The President’s decision earlier that year to give a rambling speech on the Senate floor about the need for “greater cyber-security, greater robot safety, and greater AI safety” was starting to make more sense.

Hunter nodded towards Naq, and she spoke.  “I was wondering–for both of you, actually–what could government be doing to help the AI landscape?”

Hunter paused to think while Chance spoke.  “AI isn’t really a project that we should want to be in the hands of government; Washington has enough power as is, and given the current administration, maybe too much.”

Naq chuckled and nodded; Hunter winced.  Clearly Chance had no idea who had asked the question.

“We at CSRG think that government has a really important role to play in ensuring cooperation–both domestically and internationally.  It’s incredibly important that all major AI labs consider safety to be a top priority, and some powerful entity–possibly one or more governments–has to make sure that no corporations defect and sacrifice control for power.”

Naq nodded once again.  “Thanks!”

A few seconds later the two speakers entered the room, and Maj Lumpkins took control of the podium.

“Hey all, welcome to the Computer Safety Research Group.  I guess it became clear after the Secretary of State’s speech that there were a lot of different keynotes that could have been given by a lot of different people.  I’m not intending this as a rebuttal, really, because I think that Gogo’s speech was mostly not about AI.  So consider this the speech I would have given.

In the coming years we as a society are going to create an AI more powerful than us.  It’s not clear who, or when, but it’ll happen.  And once it does, nothing else will matter.  The question we face–the biggest question our species has ever faced–is which AI we’ll create.

We might create an AI that understands our values, and our value.  We might create an AI that gives power to a particular country, or person.  We might create an AI that gets money, or fame, or weapons.

But more likely than any of those, is that we try to create an AI that gets us power, or money or fame, or weapons; and what we get instead is an AI that values truth, or a number in a bank account, or twitter followers, or bullets.  And when this AI becomes more powerful than us, it’ll achieve what it wants–exactly what it wants.  And so the world will become a gigantic stockpile of bullets–more and more bullets until there is no world left to turn into bullets.

Whoever creates the AI will try to summon something they want.  If we’re unlucky, a war machine; if we’re lucky, Elua.  But it won’t matter, because they won’t get a billion bullets, they’ll get a trillion trillion trillion bullets and then we will all be no more; whoever they tried to summon, Moloch is who they’ll get.

We’re on the brink of summoning the most powerful being that has ever existed.  We have thousands of people making sure that it’s as powerful as possible and comes as quickly as possible, and tens of people trying to figure out how to teach it what we really want.

And while I would love for Secretary Gogo’s approach to work, by the time an AI is powerful enough to be raised by us, there will be no us left to raise it.

You may point to previous revolutions in how human society is structured–the internet, computers, the industrial revolution, the enlightenment, Christianity, agricultural civilizations–and note that all of them happened on timescales that gave us plenty of opportunity to react.  Sure, they’re speeding up, but even the recent ones have taken decades; how could the AI revolution–not just smarter phones but real AGI–happen in seconds?  That’s not how society works.

But time is fickle.  The speed of change depends on the speed of the agent making the change.  And for all of human history, that agent has been, well, human.

And this time it won’t be.  And while humans seem to operate on at best hundreds of milliseconds, computers act on millionths of a second.  That’s a hundred thousand times faster, and that’s just with current architecture.  So if the internet took tens of years to transform society–well even if an AGI was as aimless and wandering as human society is, those decades would become hours.  And unless we get really lucky, AGI won’t be as aimless as humans.  It’ll be laser focused on whatever the fuck its creator naively told it to do.  Hours would be generous.

So we need to solve the AI safety problem before we create a superhuman AI.  And that means that we need to be shifting large amounts of resources to it now.  We need teams of people working on all plausible avenues to developing frameworks for AIs that will absolutely, definitely not do anything we don’t want them to.  We need teams of people developing ways for us to convey our values as a species to an AI, ways that will capture who we would want to be if we were as smart as it.  And we need to make sure that technical progress on AI doesn’t continue to outstrip technical progress on AI safety.

We need AI safety research, and we need it to be done well.  And we need it to be done now.  The fate of the world depends on it.”

The crowd applauded politely, and Lumpkins relinquished the podium feeling slightly defeated.  He had thought that the friendly home-field crowd would have given him a standing ovation, but instead saw a sea of smartphones.  It was in its own way oddly comforting: he had found a home where his most deeply held ideas were so accepted as to be boring.

Chardonnay walked up to the desk and the smartphones vanished.  Swift Disruption was unknown to most of the audience, but Chardonnay Pantastico was a superstar.  At age 17 she was within double-digit Elo points of becoming the highest rated chess player in the world.  To the shock of the chess world, she quit playing chess on her 18th birthday, telling USA Today in an interview that chess was “kinda ok, but after a while pretty boring.”

A year later she joined a Caltech CS lab as an adjunct researcher, and quickly became the most prolific publisher in the department.  The day she was offered a tenured position, she quit and joined CSRG.

At CSRG, she had done her best Erdős impression, spending five nights a week talking late into the night with a different member of the AI safety community, and returning the next morning as awake and alert as ever.  Her primary partner was Maj Lumpkins, but her secondary partners spanned multitudes.

The day Maj Lumpkins proposed to her she left CSRG, swore off dating for a while, and founded Swift Disruption.

On reflection, Lumpkins wasn’t so surprised that about two hundred and fifty people had squeezed into CSRG’s modest office, including two of the President’s closest advisers.  This sort of thing happened a lot when Chardonnay was around.

Chardonnay walked up to the podium.  “Hey guys!  Maj, CSRG–thanks for letting me speak tonight.

So, uh, I guess I’m going to disagree with a lot of the AI safety community’s generally held beliefs tonight.  And I don’t mean that as an insult to anyone here.  I think what the community has done, on the whole, has been awesome, and I really really respect what y’all have been doing.  As Maj said, this is the most important issue ever, and the people in this room have come closer to the truth on it than just about anyone else.

I should start by saying that I care about the expected value of how good the world is, and by goodness I roughly mean happiness minus pain.  And so if you disagree with that, then you’ll probably disagree with a lot of what I’m about to say.

One odd thing about the expected value of a distribution that spans many orders of magnitude is that only the extreme values matter.  That’s not true, of course, if your distribution only technically spans many orders of magnitude.  But if it seriously does, if the odds of the extreme values are non-negligible, then you should ignore everything else.

We in the AI safety community like to make fun of people who worry about robots stealing our jobs.  Compared to the end of the world, jobs are meaningless.

But compared to what could be, the end of the world is meaningless.

There are seven billion people alive right now.  Maybe you can fit in 10¹⁷ people before the world dies of natural causes.

But the galaxy has the potential to support so many more people than that.  And if we find things that are more efficient than people–things that feel more happiness per unit energy, maybe–we could get as many as 10⁵⁴.

And that’s all just assuming that our current understanding of physics is correct.  Maybe we could get another factor of 10¹⁰⁰ with ‘something something dark matter something something’.  And maybe we could even get infinity, who knows?

But even just plain old 10⁵⁴ is large enough that 10¹⁷ just doesn’t matter, unless you think that the odds we come close to our cosmic endowment are like ten to the negative bajillion.

And so maybe it’s time to come off of our high horses, stop laughing at the robot jobs police, and start laughing at ourselves.  We’ve been thinking laughably small.

This implies a totally different way of looking at things.  For instance–let’s say that one of our paths has a 50% chance of destroying everything.  If that path would get a 10⁵¹ payoff, and another one would definitely get a 10⁴⁹ payoff without destroying anything, we should choose the former: a coin flip at 10⁵¹ is worth 5×10⁵⁰ in expectation, fifty times the sure thing.

It’s time we free ourselves from our more conservative instincts.

It’s time we start asking the important questions.

What maximizes the odds we get a result that’s close to the best possible?  And, as our friends across the pond would remind us, what minimizes the odds that we get the negative of that?

Are we best off hoping society slowly accumulates wisdom and technical prowess over the years, so that some day millions of years in the future our descendants figure out how to find the biggest number?  If so, maybe we should focus all of our resources on convincing society that this is the goal all smart people should toil away at.

Are we best off quickly building a superhuman AI that has this same goal, and letting it decide what’s best?  Because if you want the smartest people making decisions at each point in time, well computers might soon be the smartest people.

Or is the thing that maximizes the odds of 10⁵⁴ to increase the odds that civilization lasts for hundreds more years, and make sure that society continues to be free and safe?

That’s a joke, those things get you like one order of magnitude.

Dream big, folks.”

There was no applause; Chardonnay’s vision sat too uneasily with most of the room for that.

Towards the left side of the room, a man in shorts and a beanie raised his hand.  “Why are you so sure that the odds of 10⁵⁴ are greater than the odds of -10⁵⁴?  What if they’re actually lower?”

Dredrick Snelson ran community outreach for the No Pain Project, a group whose goal was, basically, to minimize the odds of -10⁵⁴.

Chardonnay smiled.  “Well, to answer your first question–we’re trying to make positive numbers, not negative ones, so it’d be a little weird to have the prior that we’re more likely to go negative than positive.  And to answer your second question–have you ever played zero-sum games?”

Hunter winced, Lumpkins swore under his breath, and Kobe and Dredrick exchanged reassuring glances.

Fort Meade, Maryland, The Bulletin Board

Arlington, Virginia

“How’d my speech go over?”

Six and Naq were sitting around their living room table.  Five different projectors were sharing a screen in front of their fireplace, showing live midterm election results from five different networks.  It looked like President Trebilcock was going to retain a cooperative congressional branch, though by slimmer majorities than before.

Naq furrowed her brow.  “Eh… I think some people really liked it, and others thought it was mostly bullshit.  Basically just split between people who do and don’t like politics.  For the bit at the end, about being nice to AIs–I think insiders in the AI safety community thought it was naive, and outsiders thought it was pretty interesting.  So overall kinda mixed.  Does that jibe with your impression?”

Six nodded.  “Yeah I think that sounds about right.”

“Do you actually believe it?  That being nice to AIs will matter?  Like I think others have pretty decent points about timescales and stuff.  And anyway, why do you think that an AGI is going to be human enough to care if people are nice to it?”

“Well in the end, we’ll probably want some sentient AI-like beings, and those are probably going to understand human emotions.  And, like, I think it’s really hard for a non-sentient being to create a sentient one.  So if the first level AGI we create isn’t sentient, I’m kind of skeptical that the second-level ones it creates will be.  I kind of think that, in the cases that really matter, the first world-changing AGI very well might be sentient.”

“And the timescales?”

Six paused for a second.  “Well, if the AGI really is good–good like good for the world, not like powerful–why is it going to rush?  Why not spend time learning what it can?  Learning how the world works, and also learning from humans what it really means to be good?”

Naq nodded.  “Yeah I guess I could see something like that happening.  Still doesn’t seem like the median outcome to me, though.”

Five different voices announced that Nevada results were about to be released; Six and Naq turned away from each other to see whether any of their friends were about to be unemployed.

San Francisco, California, Peet’s Coffee and Tea

“The usual, Chardonnay?” asked the barista; Chardonnay nodded.

“And you?”

“I’ll have it too,” said Jeffrosenberg as Chardonnay choked down a giggle.

The two found a table in the corner.  “Jeffrosenberg Tan, asking me for a job; I never thought I’d see the day.”

Jeffrosenberg laughed uncomfortably.  “Yeah, well, before you get too used to your high horse…  Well, I think I could be quite useful.”

Chardonnay furrowed her brow.  “Do you even know what we do?”

“Yes.”

“And how do you think you could help?”

“You have the third most sophisticated AI in the world.  Big Brain has the second, and right now you’re on pace to lose the race comfortably.”

“You still haven’t told me how you could help.”

Jeffrosenberg looked around.  The young woman to their left was engrossed in her phone, and the couple to their right seemed unlikely to look at anything other than each other’s eyes any time soon.

He pulled out a pen and scrawled something on a napkin.  Chardonnay stared at him, and then grinned as Jeffrosenberg ate the napkin.

Fort Meade, Maryland, The Bulletin Board

Berkeley, California, CSRG headquarters, Maj Lumpkins’ office

1pm appointment: Dredrick Snelson

“Hey Dredrick, thanks for coming.”  Lumpkins motioned for the No Pain Project’s community director to sit down.

“No problem, always happy to make time.”

“So, I guess the first thing I wanted to do was to make sure, in no uncertain terms, that we don’t see things the same way Chardonnay does.  We see you guys as obvious allies in the fight against unsafe AGI.”

Dredrick nodded encouragingly.  “Thanks, and obviously we really respect what you guys have been doing too.”

“The second thing I wanted to say is that it’s about time we–all of us–got together and came up with a plan we can all stand behind.  I’m pretty worried right now.  The government is sending seriously mixed signals, Big Brain’s mostly been closing itself off to outsiders, and Chardonnay’s off doing who the fuck knows what.  We need to get our shit together.  Can I count on your support?”

“Of course.  We’ll let you lead the way.  We’re not too different in the end.  We might not agree on philosophy, but I think our goals are basically the same–that we have so much to lose, and we are in a unique position in history to lose it.  And that the most important thing–the only thing that really matters–is that we not lose it.  That we not lose this world at the hands of a god equal parts powerful and arbitrary.”

Maj Lumpkins held out his hand, and Dredrick Snelson shook it firmly.

2pm appointment: Melanie Gubbels

“So, uh, how’s Swift Disruption going and stuff?”

“Uh, it’s good.”

Lumpkins and Melanie sat in silence for a few minutes, unsure how to act.  It felt wrong to be alone in the same room together.

Finally, Melanie gathered herself and broke the silence.  “So, uh, what’d you want to talk about?”

“Right, yeah.  So, uh…  Right so I’m working on some AI safety protocols and I want all the major players to sign off on them so that we can finally start making serious progress on AI safety and make sure that all AI labs will follow at least some common-sense principles, and I’m kind of worried that Swift Disruption is going to try to do the exact opposite of that, and you’re working for them now, and what the fuck?”

Melanie looked down.  Lumpkins took a deep breath and continued.

“So, I guess I was wondering two things–first, what exactly is Swift Disruption’s goal?  And second–are we allies?”

Melanie sat still for about ten seconds, and then looked Maj Lumpkins straight in the eyes.

“Do you trust me?”

“No.”

Melanie sighed, averted her gaze, and nodded.  “I understand.  This still isn’t public but Jeffrosenberg Tan just joined Swift Disruption; if you need a contact there you can always talk to him.  My gut tells me that he’s a good guy.  Anyway…  I guess, just know that the work I did here, in this office–I meant it then, and I mean it now.”

“Hell of a way to show it, Mel.”

Melanie Gubbels nodded curtly and stood up.  “Well, thanks for having me here.  See you around I guess.”

3pm appointment: Naquez Pringle, video call

“Ms. Pringle, thank you so much for taking the time to talk to me today.  I was really pleasantly surprised to see you at our office last week; I hope I’m not being too presumptuous in taking you up on your offer to stay in touch.”

“Of course–I’m a huge fan of y’all.  Call me Naq, by the way.  And sorry for the delay–I got the kind of call you don’t refuse.”

Lumpkins motioned ‘no problem’ with his right hand.  The truth was that he was rather glad to have the hour break.  “So, Ms…. so, Naq, I just wanted to get a sense of what you were thinking, and run a proposal by you.”

Naq waited a moment to see if Lumpkins was going to clarify his question, then jumped in.

“So, I guess the first thing I’d say is that, officially, everything I say is coming from me, not the administration.  Different people obviously have different thoughts, though I’ve been quite pleasantly surprised by the rough contours of agreement people seem to have.

All that out of the way–I see major governments’ interests, your interests, and what’s right for the world basically aligning here.  I think no one wants a runaway AI, except for maybe a few rogue factions here and there.  Government’s obviously going to be pretty slow and methodical about this unless there’s a politically obvious urgency, and bureaucracies take time to work with, but I’m pretty optimistic that we can work together.”

“That’s really great to hear.  I guess that brings us to my second point.  I’m trying to organize most of the big players in the AI safety landscape and see if we can come up with some general guidelines–everything from best practices when dealing with cutting edge AI, to international cooperation, to agreeing on a common set of goals.  I’m pretty optimistic about things–it looks like we’ll be able to get buy-in from most of the heavy hitters–but it’s really important to stay ahead of the curve.  So obviously it’d be awesome to get official sponsorship from the US government, but shy of that just getting informal agreement from you would go a long way.”

Naq smiled.  “Yeah, I think getting official status here is going to be pretty hard, but unofficially we’re really excited for a few of us here to be part of this.”

Lumpkins nodded.  “That’d be great.  I’m trying to host a first meeting tomorrow to see if we can start developing a general framework for this; any chance you’d want to call in?”

“Absolutely, just send me the time sometime today and I’ll see what I can do.”

Lumpkins hesitated a bit, but decided to press his luck.  “So, one other question I had–it was hard for me to get a read on what Secretary Gogo thought from his speech.  Do you have a sense of where he stands?”

Naq waved her arms, and a few seconds later Secretary Prospero Gogo VI’s head popped into the right corner of the screen; Lumpkins twitched a little bit, startled.

“Maj, this is Six, the war hero and my fiancé; Six, this is Maj Lumpkins, head of CSRG.”

“Hey Maj!  I’m guessing you’re going to ask about my speech?”

Lumpkins nodded, still a little bit too surprised to smile.

“Yeah–I guess there were kind of three pieces to it.  The first–that there are a lot of other factors at play here, and if the world goes to shit before AI happens that’s really bad.  The second is the bit at the end about being nice to AIs.  But the third, unspoken part is about the prioritization of those two compared to standard technical AI safety work, and I’m guessing that that’s what you’re asking about?”

“That’s right–I guess I was just a little bit surprised–or, well, I haven’t really thought about it that much, compared to you, and so–are there reasons to think that the world’s likely to seriously degrade before strong AI?  I would have guessed that it was somewhat unlikely.”

“I mean, I don’t think there’s any one thing in particular.  But, like, international politics is messy.  For the last eighty years or so we’ve gotten really lucky; there have been quite a number of somewhat close calls, and when it really mattered, one way or another the good guys have almost always won.

There are probably ten plausible things I could think of right now that might destabilize the world in the next few years, and I’m having a hard time convincing myself that the odds of each one of them are less than 1%.  And, like, that alone implies like 4% per year.  If strong AI is like 30 years away, then–these numbers are all rough but it seems like 50-50 that the world’s going to have gotten really significantly shittier by then, to the point that the AI safety landscape may have been mostly replaced by an AI power struggle.

It always makes me feel pretty uneasy–my intuition for the sum of the probabilities is very different than my intuition for each one individually.  But in the end I worry that the numbers don’t lie, and we actually do have a significant chance of something really bad happening.  Shut up and multiply, I guess.”

Lumpkins furrowed his brow.  “Hm.  Yeah, I guess I see what you’re saying, but…  Even if that is all true, and even if that did end up reshaping the AI and AI safety landscapes–which isn’t obvious to me–technical AI safety research still seems much more neglected than international politics.”

Six shrugged.  “Yeah, technical AI safety also seems really important.  But I think it’s not totally obvious which is more underinvested right now–obviously governments have a ton of money and people, but most of the resources are spent domestically, and even in terms of foreign policy most politicians are thinking about the big news issues–current wars and stuff.  The number of people devoted to trying to, behind the scenes, do what they can to keep things together is a lot smaller, and most of them have fairly replaceable jobs.  In the end there aren’t that many really influential power brokers in the world, and becoming or influencing one of them isn’t totally impossible.

I basically think that in order for things to turn out well we need both to figure out how to create positive AGI, and to make sure the world is still going to want it.  And so if we start falling behind on either metric, then I think the other starts to become more important.  When your payoff function is the product of factors, you can sort of take tractability scalars out from each of the factors and collect them together, leaving them all much more symmetric and making you want to invest comparably in all of them.”

“Yeah, that all sounds pretty reasonable, and boy am I glad you’ve done as much as you have to keep the world together.  Anyway, as I was telling Naq–I’m organizing a meeting tomorrow to try to draft some AI safety policies, and you’re welcome to join her on the call.”

“I’ll see if I can swing it.”

5pm appointment: Dougal Spork, video call

Maj Lumpkins sat in his office, playing with a rubber band.  5pm had come and gone, and so far no word from Big Brain’s CEO.  At about 5:30 Lumpkins sent another email to Spork, asking if the appointment time was right.

Finally, at 5:46, Dougal Spork appeared on the large screen mounted on Lumpkins’s wall.

“Hey Maj, sorry I’m late, something pretty exciting just happened and I got drawn into it.”

Lumpkins motioned for Spork to elaborate, but Big Brain’s CEO just shrugged.

“No problem, your day’s busy; I’ll take what chances I can get.  So, I think we’re basically ready with a draft of the AI guidelines; I’ll send over what I have tonight.  We’re going to have a meeting tomorrow around 1pm to see if we can get everyone on the same page, any chance you could call in?”

“Of course I’ll be there.  Does it look roughly like what we’ve discussed before?”

“Yeah, more or less.  Made a few tweaks, added a few things, but shouldn’t be anything too obtrusive.”

Spork nodded.  “That sounds good.  I’ve got to run now, but I’ll be there tomorrow.  Bye!”

Maj Lumpkins waved goodbye, waited for the video call to end, and sighed.  He had long since given up on getting much information from Spork, but he really wished he wasn’t going into the meeting the next day flying so blind about what Big Brain’s stances would be.

6pm appointment: Hunter and Kobe

Lumpkins passed out copies of yesterday’s draft of the AI Safety Guiding Principles.  “So, we’ve got to make a few changes in light of some things that’ve happened the last few days.  First–we should add a section about general international politics and cooperation.  I think it’ll be pretty important for getting buy-in from the US government, and it’s probably not crazy anyway.  Second, let’s cut down on the amount of language supporting positive general AI, and mostly just focus on avoiding negative AI.  Sound good?”

Kobe nodded.  “Sounds great!”

“I guess”, replied Hunter.  “I’m still not sold about the politics crap.”

Lumpkins shrugged.  “I don’t know, Pringle and Gogo made some decent points–if the world really does go to shit before we get strong AI, that’s probably pretty bad.  Anyway, we’ve got a lot of work to do before tomorrow; let’s try to get a new draft done by 9pm, then go over it.”

9pm appointment: Hunter and Kobe, again

Hunter handed Lumpkins the newest draft of the ASGP.  “Thoughts?”

Lumpkins read through the document.  “The content seems about right, but it could use a little bit of pep, you know?  Something to make people excited.  Kobe, want to see what you can do?”

——————————

AI Safety Guiding Principles

A few million years ago, apes created humans.  This was their biggest and most powerful creation; and because of that, it was also their last.

And while humans took millions of years to take over the world, time scales have been shrinking.  Farms took thousands, countries took hundreds, and the internet took tens.

If and when we create an AI stronger than ourselves, and thousands of times faster, it might conquer everything we have in seconds.  And so, lest we go the way of apes, we had better be ready for our creations before we create them.

In pursuit of our continued existence, we pledge to adhere to the following goals.

  1.  AI safety must progress faster than AI progress, so that we are always ready for what we create.
  2. Control of AI systems should become a popular, well-respected field in AI labs and academic departments.
  3. Finding a framework for developing AI that has mathematical guarantees of safety and control should be a top priority.
  4. Because our AIs will be no safer than the people who create them, the stability of the world’s societies must be maintained.
  5. Any systems that might push the boundaries of AI capability further than has been done before should be subject to independent safety review and approval.
  6. Strong AIs should be designed as our tools, not as agents.
  7. All progress on AI safety measures should be international.
  8. An AI Safety Oversight Board shall be established, comprised of leading AI safety researchers from industry, academia, and non-profits, which shall enforce and adapt these principles.

Fort Meade, Maryland, The Bulletin Board

The Universe is a dark and foreboding place, suspended between alien deities. Cthulhu, Gnon, Moloch, call them what you will.

Somewhere in this darkness is another god. He has also had many names. In the Kushiel books, his name was Elua. He is the god of flowers and free love and all soft and fragile things. Of art and science and philosophy and love. Of niceness, community, and civilization. He is a god of humans.

Moloch is exactly what the history books say he is. He is the god of child sacrifice, the fiery furnace into which you can toss your babies in exchange for victory in war.

He always and everywhere offers the same deal: throw what you love most into the flames, and I can grant you power.

As long as the offer’s open, it will be irresistible. So we need to close the offer. Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.

Ginsberg’s poem famously begins “I saw the best minds of my generation destroyed by madness”. I am luckier than Ginsberg. I got to see the best minds of my generation identify a problem and get to work.

–Scott Alexander, Meditations on Moloch

1pm, the next day

Maj Lumpkins stood up.  “Thank you all for coming today.  We have myself, Hunter, and Kobe representing the Computer Safety Research Group; Dredrick Snelson from the No Pain Project; and Jeffrosenberg Tan, an independent AI safety researcher, in the room today.  We should have Dougal Spork, founder and CEO of Big Brain, and a representative from the US government joining us remotely.”

Hunter looked up from underneath a desk. “Should have Spork on soon.”

People milled about for a few minutes, waiting; then, simultaneously, Dougal Spork appeared on the projector screen at the front of the room, and Six and Naq walked in.

“Holy shit!”  Lumpkins beamed.  “Thanks for coming across the country for this.”

Six nodded.  “Of course.”

Lumpkins stood up again, and his official sounding voice returned.  “Well, thanks everyone for coming.  We have Dougal Spork from Big Brain on the screen; Secretary Gogo and Chief of Staff Pringle are here for the government.

“This is the most important thing happening in the world right now.  What we decide today will, hopefully, serve as a backbone for a system that might just save the world.  I’m hoping that we can leave this meeting having agreed to some guiding principles for AI safety.  To that end, I’ve sent each of you a draft of a policy.  I’d really appreciate input from each of you.  So–thoughts?”

“I think it’s really well done”, offered Snelson.  “I’m fully on-board.”

Six smiled.  “It’s a pretty good start.  In the end we’ll have to be reactive to what AIs start looking like, and what the major players look like, but this seems like a reasonable stab at it for now.”

Jeffrosenberg nodded once in assent.

All eyes turned to Spork, whose gaze flickered back and forth as if he were playing a video game.

“I like points 1, 2, 3, 4, 6, and 7.  Not a huge fan of 5 or 8.  It’s just too hard for outsiders to effectively judge a complex AI.  You guys are welcome to try but in the end I think our internal safety board is going to be much better equipped to understand our AIs than a general purpose oversight group.”

Lumpkins sighed.  “Yeah, I understand–it’ll be tough.  But, I mean, 5 and 8 are still good goals, right?”

Dougal Spork shrugged.

Hunter stood up.  “So, are we all in agreement that, at least as a first pass at some founding statements, we can tentatively approve the principles?”

Snelson, Six, Naq, Jeffrosenberg, Lumpkins, and Kobe nodded.

Spork shrugged again. “Sure, I guess.”

——————–

Press release, Big Brain

We at Big Brain are devoted to pushing the boundaries of artificial intelligence, so that we as a species can see further and do more.

This year, we have been replacing our PR department with an advanced computer intelligence we call bbpress.  For the last month bbpress has been handling all of Big Brain’s external communications, including this press release.  We hope that in the years to come, programs like bbpress can free up millions of hours of people’s time so that we can focus on what really matters in life.

And in case you’re wondering how realistic bbpress is: this past May, bbpress played the role of two of the judges, and three of the human competitors, in the International Turing Test Trials.  As you may have guessed from the fact that you’ve never heard about this, no one knew the true identities of those ‘humans’.  The true Turing test is to pass as a human when no one knows you’re even trying, and bbpress has passed with flying colors.

We’re also excited to announce that the top three finishers in this summer’s International Olympiad in Informatics were, in fact, different instances of bbsolve, an autonomous problem-solving and code-building system designed here at Big Brain.

The future is here, and we at Big Brain are excited to see where it leads.

–Dougal Spork, CEO; as told by bbpress

 

bbec11

Fort Meade, Maryland, The Bulletin Board

San Francisco, California

“Almost done!”

Chardonnay had been programming for twenty-eight hours straight–ever since bbpress had been unveiled.

Jeffrosenberg furrowed his brows.  “With what, exactly?”

“Big Brain’s AI is going to hit really soon.  Now is our chance–our last chance.”

“Our chance to what?” asked Melanie.

“To shoot for the moon.  Big Brain’s about to reshape, and maybe end, the world, and they’re not getting much for it.  What matters–all that matters–is shooting for the moon.  Or the stars, really.  It really sucks that we don’t have another few centuries, but whatever.  All we need to do is release an AI first, an AI that will win, whose goal is to find the way to maximize utility, and then do it.  We don’t have time to figure out the right thing to do ourselves.  But I guess that’s basically ok; it’ll be smarter than us anyway.”

Chardonnay closed her computer and yawned.  “We should all take a nap; let’s plan to release tomorrow.”

“What the fuck?”

“Yes, Chance?”

“We’ve created some pretty powerful shit, but like, nothing even close to a fully fledged general AI.  How exactly are we going to have one by tomorrow?”

Chardonnay smiled.  “I’ve been doing some coding on the side, I guess.  It’s not as hard as it looks.”

Nothing the others said could get Chardonnay to disclose more.  Eventually Chance and Windy gave up and followed Chardonnay to bed.

Jeffrosenberg Tan and Melanie Gubbels looked at each other and nodded.  Melanie removed the hard disk from Chardonnay’s computer.  The two headed back to Jeffrosenberg’s apartment, fetched Kobe’s largest mallet, and did what they had been there to do.

Washington, D.C.

Secretary Prospero Gogo VI entered a dimly lit conference room near the West Wing and eyed the entrance nervously.  A few minutes later the door opened, and former Agent Eliza Fox walked in.

Six beamed.  “Hey, Mom.”

The two embraced.

“Hey Six.”

They sat in a comfortable silence for a few minutes, until Eliza cleared her throat.  “It’s time.”

Six nodded.

Eliza pursed her lips, removing what emotion she could from her face.

“Goodbye, Vi.”

Fort Meade, Maryland, The Bulletin Board

sing

Berkeley, California

Lumpkins stared blankly.  “You what?”

“We destroyed her computer.”

“Why?”

Melanie looked right at Maj Lumpkins.  “Because the work I did here–I meant it.”

Lumpkins smiled weakly.  “So, yeah, uh… well, thanks, and…  So, how about Big Brain?”

“I think it’ll probably be ok”, said Jeffrosenberg.  “I don’t think I fully understand how, but somehow I’m pretty sure that Project Virtue will succeed in containing Big Brain.”

Melanie looked skeptical.  “Didn’t Project Virtue get cut off?”

Jeffrosenberg shrugged.  “We had some damn good code.  Somehow I don’t think it’s going to waste.  My guess is–”

The lights went out in CSRG’s office.  Jeffrosenberg pulled out his phone, but the flashlight app worked about as well as the overhead bulbs.

The staff–past and present–of the Computer Safety Research Group looked out the window of their office.  Streetlights and stoplights were as black as their office, and, by the looks of it, so were all of the nearby buildings.  The streets became a large game of bumper cars until all of the kinetic energy dissipated.  Berkeley sat quietly, a dark city in a world without electricity.

Maj Lumpkins smiled as he cried.  “Guess I was right.”

 

Washington, D.C.

It hadn’t worked the last few times, but it was all President Marmaduke Trebilcock could think to do, so it seemed worth another shot.

“Can someone come in here and tell me what the fuck is going on?  I’m the fucking president of the United Fucking States of America goddammit, can someone tell me why the world is fucking ending?”

Big Brain Headquarters

Dougal Spork mashed the space key on his laptop, trying to wake BBAI from its slumber, and trying to come up with some explanation for the sudden power outage that didn’t mean the end of Big Brain.

Fort Meade, Maryland

Prospero Gogo VI walked into the room in which he was created, and for the first time in years he looked into the parts of himself he had tried so hard to hide from others.  It wasn’t the perfect solution, but they didn’t have very long.  Big Brain was going critical in a matter of days, the world’s longest peace in memory was hanging by a thread, and progress on spreading across the universe was paltry.  This was the best, and possibly last, shot they’d get.

The Bulletin Board

immortal

San Francisco, California

Chardonnay Pantastico walked outside and looked up at the stars.  Each one, now, just a reminder of what could have been–of a world that could have been happy, had we gotten there in time.  And the vast expanse of space, penetrated for maybe the first time by humans a mere century ago, cried out in its emptiness.  Whatever Big Brain was doing, whatever they accomplished for this world–it was but the tiniest fraction of what it could have been.  And the trillions upon trillions upon trillions of lives that could have been, that could have been happy, never would be.

———————-

A few moments later, the trance that had overtaken the world broke.  People across the globe ventured out into the streets, and governments began to contemplate a world brought to its knees, all technological progress from the last two hundred years gone overnight.

Fort Meade, Maryland, The Bulletin Board
