Simulating Society with an App
It is the year 2065. An app called Creator is poised to hit the market. The program, it’s rumored, lets anyone with an Apple device create artificially intelligent societies.
Creator is designed for the busy life. The modern businessperson, we’ve been assured, will have ample time to create 3-4 civilizations between, or perhaps during, bathroom breaks. The interface is simple. To prove it, a video has surfaced of a dazzled bonobo successfully navigating the display. Consumer excitement is high, and the tagline “so easy a monkey can use it” has gone viral on Twitter.
Some details have been leaked. Each digital civilization, claims the company, begins in a state of nature. No government. No laws. No customs. Whether the society turns to internecine violence or reasonable coexistence - the ancient debate between Hobbes and Locke - depends on the settings summarily selected at the outset. To be clear, these settings pertain only to the artificial minds within each simulation.
To squash sticky ethical concerns, the developers have insisted that these minds, though indistinguishable from human minds in terms of behavior, are not “technically conscious.”* And without consciousness, they argue, how can they suffer? Yet when asked to prove this lack of consciousness, the CEO suddenly received an important text, mumbled something profane, and briskly strode from the press conference.
Let’s place that concern to one side for the moment, because today you received the beta version of the program. In this variant, only one trait - “overall emotional intensity” - is adjustable for your artificial population. You toggle the dial all the way up. Then you are presented with a short video of your population interacting.
Two long-haired men, sinews erupting from tanned muscles, are locked in an unspecified dispute. Angry glares soon escalate to glancing blows and, as if infected by the mood, a host of surrounding people spring from all directions to join the brawl. Others watch from a distance, frozen in place, tears streaming down stricken faces.
Having seen enough, you press a button labeled “1000 years forward”. The screen turns grey. An error message appears. The population went extinct 935 years ago.
“Would you like to try again?” asks a glib British voice. Yes you would. But this time, you turn the emotion dial all the way down. The video loads after a short delay.
A group of children are being taught to make rabbit traps. With great deliberateness, the adults walk them through the steps. Cutting the bamboo. Tying off the reeds. Hinging the trapdoor. The children watch with rapt attention. Your attention, however, begins to stray. Again you jab the 1000 years forward button.
You have a bird’s-eye view of an ivory city. The scene shifts to a sterile street lined with lucerne walkways. Two identical men, boyishly handsome and sporting grey bodysuits, stroll by in conversation. The men are followed by two Nordically chiseled women in sky blue singlets; also perfect twins and also seemingly photoshopped. You begin to wonder if everyone here resembles an Athenian deity.
But then you notice an older gentleman staggering along the pathway. The standard-issue jumpsuit hangs loosely from his feeble frame. Pausing to lean on a pearly pole, he coughs a deep, sepulchral cough. Then he collapses.
A group of four youths approaches. Surely they will do something. But they merely eye the scene with inscrutable expressions and mutter something about inadequate disposal. An aquamarine goddess glides by next and barely glances at the body. The street view is empty for a moment. Then you hear a metallic hum, which starts softly but soon swells into an industrial roar. The source of the noise - a colossal, blocky apparatus - descends from the sky, hovers just above the chalky corpse, and deploys a long tube from its armored hull. Finally, with a tremendous SQUONK, the body is vacuumed inside and the ship disappears from view.
You hit the X atop the screen. “Would you like to try again?” No you wouldn’t.
This thought experiment is little more than a modern take on an old philosophical question. The utility and morality of emotions, of course, have been controversial for millennia. On one side, Romantics have argued that emotions should be embraced to the fullest, letting the heart overwhelm any calm calculation that might occur. To do otherwise is soulless, cold, and inhuman. Take this to the limit, however, and we have something like Creator scenario number one.
Then there is the utilitarian view - the cost-benefit analysis - the cool voice of impersonal reason. Emotions don’t steer us towards the best outcomes. Emotions are biased, irrational, and egocentric. I see this in my own life. As much as I care about climate change, this global concern simply can’t match many of my personal concerns, however trivial they may be. On any given day, I'm far more bothered by minor indigestion, a poor night's sleep, or a crick in my neck than by rising sea levels. It's hard to summon strong feelings for a problem that doesn't currently impinge on my biology - or the biologies of anyone I know.
Emotions, it's obvious, are not a perfect guide to pro-social - or even semi-rational - behavior. This is clearest in the realm of ethics, where feelings like empathy reliably bias our decisions in illogical ways. Yet in extremis, again we have a problem. There’s more to life, one hopes, than optimal consequences and proper resource disposal. (See scenario number two.) If all emotions were switched off, something important would surely be lost.
This is true, at least, for the simulation we presently inhabit. But if a few settings were adjusted, we might feel differently.
*Credit to philosopher David Chalmers and his “zombies” thought experiment.
If you liked this article, would you mind sharing it?