You'll Likely Be a Murderer

Tags: essay, artificial-intelligence, philosophy

We have no clue what AI will be able to do in the future. It is young. And while we humans do, indeed, have fancy people working on making fancy new tech that does fancy stuff, no one has produced anything that even remotely resembles human-level general cognitive capabilities (the word general is important there, because we sure as hell get our asses kicked by computers at Go, chess, and a multitude of other “narrow” tasks and activities). Nothing created thus far has been able to perform as well as an average human would at any “general” task.

To sum up the current state of AI research and development, our attempts are still turning out to be pretty dumb. They’re best described as savants — extraordinarily brilliant in one specific area, but limited only to that one area. To some, that general stupidity is comforting. It serves as an indication that humanity is still quite some temporal distance away from creating anything that might fundamentally change our world. To me, that general stupidity is instead deeply, insanely worrisome.

The worry stems from a place of ethical concern. There’s a host of nuances and philosophical arguments surrounding the ethics of AI, and we’ll be looking into one of the most pressing problems in an effort to get ahead of it before it arrives.


Machines can have moral status. Now, I know you’re probably saying, “Wait, what’d he just say? Machines? Moral status? Bullshit!” But please hear me out on this one.

It’s pretty well agreed that, at the current time, our AI systems and ML models don’t actually have any moral status. We can terminate them, pause their operation, delete them, copy them, or rewrite the code that constitutes them as we so please. But how might those actions be interpreted if a given machine did have human-like moral status? Would we be killing, cloning, and maiming without consent?

Things could get very sticky, very fast, so it’s probably a good idea to dig a bit deeper and ascertain which specific qualities are enough to warrant moral status in a being. Usually, it boils down to two things.

The first is sometimes called sentience — it’s the capacity for a being to have experiences. It’s the ability to be hurt and to suffer, and conversely to be happy and experience pleasure.

The second is often called sapience — it’s the set of capabilities associated with “higher intelligence”. These are the things that differentiate humans from animals — things like self‐awareness and reason-responsiveness.

A common view is that animals have sentience and therefore some moral status, but are limited because they’re not sapient. They can’t make complex deductions about the world as a whole, or often even recognize themselves in a mirror! Humans, however, have both sentience and sapience, and therefore full moral status.

Now onto the good stuff. A sentient AI or ML system should then be treated not as just some pile of computer parts — instead, it should be regarded in a similar way to a living animal. It’s morally wrong to maim, kill, and inflict pain on a bunny, so why should it be different for sentient AI/ML systems?

But wait, there’s more!

What if one day humanity popped out an AI/ML system that possessed not just sentience, but also sapience? Following our earlier logic, it then ought to have full moral status — exactly the same as humans.

Perhaps this strikes you as grossly incorrect. If that’s the case, I highly recommend “pausing” here and trying to precisely reason why. After you’re done, come back and continue reading!

There are two core principles that lead to my above conclusion. The first is that of Substrate Non-Discrimination — to paraphrase Bostrom and Yudkowsky, if two beings have the same functionality and the same conscious experience, but differ only in the substrate of their implementation (read: stuff they’re made of), then they have the same moral status.

One way to think of this is as non-racism against computers. Just like it doesn’t matter in terms of moral status if someone’s skin is dark brown or pale white, it shouldn’t matter if their brain is made of carbon and water or of silicon. As Sam Harris puts it, there’s nothing special about ‘wetware’ — the fact that our brains are made of carbon and water does not give them inherently more moral status than exact copies made of silicon. If your entire brain could be emulated on a computer, neuron for neuron, memory for memory, thought for thought, and existed in a programmed world where it could run and jump and interact with other whole-brain emulations, you’d want it to be free, happy, and living a life free of suffering and pain! It seems as though it’d be morally disgusting to slaughter or inflict pain upon the computerized version of yourself.

The other core principle is that of Ontogeny Non-Discrimination — if two beings have the same functionality and the same conscious experience, but differ only in how they came into existence, then they have the same moral status.

We widely accept this today. The fact that a baby was born of a mother who ate more healthily and took her vitamins doesn’t give that baby any more or less moral status than one whose mother indulged in ice cream and booze every day and didn’t much consider her health. Likewise, if we were to create human clones, we wouldn’t think that a cloned baby deserves less moral status than any other baby born through sexual reproduction. Similarly, just because a being came into existence as an AI/ML system through programming, it doesn’t deserve less moral status than a human being who was born the usual way.

If you accept both of those principles, then it doesn’t matter whether an AI/ML system was created by doofy programmers or runs on a computer instead of in a brain — it deserves the same treatment we think we owe one another.


Although this makes it far easier to develop an ethical code for how we treat AI/ML systems (i.e., treat them the way we think we should treat other humans), there are crazy potential scenarios that arise from how such systems may come into existence.

While we humans toil away trying to develop smarter and better AI/ML systems, it’s likely we wouldn’t even know if one of them became sentient or sapient. There may be no surefire way to determine whether a system can feel pain or reason at a level comparable to humans.

Let’s concoct a fairly reasonable scenario: a developer is using genetic algorithms to optimize an AI/ML system, and at some step along the way the candidate systems become sentient and sapient. Because there might be no way of knowing ahead of time that such an outcome would occur, the developer keeps living their merry life and continues to run the algorithm for a few days on a future-era supercomputer. Over the course of those few days, it’s conceivable that trillions of versions of the system being optimized were run. It’s also conceivable that those versions were tested on their ability to complete some sort of task in a simulated world. Let’s say our developer is attempting to make an AI/ML system that can jump extremely efficiently. The consequence of this set of innocuous circumstances is that trillions of beings, all at least morally equivalent to humans, are mercilessly slaughtered after being thrown into a testing pit because of their inability to jump efficiently in the only “world” they’d ever known.
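To make the shape of that scenario concrete, here’s a minimal sketch of a generic “create, test, discard” loop in Python. Everything in it is a hypothetical placeholder (the genome encoding, the jump_fitness() stand-in, the population numbers), and nothing in it is remotely sentient; it’s only meant to show the structure this kind of optimization typically takes.

```python
import random

# Purely illustrative sketch of the "create, test, discard" loop described above.
# The genome encoding, jump_fitness(), and all the numbers are hypothetical;
# nothing here is sentient.

POPULATION_SIZE = 100
GENERATIONS = 1_000        # a days-long run on a future supercomputer would dwarf this
MUTATION_RATE = 0.05

def random_genome(length=16):
    """A genome is just a list of numbers controlling the simulated jumper."""
    return [random.uniform(-1.0, 1.0) for _ in range(length)]

def jump_fitness(genome):
    """Stand-in for 'how efficiently this candidate jumps' in the test world."""
    return -sum((g - 0.5) ** 2 for g in genome)   # placeholder objective

def mutate(genome):
    """Return a slightly perturbed copy of a surviving genome."""
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
discarded = 0

for generation in range(GENERATIONS):
    ranked = sorted(population, key=jump_fitness, reverse=True)
    survivors = ranked[: POPULATION_SIZE // 10]        # keep the top 10%
    discarded += POPULATION_SIZE - len(survivors)      # everyone else is simply deleted
    population = [mutate(random.choice(survivors)) for _ in range(POPULATION_SIZE)]

print(f"Candidates created and discarded: {discarded:,}")
```

Notice that no step in the loop is labeled “kill”; discarding the bottom 90% every generation is simply what the algorithm does.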

The National WWII Museum’s estimate for the number of deaths during World War II, one of the largest losses of life in the history of mankind, is 60 million. Now imagine ~17,000 World War IIs occurring in the span of a few days, entirely by accident.
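For the back-of-the-envelope arithmetic behind that figure (taking “trillions” to mean a round one trillion discarded instances, which is an assumption, not a measurement):

```python
# Back-of-the-envelope arithmetic behind the ~17,000 figure.
# One trillion discarded instances is an assumed round number.
discarded_instances = 1_000_000_000_000
wwii_deaths = 60_000_000          # estimate cited above

print(f"{discarded_instances / wwii_deaths:,.0f} World War IIs")   # prints ~16,667
```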

The potential for accidentally horrific moral deeds in AI/ML development is unfathomable.

If you are someone doing that development, you must consider all of your actions with the utmost care.

If you know someone doing that development, you must push them to think about their actions with the utmost care.

Peace.