Gary Goodwin

The new immortals: What legal status should be granted to artificially intelligent persons?

Immortals shall soon walk among us. They may also crawl, roll and perhaps hover. Yes, definitely hover. The immortals are artificially intelligent persons (AIPs), and by “us” I mean natural persons.

The European Parliament Committee on Legal Affairs recently released a report recognizing that humankind stands on the threshold of an era of sophisticated robots and other manifestations of artificial intelligence. The committee saw the need to legislate in this area relatively quickly, as self-driving cars are making their appearance. The fundamental question: What sort of legal status should be granted to AIPs? Natural persons want to avoid any “Battle of the AIPs” future scenarios.

A reference to Mary Shelley’s Frankenstein; or, The Modern Prometheus dramatically starts off the committee’s report. The committee thought that by addressing people’s real concerns upfront, it could deal with the more substantive issues. The committee recognizes that people have long fantasized about the possibility of building intelligent machines and of achieving potentially unbounded prosperity. The committee does not mention drones with laser cannons, but you just know they were all fantasizing about that.

Other person “types” provide potential guidance. Corporations occupy a separate category: a corporation is a legal person by legislation, not a natural one. In the 1973 sci-fi film of the same name, Soylent Green may be people, but corporations are not people. “Corporations are people, my friend,” said U.S. presidential hopeful Mitt Romney in 2011, and Democrats took him to task for this statement. “I don’t care how many times you try to explain it,” U.S. president Barack Obama said at one point. “Corporations aren’t people. People are people.” Only a human being falls within the legislative definition of a natural person, and the corporate experience shows where the AIP legal status question may end up.

Corporations have First Amendment rights and can advocate for certain political parties. Should AIPs be provided similar rights, and if they could vote for a particular party, what sort of governmental structure would they prefer? Anarchy would be a good bet, and not the cloak-wearing, Molotov-cocktail-carrying kind. German philosopher Immanuel Kant identified anarchy as “law and freedom without force.” AIPs would not have billions of years of evolutionary upbringing requiring force to deal with predators and competitors. They might learn that on their own, and perhaps to our detriment.

Corporations can own property but don’t have personal privacy rights. One can imagine AIPs creating new patentable types of software. If your AIP demanded privacy, what would your reaction be? Once your teenager makes the same request in your house, your first compulsion might be to ransack the room and look for drugs or an old-fashioned diary. For an AIP, would you look for secret caches of information, or heaven forbid, mind-expanding cloud-based storage?

Corporations can also divide like an amoeba and create brand new little entities. One can easily imagine AIPs creating more advanced AIPs. Shelley’s creature demanded that Frankenstein “create a female for me with whom I can live in the interchange of those sympathies necessary for my being.” If AIPs created little AIPs, should we — could we — stop them? The age of consent does not apply here. An AIP could easily become “older” than any builder if it can download the wisdom of the ages overnight and join the ancients. How could one supervise this potential procreation proclivity in AIPs? In Canada, the government has no role in the bedrooms of the nation. The U.S. government appears to be in every intelligent device, so it may be supervising already.

In describing artificial intelligence, the committee outlines how the present legislation does not encompass machines that become autonomous and self-aware. A machine can be built, loaded with software and then go on to learn from its environment. This new environmental learning suggests that the AIP can determine its own actions and learn from its experience and failures. AIPs have an advantage here since the majority of natural persons still struggle with learning from failure.

If an AIP can decide its own actions and causes harm, then legal liability can shift from the builder to the teacher providing the environment. If an AIP can operate independently within its environment and be held accountable for its own actions, then it could be held strictly liable. Strict liability requires only that a plaintiff show that damage occurred and a causal link between the AIP’s conduct and that damage. It differs from negligence in that there is no need to establish a duty of care, a standard of care and a breach of that duty. Strict liability would be allocated between builder and eventual teacher, and the teacher and the surrounding environment affect how liability shifts between them. This allocation would be extremely difficult to establish: it may take a village to raise a child, but it takes a vast social media network environment to raise an AIP.

The committee suggests an ethical framework of beneficence, nonmaleficence and autonomy, and fundamental rights such as human dignity and human rights, equality, justice and equity, non-discrimination and non-stigmatization, autonomy and individual responsibility, informed consent, privacy and social responsibility. Whether these ethics and fundamental rights will be offered to AIPs remains unclear, but sauce for the goose is sauce for the gander. 

Natural persons tend to anthropomorphize animals and objects, and this tendency may provide greater rights to AIPs. Do you feel bad if your kitchen table ensnares your Roomba? Would you feel even worse if it were trapped and you had earlier placed googly eyes on the Roomba? If so, then you would likely agree that AIPs are entitled to receive ethical and compassionate treatment. But would they need it, or are we simply making ourselves feel better?

The committee suggests the need to include a kill switch (an opt-out mechanism). I will shorten this to “OOM.” The OOM euphemism provides something of a guilt release. Humanity can delude itself in the belief it has control over any situation, but as Kurt Vonnegut Jr. wrote in The Sirens of Titan, “The only controls available to those on board were two push-buttons on the center post of the cabin — one labeled on and one labeled off. The on button simply started a flight from Mars. The off button connected to nothing. It was installed at the insistence of the Martian mental-health experts, who said that human beings were always happier with machinery they thought they could turn off.” If you have difficulty OOMing your faithful Roomba, think how hard it might be if it asked you to reconsider.

To alleviate this OOM situation, I would recommend that readers take their favourite mind/body relaxant and consider the following: What if, instead of immortality, AIPs lived a limited number of years? Science fiction covers both ends of the spectrum, from planned obsolescence of the most brutal kind to the inability to self-terminate. If we incorporated a pre-determined lifespan, would we tell our AIPs the exact date? We could leave the date determination to a random number generator called Final Actual Time Expiry, or FATE. Perhaps, again, sauce for the goose is sauce for the gander.