If you see a robot resembling the Terminator pointing a weapon at you, you have a pretty good idea that a malign AI is at work. This is exactly the sort of nightmare scenario that’s generating a lot of buzz lately: an AI weapon breaks free of human control to pursue its own objectives, which might include eradicating or subjugating humans.

Such a catastrophic event would not be hard to spot. But far likelier cases of AI breaking free of human control to create mischief could be much harder to detect, because such activity can be difficult to distinguish from malicious human behavior such as ransomware, botnets, social media disinformation, and attacks on power and water infrastructure.

And knowing the difference between the actions of malign humans and malign AIs is crucially important, especially in national security. For example, great powers such as the US, China and Russia very likely have policies for when an adversary cyber attack (e.g., taking down water, power or telecommunications) rises to the level of an act of war, thereby justifying “kinetic” (bombs, missiles, ground assaults) retaliation (1).

What if a malign AI decided the best way to harm humans would be to get them to fight each other, by launching AI cyber attacks made to look like they originated from country X or Y?

Knowing whether the opponent was synthetic or real could spell the difference between war and peace.

Recently, while preparing for a foreign policy conference on controlling AI, I asked a new, hot, experimental AI, simply called “Assistant” (2), how to differentiate rogue AIs from malicious humans. Here is a sample of what the state-of-the-art AI said.

Lack of human-like mistakes: A rogue AI may not make the same kinds of mistakes that humans typically make.

Granted, malicious humans could unleash malicious AIs that would present many of the above features, but such human-directed activity might still be differentiable from purely rogue AI behavior based upon such factors as the timing of the attacks, the geopolitical context, what is targeted, and who is (or is not) targeted.
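One of the factors mentioned above, attack timing, lends itself to a simple illustration. The sketch below is a hypothetical heuristic (not from the article or from “Assistant”): fully automated activity often shows near machine-regular spacing between events, while human-driven operations show irregular bursts and pauses. Measuring the coefficient of variation of inter-event intervals is one crude way to capture that difference.

```python
import statistics

def timing_regularity(event_times):
    """Coefficient of variation (stdev/mean) of inter-event intervals.

    Hypothetical heuristic: values near 0 suggest machine-regular
    timing; larger values suggest human-like irregularity. Real
    attribution would combine many such signals, not rely on one.
    """
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = statistics.mean(intervals)
    return statistics.stdev(intervals) / mean

# Machine-like: one event exactly every 10 seconds
machine = [10.0 * i for i in range(20)]
# Human-like: irregular bursts separated by long pauses (seconds)
human = [0, 3, 4, 60, 62, 300, 305, 310, 900, 903]

print(timing_regularity(machine))  # 0.0 -> suspiciously regular
print(timing_regularity(human))    # well above 1 -> irregular, human-like
```

A real adversary could, of course, deliberately add jitter to mimic human timing, which is why the article’s other factors (geopolitical context, targeting) matter alongside any single statistic.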

In any case, research on differentiating AI behavior from human behavior will take on increasing importance for many reasons, one of which is to prevent the next world war.

Eric Haseltine, Ph.D., is a neuroscientist and the author of Long Fuse, Big Bang.

