Bertrand Russell’s not entirely crazy dream of ending war via logic

By Robert Wright, Feb 22 2020

The truth, whatever it may be, is the same in England, France, and Germany, in Russia and in Austria. It will not adapt itself to national needs: it is in its essence neutral. It stands outside the clash of passions and hatreds, revealing, to those who seek it, the tragic irony of strife with its attendant world of illusions.     

                                    –from Russell’s essay “On Justice in War-Time” 

Among the many things Bertrand Russell is known for are these two: (1) laying the foundations of “analytic philosophy,” which values clear expression and fine-grained analysis over grand theorizing; (2) disliking nationalism, especially in its belligerent forms. I’d never imagined a connection between the two, but the philosopher Alexander Klein, in an essay published this month, says there is one. 

Russell, according to Klein, hoped that the rise of analytic philosophy would reduce the stature of grand philosophical paradigms with names like “German idealism” and “British idealism.” He wanted to “destroy a conception of philosophy as an articulation of a ‘national mind’,” Klein writes. 

This may sound like a pretty roundabout way to combat nationalism—and it would have seemed especially ineffectual at the time Russell was doing some of his writing on the subject, as World War I was engulfing Europe. But, Klein says, there was a second sense in which Russell hoped analytic philosophy could discourage national conflict. 

The methodology of analytic philosophy involves defining your terms with painstaking precision, thus crystallizing the meaning of propositions so they can be evaluated via strict logic. Russell’s “theoretical antidote to the irrational, sectarian vitriol between European nations,” writes Klein, “was to try to show how logic could function as an international language that could be used impartially and dispassionately to adjudicate disputes.” Well, that would be nice!

Russell isn’t the only very smart and very rational person to have hoped that smartness and rationality could save the world. Harvard psychologist Steven Pinker’s book Enlightenment Now argued that putting more faith in reason, and less in such unreasonable things as religion and post-modernism, could smooth the path to salvation. A few years earlier, Harvard psychologist/philosopher Joshua Greene argued in Moral Tribes that people could coexist peacefully if only they’d abandon primitive moral philosophies (including religiously based ones) and embrace utilitarianism, with its coolly rational goal of maximizing overall human happiness.   

I’ve argued (in Wired and The Atlantic, respectively) that Pinker and Greene are in some ways naïve in their hopes for saving the world. It’s tempting to say the same thing about Russell, but I do think there’s an important sense in which his diagnosis of the problem is sharper than theirs. And, for that reason, Russell’s hope may be closer than their hopes to being realized, even if not exactly in the way he may have envisioned. 

One thing I like about Russell’s diagnosis is the breadth of the accompanying indictment. He pins blame for war not just on intellectuals of one stripe or another, but on the entire intellectual class. He wrote, “In modern times, philosophers, professors and intellectuals generally undertake willingly to provide their respective governments with those ingenious distortions and those subtle untruths by which it is made to appear that all good is on one side and all wickedness on the other.” 

Similarly, he doesn’t confine his suspicion of moral arguments to one kind of moral argument or another; any school of moral thought can be deployed on behalf of one tribe against another. Russell wrote, shortly after World War I began:

Ethics is essentially a product of the gregarious instinct, that is to say, of the instinct to cooperate with those who are to form our own group against those who belong to other groups. Those who belong to our own group are good; those who belong to hostile groups are wicked. The ends which are pursued by our own group are desirable ends, the ends pursued by hostile groups are nefarious. The subjectivity of this situation is not apparent to the gregarious animal, which feels that the general principles of justice are on the side of its own herd. When the animal has arrived at the dignity of the metaphysician, it invents ethics as the embodiment of its belief in the justice of its own herd.

That’s harsh! And it may sound too harsh as a description of modern-day philosophers, many of whom don’t come off as ardent nationalists. But Russell was writing when World War I had brought out the nationalist in just about everyone in Britain—somewhat as Pearl Harbor, and later the 9/11 attacks, would make American peaceniks a very rare breed.

The 9/11 attacks are especially instructive, because they led to a war that obviously made no logical sense. The premise of the war was that Iraq was building weapons of mass destruction, yet Iraq was letting UN weapons inspectors look anywhere they wanted to look—until the US ordered them to leave Iraq so that the invasion could begin! Crazy as this sounds, (1) it actually happened; and (2) among the invasion’s many highly intellectual supporters were representatives of all major schools of ethical thought: utilitarians, Aristotelians, Kantians, Christians, Jews, Muslims, Hindus, Buddhists, and on and on.  

So I think Russell was right to see the problem as broad and deep. He deserves credit for not buying into Greene’s comforting belief that the problem is people who don’t subscribe to a particular ethical philosophy, or Pinker’s comforting belief that the problem is people who don’t subscribe to “Enlightenment values.”  

But what about Russell’s comforting belief that the problem is people who don’t share his commitment to analytic philosophy—and what about his related hope for an “international language that could be used impartially and dispassionately to adjudicate disputes”?  

I can’t tell, from Klein’s essay, how literal this hope of Russell’s was. But I wouldn’t put it past him to have hoped quite literally—to have imagined a day when, perhaps thanks to the further evolution of analytic philosophy, disputes would be settled via formal logic, a kind of logic that yields conclusions as inexorably as mathematical proofs. 

Leaving aside for the moment the practicality of this hope, it has this much going for it: it recognizes that a recurring problem with intellectual tools is that they’re in the hands of human beings. The utilitarianism that Greene hopes will save the world—the idea that we should maximize total human well-being—looks great on paper (and in fact I’m a fan of it). But if you converted all the Hatfields and McCoys to utilitarianism, they’d still fight, because their tribal allegiances would trigger cognitive biases leading each side to feel that it was the more aggrieved, and that retribution was therefore in order. (All that would remain is for them to reconcile the ensuing violence with utilitarianism by arguing that retribution against transgressors is in the long run conducive to overall societal welfare because it discourages transgression. I suspect that’s the move the Hatfields were pondering when this picture was taken.)

Or, as the NRA might put it: Systems of ethical thought don’t kill people. People wielding systems of ethical thought kill people.

Russell seems to want to take the people out of the picture. If you had a sufficiently precise and rigorous system of language and logic, human intervention would, in a sense, not be needed. Two nations make their competing claims, their claims are plugged into the system, and the laws of logic do the rest: a judicial ruling basically just pops out, without the need for a judge who, being human, would be fallible. 
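To make this picture concrete, here is a toy sketch of what such mechanical adjudication might look like—not Russell’s actual program, just an illustration in miniature. Every “fact” and “rule” below is invented for the example: two sides’ claims are reduced to neutral propositions, impartial if-then rules are stated in advance, and the verdict follows by rote inference, with no judge in the loop.

```python
# Toy illustration (not Russell's system): adjudication as mechanical
# inference. All propositions and rules below are invented for the example.

def forward_chain(facts, rules):
    """Apply if-then rules repeatedly until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# The competing claims, reduced to neutral propositions (hypothetical).
facts = {
    "A_used_force_in_B",      # the undisputed event
    "no_consent_from_B",      # B did not invite the strike
    "no_armed_attack_by_B",   # A was not responding to an attack
}

# Impartial rules, stated in advance and applied identically to either side.
rules = [
    (["A_used_force_in_B", "no_consent_from_B"],
     "prima_facie_violation"),
    (["prima_facie_violation", "no_armed_attack_by_B"],
     "A_violated_ban_on_aggression"),
]

verdict = forward_chain(facts, rules)
# The "ruling" just pops out of the inference, no judge required.
print("A_violated_ban_on_aggression" in verdict)
```

The hard part, of course, is everything this sketch assumes away: getting both nations to agree on what the neutral propositions and rules are in the first place.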

Well, a century later, this system of language and logic doesn’t seem to exist. But humankind has developed a different system of language and logic that, once set in motion, requires no human intervention. It’s called a computer program. 

Which raises a question: Is it too far-fetched to think that someday an AI could adjudicate international disputes? Fifteen years ago I might have said it was. But each year Google Assistant seems to do a better job of understanding my questions and answering them. And each year more and more highly skilled American workers see computers as a threat: paralegals, radiologists, sports writers... Can we be so sure that judges will be forever immune? Is it crazy to imagine a day when an AI can render a judgment about which side in a conflict started the trouble by violating international law? 

Obviously, the technical problems are formidable. But if you solve them, you’ve done more or less what Russell wanted to do. You’ve removed human bias from the analytical process by putting algorithms in charge.

Of course, humans would design the algorithms, and there are kinds of biases you can build in at that level—features that, whether you mean them to or not, will wind up favoring certain kinds of countries over others. Then again, you can say the same about international law itself. Computers would at least lack one bias that will threaten to afflict international rulings so long as judges are human: favoring—even if unconsciously—one country over another just because of which country it is. “Russia,” “China,” “America”—none of those labels would tug at a computer’s heartstrings or stir its wrath, or trigger thoughts about how favoring the country could advance the judge’s career.

This may well be fanciful. But one reason to think it’s not is that, even now, in what will presumably turn out to be the primordial days of AI, we can start the process of finding out! My homework assignment for AI geniuses with too much time on their hands: Design a program that scours the news around the world and lists things it deems violations of one particular precept of international law—the ban on transborder aggression. Just list all the cases where one country uses ground forces or missiles or drones or whatever to commit acts of violence in another country. I’m guessing this is doable with a pretty high level of accuracy.

Now, strictly speaking, not all of these acts of violence would violate international law. If they’re conducted in self-defense, or with the permission of the government of the country in question, that’s different. And, obviously, with time you’d want your AI to take such things into account. But for starters let’s keep the computer’s job simple: just note all the times when governments orchestrate violence beyond their borders. Then at least we’ll have, in the resulting list, a clearer picture of who started what, especially along fraught borders where strikes and counter-strikes are common.
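The homework assignment above could start very crudely indeed. Here is a deliberately naive first pass: scan headlines for one country using force inside another and keep a running ledger. The country names and headlines are made up for illustration, and the keyword matching stands in for the real language understanding an actual system would need.

```python
import re

# Hypothetical states and sample headlines, invented for illustration.
COUNTRIES = ["Abland", "Bexia", "Cordia"]
FORCE_WORDS = r"airstrike|drone strike|missile|shelling|ground incursion"

def flag_transborder_force(headline):
    """Return (actor, target) if the headline describes one listed
    country using force involving another; otherwise None."""
    if not re.search(FORCE_WORDS, headline, re.IGNORECASE):
        return None
    # Crude heuristic: take the first country mentioned as the actor,
    # the second as the target. A real system would need actual parsing.
    mentioned = sorted((c for c in COUNTRIES if c in headline),
                       key=headline.index)
    if len(mentioned) >= 2:
        return (mentioned[0], mentioned[1])
    return None

headlines = [
    "Abland launches airstrike on depot inside Bexia",
    "Cordia holds elections amid economic turmoil",
    "Bexia missile hits border town in Cordia",
]

# The resulting ledger: who used force where, compiled without a human editor.
ledger = [hit for h in headlines if (hit := flag_transborder_force(h))]
for actor, target in ledger:
    print(f"{actor} -> force used in {target}")
```

Even this toy version makes the point: the list is compiled by the same rule for every country, which is exactly the property no national press corps can promise.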

Of course, we could in principle have human beings compile such a list. But which human beings? Russell’s whole point is that there’s basically no one who can, with complete confidence, be trusted to do that job.  

Certainly not the people who run our media. Last year the US launched an estimated 5,425 airstrikes—drone strikes plus strikes by piloted aircraft—in four countries (and that’s just the main four countries, not all of them). That’s 15 airstrikes per day. How many days did you read about one of them? Can you even name the four countries? If someone had launched a single airstrike on an American town, don’t you think you’d have read about it? Apparently our interest in airstrikes is asymmetrical.

Russell wrote, about the runup to World War I: “Men of learning, who should be accustomed to the pursuit of truth in their daily work, might have attempted, at this time, to make themselves the mouthpiece of truth, to see what was false on their own side, what was valid on the side of their enemies.” Alas, “Allegiance to country has swept away allegiance to truth. Thought has become the slave of instinct, not its master.”

Yes, that’s what thought tends to do—not surprisingly when you reflect on the fact that we were created by natural selection. So maybe we should turn some of the thinking over to machines that weren’t created by natural selection. Even at this early stage of AI’s evolution computers might, via the objective assembly of lists, make people a bit more likely to see what is “false on their own side” and what is “valid on the side of their enemies.” And that would be a start.

Illustration by Nikita Petrov.
