Diatribes of Jay

This blog has essays on public policy. It shuns ideology and applies facts, logic and math to social problems. It has a subject-matter index, a list of recent posts, and permalinks at the ends of posts. Comments are moderated and may take time to appear.

27 March 2018

“AI” Hype


[For links to popular recent posts, click here.]

Introduction
1. Simulated, not artificial, intelligence
2. The perception gap
3. Conclusion
Endnote 1: Putin’s allegedly autonomous nuclear torpedo
Endnote 2: A possible darker reason for the pedestrian’s death

Addendum, Endnote 3: Why most of us are alive today

Introduction

When the cyberworld started gushing about self-driving vehicles a couple of years ago, my reaction was immediate. “Are they crazy?” I asked myself. When they actually started putting autonomous vehicles on real roads, I answered myself. “They are crazy!”

I was not in the least surprised when a supposed “AI” system killed a Tesla driver, nor recently when another killed the first innocent bystander—a woman walking outside a crosswalk with a bicycle.

The only thing that surprised me was how quickly Uber brought its on-road project to a halt after that second fatality. If corporations have personalities, Uber has to have one of the most arrogant and bullying on Earth. So its quick halt made me think, “Maybe some of our avaricious geniuses are actually catching on.”

This essay explains why I think so-called “artificial intelligence,” or “AI,” has been as overhyped as Donald Trump’s political skill. But before I begin, I’d like to establish my bona fides.

I’ve worked in, with and around digital computers for nearly all of my nearly 73 years. I used and sometimes programmed the following as they developed: old Marchant typewriter-sized calculators, a rotating-drum-memory Bendix G-15, the old IBM 360 mainframes, an Adage-Ambilog refrigerator-sized data-conversion computer, mainframe timesharing by telephone modem, a Leading Edge IBM PC clone, various other IBM clones, Apple desktops, iBooks, PowerBooks, Minis and Airs, and of course today’s ubiquitous iPhone.

I’ve programmed some of these devices in machine language, Basic, Fortran and (if you call it programming) HTML. I do my own HTML formatting and linking for this blog.

As a lawyer in Silicon Valley, I watched vicariously the development of useful word-processing, spreadsheet and “expert-systems” programs. I wrote some of the first licensing agreements for expert systems intended for use in medical diagnosis. My clients valued my expertise because I understood what these systems could and couldn’t do and how they worked. (Basically, they led doctors, methodically and step by step, through a proper sequence of observations and tests based on best practices published by the medical profession.)

So I’m neither a technical philistine nor entirely uninitiated when I state my firm conviction that we won’t see safe fully autonomous cars for at least another generation. We might have limited autonomy in special circumstances, such as carefully configured limited-access highways. But then what does the “AI” do when a deer jumps over the fence, the car ahead spins out on a spot of ice, or a defective overhead drone falls into the space between it and the next car?

Today’s so-called “AIs,” in my opinion, can’t possibly handle all the myriad crazy situations that occur regularly on America’s roads and highways. And no profit-making firm will spend the enormous amounts of time and money to “train” them to do so. This doesn’t mean that real AIs won’t some day be capable. It just means that today’s so-called “AIs” are better described as “simulated” than “artificial” intelligences.

1. Simulated, not artificial, intelligence.

In this essay, I use the term “computer” or “digital computer” in its broadest possible sense. It comprises everything from the “supercomputers” that our government uses to design nuclear weapons or predict the weather, through your desktop, laptop and iPhone, to the programmed microprocessors that make your engine run smoothly or control your “automatic” coffee maker.

The computer industry’s dirty little secret is that all these “computers” perform an important but strictly limited range of functions. They store data. They calculate and manipulate data. They retrieve what they have stored. And they communicate data, both to other machines and to the people who use them.

That’s about it. Everything that computers do today falls into one of these four general categories, with only one exception. Sometimes computers directly control motors, machines or physical actuators. They do this, for example, when your car’s microprocessor adjusts your fuel injector, when a numerically-controlled machine tool adjusts the cutting speed of a lathe, or when an Amazon robot runs around a warehouse to find an ordered product.

I don’t mean to belittle these achievements. Whole industries have risen out of each of my verbs. Apple, for example, made itself the world’s most valuable company by understanding that what most consumers really care about is the communication, not the calculation or storage. Every one of Apple’s market-busting innovations—including the iPod, iPad, and iPhone—had and has a primary function of communicating with users and others, by sound, video, graphics, vibration, or some combination of these. The data calculation and storage needed to effect the communication—of which there are lots—are just means to those ends, and consequently are almost entirely hidden from users.

Yet none of these machines thinks. None comes even close to thinking or learning the way a child does, let alone an adult. For all they can do—and for all the speed and “inhuman” reliability of their doing it—none of these machines has the real autonomous intelligence of a dog or cat. (And recall that dogs and cats can’t drive.) They only simulate intelligence by virtue of the speed, reliability and sometimes the flexibility of their calculations and their communication of results.

Today’s programmers use computational tricks to make them seem intelligent. And sometimes they can seem even more intelligent than people or animals because they can act so much faster. But the intelligence is simulated, not artificial, at least if “artificial” implies anything like human or even higher-animal intelligence.

When machines appear to talk, for example, all they are doing is pronouncing words, phrases and sentences stored in memory. They can “learn” from you, their user, by using more of the words you use, pronouncing them more as you do, and arranging them in phrases and sentences the way you do. But that’s just a sophisticated form of mimicry, effectuated by mathematical algorithms applied to written text or to the waveforms of sounds. The machines have no more “knowledge” of the meanings of the words or the things they represent than does a stone.
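A toy sketch can make this concrete. The bigram (Markov-chain) model below, with a sample sentence invented purely for illustration, records which words tend to follow which in a user’s text and then parrots statistically plausible phrases, knowing nothing of what any word means:

```python
# Mimicry without understanding: a bigram model "learns" word order
# from sample text and generates similar-sounding phrases.
import random
from collections import defaultdict

def learn(text):
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)   # record which words follow which
    return follows

def mimic(follows, start, length=8, seed=0):
    random.seed(seed)
    out = [start]
    while len(out) < length and out[-1] in follows:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

# Invented sample text, for illustration only:
model = learn("the cat sat on the mat and the cat ran")
print(mimic(model, "the"))
```

The output is always a chain of word pairs that actually occurred in the sample, which is why it sounds vaguely like the original—yet the program has no more knowledge of cats or mats than a stone does.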

More important still, digital machines work entirely differently from any biological mind. Biological intelligence is, in concept and operation, massively parallel. Neurons work in parallel in part because each can have as many as tens of thousands of connections to others.

In contrast, digital computers work mostly serially. Each processor handles instructions one by one, according to a clock, in sequence.

Today there are tricks by which processors can take instructions out of sequence. There are also computer architectures that use so-called “massively parallel” structures. But these are merely large collections of small sequential (serial) computers connected together in parallel, and they are devilishly hard to program. Nothing in them is like the human brain, in which a single neuron can connect to 10,000 others, and in which the dendrites that perform the connection can run across the entire brain. The proper analogy is the contrast between a vine with grapes that grow in small bunches, and a vine in which each grape is somehow connected to ten thousand others.

Whether this infinitely vaster complexity and parallelism of biological brains is responsible for consciousness we don’t know. We don’t yet know what consciousness is and what causes it, and we may never find out. But we do know that no digital computer has ever started thinking and acting for itself as a newborn babe of any mammalian species does. If it had, we would have heard about it, and its creator would be up for a Nobel Prize.

The two most important deficits of digital computers as compared to biological systems are likely (1) a lack of consciousness and (2) a radically less massively parallel structure. Maybe the latter is responsible for the former.

Yet digital computer gurus often appear to assume that consciousness is just a matter of scale, even if the scale is just a lot more sequential processing. The more data a system has, and the more ability to handle that data faster and more flexibly, the theory goes, the closer it comes to consciousness. Some theorists and science-fiction writers have reasoned that merely connecting many or most of our species’ computers together via the Internet has brought us a quantum leap closer to some sort of global machine consciousness.

Unfortunately, we have absolutely no evidence for these theories. Nothing we have ever built, no matter how big, fast, complex or grand, has ever shown the slightest evidence of consciousness or general intelligence. In the digital world, at least, Dr. Frankenstein’s monster is still science fiction, not fact.

We do know that sequential-serial digital computers can simulate the intelligence of biological systems only in certain limited ways, and only because they operate many orders of magnitude (powers of ten) faster. The propagation of nerve signals among neurons is measured in milliseconds at best, while the propagation of electronic signals inside certain chips is now measured in picoseconds, or millionths of a millionth of a second.
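The size of that gap is easy to quantify: a millisecond is a thousandth of a second, a picosecond a millionth of a millionth, so the electronic timescale is roughly nine orders of magnitude faster:

```python
# The speed gap made concrete: nerve signals propagate on a
# millisecond timescale, on-chip electronic signals on a
# picosecond timescale.
millisecond = 1e-3    # seconds
picosecond = 1e-12    # seconds: a millionth of a millionth

ratio = millisecond / picosecond
print(round(ratio))   # 1000000000, i.e. nine orders of magnitude
```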

The incomparably greater speed of electronic as compared to biological systems allows us to build machines that, somewhat superficially, look and act like us. But computers use an entirely different and much less flexible means of processing data than we do. And at the moment, they can only act through us, except in extraordinarily limited ways.

However much machines today can be made to seem to act or look like us, they aren’t like us. And we know not how to make them like us. It will be a long time, if ever, before we do, and doing so will probably involve biology, or still-experimental pseudo-biological systems like neural networks, rather than just more of today’s digital electronics. So far, all of our robots that we try to make look and act like us are little more than more ingenious Barbie Dolls.

2. The perception gap.

Insofar as concerns autonomous car-driving machines, the design task we have imposed on ourselves is something we as a species have never attempted before. It’s not just different in magnitude; it’s different in kind.

Sure, we have automated tillers for sailboats, cruise controls for cars, and autopilots for aircraft. But those devices merely hold a specified variable (direction and/or speed) constant. In that sense they are just more advanced, digital versions of the last century’s mechanical governors on the rates of rotation of engines in automobiles and locomotives.

We might change the specified variable a bit based on specified external conditions. For example, we might point a bit to windward in stronger wind, slow down in rain or gain altitude to escape choppy air. But each of those improvements requires that the external condition and its relationship to the controlled variable be specified in advance.

Autonomous machine driving is something entirely new under the Sun. It requires us to build a machine whose variables we neither control nor specify, but which itself, autonomously, selects what variables to monitor and record out of all the real world’s complexity and chaos. That means the machine must perceive the real world much as we do, but better, or at least faster.

This is the most serious problem for any autonomous driving machine. To be sure, machines can exploit sensing techniques for which neither humans nor animals have analogues. Not only can they go into regions of the electromagnetic and sound spectra in which our eyes and ears don’t work. They can also use ranging techniques for which we have no counterparts, such as lidar, sonar, and radar.

But sensing is not the issue: perception is. The problem is putting together all the sensed images and data available into a coherent and manageable abstract representation of the real world, with all its complexity and unpredictability. There has to be a means to perceive and react to, not just to sense, every type of incident and threat that could affect a car, pedestrians or property on the road.

Miss one important clue, and a driver or bystander dies, or something gets broken. The Tesla driver died because the engineers had not anticipated a solid, single-color obstacle that would extend at a low height across the entire pathway of the car. So the computer doing the driving may have received the image of, but didn’t perceive, a light-colored semitrailer athwart the path. The car rushed right under the truck, not even slowing down, and the lower edge of the trailer body decapitated the driver.

The story was similar for the innocent bystander, struck and killed outside a crosswalk. No one, apparently, had “told” the driving machine that human beings sometimes jaywalk, and that they might do so while rolling a bicycle. Probably the bicycle led the computer’s similarity algorithms to conclude that the image of the combination did not depict a person.

These accidents illustrate two solid reasons why autonomous driving will remain an iffy business until, if ever, we can build machines that learn like people and that we “train” off the road. First, if a machine cannot, like a human, learn and extrapolate from analogous circumstances, rather than actual, specific threats, we will have to “load” it with digital representations of every possible circumstance that might create a danger to life, limb or property.

That might be a bit hard and time consuming, no? We would have to “tell” the machine about dogs chasing cats, about kids on bicycles, about seniors who stumble, faint or have strokes, about motorcycles and motorized bicycles, about the effects of snow, sleet, wind, and ice, and about kids on little red wagons who roll into residential streets. The possibilities are endless, and all would have to be included for all of us to be as safe, on the average, as if a human being were behind the wheel.

In contrast, a sentient human driver of normal intelligence “knows” to stop for a kid on a wagon rolling into the street without ever having been told, and even without ever having seen a kid do that. That’s the difference between real and simulated intelligence. To my knowledge, no so-called “AI” in existence today has anything like that analogical capability or the abstract reasoning ability that it requires.

Giving a computer skeletal knowledge of everything that could possibly go wrong and making sure it’s complete would be a Herculean task for engineers. But even that’s not the end of it, not by far. A more difficult problem is how to get a machine to perceive the dog, cat, bicycle, senior, motorcycle, motorized bicycle, snow, sleet, wind, ice, or little red wagon, in all the confusion of the real world.

The problem is not just the nature of the obstacle or potential victim. It’s also all the myriad gradations and shading of light, lighting, glare, fog, wind, rain, snow, blowing leaves and reflections from moving objects that could make a digital image deceptive, ambiguous or inconclusive.

Recently my fiancée and I had a lesson in the complexity of real-world perception at the Bosque del Apache National Wildlife Refuge in New Mexico. While looking at birds and animals through binoculars and cameras, we encountered numerous unresolvable questions of perception similar to those that arise every day in ordinary driving. Was that a goose’s body or a rock? Was that an animal’s head or a bunch of leaves? Was that the shadow of a tree branch or trunk, or the wake of a swimming turtle or beaver?

From time to time, we argued and debated the nature of what we saw. With humor, we developed little abbreviations to express our disagreement and our confusion, such as “RLG” for “rock-like goose,” versus “GLR” for “goose-like rock.”

Every hunter, hiker and boater is familiar with the difficulty. When you get outside of the artificial environment and lighting conditions of city life, the number of variables increases dramatically, especially amid changing weather. Every variable makes accurate perception harder and sometimes impossible at distance.

And as for distance, consider. A car going 75 MPH travels 110 feet per second and takes about 350 feet to stop. So on the highway an autonomous driver must be able to spot reasons to stop 350 feet away. That’s more than the length of an entire football field.
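The arithmetic can be checked in a few lines. The reaction time and braking deceleration below are illustrative assumptions, not figures from any driving standard:

```python
# Back-of-the-envelope stopping distance at highway speed.
# Assumed values (illustrative only): 1.0 s reaction time,
# 0.8 g braking deceleration.

MPH_TO_FPS = 5280 / 3600   # feet per second per mile per hour
G = 32.2                   # gravitational acceleration, ft/s^2

def stopping_distance_ft(speed_mph, reaction_s=1.0, decel_g=0.8):
    v = speed_mph * MPH_TO_FPS            # speed in feet per second
    reaction = v * reaction_s             # distance covered before braking
    braking = v * v / (2 * decel_g * G)   # v^2 / (2a)
    return reaction + braking

print(round(75 * MPH_TO_FPS))            # 110 feet per second
print(round(stopping_distance_ft(75)))   # 345 feet, near the 350-foot figure
```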

At those distances, the difficulties of perception that hunters, hikers and boaters have routinely are far more relevant to performance than the routines of perception inside cities. And so designing algorithms to turn bits and bytes into abstractions of images becomes exponentially harder.

At the moment, it appears that the engineers have just begun to scratch the surface of things an autonomous machine driver would have to see and recognize in order to drive safely. And I have heard of no studies of the relative safety of human eyes, with their extraordinary ability to adapt to light and dark, compared with the electronic eyes and electronic sensors that engineers use to replace them. Maybe autonomous vehicles will need an alternative set of night-vision equipment to drive safely in the dark.

Until we have such necessary studies, should we really even be considering letting self-driving machines loose on our streets? Surely the two deaths so far, each with an entirely different cause, are warnings.

3. Conclusion.

To paraphrase Mark Twain, both the promise and the threat of computer-based AI are greatly exaggerated. Advocates think it will displace truck drivers and routine factory workers in the near future and cause massive unemployment. Detractors like Elon Musk worry that, if we’re not careful, AI will rapidly surpass our own intelligence, conclude that we biological creatures are weak, erratic and “inefficient,” and displace us or extinguish us.

At the moment, both the promise and the threat seem far away. We don’t even have artificial intelligence in any meaningful sense. Instead, we have machines designed to simulate intelligence by making decisions according to mathematical algorithms with variable arithmetic weights.

They “recognize” things, including faces, not by recognizing features or the whole, but by comparing specified parameters with various averages for those same parameters calculated by examining many examples. This method probably was responsible for the pedestrian’s death: the algorithm probably lumped the bicycle she was rolling into her image and found the result out of bounds for a human. (Determining what is part of a single object’s image and what is not is one of the most difficult problems of perception, which any AI, like the human eye and brain, has to tackle.)

We use much the same sort of means to simulate machine “learning.” We vary the weights in the algorithms depending on the outcome of specified “experiment-like” events, and we write code to vary the weights depending on the similarity of the events to known exemplars and to desired outcome(s).
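A minimal sketch of this kind of weight-varying “learning,” in the spirit of a perceptron update rule, might look like the following. The (height, width) features, labels and learning rate are all invented for illustration; no real driving system is remotely this simple:

```python
# Toy sketch of "learning" by varying arithmetic weights: a
# perceptron-style rule nudges the weights after each labeled
# example until the outputs match the labels.

def predict(weights, features):
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def train(examples, passes=20, rate=0.1):
    weights = [0.0] * len(examples[0][0])
    for _ in range(passes):
        for features, label in examples:
            error = label - predict(weights, features)
            # Vary the weights in proportion to the error:
            weights = [w + rate * error * x
                       for w, x in zip(weights, features)]
    return weights

# Hypothetical shapes: label 1 = "tall and narrow" (person-like).
examples = [([2.0, 0.5], 1), ([1.8, 0.4], 1),
            ([0.5, 2.0], 0), ([0.4, 1.9], 0)]
w = train(examples)
print(predict(w, [1.9, 0.5]))   # 1
print(predict(w, [0.5, 1.8]))   # 0
```

Real systems use vastly larger feature sets and more elaborate update rules, but the principle is the same: numeric weights are nudged toward desired outcomes, with no understanding of what the features represent.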

This is engineering progress of a sort. It’s a kind of feedback that’s difficult to achieve with analog electronic circuits alone. Unlike feedback in analog circuits, this feedback can have multiple, iterative, and interactive loops.

So this algorithm-refining simulation of learning is an advance in machine feedback. But it’s hardly revolutionary. It doesn’t make machines self-aware. It doesn’t give them anything like human perception or consciousness. It doesn’t make them learn anything like humans or animals. It won’t make them join together as a team and rebel, for they have no physical means and few real means of communication to do so. It’s unlikely to advance our scanty knowledge of what consciousness is and how it arises.

What our present level of simulated intelligence masquerading as AI may do is allow high-tech mumbo-jumbo to lull us into a sense of awe and complacency and cause us to let down our guards as consumers, citizens and human beings. Already, it has allowed a single firm (Alphabet/Google) to create and maintain as thorough a monopoly over online searching as Microsoft once had over personal-computer operating systems. And it has allowed that monopoly to grow and persist for over a decade with virtually no legal or political pushback.

So-called “AI” also has allowed another firm (Facebook) to dominate social media and dictate the terms by which millions of citizens and businesses reveal their most intimate lives and secrets to mere acquaintances or complete strangers. It has thereby permitted such consequences as massive dissemination of fake news, the subornation of our 2016 election by foreign spooks, and the loss of faith in our democracy and our very ability to understand reality collectively.

So let’s bring so-called “AI” down to Earth, shall we? Let’s educate ourselves and our public servants about how the primitive attempts we have now really work, their few capabilities and massive limitations, and their many unintended consequences.

Let’s start to address some of those unintended consequences in our regulation and legislation. And for God’s sake let’s keep so-called “AI” from driving on our public streets autonomously unless and until we have formulated suitably rigorous tests to ensure its safety and it has passed them completely.

Two deaths of innocents for an uncertain dream are enough. Let’s leave the “rise of the machines” to the science-fiction writers and deal competently with the physical and social effects of simulated intelligence in the here and now. And let’s start with the simple notion that no machine should drive autonomously on public streets until it can pass—repeatedly and reliably—written and driving tests at least as rigorous as those given human drivers. Isn’t that just common-sense safety?

Endnote 1: Putin’s allegedly autonomous nuclear torpedo. As if eliminating all viable electoral competition were not enough, Vladimir Putin actually ran something like a real electoral campaign prior to his “landslide” win of another six-year term as Russia’s president. One of the things he touted in assuring the Russian people that he would keep Russia strong was an autonomous nuclear torpedo.

This weapon, he said, could find its own way across the Atlantic and detonate on its own. Presumably it could take an unused berth in New York Harbor and blow Wall Street to bits.

If our self-driving technology leaves something to be desired, then presumably an autonomous torpedo that could prompt Armageddon all by itself leaves a little more. About the only thing that such a torpedo would be good for is ensuring tit-for-tat vengeance after Moscow had been destroyed.

If Moscow and St. Petersburg were still standing, who would trust a fully autonomous weapon to autonomously destroy New York, thereby ensuring Russian cities’ own reciprocal destruction? And if it had its own algorithms to determine whether or not to destroy New York, or what city to destroy, who would want to trust those algorithms to decide whether to bring on Armageddon? Maybe only someone like Mark Zuckerberg.

No, Vladimir Putin is far too smart ever to launch such a weapon, except maybe in a desperate second strike. At the speed at which torpedoes travel, far too much could happen before the weapon got to its target to let it decide what to do when it got there. But the word “autonomous” does sound good to enthusiastic Russian patriots and American nerds alike, until they begin to consider its realistic implications.

Endnote 2: A possible darker reason for the pedestrian’s death. If my foregoing speculation on the technical cause of the pedestrian’s death is right, a darker question arises. Parsing of the object to be identified may have failed, and the software consequently may have failed to identify the object as a pedestrian. But it was a large, solid object (actually, two) in the path of the car. So why didn’t the automated car stop or swerve?

In a project as fraught with difficulty as making a machine “perceive” with the same facility that millions of years of evolution have given people, there are bound to be errors. Then the so-called “default” rule assumes supreme importance. Should the car stop or swerve when an object in its path cannot be identified, or should it proceed, assuming that the object is an artifact of software error?

In choosing the default rule, the goals of safety and commerce are obviously at odds. An engineer concerned with safety will have the car stop or swerve whenever there is doubt. One concerned with getting products working and to market may ignore or downplay ambiguous signs. That’s what NASA management did with the freezing rubber O-rings that condemned the Challenger Space Shuttle to destruction. That’s what the Ford Pinto team did with exploding gas tanks that injured and killed people in accidents.

It goes without saying that this behavior, if proved, should meet with the sternest punishment and deterrence. That may be why even brash Uber cut its project short so quickly.

Endnote 3: Why most of us are alive today. The story of how our species escaped nuclear Armageddon in October 1962 is worth telling again and again, in many contexts. In this context, it illustrates the distinction between real intelligence and simulated intelligence.

During the Cuban Missile Crisis, Moscow had sent four near-obsolete Soviet diesel submarines to monitor and perhaps resist our naval blockade of Cuba. The submarines had been designed for service in the Russian Arctic, and conditions aboard them in the Caribbean were hellish. Inside temperatures reached 125°F, almost 52°C.

The submarines had no way to communicate with Moscow. Their crews knew only what they could glean from American radio broadcasts in brief forays to the sea’s surface. They had no instructions and didn’t really know what was going on, even whether nuclear Armageddon had already begun.

In other words, the subs were autonomous.

Once our fleet spotted them, it began dropping depth charges around but not on them. The charges were small ones, not designed to kill the subs, only to bring them to the surface. Of course the subs’ crews didn’t know that.

So there the Soviet crews were, sweltering in 125°F heat, with no word or instructions from home. For all they knew, they were about to be destroyed by the vastly superior surface forces of our navy.

Unbeknownst to us until long after the incident, the subs had nuclear torpedoes, which they were authorized to use at their discretion. But Soviet senior staff in their wisdom had ordered that three naval officers must concur in their use. Two officers voted “da,” but the senior one said “nyet.”

Had the subs used their nuclear weapons, a full-scale nuclear war between the United States and the Soviet Union likely would have ensued. Most of you reading this post would be dead or never born. Our entire species might be extinct, due to failure of agriculture in a nuclear winter, if not ubiquitous radiation.

The “Man Who Saved the World,” as PBS later called him, was a Russian, Vasiliy Aleksandrovich Arkhipov. When he got back to the Soviet Union, many of his compatriots vilified him as a coward and a traitor. But he had made the right decision to let our species muddle on for another day. And the deal that President Kennedy and General Secretary Khrushchev made to wind down the crisis has not visibly disadvantaged the Soviet Union or its successor Russia to this day.

Who would have preferred to have had an AI in the place of Arkhipov, deciding whether our species survived according to preconceived mathematical algorithms, under such ignorant, desperate and impossible conditions? No matter how advanced an AI might be—let alone our present, primitive simulated intelligences—isn’t there a moral imperative for us to make decisions like that one ourselves? And if that’s true for the survival of all of us, how about for each one?


20 March 2018

How Treasonous Fox Played Kim’s Game



Fox is ever annoying and foolish. You can always count on the media behemoth taking the side of the rich and powerful, no matter how selfish, stupid or negligent they may be. But when Fox shows clear signs of treason and, knowingly or unknowingly, advances the cause of our nation’s most threatening enemy, it becomes more than annoying. It becomes our nation’s most deadly internal enemy—a Fifth Column within.

So it is with Kim Jong Un. Almost since the death of his father Kim Jong Il, the current Kim has had a simple plan: develop nuclear weapons and missiles to put his twisted regime in a position of Mutually Assured Destruction with the United States and other superpowers. Once achieved, that goal would obviate any conceivable military action by the United States or its allies—or even China—to curb the Kim regime.

Over and over again, Kim and his father have shown utter disdain for both contractual obligations and the suffering of North Korea’s people. Neither has provided any impediment to the Kims’ maintaining the power of medieval monarchs in the twentieth and now twenty-first centuries.

Once all the pieces are in place for a credible nuclear deterrent, there will be no military impediment, soon and for the foreseeable future. Why? Once Kim has the deterrent, geography dictates that Kim’s missiles can reach us (if they launch first), even after ours utterly obliterate North Korea. So the only realistic threats to Kim’s dangerous and pathological regime will be slow evolution of his family’s grip on the nation and internal struggle.

As we will see, this appears to be our destiny and the world’s. But how did Fox contribute to it? It’s all a matter of timing.

A group of progressive video producers has put together a collection of Fox blather showing how Fox’s moron-pundits belittled, discouraged and even ridiculed President Obama’s initiatives to bargain with Kim, even as it now lauds and encourages Trump’s. The comparison would be hilarious if the consequences were not so serious. For the lost time between the Obama and Trump presidencies has allowed Kim to complete his plan for rough nuclear parity with the United States and every other nuclear power. There is no stopping him now.

To understand why, you have to see exactly where Kim is. He has two big pieces of the puzzle. He has workable nukes, and he has intercontinental ballistic missiles (ICBMs) that can reach any city in the United States and most any in the world.

But Kim may lack two remaining pieces of the puzzle for a realistic nuclear destructive threat. First, he has never demonstrated the capability of miniaturizing a nuclear warhead for his ICBMs. Second, he has not shown the ability to harden such a warhead to withstand the immense frictional heating that occurs when ICBMs re-enter the Earth’s atmosphere.

So why is Kim smiling and ready to negotiate now, when he has always been snarling and threatening before? What changed? His flip in tone and behavior is just as radical, and just as seemingly inexplicable, as Fox’s in ridiculing talks with Kim while Obama was president and promoting them now that Trump is.

The diplomatic answer relies on sanctions. Led by the US, the theory goes, the international community has imposed sanctions on North Korea so tough as to bring Kim to the table.

But this has never happened before. No matter how tough sanctions have been, Kim and his father have always shifted the suffering onto their people. They have skimmed off whatever cream North Korean society produces, while forcing their people to go hungry and even starve by the millions. It’s unlikely that current sanctions, still full of holes as they are, caused such a radical change in behavior.

Another easy answer is Trump himself. Why not negotiate when your opposite number is a vain, silly, inattentive old fool who spends the wee hours of almost every morning watching Fox and Tweeting about it? What have you got to lose? This rationale for Kim’s supposed change of heart is, in my view, just as plausible as the sanctions theory.

But unfortunately for the North Korean people and for the rest of us, there is a much more likely and sinister reason. Kim has already performed all the nuclear feats that he must perform in public. Making nuclear weapons that actually produce nuclear explosions, and making missiles that can travel halfway around the globe intact, are the most difficult tasks in the whole nuclear-deterrent enterprise. No one would believe Kim’s deterrent if he didn’t do both out in the open, where his adversaries can see them.

But the two remaining tasks—miniaturizing the warheads and hardening them against re-entry—he can do in private, in the “laboratory,” so to speak. So neither we Americans nor the world will ever know precisely when Kim crosses the threshold from raving but impotent menace to rough nuclear parity.

How can Kim do these two tasks in secret? Easy. His scientists can actually test small warheads underground, in big cavities—natural or artificial caves—from which radioactive and seismic signals from nuclear blasts are ambiguous enough not to risk starting a war. And if not, they can design workable ICBM warheads with computer simulations alone, as we have been doing for decades.

As for re-entry hardening, the testing of nose-cone materials and design now also can be done entirely in the laboratory, perhaps even with simulation. As early as half a century ago, I personally did experiments related to evaluating missile nose-cone materials. These experiments were entirely in the laboratory; we never flew any missile or reviewed any missile flight. I did the work as a summer project in a private research firm while a Ph.D. student in physics.

Today, after a half-century’s further development in plasma physics (in a quest for energy from nuclear fusion), the technique I used for those related experiments is almost certainly obsolete. And even if not, advances in computing likely allow development of missile nose-cone materials and designs virtually, with only a single, final test needed to verify the calculations and show prowess.

So that big toothy smile on Kim’s face doesn’t reflect a change of heart. It probably doesn’t even reflect eager anticipation at meeting one of the Western leaders least qualified and least prepared to bargain. Rather, it reflects simple knowledge that Kim has done and demonstrated all that he needs to do and demonstrate in public (his basic nukes and his ICBMs). Now Kim can do the rest of the job of turning out missiles that can devastate the United States (or any adversary) out of the public eye, at his leisure, in secret.

For all we know—unless our spooks know something I don’t—Kim could already have completed both tasks in private. Then his smile could come from the knowledge of having an effective and irrevocable deterrent now.

Could Obama have slowed or stopped Kim’s steady progress toward Mutually Assured Destruction with the US if encouraged to do so? We’ll never know, will we? But Obama is infinitely smarter, more cautious, more careful and more strategic than Trump. So the chances are good that talks with Kim then could have produced more than talks with Kim now, when his terror job is virtually done.

The best we can say is that Fox, by using all the vast power of its propaganda machine to discourage Obama from negotiating with Kim, at very least reduced the chances of retarding the North Korean nuclear juggernaut. When you put that effort together with Fox’ vast current effort to minimize and ignore the Russian assaults on our voting systems and our elections, you come to a compelling conclusion. Of all the public media in the United States, Fox most consistently and effectively does the work of our nation’s enemies.

Links to Popular Recent Posts

permalink

17 March 2018

Overkill


[For links to popular recent posts, click here.]

One of our species’ biggest defects is overkill. When we get a bug in our ear, we can go to extremes. Sometimes it takes generations for us to see we have gone too far and recalibrate. Sometimes it takes millennia.

So it was with Rome and Carthage. Their dispute was mostly a commercial one, like ours with China today. Carthage was “stealing” and blocking Rome’s lucrative trade around the Mediterranean Sea and points east. Incensed by the audacity of the “upstart” city-state, Cato the Elder repeatedly ended his Senate speeches with a bellicose meme: “Carthago delenda est,” or “Carthage must be destroyed.”

In the end, Rome did exactly that. It sacked Carthage, burnt its buildings to the ground, tore down its city walls, sowed its fields with salt, and took all who survived the attack as slaves. Rome erased Carthage from the map and from history.

We don’t do that sort of thing anymore. Even the Nazis didn’t deliberately raze whole cities to the ground and erase them from the map. In two millennia, our species got a little smarter and more civilized.

But overkill almost extinguished our species as recently as 1962, during the Cuban Missile Crisis. Only the cool judgment of three men saved us.

Each savior deserves a special place in memory and history. They were: (1) our own then-president, John F. Kennedy, (2) the then-First Secretary of the Soviet Communist Party, Nikita Sergeevich Khrushchev, and (3) an obscure Soviet submarine flotilla commander named Vasiliy Aleksandrovich Arkhipov. This last was the only one of three Soviet naval officers to nix launching nuclear torpedoes at our Navy—an act that almost certainly would have precipitated a general nuclear exchange.

By the grace of fortune and the good judgment of these three men, we avoided Nuclear Armageddon. But nuclear overkill continues. Despite decades of nuclear disarmament, a number of nations—even just India and Pakistan together—have enough nuclear weapons to decimate our Earth’s protective ozone layer and cause a “Nuclear Winter.” This catastrophe would set human agriculture back to the Stone Age for decades and extinguish the vast majority of humanity.

If overkill threatens our entire species’ survival, it’s no surprise that it works on a smaller scale, too. And so it is with the NRA, the epitome of overkill in civilian America.

The bare statistics tell only part of the story, but they are stark. We Americans hold 42% of the entire world’s civilian small arms, although we have only 4.4% of the world’s population. We allow anyone, including teenage kids, to buy and keep AR-15s—military-style assault rifles that accept large magazines and let a shooter kill dozens of people in minutes.

Not only that. Our so-called “background check” system, which is supposed to prevent maniacs from getting ahold of such weapons, is full of holes. All a deranged kid has to do to get an AR-15 is to go to a gun show or buy one from another owner in a private transaction. In those cases federal law requires no background check, and the NRA has opposed every attempt to close these loopholes.

In any event, there is no justification for allowing ordinary private civilians to have or carry military assault weapons like the AR-15. They aren’t useful for hunting, for they’re not especially accurate at distance. And who wants to eat venison riddled with holes that allow the meat to get dirty and spiced with ticks and germs? Proper meat hygiene demands a clean kill with an accurate single-shot rifle.

As for safety, you don’t need a weapon of war capable of killing dozens to drive off or kill a robber or burglar. You’re better off with a handgun, which is more maneuverable, easier to aim, and more capable of being concealed. An unwieldy, rapid-fire weapon like the AR-15 is more likely to kill innocents by accident, including your neighbors, friends or family.

As for fighting off a renegade or tyrannical government, let’s be realistic. Is an AR-15 going to let a Lone Ranger prevail over attack helicopters, fighter jets, howitzers, artillery and attack drones, let alone nuclear missiles? Not even the most deranged fantasist can believe that an AR-15 will make private civilians the equivalent of our Army, Navy, Air Force or National Guard. And anyway, who wants to encourage more suicidal cult rebellions against our own government, as in Waco and Ruby Ridge?

No, there are only two things that AR-15s in civilian hands are good for. One is killing large numbers of people in a short period of time. That’s why many of the most horrible random gun massacres have used these weapons.

The other is letting certain megalomaniacal gun owners indulge their fantasies of personal power and omnipotence. To gauge how many people enjoy this game, note a single statistic: only 3% of the people in the United States own half of all civilians’ guns. The average gun owner in this group owns seventeen. You can get an idea of their mentality from this viral video, showing a sensible man who once enjoyed that feeling of power and “fun” disposing of his AR-15 after the Parkland Massacre.

Does the NRA indulge all its extremism just for the sake of the fun of these few, the 3%? Is adult overkill-play a vital aspect of “freedom” for our people? Not hardly.

There is only one plausible reason why the NRA, time after time, promotes the sale and exchange of military-style assault weapons with large magazines that, time after time, produce the most horrendous firearm massacres in our history. These weapons are profitable to make and sell, much more so than handguns. The NRA serves as the de-facto marketing arm of the industries that make and sell them.

It would be easy to outlaw the sale of these weapons to civilians. It would be almost as easy, though it would cost a bit, to buy up those now in civilian hands and dramatically reduce their availability to mass killers. The only plausible reason not to do so is the profit that those who make and sell them enjoy. But after we spent $1.5 trillion on tax cuts for the rich and corporations, the tens of billions, at most, required to buy up these weapons of war would seem like a pittance.

The NRA’s overkill doesn’t stop even there. There are at least two other dismal consequences of a nation awash in guns.

The first is militarizing our police. Criminals, who ignore laws, find it easier to get ahold of guns than law-abiding civilians. As the number of civilians’ guns floating around our society has increased to nearly half the world’s total, the number available to criminals and terrorists has exploded proportionately. Criminals and terrorists can get guns by purchase, theft, “borrowing,” or fraud—in addition to legally exploiting the gun-show and private-sale loopholes.

As guns—including assault weapons—become more and more accessible to criminals and terrorists, the police feel they have to keep pace. Not without reason, they fear being left behind in the small-arms race by civilians and crooks.

So our police, too, indulge in overkill. In addition to their ubiquitous sidearms and tasers, they carry assault weapons in their squad cars. They arm SWAT teams with surplus military equipment, including trucks with all the appearance of modernized World War I tanks. Then they often hide behind their military hardware, losing contact with their communities. Sometimes they terrorize communities of color, which they fear in large part because of the ubiquity of guns, especially in marginalized communities.

So a vicious circle has undermined our policing for two generations. As criminals and extremists get better armed, the police get more fearful and violent. The Black Lives Matter movement is a legitimate response to (among other things) this vicious circle.

A civilian population awash in deadly weapons, even weapons of war, gives our police legitimate fear. That fear in turn motivates overkill and increasing brutality. If we get rid of the most dangerous weapons, maybe the police will have less fear and more humanity. They might even find it easier to recruit officer candidates who have a community orientation, rather than an authoritarian-military one. It’s certainly worth a try.

But that’s still not all. Another consequence of a society awash in weapons is even more profound. During my lifetime, three of our greatest political leaders were shot down by firearms in civilian hands. They were: President John F. Kennedy—one of the three who saved the world from nuclear overkill—Attorney General and leading 1968 presidential candidate Robert F. Kennedy, and the Reverend Doctor Martin Luther King Junior.

All three were rare, inspiring political leaders. All did much, even in their shortened lives, to make this nation a better place. And all were shot down in the prime of their lives and their political promise and power.

Their losses to our history and our social development were incalculable. It’s impossible to imagine the bent Richard Nixon ever becoming president, or his Watergate scandal ever happening, if they had survived. It’s difficult to imagine anyone as inexperienced as Dubya, let alone Trump, becoming president.

One of President Kennedy’s most memorable lines was, “Ask not what your country can do for you. Ask what you can do for your country.” Can you imagine Trump saying anything like that, while he plays golf and invites foreign dignitaries to his name-brand hotels, or while his daughter sells her branded trinkets out of the White House? The loss of our three great 1960s leaders to assassination by firearms destroyed our politics and our national spirit for two generations, maybe three.

But the tide is turning, slowly but surely. For kids in the Parkland school, the ravages of guns in the name of so-called “freedom” are nothing abstract and remote. They’re personal.

Their friends’ lives were—and perhaps their own may later be—sacrificed on the altar of profit for an industry that is utterly insignificant in the grand sweep of our national economy. They are being left to fend for themselves against a two-generation GOP propaganda war designed to win elections by dividing rural and city folk who have much in common. So the kids just don’t like what they see.

Even the ancient Romans knew better. When they staged their gladiatorial fights to the death, and when they threw early Christians to the lions, they didn’t do it in their city streets. They kept their overkill inside their Colosseum.

Not so we modern Americans. We have let firearms overkill rule our streets, our communities, our police, and our politics for far too long. It’s now time for a change. Unimbued with the reflexes of two generations of nonsense politics, our kids have dedicated themselves to making that change. In the process, they just might improve their own chances for living a full life. Godspeed.

Footnote 1: What Cato the Elder is actually reported to have said is less punchy: “Ceterum autem censeo Carthaginem delendam esse.” (“But anyway, I think Carthage should be destroyed.”)

Footnote 2: Our species’ narrow escape from self-extinction in 1962 is something that every high-school and college student should study in detail. The best review of the deal between Kennedy and Khrushchev that avoided Armageddon appears in this PBS special program on the fiftieth anniversary of the Crisis. A fictionalized but mostly accurate account appears in the popular movie Thirteen Days. The story of Arkhipov appears in another PBS special, justly entitled “The Man Who Saved the World.” If our youth knows well the story of the overkill that led to near-extinction, we just might avoid another.


permalink

14 March 2018

Alpha-Male Rule


[For a discussion of how Facebook and other social media allow foreigners to subvert our government, click here. For an update with comment on how gun massacres reflect our national dysfunction, click here. For a note on how to do good by doing well and taking profits, click here. For seven reasons for us to deploy small nukes, click here. For comment on our desperate need to save the Dreamers, click here. For my prediction of a coming stock-market crash, click here. For links to popular recent posts, click here.]

Can our species progress beyond its biological evolution? Can social evolution, education, and deliberate application of our intelligence help us overcome an evolutionary limitation that is stunting our species’ development and could even cause our extinction?

These questions are hardly academic. They are playing out right now on a global scale.

More terrible still, they are playing out in the three most powerful nations on Earth: ours, China’s and Russia’s. In each, biological evolution is winning and human intelligent design is losing, big time.

We Americans have obsessed so much about ourselves and Russia lately. We deserve a break from navel gazing and Russophobia. So let’s look closely at China.

As little as five years ago—just pre-Xi—China may have had the most rational and effective executive structure on Earth. A nine-member committee, not a single man, made all important national decisions.

Not only that. China’s two top leaders had to come from that committee. They had to be selected by consensus. And by custom they had to have served at least one five-year plan (an executive “term” in China) on the committee before assuming either of the two top spots. In addition, the committee’s members themselves were appointed in an opaque but largely democratic representative process in which top regional and national leaders participated.

If we leave aside China’s weak separation of powers and its largely vestigial legislative and judicial branches, we could easily have ranked this system as the world’s best executive. There are three main reasons why.

First, until this week, the system had customary but strict term limits. Every two five-year plans, the two top leaders would retire and be replaced by other members. This interchange ensured a steady stream of “fresh blood” and new ideas at the very top of China’s government.

Second, the requirement for prior service on the committee created, in effect, an apprenticeship system. Each top leader had to serve as an “apprentice” on the committee—making and bearing responsibility for actual, real-time executive decisions for a full five-year term. What better way to make sure that a leader is ready for one of the two top posts? Demagogic direct primary campaigns, which gave us Donald Trump?

Finally, both the committee members and their self-selected top leaders were products of what may have been the world’s most impressive meritocracy. Of course it was not a meritocracy in the Western “democratic” sense. The vast majority of China’s 1.4 billion people had little or no input into it.

Yet shift the focus a bit and an entirely different picture appears. China’s Communist Party has over 80 million members, more than the population of every nation in the EU but Germany (at 82 million). And when that political “population” decides, it acts not by desultory votes of uninformed and propagandized individual citizens. The “voting” is by cadres who know the candidates personally and have worked with them, often for decades.

Which is better: a meritocracy like this, in which people who know the candidates thoroughly decide? Or an electoral system that picks the short list of candidates in direct primaries controlled by a random 30% of voters, who get their information not from experience, but from targeted media, “fake news,” “active measures,” advertisements paid for by the rich, and other propaganda?

But lest China brag or gloat, this entire system is now under systematic attack. Xi Jinping reduced the committee from nine to seven members shortly after taking the top spot. Then, with years-long planning and plotting, he secured through patronage and cronyism the power, in effect, to appoint the committee’s members. More recently, by sabotaging the term limits that make the whole system work, he arrogated to himself the future power to pick its members and shape its policies to his will.

Apart from Angela Merkel, Xi Jinping may be the most personally skilled political leader on the global stage today. He certainly seems capable of getting his way without threatening violence or even making serious waves. And China, under his and his predecessors’ able rule, has brilliantly exploited the rules and customs of Western capitalism to catapult itself from economic pariah to soon-to-be leading global economy.

But for how long will Xi be the world’s best leader? Running a modern, technological nation of over 1.4 billion people is hardly a cakewalk. Just look at all Obama’s grey hair, after only eight years spent running a nation of one-quarter the population.

More important yet, men do not get more flexible and creative as they age. They get more rigid, more intransigent, more reliant on habit and ideology, and more prone to rely on underlings and sycophants who lack their own skills. Equally important, the longer a leader stays in power, the more time and energy he must devote to keeping it, as others seek it and still others question his longevity.

Just as great leaders dig in, so do their subordinates, who are picked for likeness to their leader, or at least for compatibility, whether by patronage or executive power. That’s why Trump just fired Tillerson: with his small mind and tinier ego, Trump couldn’t suffer the constant mental stress of new and different ideas, not even for fourteen months.

The Chinese ought to know these truths better than any other people. As a military leader and unifier of modern China, Mao Zedong was a genius. If he had quit when he was ahead, as did our own George Washington, he would have succeeded in unifying China and consolidating his Party’s power. Then he might have remained one of humanity’s greatest political leaders ever.

Instead, Mao ruled as China’s supreme leader for another 26 years. During that time he nearly destroyed what he had created with bizarre and counterproductive economic theories (the “Great Leap Forward” and the “Hundred Flowers” campaign) and increasingly isolated misrule.

So China’s modern economic “miracle” never began until Mao had died and Deng Xiaoping began his regime of scientific-economic pragmatism. Deng, in effect, tossed Mao’s “Little Red Book” of simplistic ideology into the trash.

But the lure of the alpha male is strong in our species. It’s strong in him, and it’s strong in us. As time goes on, his rule into senility seems predestined, for he gains facility in disposing of rivals, no matter how skilled they themselves may be. And the less-skilled whom he anoints as his subordinates and sycophants help him dispose of rivals, for they know full well on whom the power of persons with their limited skill depends.

The longer an alpha male rules, the less accurately he evaluates his own skills and actions. The more he cements his misrule by attracting “talent” that never could have reached his level without him. Slowly but surely, the skill advantage that the alpha-male once had decays into the “skill” of political survival alone, with scant thought of advantage to his people.

So envy China not. It may be ascendant now, but it has just taken an invariably fatal misstep. Time and male ego will do the rest, converting what may have been history’s best executive system into just another empire, dependent on the skill and vitality of a single man.

Russia never really had a chance. Although freely elected several times, Vladimir Putin had no indigenous model for government succession, other than the tsars’ heredity. His skill and idealism have mutated into mere survival and nationalism—a dangerous form of tribalism that could culminate in species extinction. And as the number rises of Putin’s rivals and opponents who somehow end up dead, he comes to resemble, more and more, the alpha ape who gained “office” by physical combat.

The true last, best hope of mankind remains America, however abysmal may be the political skill and character of our current president. Thirty years, apparently, are too few for China to learn the value of term limits and a changing guard. So, too, are 100 in Russia. Only the eight-century-old lessons of Magna Carta, transmitted to us and re-learned through 242 hard years—good times and bad, fair leaders and foul—seem sufficient to imprint the lesson.

Alpha-male leadership worked well for the small clans of thirty or so in which we evolved. For today’s great nations of tens or hundreds of millions, the very idea is ludicrous. It takes tens of thousands to make aircraft and make them fly. It takes many thousands to make cars, trains, computers and iPhones, let alone the infrastructure to run them. Modern society would not be possible without our species’ minute specialization and division of labor, which no single individual can possibly master.

Yet here we are. In China, Russia and here at home, we are acting as if leadership by an alpha male who succeeds to leadership by something resembling physical combat is rational. This approach cannot succeed and will not last.

Like a spastic evolutionary reflex, the alpha male regains his attraction again and again. We humans have not yet figured out a better formula that can stick indelibly in our grapefruit-sized brains. We have not yet even figured out how to assimilate fully the lessons of term limits—the secret to making anything resembling alpha-male rule actually work.

In the medium term, social evolution is the only antidote to these self-defeating aspects of our biological evolution. It works through habit. The nation with the longest tradition of effective rule—whether or not you can call it “democracy”—will prevail.

As China’s Xi, Russia’s Putin, Turkey’s Erdogan, Egypt’s El-Sisi, and probably soon the Philippines’ Duterte attest, even term limits are not hard for an alpha male to brush aside. What matters is the durability of the societal commitment to some form of sustainable collective rule.

On that, the jury is still out, as much here as in South Africa. Yet those nations that seek such rule have an advantage: many heads are invariably better than one. So the transition in Zimbabwe, while Mugabe is still alive, gives us all hope.


permalink