(Feature) Immunity & Disease in the Internet of Things

As far as we know, the alphabet was invented only once* — in Phoenicia (covering modern-day Lebanon, Syria, and Israel). It spread and persisted because with it, you can render any word in any language. It is an example of what the physicist David Deutsch called “the beginning of infinity” in his book of the same name. It is the point at which domain-specific (parochial) modes are traded for universality. The evolution of DNA is another example, as is the CPU, a generic computational machine theoretically capable of mimicking any other.

In real terms, this strength is also a weakness. Once an organism’s genetic code is no longer species-dependent, for example, it can be hijacked by any number of invaders. At the smallest (and simplest) level is the virus, which has no machinery of its own. A virus is an “inert” strand of genetic material wrapped in a protein syringe, and it’s dependent on its host’s cells to reproduce and spread.

With a universal system, then, security becomes paramount. And indeed, the immune systems of large, multicellular organisms are typically quite complex. In fact, I would argue they are surpassed in complexity only by the brain, which is why we are still making radical discoveries about them, most recently that intestinal flora can affect mood.

Despite their complexity, or rather because of it, immune systems often fail (as does the brain). After all, a universal system, which can theoretically mimic anything in its domain, also has a theoretically infinite number of potential configurations, most — but not all — of which are going to be failure modes. Some are going to be advantageous, but since there’s no way to know which those are in advance, distinguishing friend from foe, or signal from noise, becomes quite complicated.

Those infinite potential configurations also create a second problem beyond porosity: error checking. A dedicated device with a finite number of correct configurations is relatively easy to fix. At the highest level, you know that anything other than one of those correct configurations is an error. One can look at a hammer and know immediately whether or not it is broken.

Figuring out what’s wrong with my novel, another device (for transmitting story) made from a universal system (language), is not only complex but vague and imprecise. And I don’t just mean spelling and grammar, which are relatively easy (but still difficult). I mean what’s wrong with the novel itself, which is something more than the sum of its components. With biological systems, a failure of error checking is often fatal — in the form of cancer.

I like knowing why the world is the way it is and not some other way. I like big-picture answers, which in my experience most people tend to overlook. To illustrate the different types of answers, I often use the question, Why did the 8 ball go into the corner pocket? The reductive, scientific answer, popular with engineers and accountants, who like wrestling with the devils of detail, will involve trigonometry and the angular momentum of the cue ball and all that kind of stuff. The statistical answer will involve the observation that four of the six pockets are corner ones. The answer from the humanities and social sciences is that the player’s wife went shopping, leaving him the afternoon for leisure, making it very likely that balls would be falling into holes (of one kind or another).

I make no secret of the fact that I prefer that last category of answers, and for one very good reason: they are explanatory, whereas the others are merely descriptive. And that’s why I love this short talk by Thomas Dullien — first shared by computer security maven Bruce Schneier, whose blog I follow. It helps explain our moment in history by explaining why the internet is sick: it suffers from both maladies mentioned above. On the one hand, it’s growing cancerously. On the other, its systems are porous and riddled with invaders.

Security, Moore’s law, and the anomaly of cheap complexity

Of course, in an open universal system, some degree of that sickness is unavoidable. In the case of DNA, for example, if you want to reap the benefits of evolution, you need to allow some minimum amount of mutation, which leaves the door open both to cancer and to an arms race with pathogens, which are themselves evolving — in short, universality leaves open the problems of internal errors and external invaders.

Dullien’s “anomaly of cheap complexity”:

  • For most of human history, a more complex device was more expensive to build than a simpler device.
  • This is not the case in modern computing. It is often more cost-effective to take a very complicated device and make it simulate simplicity than to make a simpler device.
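The pattern Dullien describes — a universal machine configured to imitate a fixed-function one — can be sketched in a few lines of Python. (The thermostat example and all its names are mine, not Dullien’s; it stands in for any device simple enough to have once been built as dedicated hardware.)

```python
# A thermostat is about as simple as a device gets: in hardware it
# could be a bimetallic strip. On a universal machine we instead
# *simulate* that simplicity with a few lines of code.

def thermostat(temp_c, setpoint=20.0, hysteresis=1.0, heating=False):
    """Return whether the heater should be on.

    The device's entire behavior: turn on below setpoint - hysteresis,
    turn off above setpoint + hysteresis, otherwise keep the current
    state (the "dead band" that prevents rapid cycling).
    """
    if temp_c < setpoint - hysteresis:
        return True
    if temp_c > setpoint + hysteresis:
        return False
    return heating  # inside the dead band: no change

print(thermostat(17.0))                 # cold  -> True
print(thermostat(23.0))                 # warm  -> False
print(thermostat(20.5, heating=True))   # dead band -> stays True
```

The simplicity is only skin-deep, which is Dullien’s point: beneath these few lines sit an interpreter, an operating system, and billions of transistors, all cheaper to deploy than purpose-built hardware — and all of them remain part of the attack surface.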

He says that as if it’s a somewhat unexpected result, and for engineers, it probably is. The culture of computers is eschatological and so sees little value in what has come before. Thought leaders in the world of AI are just now waking to a flaw in their research programme first raised by philosophers in the 1980s and subsequently ignored. Dullien’s observation was made succinctly by Blaise Pascal in 1657 when he wrote: “I only made this letter longer because I had not the leisure to make it shorter,” by which he meant the very same thing: that it is faster and easier to speak crudely, to dump words onto the page, than it is to produce terse, pithy prose.

I am not, by the way, suggesting that computers (or the internet) are directly analogous to the human body, or even to life. I am saying that both systems suffer the endemic drawbacks of universality.

If, however, naturally evolved biological systems are any guide, then to ever be “safe” (however defined), the internet will require an “immune system” almost as complex as itself. The current spate of solutions (walled gardens, like the boy in the bubble) is clunky and prone to catastrophic failure, several instances of which we have witnessed in the last few years.

The problem with immune systems is that they stop everything that passes and demand to see its papers, which is costly, time-consuming, and, in the case of information systems, raises privacy concerns. For now, the trick is balancing safety against those costs.

I suspect eventually some talented engineers will hit upon a clever solution we haven’t contemplated yet (or which isn’t yet feasible to implement for whatever reason). But until then, we will retain a cancerous, infection-laden system. Be wary.


*The term “alphabet” is used by linguists and paleographers in both a wide and a narrow sense. In the wider sense, an alphabet is a script that is segmental at the phoneme level—that is, it has separate glyphs for individual sounds and not for larger units such as syllables or words. In the narrower sense, some scholars distinguish “true” alphabets from two other types of segmental script, abjads and abugidas. These three differ from each other in the way they treat vowels: abjads have letters for consonants and leave most vowels unexpressed; abugidas are also consonant-based, but indicate vowels with diacritics or other systematic graphic modification of the consonants. In alphabets in the narrow sense, on the other hand, consonants and vowels are written as independent letters. The earliest known alphabet in the wider sense is the Wadi el-Hol script, believed to be an abjad, which through its successor, Phoenician, is the ancestor of modern alphabets. [Wikipedia]