(Feature) AI and the future of humanity

The question posed to me was whether there’s cause to be optimistic about AI. My answer is that the question itself is an oversimplification, almost to the point of farce.

To be clear, I’m not anti-AI. It really does have the potential to make life better for everybody. And anyway, as with genetic modification (of both food and humans), you couldn’t stop it even if you wanted to. The issue, really, is that there’s not going to be a straight line from here to there, and history suggests things will probably get worse — potentially, A LOT worse — before that end state, if it even comes.

So I’m not a pessimist. I believe problems in life are both inevitable and solvable, which means we shouldn’t fret unnecessarily when a new one appears. But we do need to understand the typical course:

~A new invention or discovery

~Optimists (typically liberals) foresee what we CAN do with it and assume, therefore, that we will

~We do that a little, but we also do lots more bad shit than anything else — often completely unanticipated shit. For example, the internet facilitates commentary like this, but measured by weight it’s mostly porn. By far. After that, it’s gambling, cat pictures, selfies, scams, misinformation, and organized crime, both private and public.

~Things get better for elites but typically a little worse overall, and the novelty is soberly re-examined for what it really is versus what we wanted it to be. We’re at this stage now with social media/big data.

~Bans and prohibitions are enacted, addressing symptoms rather than causes

~The wealthy quickly claim beneficial exceptions, often in isolated “walled gardens” separate from the rest of us

~Generations of inequality and/or outright suffering follow, such as after the invention of the steam engine, which ripped limbs from children for decades before anyone did anything about it

~Some time later, people start to contemplate the responsible governance they should have been considering at the point of irrational exuberance

~Changes are introduced gradually, through trial and error and over the objections of political conservatives, improving life for everyone but the poor, who are pretty much ignored by the mainstream Right and Left alike

For me, the problem with AI isn’t just that it introduces another asymmetry, like the ones we’ve seen repeatedly in history going all the way back to the invention of agriculture, but that it introduces the ULTIMATE asymmetry. In the past, the wealthy and powerful were always — at some point, once you got far enough down — dependent on the poor and working classes: to grow food, to staff factories, to fill armies, to clean house, etc. That meant there was a floor to human suffering. Things could only get so bad before workers went on strike en masse or the population rebelled, and so introduced a correction. This is the fodder of history, the dates and conflicts you were forced to memorize in school.

Side Note: For those who want a really great overview of those kinds of forces across the span of human history, read William McNeill’s Plagues and Peoples. Pay particular attention to his discussion of the two kinds of parasitism on society’s producers: microparasitism (disease) and macroparasitism (the ruling class).

Artificial intelligence — particularly true AI, in the classic sense, which is different from the kinds of algorithmic “intelligence” that get the label these days — has the genuine potential to render the great mass of people not just superfluous but an outright burden. There will be no reason, in a politico-economic sense, for most of them to exist, since neither their labor nor their vote will empower the ruling class. There will no longer be anything to stop the powers-that-be from reproducing and extending the “solutions” that have recurred throughout human history across many times and circumstances: from Stalinist Russia to the pogroms of medieval Europe to the Khmer Rouge to the Rwandan genocide and on and on, right up to what’s going on in Myanmar right now.

Wherever we go, there we are. With whatever tools we invent, the hand that holds them is still a human one.

Now, AI might eventually reach such ubiquity that all (or most) humans alive will benefit. That’s definitely possible. But as I said, there’s not a straight line from here to there. It’s like traveling back in time to the year 1900 and telling a group of people:

“Okay, look… I gotta be honest. Things are gonna be bad. Really bad. There’s gonna be a horrible global war in a few years that will kill unprecedented numbers of people, not just through violence but also famine and disease. After that, a massive economic depression will put huge chunks of you out of work and force you to move from your homes. Many will get sick and die. That will only end because of another world war, this one even larger and more devastating than the first, if you can believe it, and which will see the invention and use of weapons capable of wiping out the entire planet. In the aftermath, the European colonial empires will retract, which sounds great, but they’ll leave a power vacuum, and the developing world, from South America to Africa to Asia, will experience repeated waves of civil war, ethnic cleansing, and famine — often with a great deal of external meddling. The end result of all that is: many of you will die, and even if not, your family lines probably will. BUT… there’s a silver lining! For those who survive, things will actually get better than they are now. Democracy will spread. Basic social safety nets will be introduced. There’ll be a minimum wage. Women will get the vote and racial integration will be the law of the land, if not always the practice. Health care will improve as well as opportunities for education and home ownership. So chin up! Smile on your faces! And back to work!”

Those who made it through all that — us — look back and say “gee, that must’ve sucked,” but we didn’t have to live through any of it. We’re the beneficiaries. It’s just something bad that happened, like the Inquisition.

The question before us is, is there reason to be optimistic about AI? I dunno. Do those people in 1900, knowing what you told them, have reason to be optimistic?

It’s great to say “We’re gonna have universal basic income,” except we’re probably not — not anytime soon anyway. Maybe one day, who knows? But right now there’s zero political will. We couldn’t even get universal health care coverage in this country under a nominally Democratic administration. The tax bill under consideration in Congress at this very moment tilts the complete opposite way. So good luck with that.

Maybe eventually, after we go through all the pain, people will realize something like UBI is a good idea and enact it. If so, that will be great for the people alive then. But as a practical solution, those high hopes don’t really address what’s coming.