on the artiface of intelligence…

Had one of those early-morning, overly vivid dreams: Two pairs of perfectly human-looking robots (they were ‘robots’ in the dream, not ‘androids’, so let’s use that word here). And these were Manichaean robots, apparently: One pair felt ‘good’– or at least I was supposed to feel more sympathetic towards them– and the other pair felt ‘bad’. No particular reason for the distinction was given; it was Just So.

The action took place in an old house that was also, somehow, a laboratory environment– right down to a few grey-haired old men in white lab coats wandering around the periphery, clipboards and all. Something was ‘out in the car’, and the two robot teams were ordered to compete, Hunger Games-style, to retrieve the item (again, never specified– though I vaguely remember thinking it was a book or a scroll, locked in a car out in the parking lot).

The end was inevitable: The ‘good’ robots dutifully started running towards the open door. The ‘bad’ robots just grabbed them from behind, lifted them overhead, and smashed the ‘good’ robots’ all-too-human faces to bits against the top of the door frame.

That said:  Neural networks, like mini-skirts, sideburns and bell-bottoms, come in and out of fashion roughly every twenty years or so. The cycle is thus:

  1. It’s been a period of algorithmic stagnation, but damn… the hardware’s gotten a lot faster. What shall we do with it?
  2. Hey! Neural Networks! (Though maybe we should call it something else…)
  3. Great early successes. Even a few new applications. Wonder why these work so well on these problems?
  4. Clever people study the why, discover new algorithms, perhaps even new classes of algorithms…
  5. …which end up proving the old quip that neural networks are “the second-best solution to any problem”– second-best, that is, once you know the first-best.
  6. Hey, look: The hardware folks have been busy… We can run these new algorithms we’ve found much faster… let’s optimize!
  7. Possible optimizations eventually all get found. The field begins to stagnate…

Rinse, lather, repeat. But at least some interesting papers get generated.

Elon Musk went from warning us against AI in an open letter to becoming a co-founder of the OpenAI initiative in less than half a year. Obviously, his thoughts are his own, but I suspect he’s decided that the Rise of AI is inevitable, and that rather than warn against it or try to prevent it, he can at least help ensure that the forefront of the research takes place out in public (as it used to in bygone ages), and not in the shadowy halls of some Deep Private research lab (i.e. companies: some known, some unknown, but None To Be Named).

No one would ever accuse me of optimism, but I don’t worry much about rampant AIs taking over the planet through Terminator-style military force, or converting Earth’s surface into a puddle of grey goo. The thermodynamics of it just don’t work out: Our meat-brains are highly energy-efficient compared to our current (and near-future) computers. Yes, a computer can now beat you at Go– but it consumed several orders of magnitude more energy than you did in doing so. And yes, it can also recognize your grandmother’s face, but it burned tens of watts across a few billion instructions to manage it. Your brain did the same job in fewer than 100 sequential steps, consuming a few thousandths of a watt over the half-second it took (while doing innumerable other things at the same time, I might add).
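
To see just how lopsided that comparison is, here’s a back-of-envelope sketch in Python. Every wattage and timing in it is a rough, illustrative assumption rather than a measurement; the point is only the orders of magnitude:

```python
# Rough energy cost of one face-recognition event, brain vs. machine.
# Every figure below is an illustrative assumption, not a measurement.

BRAIN_TASK_WATTS = 0.005      # "a few thousandths of a watt" for the task itself
BRAIN_TASK_SECONDS = 0.5      # the half-second cited above

MACHINE_WATTS = 100           # "tens of watts" for the processor doing inference
MACHINE_TASK_SECONDS = 0.5    # give the machine the same half-second

brain_joules = BRAIN_TASK_WATTS * BRAIN_TASK_SECONDS     # ~0.0025 J
machine_joules = MACHINE_WATTS * MACHINE_TASK_SECONDS    # ~50 J

print(f"brain:   {brain_joules:.4f} J")
print(f"machine: {machine_joules:.1f} J")
print(f"ratio:   ~{machine_joules / brain_joules:,.0f}x")   # roughly 20,000x
```

Even with generous assumptions on the machine’s side, the gap comes out around four orders of magnitude– which is the whole thermodynamic point.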

So: All dramatic competition aside, Thermodynamics Always Wins. Always. (Though that of course doesn’t preclude nasty things happening on the way to a local optimum.) So IMO we are ‘safe’ from that scenario, for now.

People should be more worried about the potential abuse of AI by Those Who Know Better. For now, the Powers That Be are satisfied with using AI models to anticipate our desires for monetization. But social media companies are already using AI to ‘shepherd’ mass opinion towards a certain flavor of bland, corporate centrism (sometimes at the behest of governments, sometimes on their own). How soon before each of us has a virtual ‘minder’ that watches our every move, building a model of our psychological internals so accurate that it could not only anticipate our responses but control them– until eventually the man-in-the-loop becomes… unnecessary? Or at least atrophies into nothing more than the slimy rock or rotted tree stump upon which something else lives and grows.

Once you’ve been successfully modeled, you can be successfully replaced. Think of it that way.

So no, giant killer robots aren’t going to autonomously slaughter us en masse (unless of course they were ordered to by other humans– in which case, the problem was us, not them). I worry more about (what’s left of) the Human melting away into a dank little whimper of a world that eventually dies of boredom from watching itself, like some mad self-feeding, world-sucking Silicon Ouroboros.

Here’s the tl;dr version: Your brain is not a fucking computer… Please stop acting like it is.

feelings, nothing more…

Andrew Sullivan has apparently chosen neuroscience as a pet topic for the past few weeks. Key quote:

Without religion or a shared culture, science has assumed a role it is not qualified to play: a judgment of the whole, not just of its relevant area of inquiry. Don’t get me wrong: science is a vital mode of human thought; it is also just part of it. History, aesthetics, prudence, morals, virtues: these it cannot understand; and when it tries to explain them, it is not wrong, so to speak. It’s just irrelevant.

What strikes me so much here is that, out of that list in the final sentence, only ‘History’ is something that arguably takes place (mostly) outside the human head. The rest are essentially all subtly colored synonyms for ‘feelings’.

Aesthetics is concerned with shared sub-cultural tastes and values, which are, at their root, feelings-based. Imagine two people: the art professor trying to develop a new formal theory of aesthetics, and some random person in their car choosing a radio station for the drive to work. Whether they realize it or not, each is trying in their own way to answer the same essential questions: What sub-cultural tribe do I identify with? What am I supposed to like or dislike, based on that? What makes me feel good? What makes me feel uncomfortable, or puts me in unfamiliar territory?

Tell me your socioeconomic background, age, gender, marital status, where you’re from and your religion, and I’ve got a pretty good chance at guessing your taste in movies, books, art, music and politics. That is ‘aesthetics’.
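
To put that claim in concrete terms, here’s a toy sketch of the trick. Every rule, field, and category in it is an invented stand-in; in practice this would be a statistical model fitted to survey data, not a handful of if-statements:

```python
# A toy, hand-rolled stand-in for the parlor trick above. Every rule,
# category, and name here is invented for illustration; a real version
# would be a model trained on survey data, not an if-ladder.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    income: str   # "low" | "mid" | "high"
    region: str   # "urban" | "suburban" | "rural"

def guess_music_taste(p: Profile) -> str:
    """Crude demographics-to-taste lookup (illustrative only)."""
    if p.age < 25:
        return "whatever the streaming algorithm is pushing this month"
    if p.region == "rural":
        return "country / classic rock"
    if p.income == "high" and p.region == "urban":
        return "jazz, indie, and 'curated' playlists"
    return "classic rock / oldies"

print(guess_music_taste(Profile(age=45, income="mid", region="rural")))
# -> country / classic rock
```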

“Prudence, morals and virtues” have even less essential substance. The study of ‘Morals’, as Nietzsche pointed out over a century ago, is largely a subset of aesthetics (you inherit your initial set of morals from your originating tribe, then modify them as your tastes change through contact with different sub-cultures), so the above applies. I’d argue that ‘Prudence’ is essentially the urge to avoid public shaming or other consequences, and that ‘Virtues’ are ultimately derived from pride, both private and public. Prudence, when it works, is the thing that keeps you from getting shunned or expelled from the group; virtues are the things that help you acquire social capital (and therefore status) within it.

Feelings, all.

Emotional life is of course a valid mode of human experience… but we must remember that nothing is more easily manipulated than human emotion. And if you can control that, then you control almost every other aspect of a person’s psyche. I’ll just post my favorite Richard Feynman quote here:

The first principle [of the scientific method] is that you must not fool yourself, and you are the easiest person to fool.

I pick on Sullivan here because he’s a kind of ‘cultural canary’, quite canny at sniffing out overall trends, and this general anti-science backlash is definitely something I’ve noticed coming back into prominence over these past few years.

The slow, inexorable move towards post-scarcity economics seems to be driving two general trends. First, more and more goods are becoming positional goods, i.e. things that derive their value mostly from their artificial inaccessibility to the general crowd. Second, social capital is becoming ever more prominent in the lives of ordinary people.

Up until a few centuries ago, only the ruling class and their courtiers had to worry much about social capital: the so-called lower orders, generally too busy surviving to indulge in much intrigue or social games, tended to derive all their value from their originating tribe. Move forward to the present, and the Everyfolk now seem to spend much of their time displaying cultural signifiers and tribal identifiers to one another on Facebook, Twitter, tumblr, YouTube, etc. Much like the old Courtiers at Versailles (though minus the scented handkerchiefs), and frankly not too different from baboons flashing their red haunches at each other– we are primates, after all, and it’s just part of what we do. There are even platforms like Kickstarter now, which give the Everyfolk a chance to actually monetize any social capital they’ve acquired to fund a new product or service.

My point: We seem to be moving from the old American notion of ‘Every Man a King’, towards a new aesthetic (there’s that word again) of ‘Every Man a Courtier’. And I do think this is a big part of what’s driving the anti-science movements, because the notion of objectivity threatens this new culture.

Essentially, you discriminate between tribe A and tribe B by what they believe– but if there is only One Right Answer, that distinction disappears. A positional good rarely has any objective excess value of its own (the $10K Rolex and the $10 Timex are objectively equivalent in function, for example), and most of the baubles of social capital are either matters of opinion (i.e. aesthetics) or have no essential substance at all (i.e. they exist entirely inside our proverbial heads).

Small wonder, then, that there’s so much anti-science sentiment in popular culture: If you derive almost all of your personal value from these things, you will unconsciously see science, with its potentially corrosive truth-seeking, as a dire threat to your very self.

IMO this will only get worse as time moves on: this tension between our tribal primate natures and the new global machine-culture being born as we speak will be one of the primal forces driving everything from our art to our politics for at least the next few decades.