Let’s Be Clear About AI

Artificial Intelligence (AI) is Presently Ubiquitous, Limited, and Widely Misunderstood

The key misunderstanding today is the belief that AI describes one thing. It doesn't. There are different strengths of AI. The flimsiest types are great at what they do, but on their own they will never suddenly "wake up" to human-like consciousness. There is nothing it is like to "be" them, and there never will be (certainly not as they stand).

When you begin to explore the subject more deeply, a stumbling block you may hit quite early is that a range of different approaches to the science and technology of AI cut across the different strengths of AI that people are working on. For example, someone could be deploying artificial neurons in a flimsy, narrow setting; someone else could be exploring artificial neurons in pursuit of the human-like intelligence goal.

Today’s Approaches Include:

  • Emergentist: As science, this is about creating artificial neurons (in an artificial neural network (ANN), one type of machine learning) in the expectation that complexity will eventually build up to the right outputs or to conscious behavior. As philosophy, this is emergentism / connectionism. (A minimal code sketch contrasting this with the symbolist approach follows this list.)
  • Symbolist: As science, this is about systems establishing meaning across symbols – GOFAI (good old-fashioned AI). As philosophy, this is "the psychological assumption" – something challenged by arguments drawing on Kurt Gödel's incompleteness theorems.
  • Hybrid: both of these working together.
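To make the contrast concrete, here is a minimal, illustrative sketch in Python. It is not from any real system, and every name, weight, and rule in it is invented purely for illustration: a single artificial neuron stands in for the emergentist building block, and a hand-written if-then rule stands in for the symbolist (GOFAI) building block.

```python
def artificial_neuron(inputs, weights, bias):
    """Emergentist building block: a weighted sum of inputs passed through
    a threshold. Stack many of these in layers and you have an ANN; the
    weights are learned from data rather than written by hand."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # simple step activation

def symbolic_rule(facts):
    """Symbolist (GOFAI) building block: an explicit, hand-written rule
    operating over symbols. The 'meaning' lives in the rule itself."""
    if "raining" in facts and "outside" in facts:
        return "carry umbrella"
    return "no action"

# Illustrative usage (all values invented):
print(artificial_neuron([1.0, 0.5], weights=[0.6, -0.2], bias=-0.1))  # -> 1
print(symbolic_rule({"raining", "outside"}))  # -> "carry umbrella"
```

The point of the contrast: the neuron's behavior emerges from numbers tuned by training, while the rule's behavior is spelled out symbol by symbol. A hybrid system would use both together.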

There are others. See also: philosophy of artificial intelligence and the Wikipedia entry on AI to get a sense of how these fields developed, stalled, merged, and progressed.

Today’s Strengths Include:

“Weak” or “Narrow” or “Applied” AI

This describes a kind of AI geared towards completing a specific, limited set of tasks as well as it can. Real-world 2018 examples of WAI include:

  • A robot cat toy whose motion is controlled by artificial neurons

The robot cat is a good example of the earlier point. Alone, the neurons at work in the cat's motion are WAI – weak AI, like the ones that control motion in the bodies of my Brobots. But if one day deployed as part of a bigger structure, those same neurons could assist a system which, as a whole, is a much stronger form of AI (as is the case in my imagined human-like robots).

My current fave way to think of WAI: you can walk down a set of stairs without consciously thinking about anything other than, perhaps, the task waiting for you at least one floor down. The operation of descending the stairs is not part of your conscious thought once you are an adult able to descend stairs unassisted.

I said strengths plural. There is a stronger form.

Wide AI and AGI

Wide or general AI (AGI) is AI capable of more than completing a limited set of task types and improving its performance on those tasks and ones very similar to them.

Wide, as the opposite of narrow, is quite easy to get a sense of here. It’s a nice neat bit of nomenclature. General is a bit less neat, although it does imply “AI generally everywhere” as opposed to “AI for one task.”

A 2018 example of wide or general but not strong AI might be:

  • IBM Watson

Strong AI

Where strengths of AI get confusing is that by saying "AGI" or "wide AI" one might be referring to what most others call "strong AI." The consensus among experts seems to be that you should only say "strong AI" when referring to AI's holy grail: human-like intelligence. (And even if you use one of the other non-weak phrases like AGI or wide, you should still explain what you mean.)

A 2018 example of strong AI might be:

  • Nothing at all. It doesn’t exist.

It isn’t even remotely close to existing.

The Hardness of the Intelligence Problem Has Nothing to Do with Moore’s Law

The "hard problem of intelligence" continues to elude us. We're edging closer, but progress is painfully slow. The speed of computing power is only half the problem, or less than half – that's another big misunderstanding right now. It's certainly true that every time we think the Moore's Law effect is over, engineers find a new way to keep it going. But whether it continues or not, reaching the AI holy grail carries its own set of hard problems; it's hard because it's hard, not because we need better computing resources. To put it another way: better computing resources might provide the paper we need to write the book "AI holy grail," but they don't write the book for us.

We've had "AI winters" before: long periods of funding drying up because a particular approach has proved disappointing. We could well have other winters ahead of us, and if we do, we could be a hundred or more years away from achieving human-like intelligence. Even without a winter, Kurzweil's bet could still be way off. We just don't know.

I hope that helps.

Disclaimer

Since this was written in haste, I welcome any feedback that will make this blog post better.
