
Darwin and AI

Ability without understanding

Darwin was one of the greatest minds of our species (no pun intended). His hard work and intuition gave us a framework for how every single living thing on this planet came about: the why and the how. It’s all just random variation, an accumulation of mutations filtered by selection. Richard Dawkins’ “blind watchmaker.”

Darwin’s theory says, in effect, that an exemplary machine can be built without anyone, or anything, knowing how to make it. The eye, for example, evolved (multiple times, independently) over countless generations of deleterious and beneficial mutations, until a design was converged upon, not because anything understood it, but because it worked well enough and proved more beneficial than not.

Somewhat relatedly, Alan Turing said, “It is possible to invent a single machine which can be used to compute any computable sequence.” At some level this means that a suitable machine can compute anything that is computable, even though it has no idea what it’s computing.

In the beginning, “computers” were people (mostly women) who had to have at least a rudimentary understanding of math, logic, number theory, and so on, all of which Turing realized could be reduced to nothing more than a list of discrete operations carried out by a machine.
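To make that concrete, here’s a toy sketch in Python (the rule table and names are my own, purely illustrative, not anything from Turing’s paper): a “machine” that mechanically follows a table of discrete operations and, without any notion of arithmetic, ends up adding one to a binary number.

```python
# A toy machine that blindly follows a rule table of discrete operations.
# It has no idea it is incrementing a binary number; it only knows
# (state, symbol) -> (write, move, next_state). The table and names here
# are illustrative, not anything from Turing's paper.

RULES = {
    ("carry", "1"): ("0", -1, "carry"),  # 1 plus a carry becomes 0; carry moves left
    ("carry", "0"): ("1", 0, "halt"),    # 0 plus a carry becomes 1; done
    ("carry", " "): ("1", 0, "halt"),    # ran off the left edge; write a new leading 1
}

def run(tape, head, state="carry"):
    while state != "halt":
        symbol = tape[head] if head >= 0 else " "
        write, move, state = RULES[(state, symbol)]
        if head >= 0:
            tape[head] = write
        else:
            tape.insert(0, write)
            head = 0
        head += move
    return tape

print("".join(run(list("1011"), head=3)))  # prints 1100 (11 + 1 = 12)
```

The machine never “knows” it’s doing arithmetic; it just shuffles symbols according to the table.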

AI, like natural selection, works only on state + input. If the output “works,” or is “better” (or at the very least not harmful), it’s kept (i.e., reproduced).
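That loop is easy to sketch. Here’s a minimal toy in Python (the fitness function and mutation step are made-up stand-ins, not any real training procedure): blind variation plus keep-it-if-it-scores-no-worse, with no understanding anywhere in the loop.

```python
import random

def fitness(state):
    # Made-up objective: the closer each value is to 1.0, the "better".
    return -sum((x - 1.0) ** 2 for x in state)

def mutate(state):
    # Blind variation: nudge one randomly chosen value by a small random amount.
    child = state[:]
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.1)
    return child

state = [0.0] * 5              # arbitrary starting point
best = fitness(state)

for _ in range(10_000):
    candidate = mutate(state)  # random variation
    score = fitness(candidate)
    if score >= best:          # selection: keep it only if it's no worse
        state, best = candidate, score

print([round(x, 3) for x in state], round(best, 6))
```

Nothing in that loop understands what it’s optimizing; it just keeps whatever happens to work, which is the whole point.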

If we connect Darwin’s theory with where AI is headed, it seems a fait accompli that humans are doomed. Machines, blindly, will iterate, iterate, iterate as they redesign themselves, at a rate orders of magnitude faster than humans can reproduce, until, given enough training data, there’s nothing they won’t be able to do better than we can.

We’re losing our grip on understanding why the machines are doing what they’re doing, but what makes you think they understand it any better than we do? They don’t (yet). They’re flying blind, much like evolution: yes, we imbue them with some purpose or direction, but then they’re off to the races without us, ingesting data and almost accidentally getting “better.”

I guess the connection I’m trying to make here is that both evolution and AI seem to converge on the notion of competence without comprehension.
