The machine doesn't learn anything

Tom Simonite, "AI Has a Hallucination Problem That's Proving Tough to Fix", Wired, 2018-03-09.


I'm still chugging along in CS498 Applied Machine Learning. For good or ill. One of the surprising things to me about machine learning is that the machine doesn't really learn anything. There's no understanding. You decide which model seems best suited to the problem, point its training algorithm at a pile of data, and then optimize the model's parameters while the algorithm is graded on its predictions.
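
In code, that whole workflow is remarkably short. Here's a minimal sketch using scikit-learn (the dataset and model are illustrative choices, not anything from the course): the "learning" is just fit() adjusting parameters and score() grading the result.

    # The whole "learning" loop: pick a model, fit its parameters to data,
    # grade it on held-out predictions. No understanding required.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)                 # optimize the parameters
    print(model.score(X_test, y_test))          # grade the predictions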

The machine doesn't learn anything. I guess, after just a little consideration, that point is incredibly obvious. But I'm susceptible, as are others I'm sure, to believing that machine learning and artificial intelligence are so powerful that they're basically magic.

From the article:

Humans aren’t immune to sensory trickery. We can be fooled by optical illusions [...] But when interpreting photos we look at more than patterns of pixels, and consider the relationship between different components of an image [...]
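
That fragility is easy to demonstrate. Here's a toy sketch (my construction, not the article's) with a hypothetical linear image classifier: a per-pixel nudge far smaller than the pixel values themselves flips the decision, because the model only sees patterns of pixels.

    # Toy adversarial example against a linear classifier, in the spirit of
    # the fast gradient sign method. Random data stands in for a real image.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=784)   # weights of a hypothetical linear classifier
    x = rng.normal(size=784)   # a flattened 28x28 "image"

    score = w @ x
    eps = 1.1 * abs(score) / np.abs(w).sum()       # just enough to cross zero
    x_adv = x - eps * np.sign(w) * np.sign(score)  # nudge pixels against the score

    print(np.sign(score), np.sign(w @ x_adv))  # the decision flips...
    print(eps)                                 # ...from a change this small per pixel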

I just started reading The Tacit Dimension by Michael Polanyi (it was a reference in Gary Klein's Streetlights and Shadows). I haven't finished it yet, but one of its leading ideas is that we know more than we can tell. Tacit knowledge. And if we can't say it, we can't code it. And if some factor bears on the output of the systems we work with, but we don't know how we know it or even what we know, then it's too much to expect our machine learning algorithms to know it either.
