500 Words — Day Seventeen: AI

William Greer
3 min read · Jan 27, 2022

One of the things that fascinated me growing up was artificial intelligence. Mainly, the idea that you could create a machine and grant it the capability of thought and agency. The idea was very cool and much more compelling than a website or a word processor or even a video game. The allure of building a mechanical agent almost seemed as if you were playing god. The idea of creating something purely artificial that could think, that could feel. What would that say about our humanity if we could be so easily replicated in silicon?

In the summer of 1966, the AI group at MIT conceived computer vision as a summer project. The final goal of the project was object identification: a computer would detect objects in an image and match each to the correct name from a known vocabulary of objects. The summer project would fail to reach that goal, and it would take decades for computers to reliably perform these tasks. Even to this day, computer vision programs still make mistakes in object identification despite the vast amount of effort put into them. From a summer project, this problem has grown into a major subfield of AI, with billions of dollars spent exploring the idea of artificial “sight”.

However, one could hardly say that these machines are actually seeing what they are detecting. Rather, amongst a gigantic attribute space, these machines have learned patterns in data that represent these objects in 0s and 1s. Artificial intelligence can be simply described as the ability to transform and filter data, and then apply statistical and algebraic techniques to detect patterns that can be used to make simple decisions. With enough data, and a very narrow context, these pattern recognizers can be even better than humans at certain simple tasks. But the more complicated the task is, the harder it is for an artificial intelligence to learn a pattern good enough to apply to it. Edge cases are hard to learn, and there is often too little training data to learn them sufficiently without overfitting. Ambiguity is basically impossible.
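To make that concrete, here is a minimal sketch of what “learning a pattern” amounts to in practice. It uses scikit-learn, and the feature names and numbers are entirely made up for illustration, not taken from any real vision system:

```python
# A minimal sketch of pattern recognition as decision making:
# fit a statistical model to labeled examples, then use the learned
# pattern to make a simple decision about new input.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is [width, height] of a detected shape,
# labeled 0 = "sign" or 1 = "pedestrian". Numbers are purely illustrative.
X_train = np.array([[0.9, 0.9], [1.0, 1.1], [0.4, 1.8], [0.5, 1.7]])
y_train = np.array([0, 0, 1, 1])

# "Learning" here is nothing more than fitting coefficients that
# separate the two classes in this narrow attribute space.
model = LogisticRegression().fit(X_train, y_train)

# An input close to the training data gets a confident answer...
print(model.predict_proba([[0.45, 1.75]]))

# ...but an input unlike anything seen in training is still forced
# into one of the known classes; the model cannot say "I don't know".
print(model.predict_proba([[3.0, 3.0]]))
```

In a narrow context with enough data, this kind of model works remarkably well. The trouble starts exactly where it starts above: at inputs the training data never covered.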

When you are relying on patterns, given some context, it is very hard to say that A and not-A are both acceptable answers for inputs that are mostly the same. What is a self-driving car supposed to do if it sees a unicorn on the road and it has never seen a unicorn? Maybe it thinks it is a barrier and will turn. Maybe it thinks it is an animal and will run over it if there is no room to move over. Who knows, unicorns don’t exist. What would you do? It depends. A lot changes if it is a guy in a unicorn suit versus a giant inflatable balloon. But an AI has no concept of costumes or balloons unless trained specifically for them. And most likely this AI just knows there is a foreign object on the road, because it has only been trained to understand regular road patterns.
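One crude way to cope with this, sketched below, is to abstain when no known class fits well: treat the input as a generic “foreign object” rather than forcing a guess. The labels, numbers, and threshold here are all made up for illustration; this is not how any actual self-driving stack behaves:

```python
import numpy as np

def decide(class_probs, labels, threshold=0.8):
    """Pick a label only when the model is confident; otherwise abstain.

    class_probs: predicted probabilities over the known classes (made up here).
    threshold: minimum confidence to accept a known label (arbitrary choice).
    """
    best = int(np.argmax(class_probs))
    if class_probs[best] >= threshold:
        return labels[best]
    # Nothing in the known vocabulary fits well. All the system can say
    # is "something is there" -- the unicorn problem in miniature.
    return "foreign object"

labels = ["barrier", "animal", "pedestrian"]
print(decide(np.array([0.05, 0.90, 0.05]), labels))  # -> "animal"
print(decide(np.array([0.40, 0.35, 0.25]), labels))  # -> "foreign object"
```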

When I discovered the state of artificial intelligence, I was initially rather disappointed. We were nowhere near creating a thinking machine. Even today, you hear all this hype about artificial intelligence and decision making, but the decision making is very limited and dumb at best, and the term is often used in place of the less cool, but more accurate, description: applied statistics. I don’t think we should fear the rise of a general AI. If anything, what we should fear about artificial intelligence is not that it is too smart, but that when given some important edge case that may impact human lives, it is too dumb.


William Greer

Full time software engineer, part time experimentalist, ready to build the future one small step at a time.