In a recent post, Ray Lucchesi outlined why AI is at a crossroads. The piece points to research showing that most deep learning methodologies require vast amounts of training data, yet still end up with very brittle solutions. Research scientist and neural network enthusiast Janelle Shane perfectly illustrates this with the example of sheep in computer vision analysis.
At its core, a neural network is simply a pattern recognition system trained on an immense dataset. The network itself has no context for, or understanding of, what it is actually analyzing beyond how the input fits into its pattern recognition schema. The network can assign a confidence metric to what it claims to have seen, but as the piece illustrates, this can become problematic.
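That confidence metric is typically a softmax over the network's raw output scores. A minimal sketch (the labels and scores here are hypothetical, not from any real model) shows how even an arbitrary set of raw scores gets converted into a confident-looking probability:

```python
import math

def softmax(logits):
    """Convert raw network outputs (logits) into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]  # shift by max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a network might produce for one image, one per label.
# A high "sheep" score yields high reported confidence, whether or not the
# image actually contains a sheep -- the network only sees pattern fit.
labels = ["sheep", "dog", "flower", "bird"]
logits = [4.2, 1.1, 0.3, -0.5]
confidences = softmax(logits)
prediction = labels[confidences.index(max(confidences))]
print(prediction, round(max(confidences), 3))  # -> sheep 0.931
```

The point is that the number reported as "confidence" measures how strongly the input matches a learned pattern, not how correct the label is.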
For sheep, the neural network is easily fooled when presented with pastoral imagery. Lush green fields devoid of the creatures are associated with them because of the dataset used for training. In fact, the actual presence of sheep doesn’t automatically trigger recognition. Put a leash on a sheep and it becomes a dog. Put orange coats on them in a picturesque field and they become flowers.
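The failure comes from spurious correlations in the training data: if every sheep photo shows grass and every dog photo shows a leash, the classifier learns the context, not the animal. A toy nearest-centroid classifier (hand-picked features standing in for whatever a real network learns; everything here is illustrative, not Shane's actual experiment) reproduces both mistakes:

```python
import math

# Feature vectors are [grass, woolly_blob, leash] -- hypothetical hand-crafted
# features. The training set is biased the way Shane describes: sheep always
# appear on grass, dogs always appear on leashes.
training = {
    "sheep": [[1.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.9, 1.0, 0.0]],
    "dog":   [[0.0, 1.0, 1.0], [0.1, 1.0, 1.0], [0.0, 1.0, 0.8]],
}

def centroid(vectors):
    """Mean feature vector for one class."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

centroids = {label: centroid(vs) for label, vs in training.items()}

def classify(features):
    """Pick the class whose centroid is nearest to the input features."""
    return min(centroids, key=lambda lbl: math.dist(features, centroids[lbl]))

print(classify([1.0, 0.0, 0.0]))  # empty green field, no animal -> sheep
print(classify([1.0, 1.0, 1.0]))  # a sheep on a leash          -> dog
```

The empty field is labeled "sheep" because grass was the most reliable sheep signal in training, and the leashed sheep flips to "dog" because the leash feature outweighs everything else.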
Perhaps the most interesting takeaway is that the network is at a loss for any surreal imagery. The bizarre sight of sheep in a tree resulted in the network thinking it was a flock of birds. There doesn’t seem to be any way around this for neural networks. Training them to be particularly aware of outlier images would (I think) decrease the overall certainty for ordinary object recognition.
This proves Ray’s point from his earlier piece. Current neural network and AI research is remarkable, but limited by how the networks must be trained. We’re starting to see the limits of pattern recognition, whether we’re talking about sheep or other objects. As unsupervised learning continues to advance, we’ll hopefully see neural networks that need less training data and return more robust results.
Janelle Shane comments:
If you’ve been on the internet today, you’ve probably interacted with a neural network. They’re a type of machine learning algorithm that’s used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies – including Google, Microsoft, IBM, and Facebook – have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes.
Read more at: Do neural nets dream of electric sheep?