Chances are, your company is awash in data these days. And you’ve thrown everything from open-source databases to machine learning algorithms, as well as an army of data scientists, at the problem. But I’d bet that if you asked your data scientists how effective those tools are, they’d say they can still barely keep afloat. This is exactly why businesses today need to kick the tires on deep learning: there is simply too much data, and too much variety, for smart people to feature-engineer their way to the right solutions.
Previously, data science staff had fewer problems keeping up with a company’s data. For one, there was less of it. For another, it was mostly homogeneous. Companies were keeping tabs on simple business indicators, like tracking clicks on a checkout button or watching sensor data streams to predict repairs. Now we live in a world where Google and Facebook collect 14 types of data about their customers alone. And that variety of data will only grow as we continue to index the physical world in the form of pictures, videos, audio and sensor data.
To tackle this evolution, companies need to look beyond traditional machine learning, where algorithms work best on structured, numerical information. Traditional machine learning requires feature engineering: humans providing context for raw data so a machine can make better predictions. In short, feature engineering puts a human in the loop on most machine learning projects, with data scientists using domain knowledge of the data to make algorithms more accurate.
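To make feature engineering concrete, here is a minimal sketch in Python of the kind of hand-crafted transformation a data scientist might apply before training a traditional model. The event fields, thresholds and feature names are all invented for illustration; the point is that each line encodes a human judgment about what matters.

```python
from datetime import datetime

# A raw clickstream event, as a traditional pipeline might receive it.
# All field names and values here are hypothetical.
raw_event = {
    "user_id": "u123",
    "timestamp": "2017-06-12T14:35:00",
    "page": "/checkout",
    "cart_value": 89.50,
}

def engineer_features(event):
    """Hand-crafted features encoding domain knowledge, e.g. that
    time of day, page context and cart size all hint at purchase intent."""
    ts = datetime.fromisoformat(event["timestamp"])
    return {
        "is_weekend": ts.weekday() >= 5,              # shoppers behave differently on weekends
        "hour_of_day": ts.hour,                       # an analyst decided time of day matters
        "on_checkout_page": event["page"] == "/checkout",
        "high_value_cart": event["cart_value"] > 50,  # threshold chosen by a human, not learned
    }

features = engineer_features(raw_event)
print(features)
```

A deep learning pipeline would instead consume the raw event stream and learn representations like these on its own, which is exactly the automation the next paragraph argues for.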
There is finesse to feature engineering, but it somewhat defeats the purpose of machine learning: the idea of feeding any data into a machine and getting insights back, without much human mediation. It stands to reason that, for many emerging use cases, feature engineering must be at least partially automated. This is where machine learning applications morph into deep learning applications, which simplify feature engineering by putting more of the work on machines and on ever more complex, self-learning models.
Take video or images. Traditional machine learning models can’t make heads or tails of complex images, but deep learning can, relatively easily, teach itself the difference between cats and dogs. Vision and image detection are great deep learning applications. With them, businesses can track sentiment from pictures on Instagram. Or they can build image recognition into their apps, so users who want to repurchase an item can simply capture it with the camera — a current feature of Amazon’s app — to place the item in their shopping cart. In medicine, probabilistic deep learning models can focus on detecting cancer in MRI and CAT scans, and could do so inexpensively, giving more patients access to a lifesaving area of medicine.
Computer vision is getting so good, in fact, that engineers are now focused on confusing the machine, a field called adversarial learning. In many cases, a modicum of image noise is used to challenge an algorithm’s perception of what an object is, often via perturbations that are totally imperceptible to the human eye. A recent blog post by Elon Musk’s nonprofit, OpenAI, shows that adding a layer of noise to a picture can trick an algorithm into thinking it’s a totally different object. Once adversarial learning reaches full maturity, deep learning could prove even better than humans at processing visual information.
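To see how little noise it takes, here is a minimal NumPy sketch of the idea. It uses a toy linear classifier rather than a real deep network, and the weights, "image" and noise budget are all invented for illustration — a real attack would use the network’s gradients in exactly the same way.

```python
import numpy as np

# Toy linear "classifier": score > 0 means "cat", otherwise "dog".
# Weights and image are hypothetical stand-ins for a deep network.
w = np.linspace(-1.0, 1.0, 100)   # one weight per "pixel"
x = 0.05 * w + 0.5                # an image the model scores firmly as "cat"

def predict(image):
    score = (image - 0.5) @ w
    return "cat" if score > 0 else "dog"

# Fast-gradient-style perturbation: shift every pixel a tiny amount
# against the direction that supports the current prediction.
epsilon = 0.05                    # each pixel changes by at most 0.05
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # prints "cat"
print(predict(x_adv))  # prints "dog": tiny, structured noise flips the label
```

No single pixel changes by more than 5 percent of the intensity range, yet because every nudge pushes the score the same way, the classifier’s answer flips — the same mechanism behind the imperceptible attacks described above.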
As deep learning’s sophistication grows, it will be able to tackle other challenges, like natural language, which is often even more complex than images to decipher. An AI that understands a shopper, for example, could serve as a personal assistant in a store, providing a one-on-one level of service often lacking in big box stores. Interactions with current virtual assistants, like Siri or Alexa, will become much more intuitive and enhance the user experience.
Truth be told, user experience is exactly why executives need to keep an eye on deep learning. If a business can provide a better, more intuitive way of interfacing with its products, it will be providing value its competitors can’t match. Additionally, whoever knows their buyer best is ultimately going to win out, and deep learning is the premier way to turn data into dollars.
So, how do you do that exactly? Come back next time as we explore how different industries are actually using deep learning to impact business outcomes.
Atif is the Global VP, Emerging Practices, Artificial Intelligence & Deep Learning at Teradata.
Based in San Diego, Atif specializes in enabling clients across all major industry verticals, through strategic partnerships, to deliver complex analytical solutions built on machine and deep learning. His teams serve as trusted advisors to the world’s most innovative companies, developing next-generation capabilities for strategic, data-driven outcomes in artificial intelligence, deep learning and data science.
Atif has more than 18 years in strategic and technology consulting, working with senior executive clients. During this time, he has both written extensively and advised organizations on numerous topics, ranging from improving the digital customer experience to multi-national data analytics programs for smarter cities, cyber network defense for critical infrastructure protection, financial crime analytics for tracking illicit funds flow, and the use of smart data to enable analytic-driven value generation for energy & natural resource operational efficiencies.