How to Get More ROI—Faster—From Machine Learning

Find out how to harness machine learning and AI to contain costs, increase revenue, and grow your organization’s customer base.

July 12, 2021 | 3 min read
Getting More ROI from Machine Learning

In 2016, Gartner’s Hype Cycle rated machine learning and AI “the most disruptive class of technologies.” Companies were quick to incorporate the promising capabilities into their advanced analytics efforts.
 
But the technologies haven’t delivered real value for most organizations.
 
Senior executives tell McKinsey they’re “eking out small gains from a few use cases” and “failing to embed analytics into all areas of [their] organizations.” And Gartner has since estimated that over 80% of machine learning projects fail to reach production.
 
Why have machine learning and AI failed to deliver?
 
Two barriers holding back your analytics
 
The traditional approach to analytics—per-application custom data feeds, multiple copies of the data, and implementation cycles of nine months or more for a single model—doesn’t work for analytics at scale.
 
To succeed at scale, enterprise analytics programs need to overcome the two largest barriers to ROI: scale of analytics and scale of data.
 
Barrier #1: Scale of analytics
 
Machine learning algorithms perform best when given tasks that are discrete and specific. As the size of the problem space becomes larger and more complex, single models fail to perform. 
 
Case in point: A child’s scooter, a wheelchair, and a city bus all have wheels, but each operates very differently. A self-driving car needs to understand the differences between these “vehicles”—while also knowing how to detect and behave at red lights, stop signs, and other traffic signs. A single model approach can’t manage that complexity at scale.   
 
The solution? Break the problem space into small units and deploy machine learning at the lowest level possible. Future-ready businesses will need hundreds of thousands—to millions—of algorithms working together. Hyper-segmentation—where a single algorithm is trained against each customer’s experience and data, instead of a single algorithm trained against all customers—will be necessary in some cases.
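To make the hyper-segmentation idea concrete, here is a minimal sketch in Python. The customer IDs, observations, and per-customer “model” (a simple mean of each customer’s own basket values, standing in for whatever estimator would actually be fitted per segment) are all illustrative assumptions, not part of any specific product:

```python
from collections import defaultdict

# Hypothetical per-customer observations: (customer_id, basket_value).
observations = [
    ("cust_a", 10.0), ("cust_a", 14.0),
    ("cust_b", 200.0), ("cust_b", 220.0),
]

def train_segment_models(rows):
    """Train one tiny model per customer instead of a single global model.

    Each 'model' here is just the mean of that customer's own values,
    a stand-in for a real estimator trained per segment.
    """
    grouped = defaultdict(list)
    for customer_id, value in rows:
        grouped[customer_id].append(value)
    # One model per segment, each seeing only that segment's data.
    return {cid: sum(vals) / len(vals) for cid, vals in grouped.items()}

models = train_segment_models(observations)
print(models["cust_a"])  # 12.0
print(models["cust_b"])  # 210.0
```

The key design point is that each model’s training data is partitioned by customer, so adding a new segment means training one more small model rather than retraining a single giant one.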
 
Barrier #2: Scale of data 
 
Why does Google always—and eerily—know what you’re about to ask it? 
 
Data.
 
Google has amassed trillions of observations from billions of daily searches—plus millions of data points from interactions with individual users. For machine learning to perform well, enterprises must use all their data. That includes data from across the enterprise, across products, across channels, and from third-party data vendors. 
 
However, we don’t just need more data. We need more data in context—cataloguing it by party or organization, by network, by time, and with geospatial and biometric overlays. Pennies, Pounds, Euros, and Rupees are all names for money, but we don’t want machine learning to try to understand each currency. We want it to understand credit risk, predict probability of a large loss, or determine optimal inventory levels.
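The currency example can be sketched as a small data-preparation step: resolve the currency context upstream, so downstream models see a single exposure measure rather than having to learn what Pounds, Euros, and Rupees mean. The exchange rates, field names, and records below are illustrative assumptions; in practice the rates would come from a curated reference-data table:

```python
# Hypothetical exchange rates to a single reference unit (USD).
RATES_TO_USD = {"GBP": 1.25, "EUR": 1.10, "INR": 0.012, "USD": 1.0}

def normalize_exposure(records):
    """Resolve currency context so models receive one 'exposure' field.

    The model never sees currencies, only a comparable numeric measure.
    """
    return [
        {
            "party": r["party"],
            "exposure_usd": round(r["amount"] * RATES_TO_USD[r["currency"]], 2),
        }
        for r in records
    ]

raw = [
    {"party": "acme", "amount": 100.0, "currency": "GBP"},
    {"party": "globex", "amount": 100.0, "currency": "EUR"},
]
print(normalize_exposure(raw))
```

This is the sense in which context turns raw observations into model-ready data: the business meaning (exposure per party) is attached before any algorithm runs.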
 
Data in context also needs to be relatively clean, so it can be well understood by all users—including the analytics algorithms, end users, auditors, and senior executives—or, in the worst case, opposing counsel.
 
Building reuse, flexibility, and ROI into analytics at scale
 
Consider this:
 

  • Data processing accounts for 80% of any given project’s time expenditure
  • Close to 65% of the processed data can be shared, even in remotely similar use cases 
  • Leveraging this data can save organizations hundreds of thousands of hours 
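The reuse figures above hinge on processing data once and sharing the result across use cases, rather than rebuilding it per application. A minimal sketch of that pattern, with an illustrative hand-rolled cache and a stand-in for the expensive data preparation (all names here are hypothetical):

```python
class FeatureStore:
    """Toy illustration of sharing processed data across use cases."""

    def __init__(self):
        self._cache = {}
        self.compute_count = 0  # how many times heavy processing ran

    def get(self, name, compute):
        if name not in self._cache:
            # Expensive data processing runs only on the first request.
            self._cache[name] = compute()
            self.compute_count += 1
        # Every later use case reuses the already-processed result.
        return self._cache[name]

store = FeatureStore()
monthly_spend = lambda: {"cust_a": 24.0}  # stand-in for heavy data prep

churn_features = store.get("monthly_spend", monthly_spend)   # churn model
upsell_features = store.get("monthly_spend", monthly_spend)  # upsell model
print(store.compute_count)  # 1: processed once, shared by both models
```

The contrast with per-application data feeds is the point: two models consumed the same processed data, but the processing cost was paid once.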

 
But traditional solutions require a time-intensive process of copying and moving data to each application.
 
That’s why scaling analytics isn’t just a matter of investing more money in analytics but investing in the right data management design. To be future-ready, organizations require a connected multi-cloud data platform to cut through complexity and deliver useful, actionable answers to any problem.
 
Find out how to harness machine learning and AI to contain costs, increase revenue, and grow your organization’s customer base. Sign up for our Analytics 1-2-3 webinar.

About Chris Hillman

Chris Hillman is the Senior Director, AI/ML for the International region, responsible for developing and articulating the Teradata Analytics 1-2-3 strategy and supporting the direction and development of ClearScape Analytics. Before this role, Chris led the International Data Science Practice and worked on a large number of AI projects across the region, focusing on generating measurable ROI from analytics in production at scale using Teradata, open-source, and other vendor technologies. Chris speaks regularly at leading conferences, including Strata, Gartner Analytics, O’Reilly AI, and Hadoop World. He also helped establish the Art of Analytics practice, which promotes the value of striking visualisations that draw people into data science projects while retaining a solid business-outcome foundation.

See all posts by Chris Hillman
