When analyzing the data and analytics marketplace, it’s easy to get confused. So many products, so many claims – how can you sort it all out?
One way to make sense of a technical product is to look for differentiators. That is, what role does the technology play in the ecosystem, and, within that role, what makes the product – and the company – different from the competition?
Since its founding over 40 years ago, Teradata has been helping the world’s largest and most complex enterprises effectively capture, integrate, process and share data on a massive scale, supporting a variety of applications, from experimental to business-critical. Teradata Vantage – the latest evolution of our enterprise data and analytics platform – remains focused on that same role within the ecosystem while setting the standard for technology advancements and innovation. And beyond the technology, the Teradata philosophy and continuously advancing approach guide organizations as they exploit the technology to deliver business value quickly while contributing to a coherent and trustworthy data resource.
So, why is Teradata uniquely positioned to play this role within a modern enterprise? Here are a few reasons:
Platform choice and portability
Vantage runs on premises and on cloud platforms offered by the major providers – Amazon AWS, Microsoft Azure, and Google Cloud Platform (coming soon). With Vantage, you are not locked exclusively into the cloud or into any one cloud platform. And Vantage is the same software on all these platforms, which means you can easily move data and applications from one cloud provider to another, or from on premises to the cloud, and back again if the need arises, without re-writing applications, redesigning data structures, or making any other changes, except as may be needed to optimize performance around network constraints. And, while Vantage has many features not available in previous releases of Teradata software, moving to Vantage from prior versions is a simple upgrade, as most of our clients have already done. Again, no application changes required.
Cloud-only technologies, such as Snowflake, do not allow you to run their software on your own on-premises infrastructure. This all-or-nothing cloud approach presents challenges, as most large organizations will have at least some on-premises presence for years, maybe indefinitely. If you are forced completely into the cloud, you will not be able to co-locate data with applications on premises when that is the optimal choice, which could severely affect performance.
And if you plan to migrate any existing systems into the cloud, you’ll have to re-write all the ingestion, all the data structures and processing, and all the application access. That’s not really a migration. That’s a re-do. If you do manage to find, decipher and then re-code all the complex data structures and interfaces that exist, after a few years of even perfectly executed project work, you’ll have exactly the same results you had before you started, likely at a higher cost and risk. All this while distracting attention from the new value associated with important business initiatives.
The situation is even more restrictive with technologies offered by cloud infrastructure providers, such as Amazon’s Redshift, Google’s BigQuery or Microsoft’s Azure Synapse Analytics. Not only do they deny you the freedom to run their software on premises, they also restrict you from running their software in any other provider’s cloud environment. Can you guess why?
With “data gravity” driving the optimum location for applications, you should expect the flexibility to move data and applications around as needed, choosing physical locations and platforms that allow the most direct communication possible between each application and the data it needs.
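The data-gravity idea can be sketched as a small placement calculation (purely illustrative; the location names and data sizes below are hypothetical, not Vantage features): for each candidate location, sum the data that would have to cross the network, then place the application where that sum is smallest.

```python
# Illustrative sketch of "data gravity": place an application where it
# minimizes the volume of data pulled across the network.
# All names and numbers here are hypothetical.

def best_location(candidates, datasets):
    """candidates: iterable of location names.
    datasets: list of (location, size_tb) pairs the app must read."""
    def transfer_cost(loc):
        # Data already co-located with the app costs nothing to move.
        return sum(size for where, size in datasets if where != loc)
    return min(candidates, key=transfer_cost)

# The app reads 40 TB sitting on premises and 5 TB in AWS.
datasets = [("on_prem", 40), ("aws", 5)]
print(best_location(["on_prem", "aws", "azure"], datasets))  # → on_prem
```

Real placement decisions weigh latency, egress pricing, and regulatory constraints as well, but the gravitational pull of the largest dataset usually dominates, which is why portability across locations matters.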
Massive scalability without cost explosion
Effective scalability is about rapidly and easily expanding capacity without an exponential increase in cost. Other technologies do offer the ability to flex quickly, but there’s a catch: they also offer the ability to increase your costs over time far more than you ever would have anticipated. Vantage offers considerably more predictable and controllable costs in proportion to new workloads – more data, more users, more query complexity and so on. To ensure optimum use of resources, Vantage automatically transforms every query for efficient execution using a sophisticated optimizer that directs queries, individually and collectively, to take advantage of massive parallelism and highly refined data retrieval. And with the most mature workload management in the industry, applications can be managed and controlled automatically, allocating resources where needed while mitigating the impact of unruly queries, thus further containing costs associated with consumption. Our competitors will tell you this level of sophistication is not needed. After all, with their technology, you can just add more capacity – and more zeroes to your next invoice.
To keep the costs of storage under control, Teradata has always had a philosophy that considers the “temperature” of data to determine the best storage mechanism. That is, the more frequently accessed or “hot” the data is, the more performant the storage should be, while high-volume “cold” data is kept on lower-cost storage, accessible seamlessly as needed. The latest manifestation of this concept is what we call Native Object Store (NOS). This feature allows you to store massive amounts of data on very low-cost storage such as AWS S3, Azure Blob Storage, or Google Object Store and access that data as if it were stored directly in Vantage. This extends our long-standing ability to access data outside the Teradata platform, integrated as needed for applications.
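The temperature idea above can be sketched as a simple tiering policy (a minimal illustration; the thresholds and tier names are assumptions for the sketch, not how Vantage actually decides placement):

```python
# Illustrative "data temperature" policy: frequently accessed data goes
# to fast storage, rarely accessed data to cheap object storage.
# Thresholds and tier names are hypothetical.

def storage_tier(accesses_per_day: float) -> str:
    if accesses_per_day >= 100:
        return "hot: in-platform SSD"
    if accesses_per_day >= 1:
        return "warm: standard block storage"
    return "cold: low-cost object store (e.g. S3)"

print(storage_tier(500))  # → hot: in-platform SSD
print(storage_tier(0.1))  # → cold: low-cost object store (e.g. S3)
```

The point of NOS is that the cold tier remains queryable in place, so moving data down the temperature scale trades only latency, not accessibility.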
Production capability for a variety of large-scale, business-critical workloads
Teradata Vantage is used for the most demanding production data and analytics applications in every major industry, and in the public and private sectors. Further, Vantage supports a mix of these business-critical applications as they share enterprise data to support cross-functional initiatives and a wide array of independent departmental applications. If you deploy data application-by-application with little concern for a coherent enterprise data strategy, the ability to share data may not be as important. But if you want to deliver data incrementally to support multiple workloads – any query, any time, against any data, for any number of users – without having to deploy the same data over and over again, you’ll need technology capable of sharing data easily, at the scale required for large enterprises. You don’t want to find out the hard way that your chosen technology can’t handle it.
In addition to supporting production-class applications, Vantage’s self-service capability allows end users to provision and explore data quickly, without waiting on IT. Having both these elements promotes a strong interplay between experimentation and production, applying the appropriate level of governance to each.
Ecosystem fit while shouldering the heavy lifting
Teradata Vantage takes on a specific role while blending cooperatively into the larger data and analytics ecosystem. End users and developers access data in Vantage in several ways, including industry-standard SQL, RESTful APIs, and popular tools such as R, Python, SAS, and many others. Vantage is engineered so these tools can pass the heavy lifting of data access and manipulation – including advanced machine learning – to Vantage, while end users and developers analyze data and develop solutions in familiar environments. In addition, Teradata has long-standing relationships and engineering partnerships with major data management and data integration tool vendors, again coupling the productivity of these specialty tools with the needed performance and scalability under the hood.
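The push-down pattern described above is general: rather than pulling raw rows into the client tool, the tool sends the computation to the engine and only the small result travels back. A minimal sketch using Python’s built-in sqlite3 module as a stand-in for any SQL engine (the table and data are invented for illustration):

```python
import sqlite3

# Stand-in for a large analytic store: the aggregation runs inside
# the engine, and only the summary rows cross back to the client.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 50.0), ("west", 75.0)])

# Pushed-down computation: one row per region returns,
# not the raw detail rows.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('east', 150.0), ('west', 75.0)]
```

At enterprise scale the same shape holds, but the detail table might hold billions of rows, which is exactly why shipping the query to the data beats shipping the data to the tool.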
Forty years of experience building world-class data and analytics programs
If you’ve been involved in data and analytics within a large organization for any length of time, you know success requires much more than choosing the right technologies. You need people who are skilled and experienced at planning and deploying these types of solutions. You need people who understand pragmatic approaches to data modeling, data quality analysis and management, metadata management, security and privacy, and more. And, if you want to responsibly deploy data and analytics to be shared across the enterprise, you have the added need for just-enough data governance, data stewardship and architecture planning, while leveraging agile and DevOps approaches applied specifically to data and analytics at an enterprise scale.
Teradata has cultivated these capabilities for decades. No other firm can match our experience because no one else has been as deeply embedded in the enterprise data and analytics programs across every major industry as Teradata has. Not only can we help you design, develop, and implement a successful enterprise data and analytics program, we can help you institutionalize the program, linking to the larger operating model including strategic planning, enterprise architecture, funding processes, program and project management, and solution development, thus establishing data and analytics planning and delivery as essential cogs within the larger machinery of the enterprise.
Teradata Vantage does not solve every data and analytics problem for every enterprise. We’ve positioned Vantage to play a specific role within the ecosystem and partner for everything else. Our focus is on the complex and demanding data needs of large, modern enterprises. We do this not only through our specialized yet standards-based technology, but also through our expertise in professionally building and deploying the necessary organizational structures, roles, processes and capabilities, just like we’ve always done. Just like we always will.