What is cloud native?
Exploring the fundamentals and value of cloud-native operations requires more than simply defining the term—but that's always a good place to start.
Cloud native describes applications, resources, architecture, and technologies that are designed to exploit the advantages offered by the cloud deployment and delivery model.
The app development style
"Cloud-native applications" may simply mean apps that were designed to run in the cloud alongside other cloud-based resources. This typically doesn't mean they're restricted to the cloud: Some cloud-native apps run just as well on premises as they do when deployed in a cloud environment. Others can work in on-premises or cloud infrastructure but may be less effective on premises—perhaps because the data center infrastructure in a given location can't support optimal functionality.
Some cloud-native apps are directly developed within the cloud—using a platform as a service (PaaS) development environment or similar system—while others are developed in on-premises contexts and then immediately migrated to the cloud. These days, the former is becoming more common.
The operational approach
When discussing the cloud in a broader context, cloud-native can refer to architecture, apps, and resources that are designed so they—and their workloads—benefit from operating within the infrastructure and app ecosystem of one specific cloud service provider (CSP).
The Amazon Web Services (AWS) object storage system, Amazon S3, exemplifies this definition of a cloud-native service, as do its Microsoft Azure and Google Cloud counterparts, Blob Storage and Cloud Storage. By contrast, a solution like Teradata VantageCloud is "cloud native" in the sense that it has been engineered to run effectively and efficiently in the cloud, yet it's compatible with AWS, Azure, and Google Cloud.
The rise of cloud-native infrastructure and applications
Given how many organizations embraced cloud migration over the past decade-plus, it's no surprise that a philosophy centered around maximizing cloud resource usage would take off. All of the biggest CSPs directly or indirectly encourage the sole-vendor approach to being cloud-native, as it drives their enterprise customers to use the cloud provider's broader catalog of cloud services.
However, it would be dismissive and inaccurate to say that cloud-native is solely some sort of coordinated marketing push by the industry's biggest players. The Cloud Native Computing Foundation (CNCF)—an offshoot of the Linux Foundation—is one of the most prominent proponents of cloud-native adoption. In addition to crafting the vendor-neutral standards that dictate how cloud-native operations should work, the CNCF focuses on supporting open-source development projects. Also, several of the most important systems for maintaining modern cloud-native environments are themselves open source, like Kubernetes.
Last but not least, DevOps teams within many organizations have themselves been key contributors to the rise of cloud native. While creating apps with platforms hosted by on-premises architecture certainly hasn't been phased out entirely, numerous developers are deciding they prefer the scalability and elasticity that cloud-based development enables. Dev team members with this mindset want not only to move app creation into the cloud, but also to focus strongly on developing apps that most effectively leverage the benefits cloud computing can offer.
Cloud-native services: Fundamental features
Cloud-native resources rely on several key features to function most effectively. Not all of these features are exclusive to platform-specific cloud-native apps and resources, but all of them were specifically cited by the CNCF when the organization established its guidelines for what it means to be cloud native.
Immutable infrastructure
Cloud-native applications—along with their associated data—are hosted on infrastructure that is called "immutable" because the servers aren't modified, improved, or even repaired in any way after the apps or resources have been deployed.
If members of the dev team or other relevant stakeholders determine that additional computing resources are needed for the optimal performance of a particular app, existing servers are discarded and new ones are provisioned. The same applies if original servers fail completely or require repairs. Automated processes govern new server provisioning when the task becomes necessary, so that app deployment remains predictable and stable without requiring developer or engineer intervention.
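The "replace, don't repair" pattern can be sketched in a few lines. This is an illustrative simulation, not a real provisioning API; the Server class, the image names, and the roll_out helper are all hypothetical stand-ins for what an automated provisioning pipeline does.

```python
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)

@dataclass
class Server:
    """A provisioned server; under the immutable model it is never modified in place."""
    image: str  # the machine image it was built from
    server_id: int = field(default_factory=lambda: next(_ids))

def provision(image: str) -> Server:
    """Automated provisioning: every server is built fresh from a known image."""
    return Server(image=image)

def roll_out(fleet: list[Server], new_image: str) -> list[Server]:
    """Instead of patching existing servers, discard the fleet and provision a new one."""
    return [provision(new_image) for _ in fleet]

fleet = [provision("app-v1") for _ in range(3)]
fleet = roll_out(fleet, "app-v2")  # an upgrade replaces servers, never mutates them
assert all(s.image == "app-v2" for s in fleet)
```

Because every server is rebuilt from a known image, deployments stay predictable: there is no drift between what was tested and what is running.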
Microservices
Apps built according to the microservices model are made up of independent segments—services—that can operate separately but together form an entire application. Each service performs a specific business function, is backed by its own data store, and can be updated without affecting the entire app. Netflix and Spotify are two major examples of enterprises that use microservices architecture, for functions ranging from end-user menus to back-end storage of media files.
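As a rough illustration (the service names and sample data below are hypothetical, not any real company's architecture), a microservices-style app is simply a composition of independently deployable services, each owning its own state:

```python
class CatalogService:
    """One business function with its own data store; deployable and updatable on its own."""
    def __init__(self):
        self._titles = {1: "Documentary A", 2: "Series B"}  # hypothetical sample data

    def title(self, media_id: int) -> str:
        return self._titles[media_id]

class BillingService:
    """A separate business function with entirely separate state."""
    def __init__(self):
        self._charges = []

    def charge(self, user: str, amount: float) -> str:
        self._charges.append((user, amount))
        return f"charged {user} ${amount}"

# The "application" is just the composition of independent services.
catalog, billing = CatalogService(), BillingService()
assert catalog.title(1) == "Documentary A"
assert billing.charge("alice", 9.99) == "charged alice $9.99"
```

Because neither class knows about the other's internals, either one can be redeployed or rewritten without touching the rest of the app.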
The microservices approach stands in stark contrast to a monolithic app architecture. Monolithic apps are layered and thus can't be updated in part without affecting the whole, often slowing down or interrupting operations. This reduces flexibility, prevents DevOps teams from scaling specific areas of an app as needed, and increases the chances of failing to live up to service-level agreements (SLAs).
Containers
Containers are the components within a microservices app architecture that allow each of the app's disparate services to operate independently. They serve as delivery systems that make it possible to deploy cloud-native apps within the cloud, on premises, or on hybrid cloud architecture.
Some of the most common containerization tools are open source, making them particularly easy for enterprises of varying sizes and capabilities to adopt. But whether an organization chooses Podman, BuildKit, or Docker to build and run its containers, it needs a container orchestration platform to manage how the various microservices' containers are deployed and coordinated. Enterprises can choose open-source orchestrators like Kubernetes and Rancher, or opt for the infrastructure-native orchestration platforms offered by the major CSPs.
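Conceptually, an orchestrator works by continuously reconciling the actual state of the system with a desired state: if a container is missing, start one; if there are too many, stop one. The sketch below mimics that reconciliation loop in plain Python; reconcile and the replica names are illustrative stand-ins, not a real Kubernetes API.

```python
def reconcile(desired: int, running: set[str]) -> set[str]:
    """One reconciliation pass: start or stop containers to match the desired count."""
    running = set(running)  # copy so the caller's state isn't mutated
    counter = 0
    while len(running) < desired:
        candidate = f"replica-{counter}"
        counter += 1
        if candidate not in running:
            running.add(candidate)  # "start" a new container
    while len(running) > desired:
        running.pop()  # "stop" a surplus container
    return running

state = reconcile(3, {"replica-0"})
assert len(state) == 3
state = reconcile(3, state - {"replica-0"})  # a container crashed; it is replaced
assert len(state) == 3
```

Real orchestrators run this loop continuously, which is why crashed containers reappear without anyone intervening.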
API gateways and service meshes
An application programming interface (API) serves as the connective tissue between two or more apps. APIs allow cloud-native apps to easily share data. API management systems oversee these connections by organizing and directing their request traffic with various tools, including API gateways that manage requests requiring multiple microservices.
For example, a Netflix user looking to change their membership tier sends a request using the service's desktop or mobile web interface. This engages one microservice that verifies the user's current membership level and changes it as requested, and another microservice responsible for charging their default payment method.
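That fan-out can be sketched as follows. The route, tier names, and prices here are invented for illustration, and real gateways also handle concerns like authentication and rate limiting; the point is that one external request is routed to every internal microservice it needs.

```python
# Hypothetical in-memory stand-ins for two internal microservices' data stores.
MEMBERS = {"alice": "basic"}
CHARGES = []

def membership_service(user: str, new_tier: str) -> str:
    """Verifies and updates the user's membership tier (its own data store)."""
    MEMBERS[user] = new_tier
    return new_tier

def billing_service(user: str, tier: str) -> float:
    """Charges the user's default payment method (a separate data store)."""
    price = {"basic": 7.99, "premium": 15.49}[tier]  # illustrative prices
    CHARGES.append((user, price))
    return price

def gateway(request: dict) -> dict:
    """API gateway: routes one external request to the microservices it requires."""
    if request["path"] == "/membership/change":
        tier = membership_service(request["user"], request["tier"])
        price = billing_service(request["user"], tier)
        return {"tier": tier, "charged": price}
    raise ValueError("unknown route")

resp = gateway({"path": "/membership/change", "user": "alice", "tier": "premium"})
assert resp == {"tier": "premium", "charged": 15.49}
```

The caller sees a single endpoint; the gateway hides the fact that two services with two separate data stores did the work.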
Service meshes are similar to API gateways in that they provide proper direction for data traffic. The main difference is that service meshes facilitate direct communication between specific microservices, which is strictly internal, whereas API gateways handle external requests. Sticking with the Netflix example, the microservices that respectively oversee original file storage and changes to audio/video format or playback quality remain in constant contact via service mesh.
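One typical service mesh behavior, transparently retrying failed service-to-service calls, can be approximated like this. It is a simplified sketch of what a mesh sidecar proxy does; with_retries and the flaky storage_service are hypothetical, and in a real mesh the application code would not change at all.

```python
def with_retries(call, attempts: int = 3):
    """Mesh-style behavior: the proxy, not the app, retries transient failures."""
    last = None
    for _ in range(attempts):
        try:
            return call()
        except ConnectionError as err:
            last = err
    raise last

# A flaky internal service: fails once, then succeeds.
state = {"calls": 0}

def storage_service() -> str:
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("transient network failure")
    return "ok"

assert with_retries(storage_service) == "ok"
assert state["calls"] == 2  # one failure absorbed, one success returned
```

Because the retry logic lives in the mesh layer, every microservice gets this resilience without reimplementing it.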
The benefits of cloud-native
Choosing a full-fledged cloud-native approach may be ideal for companies aiming to move most or all of their apps, workloads, data, and other related resources to the cloud. Such moves are typically most successful when enterprises start by migrating the most mission-critical apps and systems and then complete the replatforming, refactoring, or lift-and-shift migration step by step, rather than attempting to rush the process.
Key advantages that companies can realize when embracing cloud-native include—but aren't limited to—the following:
Steadier, more consistent operations
The way cloud-native apps are set up and deployed—using the principles of microservices architecture and container orchestration—contributes positively to the stability and consistency of critical enterprise operations.
Containers house the code, resource files, and other assets that each app microservice needs to run, and facilitate deployment on premises, via single- or multi-cloud infrastructure, or on a hybrid cloud. Meanwhile, the microservices model means that essential app updates occur in real time, as availability permits, and don't interrupt other app functions or otherwise create costly downtime. This practice, known as continuous delivery (CD), is considered one of cloud native's key tenets.
Smoother cross-service integration
Cloud-native apps set up to be native to a specific CSP benefit from the ease of integration between their architecture and that of other relevant services in the CSP's catalog. CSPs specifically design their services for efficient interoperability with one another. Ultimately, this can allow a level of simplicity and day-to-day efficacy that end users truly appreciate, while also making it easier for different business units to collaborate.
Greater flexibility
Opting for a fully cloud-native architecture isn't critical simply because it automates various server and infrastructure functions. It's critical because it makes essential apps more flexible and easier to operate—and, in turn, those advantages drive improvements within the enterprise at large. This applies regardless of how infrastructure must be scaled up or down to match specific business needs.
Improved resilience
Application and infrastructure failures are sometimes unavoidable despite developers' and engineers' best-laid plans. But the inherent features of cloud-native computing technologies ensure that these events impede operations less than they would with traditional enterprise apps and infrastructure. As one major example, orchestration platforms like Kubernetes arrange containers in clusters that allow for easy scaling—but also facilitate quick restarts and recoveries when difficulties arise.
Cloud native vs. cloud agnostic
These two cloud trends are regularly compared, and they can reasonably be considered parallel approaches—though they share too many similarities to be called opposites. For example, cloud-agnostic apps also use container-based deployment and orchestration.
Also, like cloud native, there's no single overarching definition of cloud agnostic that applies to every context. The simplest explanation is that it refers to cloud apps and resources—and, particularly, their workloads—that can easily be moved between different cloud platforms. But it's also sometimes used to describe apps that run simultaneously—and seamlessly—in the different clouds of a multi-cloud deployment.
Furthermore, cloud-agnostic can be an organizational approach to the use of cloud services. Enterprises characterizing themselves as cloud-agnostic generally don't use cloud apps or platforms that only work in conjunction with one CSP's infrastructure and app ecosystem. The main reason organizations choose cloud agnostic over cloud native as a philosophy is that they consider platform-specific cloud-native computing synonymous with vendor lock-in.
To some extent, that belief is correct. And there is inherent flexibility in cloud agnosticism. But it's not perfect, either: Initial setup of purely cloud-agnostic apps and infrastructure can be more costly, time-consuming, and demanding of expert oversight than cloud native. Also, in multi-cloud deployments, moving apps and workloads from one cloud to another can be both difficult and expensive due to data transfer complexities.
Maximizing value for every cloud-native resource
There's no "one or the other" mandate for using platform-specific cloud-native apps or "agnostic" apps that are themselves optimally designed for cloud deployment. Nor is there any such rule saying that you must adopt either strict cloud agnosticism or vendor-specific cloud-native computing as your operational approach. Many modern enterprises use a mixture of both.
The key is to realize the greatest possible value of the cloud-native tools you are using—whether that means an entire platform-specific deployment or single-platform apps for specific clouds within a larger hybrid multi-cloud deployment. What follows are some ways you can start working toward this goal right away:
Follow the Twelve-Factor method
This set of best practices will help your dev team establish a stable process for creating smooth-running, scalable applications. The Twelve-Factor method includes everything from code-build strategies to steps for proper process formation.
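Factor III of the method ("Config") holds that configuration belongs in the environment, not in code, so one build can run unchanged in development, staging, and production. A minimal sketch, using hypothetical variable names and defaults:

```python
import os

def load_config(env=os.environ) -> dict:
    """Read all deploy-specific settings from environment variables (Twelve-Factor,
    Factor III). The variable names and fallback defaults here are hypothetical."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# The same code yields different behavior per environment, with no rebuild.
cfg = load_config({"DATABASE_URL": "postgres://prod-db/app"})
assert cfg["database_url"].startswith("postgres://")
assert cfg["log_level"] == "INFO"
```

Accepting the environment as a parameter also makes the configuration trivially testable, which is in the spirit of the method's emphasis on dev/prod parity.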
Apply Agile processes broadly
Agile processes are ideal for development, but they don't have to be limited to it. They can also help other teams essential to a cloud deployment's functionality—such as the enterprise data management team—become more flexible and adapt to the rapidly changing circumstances enterprises often encounter.
Deploy apps with care
When determining how apps should be designed and deployed, choose carefully. Not every app and workload will be ideal for the cloud-native treatment—be it platform specific or otherwise—at any given time. Some might be better off on premises for the moment. Base your decision on current importance, projected strategic importance, and likely ROI.
Be mindful of application security
Cloud apps, no matter how they're deployed, always carry a certain level of risk—one that can easily exceed the risks of on-premises deployment. But the other advantages of cloud operations are too great to let this be a major deterrent. Instead, use the latest cloud security tools to protect apps, including secure web gateways (SWGs), zero-trust access policies, and next-generation firewalls (NGFWs).
Monitor app performance with analytics
Having a comprehensive single source of analytics truth at your disposal keeps you apprised in real time of your cloud applications' performance. Built for cloud-native deployment yet compatible with multiple CSPs' infrastructure, Teradata VantageCloud is an ideal complete cloud analytics and data platform for this purpose.
Connect with us to learn more about VantageCloud.