Most 2nd generation applications were deployed on
monolithic, proprietary servers that bundled all of the supporting services,
such as communications, databases, and security. For the 3rd
generation, developers have to assemble applications from a multiplicity of the
best available services, and must be prepared for those applications to be
deployed across many different hardware environments, including public,
private, and virtualized servers.
Fifteen years ago, virtually all applications were written
using well-defined stacks of services and deployed on a single monolithic,
proprietary server. Today, developers build and assemble applications from a
multiplicity of the best available services, and must be prepared for those
applications to be deployed across many different hardware environments,
including public, private, and virtualized servers.
Assembling applications this way
opens the door to adverse interactions between different services and
to difficulty in migrating and scaling across different hardware offerings. Managing
a matrix of many different services deployed across many different
types of hardware becomes exceedingly difficult.
There is a huge number of combinations and permutations of
applications/services and hardware environments that need to be considered
every time an application is written or rewritten. This creates a difficult
situation both for the developers who are writing applications and for the folks in
operations who are trying to create a scalable, secure, and high-performance
operations environment.
A useful analogy can be drawn from the world of shipping.
Before 1960, most cargo was shipped break bulk. Shippers and carriers alike
needed to worry about bad interactions between different types of cargo (e.g.
if a shipment of anvils fell on a sack of bananas). Similarly, transitions
between different modes of transport were painful. Up to half the time it took to ship
something could be spent unloading and reloading ships in port, and waiting for
the same shipment to be reloaded onto trains, trucks, and so on.
Along the way, losses due to damage and theft were large. And there was an N×N
matrix pairing many different kinds of goods with many different transport
mechanisms.
Containerization of applications and the virtual computing
infrastructure can be thought of as an intermodal shipping
container system for code. Containerization of code and its supporting
infrastructure enables any application and its dependencies to be packaged up
as a lightweight, portable, self-sufficient container. Containers have standard
operations, thus enabling automation. The same container that a developer
builds on a laptop will run at scale, in production, on VMs, bare-metal
servers, server clusters, public instances, or combinations of the above. Most
importantly, consistent security can be applied to all of the components that
reside in the container.
In other words, developers
can build their application once, and then know that it can run consistently
anywhere. Operators can configure their servers once, and then know that they
can run any application.
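To make this concrete, the sketch below (Go, Linux-only, run as root) shows the kernel primitives that container runtimes such as Docker build on: a container is an ordinary process started in its own namespaces, with its own hostname, process tree, network stack, and root filesystem. This is a minimal illustration, not Docker's implementation; the paths /tmp/rootfs (assumed to already hold an unpacked root filesystem containing /bin/sh) and /tmp/container.log are made up for the example.

```go
// container_sketch.go: a minimal, Linux-only sketch of the primitives a
// container runtime builds on. Run as root. /tmp/rootfs must already hold
// an unpacked root filesystem that contains /bin/sh; that path and
// /tmp/container.log are illustrative assumptions, not fixed conventions.
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		child()
		return
	}
	parent()
}

// parent re-executes this program as "child" inside new namespaces and
// collects the child's standard streams, much as a runtime collects a
// container's logs for real-time or batch retrieval.
func parent() {
	logFile, err := os.Create("/tmp/container.log")
	if err != nil {
		log.Fatal(err)
	}
	defer logFile.Close()

	cmd := exec.Command("/proc/self/exe", "child")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// New UTS, PID, mount, and network namespaces: the child gets its
		// own hostname, its own PID 1, its own mount table, and its own
		// (initially empty) network stack to which a virtual interface
		// and IP address can be attached.
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID |
			syscall.CLONE_NEWNS | syscall.CLONE_NEWNET,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, logFile, logFile
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

// child runs inside the new namespaces: it switches to a completely
// separate root filesystem and then hands control to the application
// (here just a shell).
func child() {
	must(syscall.Sethostname([]byte("container")))
	must(syscall.Chroot("/tmp/rootfs"))
	must(os.Chdir("/"))
	must(syscall.Exec("/bin/sh", []string{"/bin/sh"}, os.Environ()))
}

func must(err error) {
	if err != nil {
		log.Fatal(err)
	}
}
```

Run as root, the program drops you into a shell whose hostname, process IDs, mounts, and network interfaces are all private to the container, which is why everything the application needs has to be packaged inside its root filesystem.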
Once applications are packaged into containers, whether those containers
run on VMs or directly on physical servers, the following properties apply:
- Each application runs in its own container, with a completely separate root filesystem.
- System resources like CPU and memory can be allocated differently to each process container.
- Each process runs in its own network namespace, with a virtual interface and IP address of its own.
- The standard streams of each process container are collected and logged for real-time or batch retrieval.
- Changes to a filesystem can be committed into a new image and re-used to create more containers. No templating or manual configuration required.
- Root filesystems are created using copy-on-write, which makes deployment extremely fast, memory-cheap, and disk-cheap (see the sketch after this list).
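The resource and copy-on-write properties can also be illustrated at the kernel level. The sketch below is again a simplified illustration rather than any runtime's actual code: it assumes a Linux host using cgroup v2, run as root, and the /var/lib/demo paths and the cgroup name "demo" are made up for the example. Resource limits are plain files inside a cgroup directory, and a copy-on-write root filesystem is an overlay mount in which the read-only image layer is shared while each container writes only to its own thin upper layer.

```go
// cow_sketch.go: a simplified, Linux-only illustration (run as root) of
// per-container resource limits and copy-on-write root filesystems.
// It assumes cgroup v2 mounted at /sys/fs/cgroup and a read-only image
// layer already unpacked at /var/lib/demo/image; every path and the
// cgroup name "demo" are made up for this example.
package main

import (
	"log"
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	// Resource limits: a cgroup is a directory, and its limits are plain
	// files. Writing "100M" to memory.max caps the memory of every process
	// later placed in this cgroup; a runtime would then write the
	// container's PID into cgroup.procs to apply the limit.
	cg := "/sys/fs/cgroup/demo"
	if err := os.MkdirAll(cg, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(cg, "memory.max"), []byte("100M"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Copy-on-write root filesystem: the image layer (lowerdir) is shared
	// and read-only; this container's private writes go to a thin upper
	// layer (upperdir). Committing a container to a new image is, in
	// essence, saving that upper layer.
	for _, d := range []string{"upper", "work", "merged"} {
		if err := os.MkdirAll("/var/lib/demo/"+d, 0o755); err != nil {
			log.Fatal(err)
		}
	}
	opts := "lowerdir=/var/lib/demo/image," +
		"upperdir=/var/lib/demo/upper,workdir=/var/lib/demo/work"
	if err := syscall.Mount("overlay", "/var/lib/demo/merged", "overlay", 0, opts); err != nil {
		log.Fatal(err)
	}
	log.Println("copy-on-write root mounted at /var/lib/demo/merged")
}
```

Because starting another container needs only an empty upper directory and a new cgroup entry, creation is fast and costs almost no additional disk or memory, in contrast to cloning a full virtual machine image.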
For comments please e-mail paul@strassmann.com