It's been a busy time for tech's ongoing infatuation with containers. Amazon just announced EC2 Container Registry to simplify container management. The new Azure container service taps into Microsoft's partnerships with Docker and Mesosphere. And when a standard for containers is on the table, there's money on the table, too.
Everyone is talking about containers because they eliminate a host of development challenges and make it much easier to move applications across testing and production environments and between clouds. Containers are the technology that, many believe, delivers on the cloud's long-promised portability, helping avoid vendor lock-in and putting developers, system administrators, and their enterprises in the driver's seat.
Getting up to speed on containers is not easy, but the good news is that there is a way for developers and their enterprises to become an instant part of the container revolution.
It involves moving the applications that exist in enterprises today into containers so they can "build, ship, and run any app anywhere," as Docker says.
The key is to move just the application, not the entire machine -- and not an image of the machine. Let's call this approach "Machine to Container," or M2C. Packaging an existing application into a container -- moving just the application, not the operating system -- makes the application completely portable. You can leave the app in the container to be moved again, use the container as a distribution system, or dissolve the container and leave your app installed on the new machine or cloud. M2C can be viewed as evolutionary rather than revolutionary.
In fact, the model for encapsulating physical machines into virtual machines (VMs), each with its own operating system and programs, dramatically changed the makeup of enterprise data centers. Machine virtualization allowed developers to spread workloads across servers that weren't being fully utilized. VMware became the commercial pioneer of machine virtualization, and the automation it built to manage these new environments drove its dramatic growth. Cloud computing evolved from this virtualization, bringing efficiency, automation, and scale of operations for tremendous cost reductions.
Then along came Docker, whose container technology -- arguably another form of virtualization -- has seen some 800 million downloads. Like virtualization, containers redefine how applications are deployed. Subdividing compute resources has huge advantages, and everyone knows it. Google is now rattling sabers with Kubernetes, its container orchestration system. Startup CoreOS, which has its own container runtime, has adopted Google's management and provisioning approach to compete with Docker. Even VMware is transforming its offerings to participate in this emerging market.
While there might not be an 800-pound gorilla in the container market anytime soon, it is clear that the container approach is here to stay. With a new computing paradigm come many opportunities to add value, and it's clear the market will evolve from a single product offering into a robust ecosystem of companies serving it.
So what's the fastest way to start using containers? One of the biggest challenges to the adoption of virtual machines was converting old physical machines into virtual machines in order to realize the cost savings and agility that come with machine virtualization. A set of companies emerged in the mid-2000s to help system administrators migrate from physical to virtual machines. The same problem exists when it comes to containerizing existing applications.
With 99.9% of the applications in use today not yet containerized, it makes sense to get these applications into the container world fast, for all the same reasons that virtualization, containerization, and the cloud make sense. The goal is to move from machine to container: migrating existing applications from inside a physical or virtual machine into a container.
So how does machine to container work? Migrating existing machines and applications into containers can be compared to image migration. Image migration moves an entire machine, including the OS and the applications. Post-migration remediation includes removing physical machine device drivers and replacing them with suitable virtual devices. While this approach works well for physical-to-virtual and virtual-to-virtual use cases, the unit of work is the whole machine, and there is no visibility into any of the layers inside it (operating system, management, web server, app server, database server, etc.).
M2C, on the other hand, migrates apps by separating them from the operating system and copying them into a container. Once the application (with all its binaries, configuration, data, and dependencies) is separated from the OS and replicated from the source machine into a container, the resulting package can be copied or provisioned to another machine, including a newer platform like Windows Server 2012.
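To make the idea concrete, the M2C packaging step can be sketched as a Dockerfile: the application's binaries, configuration, and data are copied in from the source machine, while the OS layer comes from the container platform rather than from a machine image. All paths, package names, and the app itself below are hypothetical, chosen for illustration only.

```dockerfile
# Sketch only: the base OS layer is supplied by the container
# platform, not captured from the source machine.
FROM ubuntu:14.04

# Copy just the application's files, previously extracted from the
# source machine (e.g. via tar or rsync) -- not a full machine image.
# Directory and app names here are hypothetical.
COPY app-binaries/ /opt/myapp/
COPY app-config/   /etc/myapp/
COPY app-data/     /var/lib/myapp/

# Install the runtime dependencies the app needs on the new OS layer
# (hypothetical package name).
RUN apt-get update && apt-get install -y libssl1.0.0

# Launch only the application process, not an init system.
CMD ["/opt/myapp/bin/myapp", "--config", "/etc/myapp/myapp.conf"]
```

The resulting image can then be run on any Docker host -- physical or virtual, on-premise or cloud -- which is precisely the portability M2C is after.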
The destination can have different machine characteristics (physical, virtual, on-premise, or cloud) and different characteristics inside the machine (OS, management apps, Terminal Services, applications, etc.). Because the unit of work is granular (an app), not only can the characteristics of the host machine change, but the application configuration within the machine can change as well. This flexibility delivers the agility that system administrators and infrastructure architects seek: it avoids lock-in to a deployment stack and lets one keep up with new and emerging deployment offerings, such as new OS releases, data center management suites, and cloud services.
Most of today's enterprise applications are Windows-based and, by nature, difficult to move. With containers gaining momentum, M2C can be viewed as the box that moves this software into the modern world.