3 Steps to Introduce Docker Containers in Enterprise

Docker container technology has seen a rapid rise in early adoption and broad market acceptance.  It is seen as a strategic enabler of business value because of the benefits it can provide in reduced cost, reduced risk and increased speed. Unfortunately, many enterprises do not know how to introduce Docker to get business value, how to run Docker in dev, test and prod, or how to use automation effectively with Docker.  We propose a 3-step yellow brick road to allow enterprises to take on the journey of using Docker.  This journey starts with ad-hoc Docker usage in the Evaluation phase, followed by increasing levels of usage and automation through the Pilot and Production phases.

Step 1. Evaluation

In the early phases, engineers ‘play’ with and evaluate Docker technology by dockerizing a small set of applications.  First, a Docker host will be needed.  Ubuntu or Red Hat machines can be set up as Docker hosts in a few minutes by following the instructions on the Docker website.  Once the Docker host is set up, initial development can be done in insecure mode (so there is no need for certificates in this phase).  You can log in to the Docker host and use the docker pull and docker run commands to run a few containers from the public Docker Hub.  Finally, selecting the right applications to dockerize is extremely important in this phase.  Stateless internal or non-production apps are a good starting point for conversion to containers.  Conversion requires the developer to write Dockerfiles and become familiar with the docker build command as well; the output of a build is a Docker image.  An internal private Docker registry can be installed, or Docker Hub can be used with a private account so your images do not become public.
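As a minimal sketch of this evaluation workflow, the commands below pull and run a public image, then build a tiny image of your own from a Dockerfile. The image names, the app.py file and the myorg repository are hypothetical placeholders, and the commands assume a working Docker daemon on the host:

```shell
# Run a container from a public Docker Hub image
docker pull nginx:latest
docker run -d --name web -p 8080:80 nginx:latest

# A minimal Dockerfile for a stateless internal app (app.py is a placeholder)
cat > Dockerfile <<'EOF'
FROM python:3-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

# Build the image, tag it, and push it to a private repository
docker build -t myorg/myapp:0.1 .
docker push myorg/myapp:0.1
```

Listing running containers with docker ps and inspecting logs with docker logs web are natural next steps while evaluating.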

Step 2. Pilot It

In the Pilot phase, the primary goal is to bring in IT and DevOps teams to set up the infrastructure and operations for Docker applications.  An important part of this phase is to “IT-ize” the Docker containers and run a pilot in IT production so that the IT operations team can start managing Docker containers.  This phase requires that IT operations manage dual stacks: virtualization platforms such as VMware vCenter and vSphere infrastructure for virtual machines, as well as new infrastructure for running Docker application containers.

In this phase, management systems such as BMC CLM, BMC BSA and BMC DevOps products will be needed in four primary areas:

a) Build Docker infrastructure: Carve out a new Docker infrastructure consisting of a farm of Docker hosts to run containers alongside traditional virtualization platforms and hybrid clouds.

b) Define and deploy your app as a collection of containers: These products provide blueprints to define an application topology consisting of Docker containers, spin the containers up, and then provide day-2 management for end users, such as starting, stopping and monitoring Docker applications.  They also integrate with Docker Hub or Docker Trusted Registry for sourcing images.

c) Build your delivery pipeline: BMC DevOps products offer CI/CD workflows for continuous integration and continuous deployment of Docker images.

d) Vulnerability testing of containers: BMC BSA can be used to do SCAP vulnerability testing of Docker images.
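The BMC products' own workflows are not shown in this post; as a generic sketch of what a delivery-pipeline stage for Docker images does under the hood, the shell steps below build, test and publish an image. The registry host, image name, run-tests.sh script and BUILD_NUMBER variable are all hypothetical stand-ins for whatever your CI system provides:

```shell
#!/bin/sh
set -e
# Hypothetical pipeline stage: build, test, and publish a Docker image
REGISTRY=registry.example.com              # private Docker Trusted Registry (assumed host)
IMAGE="$REGISTRY/myorg/myapp:$BUILD_NUMBER"

docker build -t "$IMAGE" .                 # continuous integration: build from the Dockerfile
docker run --rm "$IMAGE" ./run-tests.sh    # run the app's test suite inside the container
docker push "$IMAGE"                       # publish to the private registry for deployment
```

A real pipeline would add an image-scanning step (the SCAP testing mentioned above) between the test and push stages, so only clean images reach the registry.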

Step 3. Put it in Production
Finally, in the ‘put it in production’ phase, Docker containers are deployed to production infrastructure.  This requires not just DevOps and the deployment of containers to a set of Docker hosts, but also security, compliance and monitoring.  Supporting complex application topologies, in which containers are introduced gradually while data stays in traditional virtual or physical machines, is a degree of sophistication many enterprises will desire.  Another degree of sophistication is the introduction of more complex distributed orchestration to improve datacenter utilization and reduce operational placement costs.  While the previous phase used static partitioning of infrastructure resources into clusters, this phase will use state-of-the-art cluster schedulers such as Kubernetes or Fleet.  Finally, governance, change control, CMDB integration and quota management are some of the ways an enterprise can start governing the usage of Docker as it grows.  Reducing container sprawl through reclamation is an additional process that needs to be automated at this level.
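To make the scheduler idea concrete, the commands below hand a container to Kubernetes rather than picking a Docker host by hand, and let the scheduler place replicas across the cluster. This is an illustrative sketch, not part of the phases described above; the deployment name, image path and replica count are hypothetical, and a working kubectl context is assumed:

```shell
# Let the cluster scheduler choose placement instead of targeting a specific host
kubectl create deployment myapp --image=registry.example.com/myorg/myapp:0.1
kubectl scale deployment myapp --replicas=3   # scheduler spreads replicas across nodes
kubectl get pods -o wide                      # shows which node received each replica
```

The same pattern applies to other schedulers such as Fleet: you declare the desired workload and the scheduler handles placement, which is what improves utilization over static partitioning.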

Each enterprise should evaluate the business benefits at the end of each of these steps to determine whether ROI has been achieved and goals accomplished.  We believe that a 3-step phased approach to introducing Docker, with increasingly sophisticated usage and automation, makes it easy to test-drive and productize Docker inside enterprises.

