Bimodal IT – How Does IT Need to Change?

Gartner defines Bimodal IT as an organizational model that segments IT services into two categories based on application requirements, maturity and criticality. “Mode 1 is traditional, emphasizing scalability, efficiency, safety and accuracy. Mode 2 is nonsequential, emphasizing agility and speed.” A recent Forbes article also highlighted the differences.

Now what does all this mean to IT? There are four impacts IT needs to consider.

a) Applications in mode 2 are more agile, likely built with microservices and running in cloud environments, mostly public clouds. They will also be web scale, typically supporting mobile apps or big data analytics workloads. IT needs to enable back-end infrastructure to support such applications and services, and must be willing to work out solutions both on-prem and in public clouds. For example, microservices will be delivered through containers such as Docker, so IT should focus on preparing the latest technologies for building these infrastructures to help application developers and operations teams deploy these apps. These technologies can range from a simple farm of Docker hosts managed as a cluster through a Cloud Management Platform (CMP) like BMC CLM, to more complex cluster managers and schedulers such as Google’s Kubernetes, or even datacenter operating systems like Mesos. These are the next-generation infrastructures needed for running cloud applications, and IT should become proficient in them and be able to manage this infrastructure efficiently.
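As a deliberately simplified sketch of that contrast (the image name and registry below are hypothetical), running a containerized service on a lone Docker host versus handing it to a cluster manager such as Kubernetes looks like this:

    # Run a hypothetical microservice image on a single Docker host...
    docker run -d --name orders-svc -p 8080:8080 myregistry.example.com/orders-service:1.0

    # ...or hand the same image to a Kubernetes cluster, which schedules, scales and exposes it
    kubectl create deployment orders-svc --image=myregistry.example.com/orders-service:1.0
    kubectl scale deployment orders-svc --replicas=3
    kubectl expose deployment orders-svc --port=80 --target-port=8080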

b) The rate of change of these apps is going to be much higher than the traditional 6-12 month delivery cycle for legacy apps. The apps team needs to implement a DevOps pipeline that allows quick releases of applications through build, test and deploy stages. IT should get out of the way of the apps team and instead provide API-based services, such as the security services it wishes to enforce, that can be called from DevOps pipelines. IT must also allow environments to be created and destroyed through APIs in public or on-prem private clouds, enabling this rather than being a roadblock. Even shadow IT is alright. If there are minimal policies on security and compliance for applications or infrastructure, these should be provided by IT as an API-based service. For example, if IT requires security compliance checks for vulnerabilities in Docker containers, IT must provide API-based compliance as a service.
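For illustration only, a pipeline stage calling such a compliance service might look like the following sketch; the endpoint, payload and image name are hypothetical:

    # Hypothetical pipeline stage: ask an IT-provided compliance API to scan a
    # freshly built image before allowing the release to proceed.
    IMAGE="myregistry.example.com/orders-service:1.0"
    curl -sf -X POST https://it-compliance.example.com/api/v1/scan \
         -H "Content-Type: application/json" \
         -d "{\"image\": \"${IMAGE}\"}" \
      || { echo "Compliance scan failed - blocking release"; exit 1; }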

c) Application updates with creative out-of-the-box deployment strategies in production. Application changes have to be pushed into production using modern techniques such as the blue-green deployment model, used extensively by companies like Netflix on AWS, or a rolling update model. Both are based on the common theme that a new version of the application is deployed alongside the existing version, validated by releasing some traffic to it and verifying it in production, and then gradually made the version that serves all traffic. DevOps tools must support this deployment orchestration into production.
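On a Kubernetes cluster, for example, a blue-green cutover can be sketched roughly as follows; the service, deployment and label names are illustrative, not a prescribed implementation:

    # The 'myapp' Service currently routes to pods labeled version=blue.
    kubectl apply -f myapp-green-deployment.yaml       # deploy the new version alongside blue
    kubectl port-forward deploy/myapp-green 8080:8080  # smoke-test green before the cutover
    # Flip the Service selector so production traffic moves to green.
    kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
    # Rolling back is just patching the selector back to version=blue.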

d) Less governance and control. Although some minimum viable governance is certainly needed even for the new agile IT, the level and style of governance differ from mode 1. In mode 1, governance is stringent and rigid, based on approvals and change control boards. In mode 2, the new agile way, we loosen this a bit: governance is based on permissions and is built into the DevOps process itself, for example by making basic compliance and vulnerability testing of applications part of the DevOps pipeline.

By adapting to this new mode 2 agile IT, IT and application development teams can achieve the goal of building and delivering new applications faster. IT products such as cloud management, DevOps, deployment, configuration management, compliance, patching and monitoring tools need to adjust to ensure that both mode 1 and mode 2 are supported, based on the use cases and types of applications these tools serve.


3 Steps to Introduce Docker Containers in the Enterprise

Docker container technology has seen a rapid rise in early adoption and broad market acceptance. It is seen as a strategic enabler of business value because of the benefits it can provide in terms of reduced cost, reduced risk and increased speed. Unfortunately, enterprises often do not know how to introduce Docker to get business value, how to run Docker in dev, test and prod, or how to effectively use automation with Docker. We propose a three-step yellow brick road that allows enterprises to take on the journey of using Docker. This journey starts with ad-hoc Docker usage in the Evaluation phase, followed by increasing levels of usage and automation through the Pilot and Production phases.

Step 1. Evaluation

In the early phases, engineers ‘play’ with and ‘evaluate’ Docker technology by dockerizing a small set of applications. First, a Docker host will be needed. Ubuntu or Red Hat machines can be used to set up Docker in a few minutes by following the instructions on the Docker website. Once the Docker host is set up, initial development can be done in insecure mode (no need for certificates in this phase). You can log in to the Docker host and use docker pull and docker run commands to run a few containers from the public Docker Hub. Finally, selecting the right applications to dockerize is extremely important in this phase. Stateless internal or non-production apps are a good place to start. Conversion requires the developer to write Dockerfiles and become familiar with the docker build command as well. The output of a build is a Docker image. Usually, an internal private Docker registry is installed, or the public Docker Hub is used with a private account so your images do not become public.
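A typical evaluation session, assuming an Ubuntu host and illustrative image and repository names, might look like this:

    # Install Docker on an Ubuntu host and try a container from the public Docker Hub.
    sudo apt-get install -y docker.io
    docker pull nginx:latest
    docker run -d -p 8080:80 --name web nginx

    # Dockerize a simple app: write a Dockerfile for it (FROM a base image, COPY the
    # app, set a CMD), then build and push the resulting image.
    docker build -t myorg/myapp:0.1 .
    docker push myorg/myapp:0.1   # needs a private Docker Hub account or an internal registry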

Step 2. Pilot It

In the Pilot phase, the primary goal is to start bringing in IT and DevOps teams to work through the infrastructure and operations needed to set up Docker applications. An important part of this phase is to “IT-ize” the Docker containers and run a pilot in IT production so that the IT operations team can start managing Docker containers. This phase requires that IT operations manage dual stacks: virtualization platforms like VMware vCenter and vSphere infrastructure for virtual machines, as well as new infrastructure for running Docker application containers.

In this phase, management systems such as BMC CLM, BMC BSA and BMC DevOps products will be needed in four primary areas:

a) Build Docker infrastructure: Carve out a new Docker infrastructure consisting of a farm of Docker hosts to run containers alongside traditional virtualization platforms and hybrid clouds.

b) Define and deploy your app as a collection of containers: These products provide blueprints to define an application topology consisting of Docker containers, spin the containers up, and then provide day 2 management for end users, such as starting, stopping and monitoring Docker applications. They also integrate with Docker Hub or Docker Trusted Registry for sourcing images.

c) Build your delivery pipeline: BMC DevOps products offer CI/CD workflows for continuous integration and continuous deployment of Docker images.

d) Vulnerability testing of containers: BMC BSA can be used to do SCAP vulnerability testing of Docker images.

Step 3. Put it in Production
Finally, in the ‘put it in production’ phase, Docker containers are deployed to production infrastructure. This requires not just DevOps and the deployment of containers to a set of Docker hosts, but also security, compliance and monitoring. Supporting complex application topologies is one degree of sophistication many enterprises will desire, allowing a gradual introduction to the benefits of containers while keeping the data in traditional virtual or physical machines. Another degree of sophistication is the introduction of more complex distributed orchestration to improve datacenter utilization and reduce operational placement costs. While the previous phase used static partitioning of infrastructure resources into clusters, this phase uses state-of-the-art cluster schedulers such as Kubernetes or Fleet. Finally, governance, change control, CMDB integration and quota management are some of the ways an enterprise can start governing the usage of Docker as it grows. Reducing container sprawl through reclamation is an additional process that needs to be automated at this level.
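As a rough sketch of what handing placement to a scheduler buys you (deployment name, image and resource numbers are illustrative, and Kubernetes is used here as one example), compare this with pinning containers to statically partitioned hosts:

    # Let the cluster scheduler decide which hosts run the containers, restart them
    # on failure, and bin-pack the cluster based on declared resource needs.
    kubectl create deployment orders-svc --image=myregistry.example.com/orders-service:1.0
    kubectl set resources deployment orders-svc --requests=cpu=250m,memory=256Mi
    kubectl scale deployment orders-svc --replicas=5
    kubectl get pods -o wide   # shows how the replicas were spread across cluster nodes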

Each enterprise should evaluate the business benefits at the end of each of these steps to determine whether the ROI has been achieved and the goals accomplished. We believe that a three-step phased approach to introducing Docker, with increasingly sophisticated usage and automation, makes it easy to test drive and productize Docker inside enterprises.

Running Docker containers on Swarm, Mesos and Kubernetes Clusters

We believe that the next-gen cloud and datacenter will be based on cluster managers such as Kubernetes and Mesos. Some call this the “datacenter operating system”. These cluster architectures can support not only big data analytics and real-time frameworks like Hadoop and Storm, but also Docker containers and many other types of application workloads.

Mesos is a cluster management framework heavily used by Twitter and Airbnb. Kubernetes is a cluster manager based on Omega, the internal cluster manager that Google has been using for over a decade to manage its Linux containers. Even though it is the large-scale web and social internet companies that have used these infrastructures so far, the architectures are now slowly finding their way into traditional enterprises. I ran two experiments to get familiar with these technologies and convince myself that running Docker container workloads is truly as easy as it sounds.

I was able to successfully deploy Mesos, Marathon and ZooKeeper to build a datacenter OS on Amazon EC2 and provision several Docker containers on it through a UI and a REST API. A couple of hours on a lazy Saturday was enough to get this done. I also set up an Amazon ECS cluster. Once these two clusters were ready, it was very easy to provision containers to either of them.
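For anyone curious what “provisioning a container through the REST API” looks like, here is roughly the kind of request Marathon accepts (the host address and image are placeholders for your own setup):

    # Ask Marathon to run two nginx containers somewhere on the Mesos cluster.
    curl -X POST http://marathon.example.com:8080/v2/apps \
         -H "Content-Type: application/json" \
         -d '{
               "id": "/demo-nginx",
               "cpus": 0.25,
               "mem": 128,
               "instances": 2,
               "container": {
                 "type": "DOCKER",
                 "docker": { "image": "nginx:latest", "network": "BRIDGE" }
               }
             }'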

In a later post, I will share the instructions to get this done. All you need is an AWS account. It is really exhilarating to see that, without IT, I can deploy large, complex infrastructures and push Docker workloads onto them so easily. This is clearly going to be the wave of the future.

Best practices for containerizing your application – Part-I

We have had a team working to dockerize a few of our applications over the past few months. As we went through this dockerization process, we collected a set of challenges and best practices in using Docker. We don’t yet have all the answers, but at least we know the questions and issues that any application team will face when converting a traditional application into Docker containers. Broadly, we have grouped our best practices into three categories:

  1. Designing containers for your apps
  2. Getting the DevOps pipeline ready
  3. Running the containerized app in operations (production)

We will cover each of these in three parts.

1. Design your containerized app

Break your app down

One of the first steps is to understand the overall architecture of your application in terms of tiers, complexity, stateful/stateless components, datastore dependencies and so on. It is also useful to know the running state of the application, such as the number of processes and how distributed the application is. Based on this information, you may choose to break a monolithic application into components, each of which becomes a container, or to keep the monolithic application as one container. If there are multiple containers, the complexity of the solution increases because the communication/links between containers will need to be designed. Of course, one big monolithic container is also difficult to work with if it becomes huge (multiple GB), and replacing components would require a full forklift upgrade, losing the benefits of containers. The rest of the discussion applies to each component container.
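As a minimal sketch of a two-container split (the image and container names are illustrative), the app tier and the database tier can talk over a user-defined Docker network:

    # Create a network so containers can reach each other by name.
    docker network create myapp-net
    docker run -d --name myapp-db  --network myapp-net \
           -e POSTGRES_PASSWORD=changeme postgres:9.6
    docker run -d --name myapp-web --network myapp-net -p 8080:8080 \
           -e DB_HOST=myapp-db myorg/myapp-web:1.0   # the web tier finds the DB by container name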

Base image selection – standardize and keep it updated

It is important to standardize on a base image, or a small set of base images, for your containers. These can be RHEL, Ubuntu or other operating system images. Traditionally they are maintained by an infrastructure team, although the apps team can maintain them as well. Note that these images will go through a number of compliance checks, and may be patched and rebuilt due to vulnerabilities, so they require careful selection and attention throughout their lifecycle to keep them updated. Finally, standardization is key, so that multiple application teams use a common set of base images. This helps enterprises simplify day 2 operations and management of these images.
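A minimal sketch, assuming a hypothetical internal registry and tag convention: every application image builds on a pinned, centrally maintained base, and rebuilding against a newer base tag picks up OS-level fixes.

    # Each application Dockerfile starts from the standard base image, e.g.
    #   FROM registry.example.com/base/ubuntu:14.04-2016.03
    # When the infrastructure team publishes a patched base, pull it and rebuild:
    docker pull registry.example.com/base/ubuntu:14.04-2016.03
    docker build -t myorg/myapp:0.2 .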

Configuration and Secrets

Configuration and secrets should never be burned into a container (Docker) image and must always be externalized. There are a number of ways to achieve this, such as environment variables, scripts with Chef/Puppet, and tools such as ZooKeeper and Consul. Secrets also require a store outside the container ecosystem, especially in production, such as Vault, Keywhiz or an HSM. Externalizing configuration ensures that when containers are provisioned in DEV, QA or PROD, each gets its own set of environment variables and secrets.
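A small sketch of what this can look like at run time (the variable names, file paths and Vault secret path are illustrative):

    # qa.env carries non-secret configuration; the DB password is read from Vault at
    # deploy time instead of being baked into the image.
    docker run -d --name myapp-qa \
           --env-file ./qa.env \
           -e DB_PASSWORD="$(vault read -field=password secret/myapp/qa/db)" \
           myorg/myapp:0.2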

Datastores and databases

Stateful containers such as databases and datastores require a careful analysis of whether they should be containerized at all. It is alright to keep them non-containerized in the initial releases of the application. Docker does offer data containers and mounted volumes, but we haven’t investigated these further.
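If you do containerize a datastore, one common pattern (sketched below with illustrative names) is to keep its state on a named volume or host mount so the data outlives any individual container:

    # Create a named volume and mount it at the database's data directory.
    docker volume create myapp-pgdata
    docker run -d --name myapp-db \
           -v myapp-pgdata:/var/lib/postgresql/data \
           postgres:9.6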

Once you have made a few of the above critical decisions about a plan to containerize your application, you are on your way. In part II, we will cover how to build the DevOps pipeline and best practices for tagging builds and testing images. Finally, in part III, we will cover best practices for running container applications in production.