AWS Lambda and Serverless Computing – Impacts on Dev, Infrastructure, Ops and Security

Developing software as event-driven programs is not new.  Much of our UI code is event based, built with libraries such as jQuery and AJAX.  However, the AWS Lambda service from Amazon has taken this concept of event-driven, reactive programming to backend processing in the cloud and made it simple and intuitive to use in a few clicks.  AWS Lambda requires an event and some code to execute when that event occurs – that’s it.  No server provisioning, no auto-scaling, no server monitoring, and no server patching or security.  The “events” that can be specified with AWS are growing and include external events, such as an HTTPS REST endpoint API call, and events internal to the AWS ecosystem, such as a new S3 file or a new record in a Kinesis stream.  The “code” is simply a Node.js JavaScript or Java “function” that encapsulates the business logic. If I have just “events” and “code”, and no servers, then a number of questions arise:

  • What types of use cases are best suited for serverless apps?
  • How does my app design change due to Lambda?
  • What happens to my DevOps pipelines?
  • How do I do “infrastructure as code” when I don’t have any dynamic infrastructure to manage?
  • Do I still need operations?  And do I still need security scans?

This article answers these key questions by analyzing the impact of AWS Lambda on cloud native app design, DevOps pipeline tools, cloud lifecycle management tools (e.g., CMPs), operations, and security.  It also discusses how serverless infrastructure could become the next disruption in the cloud after containerization and virtualization by fundamentally changing the way we design, build, and operate cloud native applications.

Use Cases for Serverless Computing

The key use cases where serverless computing such as AWS Lambda has been used in the year since AWS announced the service in November 2014 are categorized below.

  1. Simple event-driven business logic that does not justify a continuously running server.  There are many use cases of the IFTTT (If-This-Then-That) form.  For example, suppose I want to post to a custom social tool whenever someone pushes to my Git repository.  This can be done easily by writing a Lambda function that listens for the webhook from the Git push and performs some action in my “social tool”.
  2. Event-driven, highly scalable transformation, notification, audit, or analysis of data – utilities such as transforming media files from one format to another can easily be written as Lambda functions and triggered as data arrives.
  3. Stateless microapplications – simple self-contained utilities or scripts that can be deployed in the cloud but, again, do not justify a server or PaaS to run them.
  4. Extreme scaling, such as taking a task and breaking it down into hundreds of subtasks, each an independent invocation of the same business logic coded as a Lambda function.
  5. Mobile and IoT backends that require extreme scaling.
  6. Rule-based processing – rules can be written as Lambda functions to drive business logic.  For example, operational metrics and AWS infrastructure management itself can be done through rules (see Netflix, for example).

This shows that a wide variety of applications, from simple apps to complex backends, can be developed on Lambda when automatic, virtually unlimited scaling and minimal operational overhead are key drivers.
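As a concrete illustration of the first use case, here is a minimal sketch of an IFTTT-style handler, written in Python for brevity; the event fields and the post_to_social_tool call are hypothetical placeholders, not a real API:

```python
# Hypothetical sketch of use case 1: react to a Git push webhook and post to
# a "social tool". The event shape and post_to_social_tool are assumptions.

def post_to_social_tool(message):
    # Placeholder for the real integration (e.g., an HTTP POST to the tool).
    return {"posted": message}

def handler(event, context=None):
    # A Git push webhook typically carries the pusher and the commit list.
    pusher = event.get("pusher", {}).get("name", "someone")
    commits = event.get("commits", [])
    return post_to_social_tool("%s pushed %d commit(s)" % (pusher, len(commits)))
```

The same shape applies to the other event sources: Lambda hands the handler an event, the handler runs the business logic, and no server is provisioned or managed.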

Cloud Native Application Design

Cloud native application design is heavily based on microservices that can be scaled easily.  With AWS Lambda paired with the AWS API Gateway service, the microservices design becomes even simpler and more scalable, with no server code to write or manage.  Each microservice is clearly focused on a single responsibility and business function, coded as a Lambda “function”.  API Gateway provides the REST endpoint, which becomes the “event” triggering this microservice Lambda function.

Application = Collection of Lambda Functions: A cloud native application can be thought of as a collection of microservices, or a collection of Lambda functions, each with its own REST endpoint, triggered from another event, or called from another Lambda function.  Lambda functions can easily be composed and chained to build more complex microservice orchestrations.  Note that Lambda requires a more functional, event-driven programming approach when designing and building apps.
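The chaining idea can be sketched as follows – each function's output becomes the next function's event. The handlers below are simulated in-process for illustration; in AWS, one function would invoke the next through the Lambda API (e.g., boto3's `invoke` call). Handler names and event fields are illustrative:

```python
# Sketch of Lambda function chaining, simulated locally. The handlers and
# event fields are made up for illustration, not a real application.

def validate(event, context=None):
    if "key" not in event:
        raise ValueError("event is missing 'key'")
    return event

def transform(event, context=None):
    # e.g., normalize an S3 object key
    return dict(event, key=event["key"].lower())

def chain(handlers, event):
    # Each handler's result is fed to the next one as its event.
    for handler in handlers:
        event = handler(event)
    return event

result = chain([validate, transform], {"key": "Photos/IMG_01.PNG"})
```

More complex orchestrations (fan-out, retries, error routing) follow the same pattern, with the composition logic living outside any single function.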

Hybrid: Except for some simple use cases outlined earlier, most complex cloud native apps will contain a mix of IaaS, PaaS, containers, and Lambda functions.  This requires that a declarative blueprint of the application allow not just servers and PaaS resources but also Lambda functions as resources.  Cloud management products (e.g., BMC CLM) should support such blueprints.

Development, DevOps and NoOps

Lambda-based applications will require new tool and technology integrations with AWS Lambda.

Local Development:  Developing and testing code locally is currently a challenge with AWS Lambda.  Lambda requires many manual steps, such as creating multiple IAM roles and dependent AWS resources; zipping up code, including dependencies, and uploading it to AWS; retrieving logs from CloudWatch; and so on. This is an opportunity for a new breed of tools, and for integration with existing tools, to simplify the developer experience.  Source code systems such as Git, IDEs such as Eclipse, and CI tools such as Jenkins could have much tighter integration with AWS Lambda. Imagine that I have just written a Lambda function in Eclipse and, with a right click, can test it by pushing the new version to Amazon and running JUnit test cases before check-in, without using the AWS CLI/console or performing the half dozen or so manual steps needed to set up my code on the AWS Lambda service.
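A small script can already smooth part of this today. The sketch below packages a function into the zip that Lambda expects; the upload itself would use boto3 (e.g., `update_function_code(FunctionName=..., ZipFile=data)`), which is omitted here so the packaging step can run anywhere. File names and contents are illustrative:

```python
import io
import zipfile

# Sketch: build the deployment zip in memory, ready to push to Lambda.
# The source file names here are illustrative.

def package_function(sources):
    """sources: {archive_name: file_contents} -> zip bytes for upload."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, text in sorted(sources.items()):
            zf.writestr(name, text)
    return buf.getvalue()

zip_bytes = package_function(
    {"index.js": "exports.handler = function(e, c) { c.succeed(e); };"})
```

Wiring this into a Jenkins job or an IDE action is exactly the kind of integration the tooling ecosystem could provide out of the box.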

CI/CD Pipelines: Microservices based on Lambda functions still require traditional release pipelines in which the code – the “Lambda function microservice” – is built, tested, and deployed to test environments.  This will require creating AWS Lambda environments for testing the Lambda function, integrating with AWS Lambda, and, after the tests are run, decommissioning the Lambda environments.  Many code delivery and release management products have started to add Lambda plug-ins to achieve this capability.

Lifecycle Management of Lambda Functions: Cloud Management Product (CMP)

Lambda functions are just event-triggered microservices, and hence require complete lifecycle management.  This can be accomplished by cloud management platforms (CMPs) such as BMC CLM.  As noted earlier, complex blueprints could have both Lambda resources and traditional cloud resources such as AMI-based machines.  Developers and admins require the ability to declaratively specify applications consisting of Lambda and regular cloud resources, as well as multiple deployment models such as dev, test, staging, and prod.  The CMP then takes these blueprints and provisions the resources.  In the case of Lambda, resources such as API Gateway endpoints and Lambda functions will be provisioned together with all the security and configuration needed to set them up properly (such as IAM roles and the linkages from API Gateway to Lambda).  Orchestration and visualization of a set of Lambda and non-Lambda cloud resources are additional capabilities that CMPs will provide.  End users and operations require capabilities to visualize all the Lambda offerings and Lambda functions that they have provisioned for development or production workloads.  There are also a number of day 2 actions possible on Lambda functions, such as increasing a function's memory or timeout.  Many cloud provisioning tools have started to build provisioning support for Lambda.
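For example, a memory or timeout change is a single-call day 2 action, but a CMP would validate it first. The sketch below checks a request against the Lambda limits in effect at the time of writing (128–1536 MB in 64 MB steps, up to a 300-second timeout – verify against current documentation); the boto3 call itself is left commented out:

```python
# Sketch of a day 2 "resize" action a CMP might expose for a Lambda function.
# The limits reflect the service at the time of writing and may have changed.

def validate_day2_change(memory_mb=None, timeout_s=None):
    errors = []
    if memory_mb is not None and (
            memory_mb < 128 or memory_mb > 1536 or memory_mb % 64 != 0):
        errors.append("memory must be 128-1536 MB in 64 MB increments")
    if timeout_s is not None and not 1 <= timeout_s <= 300:
        errors.append("timeout must be between 1 and 300 seconds")
    return errors

# A CMP would then apply a valid change, e.g.:
# if not validate_day2_change(memory_mb=512, timeout_s=60):
#     boto3.client("lambda").update_function_configuration(
#         FunctionName="my-func", MemorySize=512, Timeout=60)
```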

Security, Logs and Monitoring

Even though Lambda functions abstract away the servers, security is still needed.  Vulnerability scanning and web penetration testing of Lambda functions will be critical in assessing that there are no application-level vulnerabilities.  Compliance rules can also be written for such functions and evaluated regularly.

Lambda functions, as microservices, require log analysis and monitoring, and traditional tools will need plug-ins to support this.

What next?

We believe that serverless computing will be a disruptive force similar to containerization.  We also believe that event-driven computing is not suitable for all applications, and we expect to see microservices implemented using a mix of containers, virtual machines, and Lambda functions.  AWS Lambda will require our DevOps, application release management, cloud management, and security tools to adapt for effective operations, as described here.


Bimodal IT – How Does IT Need to Change?

Gartner defines Bimodal IT as an organizational model that segments IT services into two categories based on application requirements, maturity, and criticality. “Mode 1 is traditional, emphasizing scalability, efficiency, safety and accuracy. Mode 2 is nonsequential, emphasizing agility and speed.”  A recent Forbes article also highlighted the differences.

Now, what does all this mean for IT?  There are four impacts IT needs to consider.

a) Applications in mode 2 are more agile, likely built with microservices, and running in the cloud, mostly public clouds.  They will also be web scale, likely supporting mobile apps or big data analytics workloads.  IT needs to enable backend infrastructure to support such applications and services and must be willing to work out solutions both on-prem and in public clouds.  For example, microservices will be delivered through containers such as Docker, so IT should focus on preparing the latest technologies for building these infrastructures to help application developers and operations teams deploy these apps.  These technologies range from a simple farm of Docker hosts managed as a cluster through a cloud management product (CMP) like BMC CLM, to more complex cluster managers and schedulers such as Google’s Kubernetes, or even datacenter operating systems like Mesos.  These are the next-generation infrastructures needed for running cloud applications, and IT should become proficient in them and be able to manage this infrastructure efficiently.

b) The rate of change of these apps is going to be much higher than the traditional 6-12 month delivery cycle for legacy apps.  The apps team needs to implement a DevOps pipeline that allows quick release of applications through build, test, and deploy stages.  IT should get out of the apps team’s way and provide API-based services that can be called from DevOps pipelines, such as the security services IT wishes to enforce.  IT must also allow environments to be created and destroyed via API in public or on-prem private clouds, enabling this instead of being a roadblock.  Even shadow IT is alright.  If there are minimal policies on security and compliance for applications or infrastructure, these should be provided by IT as an API-based service.  For example, if IT requires a security compliance check for vulnerabilities in Docker containers, IT must provide API-based security compliance as a service.

c) Application updates in production with modern out-of-the-box deployment strategies. Application changes have to be pushed into production using techniques such as the blue-green deployment model, used extensively by companies like Netflix on AWS, or a rolling update model.  Both are based on a common theme: a release deploys the newer version alongside the existing version, validates it by releasing some traffic to it and verifying it in production, and then switches traffic gradually to the newer version of the application.  DevOps tools must support this deployment orchestration to production.
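The gradual switch at the heart of both models can be sketched as a weighted router plus a ramp schedule. Version names and percentages here are illustrative; a real deployment would drive this from a load balancer or router:

```python
import random

# Sketch of canary-style traffic shifting behind blue-green/rolling releases:
# send a growing fraction of requests to the new "green" version.

def choose_version(green_weight, rand=random.random):
    """green_weight is the fraction (0.0-1.0) routed to the new version."""
    return "green" if rand() < green_weight else "blue"

def ramp_schedule(steps):
    """Evenly spaced weights ending at 1.0, e.g. 4 -> [0.25, 0.5, 0.75, 1.0]."""
    return [round((i + 1) / float(steps), 2) for i in range(steps)]
```

Each weight in the schedule would be held while the new version is verified in production, and the rollout rolled back (weight returned to 0) if monitoring flags a problem.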

d) Less governance and control.  Although some minimum viable governance is certainly needed even for the new agile IT, the level and evaluation of governance differ from mode 1.  In mode 1, governance is stringent and rigid, based on approvals and change control boards.  We loosen this a bit in mode 2, the new agile way: governance is instead based on permissions and is built into the DevOps process itself – for example, basic compliance and vulnerability testing of applications becomes part of the DevOps pipeline.

By adapting to this new mode 2 agile IT, IT and application development teams can achieve the goal of building and delivering new applications faster.  IT products such as cloud management, DevOps, deployment, configuration management, compliance, patching, and monitoring tools need to adjust to ensure that both mode 1 and mode 2 are supported, based on the use cases and types of applications these tools serve.

3 Steps to Introduce Docker Containers in the Enterprise

Docker container technology has seen a rapid rise in early adoption and broad market acceptance.  It is seen as a strategic enabler of business value because of the benefits it can provide in terms of reduced cost, reduced risk, and increased speed. Unfortunately, many enterprises do not know how to introduce Docker to get that business value, how to run Docker in dev, test, and prod, or how to use automation effectively with Docker.  We propose a 3-step yellow brick road to help enterprises take on the journey of adopting Docker.  The journey starts with ad hoc Docker usage in the Evaluation phase, followed by increasing levels of usage and automation through the Pilot and Production phases.

Step 1. Evaluation

In the early phases, engineers ‘play’ with and evaluate Docker technology by dockerizing a small set of applications.  First, a Docker host is needed.  Ubuntu or Red Hat machines can be set up with Docker in a few minutes by following the instructions on the Docker website.  Once the Docker host is set up, initial development can be done in insecure mode (no need for certificates in this phase).  You can log in to the Docker host and use docker pull and docker run commands to run a few containers from the public Docker Hub.  Finally, selecting the right applications to dockerize is extremely important in this phase.  Stateless internal or non-production apps are a good place to start.  Conversion requires the developer to write Dockerfiles and become familiar with the docker build command as well.  The output of a build is a Docker image.  Usually, an internal private Docker registry is installed, or the public Docker Hub can be used with a private account so your images do not become public.
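Even evaluation-phase commands are worth scripting for repeatability. The sketch below composes a `docker run` command line so it can be reviewed or logged before execution; the image, container name, and ports are illustrative:

```python
# Sketch: build a `docker run` command line for the evaluation phase.
# The composed list can be handed to subprocess.check_call to execute it.

def docker_run_cmd(image, name=None, ports=None, detach=True):
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")                  # run the container in the background
    if name:
        cmd += ["--name", name]
    for host_port, container_port in sorted((ports or {}).items()):
        cmd += ["-p", "%d:%d" % (host_port, container_port)]
    cmd.append(image)
    return cmd
```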

Step 2. Pilot It

In the Pilot phase, the primary goal is to bring in the IT and DevOps teams to set up the infrastructure and operations for Docker applications.  An important part of this phase is to “IT-ize” the Docker containers by running a pilot in IT production so that the IT operations team can start managing Docker containers.  This phase requires that IT operations manage dual stacks – virtualization platforms like VMware vCenter and vSphere infrastructure for virtual machines, as well as new infrastructure for running Docker application containers.

In this phase, management systems such as BMC CLM, BMC BSA and BMC DevOps products will be needed in 4 primary areas:

a) Build Docker infrastructure: Carve out a new Docker infrastructure consisting of a farm of Docker hosts to run containers alongside traditional virtualization platforms and hybrid clouds.

b) Define and deploy your app as a collection of containers: These products also provide blueprints to define an application topology consisting of Docker containers, spin the containers up, and then provide day 2 management of them for end users, such as starting, stopping, and monitoring Docker applications.  They also integrate with Docker Hub or Docker Trusted Registry for sourcing images.

c) Build your delivery pipeline: BMC DevOps products offer workflows for continuous integration and continuous deployment (CI/CD) of Docker images.

d) Vulnerability testing of containers: BMC BSA can be used to do SCAP vulnerability testing of Docker images.

Step 3. Put It in Production

Finally, in the ‘put it in production’ phase, Docker containers are deployed to the production infrastructure.  This requires not just DevOps and deployment of containers to a set of Docker hosts, but also security, compliance, and monitoring.  Supporting complex application topologies is a degree of sophistication many enterprises will in fact desire, allowing a gradual introduction to the benefits of containers while keeping the data in traditional virtual or physical machines.  Another degree of sophistication is the introduction of more complex distributed orchestration to improve datacenter utilization and reduce operational placement costs.  While the previous phase used static partitioning of infrastructure resources into clusters, this phase will use state-of-the-art cluster schedulers such as Kubernetes or Fleet.  Finally, governance, change control, CMDB integration, and quota management are some of the ways an enterprise can start governing the usage of Docker as it grows.  Container sprawl reduction through reclamation is an additional process that needs to be automated at this level.

Each enterprise should evaluate the business benefits at the end of each of these steps to determine whether the ROI was achieved and the goals accomplished.  We believe that a 3-step phased approach to introducing Docker, with increasingly sophisticated usage and automation, makes it easy to test drive and productize Docker inside enterprises.

Running Docker containers on Swarm, Mesos and Kubernetes Clusters

We believe that the next-gen cloud and datacenter will be based on cluster managers such as Kubernetes and Mesos – some call this the “datacenter operating system”.  These cluster architectures can support not only big data analytics and real-time apps like Hadoop and Storm, but also Docker containers and many other types of application workloads.

Mesos is a cluster management framework heavily used by Twitter, Airbnb, and Google. Kubernetes is a cluster manager that draws on the internal Omega cluster manager Google has used for over a decade to manage its Linux containers. Even though it is the large-scale web and social internet companies that have used these architectures so far, they are now slowly finding their way into traditional enterprises.  I ran two experiments to get familiar with these technologies and convince myself that running Docker container workloads is truly as easy as it sounds.

I was able to successfully deploy Mesos, Marathon, and ZooKeeper to build a datacenter OS on Amazon EC2 and provision several Docker containers on it through a UI and REST API.  A couple of hours on a lazy Saturday was enough to get this done.  I also set up an Amazon ECS cluster.  Once these two clusters were ready, it was very easy to provision containers to either of them.
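On the Mesos cluster, provisioning a Docker container came down to POSTing an app definition to Marathon's /v2/apps endpoint. The sketch below builds such a definition; the schema matches the Marathon version I used (check the docs for yours), and the app id, image, and sizing values are illustrative:

```python
import json

# Sketch: a Marathon app definition for running a Docker container on Mesos.
# POSTing this JSON to http://<marathon-host>:8080/v2/apps launches it.

def marathon_app(app_id, image, cpus=0.25, mem=128, instances=1):
    return {
        "id": app_id,
        "cpus": cpus,
        "mem": mem,                      # MB per instance
        "instances": instances,
        "container": {
            "type": "DOCKER",
            "docker": {"image": image, "network": "BRIDGE"},
        },
    }

payload = json.dumps(marathon_app("/web", "nginx", instances=2))
```

Scaling the app up later is just another PUT with a higher `instances` count, which is much of what makes these schedulers attractive.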

In a later post, I will share the instructions for getting this done.  All you need is an AWS account.  It is really exhilarating to see that, without IT, I can deploy large, complex infrastructures and push Docker workloads onto them so easily.  This is clearly going to be the wave of the future.

Best practices for containerizing your application – Part-I

Our team has been working to dockerize a few of our applications over the past few months. As we went through this dockerization process, we collected a set of challenges and best practices for using Docker.  We don’t yet have all the answers, but at least we know the questions and issues that any application team will face when converting a traditional application into Docker containers.  Broadly, we have categorized our best practices into 3 categories:

  1. Designing containers for your apps
  2. Getting the DevOps pipeline ready
  3. Running the containerized app in operations (production)

We will cover each of these in three parts.

1. Design your containerized App

Break your app down

One of the first steps is to understand the overall architecture of your application in terms of tiers, complexity, stateful/stateless components, datastore dependencies, and so on.  It is also useful to know the running state of the application, such as the number of processes, and how distributed it is.  Based on this information, you can decide whether to break a monolithic application into components, each represented by a container, or keep the monolithic application as one container.  With multiple containers, the complexity of the solution increases, because the communication/links between containers must be designed.  Of course, one big monolithic container is also difficult to use if it becomes huge (multiple GB), and replacing components would require a full forklift upgrade, losing the container benefits. The rest of the discussion applies to each component container.

Base image selection – standardize and keep it updated

It is important to standardize on a base image, or a small set of base images, for your containers.  These can be RHEL, Ubuntu, or other operating system images.  Traditionally, these are maintained by an infrastructure team, but they can be maintained by the apps team as well.  Note that these images will go through compliance checks, and possibly patching and rebuilds due to vulnerabilities, and hence require careful selection and attention throughout their lifecycle to keep them updated.  Finally, standardization is key: multiple application teams should use a common set of base images.  This helps enterprises simplify day 2 operations and management of these images.

Configuration and Secrets

Configuration and secrets should never be burned into a container (Docker) image and must always be externalized.  There are a number of ways to achieve this, such as environment variables, scripts with Chef/Puppet, and tools such as ZooKeeper and Consul.  Secrets also require a store outside the container ecosystem, especially in production – for example Vault, Keywhiz, or an HSM.  Externalizing configuration ensures that when containers are provisioned in DEV, QA, or PROD, each environment gets its own set of environment variables and secrets.
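A minimal sketch of the environment-variable approach (the variable names are illustrative): the image stays identical across DEV, QA, and PROD, and only the injected environment differs.

```python
import os

# Sketch: read externalized configuration from the environment at container
# start. Required settings fail fast; optional ones get safe defaults.

def load_config(env=None):
    env = os.environ if env is None else env
    missing = [k for k in ("DB_HOST", "DB_PASSWORD") if k not in env]
    if missing:
        raise RuntimeError("missing configuration: %s" % ", ".join(missing))
    return {
        "db_host": env["DB_HOST"],
        "db_password": env["DB_PASSWORD"],    # injected from a secret store
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }
```

Failing fast on missing required settings surfaces a misconfigured environment at container start rather than deep inside a request path.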

Datastores and databases

Stateful containers such as databases and datastores require careful analysis of whether they should be containerized at all.  It is alright to keep them non-containerized in the initial releases of the application.  Docker does offer data containers and mounted volumes, but we haven’t investigated these further.

Once you have made a few of the critical decisions above and have a plan to containerize your application, you are on your way.  In part II, we will cover how to build the DevOps pipeline and best practices for tagging builds and testing images.  Finally, in part III, we will cover best practices for running container applications in production.

Can OpenStack be used without a Cloud Management Platform? Five Challenges that you would face if you did just that.

Many customers are attracted by the fact that OpenStack is freely available and wonder about the need for a cloud management platform (CMP) such as BMC CLM. However, market experience and a number of cloud false starts have shown that cloud computing – at least the successful kind – is not easy. Heterogeneous infrastructure and multiple platforms are the reality for most enterprises today and, combined with increasing levels of IT security threats, make cloud management a complex and sometimes daunting task. Enterprises struggle to manage this complexity with OpenStack alone. To explain why, we analyze five challenges of running a private cloud with OpenStack without an accompanying CMP such as CLM.
Challenge 1: Breadth of functionality
Building a cloud solution takes more than just the technical infrastructure functions and management tools that OpenStack provides. It involves implementing a set of business, architectural, and functional requirements that OpenStack usually lacks.

Challenge 2: When is “free” really free?
Although OpenStack is marketed as free software, industry experience so far has been quite the contrary, because there are hidden costs to implement, operate, and support OpenStack. There is increasing agreement among customers that a skilled engineering team is needed to develop missing capabilities and then customize, integrate, and maintain OpenStack to make it usable in the enterprise. Most deployments require five to ten engineers for development, customization, integration, and operations. The development team typically enhances OpenStack with needed cloud management capabilities such as governance, UI enhancements, compliance, automation, and policies. With BMC CLM, this additional development effort would not be needed. Of course, both BMC CLM and OpenStack require integrations with enterprise systems, as well as day-to-day operations.

Challenge 3: Depth of functionality
Governance, policies, and pooling of resources
OpenStack does not have deep, flexible functionality for governance, policies, and pooling of resources into higher-level logical constructs, such as logical data centers with configurable, user-extensible policies to map workloads to them. BMC CLM offers an extensive mechanism to group resources into pools and logical data centers, mark them as shared or private, and apply flexible, configurable policies for workload placement based on tenants, tags, or custom workflows.  It also has deep governance, ranging from reclamation of resources and quota management (a capability OpenStack does have) to change management and CMDB integration.

Platform support
Even though OpenStack has good breadth of platform support, the deep functionality required for enterprise cloud management is at times lacking in many of the drivers. OpenStack Nova provides full support for KVM/QEMU but limited support for Microsoft Hyper-V, Citrix XenServer, and VMware vSphere (which are fully supported by BMC CLM). Hence, if the deployment uses KVM, OpenStack has full functionality; for the others, it is better to use the platform support that BMC CLM provides directly for these hypervisors.
Service catalog
While BMC CLM has a very extensive service catalog to allow administrators to define offerings and entitlements per tenant, OpenStack lacks this level of flexibility.

Challenge 4: Managing risk
We have all heard about the huge increase in IT security threats over the last year or so. Hacking incidents, viruses, and vulnerabilities such as Heartbleed, GHOST, and Shellshock have hit many companies hard. No IT organization can afford to ignore risk management for both legacy and new cloud infrastructures. Compliance, security, patching, governance, and policies are not built into OpenStack. Again, additional effort is required to integrate OpenStack with Chef, Puppet, or some other tool to provide these policies, such as server hardening, server compliance, and server patching. BMC CLM can perform automated compliance and patching on services across all legacy data center infrastructure as well as public and private cloud infrastructure, including OpenStack private clouds, in a consistent manner to reduce risk from provisioning and throughout the lifecycle of the service.
Challenge 5: Heterogeneous platforms and hybrid cloud infrastructure are a reality
If an organization has a single OpenStack infrastructure; does not have any other infrastructures such as VMware vCenter, Microsoft Hyper-V, or public clouds; and has little governance or automation requirements, then the need for a CMP is questionable. However, most enterprises have a hybrid infrastructure with multiple platforms such as Hyper-V, vCenter, and KVM; multiple private clouds; and possibly even multiple public clouds. Sourcing policies seeking to avoid vendor lock-in, as well as mergers and acquisitions, dictate that heterogeneous infrastructure is the new reality. Managing across all of these different platforms becomes very complex: with different people, processes, and technologies required to manage each infrastructure, IT costs can quickly skyrocket. To provision agile services while ensuring costs are kept under control and risk is minimized, IT organizations require a management platform that can abstract the complexity of provisioning and managing across heterogeneous infrastructures and provide a single pane of glass for users as well as administrators.  BMC CLM orchestrates the agile delivery and ongoing management of IT services across hybrid cloud and legacy infrastructures to reduce costs while applying consistent compliance and governance policies across all platforms.


Running OpenStack without a cloud management platform is sufficient only in basic cloud use cases. OpenStack has a number of gaps that preclude it from being a complete enterprise grade cloud solution. OpenStack and CMPs such as BMC CLM are not competitive but complementary. Using them together will make private clouds truly enterprise grade.

Cloud Lifecycle Management for CI/CD DevOps – Part II

In part I, we described the challenges of a typical CI/CD environment.  In this blog, we will show how BMC Cloud Lifecycle Management (CLM) can be used to address them.


By using CLM along with Jenkins and a few other automated testing tools and scripts, we were able to address the above challenges and provide a complete, fully automated DevOps pipeline that resulted in huge developer time savings, sanitized, consistent environments, and deployment automation.  Let us see how CLM was used to build the DevOps pipeline for CLM itself, leading to ROI far beyond our expectations.

As seen in Figure 2 below, as soon as a build is successful, automated testing of the CLM application takes place, running thousands of tests by provisioning test infrastructure using CLM.  After successful tests, the hardened CLM application is automatically converted into a “service offering” in CLM for that specific build.  The CLM service catalog is updated with this new service offering and made available to the development and test teams for downstream activities, such as provisioning the latest CLM application stack for their individual testing.  We have hundreds of developers creating CLM application stacks (‘application environments’) each day, as shown below.


Figure 2. CLM is used to provision hundreds of CLM stacks each day

CLM Service Catalog

Figure 3 below shows an example service catalog used by our engineering team for on-demand deployment of CLM application stacks for current and older releases.  Offering deployable application environments for CLM daily and prior releases has resulted in high developer productivity.


Figure 3. Service catalog of CLM service offerings

Converting CLM deployable artifacts into service offerings

Once CLM has been built and tested through the DevOps cycle, the CLM deployable artifacts are automatically converted into service offerings that developers can request through the service catalog, as shown above.  This consists of a number of automated steps:

  1. A virtual machine is provisioned through vCenter.
  2. The CLM deployable artifact is installed on it.
  3. Sanity tests are executed.
  4. A template is created using the vCenter CLI.
  5. Finally, CLM SDK/API calls are used to update or create service offerings based on the newly created template, reflecting the new version of the CLM application.
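The shape of this automation is a simple sequential pipeline that halts on the first failure. In the sketch below, every step is a hypothetical placeholder for the real vCenter and CLM SDK calls; only the orchestration pattern is the point:

```python
# Sketch of the five automated steps as a fail-fast pipeline. Each step
# implementation is a hypothetical stand-in for a vCenter/CLM operation.

def run_pipeline(build_id, steps):
    completed = []
    for name, step in steps:
        step(build_id)          # a raised exception halts the pipeline here
        completed.append(name)
    return completed

steps = [
    ("provision_vm", lambda build: None),      # 1. VM via vCenter
    ("install_artifact", lambda build: None),  # 2. install CLM deployable
    ("sanity_tests", lambda build: None),      # 3. run sanity tests
    ("create_template", lambda build: None),   # 4. template via vCenter CLI
    ("update_offering", lambda build: None),   # 5. CLM SDK/API update
]
```

Because a failure stops the pipeline before the offering is updated, developers only ever see service offerings that passed the sanity tests.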

Day 2: CLM application – Take environment snapshot

A developer who has provisioned their own CLM application stack can take snapshots of the complete stack using a day 2 action such as “TakeVMSnapshot”.  This is shown in Figure 4 below and is useful for saving application and machine state for debugging during the dev cycle, or for reverting to a consistent state.


Figure 4. Taking a snapshot of developer’s CLM environment

This new custom action was implemented by creating a BMC Atrium Orchestrator (AO) workflow that takes the snapshot, configuring the Callout Provider, and then using API calls to import the AO workflow into BMC Cloud Lifecycle Management.  The high-level steps are given below:

  • Step 1: Define and write the AO workflow.  This workflow accepts context information identifying the virtual machine (the additionalInfo parameter carries details about ComputeContainer, VirtualGuest, User and Service Offering Instance) and calls an NSH script that connects to BSA, which then invokes the snapshot on vCenter.  Alternatively, the AO workflow can use the vCenter adapter to execute the snapshot call against vCenter directly.
  • Step 2: Configure the Callout provider in providers.json
  • Step 3: Use the REST API to import the AO workflow
  • Step 4: Customize parameters in the Reference Action Catalog.  In our case, we set these parameter attributes:  optional, encrypted, parameter order and end-user input required
  • Step 5: Use the REST API to refresh the Action Catalog
  • Step 6: Add i18n labels
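The REST calls in steps 3 and 5 can be sketched as follows.  This only builds the requests rather than sending them, and the endpoint paths are illustrative placeholders, not the documented CLM routes.

```python
# Hypothetical sketch of the REST calls used to import an AO workflow
# (step 3) and refresh the Action Catalog (step 5).  Base URL and
# paths are assumptions for illustration only.

CLM_BASE = "https://clm.example.com/csm"  # placeholder base URL

def import_workflow_call(workflow_name: str) -> dict:
    """Step 3: the request that would import an AO workflow as a custom action."""
    return {
        "method": "POST",
        "url": f"{CLM_BASE}/actions/import",
        "json": {"workflow": workflow_name, "provider": "Callout"},
    }

def refresh_action_catalog_call() -> dict:
    """Step 5: the request that would refresh the Reference Action Catalog."""
    return {"method": "POST", "url": f"{CLM_BASE}/actions/refresh", "json": {}}
```

In practice an HTTP client would send the import call first and the refresh call second, since the catalog refresh is what makes the newly imported action visible to end users.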

See the user documentation for more information on creating custom server actions.

Day 2: CLM application – Update a component

In addition to taking snapshots of the CLM application stack, a developer can update a specific component within the stack, either by pushing code directly to the stack or by using another day 2 action such as ‘Update Component on CLM Application’.  This supports developer-level integration testing of a component against a complete, consistent CLM application environment.


The benefits of several of the stages in our DevOps pipeline are summarized below:

| Stage | Product used for automation | Metrics | Savings |
| --- | --- | --- | --- |
| CI builds | Jenkins | 5000 builds/release in 6 months | |
| Automated deployment | CLM | Hourly, nightly, daily and weekly deployments done using CLM | |
| Automated testing | Silk and Selenium | Thousands of tests run each day automatically | |
| Environment infrastructure provisioning and deprovisioning | CLM | 30 per day | No static infrastructure – $$$ savings |
| Day 2 actions – snapshots | CLM | Hundreds each day by developers | |
| Service catalog and portal | CLM | 20 service offerings, 200 users and 150 SOIs | Productivity benefits from consistent build and deploy environments – developers can deploy environments in one click |
| CLM application lifecycle management | CLM | Developers and QA manage the lifecycle of their private CLM application stacks by starting, stopping and updating them continuously as needed | A consistent, simplified UI for common dev tasks results in developer time savings |
| Automated reclamation | CLM | Unused CLM stacks are automatically reclaimed | $$$ savings |
| Testing and application infrastructure on any target – on-prem and in cloud | CLM | CLM provisions application environments on-premises as well as on AWS and other cloud infrastructures | $$$ savings and flexibility |
| Maintenance of older releases | CLM | Service offerings for prior releases are also available with pre-baked data | Huge savings, as prior releases can be deployed in one click |


In addition to offering the CLM application stack as service offerings for our developer community, we also offer many other container and PaaS stacks for developers to innovate and experiment with.  For example, we have made “Docker Hosts” available as a service offering in our service catalog, allowing any developer to request a Docker host within minutes and start using containers.  We also plan to make PaaS environments and other middleware application environments available to developers to experiment with new technologies and continue innovating.  Finally, during our conferences we have used CLM to run our hands-on labs, provisioning hundreds of CLM application environments on the AWS cloud for training at scale.


At BMC, we take drinking our own champagne very seriously.  We continuously build, test and deploy CLM using CLM.  This has resulted in large improvements in product quality, speed and agility in delivering faster releases to the development team, and in infrastructure savings.  We believe CLM can be used very effectively to manage any application DevOps pipeline in three critical areas:  a) infrastructure as code; b) dynamic infrastructure, which yields cost savings through on-demand creation and decommissioning of test environments; and c) service offerings that give developers machine, PaaS, middleware and application environments, increasing developer agility and happiness.

Cloud Lifecycle Management for CI/CD DevOps – Part I

At BMC Software, we build, test and deploy the Cloud Lifecycle Management (BMC CLM) product in our DevOps cycle using CLM itself.  This continuous integration/continuous deployment (CI/CD) DevOps pipeline enables our engineers to provision the latest consistent application environment on demand with a single click.  We use CLM to run the DevOps pipeline for the CLM application itself, taking advantage of CLM’s service catalog, service offerings, deployment capabilities and “infrastructure as code”.  CLM provisions and de-provisions several hundred development and testing infrastructures and application environments each day for our engineering team across continents.  This has allowed us to build enterprise-grade quality into CLM through automated pipelines, faster agile deployments, more efficient infrastructure management and the ability to provide a consistent, up-to-date development environment to our engineering team through the service catalog.

In this Part I we highlight the challenges in a typical DevOps CI/CD environment; in Part II we will show how CLM solves them.


Our DevOps CI/CD pipeline is shown below in Figure 1.  As you can see, the CLM product development process goes through a number of stages, including build, test and deploy of the CLM product.


Figure 1. CI/CD pipeline for CLM
Early on, we faced a number of challenges implementing and automating our DevOps CI/CD process.

Challenge 1. Complex and slow manual CI/CD Process caused lack of agility and productivity

The first challenge we faced was that our CI/CD pipeline was manual, requiring many steps by engineers to get daily or weekly builds deployed to multiple target environments.  It took days to weeks to produce a good working environment, which hurt developer productivity until we automated the pipeline using CLM and other tools.

Challenge 2. Consistent development application environments were hard to maintain without automation

As developers checked in code continually, the team needed a complete, consistent view of multiple check-ins for unit, integration and system testing.  Before CLM, we had many application and test environments that were not easily reproducible or traceable, which led to wasted resources.

Challenge 3. Static infrastructure and application environment sprawl increased our costs

As part of our CI/CD pipeline, a large number of infrastructure environments need to be provisioned and decommissioned each day for unit, integration, quality, system, performance and security testing.  This led to an explosion in the number of environments we had to maintain, and to rising costs for keeping infrastructure always available and ready when it was only used for a limited time during testing.  We also experienced application environment sprawl, since there was no automated reclamation of unused environments, which increased our costs as well.

In Part II, we will show how CLM addresses these challenges and simplifies the developer experience.