Enterprise cloud adoption strategies – Role of Central IT

“By 2020, a corporate ‘no-cloud’ policy will be as rare as a ‘no-internet’ policy is today” – Gartner.

Available data clearly indicates the direction of the cloud infrastructure market.

[Figure: cloud infrastructure market forecast]

Adopting cloud at an enterprise requires additional consideration of the following topics:

  • Compliance
  • Security
  • Governance
  • Auditability / traceability
  • Operating models
  • Responsibility model
  • Way of working

Irrespective of the size of the organization, there are certain common areas of focus during cloud adoption, such as:

  • Cloud(native) first approach
  • Buy vs build
  • T-shaped skilled people
  • Application architecture and technology landscape
  • Culture of safety and experimentation
  • End to end value chains with minimal handoffs


What is Central IT?

Central IT is the IT function of an enterprise that provides its data center and infrastructure services. Central IT (also known as Central CIO, IT-for-IT, etc., depending on the organization's lingo) provides various services to business units, such as networks, databases, servers and application platforms (e.g. API platforms).

So, will this function be relevant after an enterprise adopts cloud? Yes, but not in the same way as in the pre-cloud era. The role of Central IT depends on the cloud vision of the organization. An organization's cloud vision can be broadly placed on the spectrum below.

[Figure: spectrum of cloud visions]

Trust no one: The Central IT function decides on the infrastructure requirements. This is almost like using the cloud as just another data center. It doesn't really make the most of the cloud; however, it gives Central IT total control.

Trust the decentral process: Business units are free to specify their infrastructure needs. For example, the use of Elastic Beanstalk or Lambda can be driven by BUs but is ultimately executed by Central IT.

Trust only specialists in teams: Only certain people in BUs can create and manage cloud resources. These are people with special privileges who do not necessarily come from Central IT.

Trust the teams but verify: Teams are allowed to take full responsibility for their infrastructure. However, Central IT provides an automated way to enforce policy requirements on all the resources teams create.


Central IT as “Cloud Evangelists” during cloud adoption:


There are clear benefits in having a “Cloud Evangelists” model (also known as a “Cloud center of expertise”, “Cloud competency center”, etc., depending on the organization's lingo) during cloud adoption. Adopting cloud at an enterprise has an impact on its people, processes and technology. To enable, facilitate and accelerate the change process, the “Cloud Evangelists” define standards, advise on cloud technology and its usage, and coach BUs on adopting cloud.


This “Cloud Evangelists” team will also ensure that common concerns such as compliance and security are enforced automatically, so that each BU need not spend additional effort on them. The diagram below depicts a possible role a “Cloud Evangelists” team can play in an enterprise.

[Figure: role of a “Cloud Evangelists” team]

I plan to express my views in detail on each topic of cloud adoption in my next series of blogs. What do you think about the role of a central cloud team? Do you recognize the aspects I mentioned in this blog? Agree or disagree, do let me know in the comments.


Testing your infrastructure with InSpec

Infrastructure as Code (IaC) is not a luxury anymore but a necessity for DevOps teams to be efficient and effective. Most of the teams I coach have either already implemented or are in the process of implementing their infrastructure as code. At some point during a coaching assignment, all these teams have asked exactly the same question: “We already use tools (Terraform, CloudFormation, Ansible, Packer, etc.) to create infrastructure. Why should we test it again? Obviously, we don't doubt these tools, nor do we want to test these tools, right?”

Why test infrastructure?

Because it changes. Because it is code. In the world of IT, anything that changes needs to be validated to check that it still matches the desired result. The basic principles of software development should be applied when creating infrastructure by code. Here are my top 5 reasons why you should test your infrastructure code.

  • Continuous compliance and security standards: Infrastructure testing tools (such as InSpec) help you detect and report violations so you can address them appropriately.
  • Shift-left approach: Move compliance checks, security validations and feedback loops on IaC changes further towards the left of the delivery pipeline.
  • Faster troubleshooting: If your application is not working as expected, it becomes easier to narrow down whether the cause is environment- or infrastructure-related.
  • Make changes with more confidence: Automated verification of IaC provides safety nets that enable teams to make changes with confidence.
  • Focus on fire prevention rather than firefighting: Detecting problems/symptoms sooner enables teams to take the required corrective measures before they escalate and disrupt business.

What and why of InSpec

InSpec is an open-source framework, written in Ruby, that helps you test your infrastructure. InSpec validates the actual state against the desired state. I hear you say, “Hey! Even terraform plan can give me this information”. Yes, Terraform (or other similar tools) will tell you if your infrastructure definition matches the actual state. However, let's look at a practical use case.

You want to ensure all your web servers are associated with certain subnets and those subnets belong to a certain security group. You might also want to verify ingress and egress rules. Another use case: how do you ensure someone else (because of a shared tenant or otherwise) has not added an extra security group or subnet? There are numerous use cases you can derive based on network configuration, file system, system configuration, etc. Well, this is where InSpec comes to your rescue.

A simple InSpec test verifies that a given security group meets specific inbound rules:

describe aws_security_group(group_name: 'linux_servers') do
  its('inbound_rules.first') { should include(from_port: 22, ip_ranges: ['']) }
end

The test below checks for correct tagging of an EC2 instance:

describe aws_ec2_instance('i-090c29e4f4c165b74') do
  its('tags') { should include(key: 'Contact', value: 'Gilfoyle') }
end

A quick anatomy of the above tests:

  • describe: an InSpec keyword you can roughly equate to a test fixture
  • aws_security_group / aws_ec2_instance: out-of-the-box InSpec resources
  • parameter to the resource: a way to identify the resource. This can be parameterized via an attributes file or from a Terraform state file.
  • its('…'): a property of the resource to be compared
  • should include(…): the condition that gives the test result
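To illustrate the parameterization mentioned above, here is a sketch of a control that reads the resource identifier from an InSpec input. The input name, default value and control title are made up for this example; check the InSpec docs for the exact matchers available on your version.

```ruby
# A hypothetical control file, e.g. controls/security_group.rb.
# 'security_group_name' is an illustrative input; it can be overridden
# per environment with: inspec exec . --input security_group_name=web_servers
sg_name = input('security_group_name', value: 'linux_servers')

control 'sg-ssh-ingress' do
  impact 1.0
  title 'Security group exists and permits SSH inbound'
  describe aws_security_group(group_name: sg_name) do
    it { should exist }
    it { should allow_in(port: 22) }
  end
end
```

Because the security group name is an input rather than a hard-coded string, the same profile can be run unchanged against Dev, QA and Prod.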

Why do I prefer InSpec? Here are my top 5 reasons:

  • Flexible: InSpec is incredibly flexible. It offers numerous resources out of the box, yet it is also quite easy to create your own custom resources to meet your requirements.
  • Easy to write and read: InSpec tests are very easy to write and read, and they closely resemble a human-readable format.
  • Remote testing support: You don't have to install any packages/tools on your target infrastructure. InSpec uses SSH/WinRM to carry out testing.
  • Platform agnostic: InSpec can be used on multiple platforms such as Windows, Linux (different flavors) and Docker, and with multiple cloud providers such as AWS and Azure.
  • Open source: InSpec is open source and backed by Chef.

How to get started

Introducing a new tool or practice to your teams can be done in multiple ways. Here is what worked for me. Again, here are my top 5 steps to introduce InSpec:

  • Make the problem visible: This is obvious, but obvious pain is not always obvious 😊. This is especially true with IaC, as the rate of change encourages teams to bear the pain.

  • Familiarize teams with the tool: I usually arrange a working lunch or a couple of hours of hands-on sessions with teams and guide them through some simple tests. This really helps teams get a good understanding of InSpec and encourages them to try it out in their projects.

  • Identify critical yet simple use cases: Start small. Pick one or two simple test cases, ideally for the part of the infra that changes often. The key is a test case that is small enough to be manageable yet gives value to the team.

  • Start with non-production environments: Test environments change more often (manually or otherwise) than production. I suggest teams target their tests at test environments for the initial 1 or 2 sprints. This gives them enough feedback and puts the practice on their radar. Once they are comfortable, it becomes part of their DoD.

  • Make these tests part of your pipeline: Make sure these tests are part of your pipeline. Remember, InSpec tests can be used for compliance and security auditing. Having these tests in a pipeline also helps in meeting the required compliance standards.

Refer to https://www.inspec.io/ for more information. You can find a lot of hands-on tutorials at https://learn.chef.io/modules#/.

5 Characteristics of a DevOps organization

As a consultant (or a new member of a DevOps team), what are your telltale signs of a “DevOps” organization? Do share them in the comments section. Below are my top 5.

Product-based teams over component teams:

Autonomous and cross-skilled teams are key to delivering and maintaining products. An organization structure based on knowledge silos (such as dev, QA, ops) is bound to create multiple handovers and thereby increase waste and the risk of things going wrong. In a mature DevOps organization, you will typically find organization structures based on the products/services they offer.

However, there is still value in component teams. For instance, the core technology services/capabilities of the IT organization can still be fulfilled by a technology team. This is an acceptable situation when these core services/capabilities are enablers for other DevOps teams to deliver value to end customers.

Obsession with Automation over preoccupation with manual work:

DevOps teams are obsessed with automation. Every manual task carries an increased risk compared to its automated counterpart. In most cases, one of the biggest bottlenecks in the overall value stream is manual intervention, which is also highly error-prone and time-consuming. Hence, mature DevOps teams rely on automation to achieve consistency and speed. DevOps organizations enable their teams to focus on ruthless automation of all their activities, such as infrastructure, deployments, testing, documentation, etc.

However, there is still value in some manual interventions. Typically, activities such as exploratory testing and end-user training might still require some manual effort, but this should be kept to a minimum, with teams constantly looking for ways to automate. For example, to get early feedback from customers, DevOps teams can use techniques such as canary releases, feature toggles, A/B testing, dark launches, etc.

Evidence-based over gut feel:

DevOps teams measure what matters. Their KPIs give insight into various aspects such as code quality, build quality, release quality, NFRs and various production monitoring metrics. Technology and business decisions are driven by data. For example: how did the new architecture design changes impact performance? How is the new feature we implemented being used by our users? When do users use a feature in our application? How does the new code we shipped impact our code quality or security? Questions like these are answered by hard facts and not by the gut feeling of the team.

Data-driven decision making is one of the key aspects of DevOps teams and organizations. However, in some instances the business might take decisions, such as implementing a new feature, based on gut feel. I'd rather call these assumptions or hypotheses: that a certain feature will make users happier or more effective, etc. However, decisions based on a hypothesis need to be validated with data, either after a release or preferably before it.


Teamwork over individual work:

DevOps teams require a high level of professionalism and engineering excellence. Professionalism reflects in their ability to do the right thing, the courage to say no, the courage to ask for help, disagreeing respectfully, commitment to delivery, and the ability to collaborate openly and honestly with each other. When people disagree, argue or criticize, they don't disrespect each other: they disagree with the idea, not the person.

Members of a mature DevOps team hold each other to high standards. As a team they celebrate each other's successes, which are in turn the team's success. This promotes a sense of achievement and quality, and is a great engine of motivation at the workplace.

Fail fast over delayed learning:

Mistakes are mandatory for learning! A team which always plays safe without exploring uncharted territory will rarely challenge the status quo. Mature DevOps teams/organizations perform blameless postmortems to learn from mistakes. Often, these local learnings can be transformed into organization-wide learnings.

Fail fast is an effective strategy only if the cost of failure is small, manageable and doesn't result in a cascading chain reaction. This is where effective feedback loops and a high level of automation come into the picture. Apart from this, mature DevOps teams have a culture of trusting each other, challenging each other and an eye for constant improvement.

For example, in our organization we have a culture of “celebrating failures/mistakes”. At every monthly all-hands meeting, employees share their biggest mistake/failure of the month. The whole organization votes on the biggest “screw-up”, and that person wins a nice dedicated parking spot for a month 😊. This has resulted in a culture where people are open about sharing their mistakes, and thereby all of us can learn from them.



Do you notice these characteristics in your team/organization? What would you add to this list?

6 best practices for application deployments

Many software development teams are now working in an Agile/Scrum way, and that's great! One of the cornerstones of the Agile way of working is “deliver value fast and often”. Real value is delivered only when software is running in production (not Dev, not QA 😊).

Having the right deployment principles and practices in place is all the more important in Agile environments, because new increments are produced by Scrum teams at the end of each sprint. The right deployment strategy is a key factor in having faster and more effective feedback loops from each environment. Below are some of the best practices for application deployments.

Build once deploy anywhere

Do you run into situations such as “Hey! It works on QA but not UAT or Prod”? One of the root causes of such situations is creating build artifacts for each environment. It is key to promote the same package that was tested in the lower environments (Dev/QA) to the later environments (UAT/Prod). You introduce unwanted risk if you build the codebase every time you deploy to a different environment, as there is always a hidden danger of introducing unwanted changes. Automated deployments are only truly effective when the same deployment package goes through the different quality gates. If you change or rebuild the deployment package for each environment, you are bypassing the lower environments' quality gates.

Hint: Use the same build package and promote it through all environments.
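A minimal sketch of this idea (the file names and environment directories are made up for illustration): produce the artifact exactly once, then promote the identical bytes to each environment, using a checksum to prove that no rebuild sneaked in along the way.

```shell
#!/bin/sh
set -e

# Build exactly once.
mkdir -p build dev qa prod
printf 'app-v1' > build/app.tar.gz   # stand-in for the real build step

# Record the checksum of the one true artifact.
sum=$(sha256sum build/app.tar.gz | awk '{print $1}')

# Promote the same file through every environment; re-verifying the
# checksum in each environment proves every stage got identical bytes.
for env in dev qa prod; do
  cp build/app.tar.gz "$env/app.tar.gz"
  echo "$sum  $env/app.tar.gz"
done | sha256sum -c -
```

In a real pipeline the copy step would be an upload to a repository manager or a deploy job, but the principle is the same: one build, one checksum, many promotions.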

It should be a people first process

Using the right tools for application deployments is important. However, focusing on tools alone will not help. Deployments are smooth when there is good collaboration between the people who build the software and the people who deploy it. When work is done in silos, focus narrows, which leads to expensive and time-consuming handoffs. Improving the speed of the slowest member of a convoy increases the speed of the whole convoy. In the same way, better collaboration and the elimination of waste during handovers improve the overall deployment process.

Hint: Improve collaboration between Dev and Ops to minimize handovers.

Make deployments boring

Deploying to production need not be a ceremony. Production deployments should be routine, boring events, because the same process is used all along for each environment. The new features you deploy to production should give you excitement, but not the deployment process 😊. You add unnecessary complexity if you customize the deployment process for each environment.

Hint: Use the same repeatable and reliable way of deploying to every environment.

Automate, automate, automate

Automate your build process, automate your application/component configuration (configuration as code), automate your infrastructure (infrastructure as code), automate your deployment process. A good rule of thumb: “Everything that does not require human judgment/intervention is a candidate for automation”

Hint: Visualize your current end-to-end deployment process to identify quick wins and low-hanging fruit for automation, and to identify bottlenecks.

The Architecture Drives the Build

Batch size has a great deal of influence on flow, and architecture influences batch size. If you modify or add one line of code, how big is the impact on testing, building and deploying the package? Follow standard design principles such as separation of concerns, the Single Responsibility Principle, the Principle of Least Knowledge and Don't Repeat Yourself, and minimize upfront design. With a spaghetti architecture, deploying a change is expensive and time-consuming, so choose ravioli 😉


Hint: Choose a loosely coupled architecture and focus continuously on architecture refactoring.

Manage your dependencies

One of the key challenges of working in a distributed, multi-team environment is dependency management. There is a strong need to ensure easy distribution of the artifacts produced by different teams, as they share dependencies between them. Using a repository manager comes in handy in this situation. It is also useful to define access rules for the users and groups that consume artifacts, so that consumers use the right artifacts/versions. Other benefits of using a repository manager include reduced build times, as the number of downloads from remote repositories drops significantly. You can also use a repository manager when you want to roll back to a previous version.

Hint: Always use a repository manager to manage your dependencies and version-control your build artifacts.


Who can be a great scrum master?

Lots of organizations, managers and scrum masters have this question. What makes a scrum master great? Do they just need to know Scrum, or do they need more than that?

My usual answer: a scrum master needs certain skills and traits. Of course, having sound Agile/Scrum knowledge is important, and equally important are the mindset aspects listed below.

  • Don’t ask for permission, ask for forgiveness
  • Ask the team
  • “I have great responsibility, but no authority”
  • “The collective minds of the team vastly exceeds my own”
  • My job is to make sure I’m not needed
  • I win when the team wins
  • Able to hold up the mirror for the team to reflect and adapt
  • Make team feel accountable, inspired, focused
  • Inspire, don’t “require”
  • Don’t give team the fish, teach them to fish.
  • Non judgmental
  • Actions based on facts and not on perceptions
  • You are a midwife, not the laboring woman 😊
  • Live the values!
  • Have serving the team as the primary goal.

Myths about Scrum, Agile, Software development


During the past few years, my role as a Scrum.org trainer, agile coach and software developer has given me opportunities to interact with some of the best and brightest of the industry. At the same time, I've also interacted with people, teams and organizations which somehow got trapped into believing some myths of our industry. The list below is based on what my colleagues at Scrum.org and I have seen.


Scrum:

  • Scrum is the silver bullet that can make any project finish on time.
  • Scrum can put your project to failure.
  • Scrum should be changed to fit your company. (Note: Scrum is a framework; it can be adapted, but its core essence must not be changed)
  • Product Owner can accept or reject increment.
  • Scrum is suitable only for small projects.
  • Scrum does not work for remote teams
  • Scrum only works for mature team members
  • Kanban is more flexible than Scrum
  • Scrum does not work for fixed price projects
  • Tester does not have any role in Scrum
  • Scrum Master is a project manager in Scrum
  • Scrum (or any part of it) will never work here
  • Our project/product is different. Scrum is no good use here.
  • Scrum doesn’t work when there are too many dependencies between teams
  • Scrum can’t work if you don’t change performance appraisals, incentives, etc
  • Scrum doesn’t work if your software runs on hardware
  • One PO cannot possibly handle X teams (where X is some number larger than 2-3)
  • Scrum can’t/shouldn’t be used for ‘maintenance teams’ / Kanban should be used for maintenance/brownfield/legacy teams
  • Scrum is just Waterfall done more often


Agile:

  • Agile teams don’t do any planning
  • Agile teams don’t do documentation
  • Agile teams are cowboys over whom you have no control
  • Pair Programming is someone watching over my shoulder
  • If we skip documentation, we are Agile
  • Agile ignores risk
  • Agile doesn’t believe in any metrics
  • Agile requires no management
  • Agile requires no experts
  • Agile means no deadlines

Software Development:

  • Bugs/Production emergencies are always going to happen
  • Test Automation is too expensive and too hard to be worthwhile
  • We have to be able to fix schedule/scope/cost to keep customers happy
  • Adding people to a project that is running late will get it back on schedule
  • Developers can’t talk to customers
  • We don’t have any way of getting customer feedback
  • Schedule, Scope, Cost is a great measure of software success
  • We’re doing Scrum, so we don’t need to do TDD and Pair coding
  • Programmers can't be trusted to test their own software
  • Only the testers do testing. Testing is not my (programmer/analyst/architect) job
  • The architecture/design must be done upfront
  • Only the BA writes requirements. Requirements are not my (programmer/tester/architect) job

Continuous Delivery with Release Management – Introduction


Microsoft recently acquired InCycle's “InRelease” software [now called Release Management (RM)] and integrated it with VS 2013. The Release Management software fully supports TFS 2010, 2012 and 2013.

Before we look into details of Release Management, let’s look at what Continuous Delivery means.

What is CD?

Continuous Delivery is the capability to automatically deploy components to various servers in different environments. This typically involves configuration management of the different environments and the ability to define and customize a business-driven deployment workflow involving multiple roles in the organization.

Why do we need it?

Well, DevOps is the talk of the town. If you want to be a cool kid (team), you gotta know/implement CD. Apart from the cool factor, CD brings the following advantages to the dev team and the business.

  • Develop and deploy quality applications at a faster pace.
  • Improve value delivery by reducing cycle time.
  • Enable the same deployment package to traverse the various environments, as opposed to rebuilding for each environment.
  • Manage all configuration information in a centralized location.
  • Have repeatable, visible and more efficient releases.
  • Align deployments with business processes.
  • Adhere to any regulatory requirements during the deployment process.

What is Release Management?

Release Management is a continuous delivery solution for .NET teams that automates deployments through every environment, from Team Foundation Server (TFS) to production. RM also allows you to define release paths that include approvals from the business and other departments (such as ops) when required. RM lets you assemble all the components of your application, copy them to the required target servers and install all of them in one transaction. QA checks, such as automated tests, data generation scripts, configuration changes, etc., are all handled by RM. Release Management also handles rollback when required.

Release Management Components:

The following diagram shows the main components of Release Management.

[Figure: Release Management components]

Client: There are two client components. The Windows client is a Windows Presentation Foundation (WPF) application that serves as the main interface point to manage release information. The Web client is used to act on approval requests; it is the interface to which users are directed when following links in e-mail notifications. The client is used by both business and development teams to provide the necessary approvals when required.

RM Server: The server component is the heart of Release Management. It is a combination of web and Windows services that expose contracts used by all other components. The server component also contains a SQL Server database. Typically, the RM server is installed on the TFS server and can share the same SQL Server.

RM Deployer: The Deployer component is a Windows service that lives on the target servers (such as Dev, QA, Prod, etc.) where your application components need to be installed.

Tools: The tools are components that help in the deployment of various components to different servers, configurations, etc. A few of them are given below.

  • Installing a version of a component to a specific environment
  • Deployments to Azure
  • Uninstalling a previous version of a component before a re-deployment
  • Deploying reports to Microsoft SQL Reporting Services
  • Running SQL scripts on a database server

In the next blog, I'll write about configuring Release Management.

Reference material:

Channel 9 Video

Visual Studio 2013 ALM VM – Hands on lab

InRelease User guide

Continuous Delivery with Release Management – Configuration

This blog is a continuation of my previous blog introducing the Release Management tool for implementing continuous delivery.

The Release Management software (server, client and deployment agent), installation guide and user guide can be downloaded from here.

1. The server can be installed on the TFS server, and RM can create its database on the same DB server.

2. By default, RM runs on port 1000, but can easily be changed.

3. Server configuration is pretty straightforward; the deployment agent can be configured in the client by choosing Administration > Settings > Deployer Settings.


4. Most of the key configurations, such as TFS user groups, SMTP settings, connections, servers, etc., are configured via “Configuration Paths”. This can be done by navigating to Administration > Settings > System Settings.



In the next blog, we will see how to create and configure release pipelines.

No programmers in Agile teams!

Yes, that's right. Agile teams don't need software programmers; they need software developers. I don't mean this as a mere vocabulary distinction. The key difference is: if you ask a programmer to build some code, you will get code. If that programmer is good, you might get good, reasonably commented and reasonably efficient code that works.

If you ask a software developer to build some code, you will first get questions 🙂 before you get a solution:

  • How does it fit in the business process? Are the requirements thought out?
  • Are you sure you understand what it will cost?
  • Who will support it? What about diagnostics and instrumentation?
  • What kind of documentation will it need?
  • How might it interact with other code?
  • What platform will it run on? Are there scalability issues?
  • How might it impact future development? How might it be enhanced in the future?

Another key difference is that programmers focus on languages, while software developers focus on language characteristics. A programmer might see himself/herself as a Java programmer, C# programmer or a Ruby programmer. But a developer focuses on language characteristics: strongly or loosely typed? Object-oriented or functional? Interpreted or compiled? Etc. This allows developers to quickly adapt and pick up new languages and technologies.

So, the key mantra for agile teams is to deliver high-business-value software with high quality. This cannot happen with programmers whose focus is just to code and ignore everything else.

Code is not always ‘THE’ solution but its ‘a’ solution.

Credit: This blog is inspired by a talk from Dan Appleman

User scripts – Intro

User scripts are a handy little concept through which we can extend or customize the behavior of any web site. Before we talk about what user scripts are, let's see what kind of things we can do with them:

  • Always display some information, like the time or how much time you are spending on that website.
  • Remove ads from your favorite web site, such as Facebook or any news site.
  • Extend or implement some functionality on web sites you like, e.g. download albums from Facebook.
  • Do some time-sensitive actions, such as logging into a web site as soon as it's up (handy when you have to book train tickets through IRCTC Tatkal).

So, what is a user script? It's a script which your browser can understand and act on. So how is it different from JavaScript or VBScript? The key difference is that those client-side scripts have to be part of a web application, whereas user scripts sit in the browser and get injected into every site (or selected sites) that you open. These scripts are loaded before any of the page's own scripts, and hence the advantage.
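As a tiny illustration of the first use case from the list above (the matched site and the styling are arbitrary; this is a sketch that only runs inside a userscript extension in the browser, not a finished script), a user script is ordinary JavaScript preceded by a metadata block that tells the extension where to inject it:

```javascript
// ==UserScript==
// @name         Time on site
// @match        https://example.com/*
// @grant        none
// ==/UserScript==

(function () {
  'use strict';
  const start = Date.now();

  // A small fixed badge in the corner of the page.
  const badge = document.createElement('div');
  badge.style.cssText =
    'position:fixed;bottom:8px;right:8px;background:#222;color:#fff;' +
    'padding:4px 8px;font:12px sans-serif;z-index:99999;';
  document.body.appendChild(badge);

  // Update the elapsed time every second.
  setInterval(() => {
    const secs = Math.round((Date.now() - start) / 1000);
    badge.textContent = 'On this site for ' + secs + 's';
  }, 1000);
})();
```

The @match line controls which sites the extension injects the script into; everything below the metadata block is plain browser JavaScript.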

User scripts are browser-specific, as they act via extensions to the web browser. There are a lot of browser extensions out there to manage and create user scripts, such as Greasemonkey for Firefox and Tampermonkey.

In the next blog, we will see how to use Tampermonkey to create user scripts and introduce custom behavior for a web site.