7 pitfalls to avoid with OKRs

Can you name the top 5 priorities for your organization (or department)? How likely is it that your colleagues can come up with the same list, in the same order of priority? According to a Harvard Business Review survey, only about half of staff can name their company’s top 5 objectives.

Why OKRs

There is a high chance that you have already worked with (and perhaps hated) some goal-setting frameworks such as S.M.A.R.T. or quarterly As & Os (accomplishments and objectives). So, what new value do OKRs bring to the table? Why should an organization consider OKRs, apart from OKRs being hip and used by many Silicon Valley companies such as Google, LinkedIn, Airbnb, Dropbox, Amazon, Twitter, etc.? Here are a few reasons why you might want to consider OKRs.

So, what are OKRs

Objectives and Key Results (OKRs) – A critical thinking framework and ongoing discipline that seeks to ensure teams of people work together, focusing their efforts to make measurable contributions that drive the company forward. (Ben Lamorte)

Here is the basic structure of OKRs:

Pitfalls with OKRs

OKRs (Objectives and Key Results) are an effective yet simple goal-setting practice. As with most simple practices/frameworks, OKRs are easy to understand yet difficult to master and implement correctly. Here are the top 7 pitfalls/mistakes I’ve noticed when organizations try to use OKRs.

  1. Lack of heartbeat

This is one of the most common pitfalls with OKRs. A credible and sustainable cadence is key to using OKRs effectively. A regular check-in (preferably weekly) is crucial to accurately assess progress and keep a disciplined approach to meeting the set objectives. Without these regular check-ins, OKRs quickly become a quarterly ritual that doesn’t add much value to the organization.

  2. Unmeasurable key results

Key results are derived for each objective. There should be an effective way to measure key results, to give better insight into the progress of each one. Key results are best measured as numeric values. Defining KPIs for key results is also an acceptable practice in most organizations.

  3. Working in silos

Effective OKRs are set at the organization level and then translated into individual department/team OKRs, which requires good alignment across departments/teams. One of the main reasons for this pitfall is that most executives assume top-level OKRs automatically cascade down to the department/team level. This leads to a lack of bi-directional alignment and to inconsistent expectations across different teams. Cascading, and communicating bi-directionally, must be a deliberate exercise.

  4. Using as a tool to control employees’ work

When an organization encourages the use of OKRs at the individual level, there is a high chance that they quickly become a tool to control, monitor, or manage individual employees. There are many instances where OKRs are used to evaluate employee performance. This is a bad practice, because being ambitious while setting an objective is a key factor when using OKRs. When employees are assessed on their individual OKRs, they invariably try to play it safe by not setting ambitious goals. This promotes a risk-averse culture.

  5. Using as a dashboard for business as usual

OKRs are not meant to be used for business-as-usual activities. By using OKRs for business-as-usual activities, organizations tend to either repeat the same OKRs every year or end up with too many OKRs. OKRs provide a great structure to encourage organizations to set and act on ambitious goals; it defeats the point of OKRs when they are used for business-as-usual activities. KPIs and balanced scorecards are better suited to such business-as-usual activities.

  6. Tasks as key results

OKRs are not a collection of tasks. They primarily focus on outcomes or results rather than outputs or activities. As Google’s re:Work states: “One thing OKRs are not is a checklist. They are not intended to be a master task list… Use OKRs to define the impact the team wants to see, and let the teams come up with the methods of achieving that impact.”

So, in a nutshell – a task is something done by a team or individual, a key result is achieved by completing multiple tasks, and an objective is achieved by meeting multiple key results.

  7. Setting too many OKRs (everything is an OKR)

Focus is key to achieving results. Too many OKRs result in diluted focus and thereby less-than-optimal results. 3 to 5 is a good number of OKRs to start with; with more than that, organizations/teams overwhelm themselves. Do note that a bit of slack is needed to foster creativity in teams.

Conclusion

Working with OKRs involves a learning curve; we started to reap benefits from our second attempt (second quarter) with OKRs. Do not hesitate to have difficult conversations. In a nutshell, get started and be patient.

DevOps practices for PowerShell programming

PowerShell is a powerful scripting language, yet I have seen a lot of developers and administrators miss out on all the goodness of DevOps practices such as versioning, test automation, artifact versioning, CI/CD, etc.

In this blog, I’ll explain, with a working example, how to program PowerShell (not just script it) with a predetermined module structure, ensure quality with unit tests, and deliver code in a reliable and repeatable way using continuous integration and continuous delivery pipelines.

Build quality software & Deliver it right

Mature software development teams rely on strong engineering practices to incrementally deliver their software. However, these development practices are not fully used by operations teams, where PowerShell is widely used. This blog explains how to structure the code using Plaster, version control it using Git, build the code with psake, test modules with Pester, version and share artifacts as NuGet packages using Azure Artifacts, and create a release pipeline with Azure DevOps pipelines.

Scope for this Blog

Why bother?

We can simply write a bunch of PowerShell scripts to meet the need, so why bother with all these DevOps practices? Why should someone care about them? I think the benefits broadly fall into the five categories below.

Standardization of module design: When multiple people in a team work with PowerShell, it is paramount to have standards on how to develop PowerShell modules. Implementing and ensuring certain development standards reduces the complexity and overhead of code deviations and also helps create a mindset of collective code ownership in the team.

I’ve used plaster templates to create standard modules. Plaster templates allow teams to customize how to structure a module or a PowerShell script file. A team can define multiple plaster templates depending on the need.

In the sample project, I have created a Plaster template to create a module. This template creates a module along with a set of default folders (Public, Internal, Classes, Tests) and a default Pester test case.
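
For illustration, the default test the template drops into the Tests folder could look roughly like this (module name, file layout, and Pester 4 syntax are assumptions based on my sample project):

# Tests\PSLogger.Module.Tests.ps1 - a minimal sketch of the generated default test
Describe 'PSLogger module' {

    It 'has a valid module manifest' {
        { Test-ModuleManifest -Path "$PSScriptRoot\..\PSLogger.psd1" -ErrorAction Stop } |
            Should -Not -Throw
    }

    It 'imports without errors' {
        { Import-Module -Name "$PSScriptRoot\..\PSLogger.psd1" -Force -ErrorAction Stop } |
            Should -Not -Throw
    }
}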

Reusability: The ability to share a PowerShell module with other teams has obvious benefits. E.g. if a team writes a custom module to log data to Splunk, it might be beneficial to share this module with other teams. But the question is, how can we reliably share a module via a private repo (like a private PowerShell Gallery)?

In my code example, I’ve used Azure DevOps Artifacts to store/version my PowerShell module. This allows us to share a module (and control who can consume it) based on the version (beta, prerelease, release).

Control: In large enterprises (and also in most small organizations), traceability is an important factor. There is a need to be in control of who changed the code, and what, when, and why they changed it. There is a need to have certain controls in place from planning (via user stories) and code creation all the way to code deployment in production. Mature DevOps teams have tools and processes to ensure the right level of traceability in the code promotion process.

In this example, I have used a Git workflow where the master branch is protected: a pull request is created and then approved by a fellow team member before changes reach master. In the release pipeline there are also controls to ensure only people with certain roles can push code to the production environment; this is achieved by configuring pre-deployment approvals in the release pipeline.

Quality: Mature DevOps teams have a “quality first” mindset. Quality is ensured via a set of automated tests such as unit, integration, and functional tests.

In this example, I’ve used Pester to write sample unit and integration tests. Unit tests are run via the build pipeline, and integration tests are run during release pipeline execution.

CI/CD: Continuous integration and continuous delivery are cornerstone practices for mature DevOps teams. Continuous integration offers a great way to get faster feedback to the team on their code changes. Continuous delivery offers a reliable and repeatable way of deploying changes through the DTAP street (Development, Test, Acceptance, Production).

In my example, I am using a YAML-based build pipeline and a release pipeline to deploy to DTAP. The continuous integration pipeline builds the module, increments the version number, runs Pester unit tests, and publishes test and code coverage results. The release pipeline pulls the latest artifact from Azure Artifacts, runs integration tests, and promotes the package to the next view.

Working example

To explain various practices such as versioning, CI/CD, etc., I’ll use a sample (dummy) module, “PSLogger”. I’ll explain the individual components of the pipeline and how to implement a CI/CD pipeline for PowerShell. I’ve made this project public, so you can have a look at the code as well as the release pipeline and a custom dashboard I created to give an overview of the release process.

Dashboard: As Dr. Covey says, “Begin with the end in mind” – I want to start with the result of implementing the above DevOps practices. This dashboard is a one-stop view of build trends, test results, open pull requests, and release status. You can access this dashboard here

Dashboard

Version Control: I use Git to version control my codebase, and I use Azure Repos for this. I’m a big proponent of trunk-based development (TBD). In a nutshell, TBD is about having a single branch without the need for any other long-living branches such as development or release branches. TBD not only ensures faster feedback but also avoids waste by avoiding (or reducing) merge conflicts. You can read more about TBD here

Code standardization: I use plaster templates to structure PowerShell modules. You can find more information on how to use plaster here.  In the example project, you can use the plaster template to create a PowerShell module by running the following command.

Invoke-Plaster  

Creating a PowerShell module using Plaster template
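
If you prefer to run it non-interactively, Invoke-Plaster also accepts the template and destination paths directly; a sketch (the template path and the ModuleName parameter are assumptions based on my sample manifest):

# Scaffold the module without interactive prompts (parameter names come from the template manifest)
Invoke-Plaster -TemplatePath .\PlasterTemplate `
               -DestinationPath .\Modules `
               -ModuleName 'PSLogger'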

CI pipeline: I use psake to build the PowerShell module. I also use the BuildHelpers module in the build process. A CI pipeline is set up to trigger a build on every check-in. I use a YAML-based multi-stage pipeline in azure-pipelines.yml. The CI pipeline has two stages, “Build” and “publishArtifacts”, with the following tasks.

“Install Dependencies and initialize”: I use the PSDepend module to ensure build dependencies. PSDepend uses build.dependencies.psd1  to resolve dependencies.

- powershell: |
    .\Build_Release\build.ps1 -ResolveDependency -TaskList 'Initialize'
  displayName: "Install Dependencies and initialize"
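
The dependency file itself is a plain PowerShell data file; a trimmed-down sketch of what build.dependencies.psd1 could contain (the module list and versions are assumptions):

# build.dependencies.psd1 - modules PSDepend resolves before the build runs
@{
    psake        = 'latest'
    BuildHelpers = 'latest'
    Pester       = '4.10.1'
    Plaster      = 'latest'
}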

“NuGet tool install and NuGetAuthenticate”: The PowerShell module version is updated based on the latest module available in my NuGet artifact feed. To achieve this, I get the latest module info from the NuGet feed. To do this, ensure NuGet is available on the build agent by installing it.

- task: NuGetToolInstaller@1
  inputs:
    versionSpec:
- task: NuGetAuthenticate@0
  inputs:
    forceReinstallCredentialProvider: true

 

“Build Module”: This task is responsible for updating the version number of the module and removing the “Tests” folder. The latest module version number is fetched from the NuGet feed and incremented. If no module is available (in the case of a first-time publish to the NuGet feed), the default version of 0.0.1 is selected. This task accepts a token, the NuGet feed name, and the feed URL; these are set as pipeline variables.

- powershell: |
    .\Build_Release\build.ps1 -TaskList 'BuildModules' -Parameters @{ADOPat='$(ADOPAT)';NugetFeed='$(NugetFeed)';ADOArtifactFeedName='$(ADOArtifactFeedName)'}
  displayName: "Building modules"
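
Inside the ‘BuildModules’ task, the version bump boils down to something like the following (a simplified sketch; the exact helper code in the repo may differ):

# Simplified sketch of the version-bump logic in the 'BuildModules' psake task
$existing = Find-Module -Name 'PSLogger' -Repository $ADOArtifactFeedName -ErrorAction SilentlyContinue

if ($existing) {
    $current    = [version]$existing.Version
    $newVersion = [version]::new($current.Major, $current.Minor, $current.Build + 1)
}
else {
    # First publish to the feed
    $newVersion = [version]'0.0.1'
}

# BHPSModuleManifest is set by the BuildHelpers module
Update-ModuleManifest -Path $env:BHPSModuleManifest -ModuleVersion $newVersion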

“Test”: In the test step, all unit tests are executed by Pester, and test results and code coverage results are published. A “Test” task in build.psake.ps1 is responsible for running the tests and creating the test and coverage result files.

Task 'Test' {

    $testScriptsPath = "$ENV:BHModulePath"
    $testResultsFile = Join-Path -Path $ArtifactFolder -ChildPath 'TestResults.Pester.xml'
    $codeCoverageFile = Join-Path -Path $ArtifactFolder -ChildPath 'CodeCoverage.xml'
    $codeFiles = (Get-ChildItem $testScriptsPath -Recurse -Exclude "*.tests.ps1" -Include ("*.ps1", "*.psm1")).FullName

    # Import the module under test before invoking Pester

    Import-Module -Name $ENV:BHPSModulePath
    if (Test-Path $testScriptsPath) {
        $pester = @{
            Script       = $testScriptsPath
            # Make sure NUnitXML is the output format
            OutputFormat = 'NUnitXml'
            OutputFile   = $testResultsFile
            PassThru     = $true # To get the output of invoke-pester as an object
            CodeCoverage = $codeFiles
            ExcludeTag   = 'Incomplete'
            CodeCoverageOutputFileFormat = 'JaCoCo'
            CodeCoverageOutputFile = $codeCoverageFile
        }
        $result = Invoke-Pester @pester
    }
}

A sample test report and a code coverage report are shown below.

Unit Test Report
Code Coverage report

“PublishArtifacts”: This stage is responsible for publishing artifacts (the PowerShell module as a NuGet package) to the Azure Artifacts feed. As a best practice, I publish to only one feed and use different views (@PreRelease and @Release) to promote artifacts across environments. The NuGet feed is registered via helper.registerfeed.ps1, and the module is published via publish.ADOFeed.ps1.

# helper.registerfeed.ps1 
[CmdletBinding()]
param (
    [string]$ADOArtifactFeedName,
    [string]$FeedSourceUrl, 
    [string]$ADOPat
)

$nugetPath = (Get-Command -Name NuGet.exe -ErrorAction SilentlyContinue).Source

# Fall back to the copy that PowerShellGet bootstraps if NuGet.exe is not on the path
if (-not $nugetPath -or -not (Test-Path -Path $nugetPath)) {
    $nugetPath = Join-Path -Path $env:LOCALAPPDATA -ChildPath 'Microsoft\Windows\PowerShell\PowerShellGet\NuGet.exe'
}

# Create credentials
$password = ConvertTo-SecureString -String $ADOPat -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($ADOPat, $password)

Get-PackageProvider -Name 'NuGet' -ForceBootstrap | Format-List *

$registerParams = @{
    Name                      = $ADOArtifactFeedName
    SourceLocation            = $FeedSourceUrl
    PublishLocation           = $FeedSourceUrl
    InstallationPolicy        = 'Trusted'
    PackageManagementProvider = 'Nuget'
    Credential                = $credential
    Verbose                   = $true
}

Register-PSRepository @registerParams

Write-Host "Feed registered"

Get-PSRepository -Name $ADOArtifactFeedName

# publish.ADOFeed.ps1
[CmdletBinding()]
param (
    [string]$ADOArtifactFeedName,
    [string]$FeedSourceUrl, 
    [string]$ADOPat,
    [string]$ModuleFolderPath  
)

if (-Not $PSBoundParameters.ContainsKey('ModuleFolderPath')) {
    # Default to the staging folder under the pipeline workspace (Pipeline.Workspace maps to this env var)
    $ModuleFolderPath = Join-Path -Path $env:PIPELINE_WORKSPACE -ChildPath 'Staging'
}

$nugetPath = (Get-Command -Name NuGet.exe -ErrorAction SilentlyContinue).Source
if (-not $nugetPath -or -not (Test-Path -Path $nugetPath)) {
    $nugetPath = Join-Path -Path $env:LOCALAPPDATA -ChildPath 'Microsoft\Windows\PowerShell\PowerShellGet\NuGet.exe'
}

. $PSScriptRoot\helper.registerfeed.ps1 -ADOArtifactFeedName $ADOArtifactFeedName -FeedSourceUrl $FeedSourceUrl -ADOPat $ADOPat

$module = (Get-ChildItem  -Path $ModuleFolderPath -Directory).FullName

$publishParams = @{
    Path        = $module
    Repository  = $ADOArtifactFeedName
    NugetApiKey = $ADOPat
    Force       = $true
    Verbose     = $true
    ErrorAction = 'SilentlyContinue'
}

Write-Host "Publishing Module"

Publish-Module @publishParams -Credential $credential

Status: Build and deployment status is reported in the dashboard (above) and also as build badges. I’ve added badges to the README.md file.

Build and Release Badges

Release pipeline 

The release pipeline is responsible for promoting the artifact (the PowerShell module) from Dev to Test and then to Prod. Here is a high-level overview of the pipeline. A new release is triggered when a new version of the artifact is available in Azure Artifacts.

Release Pipeline overview

Stages: There are 3 stages in the pipeline – Dev, Test & Prod. All 3 stages broadly do the following.

  1. Register the NuGet feed as a PSRepository. I invoke helper.registerfeed.ps1 for this.
  2. Install the latest module from the feed.

Both Dev and Test stages also run integration tests before they promote the package to the next stage. I use the “Promotes a package to a Release View in VSTS Package Management” extension to achieve this.
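
Each stage essentially boils down to a few PowerShell steps; a rough sketch (the module name and test paths are assumptions from my sample project):

# Rough sketch of what a stage runs once helper.registerfeed.ps1 has registered the feed
Find-Module -Name 'PSLogger' -Repository $ADOArtifactFeedName |
    Install-Module -Scope CurrentUser -Force

Import-Module -Name 'PSLogger'

# Dev and Test additionally run the integration tests before promoting the package
Invoke-Pester -Script .\Tests\Integration -EnableExit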

 Azure Artifacts

I’ve created a feed “PSLogger_artifacts” in Azure Artifacts. Every successful CI pipeline creates a new artifact with a default (@Local) view.

The release pipeline is responsible for promoting the artifact to the next view based on the quality gates defined in the Dev and Test stages.

I hope this blog gives you detailed insights into how to set up a pipeline and ensure quality standards and release gates for PowerShell development. I’m curious to know what you think about the various DevOps practices mentioned in this blog.

Enterprise cloud adoption strategies – Role of Central IT

“By 2020, a corporate ‘no cloud’ policy will be as rare as a ‘no internet’ policy is today” – Gartner.

Available data clearly indicates the direction of the cloud infrastructure market forecast.


Adopting cloud at enterprises requires some additional considerations on the following topics:

  • Compliance
  • Security
  • Governance
  • Auditability / traceability
  • Operating models
  • Responsibility model
  • Way of work

Irrespective of the size of the organization, there are certain common areas of focus during cloud adoption, such as:

  • Cloud (native) first approach
  • Buy vs build
  • T-shaped skilled people
  • Application architecture and technology landscape
  • Culture of safety and experimentation
  • End to end value chains with minimal handoffs

 

What is Central IT?

Central IT is the IT function of an enterprise which provides its data center/infrastructure services. Central IT (also known as Central CIO, IT for IT, etc., depending on the organization’s lingo) provides various services to business units, such as networks, databases, servers, application platforms (e.g. API platforms), etc.

So, will this function be relevant after an enterprise adopts cloud? Yes, but not in the same way as in the pre-cloud era. The role of Central IT depends on the cloud vision of the organization. An organization’s cloud vision can broadly be placed on the spectrum below.


Trust no one: The Central IT function decides on the infra requirements. This is almost like using the cloud as another data center. It doesn’t really make the best out of the cloud; however, it gives total control to Central IT.

Trust the decentral process: Business units are free to specify their infra needs. E.g. using Elastic Beanstalk or Lambda can be driven by BUs but is ultimately executed by Central IT.

Trust only specialists in teams: Only certain people in BUs can create/manage cloud resources. These are people with special privileges who do not necessarily come from Central IT.

Trust the teams but verify: Teams are allowed to take full responsibility for their infrastructure. However, Central IT provides an automated way to enforce policy requirements on all the resources teams create.
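
As a purely illustrative sketch of what “verify” could look like on Azure (my example, not prescribed by any specific organization), Central IT might assign a built-in Azure Policy at subscription scope and let teams create resources freely within that guardrail; note that the property path can differ between Az.Resources versions:

# Sketch: assign the built-in "Allowed locations" policy at subscription scope (names and values assumed)
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }   # on newer Az versions: $_.DisplayName

New-AzPolicyAssignment -Name 'restrict-regions' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedLocations = @('westeurope', 'northeurope') }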

 

Central IT as “Cloud Evangelists” during cloud adoption:

 

There are clear benefits in having a “Cloud Evangelists” model (also known as a “Cloud center of expertise”, “Cloud competency center”, etc., depending on organization lingo) during cloud adoption. Adopting cloud at an enterprise has an impact on its people, processes and technology. To enable, facilitate and accelerate the change process, the “Cloud Evangelists” will define standards, advise on cloud technology and its usage, and coach BUs to adopt cloud.

 

This “Cloud Evangelists” team will also ensure that common concerns such as compliance, security, etc. are enforced automatically, so that each BU need not spend any additional effort on them. The diagram below depicts a possible role a “Cloud Evangelists” team can play in an enterprise.


I plan to express my views in detail on each topic of cloud adoption in my next series of blogs. Do let me know what you think about the role of a central cloud team. Do you recognize the aspects I mentioned in this blog? Agree or disagree, do let me know via the comments.

Testing your infrastructure with Inspec

Infrastructure as code (IAC) is not a luxury anymore but a necessity for DevOps teams to be efficient and effective. Most of the teams I coach have either already implemented or are in the process of implementing their infrastructure as code. At some point during the coaching assignment, all these teams have asked exactly the same question: “We already use tools (Terraform, CloudFormation, Ansible, Packer, etc.) to create infrastructure. Why should we test it again? Obviously, we don’t doubt these tools, nor do we want to test these tools, right?”

Why test infrastructure

Because it changes. Because it is code. In the world of IT, anything that changes needs to be validated to check that it still matches the desired result. Basic principles of software development should be applied while creating infrastructure as code. Here are my top 5 reasons why you should test your infrastructure code.

  • Continuous compliance and security standards: Infrastructure testing tools (such as Inspec) will help you detect and report violations so you can address them appropriately.
  • Shift-left approach: Move compliance checks, security validations, and feedback loops on IAC changes more towards the left of the delivery pipeline.
  • Faster troubleshooting: If your application is not working as expected, it becomes easier to narrow down whether the cause is environment/infrastructure related.
  • Make changes with more confidence: Automated verification of IAC provides safety nets that enable teams to make changes with confidence.
  • Focus on fire prevention rather than firefighting: Detecting problems/symptoms sooner enables teams to take the required corrective measures before they escalate and disrupt business.

What and Why of Inspec

Inspec is an open source framework written in Ruby which helps you test your infrastructure. Inspec validates the actual state against the desired state. I hear you say, “Hey! Even terraform plan can give me this information”. Yes, Terraform (or other similar tools) will tell you if your infrastructure definition matches the actual state. However, let’s look at a practical use case.

You want to ensure all your web servers are associated with certain sub-nets and those sub-nets belong to a certain security group. You might also want to verify ingress and egress rules. Another use case: how do you ensure someone else (because of a shared tenant or otherwise) has not added an extra security group or sub-net? There are numerous use cases you can derive based on network configuration, file system, system configuration, etc. Well, this is where Inspec comes to your rescue.

A simple Inspec test – it verifies whether a given security group meets specific inbound rules:

describe aws_security_group(group_name: 'linux_servers') do

  its('inbound_rules.first') { should include(from_port: '22', ip_ranges: ['10.2.17.0/24']) }

end

The test below checks for correct tagging of an EC2 instance:

describe aws_ec2_instance('i-090c29e4f4c165b74') do
  its('tags') { should include(key: 'Contact', value: 'Gilfoyle') }
end

A quick anatomy of the above tests:

  • describe: an Inspec keyword which you can roughly equate to a test fixture
  • aws_security_group/aws_ec2_instance: out-of-the-box Inspec resources
  • parameter to the resource: a way to identify the resource. This can be parameterized via an attributes file or from a Terraform state file.
  • its('…'): a property of the resource which needs to be compared
  • should include(…): a condition which gives the test result

Why do I prefer Inspec? Here are my top 5 reasons:

  • Flexible: Inspec is incredibly flexible. It offers numerous resources out of the box. However, it is also quite easy to create your own custom resources to meet your requirements.
  • Easy to write and read: Inspec tests are very easy to write and read, and they closely resemble a human-readable format.
  • Remote testing support: You don’t have to install any packages/tools on your target infrastructure. Inspec uses SSH/WinRM to carry out testing.
  • Platform agnostic: Inspec can be used on multiple platforms like Windows, Linux (different flavors) and Docker, and with multiple cloud providers such as AWS and Azure.
  • Open source: Inspec is open source and supported by Chef.

How to get started

Introducing a new tool or a practice to your teams can be done in multiple ways. Here is what worked for me. Again, here are my top 5 steps to introduce Inspec:

  • Make the problem visible: This is obvious, but obvious pain is not always obvious 😊. This is especially true with IAC, as the rate of change encourages teams to bear the pain.

  • Familiarize the team with the tool: I usually arrange a working lunch or a couple of hours of hands-on sessions with teams and guide them through some simple tests. This really helps teams get a good understanding of Inspec and encourages them to try it out in their projects.

  • Identify critical yet simple use cases: Start small. Pick one or two simple test cases, possibly in the part of the infra which changes often. The key is a test case that is small yet valuable to the team.

  • Start with non-production environments: Test environments change more often (manually or otherwise) than production. I suggest teams target their tests at test environments for the first 1 or 2 sprints. This gives them enough feedback and puts testing on their radar. Once they are comfortable, this becomes part of their DoD.

  • Make these tests part of your pipeline: Make sure these tests are part of your pipeline. Remember, Inspec tests can be used for compliance and security auditing; having these tests in a pipeline also helps in meeting required compliance standards (a minimal example follows below).
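
For example, a pipeline step could be as simple as the command below (the profile path, target region, and reporter file are assumptions):

# Run the InSpec profile in the repo against AWS and emit a JUnit report the pipeline can publish
inspec exec . -t aws://eu-west-1 --reporter cli junit:inspec-results.xml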

Refer to https://www.inspec.io/ for more information. You can find a lot of hands-on tutorials at https://learn.chef.io/modules#/

5 Characteristics of a Devops organization

As a consultant (or a new member of a DevOps team), what are your telltale signs of a “DevOps” organization? Do share them in the comments section. Below are my top 5.

Product based teams over component teams:

Autonomous and cross-skilled teams are key to delivering and maintaining products. An organization structure based on knowledge silos (such as dev, QA, ops) is bound to create multiple handovers and thereby increase waste and the risk of things going wrong. In a mature DevOps organization, you will typically find organization structures based on the products/services they offer.

However, there is still value in “Component teams”. For instance, the core technology services/capabilities of the IT organization can still be fulfilled by a technology team. This is an acceptable situation when these core services/capabilities are enablers for other devops teams to deliver value to end customers.

Obsession with Automation over preoccupation with manual work:

DevOps teams are obsessed with automation. Every manual task carries increased risk compared to its automated counterpart. In most cases, one of the biggest bottlenecks in the overall value stream is manual intervention, which is also highly error-prone and time-consuming. Hence, mature DevOps teams rely on automation to achieve consistency and speed. DevOps organizations enable their teams to focus on ruthless automation of all their activities, such as infrastructure, deployments, testing, documentation, etc.

However, there is still value in some manual interventions. Typically, activities such as exploratory testing, end-user training, etc. might still require some manual effort, but this should be kept to a minimum, with a continuous look for ways to automate. E.g. to get early feedback from customers, DevOps teams can use techniques such as canary releases, feature toggles, A/B testing, dark launches, etc.

Evidence based over gutfeel:

DevOps teams measure what matters. Their KPIs give insights into various aspects such as code quality, build quality, release quality, NFRs, and various production monitoring metrics. Technology and business decisions are driven by data. E.g. how did new architecture design changes impact performance? How is the new feature we implemented being used by our users? When do users use a feature in our application? How does the new code we shipped impact our code quality or security? Questions like these are answered by hard facts and not by the gut feel of the team.

Data-driven decision making is one of the key aspects of DevOps teams and organizations. However, in some instances the business might take decisions, such as implementing a new feature, based on gut feel. I rather like to call these assumptions or hypotheses that a certain feature will make users happier or more effective, etc. However, these decisions, which are based on a certain hypothesis, need to be validated with data, either after a release or preferably before it.

 

Team work over individual work

DevOps teams require a high level of professionalism and engineering excellence. Professionalism reflects in their ability to do the right thing, the courage to say no, the courage to ask for help, disagreeing respectfully, commitment to deliver, and the ability to collaborate openly and honestly with each other. When people disagree, argue or criticize, they don’t disrespect each other: they disagree with the idea and not the person.

Members of mature DevOps teams hold each other to higher standards. As a team, they celebrate each other’s successes, which are in turn the team’s success. This promotes a sense of achievement and quality, and is a great engine of motivation at the workplace.

Fail fast over delayed learning

Mistakes are mandatory for learning! A team which always plays it safe without exploring uncharted territories will not often challenge the status quo. Mature DevOps teams/organizations perform blameless postmortems to learn from mistakes. Often, these local learnings can be transformed into organization-wide learnings.

Fail fast will be an effective strategy only if the cost of failure is small, manageable, and doesn’t result in a cascading chain reaction. This is where effective feedback loops and a high level of automation come into the picture. Apart from this, mature DevOps teams have a culture of trusting each other, challenging each other, and an eye for constant improvement.

E.g. in our organization we have a culture of “celebrating failures/mistakes”. At every monthly all-hands meeting, employees share their biggest mistake/failure of the month. The whole organization votes on the biggest “screw-up”, and that person wins a nice dedicated parking spot for a month 😊. This has resulted in a culture where people are open to sharing their mistakes, and thereby all of us can learn from them.

 

 

Do you notice these characteristics in your team/organization?  What will you add to this list?

6 best practices for application deployments

Many software development teams are now working in an Agile/Scrum way, and that’s great! One of the cornerstones of the Agile way of working is “deliver value fast and often”. Real value is delivered only when software is running in production (not Dev, not QA 😊).

Having the right deployment principles and practices in place is all the more important in Agile environments, because new increments are produced by Scrum teams at the end of each sprint. The right deployment strategy is a key factor in having faster and more effective feedback loops from each environment. Below are some best practices for application deployments.

Build once deploy anywhere

Do you run into situations such as “Hey! It works on QA but not UAT or Prod”? One of the root causes of such situations is creating build artifacts for each environment. It is key to promote the same package which was tested in lower environments (Dev/QA) to later environments (UAT/Prod). You will introduce unwanted risk if you build the codebase every time you deploy to a different environment, as there is always a hidden danger of introducing unwanted changes. Automated deployments are only effective when the same deployment package goes through the different quality gates. If you change/build the deployment package for each environment, you are bypassing the lower-environment quality gates.

Hint: Use the same build package and promote it through all environments.

It should be a people first process

Using the right tools for application deployments is important. However, focusing on tools alone will not help. Deployments are smooth when there is good collaboration between the people who build the software and the people who deploy the software. When work is done in silos, focus is narrowed, which leads to expensive and time-consuming handoffs. Improving the speed of the slowest member of a convoy increases the speed of the whole convoy; in the same way, better collaboration and the elimination of waste during handover improve the overall deployment process.

Hint: Improve collaboration between Dev and Ops to minimize handovers.

Make deployments boring

Deploying to production need not be a ceremony. Production deployments need to be routine, boring events, because the same process is used all along for each environment. The new features you deploy to production should give you excitement, not the deployment process 😊. You will add unnecessary complexity if you customize the deployment process for each environment.

Hint: Use the same repeatable and reliable way of deploying to each environment.

Automate, automate, automate

Automate your build process, automate your application/component configuration (configuration as code), automate your infrastructure (infrastructure as code), automate your deployment process. A good rule of thumb: “Everything that does not require human judgment/intervention is a candidate for automation”

Hint: Visualize your current end-to-end deployment process to identify quick wins and low-hanging fruit for automation, and to identify bottlenecks.

The Architecture Drives the Build

Batch size has a great deal of influence on flow, and architecture influences batch size. If you modify or add one line of code, how big is the impact on testing, building and deploying the package? Follow standard design principles such as separation of concerns, the single responsibility principle, the principle of least knowledge, don’t repeat yourself, and minimizing upfront design. As depicted below, if you have a spaghetti architecture, deploying a change is expensive and time-consuming, so choose ravioli 😉

Spaghetti vs. ravioli architecture

Hint: Choose a loosely coupled architecture and focus continuously on architecture refactoring.

Manage your dependencies

One of the key challenges of working in a distributed, multi-team environment is dependency management. There is a real need to ensure easy distribution of the artifacts produced by different teams, as they share dependencies between them. A repository manager comes in handy in this situation. It is also useful to define access rules for the users and groups that consume artifacts, so the consumer uses the right artifact/version. Other benefits of using a repository manager include a reduction in build time, thanks to a significantly reduced number of downloads from remote repositories. You can also use a repository manager in case you want to roll back to a previous version.

Hint: Always use a repository manager to manage your dependencies and version your build artifacts.

 

Who can be a great scrum master?

A lot of organizations, managers and scrum masters have this question: what makes a scrum master great? Do they just need to know Scrum, or do they need more than that?

My usual answer – a scrum master needs to have certain skills and traits. Of course, having sound Agile/Scrum knowledge is important, and equally important are some mindset aspects, which are listed below.

  • Don’t ask for permission, ask for forgiveness
  • Ask the team
  • “I have great responsibility, but no authority”
  • “The collective minds of the team vastly exceeds my own”
  • My job is to make sure I’m not needed
  • I win when the team wins
  • Able to hold the mirror for the team to reflect and adapt
  • Make team feel accountable, inspired, focused
  • Inspire, don’t “require”
  • Don’t give team the fish, teach them to fish.
  • Non judgmental
  • Actions based on facts and not on perceptions
  • You are a midwife, not the laboring woman 😊
  • Live the values!
  • Have serving the team as the primary goal.

Myths about Scrum, Agile, Software development

 

During the past few years, my role as a scrum.org trainer, agile coach and software developer has given me opportunities to interact with some of the best and brightest in the industry. At the same time, I’ve also interacted with some people, teams and organizations which have somehow fallen into the trap of believing some myths of our industry. The list below is based on what my colleagues at scrum.org and I have seen.

Scrum:

  • Scrum is the silver bullet that can make any project finish on time.
  • Scrum can put your project to failure.
  • Scrum should be changed to fit your company. (Note: Scrum is a framework; it can be adapted, but with no changes to its core essence)
  • Product Owner can accept or reject increment.
  • Scrum is suitable only for small projects.
  • Scrum does not work for remote teams
  • Scrum only works for mature team members
  • Kanban is more flexible than Scrum
  • Scrum does not work for fixed price projects
  • Tester does not have any role in Scrum
  • Scrum Master is a project manager in Scrum
  • Scrum (or any part of it) will never work here
  • Our project/product is different. Scrum is no good use here.
  • Scrum doesn’t work when there are too many dependencies between teams
  • Scrum can’t work if you don’t change performance appraisals, incentives, etc
  • Scrum doesn’t work if your software runs on hardware
  • One PO cannot possibly handle X teams (where X is some number larger than 2-3)
  • Scrum can’t/shouldn’t be used for ‘maintenance teams’ / Kanban should be used for maintenance/brownfield/legacy teams
  • Scrum is just Waterfall done more often

Agile:

  • Agile teams don’t do any planning
  • Agile teams don’t do documentation
  • Agile teams are cowboys where you don’t have any control over them
  • Pair Programming is someone watching over my shoulder
  • If we skip documentation, we are Agile
  • Agile ignores risk
  • Agile doesn’t believe in any metrics
  • Agile requires no management
  • Agile requires no experts
  • Agile means no deadlines

Software Development:

  • Bugs/Production emergencies are always going to happen
  • Test Automation is too expensive and too hard to be worthwhile
  • We have to be able to fix schedule/scope/cost to keep customers happy
  • Adding people to a project that is running late will get it back on schedule
  • Developers can’t talk to customers
  • We don’t have any way of getting customer feedback
  • Schedule, Scope, Cost is a great measure of software success
  • We’re doing Scrum, so we don’t need to do TDD and Pair coding
  • Programmers can’t be trusted to test their own software
  • Only the testers do testing. Testing is not my (programmer/analyst/architect) job
  • The architecture/design must be done upfront
  • Only the BA writes requirements. Requirements are not my (programmer/tester/architect) job

Continuous Delivery with Release Management – Introduction

 

Microsoft recently acquired InCycle’s “InRelease” software [now called Release Management (RM)] and integrated it with VS 2013. The Release Management software fully supports TFS 2010, 2012 and 2013.

Before we look into details of Release Management, let’s look at what Continuous Delivery means.

What is CD?

Continuous Delivery is the capability to automatically deploy components to various servers in different environments. This typically involves configuration management of the different environments and the ability to define and customize a deployment workflow driven by the business, involving multiple roles in the organization.

Why do we need it?

Well, DevOps is the talk of the town. If you want to be a cool kid (team), you gotta know/implement CD. Apart from the cool factor, CD brings the following advantages to the dev team and the business.

  • Develop and deploy quality applications at a faster pace.
  • Improve the value of delivery by reducing cycle time.
  • Enable the same deployment package to traverse various environments, as opposed to rebuilding for each environment.
  • Manage all configuration information in a centralized location.
  • Have repeatable, visible and more efficient releases.
  • Align deployments with business processes.
  • Adhere to any regulatory requirements during the deployment process.

What is Release Management?

Release Management is a continuous delivery solution for .NET teams that automates deployments through every environment, from Team Foundation Server (TFS) to production. RM also allows you to define release paths that include approvals from the business and other departments (such as ops) when required. RM enables you to assemble all the components of your application, copy them to the required target servers and install all of them in one transaction. QA checks such as automated tests, data generation scripts, configuration changes, etc. are all handled by RM. Release Management also handles rollback in the required scenarios.

Release Management Components:

The following diagram shows the main components of Release Management.

Release Management Components

Client: There are two client components. The Windows client is a Windows Presentation Foundation (WPF) application that serves as the main interface point to manage release information. The Web client is used to act on approval requests; this is the interface to which users are directed when following links in e-mail notifications. The client is used by both business and development teams to provide the necessary approvals when required.

RM Server: The Server component is the heart of Release Management. It is a combination of Web and Windows services that expose contracts used by all other components. The server component also contains a SQL Server database. Typically, the RM server is installed on the TFS server and can share the same SQL Server.

RM Deployer: The Deployer component is a Windows service that lives on the Target Servers (such as Dev, QA, Prod etc) where your application components need to be installed.

Tools: The tools are components that help in the deployment of various components to different servers, configuration, etc. A few of them are given below.

  • Installing a version of a component to a specific environment.
  • Deployments to Azure.
  • Uninstalling a previous version of a component before a re-deployment.
  • Deploying reports to Microsoft SQL Reporting Services.
  • Running SQL scripts on a database server, etc.

In the next blog, I’ll write about configuring Release Management.

Reference material:

Channel 9 Video

Visual Studio 2013 ALM VM – Hands on lab

InRelease User guide

Continuous Delivery with Release Management – Configuration

This blog is a continuation of my previous blog introducing the Release Management tool to implement continuous delivery.

The Release Management software (server, client and deployment agent), installation guide and user guide can be downloaded from here.

1. The server can be installed on the TFS server, and RM can create its database on the same DB server.

2. By default, RM runs on port 1000, but can easily be changed.

3. Server configuration is pretty straightforward; the deployment agent can be configured in the client by choosing Administration, Settings, Deployer Settings.


4. Most of the key configurations, such as TFS user groups, SMTP settings, connections, servers, etc., are configured via “Configuration Paths”. This can be done by navigating to Administration, Settings, System Settings.

 


In the next blog, we will see how to create and configure release pipelines.