DevOps Analytics


Introduction

DevOps Analytics is a suite of advanced, interactive visualizations that gives you insight into your development process with Copado. It provides CIO-level metrics as well as dashboards for optimizing planning, development and deployment. DevOps Analytics includes a set of standard Salesforce reports and a dashboard that give you an overview of your development process, plus a set of Advanced Interactive Dashboards to help you manage your DevOps lifecycle and identify key areas where there is room for improvement.

Since part of DevOps Analytics is built on Salesforce’s standard functionality, you can create your own reports and dashboards to extend Copado’s built-in capabilities and track the metrics that are most meaningful and relevant to your DevOps process.

Let’s dig deeper into the various reports and dashboards included in DevOps Analytics and the different metrics provided.

New Copado Fields

The Copado DevOps Analytics managed package includes a wealth of functionality! You will find new custom fields that can be added to your Copado core layouts, new reports, a standard dashboard, and advanced analytics.  

User Story object: 

  • Lead Time 
    • This field is automatically calculated as the time from when a user story was first ready to be promoted to when it was first promoted to a production environment (see the sketch after this list).
  • First Ready to Promote Time
    • This field is automatically populated when a user story is ready to be promoted. It is set based on the Ready to Promote or Promote & Deploy checkbox.
       
  • First Time Promoted to Production
    • This field is automatically populated when a user story is promoted to a production environment.
     
  • Development Complete
    • This field identifies when a user story is development complete. This is utilized in the Planning dashboard.
  • Elapsed Time to Resolve 
    • This field is calculated as the time from when a user story is created to when it is promoted to a production environment.
      Use this field together with the new Business Disruption Failure? field to see how long it takes you to fix or recover from a failure. 

  • Severity 
    • A picklist field that enables you to track severity in a standardized way and helps identify any type of system downtime.
       
  • Business Disruption Failure? 
    • This new checkbox is automatically set if you select P0 or P1 for Severity. It is used to flag system failures.
       
  • Found in Promotion
    • This lookup field lets you identify which promotion a failure may have been found in. Add it to your Bug layout to use it in conjunction with the Severity and Business Disrupting Failure? fields. 
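
Copado populates these fields automatically; the Python sketch below only illustrates the date math described above, using hypothetical attribute names rather than the real Copado field API names.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class UserStory:
    # Hypothetical stand-ins for the Copado User Story fields described above
    created_date: datetime
    first_ready_to_promote: Optional[datetime] = None
    first_promoted_to_production: Optional[datetime] = None

def lead_time_days(story: UserStory) -> Optional[float]:
    """Lead Time: first time the story was ready to promote ->
    first time it was promoted to a production environment."""
    if story.first_ready_to_promote and story.first_promoted_to_production:
        delta = story.first_promoted_to_production - story.first_ready_to_promote
        return delta.total_seconds() / 86400
    return None  # not yet released

def elapsed_time_to_resolve_days(story: UserStory) -> Optional[float]:
    """Elapsed Time to Resolve: story creation ->
    first time it was promoted to a production environment."""
    if story.first_promoted_to_production:
        delta = story.first_promoted_to_production - story.created_date
        return delta.total_seconds() / 86400
    return None

# A story created on 1 March, ready to promote on 8 March, released on 10 March
story = UserStory(datetime(2022, 3, 1), datetime(2022, 3, 8), datetime(2022, 3, 10))
print(lead_time_days(story))                # 2.0
print(elapsed_time_to_resolve_days(story))  # 9.0
```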

Promotion object:

  • Caused Business Disruption?
    • This field is automatically updated if a user story references the promotion in its Found in Promotion field and that user story is a business-disrupting failure.
       
  • Destination is a Production Org? 
    • Indicates whether the destination environment is a production environment. This checkbox is automatically updated based on the destination environment.

Environment:

  • Production Org?
    • A new checkbox that enables you to identify whether the environment is a production environment.

Standard Dashboard and Reports

The Copado Analytics dashboard provides a unique set of DevOps KPIs that help you monitor your DevOps process. This dashboard includes the following metrics: Lead Time, Current Month Promotion Frequency, Promotion Frequency, Change Fail Rate %, and Mean Time to Restore (Days).

Lead Time: This chart shows the average time in days it takes for a user story to be promoted to a production environment.

Current Month Promotion Frequency: This metric displays the frequency of promotions to production for the current month. 
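
As a rough illustration of how this count could be derived, the sketch below tallies promotions whose destination is a production org in the current calendar month; the class and attribute names are hypothetical, not the Copado API names.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Promotion:
    # Illustrative stand-ins for the Promotion fields described above
    completed_on: date
    destination_is_production: bool  # "Destination is a Production Org?"

def current_month_promotion_frequency(promotions: list[Promotion], today: date) -> int:
    """Count promotions to a production environment in the current calendar month."""
    return sum(
        1
        for p in promotions
        if p.destination_is_production
        and p.completed_on.year == today.year
        and p.completed_on.month == today.month
    )

promos = [
    Promotion(date(2022, 4, 2), True),
    Promotion(date(2022, 4, 12), True),
    Promotion(date(2022, 4, 15), False),  # promotion to a sandbox, not counted
    Promotion(date(2022, 3, 30), True),   # previous month, not counted
]
print(current_month_promotion_frequency(promos, today=date(2022, 4, 20)))  # 2
```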

Change Fail Rate %: This chart measures how often promotion failures occur in production that require an immediate remedy, such as a rollback. Change fail rate is calculated as the percentage of promotions where the Caused Business Disruption? field on the Promotion record is TRUE. Several new fields on the user story also help you track business-disrupting failures: Business Disrupting Failure? (checkbox), Severity (picklist), and Found in Promotion (lookup). Together, these fields let you track a failure and the promotion it may have been found in. The Severity field has automation built around it so that if you mark a user story as P0 or P1, Copado automatically sets the business-disrupting checkbox to TRUE.
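
A minimal sketch of that calculation, assuming simplified stand-in field names and that the percentage is taken over production promotions only (the denominator is an assumption here):

```python
from dataclasses import dataclass

@dataclass
class Promotion:
    destination_is_production: bool   # "Destination is a Production Org?"
    caused_business_disruption: bool  # "Caused Business Disruption?"

def change_fail_rate_pct(promotions: list[Promotion]) -> float:
    """% of production promotions flagged as having caused a business disruption."""
    prod = [p for p in promotions if p.destination_is_production]
    if not prod:
        return 0.0
    failed = sum(1 for p in prod if p.caused_business_disruption)
    return 100.0 * failed / len(prod)

promos = [
    Promotion(True, False),
    Promotion(True, True),   # e.g. a P0/P1 bug was traced back to this promotion
    Promotion(True, False),
    Promotion(False, False), # sandbox promotion, excluded
    Promotion(True, False),
]
print(round(change_fail_rate_pct(promos), 1))  # 25.0
```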

Mean Time to Restore: This graph displays the average time it takes to resolve business-disrupting failures. To calculate this, Copado uses a new field called Elapsed Time to Resolve, which is calculated for all user stories. The report filters user stories where Business Disrupting Failure? = TRUE and First Time Promoted to Production is not blank. This way, the report only includes user stories that were identified as a failure and were then fixed and promoted to production.
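
A minimal sketch of that filter and average, again with hypothetical attribute names standing in for the Copado fields:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserStory:
    business_disrupting_failure: bool
    first_promoted_to_production: Optional[str]  # None while the fix is unreleased
    elapsed_time_to_resolve_days: Optional[float]

def mean_time_to_restore_days(stories: list[UserStory]) -> Optional[float]:
    """Average Elapsed Time to Resolve across resolved business-disrupting failures."""
    resolved = [
        s.elapsed_time_to_resolve_days
        for s in stories
        if s.business_disrupting_failure
        and s.first_promoted_to_production is not None
        and s.elapsed_time_to_resolve_days is not None
    ]
    return sum(resolved) / len(resolved) if resolved else None

stories = [
    UserStory(True, "2022-04-05", 2.5),   # failure, fixed and released
    UserStory(True, None, None),          # failure still open: excluded
    UserStory(False, "2022-04-07", 9.0),  # regular story: excluded
]
print(mean_time_to_restore_days(stories))  # 2.5
```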

Advanced Analytics Dashboards

Overview

The Overview dashboard covers the four key DevOps metrics, which are:

  • Deployment Frequency: This chart shows how frequently user stories are promoted to production. Each promotion brings new features and fixes, so the higher the deployment frequency, the better.
    Leverage Copado Continuous Delivery to speed up your deployments and increase the deployment frequency.

  • Lead Time: The Lead Time chart displays the count of released user stories, and the line indicates the median lead time for the released stories. Lead time is calculated as the time from when a user story is ready to be promoted to when it is promoted to a production environment. Two date/time fields store these values: First Ready to Promote Time and First Time Promoted to Production. When the user story reaches production, a process builder updates the First Time Promoted to Production field. The Ready to Promote and Promote & Deploy checkboxes on the user story determine when a user story is ready to be promoted.

  • Average Change Fail Rate: This graph displays the percentage of promotions that cause some type of downtime.
    If your rate is very high, try releasing smaller changes more frequently. You can also implement automated regression tests for all critical functionality and improve your architecture so that small changes are safe and easy to make.

  • Average Recovery Time: This graph shows the count of business-disrupting failures as well as the amount of time it takes to fix or roll back these business-disrupting failures.
    Create a layout for bugs and add the Business Disruption Failure?, Found in Promotion, and Severity fields to your layout to track any failures. 

In addition to these four key metrics, you can find a Work Progress chart that displays the number of user stories deployed to production and those pending deployment (work delivered and work in transit). The Distribution of Work Delivered chart shows what percentage of bugs, user stories and investigations you have released over a period of time. These categories dynamically map to the record types you have created for the User Story object.
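
The grouping behind these two charts could be sketched like this, using illustrative record-type labels and a released/pending flag rather than the actual report columns:

```python
from collections import Counter

# Illustrative (record_type, released_to_production) pairs for user stories
stories = [
    ("User Story", True), ("User Story", True), ("Bug", True),
    ("Bug", False), ("Investigation", True), ("User Story", False),
]

# Work Progress: delivered vs. in transit
delivered = sum(1 for _, released in stories if released)
in_transit = len(stories) - delivered
print(f"Work delivered: {delivered}, work in transit: {in_transit}")  # 4, 2

# Distribution of Work Delivered: share of each record type among released stories
released_types = Counter(rt for rt, released in stories if released)
total_released = sum(released_types.values())
for record_type, count in released_types.items():
    print(f"{record_type}: {100 * count / total_released:.0f}%")
# User Story: 50%, Bug: 25%, Investigation: 25%
```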

Planning Dashboard

The Planning dashboard provides relevant information about your sprint. 

At the top of the dashboard, you can see three key KPIs that highlight: distribution of work, user story median lead time, and number of active sprints.  

Below the key metrics you can find a series of charts that display the following information:

  • Sprint Burn Up: In this chart you can see the burn-up rate for your user stories in each sprint. You are able to see the number of user stories committed on the first day of the sprint and the rate at which they are completed. This metric uses the Development Complete field to categorize a user story as completed for a given sprint (ensure that you have field history tracking enabled for this field).
  • Planned Work Over Time: This chart shows a wealth of information. You are able to see three categories of work for a given sprint (a categorization sketch follows this list): 
    • Initial commitment: How many user stories were initially committed to the sprint based on the start date. 
    • Added work: How many user stories were added to the sprint after the sprint start date. 
    • Carried over work: How many user stories were carried over from a previous sprint. 
       
  • Planned vs. Completed Stories: This chart compares the user stories that were planned for the sprint vs. the user stories that were completed. It also indicates whether a team has over-delivered or under-delivered:
    • Planned stories are stories that were committed on the first day of the sprint. 
    • Completed stories are stories that are development complete.
       
  • Distribution of Lead Time: This chart shows the distribution of lead time from all user stories that have been promoted to production. Lead time is the average time it takes for a work item to be completed plus the waiting time.
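
Assuming each story carries the date it was added to the sprint and a flag for whether it came from a previous sprint (both illustrative, not actual Copado fields), the three Planned Work Over Time buckets could be derived roughly like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SprintStory:
    # Illustrative attributes; the dashboard derives these from user story
    # field history, not from fields with these exact names.
    added_to_sprint_on: date
    was_in_previous_sprint: bool

def categorize(stories: list[SprintStory], sprint_start: date) -> dict[str, int]:
    """Split a sprint's stories into the three Planned Work Over Time buckets."""
    buckets = {"initial_commitment": 0, "added_work": 0, "carried_over": 0}
    for s in stories:
        if s.was_in_previous_sprint:
            buckets["carried_over"] += 1
        elif s.added_to_sprint_on <= sprint_start:
            buckets["initial_commitment"] += 1
        else:
            buckets["added_work"] += 1
    return buckets

sprint_start = date(2022, 4, 4)
stories = [
    SprintStory(date(2022, 4, 1), False),  # committed before the sprint started
    SprintStory(date(2022, 4, 4), False),  # committed on day one
    SprintStory(date(2022, 4, 8), False),  # added mid-sprint
    SprintStory(date(2022, 4, 4), True),   # carried over from the last sprint
]
print(categorize(stories, sprint_start))
# {'initial_commitment': 2, 'added_work': 1, 'carried_over': 1}
```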

Development Dashboard

The Development dashboard digs deeper into the work and the changes made during the development process. In this dashboard you can see a chart with the number of metadata items promoted (files), relevant information about the metadata categories that have been edited (data, business logic, user interface, infrastructure, other), and the number of promotions per file. You can drill down through these metrics and filter them to focus on a particular user story type (user story, investigation or bug) or metadata category. Additionally, you can filter this data by different user story fields: Project, Sprint, Epic, Team or Theme.

User Story Types: This donut chart shows the number of user stories per record type (bugs, investigations, etc.).

Number of Files Promoted: This graph shows the number of promotions over time together with the files included in each promotion.

Number of Promotions Per File: This graph displays the number of promotions per file. In the tree map, each square represents a single file. Hovering over a rectangle displays the file name, type and the number of promotions associated with that file. The larger the number of promotions, the larger the rectangle. This tree map also adjusts based on your selections in the metadata categories to the left. Select a specific category or metadata type from the left-hand side to view the number of promotions for that particular item.
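
The counting that drives the rectangle sizes could be sketched as follows; the promotion names and file names are made up for illustration:

```python
from collections import Counter

# Illustrative promotion -> promoted files mapping; in Copado this comes from
# the metadata items attached to each promotion's user stories.
promotions = {
    "P-001": ["AccountTrigger.cls", "Account.layout", "Opportunity.flow"],
    "P-002": ["AccountTrigger.cls", "Case.object"],
    "P-003": ["AccountTrigger.cls", "Opportunity.flow"],
}

# Each file's promotion count determines the size of its tree-map rectangle
promotions_per_file = Counter(
    file for files in promotions.values() for file in files
)
for file, count in promotions_per_file.most_common():
    print(f"{file}: promoted {count} time(s)")
# AccountTrigger.cls: promoted 3 time(s)
# Opportunity.flow: promoted 2 time(s)
# ...
```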

Metadata Categories: Here you can see the various categories that are associated with Salesforce’s metadata types, such as Business Logic, User Interface or Infrastructure. Click on any of these categories to display the subcategories and metadata types included in it.

Sub-Categories: As with categories, you can select a particular sub-category to view metrics pertaining to that specific category.

Types: To further drill down, select one of the metadata types to filter the tree map on the right. 

Deployment Dashboard

The Deployment dashboard, as its name indicates, shows everything you need to know about your deployments and promotions. The KPIs included in this dashboard show data from the past thirty days.

At the top of the dashboard you can see: the number of promotions, the lead time (days), the average deployment attempts per promotion and the average metadata items per promotion. 

The Deployment Complexity graph shows the complexity of your deployments and promotions. Each bubble in the chart represents a promotion. The color of the bubble corresponds to the number of deployment attempts related to the promotion: the darker the purple, the higher the number of attempts. The size of the bubble is based on the number of manual steps: the larger the bubble, the larger the number of manual steps. The Y axis indicates the number of metadata components. The data can be filtered by production environments.

The Deployment Attempts chart shows detailed information about your environments and associated promotions. The chart shows the unique combinations of source and destination environments and aggregates the number of promotions, deployments, failed attempts, and manual steps. The chart also displays an overall success rate per combination of environments based on the number of deployment attempts. The success rate can help determine where you may have problem areas in your deployment process.
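
The exact success-rate formula is not spelled out here, but a plausible reading is successful attempts divided by total attempts per source/destination pair, as in this sketch with made-up environment names:

```python
from collections import defaultdict

# Illustrative deployment attempts: (source, destination, succeeded)
attempts = [
    ("Dev1", "Integration", True),
    ("Dev1", "Integration", False),
    ("Integration", "UAT", True),
    ("UAT", "Production", True),
    ("UAT", "Production", True),
    ("UAT", "Production", False),
]

totals = defaultdict(lambda: [0, 0])  # (source, dest) -> [successes, attempts]
for source, dest, succeeded in attempts:
    totals[(source, dest)][1] += 1
    if succeeded:
        totals[(source, dest)][0] += 1

for (source, dest), (ok, total) in totals.items():
    print(f"{source} -> {dest}: {100 * ok / total:.0f}% success rate")
# Dev1 -> Integration: 50% success rate
# Integration -> UAT: 100% success rate
# UAT -> Production: 67% success rate
```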

