Three Business Agility Transformation OKRs to Measure Transformation Success

SAFe Business Agility

Objectives and key results (OKRs) are essential in guiding an organization through its Lean-Agile transformation toward business agility. But what exactly are OKRs, and how and why should leaders use them? In this episode, Vikas Kapila, SPCT at Enterprise Agility Consulting, shares three business agility transformation OKRs that leaders can use to accelerate adoption and outcomes.


Melissa and Vikas discuss elements including:

  • Leadership engagement
  • Four superpowers of the OKR framework
  • Tribal unity and agility
  • Organizational agility


TRANSCRIPT

Speaker 1:

Looking for the latest news, experiences, and answers to questions about SAFe? You’ve come to the right place. This podcast is for you: the SAFe community of practitioners, trainers, users, and everyone who engages with SAFe on a daily basis.

Melissa Reeve:

Welcome to the SAFe Business Agility Podcast recorded from our homes around the world. I’m Melissa Reeve, your host for today’s episode. Joining me today is Vikas Kapila, SPCT at Enterprise Agility Consulting. Thanks for joining me, Vikas. It’s great to have you back on the show.

Vikas Kapila:

Thank you, Melissa. I’m excited to be here again.

Melissa Reeve:

In this episode, Vikas will share three business agility transformation objectives and key results, or OKRs, that leaders can use to accelerate adoption and outcomes. Let’s get started. So Vikas, for our listeners who may not be familiar with OKRs, can you describe what they are and their purpose?

Vikas Kapila:

Yeah, objectives and key results, also called OKRs, are a goal-setting framework used by individuals. It can be used by teams, by complete tribes, or by whole organizations, right? Basically, to define measurable goals and track their outcomes. Now, it’s a pairing of two phrases. “Objective” refers to what is to be accomplished, right? It tries to define objectives that are important, concrete, action-oriented, and inspirational. On the other hand, the second phrase, “key results,” is about benchmarking and monitoring: how do we accomplish those objectives?

Speaker 1:

The origin of OKRs dates back to 1954 when Peter Drucker was the first to examine management as a separate responsibility. He also introduced management by objectives, or MBOs. In the 1970s, Andy Grove, the CEO of Intel, expanded on Drucker’s MBOs by adding the concept of key results, and OKRs were born. Yet the concept really took off in 1999 when John Doerr, who had learned the approach at Intel, introduced OKRs to Google founders Larry Page and Sergey Brin, who quickly adopted them. Today OKRs are still an integral part of Google’s culture and DNA. To learn more about OKR history, follow the link in the show notes for this episode at scaledagile.com/podcast.

Melissa Reeve:

So let’s shift now into the SAFe environment and for companies that are implementing SAFe, why is it important for leadership in those companies to establish OKRs?

Vikas Kapila:

One of the key elements, right? We are now at the 15th State of Agile report, which came out earlier this year. I’ve been following it closely since the seventh one, and every one of them had lack of leadership engagement as a key detriment to the success of Agile adoption, right? And when I say key, I mean that more than 40 percent of respondents, sometimes 41, sometimes 44, sometimes 46 percent, said that lack of leadership participation was a key reason the adoption didn’t sustain or accelerate. A great champion event would happen and then it would die. So from that perspective, it becomes so important for us to get leadership engagement, and what better way to get it than OKRs, right?

Melissa Reeve:

Yeah. In fact, we talk a lot about leadership engagement on this podcast, and it’s a constant source of conversation: how do we make sure that we’re not just getting leadership support, but truly getting that engagement? So talk to us about that.

Vikas Kapila:

And that’s why I think OKR is the right framework to help us, because OKRs as a framework bring four superpowers: focus, alignment, collaboration, and engagement. So if these are the four superpowers of OKR, we can use the framework to define the objective, even a very simple one: increase leadership engagement in our adoption journey, in our transformation. You could define three to five key results that show you are on track, and all of them have to be achieved for the objective to be true. So we put those out very simply: “We see our leadership at, at least, 75 percent of the system demos. We see our leadership in PI planning 100 percent, or 90 percent,” whichever numbers you pre-agree on as the center that’s setting up the key results. And when you achieve it, you can see progression, you can see progress, and you can have a conversation about it: “Hey, we met it. How did it feel? What was right?” Or, “Hey, we didn’t meet it. What could we have done differently?”
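To make the mechanics concrete, here is a minimal sketch of how such an OKR might be tracked, assuming illustrative targets (75 percent of system demos, 90 percent of PI planning) and hypothetical measured values. The all-or-nothing check mirrors the point above: every key result must be achieved for the objective to be true.

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    target: float    # pre-agreed threshold, e.g. 0.75 for 75 percent
    measured: float  # observed value for the quarter or PI

    def met(self) -> bool:
        return self.measured >= self.target

@dataclass
class Objective:
    description: str
    key_results: list[KeyResult]

    def on_track(self) -> bool:
        # All key results must be achieved for the objective to be true.
        return all(kr.met() for kr in self.key_results)

# Hypothetical leadership-engagement OKR with pre-agreed targets
okr = Objective(
    "Increase leadership engagement in our transformation",
    [
        KeyResult("Leadership attends system demos", target=0.75, measured=0.80),
        KeyResult("Leadership attends PI planning", target=0.90, measured=0.85),
    ],
)
print(okr.on_track())  # False: prompts the "what could we do differently?" conversation
```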

Melissa Reeve:

Vikas, that’s great. And I can see how the four superpowers really help leaders lean in and get that engagement. What are some other reasons why it’s important for leadership to establish OKRs?

Vikas Kapila:

So I gave you one example about leadership engagement, but the other important reason leadership wants to establish these OKRs is to understand and get a sense of how they are evolving their burning platform, the true reason the transformation is happening. Are they able to measure progress on the evolution of the burning platform? Are they actually making the platform healthier? Are they making the system less urgent? Is that happening? The OKRs can help set that objective and then identify the key results that you need to march toward, to achieve, to accomplish to get there. Second thing: sometimes when we talk to leaders, the first thing that comes up is, “We need a business case to do a transformation.” And a lot of times leaders are able to put that business case together and get the funding.

Vikas Kapila:

But then a lot of times it gets forgotten, because next year we have to rewrite it. So what I’ve started doing is having that conversation: “Hey, let’s justify the investment in a transformation, and let’s do it incrementally. Even if we have an allocated budget, sometimes millions, sometimes multimillion dollars, let’s make it incremental to see that we are investing in the right way. We allocate the funds quarterly from that budget, with built-in key results that we expect for every allocation of funds toward that investment in transformation, so that it is intentional rather than only organic.” This enables leaders to have practical ways to accelerate the adoption of SAFe, the adoption of this journey, because now they start seeing quick results and celebrating them sooner rather than later. And that is really important and helpful in the system.
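As a rough illustration of the incremental-funding idea, the sketch below releases each quarter’s tranche from the approved budget only while the prior quarter’s pre-agreed key results were met. The figures and the all-or-nothing gate are hypothetical, not a prescribed SAFe practice.

```python
# Hypothetical transformation budget, released quarter by quarter.
annual_budget = 4_000_000
tranche = annual_budget / 4

# Fraction of pre-agreed key results achieved each quarter (illustrative).
kr_achievement = [1.0, 0.8, 1.0]

released = tranche  # Q1 is funded upfront to start the work
for quarter, achieved in enumerate(kr_achievement, start=1):
    if achieved >= 1.0:
        released += tranche  # key results met: fund the next quarter
        print(f"Q{quarter}: key results met, next tranche released")
    else:
        print(f"Q{quarter}: only {achieved:.0%} of key results met, pause and replan")
        break

print(f"Released so far: ${released:,.0f}")
```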

Melissa Reeve:

If we as leaders set these OKRs upfront, we’re better able to measure how successful we are in our SAFe transformation. That’s essentially what I’m hearing you say.

Vikas Kapila:

Right. See? You said it so much better than I could.

Melissa Reeve:

Alright. So now that we know what OKRs are, let’s talk about OKRs that are specific to SAFe transformations. And I think you have three OKRs that you advocate for: one for leaders, one about organizational collaboration, and then there’s a third one. Can you describe those for our listeners?

Vikas Kapila:

Absolutely. The first one you said, yeah, leadership engagement, the one I used as an example too. The other is organizational collaboration; some people also call it organizational agility: how do you build that? And the third one is tribal agility enablement, because a key element of success in achieving agility is teamwork. And to me, teamwork is more like a tribal behavior, the tribal mindset: how do we build that? So having an OKR says that we are intentionally trying to evolve and grow that mindset in the system, so: tribal agility enablement.

Melissa Reeve:

So we talked a little bit about that leadership engagement and ways that you could measure that, that OKR. I think earlier you talked about how often they’re participating in SAFe events, like the system demo and PI planning. What’s another way that you could set up a leadership engagement OKR?

Vikas Kapila:

Three things that I’d look at to add to that. One of them is: are they willing to learn the new way of working? Are they willing to know the way and then lead by example? That’s one key objective I would set up: leaders are willing to learn the new way. Another one: are they being leaders of the change, adopting the new way of leadership and management in the digital age? Because the patterns and behaviors that leaders or managers had in the age of oil and mass production are different from the behaviors and patterns we expect leaders to have to be effective now. So having an objective around that is amazing. Now, what really becomes the foundation for these is the third one: are they embracing Lean-Agile values, mindset, and principles? The things that they’re asking everybody else in the organization to adopt and align to; are they doing that themselves as well? Are they championing those values? Are they championing those principles?

Melissa Reeve:

We know we need this leadership engagement, and you’ve given us some really concrete examples of how to set OKRs around it. Two questions for you. One is: who in the organization tracks those leadership engagement OKRs? And then, who are we calling leaders in this context?

Vikas Kapila:

So, the first one: who is tracking this? Think about this. We have the LACE, the Lean-Agile Center of Excellence, in a SAFe implementation. And the LACE is, at this point, responsible for doing and facilitating self-assessments of team and technical agility, of leadership agility, and all of that. What my observations of success have been is that this set of OKRs is actually set up to self-assess the LACE itself, the leaders that are in the LACE, the performance of the LACE, because at the end of the day it’s helping us justify the investment in transformation. So this is basically a self-assessment that the LACE team is doing on their own performance.

Vikas Kapila:

The second question, if I remember right, is who are leaders in this context? I’ll try to avoid titles, because every organization has its own labels and titles, and instead correlate this to the roles as we call them in SAFe. For me, at the team level, the scrum masters and product owners are leaders. At the Essential SAFe or Agile Release Train level, the RTE, the product managers, and the system architects are leaders, and the business owners are leaders of the train. For the solution train, the solution managers, the solution architects, and the solution train engineers are leaders. Similarly, the participants at the portfolio level, the lines-of-business heads and the portfolio managers who attend the portfolio sync, are all leaders. So, from different perspectives in the organization, with different roles, we have leaders, and we want each one of these leaders to be engaging actively to have an effective transformation.

Melissa Reeve:

So you also mentioned tribal unity as an OKR. So a couple of questions there too. I’m trying to manage your WIP limits. First, is that a nice shout-out to Em Campbell-Pretty and her book Tribal Unity?

Vikas Kapila:

Yes. Tribal Unity, essentially, is about bringing that team spirit, bringing that oneness, right? As humans, we have done a great job over the years of learning to think about the team before self. Especially to bring agility at scale and have effective business agility, we need to think about the train before the team, and the portfolio before the train, right? So that’s the transcendence of that oneness, that tribal feeling, from being a team, to being a train, to being a portfolio. That needs to be built in, and that’s the role OKRs play to help enable it in the system.

Melissa Reeve:

And so what are some examples of OKRs that you could set to measure this degree of tribal agility or tribal unity?

Vikas Kapila:

The key elements within that drive are better transparency and better alignment. What helps build that transparency and alignment is trust in the system, right? And what helps build trust is enabling an environment where people feel safe to be vulnerable. So these key parameters need to come together to answer, “How are we doing as a system at encouraging experiments?” Where people feel safe, people feel okay to run an experiment, hoping it’ll be a success, but if it’s a failure, they’re excited because they got new learnings rather than feeling worried: “Oh, my story got rejected.” So that element of employee engagement, success, and employee confidence is the key measure here. What I ask teams to set up are surveys where we’re talking about inter-team collaboration, about transparency and alignment, about what the key behaviors are. And there is a set of questions that can help us see how it’s doing in the system.

Melissa Reeve:

Yeah. So you’re really looking to measure how well are we collaborating? Are we being transparent? What results does that lead to? And putting an anchor around that, so you can ultimately measure that result of, are we achieving tribal agility?

Vikas Kapila:

Right. If I were to put it into four key results that I’m looking for: one, at the team level, are the teams truly self-managing and self-organizing, feeling confident making decisions at their level? Second, within the train, is inter-team collaboration happening? Are we swarming together as teams? And what tells me, sometimes I do the surveys, but the program board also shows me, when I look at PI planning and the ART sync, whether that inter-team collaboration is happening or not. Next are the conversations about delivery: are they customer-centric, or are we still in the voice of the system? We want the customer-centric conversation, because we really need that swarming of teams to happen to solve the problems, since we are coming at the problem statement from different aspects. So these things can be your key results, the key outcomes to observe and monitor, to help with this particular objective.

Melissa Reeve:

So, we’ve talked about the leadership engagement OKRs. We’ve talked about tribal agility OKRs. Let’s talk about your third OKR that you recommend, which is that organizational collaboration or organizational agility. Talk to us about that.

Vikas Kapila:

Yeah. Thank you. It’s a fractal scale-up of what we were just talking about in tribal agility, but the essence is now beyond just my delivery teams. I want my operations team, my marketing team, my legal team, my audit team, all of the different functions that actually help us achieve the business solution, not just the hard product solution, that whole end-to-end element, and that needs intra-organization collaboration. Now, look at the hierarchical structures our organizations have: the delivery team, the operations team, the legal team, and so on. When I start enabling collaboration across them, we get a more robust definition of ready upfront, a more robust definition of done before we start, and good, effective feedback loops running incrementally along the journey, so that I can enable lean quality management, lean QMS, rather than big-batch quality management. Those are the key results I’m looking for from organizational collaboration as the objective: essentially reducing handoffs and enabling more collaboration across hierarchies.

Melissa Reeve:

So you’ve given our listeners three really great OKRs, again around leadership engagement, around tribal agility, around organizational collaboration. When we were preparing for the show, you hinted at a potential fourth OKR that you would advocate for. What was that?

Vikas Kapila:

That’s always been an objective in my mind for every transformation we’ve worked with thus far, whether it’s a digital transformation, a business transformation, or just bringing Agile in: the project-to-product mindset. And this is the same conversation I had with Dean earlier when he introduced principle number 10, organize around value. We always said it is implicit; it is the goal of the business agility transformation to organize around value, but we wanted to make it explicit. In that same spirit, I’ve been thinking of making this fourth objective more explicit as project-to-product mindset. The intent being that we need to do a handful of things, six to seven things, to make this truly possible in an enterprise, a Fortune 100 or Fortune 5000 enterprise.

Speaker 1:

In his book, Project to Product, Mik Kersten introduced the flow framework, a new way to see, measure, and manage the flow of value through an organization. Read more by following the link in the show notes for this episode at scaledagile.com/podcast.

Vikas Kapila:

And so explicitly calling those things out as respective objectives, and then calling out the key results that I need in my journey, quarter by quarter, to achieve success in that. Two experiments I’ve done in the last two years have been very well received and have given me the confidence to start saying, “Hey, Vikas, you should probably start saying there are four transformation OKRs rather than three.” So yeah, this one has been project-to-product mindset. Now, that being the objective, is it OK for me to take a minute to talk about the key results that I focus on, Melissa?

Vikas Kapila:

So one focuses a lot of the conversations on how you fund your milestones, right? Are they annual budget cycles versus funding the value stream? Those evolutions. When I talk about big batch versus a cadence-based, small-batch life cycle, that’s another key result: how do I get there, and how am I relentlessly improving that cadence? The third one is measuring business outcomes on a regular basis in those small life cycles rather than only looking for big outputs at the end. So not phase-gate-style milestones, but objective, working systems. And how am I managing risk? Am I carrying the risk until the end with a hope-and-pray strategy, or am I reducing the risk I carry forward incrementally as I integrate my solution and build my outcomes? So the OKRs also start integrating teamwork, the sequencing of work, and transparency and availability. The objective is the project-to-product mindset, and the bullets I just talked about are the key results that objective has. Yeah.

Melissa Reeve:

You’re introducing these three, potentially four OKRs. In your opinion, how widespread is the adoption of using these types of OKRs to track the transformation and potentially accelerate it? Is this something that you see widely used in the field or is this practice that you’re setting out something that you’ve invented and you would like to advocate for?

Vikas Kapila:

I’m humbled by the thought of you even thinking I’ve invented something. So thank you for that. I stand on the shoulders of many before me. I have adapted the OKR framework. And the other idea I’ve adapted here, whether it comes from Implementing SAFe or from John Kotter’s Leading Change, is “establish a sense of urgency before you move.” So we’ve done that: I have adapted OKRs as a framework to set that sense of urgency and make it explicit. I’ve tried to use the superpowers of OKRs, the effectiveness of OKRs, to make it more explicit. In a way, if you think about what innovation is, bringing two great ideas together to create another, better idea, then in that sense, yes, and I would advocate for it because I’ve seen a lot of success in the five years I’ve been using this. It has been very helpful in the transformations I’ve worked with, so I have enough proof. When I talk to other SPCTs, they do very similar things. I’ve not heard anyone explicitly call out that they use OKRs, but they do establish that sense of urgency and monitor it on a regular basis.

Melissa Reeve:

Yeah. So this podcast is really geared toward our listeners who may be struggling to establish that sense of urgency, and to establish the measurements that tell you when you’re making headway in these different areas. Vikas has put forth several ways to help create that sense of urgency in order to accelerate adoption and outcomes. Thanks, Vikas, for sharing how important these OKRs are to a successful SAFe transformation and the journey toward business agility.

Vikas Kapila:

Thanks, Melissa. Yeah, it was great to be here. It was a pleasure. Thanks for having me again.

Melissa Reeve:

Thanks for listening to our show today. Be sure to check out the show notes and revisit past topics at scaledagile.com/podcast.

Speaker 1:

Relentless improvement is in our DNA and we welcome your input on how we can improve the show. Drop us a line at podcast@scaledagile.com.

Host: Melissa Reeve


Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe), and its mission. Connect with Melissa on LinkedIn.

Guest: Vikas Kapila

An SPCT5, Vikas is CEO and Curator at Enterprise Agility Consulting. He focuses on enabling individuals to transform into high-performing teams and realize how the team is greater than the sum of its individuals. With more than 20 years in solutions delivery, consulting, and coaching, Vikas has a proven track record of successfully delivering complex solutions and transformations with multidisciplinary and multicultural teams. Learn more about Vikas on LinkedIn.

Measure What Matters for Business Agility with SAFe Metrics

SAFe Business Agility

In direct response to customer feedback, the Framework team recently released a significant update to its guidance on SAFe Metrics to help enterprises measure business agility more effectively. In this episode, Andrew Sales, a member of the Framework team and a SAFe Fellow, joins us to explain what’s changed, why these changes are important, and where enterprises can find key resources to help them apply the new guidance.


Learn About the New SAFe Metrics


In their discussion, Andrew and Melissa dive into details including:

  • The story behind the changes
  • How the measurement model works
  • Examples of each measurement domain
  • What enterprises can do with the data


Hosted by: Melissa Reeve


Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe), and its mission.

Guest: Andrew Sales


Andrew Sales is a SAFe Fellow, principal consultant, and SPC at Scaled Agile. He regularly contributes to the Framework, and has many years of experience in delivering SAFe implementations in a wide range of industries. Andrew is also passionate about continuous improvement and supporting teams and leaders in improving outcomes for their business and customers. Connect with Andrew on LinkedIn.

Practical Tips for the New RTE

SAFe Business Agility

Release Train Engineers (RTEs) play an important role in aligning the organization during PI Planning and maintaining that alignment throughout the PI. In this episode, we talk to Kimberly Lejonö and Carl Starendal, both former RTEs and experienced Agile coaches, who share their tips for RTEs just getting started in the role. And we’ll dive into some questions we hear from RTEs in the field about inspiring change across teams and Agile Release Trains (ARTs) and managing the flow of value.


Topics that Kimberly and Carl touch on include:

  • PI Planning preparation and execution
  • Maintaining alignment during the PI
  • Supporting cultural change
  • Metrics, and what not to measure

Hosted by: Melissa Reeve


Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe) and its mission.

Guest: Kimberly Lejonö


Tapping into her background working as an RTE, project leader, and scrum master, Kimberly brings a high-energy and curious mindset to effect change in others. She loves connecting with the people around her and unlocking their potential to help organizations move in their desired direction. Connect with Kimberly on LinkedIn.

Guest: Carl Starendal


With a background in game development and a decade of hands-on experience at the center of global Lean-Agile transformations across multiple industries, Carl co-founded We Are Movement, an Agile and Lean advisory team based in Stockholm. A highly regarded trainer, advisor, and facilitator, he is a passionate advocate and resource for organizations throughout all stages of the Agile journey. Carl is recognized internationally as a speaker on leadership, Agile, and product development. Find Carl on LinkedIn.

AgilityHealth Insights: What We Learned from Teams to Improve Performance

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

At AgilityHealth®, our team has always believed there’s a correlation between qualitative metrics (defined by maturity) and quantitative metrics (defined by performance or flow). A few years ago, we moved to gather both qualitative and quantitative data. Once we felt we had a sufficient amount to explore, we partnered with the University of Nebraska’s Center for Applied Psychological Services to review the data through our AgilityHealth platform. The main question we wanted to answer was: What are the top competencies driving teams to higher performance? 

Before we jump into the data, let’s start by reviewing what metrics make up “performance.” Below are the five quantitative metrics that form the Performance Dimension within the TeamHealth® radar: 

  • Time-to-market 
  • Quality
  • Predictable Delivery
  • Responsiveness (cycle time)
  • Value Delivered

During the team assessment, we ask the team and the product owner about their happiness and their confidence in their ability to meet the current goals. We consider these leading indicators for performance, so we were curious to see what drives the qualitative metrics of Confidence and Happiness as well. 

Methodology 

We analyzed both quantitative and qualitative data from teams surveyed between November 2018 and April 2021: 146 companies representing a total of 4,616 teams (some of which took the assessment more than once), which equates to more than 46,000 individual survey responses.

We used stepwise regression to explore and identify the top five drivers for each outcome. Stepwise regression is one approach to building a model that identifies the most predictive set of competencies for a desired outcome.
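As an illustration of the method, here is a minimal sketch of forward stepwise selection using scikit-learn’s SequentialFeatureSelector, a close analogue of classical stepwise regression. The data, competency names, and coefficients are hypothetical stand-ins, not AgilityHealth’s actual dataset or pipeline.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 500 team assessments, 12 competency scores (1-5)
competencies = [f"competency_{i}" for i in range(12)]
X = rng.uniform(1, 5, size=(500, 12))
# Outcome (e.g., a performance metric) driven by a few competencies plus noise
y = 0.6 * X[:, 0] + 0.3 * X[:, 3] + 0.2 * X[:, 7] + rng.normal(0, 0.5, 500)

# Forward stepwise selection: greedily add the feature that most improves
# the fit, stopping at five (mirroring the "top five drivers").
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=5, direction="forward"
)
selector.fit(X, y)

# Refit on the selected features and report each driver's weight
model = LinearRegression().fit(X[:, selector.get_support()], y)
selected = [c for c, keep in zip(competencies, selector.get_support()) if keep]
for name, weight in zip(selected, model.coef_):
    print(f"{name}: weight {weight:.2f}")
```

Greedily adding one feature at a time and stopping at five mirrors the idea of ranking the top five drivers and reading off their regression weights.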

The results of our analysis identified the top five input drivers for each of the performance metrics in the TeamHealth assessment, along with the corresponding “weight” of each driver. We also uncovered the top five drivers of Confidence and Happiness for teams and product owners. These drivers are the best predictors for the corresponding metrics. All drivers are statistically significant, and the drivers are rank-ordered for each metric.

By focusing on increasing these top five predictors, teams should see the highest gain on their performance metrics. 

Results

After analyzing the top drivers for each of the performance metrics, we noticed that a few kept showing up as repeat drivers across the performance metrics.

[Table: top five input drivers for each performance metric, with their weights]

When analyzing the drivers for Confidence and Happiness, we found these additional predictors:

[Table: top five drivers of Confidence and Happiness for teams and product owners]

We know from experience that shorter iterations, better planning and estimating, and T-shaped skills all lead to better performance—but we now have data to prove it. It was a welcome surprise to see self-organization and creativity take center stage in our analysis. We’ve always coached managers to empower teams to solve problems, but for the first time, we have the data to back it up.

Recommendations

Pulling these patterns together, it’s clear that if a team wants to improve its performance efficiently, it should focus on weekly iterations, T-shaped team members, effective planning and estimating, enabling creativity and self-organization, role clarity, and right-sizing and skilling. Teams that invested in these drivers saw a 37 percent performance improvement over teams that didn’t. So when in doubt, start here!

We’re excited to share that you can now see the drivers for each competency inside the AgilityHealth platform. We hope it helps you make informed decisions about where to invest your time and effort to improve your performance.

Visit the AgilityHealth page on the SAFe® Community Platform to learn more about these assessment tools and get started!

About Sally


Sally is a thought leader in the Agile and business agility space. She’s passionate about accelerating the enterprise business agility journey by measuring what matters at every level and building strong leaders and strong teams. She is an executive advisor to many Fortune 500 companies and a frequent keynote speaker. Learn more about AgilityHealth at https://www.agilityhealthradar.com.


How Do We Measure Feelings?

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

As business environments feature increasing rates of change and uncertainty, agile ways of working are becoming the dominant way of operating around the globe. The reason for this dominance is not that agile is necessarily the “best” way of working (agile, by definition, embraces the idea that you don’t know what you don’t know) but because businesses have found agile better-suited to addressing today’s challenges. Detailed three-year plans, extensive Gantt charts, and work breakdown structures simply have less relevance in today’s world. Agile, with its emphasis on fast learning and experimentation, has proven itself to be more appropriate for today’s unpredictable business environment.

Agility Requires Data You Can Trust

Whereas a plan-driven approach requires an extensive analysis phase, today’s context demands frequent access to high-quality data and information to facilitate quick course correction and validation. One of these critical sources of data is targeted assessments. The purpose of any assessment is to gather information. And the quality of the information collected is a direct result of the quality of the assessment. 

Think of an assessment as a measuring tool. If we were studying a physical object, we might use measuring devices to assess its length, height, mass, and so on. Scientists have developed sophisticated definitions of many of these physical characteristics so we can have a shared understanding of them.

However, people—especially groups of people—are not quite so straightforward to measure: particularly if we’re talking about their attitudes and feelings. It’s not really possible to directly measure concepts like culture and teamwork in the same way we can measure mass or length. Instead, we have to look to the discipline of psychometrics—the field of study dedicated to the construction and validation of assessment instruments—to assist us in measuring these complex topics.

Survey researchers often refer to an assessment or questionnaire as an “instrument,” because the purpose is to measure. We measure to learn, and we learn to apply our knowledge in pursuit of improvement. This is one reason why assessment is such an integral part of the educational system. Properly designed, assessments can be a powerful tool to help us validate our approach, understand our strengths, and identify areas of opportunity.

Ensuring Quality is Built into the Assessment

Since meaningful information is so critical to fast inspection and adaptation, it’s important to use high-quality assessments. After all, if we’re going to leverage insights from the assessments to inform our strategy and guide our decisions, we need to be confident we can trust the data.

How do we know that an assessment instrument is measuring what it purports to measure? We must take care when designing the assessment tool, and then use data to provide evidence of both its validity (accuracy) and reliability (precision). Here’s how we ensure quality is built into our assessment.
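As one concrete example of a reliability check, the sketch below computes Cronbach’s alpha, a widely used internal-consistency statistic, on hypothetical 1-to-5 survey responses. The post doesn’t specify which statistics the teams use, so this is purely illustrative.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: respondents x items matrix of survey answers."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 responses: 200 respondents, 6 items on one construct
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(200, 1))  # each respondent's underlying attitude
responses = np.clip(np.rint(latent + rng.normal(0, 0.7, size=(200, 6))), 1, 5)

print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.7+ is often treated as acceptable
```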

Step 1: Prototype

All survey instrument development starts with a measurement framework. When Comparative Agility partnered with SAFe® to design the new Business Agility assessment, subject matter experts leveraged their experience from the original Business Agility survey to explore enhancements. 

The original Business Agility survey had generated a variety of important insights and proved to be incredibly popular among SAFe customers. But one area of potential improvement was the language used in the assessment itself. Customers wanted to leverage a proven SAFe survey to understand an organization’s current state, without first requiring the organization to have gone through comprehensive training. With the former Business Agility survey, this proved difficult, since the survey instrument often referred to SAFe-specific topics that many had not been exposed to yet.

To address this issue, subject matter experts (SPCTs, SAFe Fellows) teamed up with data scientists from Comparative Agility to craft SAFe survey items that would be meaningful at the start of a SAFe implementation, while avoiding terms that would require prior knowledge. This work resulted in a prototype survey or “minimum viable product.” 

Step 2: Test and Validate

Once the new Business Agility survey instrument was developed, we released it to beta and began to collect data. Several people in the SPCT community were asked to participate in a pilot, and in follow-up interviews respondents were asked about their experience with the survey. Together with respondents, the survey design team, and additional subject matter experts, we examined the results. (We also received external feedback from a Gartner researcher to help improve the nomenclature of some of the survey items.) Only once the team was satisfied with the reliability and validity of the beta survey instrument would it be ready for production.

Step 3: Deploy and Monitor

Even after the Business Agility survey instrument reaches the production phase, the data science team at Comparative Agility and Scaled Agile continuously monitor the assessment for data consistency. A rigorous change management process ensures that any tweaks made to survey language, post-deployment, are tested to ensure they don’t negatively impact the accuracy.

Integrating Flow and Outcomes

Although validated assessments are a critical component of a data-driven approach to continuous improvement, they’re not sufficient. To gain a holistic perspective and complete the feedback loop, it’s also important to measure Flow and Outcomes.

Flow

Flow metrics express how efficient an organization is at delivering value. When operating in complex environments characterized by uncertainty and volatility, flow metrics help organizations measure performance across the end-to-end value stream, so you can identify impediments to agility. A more comprehensive overview of Flow metrics can be found in the SAFe knowledge article, Metrics.

Outcomes

Flow metrics may help us deliver quickly and effectively, but without understanding whether we’re delivering value to our customers, we risk simply “delivering crap faster.” Outcome metrics address this challenge by ensuring that we’re creating meaningful value for the end customer and delivering business benefits. Examples of outcome metrics include revenue impact, customer retention, NPS scores, and Mean Time to Resolution (MTTR).

Embracing a Culture of Data-Driven, Continuous Improvement

It’s important to note that although data and insights help inform our strategy and guide our decisions, to make change stick and ultimately to drive sustainable cultural change, we need to appreciate that data is a means to an end.

That is, data—even though it’s validated, statistically significant, and of high quality—should be viewed not as a source of answers, but rather as a means to ask better questions and uncover new insights in our interactions with people. By having data guide us in our conversations, interactions, and how we define hypotheses, we can drive a culture of inquiry and continuous improvement. 

Just like when a survey helps us better understand how we feel, the assessment provides us with an opportunity to interact in a more meaningful way and increase our understanding. The data itself is not the goal but a way to help us learn faster, adapt quicker, and remove impediments to agility.

Start Improving with Your Own Data

As 17 software industry professionals noted some twenty years ago at a resort in Snowbird, Utah, becoming more agile is about “individuals and interactions over processes and tools.” 

To start your own journey of data-driven, continuous improvement today, activate your free Comparative Agility account in the Measure & Grow area of the SAFe Community Platform.

About Matthew


Matthew Haubrich is the Director of Data Science at Comparative Agility. Passionate about discovering the story behind the data, Matt has more than 25 years of experience in data analytics, survey research, and assessment design. Matt is a frequent speaker at numerous national and international conferences and brings a broad perspective of analytics from both public and private sectors.


Honest Assessments Achieve Real Insights

In this post, I share my experience of running a series of Measure and Grow assessments at a government agency in the UK I’m working with—including the experiments that we decided to run and our learnings during the SAFe transformation process.

The last year has been a voyage of discovery for all of us at Radtac. First, we had to figure out how to deliver training online and still make it an immersive learning experience. Then, we needed to figure out how to do PI Planning online with completely dispersed teams. Once that was sorted, we entered a whole new world of ongoing, remote consulting that included how to run effective Measure and Grow assessments.

The agency has already established and runs 15 Agile Release Trains (ARTs). We agreed that we wouldn’t run assessments across all 15 ARTs because we wanted to start small and test the process first. We therefore picked four ARTs to pilot the assessments, undertaking only the Team and Technical Agility and Agile Product Delivery assessments.

Pre-assessment Details

What was really important was that each ART we had selected had an agility assessment pre-briefing where we set the context with the following key messages:

  1. This is NOT a competition between the ARTs to see who had the best assessment.
  2. The assessments will support the LACE in identifying the strengths and development areas across the ARTs.
  3. The results will be presented to leadership in an aggregated form. Each ART will see only their results; no individual ART results will be shared with other ARTs.
  4. The results will identify where leadership can remove impediments that the teams face.
  5. We need an honest assessment to achieve real insight into where leadership and the LACE can help the teams.

In addition, prior to the assessments, we asked the ARTs to:

  1. Briefly review the assessment questions.
  2. Prioritise attendance for core team members, representing a cross-section of the team.

Conducting the Assessment

The assessment was facilitated by external consultants to provide some challenge to the responses. We allotted 120 minutes for both the Team and Technical Agility and Agile Product Delivery assessments, but most ARTs completed them within 90 minutes. We used Microsoft Teams as our communication tool and Mentimeter.com (Menti) to poll the responses.

Each Menti page had five to six questions that team members were asked to score on a scale of 1 to 5, with 1 being false, 3 being neither false nor true, and 5 being true. To avoid groupthink, we didn’t show the results until all members had scored all the questions. Because Menti shows the distribution of scores, where there was a range in the scoring we explored the extremes, asking team members to explain why they thought it was a 1 while others thought it was a 5. On the rare occasion that there was a misunderstanding, we ran the poll again for that set of questions.
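A minimal sketch of that “explore the extremes” step, assuming poll results have been exported as per-question score lists (the questions and scores here are hypothetical; Menti itself displays the distributions):

```python
# Flag questions whose 1-5 scores span the extremes, so the facilitator
# can ask the 1s and the 5s to explain their reasoning.
polls = {  # hypothetical export of one Menti page
    "Teams integrate their work at least every iteration": [1, 2, 5, 5, 4, 1],
    "Stories are sized to be finished within an iteration": [4, 4, 5, 4, 3, 4],
}

for question, scores in polls.items():
    spread = max(scores) - min(scores)
    if spread >= 3:  # both low and high scorers are present
        print(f"Discuss: {question!r} (scores {sorted(scores)})")
```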

[Image: some results from the Team and Technical Agility poll]

What we found after the first assessment was that there was still a lot of SAFe® terminology that people didn’t understand. (Based on this and similar feedback, Scaled Agile recently updated its Business Agility assessment with simpler, clearer terminology. This is helpful for organizations that want to use it before everyone has been trained, or even before they’ve decided to adopt SAFe.) So, for the next assessment, we created a glossary of definitions, and before each set of questions was scored, we reminded the team of the key terminology definitions.

The other learning was that for some of the questions, team members had no relevant experience and therefore scored a 1 (false), which distorted the assessment. Going forward, we asked team members to skip a question if they had no experience of it. We also took a short break between the assessments. And of course, no workshop would be complete without a feedback session at the end, which helped us improve each time we ran the assessments.

Here is a quote from one of the ARTs:

“As a group, we found the Agile Assessment a really useful exercise to take part in. Ultimately, it’s given our supporting team areas to focus on and allowed us to pinpoint areas where we can drive improvements. The distributed scores for each question are where we saw a great deal of value and highlighted differences in opinion between roles. This was made more impactful by having a split of engineers and supporting team roles in the session. The main challenge we had about the session was how we interpreted the questions differently. To overcome this, we had a discussion about each question before starting the scoring, and although this made the process a little longer, it was valuable in ensuring we all had the same understanding.”

Post-assessment Findings

We shared each ART’s individual results with its team members so that they could consider what they, as an ART, could improve themselves. As a LACE, we aggregated the results and looked for trends across the four ARTs. Here’s what we presented to the leadership team:

  1. Observations—what did we see across the ARTs?
  2. Insights—what are the consequences of these observations?
  3. Proposed actions—what do we need to do as a LACE and leadership team? We used the Growth Recommendations to provide some inspiration for the actions.

We then made a commitment to the teams that we would provide feedback from the leadership presentations.

Next Steps

We need to run the assessments across the other 11 ARTs and then repeat the assessments every two to three Program Increments.

You can get started with Measure and Grow, including the updated Business Agility assessment and tools on the SAFe® Community Platform.

About Darren


Darren is a director at Radtac, a global agile consulting business based in London that was recently acquired by Cprime. As an SPCT and SAFe® Fellow, Darren is an active agile practitioner and consultant who frequently delivers certified SAFe courses. Darren also serves as treasurer of BCS Kent Branch and co-authored the BCS book, Agile Foundations—Principles, Practices and Frameworks.


Measuring Agile Maturity with Assessments

Stephen Gristock is a PMO Leader and Lean Agile Advisory Consultant for Eliassen Group, a Scaled Agile Partner. In this blog, he explores both the rationale and potential approaches for assessing levels of Agility within an organization.

A Quick Preamble


For every organization that embarks upon it, the road to Agile adoption can be long and fraught with challenges. Depending on the scope of the effort, it can be run as a slow-burn initiative or a more frenetic and rapid attempt to change the way we work. Either way, like any real journey, unless you know where you’re starting from, you can’t really be sure where you’re going.

Unfortunately, it’s also true that we see many organizations go through multiple attempts to “be Agile” and fail. Often, this is caused by a lack of understanding of the current state or a conviction that “we can get Agile out of a box.” This is where an Agile Assessment can really help, by providing a new baseline that can act as a starting point for Agile planning or even just provide sufficient information to adjust our course.

What’s in a Word?

We often hear the refrain that “words matter.” Clearly, that is true. But sometimes humans have a tendency to overcomplicate matters by relabeling things that they aren’t comfortable with. One example of this within the Agile community is our reluctance to use the term “Assessment.” To many Agilists, this simple word has a negative connotation. As a result, we often see alternative phrases used, such as “Discovery,” “Health-check,” or “Review.” Perhaps it’s the uncomfortable proximity to the word “Audit” that sends shivers down our spines! Regardless, the Merriam-Webster dictionary defines “assessment” as:

“the act of making a judgment about something”

What’s so negative about that? Isn’t that exactly what we’re striving to do? By providing a snapshot of how the existing organization compares against an industry-standard Agile framework, an Assessment can provide valuable insight into what is working well, and what needs to change.

The Influence of the Agile Manifesto

When the Agile movement was in its infancy, thought leaders sought to encapsulate the key traits of true agility within the Agile Manifesto. One of the principles behind the manifesto emphasizes regular reflection:

“At regular intervals, the Team reflects on how to become more effective, then tunes and adjusts its behavior accordingly”

Of course, this is key to driving a persistent focus on improvement. In Scrum this most obviously manifests itself in the Retrospective event. But improvement should span all our activities. If used appropriately, an Agile Assessment may be a very effective way of providing us with a platform to identify broad sets of opportunities and improvements.

Establishing a Frame of Reference

Just like Agile Transformations themselves, all Assessments need to start with a frame of reference upon which to shape the associated steps of scoping, exploration, analysis, and findings. Otherwise, the whole endeavor is likely to reflect the subjective views and perspectives of the Assessor(s), rather than a representation of the organization’s maturity against a collection of proven best practices. We need to ensure that our Assessments leverage an accepted framework against which to measure the target organization. The selected framework then provides a common set of concepts, practices, roles, and terminology that everyone within the organization understands. Simply put, we need a benchmark model against which to gauge maturity.

Assessment Principles


In the world of Lean and Agile, intent is everything. To realize its true purpose, an Assessment should be conducted in observance with the following overriding core principles:

  • Confidentiality: all results are owned by the target organization
  • Non-attribution: findings are aggregated at an organizational level, avoiding reference to individuals or sub-groups
  • Collaboration: the event will be imbued with a spirit of openness and partnership; this is not an audit
  • Action-oriented: the results should provide actionable items that contribute toward building a roadmap for change

Also, in order to minimize distraction and disruption, they are often intended to be lightweight and minimally invasive.

Assessment Approaches

It goes without saying that Assessments need to be tailored to fit the needs of the organization. In general, there are some common themes and patterns that we use to plan and perform them. The process for an archetypal Assessment event will often encompass these main activities:

  • Scoping and planning (sampling, scheduling)
  • Discovery/Info gathering (reviewing artifacts, observing events, interviews)
  • Analysis/Findings (synthesizing observations into findings)
  • Recommendations (heatmap, report, debrief)
  • Actions/Roadmap

Overall, the event focuses on taking a sample-based snapshot of an organization to establish its level of Agile Maturity relative to a predefined (Agile) scale. Often, findings and observations are collected or presented in a Maturity Matrix, which acts as a tool for generating an Agile heatmap. Along with a detailed Report and Executive Summary, this is often one of the key deliverables used as a primary input to feed the organization’s transformation Roadmap.
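As a simple illustration of how a Maturity Matrix might feed a heatmap, the sketch below buckets per-area scores into ratings. The practice areas, scores, and thresholds are hypothetical; real assessments define their own scales.

```python
# Bucket maturity scores (1-5) into heatmap ratings; the areas and scores
# are hypothetical illustrations of one team's Maturity Matrix row.
scores = {
    "Iterative delivery": 4.2,
    "Built-in quality": 2.1,
    "Continuous integration": 3.0,
    "Customer collaboration": 4.8,
}

def heat(score: float) -> str:
    if score < 2.5:
        return "red"    # priority improvement area
    if score < 3.5:
        return "amber"  # developing
    return "green"      # strength

for area, score in scores.items():
    print(f"{area:25s} {score:.1f} {heat(score)}")
```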

Modes of Assessment

Not all Assessments need to be big affairs that require major planning and scheduling. In fact, once a robust baseline has been established, it often makes more sense to institute periodic cycles of lighter-weight snapshots. Here are some simple examples of the three primary Assessment modes:

  • Self-Assessment: have teams perform periodic self-assessments to track progress against goals
  • Peer Assessments: institute reciprocal peer reviews across teams to provide objective snapshots
  • Full Assessment: establish a baseline profile and/or deeper interim progress measurement

Focus on People—Not Process and Tools


Many organizations can be seduced into thinking that off-the-shelf solutions are the answer to all their Agile needs. However, even though a plethora of methods, techniques, and tools exists for assessing, one of the most important components is the Assessor. Given the complexities of human organizations, the key to any successful assessment is the ability to discern patterns, analyze, and make appropriate observations and recommendations. This requires that our Assessor is technically experienced, knowledgeable, objective, collaborative, and above all, exercises common sense. Like almost everything else in Agile, the required skills are acquired through experience. So, choosing the right Assessor is a major consideration.

Go Forth and Assess!

In closing, most organizations that are undergoing an Agile transformation recognize the value of performing a snapshot assessment of their organization against their chosen model or framework. By providing a repeatable and consistent measurement capability, an Assessment complements and supports ongoing Continuous Improvement, while also acting as a mechanism for the exchange and promotion of best practices.

We hope that this simple tour of Assessments has got you thinking. So what are you waiting for? Get out there and assess!

For more information on assessments in the SAFe world, we recommend checking out the Measure and Grow article.

About Stephen Gristock


Specializing in Agile-based transformation techniques, Stephen has a background in technology, project delivery and strategic transformations acquired as a consultant, coach, practitioner, and implementation leader. Having managed several large Agile transformation initiatives (with the scars to prove it), he firmly believes in the ethos of “doing it, before you teach/coach it.” He currently leads Eliassen Group’s Agile advisory and training services in the NY Metro region.


Understanding Leading Indicators in Product Development and Innovation

It’s quite common for people to nod knowingly when you mention leading indicators, but in reality, few people understand them. I believe people struggle with leading indicators because they are counterintuitive, and because lagging indicators are so ingrained in our current ways of working. So, let’s explore leading indicators: what they are, why they’re important, how they’re different from what you use today, and how you can use them to improve your innovation and product development.

What Are Leading and Lagging Indicators?


Leading indicators (or leading metrics) are a way of measuring things today with a level of confidence that we’re heading in the right direction and that our destination is still desirable. They are in-process measures that we think will correlate to successful outcomes later. In essence, they help us predict the future.

In contrast, lagging indicators measure past performance. They look backwards and measure what has already happened.

Take the example of customer experience (CX). This is a lagging indicator for your business because the customer has to have the experience before you can measure it. While it’s great to understand how your customers perceive your service, by the time you discover it sucks, it might be too late to do anything about it.

ROI is another example of a lagging indicator: you have to invest in a project ahead of time but cannot calculate its returns until it’s completed. In days gone by you might have worked on a new product and spent many millions, only to discover the market didn’t want it and your ROI was poor.

Online retailers looking for leading indicators of CX might look instead at page load time, successful customer journeys, or the number of transactions that failed and ended up with customer service. I often tell clients that if these leading indicators are positive, we have reason to believe that CX, when measured, will also be positive.

Don Reinertsen shares a common example of leading vs. lagging indicators: the size of an airport security line is a leading indicator for the lagging indicator of the time it takes to pass through security screening. This makes sense because if there is a large line ahead of you, the time it will take to get through security and out the other side will be longer. We can only measure the total cycle time once we’ve experienced it.
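The security-line example can even be made quantitative. Little’s Law from queueing theory says that average time in a system equals the number of items in the queue divided by the processing rate, which is exactly why queue size predicts cycle time. A tiny Python sketch, with made-up numbers:

  def expected_wait_minutes(queue_length: int, people_per_minute: float) -> float:
      """Little's Law: time in system = items in the queue / processing rate."""
      return queue_length / people_per_minute

  # 60 people ahead of you, with screening moving at 4 people per minute:
  print(expected_wait_minutes(60, 4.0))  # 15.0 -- predicted before you experience it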

If you operate in a SAFe® context, the success of a new train’s PI planning (a lagging indicator) is predicated on leading indicators like identifying key roles, training people, getting leadership buy-in, refining your backlog, socializing it with the teams, and so on.

Simple Examples of Successful Leading Indicators

The Tesla presales process is a perfect example of how to develop leading indicators for ROI. Tesla takes refundable deposits, or pre-orders, months if not years before delivering the car to their customers. Well before the cars have gone to production, the company has a demonstrated indicator of demand for its vehicles.

Back in the 90s, Zappos was experimenting with selling shoes online in the burgeoning world of e-commerce. The company deliberately made a loss on every shoe sold (by holding no stock and buying at retail) to generate a leading indicator that an online shoe business would be successful, before investing in the infrastructure you might expect to need to operate in this industry.

If you are truly innovating (versus using innovation as an excuse for justifying product development antipatterns, like ignoring the customer) then the use of leading indicators can be a key contributor to your innovation accounting processes. In his best-selling book, The Lean Startup, Eric Ries explains this concept. If you can demonstrate that your idea is moving forward by using validated learning to prove problems exist, then customers will show interest before you even have a product to sell. Likewise, as Dantar P. Oosterwal demonstrated in his book, The Lean Machine, a pattern of purchase orders can be a leading indicator of product development and market success.

Leading Indicators Can Be Near-term Lagging Indicators

Let’s loop back and consider the definitions of leading and lagging indicators.

  • Lagging: Measures output of an activity. Likely to be easy to measure, as you’ve potentially already got measurement in place.
  • Leading: Measures inputs to the activity. Often harder to measure as you likely do not do this today.

Think about the process of trying to lose weight. Weight loss is a lagging indicator, but calories consumed and exercise performed are leading indicators, or inputs to the desired outcome of losing weight.
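To see the input/output distinction as simple arithmetic, here is a crude Python sketch. The 3,500-calories-per-pound figure is only a rough rule of thumb, and the daily numbers are invented:

  # Leading indicators (inputs): calories consumed and burned each day.
  # Lagging indicator (output): weight change, visible only weeks later.
  KCAL_PER_POUND = 3500.0  # rough rule of thumb, for illustration only

  daily_intake, daily_burn = 2000.0, 2500.0
  deficit_per_day = daily_burn - daily_intake

  # Tracking the inputs today predicts the lagging outcome ahead of time.
  projected_loss_lbs = 30 * deficit_per_day / KCAL_PER_POUND
  print(f"Projected 30-day loss: {projected_loss_lbs:.1f} lb")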


While it’s true that both calories consumed and exercise performed are activities that cannot be measured until they’re completed, and therefore might be considered near-term lagging indicators, they become leading indicators because we’re using them on the path to long-term lagging indicators. Think back to the CX example: page load time, successful customer journeys, and failed transactions that end up with customer service can all be considered near-term lagging indicators. Yet we can use them as leading indicators on a pathway to a long-term lagging indicator, CX.

How to Ideate Your Leading Indicators

The most successful approach I’ve applied with clients over many years is based on some work by Mario Moreira, with whom I worked many moons ago. I’ve tweaked the language and application a little and recommend you create a Pathway of Leading to Lagging Indicators. To demonstrate this, I will return to the CX example.

[Figure: Leading to Lagging Pathway for the CX example]

If we walk the pathway, we can estimate that an acceptable page load time will lead to a successful user journey, which—if acceptable—will then lead to fewer failed transactions that revert to customer service, which ultimately will lead to a positive customer experience metric.

Work Backwards from Your Lagging Indicator

To create your Leading to Lagging Pathway, start from your lagging indicator and work backwards, identifying the key elements that need to be true for your lagging indicator to succeed.

At this stage, these are all presuppositions; as in, we believe these to be true. They stay this way until you’ve collected data and can validate your pathway. This is similar to how you need to validate personas when you first create them.

Add Feedback Loop Cycle Times

Once you have your pathway mapped out, walk the pathway forward from your first leading indicator and discuss how often you can and should record, analyze, and take action for that measure. You should make these feedback loops as short as possible because the longer the loop, the longer it will take you to learn.

[Figure: pathway annotated with feedback loop cycle times]
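If it helps, the pathway can be captured as a simple data structure so that each indicator carries its target and its feedback loop cycle time. This is a hypothetical Python sketch of the CX example; the thresholds and cadences are invented for illustration:

  from dataclasses import dataclass

  @dataclass
  class Indicator:
      name: str
      target: str          # what "good" looks like -- still a presupposition
      review_cadence: str  # feedback loop cycle time: keep it as short as you can

  # Ordered from first leading indicator to the long-term lagging indicator.
  cx_pathway = [
      Indicator("Page load time", "under 2 seconds", "daily"),
      Indicator("Successful user journeys", "over 90% completed", "weekly"),
      Indicator("Failed transactions reaching customer service", "under 1%", "weekly"),
      Indicator("Customer experience (CX)", "positive survey score", "quarterly"),
  ]

  # Walk the pathway forward, reviewing each measure on its own cadence.
  for step, indicator in enumerate(cx_pathway, start=1):
      print(f"{step}. {indicator.name}: target {indicator.target}, "
            f"reviewed {indicator.review_cadence}")

Until you have collected data, the targets remain presuppositions to be validated, as described above.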

All that’s left is to implement your Leading to Lagging Pathway. You may find a mix of measures, some of which you measure today and some you don’t. For those you already measure, you may not be measuring them often enough. You also need to put in place business processes to analyze and take action. Remember that if a measure does not drive decisions, then collecting it is a waste of resources.

Your leading indicator might be a simple MVP. Tools like QuickMVP can support the implementation of a Tesla-style landing page to take pre-orders from your customers.

Applying Leading Indicators in Agile Product Management

A common anti-pattern I see in many product management functions is a solution looking for a problem. These are the sorts of pet projects that consume huge amounts of R&D budget and barely move the needle on profitability. Using design thinking and Lean Startup techniques can help you to validate the underlying problem, determine the best solution, and identify whether it’s desired by your potential customers and is something you can deliver profitably.

In SAFe, leading indicators are an important element of your epic benefit hypothesis statement. Leading indicators can give you a preview of the likelihood that your epic hypothesis will be proven, and they can help deliver this insight much earlier than if you weren’t using them. Insight allows you to pivot at an earlier stage, saving considerable time and money. By diverting spending to where it will give a better return, you are living by SAFe principle number one, Take an economic view.

Let’s look at some working examples demonstrating the use of leading indicators.

[Figures: four worked examples of leading indicators in Agile Product Management]

I hope you can now see that leading indicators are very powerful and versatile, although not always obvious when you start using them. Start your ideation by creating a Leading to Lagging Pathway, working back from your desired lagging indicator. If you get stuck, recall that near-term lagging indicators can be used as leading indicators on your pathway too. Finally, don’t feel you need to do this alone. Pair up or get a group of people together to walk through the process; the discussions will likely be valuable in creating alignment, in addition to the output.

Let me know how you get on. Find me on the SAFe Community Platform and LinkedIn.

About Glenn Smith


Glenn Smith is a SAFe Program Consultant Trainer (SPCT), SPC, and RTE working for Radtac as a consultant and trainer out of the UK. He is a techie at heart, now with a people-and-process focus, supporting organizations globally to improve how they operate in a Lean-Agile way. You will find him regularly speaking at conferences and writing about his experiences to share his knowledge.


Agility Fuel – Powering Agile Teams


One of my favorite analogies for agile teams is to compare them to an F-1 race car. These race cars are the result of some of the most precise, high-performance engineering on the planet, and they have quite a bit in common with high-functioning agile teams. Much like F-1 cars, agile teams require the best people, practices, and support that you can deliver in order to get the best performance out of them.

And just like supercar racing machines, agile teams need fuel in order to run. That fuel is what this post is about. In the agile world, the fuel of choice is feedback. I would like to introduce a new ‘lens’ or way of looking at feedback. I’ll leverage some learning from the art of systems thinking to provide a better understanding of what various metrics are and how they impact our systems every day.

Most often, this feedback is directly from the customer, but there are other types as well. We have feedback regarding our processes and feedback from our machinery itself. In broad terms, the feedback in an agile world falls into three different categories:

  1. Process: Feedback on how well the team is practicing agility.
  2. DevOps: This is feedback on the machinery of our development efforts.
  3. Product: The so-called ‘Gemba metrics.’ This segment of feedback is where we learn from actual customer interaction with our product.

Thinking in Feedback

Every agile framework embraces systems thinking as a core principle. In this exercise, we are going to use systems thinking to change how we see, interact with, and make predictions from our feedback. If you want to go deeper into systems, pick up “Thinking in Systems” by Donella Meadows or “The Fifth Discipline” by Peter Senge. Either book is a great introduction to systems thinking, though the first focuses solely on this topic.

For the purposes of this post, we will be thinking about our feedback in the following format:

Metric

This is the actual metric, or feedback, that we are going to be collecting and monitoring.

Category

Every feedback loop will be a process-, DevOps-, or product-focused loop.

Stock

Each feedback metric impacts some stock within your organization. In each case, we will talk about how the stock and the feedback are connected to each other.

Type

Balancing: Think of the thermostat in a room; it drives the temperature of the room (the stock) to a specific range and then holds it there. These are balancing feedback loops.

Reinforcing: Because a savings account’s interest is based on how much is in the account, whenever you add that interest back in, there is more stock (the amount in the account), and more interest will be deposited next time. This is a reinforcing feedback loop.
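For readers who like to experiment, a toy Python simulation makes the difference between the two loop types tangible. The parameters below are arbitrary:

  # Balancing loop: a thermostat nudges room temperature toward a set point.
  temperature, set_point = 15.0, 21.0
  for _ in range(10):
      temperature += 0.3 * (set_point - temperature)  # correction shrinks near the target
  print(f"Balancing: temperature settles near {temperature:.1f}")

  # Reinforcing loop: interest adds to the very stock it is computed from.
  balance = 1000.0
  for _ in range(10):
      balance *= 1.05  # more stock -> more interest -> even more stock
  print(f"Reinforcing: balance compounds to {balance:.2f}")

The balancing loop converges; the reinforcing loop keeps growing until some limit intervenes, which is exactly why the Limits discussion below matters.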

Delay

Feedback always reports on what has already happened. We must understand the minimum delay that each system has built into it; otherwise, system behavior will oscillate as we react to the way things used to be.

Limits

We will talk about the limits for each stock/feedback pair so that you can understand them and recognize when a system is operating correctly but has simply hit a limit.

A Few Examples

Let’s look at one example metric from each category so that you can see how to look at metrics with this lens.

ART Velocity

[Figure: ART Velocity metric card]

Discussion:

ART velocity impacts two stocks: Program Backlog and Features Shipped, both of which are metrics themselves. In both cases, ART Velocity is a balancing loop since it is attempting to drive those metrics in particular directions. It drives Program Backlog to zero and Features Shipped steadily upward. In neither case will the stock add back into itself like an interest-bearing savings account.

The upper limit is the release train’s sustainability. So, things like DevOps culture, work-life balance, employee satisfaction, and other such concerns will all come into play in dictating the upper limit of how fast your release train can possibly go. The lower limit here is zero, but of course, coaches and leadership will intervene before that happens.

Percent Unit Test Coverage

[Figure: Percent Unit Test Coverage metric card]

Discussion:

Percent Unit Test Coverage is a simple metric that encapsulates the likelihood of your deployments going smoothly. The closer this metric is to 100 percent, the less troublesome your product deployments will be. The interesting point here is that the delay is strictly limited by your developers’ integration frequency, or how often they check in code. Your release train can shorten this metric’s feedback delay simply by architecting for a faster check-in cadence.
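As a simple illustration of putting this metric to work, a team might gate a deployment on a minimum coverage level. This Python sketch uses made-up counts and a made-up 80 percent threshold, not a recommended standard:

  # Hypothetical coverage gate; counts and threshold are assumptions.
  covered_lines, total_lines = 4120, 5000
  coverage_pct = 100.0 * covered_lines / total_lines
  print(f"Unit test coverage: {coverage_pct:.1f}%")  # 82.4%

  # The metric only refreshes when code is checked in, so a faster
  # check-in cadence shortens its feedback delay.
  if coverage_pct < 80.0:
      raise SystemExit("Coverage below threshold: expect a rougher deployment")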

Top Exit Pages

[Figure: Top Exit Pages metric card]

Discussion:

This list of pages will illuminate which ones are the last pages your customers see before going elsewhere. This is very enlightening because any page other than proper logouts, or thank-you-for-your-purchase pages, is possibly problematic. Product teams should be constantly aware of top exit pages that exist anywhere within the customer journey before the value is delivered.

This metric directly impacts your product backlog, but it is less concerned with how much is in that backlog and more with what is in there. This metric should initiate conversations about how to remedy any underlying problem that the Top Exit Pages might be a symptom of.
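As a rough sketch of how a product team might derive this metric, here is one way to count exit pages from per-session page sequences in Python. The session data, page names, and the list of “benign” exits are all illustrative assumptions:

  from collections import Counter

  # Each session is the ordered list of pages one customer visited.
  sessions = [
      ["home", "product", "cart", "checkout", "thank-you"],
      ["home", "search", "product", "cart"],   # abandoned at the cart
      ["home", "product"],                     # left from the product page
      ["home", "search", "product", "cart"],   # abandoned at the cart
  ]

  # Proper logouts and thank-you pages are expected exits, not problems.
  BENIGN_EXITS = {"thank-you", "logout"}

  exit_counts = Counter(
      session[-1] for session in sessions if session[-1] not in BENIGN_EXITS
  )

  # Pages customers most often saw last, before value was delivered.
  for page, count in exit_counts.most_common():
      print(f"{page}: {count} exits")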

Caution

Yes, agility fuel is in fact metrics. Actual, meaningful metrics about how things are running in your development shop. But here is the thing about metrics … I have never met a metric that I could not beat, and your developers are no different. So, how do we embrace metrics as a control measure without the agile teams working the metric to optimize their reward at the cost of effective delivery?

The answer is simple: values. In order for anything in this blog post to work, you need to be building a culture that takes care of its people, corrects errors without punitive measures, and where trust pervades all human interactions. If leadership cannot trust the team, or the team cannot trust its leadership, then these metrics can do much more harm than good. Please proceed with this cautionary note in mind.

Conclusion

This blog post has been a quick intro to a new way of looking at metrics: as agility fuel. To make sense of how your high-performance machine is operating, you must understand the feedback loops and the stocks that those loops impact. If this work interests you, keep an eye on our deep-dive blog posts over on AllisonAgile.com. Soon, we’ll be posting much more in-depth analysis of metrics and how they impact the decisions that agile leaders must make.

About Allison Agile


Lee Allison is a SAFe 5.0 Program Consultant who implements Scaled Agile across the country. He fell in love with Agile over a decade ago when he saw how positively it can impact people’s work lives. He is the CEO of Allison Agile, LLC, which is central and south Texas’ original Scaled Agile Partner.


Deep Dive: Measuring Business Agility with Scaled Agile Assessments

Safe Business Agility

In this episode of the SAFe Business Agility podcast, Melissa Reeve, SPC, and Inbar Oren, SAFe® Fellow and principal contributor to the Scaled Agile Framework®, take a deep dive into how organizations can measure their progress toward business agility. Listeners will learn what business agility is, why measuring an organization’s business agility is so important, and how they can use Scaled Agile’s seven business agility assessments.


Visit these links to learn more about the business agility assessments referenced in the podcast:

Hosted by: Melissa Reeve


Melissa Reeve is the Vice President of Marketing at Scaled Agile, Inc. In this role, Melissa guides the marketing team, helping people better understand Scaled Agile, the Scaled Agile Framework (SAFe) and its mission.

Guest: Inbar Oren


Inbar Oren is a SAFe Fellow and a principal contributor to the Scaled Agile Framework. He has more than 20 years of experience in the high-tech market, working in small and large enterprises in a range of roles, from development and architecture to executive positions. For over a decade, Inbar has been helping development organizations (in both software and integrated systems) improve results by adopting Lean-Agile best practices. Previous clients include Cisco, Woolworths, Amdocs, Intel, and NCR.

Working as a Scaled Agile instructor and consultant, Inbar currently focuses on working with leaders at the Program, Value Stream, and Portfolio levels to help them get the most out of their organizations and build new processes and culture.

A martial arts aficionado, Inbar holds black belts in several arts. He also thinks and lives the idea of “scale,” raising five kids—including two sets of twins—with his beautiful wife, Ranit.