Connecting OKRs, KPIs, OVSs, and DVSs – For Successful SAFe® Implementation

The title of my post may read like acronym soup, but all of these concepts play a critical role in SAFe, and understanding how they’re connected is important for a successful SAFe implementation. After exploring some connections, I will suggest some actions you can take while designing, evaluating, or accelerating your implementation.

KPIs and OKRs

The SAFe Value Stream KPIs article describes Key Performance Indicators (KPIs) as “the quantifiable measures used to evaluate how a value stream is performing against its forecasted business outcomes.”

That includes:

  • Health of day-to-day performance
  • Work to create sustainable change in performance

Objectives and Key Results (OKRs) are meant to be about driving and evaluating change rather than maintaining the status quo. Therefore, they are a special kind of KPI. Objectives point towards the desired state. Key results measure progress towards that desired state. 

But how do these different concepts map to SAFe’s Operational Value Streams (OVSs) and Development Value Streams (DVSs)? And why should you care?

Changing and Improving the Operation

Like Strategic Themes, most OKRs point to the desired change in business performance. These OKRs would be the ones that company leadership cares about. And they would be advanced through the efforts of a DVS (or multiple ones). 

For example, if the business wants to move to a subscription/SaaS model, that’s a change in the operating model—a change in how the OVS looks and operates. That change is supported by the development of new systems and capabilities, which is work that will be accomplished by a DVS (or multiple ones). 

This view enables us to recognize the wider application of the DVS concept that we talk about in SAFe 5. Business agility means using Agile and SAFe constructs to develop any sort of change the business needs, regardless of whether that change includes IT or technology.

Whenever we are trying to change our operation, there’s a question about how much variability we’re expecting around this change. Is more known than unknown, or vice versa? Are we making this change in an environment of volatility, uncertainty, complexity, and ambiguity? If so, then using a DVS construct that employs empiricism to seek the right answers to how to achieve the OKR is essential, regardless of how much IT or technology is involved. We might have an OKR that requires business change involving mainly legal, marketing, procurement, HR, and so on; it would still benefit from an Agile and SAFe DVS approach.

These OKRs would then be elaborated and advanced through the backlogs and backlog items of the various ARTs and teams involved in achieving them.

In some cases, an OKR would drive the creation of a focused DVS. This is the culmination of the Organize around Value Lean-Agile SAFe Principle. This is why Strategic Themes and OKRs should be an important consideration when trying to identify value streams and ARTs (in the Value Stream and ART identification workshop). And a significant new theme/OKR should trigger some rethinking of whether the current DVS network is optimally organized to support the new value creation goals set by the organization.

Maintaining the Health of the Operation

As mentioned earlier, maintaining the health of the operation is also tracked through KPIs. Here we expect stability and predictability in performance. It’s crucial work but it’s not what OKRs or Strategic Themes are about. 

This work can be simple, complex, or even chaotic depending on the domain. The desire of any organization is to bring its operation under as much control as possible and minimize variability as it makes sense in the business domain. What this means is that in many cases, we don’t need Agile and empiricism in order to actually run the operation. Lean and flow techniques can still be useful to create sustainable, healthy flow (see more in the Organizational Agility competency). 

Whenever people working in the OVS switch to improving the OVS (in other words, working on rather than in the operation), they are, in essence, implicitly moving over to a DVS.

Some organizations make this duality explicit by creating a DVS that involves a combination of people who spend some of their time in the OVS and some of their time working on it, together with people who are focused on working on the OVS. For example, an orthopedic clinic network in New England created a DVS comprising clinicians, doctors, PAs, and billing managers (who spend the majority of their time in the OVS) together with IT professionals. Major improvements to the OVS happen in this DVS.

Improving the Development Value Stream

The DVS needs to relentlessly improve and learn as well. Examples of OKRs in this space could be: improve time-to-market, as measured by reduced flow time; improve the predictability of business value delivered, as measured by improved flow predictability; or organize around value, as measured by the number of dependencies and the reduction in the number of Solution Trains required.

This is also where the SAFe transformation or Agile journey lives: it improves individual DVSs, or the overall network of DVSs, creating a much-improved business capability to enhance the operation and advance business OKRs.

Implementing OKRs in this space relates more to enablers in the SAFe backlogs than to features or capabilities. Again, these OKRs change the way the DVS works.

Running the Development Value Stream

Similar metrics can be used as KPIs that help maintain the health of the DVS on an ongoing basis. For example, if technical debt is currently under control, a KPI monitoring it might suffice and hopefully will help avoid a major technical debt crisis. If we weren’t diligent enough to avoid the crisis, an objective could be put in place to significantly reduce the amount of technical debt. Achieving a certain threshold for a tech debt KPI could serve as a key result (KR) for this objective. Once it’s achieved, we might leave the tech debt KPI in place to maintain health. 

It’s like continuing to monitor your weight after you’ve gone on a serious diet. During the diet, you have an objective of achieving a healthy weight with a KR tracking BMI and aiming to get below 25. After achieving your objective, you continue to track your BMI as a KPI.
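To make this dynamic concrete, here is a minimal sketch in Python (the metric name, threshold, and numbers are hypothetical illustrations, not SAFe guidance) of how the same measure can serve first as a key result and later as a health-monitoring KPI:

```python
# Illustrative sketch: the same technical-debt measure acts as a key result
# while an improvement objective is active, then reverts to an ongoing KPI.
TECH_DEBT_TARGET = 0.10  # assumed target: debt ratio at or below 10%

def evaluate_tech_debt(debt_ratio: float, objective_active: bool) -> str:
    if objective_active:
        # While the OKR is in flight, the threshold acts as the key result.
        return "KR met" if debt_ratio <= TECH_DEBT_TARGET else "KR not yet met"
    # Afterwards, the same threshold is simply monitored for health.
    return "healthy" if debt_ratio <= TECH_DEBT_TARGET else "alert: debt creeping up"

print(evaluate_tech_debt(0.18, objective_active=True))   # KR not yet met
print(evaluate_tech_debt(0.09, objective_active=True))   # KR met -> retire the OKR
print(evaluate_tech_debt(0.12, objective_active=False))  # alert: debt creeping up
```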

Taking Action to Advance Your Implementation Using OKRs

In this blog post, we explored the relationship between operational and development value streams, Strategic Themes, and OKRs. We’ve seen OVS KPIs and OKRs, as well as DVS OKRs and KPIs.

A key step in accelerating business agility is to continually assess whether you’re optimally organized around value. OKRs can provide a very useful lens to use for this assessment. 

Start by reviewing your OKRs and KPIs and categorize them according to OVS/DVS/Change/Run.

You can use the matrix below.
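As a rough illustration, a small script like the one below can do the categorization (the metric names and their placement are hypothetical examples, not prescriptions):

```python
# Minimal sketch: tag each measure by value stream (OVS vs. DVS) and intent
# (Change vs. Run), then group the measures into the four cells of the matrix.
from collections import defaultdict

metrics = [
    ("Shift 40% of revenue to subscriptions", "OVS", "Change"),
    ("Monthly churn rate",                    "OVS", "Run"),
    ("Reduce flow time by 30%",               "DVS", "Change"),
    ("Technical debt ratio",                  "DVS", "Run"),
]

matrix = defaultdict(list)
for name, stream, intent in metrics:
    matrix[(stream, intent)].append(name)

# Anything labeled an "OKR" that lands in a Run cell is a candidate to be
# re-described as a KPI (see the discussion of run-focused OKRs below).
for cell in [("OVS", "Change"), ("OVS", "Run"), ("DVS", "Change"), ("DVS", "Run")]:
    print(cell, matrix[cell])
```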

Run-focused OKRs

If you find some OKRs on the left side of the matrix, it’s time to rethink. 

Run-focused OKRs should actually be described as KPIs. Discuss the difference and whether you’re actually looking for meaningful change to these KPIs (in which case they really can be OKRs—but make sure they are well described as such) or are happy to just maintain a healthy status quo.

You can then consider your DVS network/ART/team topology. Is it sufficiently aligned with your OKRs/KPIs? Are there interesting opportunities to reorganize around value?

This process can also be used in a Value Stream Identification workshop for the initial design of the implementation or whenever you want to inspect and adapt it.

Find me on LinkedIn to learn more about making these connections in your SAFe context via an OKR workshop.

About Yuval Yeret

Yuval is a SAFe Fellow and the head of AgileSparks (a Scaled Agile Partner) in the United States, where he leads enterprise-level Agile implementations. He’s also the steward of The AgileSparks Way and the firm’s SAFe, Flow/Kanban, and Agile Marketing practices. Find Yuval on LinkedIn.


How to Measure Team Performance: A Scrum Master Q+A

Assessing your team’s agility is an important step on the path to continuous improvement. After all, you can’t get where you want to go if you don’t know where you are. But you probably have questions: How do you measure a team’s agility, anyway? Who should do it, and when? What happens with the data you collect, and what should you do afterwards?

To bring you the answers, we interviewed two of our experienced scrum masters, Lieschen Gargano and Sam Ervin. Keep reading to learn their recommendations for successfully running a Team and Technical Agility Assessment.

Q: How does SAFe help teams measure their agility, and why should I care? 

Measure and Grow is the Scaled Agile Framework’s approach to evaluating agility and determining what actions to take next. Measure and Grow assessment tools and recommended actions help organizations and teams reflect on where they are and know how to improve.

The SAFe® Business Agility Assessment measures an organization’s overall agility across seven core competencies: team and technical agility, agile product delivery, enterprise solution delivery, Lean portfolio management, Lean-agile leadership, organizational agility, and continuous learning culture. 

The SAFe Core Competency Assessments measure each of these core competencies on a deeper level. For example, the Team and Technical Agility (TTA) Core Competency Assessment helps teams identify areas for improvement, highlight strengths worth celebrating, and baseline performance against future growth. It asks questions about how your team operates. Do team members have cross-functional skills? Do you have a dedicated PO? How are teams of teams organized in your agile release trains (ARTs)? Do you use technical practices like test-driven development and peer review? How does your team tackle technical debt?

For facilitators, including scrum masters, the Team and Technical Agility Assessment is a great way to create space for team reflection beyond a typical retrospective. It can also increase engagement and buy-in for the team to take on actionable improvement items.

Q: Who should run a Team and Technical Agility Assessment? 

Running assessments can be tricky. Teams might feel defensive about being “measured.” Self-reported data isn’t always objective or accurate. Emotions and framing can impact the results. That’s why SAFe recommends that a scrum master or other trained facilitator run the assessment. A scrum master, SPC, or agile coach can help ensure that teams understand their performance and know where to focus their improvement efforts. 

Q: When should I do this assessment?

It’s never too early or too late to know where you stand. Running the assessment for your team when you’re first getting started with an agile transformation will help you target the areas where you most need to improve, but you can assess team performance at any time. 

As for how frequently you should run it … it’s probably more valuable to do it on a cadence—either once a PI or once a year, depending on the team’s goals and appetite for it. There’s a lot of energy in seeing how you grow and progress as a team, and it’s easier to celebrate wins that are demonstrated through documented change over time than through general sentiment.

Q: Okay, how do I prepare for and run it?

The agility assessment tools are available free to SAFe members and customers at the Measure and Grow page on the SAFe Community Platform. There you can choose from tools created for us by our partners, AgilityHealth and Comparative Agility.

Before you start the Team and Technical Agility Assessment, define your team’s shared purpose. This will help you generate buy-in and excitement. If the team feels like they’re just doing the assessment because the scrum master said so, it won’t be successful. They have to see value in it for them, both as individuals and as a team. 

Some questions we like to ask to set this purpose include, “What do we want it to feel like to be part of this team, two PIs from now?” And, “How will our work lives be improved when we check in one year from now?”

There are two ways you can approach running this assessment. Option #1 is to have team members take the assessment individually, and then get together to discuss their results as a group. Option #2 is to discuss the assessment questions together and come to a consensus on the group’s answers.

When we’ve run this assessment we’ve had team members do it individually, so we could focus our time together on review and actions. If you do decide to run it asynchronously it’s important as a facilitator to be available to team members, in case they have questions before you review your answers as a team.

Q: What else should I keep in mind?

We like to kick off the assessment with a meeting invitation that includes a draft agenda. Sending this ahead of time gives everyone a chance to prepare. You can keep the agenda loose so you have flexibility to spend more or less time discussing particular areas, depending on how your team chooses to engage with each question.

Q: Is the assessment anonymous? 

Keeping the answers anonymous is really helpful if you want to get more accurate results. We like to be very clear upfront that the assessment will be anonymous, so that team members can feel confident about being honest in their answers. 

For example, with our teams, we not only explained the confidentiality of individuals’ answers but demonstrated in real-time how the tool itself works so that the process would feel open and transparent. We also made it clear that we would not be using the data to compare teams to each other, or for any purpose other than to gain a shared understanding of where we are and to select improvement items based on the team’s stated goals.

Q: Then what? What do I do with the results?

Once you’ve completed the assessment using one of the two approaches, you’ll want to review the sections one by one, showing the aggregate results and allowing the team to notice their top strengths and top areas for improvement. Your job as facilitator is NOT to tell them what you think based on the results; it’s to help guide the team’s own discussion as they explore the answers. This yields much more effective outcomes!

One thing one of us learned in doing the assessment was how much we disagreed on some things. For example, even with a statement as simple as, “Teams execute standard iteration events,” some team members scored us a five (out of five) while others scored us a one. We treated every score as valid and sought to understand why some team members scored high and others low, just like we do when estimating the size of a user story. During this conversation, we learned an important fact. The product owner thought the iteration was executed in a standard way because she was the one executing it. But team members gave that statement a low score because they weren’t included in much of the decision-making. There was no consensus understanding for what “standard iteration events” meant to the team. 

This prompted a conversation about why the team isn’t always included in how the iteration was executed. We talked about the challenge of aligning schedules to share responsibility for decision-making in meetings. And we talked about the impact of team members not having the opportunity to contribute. 

As a result, the assessment did more than help us see where we needed to improve; it showed us where we had completely different perspectives about how we were doing. It prompted rich conversations that led to meaningful progress.

Q: Okay, I ran the assessment; now what? What are the next steps?

With your assessment results in hand, it’s now time to take actions that help you improve. For each dimension of the Team and Technical Agility Assessment, SAFe provides growth recommendations to help teams focus on the areas that matter most and prioritize their next steps. You should: 

  • Review the team growth recommendations together to generate ideas
  • Select your preferred actions (you can use dot voting or WSJF calculations for this; SAFe® Collaborate has ready-made templates you can use, and a minimal WSJF sketch follows this list)
  • Capture your team’s next steps in writing: “Our team decided to do X, Y, and Z.” 
  • Follow through on your actions, so that you’re connecting them to the desired outcome
  • Check in on your progress at the beginning of iteration retrospectives
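
If you use WSJF to select actions, a minimal sketch of the calculation looks like this (the improvement items and relative scores below are illustrative assumptions, not part of the assessment):

```python
# SAFe's relative WSJF: Cost of Delay / Job Size, where Cost of Delay is the
# sum of user-business value, time criticality, and risk reduction /
# opportunity enablement (all estimated relatively).
def wsjf(value: int, time_criticality: int, risk_opportunity: int, job_size: int) -> float:
    cost_of_delay = value + time_criticality + risk_opportunity
    return cost_of_delay / job_size

candidate_actions = {
    "Automate the regression suite":  wsjf(8, 5, 8, 5),
    "Clarify the Definition of Done": wsjf(5, 3, 2, 1),
    "Cross-train on deployments":     wsjf(3, 8, 5, 3),
}

# Highest WSJF first: these are the improvement items to tackle next.
for action, score in sorted(candidate_actions.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.2f}  {action}")
```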

Finally, you’ll want to use these actions to set a focus for the team throughout the PI, and check in with business owners at PI planning on how these improvements have helped the organization make progress toward its goals.

Q: I’m ready! How do I get started? 

Fantastic. Just visit the Measure and Grow page at the SAFe Community Platform to choose your assessment tool. While you’re there, you can watch the video for tips or download the Measure and Grow Toolkit for play-by-play guidance. As you’re running the assessment, use the SAFe Collaborate templates to guide the discussion and identify actions and next steps. 

Have fun!

About the authors

Lieschen is a product owner and former scrum master at Scaled Agile. She’s also an agile coach and conflict guru—thanks in part to her master’s degree in conflict resolution. Lieschen loves cultivating new ideas and approaches to agile to keep things fresh and exciting. And she’s passionate about developing best practices for happy teams to deliver value in both development and non-technical environments. Fun fact? “I’m the only person I know of who’s been a scrum master and a scrum-half on a rugby team.”

Sam is a certified SAFe® 5.0 Program Consultant (SPC) and serves as the scrum master for several teams at Scaled Agile. His recent career highlights include entertaining the crowd as the co-host of the 2019 and 2020 Global SAFe® Summits. A native of Columbia, South Carolina, Sam lives in Denver, CO, where he enjoys CrossFit and Olympic weightlifting.


8 Patterns to Set Up Your Measure and Grow Program for Success

We all know that any time you start something new in an organization it takes time to make it stick, and if teams and leaders find value, they will work to keep a program flourishing. The same is true when you implement a Measure and Grow Program within your organization. It takes planning and effort to get it started, but the rewards will definitely outweigh the efforts in the end.

At AgilityHealth®, our Strategists work with organizations every day to help them set up Measure and Grow programs that will succeed based on their individual needs. Through their experiences, they have noticed some consistent patterns across our customers, both commercial and government, for- and non-profit. Understanding these patterns can help you set up a program that’s right for your organization.

Before we jump into the patterns, let’s review what a Measure and Grow program is. Simply stated, it’s how you will measure your progress toward business agility. When you look at how Enterprise Business Agility was defined by Sally Elatta, AgilityHealth Founder, and Evan Leybourn, Founder of the Business Agility Institute, you can see why this is important.

The ability to adapt to change, learn and pivot, deliver at speed, and thrive in a competitive market.

Sally Elatta, CEO AgilityHealth and Evan Leybourn, Founder, Business Agility Institute

We need to maintain our competitive edge, and in the process, make sure that healthy teams remain a priority—especially as we start to identify common patterns across teams.

Patterns

  1. Define how you will measure success.

Bertrand Duperrin said, “Tell me how you will measure me, and I will tell you how I will behave.” This is true of our teams, our team members, and our leaders. After the success criteria have been defined, allow the team members to measure themselves in a safe environment where they can be open and honest about their maturity with a neutral facilitator. The process of acting on the data is very powerful for teams.

  2. Provide a way to help teams grow after you measure them.

“Measurement without action is worthless data.” (Thanks, Sally, for another great bit of wisdom.) When you set up your Measure and Grow program, make sure it includes a way for teams to learn and mature.

Some of the common ones we see are:

  • Dojo teams—high-performing teams paired with new or immature teams to help them learn
  • Pre-defined learning paths for teams using instructor-led or virtual learning
  • Intentional learning options for teams through Communities of Practice or other options
  • Pairing/Mentorship/Accountability Partners

  3. Tie the results to the goals.

“Why are we taking the time to do this?” This is a common question that teams and leaders ask when we are starting Measure and Grow programs. They feel that the time reserved for an Inspect and Adapt session could maybe be used to tie up those last few story points or test cases, when in reality there is a corporate objective to mature the teams. Be sure to share these kinds of goals with your teams and managers so they understand that this is important to the organization.

  4. Provide a maturity roadmap that takes the subjectivity out of the questions.

We all have an idea of what “good” looks like, but without a shared understanding of “good”, my “good” might be a 3, my teammate’s might be a 4, someone else’s might be a 2, and so on. When you share a common maturity roadmap to provide context for your assessment, your results will be less subjective.

  5. Measure at multiple levels so that you can correlate the results.

When we just look at maturity from the team perspective, we get one view of an organization. When we look at maturity from the leadership and stakeholder perspectives, we get another view. When we look at both together—the sandwich model—we get a three-dimensional view and can start to surmise cause and effect. This gives a clearer picture of how an organization is performing.

  6. Minimize competing priorities and platforms.

Almost all teams, regardless of organization, share that there are too many systems, too many priorities, too many everything (except maybe pizza slices …). Be sure to schedule your measurement and retrospective time when the team is taking a natural break in their work. Teams should take the time to do a strategic retrospective on how they are working together at the end of every PI during their Inspect and Adapt, so use that time wisely.

  7. Engage the leaders in the process.

When this becomes a “we” exercise and not a “you” exercise, then there is a sense of trust that is built between the teams and their leaders. Inevitably the teams are going to ask the leaders for assistance in removing obstacles. If the leaders are on board from the start, expect this, and start removing those obstacles, it creates an atmosphere of psychological safety where teams can be honest about what they need and leaders can be honest about what they expect.

  8. Remember, this is all change, and change takes time.

Roy T. Bennett said, “Change begins at the end of your comfort zone.” It takes time, perseverance, and some uncomfortable conversations to change an organization and help it to grow. But in the end, it’s worth doing.

Get Started

Setting up a Measure and Grow program isn’t without its struggles, but for the organizations and teams that put the time and effort into doing it right, the rewards far outweigh the work that goes into it. If you would like to chat with us about what it would take to set up your Measure and Grow program, we’re ready to help.

About Trisha Hall

Trisha has been part of AgilityHealth’s Nebraska-based leadership team since 2014. As VP of Enterprise Solutions, she taps into her 25 years of experience to help organizations bring Business Agility to their companies and help corporate leaders build healthy, high-performing teams. Find Trisha on LinkedIn.


How 90 Teams Used Measure and Grow to Improve Performance by 134 Percent

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

One of our large financial services clients needed immediate help. It was struggling to meet customer demands and industry regulations and needed to align business priorities to capacity before it was outplayed by competitors. The company thought the answer would be to invest in business agility practices. But so far, that strategy didn’t seem to be paying off.

Teams were in constant flux and the ongoing change was causing unstable, unpredictable performance. The leading question was, “How can we get more output from existing capacity?”

Among the client’s key challenges:

  • No visibility into common patterns across teams
  • Inspect-and-adapt data was stuck in PowerPoint and Excel
  • Output expectations didn’t match current capacity
  • Teams weren’t delivering outcomes aligned to business value

Getting a baseline on team health 

We introduced the AgilityHealth® TeamHealth Radar Assessment to the continuous improvement leadership team, and it decided to pilot the assessment across the portfolio. Within a few weeks after launching the assessment, the organization got a comprehensive readout. It identified the top areas of improvement and key roadblocks for 90+ teams. 

These baseline results showed a lack of a backlog, not to mention a lack of clarity around the near-term roadmap. Teams were committing to work that wasn’t attached to any initiatives and the work wasn’t well-defined. Dependencies and impediments weren’t being managed. And the top areas of improvement matched data collected during inspect and adapt exercises over the previous two years. Even though the organization had previously identified these issues, nothing had been done to resolve them, as leaders did not trust the data until it came from the voice of the teams via AgilityHealth.

The ROI of slowing down to speed up

Equipped with this knowledge, leaders took the time to slow down and ensure teams had what they needed to perform their jobs efficiently. Leaders also developed a better understanding of where they needed to step in to help the teams. The organization re-focused efforts on building a sufficient backlog, aligned with a roadmap, so teams could identify dependencies earlier in the development lifecycle. 

This intentional slow-down drove a return on investment in less than a year and $6M in cost savings—equivalent in productivity to the work of five extra teams—while generating an additional $25M in value for the company.

By leveraging the results of the AgilityHealth assessment, leaders now had the data they needed to take action:

  • A repeatable process for collecting and measuring continuous improvement efforts at the end of every program increment (PI)
  • Clear understanding of where teams stood in their Agile journey and next steps for maturity
  • Comprehensive baseline assessment results showing where individual team members thought improvement was needed, both from leaders and within their teams

What’s next

An enterprise transformation doesn’t stop with the first round of assessments. Like other Fortune 500 companies, this client plans to continue scaling growth and maturity across the enterprise, increasing momentum and building on what it’s learned.

The company plans to introduce the AgilityHealth assessment for individual roles, so it can measure role maturity and accelerate the development of Agile skills across defined competencies. It will continue to balance technical capacity with an emphasis on maintaining stable, cross-functional teams since these performance metrics correlate to shipping products that delight customers and grow the business. And to better facilitate “structural agility” (creating and tracking Agile team structures that support business outcomes), it will focus on ensuring the integrity of its data.

Get started

You too can leverage AgilityHealth’s Insights Dashboard to get an overall view of your organization’s Agile maturity: baseline where you are now, discover how to improve, and get to where you want to be tomorrow. Get started by logging into the SAFe Community and visiting the Measure and Grow page.

About Sally

Sally is a thought leader in the Agile and business agility space. She’s passionate about accelerating the enterprise business agility journey by measuring what matters at every level and building strong leaders and strong teams. She is an executive advisor to many Fortune 500 companies and a frequent keynote speaker. Learn more about AgilityHealth here.


Why SAFe Hurts – Implementing SAFe in Business

Why do some people find SAFe® to be helpful in empowering teams, while others find implementing the Framework painful? To be honest, both scenarios are equally valid.

As I was beginning to refocus my career on transforming the operating models and management structures of large enterprises, I found that the behavioral patterns of Agile and the operational cadence of Scrum shined a spotlight on an organization’s greatest challenges. As a byproduct of working faster and focusing on flow, impediments became obvious. With the issues surfaced, management had a choice: fix the problems or don’t.

As we scale, the same pattern repeats, though the tax of change is compounded because change is hard. Meaningful change takes time, and the journey isn’t linear. Things get better, things get worse, then they get better again.

Consultants will often reference the Dunning-Kruger curve when selling organizational change.

The Dunning-Kruger curve

The Dunning-Kruger curve illustrates change as a smooth journey. One that begins with the status quo, dips as the change is introduced, and then restores efficiency as organizations achieve competence and confidence in the new model. Unfortunately, that’s not how change works, and depicting organizational change this way is misleading.

The Satir curve

When I’d spend time doing discovery work with a prospective client, I’d instead cite a more accurate picture of change: the Satir curve. The Satir image depicts the chaos of change and better prepares people for the journey ahead. Change is chaotic, and achieving successful change requires a firm focus on the reason why the change is important—not simply the change itself. Why, then, can a SAFe transformation (or any other change) feel painful? Here are the patterns of SAFe transformation that I observed pre-COVID.

The Silver Bullet

An organization buys ‘the thing’ (SAFe) thinking it’s a silver bullet that will solve all of its problems: the inability to deliver, poor quality, dissatisfied customers, unhappy teammates, and crummy products. SAFe can help address these issues, but not by simply using the Framework. The challenge we often face is that leaders just want ‘the thing.’ Management is too busy to learn what it is that they bought. That’s OK though. They did an Agile transformation once and read the article on Wikipedia.

How can you lead what you don’t know? How can you ask something of your team that you don’t understand yourself? Let’s explore. 

Start with Why

Leaders don’t take the time to understand what SAFe is, what problems it intends to help organizations solve, or the intent with which SAFe is best used. The SAFe Implementation Roadmap is designed to avoid some of this pain. We begin by aligning senior leaders with the problems to solve. After all, we’re seeking to solve business problems. As Kotter points out, all change must start with a compelling vision for change.

With the problem identified, we then discuss if SAFe is the best tool to address those concerns. We continue the conversation by training leaders in the new way of working, and more importantly, the new way to think to succeed in the post-digital economy.

Middle Management

Middle management, sometimes distastefully referred to as the ‘frozen middle,’ is the hardest role to fill in an organizational hierarchy. Similar to how puberty serves as the awkward stage between adolescence and adulthood, middle management is the first time that many have positional responsibility, but not yet the authority to truly change the system.

Middle managers are caught in a position where many are forced to choose between doing what’s best for the team and doing what’s best to get the next position sooner. Often, when asked to embrace a Lean and Agile way of working, these managers will recognize that being successful in the new system conflicts with what senior leaders (who bought the silver bullet but could not make time to learn it) are asking of them.

This often manifests as a conversation about outputs versus outcomes, in that success has traditionally been determined by color-coded status reports instead of working product increments and business outcomes. Some middle managers will challenge the old system and others will challenge the new system, but in either context, many feel the pain. This is the product of a changing system and not the middle manager’s fault. But it is the reason why many transformations will reset at some point. The pain felt by middle management can be avoided by engaging the support of the leadership community from the start, but this is often not the case.

Misaligned Agile Release Trains

Many transformations begin somewhere after the first turn on the SAFe Implementation Roadmap. Agile coaches will often engage after someone has, with the best of intentions, decided to launch an Agile Release Train (ART), but hasn’t understood how to do so successfully.

SAFe Implementation Roadmap

As a result, the first Program Increment, and SAFe, will feel painful. Have you ever seen an ART that is full of handoffs and is unable to deliver anything of value? This pattern emerges when an ART is launched within an existing organizational silo, instead of being organized around the flow of value. When ARTs are launched in this way, the same problems that have existed in the organization for years become more evident and more painful.

For this reason, many Agile and SAFe implementations face a reboot at some point. Feeling the pain, an Agile coach will help leaders understand why they’re not getting the expected results. Here’s where organizations will reconsider the first straight of the Implementation Roadmap, find time for training, and re-launch their ARTs. This usually happens after going through a Value Stream and ART Identification workshop to best understand how to organize so that ARTs are able to deliver value.

SAFe Implementation Roadmap

Moving Fast Makes Problems More Obvious

Moving fast (or trying to) shines a big spotlight on our problems and forces us to confront them. Problems like organizational silos, toxic cultural norms, bad business architecture, nightmarish tech architecture, cumbersome release management, missing change practices, and the complete inability to see the customer typically surface when we seek to achieve flow.

The larger and older an organization is, the more problems there are, and the longer it takes to get to a place where our intent can be realized. Truly engaged leadership helps, but it still takes time to undo history. For example, I’ve been working with one large enterprise since 2013. It’s taken eight years since initial contact for the organization to evolve to a place that allowed it to respond to COVID confidently and in a way that actively supports global recovery. Eight years ago, the organization would have struggled to achieve the same outcome.

When I first started working with this organization, it engaged in multi-year, strategic planning, and only released new value to its customers once every three years. The conceptual architecture diagram resembled a plate of spaghetti—people spent more time building consensus than building products. And the state of the organization’s operations included laying people off with a Post-it note on their monitor and an escort off-campus.

Today, the organization is much healthier in every way imaginable. It’s vastly better than it was, but not nearly as good as it will be. The leadership team focuses on operational integrity, and how maintainable, scalable, and stable the architecture is—and recognizes that the team is one of the most important assets.

Embracing Lean and Agile ways of working at scale begins with the first ART launch. It continues with additional ART launches, a reconsideration of how we approach strategy, technology, and customers. And it accelerates as we focus on better applying the Lean-Agile mindset, values, and principles on a daily basis. This is the journey to #BecomingAgile so that we can best position the team and our assets to serve customers.

Change Is Hard

Change takes time, and all meaningful change is painful because the process challenges behavior norms. The larger the organization is, the richer the history, and the longer it may take to achieve the desired outcome. There will be good days, days when things don’t make sense, and days when the team is frustrated. But all of that is OK. You know what else is ok? Feeling frustrated during the change. It’s important to focus on why the change is taking place. 

A pre-pandemic pattern (that I suspect may shift) is that change in large organizations often comes with evolution instead of revolution. With the exception of a very few clients, change begins with a team and expands as that team gains success and the patterns begin to reach other adjacent areas of the operation. The change will reach a point where supporting organizational structures must also change to achieve business agility.

As mentioned, moving fast with a focus on flow and customer-centricity exposes bottlenecks in the system. At some point, it will become obvious that structures such as procurement, HR, incentive models, and finance are bottlenecks to greater agility. And, when an organization begins to tackle these challenges, really cool things start to happen. People behave based on how they are incentivized, and compensation and performance are typically at odds with the mindset, values, and principles that are the foundation of SAFe.

Let’s Work Together

SAFe itself is not inherently painful. The Framework is a library of integrated patterns that have proven successful when paired with the intent of a Lean-Agile mindset, set of core values, and guiding principles. Organizations can best mitigate the pain associated with change by understanding what’s changing, the reason why the change is being introduced, and a deliberate focus on sound change-management practices. If you’re working in a SAFe ecosystem that feels challenging, share your experience in the General Discussion Group forum on the SAFe Community Platform. Our community is full of practitioners who represent all stages of the Satir change curve, and who can offer their advice, suggestions, and empathy. Together, we’ll make the world a better place to work.

About Adam Mattis

Adam Mattis is a SAFe Program Consultant Trainer (SPCT) at Scaled Agile with many years of experience overseeing SAFe implementations across a wide range of industries. He’s also an experienced transformation architect, engaging speaker, energetic trainer, and a regular contributor to the broader Lean-Agile and educational communities. Learn more about Adam at adammattis.com.


The Scrum Master Exchange Program: A Twist on Professional Development

My career path shifted at the beginning of 2021 when I became a full-time scrum master. I knew right away that to be the best scrum master I could be, I’d need to do continuous professional development. 

One disadvantage I noticed right away was that I’d only seen Agile, Scrum, and SAFe® in action in the context of one unique, midsize organization. To be more well-rounded in my abilities to lead and coach, I needed to experience companies of different sizes, within different industries, and with different company cultures to see how these principles and practices played out—as well as how their SAFe transformations took shape. The more that I saw, the less I would view my company’s, my teams’, and my own routines as the only perspective. The more I saw, the more I could view innovation as possible because it worked over there.

That’s when the idea came to me to be my own hero and think of something tailored to my professional development needs. With support from my leaders and peers, I created a Scrum Master Exchange Program. I invited interested scrum masters from Scaled Agile and from Travelport, and we paired and connected. From there, pairs self-organized and scheduled several sessions. 

  • Introduction—in this session, pairs introduced themselves and shared their professional background, strengths, weaknesses, and goals. They also talked about their current context: their company, their teams, a typical day/iteration, current conflicts, and recent successes. 
  • Shadow—in these sessions, one scrum master silently sat in on the other’s scrum events or even parts of their ART’s PI planning events. The silent scrum master noted group dynamics, facilitation techniques, or anything interesting.
  • Debrief—pairs scheduled debrief sessions soon after shadow sessions to share observations, relay positive and constructive feedback, and ask questions.

We closed the program with a retrospective for all participants and a summary email to participants and their people managers. 

What I Learned

So, how did it go? When we came together to review the program and its benefits, we all agreed that the new perspectives, experiences, and what we learned were things we deeply valued. Connecting with our partners and problem solving together was empowering and often resulted in us taking action toward solving our challenges. 

For me personally, as a new scrum master, I gained confidence in my knowledge and abilities. While my partner was extremely experienced, I could empathize with her problems. And I could even inspire her to consider something new, which made me feel competent and affirmed.

I took away a new mindset, now pursuing simpler, more effective, tried-and-true methods and focusing on the purpose. For example, I would get really creative with my iteration retrospectives, but they could be time-consuming to ideate and build, and the results easily became disorganized. My partner had a very simple, organized, centrally located method and kept things predictable. Though mine still isn’t perfect, I continue to take steps to bring my style a bit closer to hers (I can’t abandon all flair!).

Last but not least, I was further reminded that my professional development, my teams’ development, and my company’s development is a journey. I know what you’re thinking: “How cliché.” But the truth is, you can’t do everything, so you might as well do something. By maintaining a relentless improvement mindset and taking small steps, both you and your teams can get better. 

Key Takeaways

Was the exchange program perfect? No. But we all met someone new in our same role and got a peek behind the curtain at their respective organizations. And, if we decide to implement the program again, we know how to improve it. I’m proud of the fact that I noticed a hole in my professional development and took action, learned a ton, and brought some of my peers with me along the way. I’d call that a successful experiment. 

I’d strongly encourage you to try out an exchange in whatever role you have. Here’s how: 

  1. Float the idea with your peers to find people to join you.
  2. Reach out to your networks, your coworkers’ networks, and your company’s networks to select a potential organization with which to work.
  3. Pitch the idea, gain buy-in, and connect with legal for any necessary paperwork.
  4. Finalize the participant lists, pair them up, and send them off.
  5. Don’t forget to run a retro and spot ways to improve.

 Let us know how it goes!

About Emma Ropski

Emma is a Certified SAFe 5 Program Consultant and scrum master at Scaled Agile, Inc. As a lifelong learner and teacher, she loves to illustrate, clarify, and simplify to keep all teammates and SAFe students engaged. Connect with Emma on LinkedIn.


How to Scale Up the Circular Economy

This blog post will illustrate, with practical examples, how the principles and practices of the Scaled Agile Framework® (SAFe®) can contribute to scaling up the circular economy. 

The circular economy offers opportunities for better growth through an economic model that is resilient, distributed, diverse, and inclusive. It tackles the root causes of global challenges such as climate change, biodiversity loss, and pollution, creating an economy in which nothing becomes waste, and which is regenerative by design.

Many enterprises are committed to making their products eco-friendlier and participating in global coalitions such as The Plastics Pact. Nevertheless, due to the lack of global standards or lack of dialogue and collaboration, they could create fragmented, small-scale, and sub-optimal solutions. For example, an enterprise might design a product that contains recyclable materials, is built with mono-material components, and is easy to disassemble. Still, it would only maximize its recycling value when embedded in a functioning collection system and treated in proper recycling facilities.

What Is the Solution, Then?

Circularity is a property of a system and not of individual products. It depends on how different actors, products, and information interact with each other. Improving the whole system would require that a group of loosely coupled actors combine their business models to achieve a better collective outcome. The proposed solution is a virtual organization that aligns the strategy and execution of all the stakeholders creating a solution ecosystem.

Let’s look at one example. I will illustrate a management framework to improve the packaging plastics system shown below.

Applying SAFe Principles to the Circular Economy

SAFe principle #10, Organize around value, recommends creating a virtual organization that would maximize the flow of value. It involves eliminating silos and barriers for collaboration, including the people, the processes, and the tools, from all relevant stakeholders that are trying to achieve the same outcome.

This organization would be called a solution ecosystem, and its goal will be to implement the desired changes. Following SAFe principle #2, Apply systems thinking, the solution ecosystem would include all the actors involved in or impacted by the flow of packaging plastics, from business, government, scientists, and NGOs to end-user communities, including all the necessary activities and information flows required. Decisions would be made collaboratively, iteratively, and based on science-based targets.

The objective of the solution ecosystem would be to deliver a series of interventions to improve the flow of plastics iteratively. The teams would validate each intervention hypothesis through a series of minimum viable products following a roadmap. An intervention example could be, “to get the top 20 manufacturers of packaging plastics to commit to plastic packaging that’s 100% reusable, recyclable, or compostable by 2025,” while the desired outcome would be “to reduce packaging plastics flowing into the ocean by 50%.”

The solution ecosystem comprises small, long-standing, cross-stakeholder, and cross-functional teams or teams of teams dedicated to addressing specific outcomes. They will also have access to part-time specialized resources and count on all the necessary skills to deliver value independently of other teams.

The solution ecosystem could be coordinated top-down, from organizations such as the World Economic Forum, or led by a single enterprise coordinating with all the stakeholders impacted by its products. This organization could reach out vertically to all actors along the supply chain, such as those in logistics, packaging, and wholesale, horizontally to competitors, or circularly to all stakeholders impacted. 

Aligning Strategy to Execution

The solution ecosystem is likely to be composed of many people and organizations. To align strategy and execution, SAFe proposes creating a golden thread: from a single, shared vision, to strategic themes, to a common backlog that holds and prioritizes all the interventions that will realize those themes.

The overarching vision of the New Plastics Economy is that plastics never become waste. Instead, they re-enter the economy as valuable technical or biological nutrients, creating an effective after-use plastics economy, drastically reducing the leakage of plastics into natural systems, and decoupling plastics from fossil feedstocks.

Strategic themes are areas of investment that describe how to achieve that vision. They are a way to group and classify interventions. The solution ecosystem’s scientific community would express them as objectives and key results (OKRs), thus providing a qualitative and quantitative measurement to evaluate progress and success. An example could be:

Objective: Drastically reduce leakage of plastics into natural systems.

  • Key result 1: Improve after-use infrastructure in high-leakage countries by x% 
  • Key result 2: Increase the economic attractiveness of keeping materials in the system
  • Key result 3: Increase investments in efforts related to substances of concern by x %

The teams would strive to accomplish the strategic themes by implementing a series of interventions.  The solution ecosystem’s backlog is the prioritized list of interventions to be done. For example, it might look like this:

  1. Bio-benign materials
  2. Reversible adhesives 
  3. Super-polymer
  4. Plastics toolkit for policymakers 
  5. Big data service to track the flow of dangerous chemicals
  6. Food delivery containers as a service

Collaborative Decision-making Process

SAFe recommends using Participatory Budgeting (PB) as a tool for budget allocation across the same enterprise business units. We could expand PB for multi-stakeholder decision-making, as many municipalities use it, gathering all the stakeholders’ voices. All the stakeholders impacted would be heard, voice their concerns, choose their priorities, and learn about other stakeholders’ concerns. The PB process should be done periodically to create an agreed rolling-wave plan.

Creating a Balanced Portfolio

To maintain a well-balanced portfolio, SAFe proposes several budget guardrails:

  • Capacity allocation: This technique classifies interventions into different types and allocates a percentage of the available capacity to each kind, such as building the basic science, writing communications material for end-users, or drafting policy documents. Every three months, we can decide the percentage allocation to each type, keeping the desired balance across all categories (a minimal sketch follows this list).
  • Investment horizons: Classifying interventions by their impact timeframe allows leadership to maintain the right balance between the immediate, short, and long term. Quick wins are needed to win the hearts and minds of the naysayers, while the more difficult things usually take longer.
  • Epic approval: Decentralizing decision making is fundamental to reduce time-to-market and to improve flow. Nevertheless, substantial initiatives that impact multiple stakeholders need to go through an approval process based on a short business case. 
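
As a rough illustration of the capacity-allocation guardrail (the categories, percentages, and capacity figure below are assumptions, not from the post), the agreement could be written down as simply as:

```python
# Illustrative capacity allocation across intervention types, re-agreed every
# three months to keep the portfolio balanced.
capacity_allocation = {
    "basic science": 0.40,
    "end-user communications": 0.25,
    "policy drafting": 0.20,
    "infrastructure pilots": 0.15,
}
assert abs(sum(capacity_allocation.values()) - 1.0) < 1e-9  # shares total 100%

planned_capacity_points = 200  # assumed total capacity for the next timebox
for kind, share in capacity_allocation.items():
    print(f"{kind:26s} {share:>4.0%} -> {share * planned_capacity_points:.0f} points")
```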

Project to Product

The traditional project approach would have required well-defined interventions with fixed scope, fixed budget, and a fixed timeframe, such as building a clearly defined database of biomaterials at the cost of £2m over one year. One major drawback of this approach is that the success criteria of the intervention usually focus more on staying within these artificial constraints than on achieving the desired outcome of increasing the percentage of biomaterials used in packaging plastics by x%. Another problem is that designs and plans must be agreed upon upfront to obtain funding and approval, which is the moment when we know the least about the problem and the solution. Hence, it becomes harder to pivot later if needed.

The book Project to Product proposes a product approach, where funding is associated with long-standing teams working on a set of interventions related to the desired outcome. They would iteratively validate hypotheses and measure progress irrespective of the validity of their initial plans and assumptions. Products must be launched and maintained during their life cycle and have multiple target users with evolving needs. 

For instance, the budget would be related to a product called ‘biomaterials for packaging,’ including research, product launch, product support in life, and end-of-life activities, rather than related to a project to launch a new packaging material.

Timeboxing

SAFe principle #1, Take an economic view, proposes that we work incrementally and iteratively. Working in small timeboxes and on small pieces of independently valuable work would allow us to obtain the best economic outcome. We will get quick feedback; the value will get accumulated over time, and it will enable us to test our hypothesis and pivot quickly if needed.

SAFe principle #7, Cadence and synchronization, promotes that all teams involved in the solution ecosystem get together every three months to collaboratively plan the work for the next three months. This recurrent process helps evaluate progress toward the shared outcome, manage cross-team dependencies, and facilitate cross-team collaboration to create a stable and predictable rhythm of key events. 

Every three months, all teams demonstrate their accomplishments to evaluate progress objectively. They would get together to reflect on how they deliver value and look for opportunities to improve the process.

Epic Owner

The Epic Owner is a new role that would work at the solution ecosystem level to track and shepherd the intervention through its life cycle and across all the teams involved. In our example, the Epic Owner for the biomaterials database would be accountable for defining the scope, building the short business case, getting it approved, building the teams across all stakeholders, tracking progress, being a consultant to the delivery teams, and evaluating whether they are meeting the desired outcome. It is a role, not a title. Hence, it might be fulfilled by a group of people.

Transparency

Transparency and visualization of all the work and all the dependencies by everyone are key. Kanban boards would allow us to see every intervention’s status to match demand with available capacity. A dependency board would show when each intervention will be delivered and its dependencies with other teams.

Decentralized Decision-making

No amount of central planning will be enough at this scale. To enable decentralized decision-making, we need to create a framework that provides organizational clarity and technical competence. This would allow individual teams to make decisions independently with the confidence that those will be good decisions. An example could be that a team can decide to increase the cost of the solution by up to £1,000 to produce an additional reduction in the amount of plastics leaking into the ocean, as long as there is no impact on any of the other planetary boundaries.
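Such a guardrail can be made explicit. The sketch below (hypothetical thresholds and parameter names) shows the kind of check a team could apply on its own before deciding whether to proceed or escalate:

```python
# Hypothetical decentralization guardrail: a team may approve extra cost up to
# a limit, provided the change reduces ocean leakage and does not negatively
# impact any other planetary boundary.
COST_LIMIT_GBP = 1000

def team_can_decide(extra_cost_gbp: float,
                    reduces_ocean_leakage: bool,
                    impacts_other_boundaries: bool) -> bool:
    return (extra_cost_gbp <= COST_LIMIT_GBP
            and reduces_ocean_leakage
            and not impacts_other_boundaries)

print(team_can_decide(800, reduces_ocean_leakage=True, impacts_other_boundaries=False))   # True
print(team_can_decide(1500, reduces_ocean_leakage=True, impacts_other_boundaries=False))  # False -> escalate
```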

References and Sources of Inspiration

Several reports call for organizations, like the proposed solution ecosystem, that could lead multi-stakeholder systemic change:

  • The Metabolic Institute proposed that the Netherlands implement a regional ecosystem approach to scale up circular economy innovation.
  • The Ellen MacArthur Foundation calls for a global, independent collaboration initiative that brings together all actors across the value chain, from consumer goods companies, plastic packaging producers, and plastics manufacturers to cities, businesses involved in collection, sorting, and reprocessing, policymakers, and NGOs.
  • J. Konietzko writes, “Ecosystem innovation aims at changing how actors relate to each other and how they interact to achieve the desired outcome… circular products and services often maximize their circularity in conjunction with other assets. A circular ecosystem perspective thus goes beyond the question, what is our value proposition? Instead, it asks, how does our offering complement other products and services that together can provide a superior and circular ecosystem value proposition?”
  • D. Meadows, in her book Thinking in Systems, says, “You can’t predict a system, but you can dance with it.” Hence, do not design a solution upfront at the enterprise level, expecting the whole ecosystem to react as you hoped. Instead, implement a management framework that allows you to work iteratively at the system level (which we call the solution ecosystem), listen to the feedback, and react accordingly.

Conclusion

In this blog post, I proposed a management framework, adapted from the Scaled Agile Framework, to manage a multi-stakeholder ecosystem to scale up solutions for the circular economy. At this stage, these are ideas extrapolated from my experience in business agility transformations and my reading on the circular economy. Please get in touch with me via LinkedIn to explore these ideas further, or if you have a concrete initiative you would like to apply them to.

About Diego Groiso


As a Principal Consultant at Radtac, a Scaled Agile Partner, Diego supports companies in their Business Agility journeys as an Enterprise Agile Coach, Trainer, and Release Train Engineer. Recently, he has transformed the whole infrastructure department of a global utility company, as well as launched and coached several Agile Release Trains within the Digital Transformation Programme in a global telecom company. He has a passion for the circular economy as one of the solutions to climate change. Connect with Diego on LinkedIn.


Can SAFe Make the World a Better Place?

Recently, someone asked me to explain the benefits of SAFe® over other Agile frameworks. Before I answer, I want to point out that I am not opposed to other scaling frameworks (we don’t do that!). What I can do, however, is speak from my own experience and give you some insight into why I choose to specialise in SAFe.

One of the best books I read last year was Switch: How to change things when change is hard, by Chip and Dan Heath. The book talks about communicating a vision and uses a great analogy of the elephant and the rider. The rider represents the rational self, and the elephant the emotional self. If the rational brain interests you, I suggest you look at the customer stories at the Scaled Agile website. There you can explore a treasure trove of case studies based on data.

I’m going to focus on the elephant. A common question in SAFe classes is where the data behind the statistic “30 percent increase in employee engagement” comes from. I usually answer this question by telling a true story about a Programme Manager I worked with called Steve (his real name isn’t Steve).

Steve had worked in a large organisation for 20 years. Over the years, he honed his craft by emulating those who had gone before him. He worked his way up the project management ladder and became highly respected by all who had the pleasure of working with him. There was just one problem: Steve had learned that the best way for his projects to succeed was to be at the centre of everything. Every decision went through him, and every status report had to be filled out precisely the way he did it; if not, you would have to do it again. Now, this all sounds sensible: Steve knew what his stakeholders needed to know, and he made sure they had the information they needed to sail through his milestones with minimal fuss. But there was one fly in the ointment: Steve had to work 60 hours a week to maintain control.

The organisation Steve worked in decided to adopt SAFe and identified his programme as an ideal candidate to launch an ART. Steve was willing to try but was sceptical of the new approach. We followed the SAFe Implementation Roadmap, trained everyone, and got ready for PI planning. This is the point in our story when everything changed.

In the first PI planning, the energy in the room was unlike anything seen before. Because everyone who needed to be there was in the same room, the teams managed to unblock, in the first hour of the first team breakout, a capability that had stumped everyone for three months. The momentum continued to build from there, and the ART launch was a tremendous success.

To understand why I think SAFe is so brilliant, we need to fast forward a few PIs. It was the summer season, and Steve took some time off to relax and soak up the sun. For the first time in years, Steve was not on his phone. He was not checking emails. He relaxed. When I caught up with him shortly after his holiday, he said, “Thanks to SAFe, I’ve got my life back.” Steve was no longer working crazy hours to stay in control. He had let go of many day-to-day decisions. He trusted the teams to make the calls on the things they were close to so that he could focus on the strategy.

I believe that SAFe is so fantastic because it gives us just the right balance of guidance and flexibility. The 10 SAFe Principles help us put the changes in behaviour into practice. As a coach, I keep them at the front of my mind whenever I’m thinking about implementing SAFe. And for people like Steve, they help put the mindset into practice and apply it to their own context. We can’t be overly prescriptive; every context is different.

Steve’s story is far from unique; I’ve seen many people’s lives change for the better as they embrace a new way of working. That is why I do what I do. Business benefits are essential and the ability to respond to changes in the market is critical, but I’m all about the people.

It’s no accident that the first value of the Agile Manifesto is individuals and interactions over processes and tools, or that the first pillar of the SAFe House of Lean is respect for people and culture. What could be more important than making the world a better place for people to work? What could be more valuable than improving the happiness and wellbeing of our people?

So, can SAFe make the world a better place? I believe so!

About Tim


Tim is an experienced SPCT who has been working in Agile and software for the last 12 years. Over the years, Tim has worked in a variety of industries such as telecom, pharma, and aviation, leading large transformation initiatives. Connect with Tim on LinkedIn.

AgilityHealth Insights: What We Learned from Teams to Improve Performance – Agility Planning

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

At AgilityHealth®, our team has always believed there’s a correlation between qualitative metrics (defined by maturity) and quantitative metrics (defined by performance or flow). A few years ago, we moved to gather both qualitative and quantitative data. Once we felt we had a sufficient amount to explore, we partnered with the University of Nebraska’s Center for Applied Psychological Services to review the data through our AgilityHealth platform. The main question we wanted to answer was: What are the top competencies driving teams to higher performance? 

Before we jump into the data, let’s start by reviewing what metrics make up “performance.” Below are the five quantitative metrics that form the Performance Dimension within the TeamHealth® radar: 

  • Time-to-market 
  • Quality
  • Predictable Delivery
  • Responsiveness (cycle time)
  • Value Delivered

During the team assessment, we ask the team and the product owner about their happiness and their confidence in their ability to meet the current goals. We consider these leading indicators for performance, so we were curious to see what drives the qualitative metrics of Confidence and Happiness as well. 

Methodology 

We analyzed both quantitative and qualitative data from teams surveyed between November 2018 and April 2021. There were 146 companies representing a total of 4,616 teams (some of which took the assessment more than once), which equates to more than 46,000 individual survey responses.

We used stepwise regression to explore and identify the top five drivers for each outcome. Stepwise regression is one approach to building a model that identifies the most predictive set of competencies for a desired outcome (see the sketch below).
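
As a generic illustration of the technique (and not the actual AgilityHealth analysis), a forward stepwise selection over candidate competencies could look like the following Python sketch. It assumes a hypothetical pandas DataFrame with one numeric column per competency score plus an outcome column, and uses statsmodels for the regressions.

    # Illustrative forward stepwise selection; not the actual AgilityHealth model.
    # Assumes `df` holds numeric competency columns plus the outcome column.
    import pandas as pd
    import statsmodels.api as sm

    def forward_stepwise(df, outcome, max_drivers=5):
        """Greedily add the predictor that most improves adjusted R-squared."""
        remaining = [c for c in df.columns if c != outcome]
        selected = []
        best_adj_r2 = float("-inf")
        while remaining and len(selected) < max_drivers:
            scores = []
            for candidate in remaining:
                X = sm.add_constant(df[selected + [candidate]])
                fit = sm.OLS(df[outcome], X).fit()
                scores.append((fit.rsquared_adj, candidate))
            top_r2, top_candidate = max(scores)
            if top_r2 <= best_adj_r2:
                break  # no remaining competency improves the model
            best_adj_r2 = top_r2
            selected.append(top_candidate)
            remaining.remove(top_candidate)
        return selected

    # Example with hypothetical column names:
    # top_drivers = forward_stepwise(team_data, outcome="time_to_market")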

The results of our analysis identified the top five input drivers for each of the performance metrics in the TeamHealth assessment, along with the corresponding “weight” of each driver. We also uncovered the top five drivers of Confidence and Happiness for teams and product owners. These drivers are the best predictors for the corresponding metrics. All drivers are statistically significant, and the drivers for each metric are ranked in order of importance.

By focusing on increasing these top five predictors, teams should see the highest gain in their performance metrics.

Results

After analyzing the top drivers for each performance metric, we noticed that a few kept showing up repeatedly across metrics.

[AgilityHealth chart: top drivers of the performance metrics]

When analyzing the drivers for Confidence and Happiness, we found these additional predictors:

[AgilityHealth chart: top drivers of Confidence and Happiness]

We know from experience that shorter iterations, better planning and estimating, and T-shaped skills all lead to better performance—but we now have data to prove it. It was a welcome surprise to see self-organization and creativity take center stage, as they did in our analysis. We’ve always coached managers to empower teams to solve problems, but for the first time, we have the data to back it up.

Recommendations

Pulling these patterns together, it’s clear that if a team wants to improve its performance efficiently, it should focus on weekly iterations, T-shaped team members, effective planning and estimating, enabling creativity and self-organization, role clarity, and right-sizing and skilling. Teams that invested in these drivers saw a 37 percent performance improvement over teams that didn’t. So when in doubt, start here!

We’re excited to share that you can now see the drivers for each competency inside the AgilityHealth platform. We hope it helps you make informed decisions about where to invest your time and effort to improve your performance.

Visit the AgilityHealth page on the SAFe® Community Platform to learn more about these assessment tools and get started!

About Sally


Sally is a thought leader in the Agile and business agility space. She’s passionate about accelerating the enterprise business agility journey by measuring what matters at every level and building strong leaders and strong teams. She is an executive advisor to many Fortune 500 companies and a frequent keynote speaker. Learn more about AgilityHealth at https://www.agilityhealthradar.com.


How Do We Measure Feelings? – SAFe Transformation

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

As business environments feature increasing rates of change and uncertainty, agile ways of working are becoming the dominant way of operating around the globe. The reason for this dominance is not that agile is necessarily the “best” way of working (agile, by definition, embraces the idea that you don’t know what you don’t know) but because businesses have found agile better-suited to addressing today’s challenges. Detailed three-year plans, extensive Gantt charts, and work breakdown structures simply have less relevance in today’s world. Agile, with its emphasis on fast learning and experimentation, has proven itself to be more appropriate for today’s unpredictable business environment.

Agility Requires Data You Can Trust

Whereas a plan-driven approach requires an extensive analysis phase, today’s context demands frequent access to high-quality data and information to facilitate quick course correction and validation. One critical source of such data is targeted assessments. The purpose of any assessment is to gather information, and the quality of the information collected is a direct result of the quality of the assessment.

Think of an assessment as a measuring tool. If we were studying a physical object, we might use measuring devices to assess its length, height, mass, and so on. Scientists have developed sophisticated definitions of many of these physical characteristics so we can have a shared understanding of them.

However, people—especially groups of people—are not quite so straightforward to measure: particularly if we’re talking about their attitudes and feelings. It’s not really possible to directly measure concepts like culture and teamwork in the same way we can measure mass or length. Instead, we have to look to the discipline of psychometrics—the field of study dedicated to the construction and validation of assessment instruments—to assist us in measuring these complex topics.

Survey researchers often refer to an assessment or questionnaire as an “instrument,” because the purpose is to measure. We measure to learn, and we learn to apply our knowledge in pursuit of improvement. This is one reason why assessment is such an integral part of the educational system. Properly designed, assessments can be a powerful tool to help us validate our approach, understand our strengths, and identify areas of opportunity.

Ensuring Quality is Built into the Assessment

Since meaningful information is so critical to fast inspection and adaptation, it’s important to use high-quality assessments. After all, if we’re going to leverage insights from the assessments to inform our strategy and guide our decisions, we need to be confident we can trust the data.

How do we know that an assessment instrument measures what it purports to? We take care when designing the assessment tool, and then use data to provide evidence of both its validity (accuracy) and reliability (precision). Here’s how we ensure quality is built into our assessment.

Step 1: Prototype

All survey instrument development starts with a measurement framework. When Comparative Agility partnered with SAFe® to design the new Business Agility assessment, subject matter experts leveraged their experience from the original Business Agility survey to explore enhancements. 

The original Business Agility survey had generated a variety of important insights and proved to be incredibly popular among SAFe customers. But one area of potential improvement was the language used in the assessment itself. Customers wanted to leverage a proven SAFe survey to understand an organization’s current state, without first requiring the organization to have gone through comprehensive training. With the former Business Agility survey, this proved difficult, since the survey instrument often referred to SAFe-specific topics that many had not been exposed to yet.

To address this issue, subject matter experts (SPCTs, SAFe Fellows) teamed up with data scientists from Comparative Agility to craft SAFe survey items that would be meaningful at the start of a SAFe implementation, while avoiding terms that would require prior knowledge. This work resulted in a prototype survey or “minimum viable product.” 

Step 2: Test and Validate

Once the new Business Agility survey instrument was developed, we released it to beta and began to collect data. Several people in the SPCT community were asked to participate in a pilot, and in follow-up interviews, respondents were asked about their experience with the survey. Together with the respondents, the survey design team, and additional subject matter experts, we examined the results. (We also received external feedback from a Gartner researcher to help improve the nomenclature of some of the survey items.) Only once the team was satisfied with the reliability and validity of the beta survey instrument would it be ready for production.
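
The article doesn’t describe the exact statistics used, but one common way to gather evidence of internal-consistency reliability for a pilot instrument is Cronbach’s alpha. Here is a minimal Python sketch, assuming the item responses sit in a pandas DataFrame with one column per survey item; the data and the 0.7 rule of thumb are illustrative, not Comparative Agility’s actual procedure.

    # Minimal internal-consistency (reliability) check using Cronbach's alpha.
    # Generic psychometric technique; not Comparative Agility's actual procedure.
    import pandas as pd

    def cronbach_alpha(items):
        """items: one row per respondent, one numeric column per survey item."""
        n_items = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical 5-point responses for a three-item scale
    responses = pd.DataFrame({
        "item_1": [4, 5, 3, 4, 2],
        "item_2": [4, 4, 3, 5, 2],
        "item_3": [5, 5, 2, 4, 3],
    })
    print(round(cronbach_alpha(responses), 2))  # values above ~0.7 are often considered acceptable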

Step 3: Deploy and Monitor

Even after the Business Agility survey instrument reaches the production phase, the data science team at Comparative Agility and Scaled Agile continuously monitors the assessment for data consistency. A rigorous change management process ensures that any tweaks made to survey language post-deployment are tested to confirm they don’t negatively impact its accuracy.

Integrating Flow and Outcomes

Although validated assessments are a critical component of a data-driven approach to continuous improvement, they’re not sufficient. To gain a holistic perspective and complete the feedback loop, it’s also important to measure Flow and Outcomes.

Flow

Flow metrics express how efficiently an organization delivers value. When operating in complex environments characterized by uncertainty and volatility, flow metrics help organizations see performance across the end-to-end value stream so they can identify impediments to agility. A more comprehensive overview of Flow metrics can be found in the SAFe knowledge article, Metrics.
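
As a simple illustration rather than a SAFe-defined calculation, two common flow measures, cycle time and throughput, can be derived from work item start and completion timestamps. The column names and dates below are made up:

    # Illustrative flow metrics from work item timestamps; column names and dates are made up.
    import pandas as pd

    work_items = pd.DataFrame({
        "started":   pd.to_datetime(["2024-01-02", "2024-01-03", "2024-01-10"]),
        "completed": pd.to_datetime(["2024-01-09", "2024-01-12", "2024-01-15"]),
    })

    # Cycle time: elapsed days from start to completion of each item
    cycle_time_days = (work_items["completed"] - work_items["started"]).dt.days

    # Throughput: number of items completed per week
    throughput_per_week = work_items.groupby(work_items["completed"].dt.to_period("W")).size()

    print("Average cycle time (days):", cycle_time_days.mean())
    print(throughput_per_week)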

Outcomes

Flow metrics may help us deliver quickly and effectively, but without understanding whether we’re delivering value to our customers, we risk simply “delivering crap faster.” Outcome metrics address this challenge by ensuring that we’re creating meaningful value for the end-customer and delivering business benefits. Examples of outcome metrics include revenue impact, customer retention, NPS scores, and Mean Time to Resolution (MTTR).
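
To take one of these as a worked example, Net Promoter Score is conventionally calculated as the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 to 6). A small Python sketch with made-up responses:

    # Conventional NPS: % promoters (9-10) minus % detractors (0-6); responses are made up.
    def net_promoter_score(scores):
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # 25.0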

Embracing a Culture of Data-Driven, Continuous Improvement

It’s important to note that although data and insights help inform our strategy and guide our decisions, to make change stick and ultimately to drive sustainable cultural change, we need to appreciate that data is a means to an end.

That is, data—even though it’s validated, statistically significant, and of high quality—should be viewed not as a source of answers, but rather as a means to ask better questions and uncover new insights in our interactions with people. By having data guide us in our conversations, interactions, and how we define hypotheses, we can drive a culture of inquiry and continuous improvement. 

Just as a survey helps us better understand how we feel, an assessment provides us with an opportunity to interact in a more meaningful way and increase our understanding. The data itself is not the goal but a way to help us learn faster, adapt quicker, and remove impediments to agility.

Start Improving with Your Own Data

As 17 software industry professionals noted some twenty years ago at a resort in Snowbird, Utah, becoming more agile is about “individuals and interactions over processes and tools.” 

To start your own journey of data-driven, continuous improvement today, activate your free Comparative Agility account in the Measure & Grow area of the SAFe Community Platform.

About Matthew


Matthew Haubrich is the Director of Data Science at Comparative Agility. Passionate about discovering the story behind the data, Matt has more than 25 years of experience in data analytics, survey research, and assessment design. Matt is a frequent speaker at numerous national and international conferences and brings a broad perspective of analytics from both public and private sectors.
