What is a SAFe Practice Consultant-T (SPCT) and How Can You Become One?


“I am so glad I did it. It is unreservedly the single most important thing I have done in my career. If you are a seasoned professional with a commitment to lifelong learning and are wondering what your next career move might be, I highly recommend you take a look at the SPCT program.”

 Michael Casey, SPCT, Agile Big Picture

As more organizations engage with SAFe®, it’s even more critical that we have knowledgeable, experienced SAFe leaders to help transform large enterprises and continue to shape the way SAFe is being implemented. If you have deep SAFe knowledge, are a lifelong learner, are SAFe Practice Consultant (SPC)-certified, and excel in training and coaching, I invite you to consider becoming a SAFe Practice Consultant-T (SPCT).

“As is the case with any certification, you should carefully evaluate SAFe instructors and consultants, and make sure that they have demonstrated experience that is relevant to the role you are asking them to take on. Do not rely on certifications alone as a measure of the skills of a consultant or prospective employee. A notable exception to this is the SAFe Program Consultant Trainer (SPCT) certification [now SAFe® Practice Consultant-T], which does require demonstrated experience with Agile, software development or product management, training and consulting. If you’re hiring someone who has [an] SPCT certification, you can be confident that they do have experience in these areas, as well as experience with SAFe implementation at multiple organizations. However, SPCTs are in short supply. As of February 2020, there are fewer than 100 people worldwide holding this certification.”

Gartner, “A Technical Professional’s Guide to Successful Adoption of the Scaled Agile Framework (SAFe),” Kevin Matheny, Bill Holz, 13 April 2020

SPCTs are among the most highly regarded SAFe experts in the world. They are also the most sought-after SAFe Trainers, SAFe Trusted Advisors, and SAFe Transformation Architects among enterprises seeking to improve their methods of working and pursue business agility.

As transformation catalysts, SPCTs share their vast industry expertise and skills through teaching, coaching, and handling the most challenging SAFe implementations. And they are experts at communicating, consulting, and creating SAFe knowledge.

SPCT is the most advanced certification you can achieve with SAFe and can be career-changing through job advancement and new opportunities.

SPCT credentials bring the highest level of credibility, which opens doors for you and generates confidence within the organization that you’re helping to create the highest-quality SAFe implementation.

That said, you should know that our selection process has very high standards, and not everyone will get in. To be accepted into the program, you must not only meet the skills and experience prerequisites but also demonstrate presence and gravitas. The requirements and expectations are slightly different for partners and enterprise employees.

Nominees must be sponsored by either a Gold Partner or an enterprise customer with a SAFe® Enterprise Subscription. In both cases, nominees must be full-time employees of their sponsoring organizations and are expected to have several years of experience in the tech industry, including five years of Lean-Agile experience and five years of software/systems/product experience.

Here’s how the process works:

  1. Get nominated: If you or someone you know would make a good candidate, request access to the SPCT portal by sending an email to spct@scaledagile.com. Through the portal, you can submit your documented accomplishments as you achieve them.
  2. Have an interview: After your nomination requirements have been reviewed and accepted, you’ll have two screening interviews with SPCT guides. Our team will then determine whether you’d be a good fit for the program.
  3. Attend Immersion Week: If you’re accepted, you’ll be invited to attend SPCT Immersion Week (currently, we hold three to four classes per year) where nominees showcase their knowledge, skills, and abilities in training and consulting with SAFe. You’ll also learn how to teach an Implementing SAFe® class and may even work on a class project that contributes to SAFe’s intellectual property.
  4. Complete field experience: After Immersion Week, you’ll need to complete additional certification requirements that include teaching SAFe classes, completing SAFe implementations, and finishing the required readings.
  5. Co-teach an Implementing SAFe® class: Lastly, you’ll participate in a pairing test by co-teaching an Implementing SAFe® class with one of our guides. During this class, we’ll evaluate your presentation, training, and coaching skills.

I believe becoming an SPCT is a valuable and rewarding career goal to aspire to—but I’m not the only one. Here’s what some of our SPCTs have to say:

“SPCTs are differentiated in the marketplace. The SPCT certification is rare and the knowledge and expertise it represents is valuable and much in demand.” 

—Simon Chesney, iSPCT, Western Digital Corporation

“Becoming an SPCT takes hard work, but it will pay you back many times over what you put into it in personal growth and career advancement. Get on board the SPCT program and you won’t look back!” 

—Michael Casey, SPCT, Agile Big Picture

I encourage you to explore what it takes to become an SPCT to see if this would be a good fit for you or someone you know.

Learn more by contacting spct@scaledagile.com.

About Adam Mattis

Adam Mattis is a SAFe® Fellow and SAFe® Practice Consultant-Trainer (SPCT) at Scaled Agile with many years of experience overseeing SAFe implementations across various industries. He’s also an experienced transformation architect, engaging speaker, energetic trainer, and a regular contributor to the broader Lean-Agile and educational communities. Learn more about Adam at adammattis.com.


Large Solution Refinement: Paving the Super-Highway of Value Delivery

This post is the second in a series about success patterns for large solutions. Read the first post here.

Backlog refinement is integral to the Scrum process because it prevents surprises and maintains flow in iterative development. Regular backlog review ensures the backlog is ready for iteration planning. An Agile team understands how much they still need to refine the backlog items before the next iteration planning and beyond.

When applying SAFe® to large, complex, cyber-physical systems, you must expand backlog refinement to include more viewpoints. The complexity of a large solution is rarely fully comprehended by one or a few individuals, and the size of the large solution exacerbates the impact of risks that can escape into large solution planning.

So how do we find the balance between over-preparation, which limits ownership and innovation by the solution builders, and under-refinement, which can undermine the solution and the flow of value delivery?

To answer this question, we adapted the following success patterns for large solution backlog refinement.

Use the Dispatcher Principle

The dispatcher principle guides large solution refinement by preventing the premature dispatch of requirements to Agile Release Trains (ARTs), solution areas, or Agile teams. Premature dispatching can cause risks like:

• Misalignment in the development of different solution components
• Missed opportunities for economies of scale across organizational constructs
• Sub-optimization of lower-priority solution features

In contrast, making the right trade-off decisions at the right level drives holistic and innovative solutions.

Key stakeholder viewpoints that are often overlooked include marketing, compliance, customer support, and finance. Ensuring these voices are heard during refinement work can prevent issues that might remain undetected until late in the solution roadmap.

For complex solutions, we discovered that a planning conference is more effective than pre- and post-PI Planning events alone. This event mimics a PI Planning event and is intended to align upcoming PI work across ARTs and solution areas. To keep the conference focused and productive, it should include only representatives from the participating ARTs. We will cover specific planning conference details in a later blog post.

The goal of the planning conference is to provide a boundary for the large solution refinement work. Preparing for key decisions belongs to the refinement work; making those decisions belongs to the planning conference. Key stakeholder inputs that cannot reasonably be gathered during the planning conference, however, should be included in the refinement work.

For example, in Figure 1, a review of the key behavior-driven development (BDD) demo and testing scenarios by a customer advisory board is valuable input to refinement. The customer advisory board will not attend the two-day planning conference, so their advance input provides guardrails on the backlog work that's considered.

Agree on the Definition of Ready

The definition of ready (DoR) for a large solution backlog is often multidimensional. Consider, for example, the architectural dimension of the solution. The architecture defines the high-level solution components and how they interact to provide value. The choice of components is relevant to system architects in the contributing ARTs and to stakeholders in at least these areas:

• User experience
• Compliance
• Internal audit and standards
• Corporate reuse
• Finance  

Advancing the backlog item—a Capability or an Epic—through the stages of readiness often requires review and refinement from the various stakeholders.

Figure 1 is an example Definition of Ready Maturity Model. It shows the solution dimensions that must be refined in preparation for the solution backlog. Levels zero through five show how readiness can advance within each dimension. The horizontal contour lines show that progression to the intermediate stages of readiness is often a combination of different levels in each dimension.

Figure 1. Definition of Ready Maturity Model example

This delineation is helpful when monitoring the progression of a backlog item to intermediate readiness stages on a Kanban board.
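To make the contour-line idea concrete, here is a minimal sketch of how a backlog item's readiness stage could be derived from its per-dimension levels. The dimension names, stage names, and thresholds below are hypothetical; only the zero-to-five level scale and the idea that a stage combines minimum levels across dimensions come from the maturity model described above.

```python
# Hypothetical sketch of a DoR maturity model: each readiness stage (a
# "contour line" in Figure 1) requires a minimum level, from 0 to 5, in
# each dimension. Stage and dimension names are illustrative only.
STAGES = {
    "exploring": {"architecture": 1, "bdd_scenarios": 0, "business_case": 1},
    "refining":  {"architecture": 2, "bdd_scenarios": 2, "business_case": 2},
    "ready":     {"architecture": 4, "bdd_scenarios": 3, "business_case": 3},
}

def readiness_stage(item_levels: dict) -> str:
    """Return the highest stage whose per-dimension minimums are all met."""
    reached = "funnel"  # default: not yet at the first readiness stage
    for stage, minimums in STAGES.items():
        if all(item_levels.get(dim, 0) >= lvl for dim, lvl in minimums.items()):
            reached = stage
        else:
            break  # stages are ordered; stop at the first unmet contour
    return reached

capability = {"architecture": 3, "bdd_scenarios": 2, "business_case": 2}
print(readiness_stage(capability))  # -> refining
```

On a Kanban board, each stage would map to an intermediate column between Funnel and Backlog, so a crew can see at a glance which dimension is holding an item back.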

The key to balancing over-preparation and under-refinement is to distinguish work that an ART or solution area can complete independently from work that carries a high risk of rework. For example, final costs could be prohibitively high without a Lean business case to scope the solution. Another common high-risk impact of under-refinement is unacceptable usability caused by the siloed implementation of Features by the ARTs.

The Priority BDD and Test Scenarios in Figure 1 represent how features are used harmoniously. These scenarios provide guardrails to help ARTs prioritize and demonstrate parts of the overall solution without significant rework within a PI.

Identifying the dimensions, levels, and progression of readiness is a powerful organizational skill for building a large solution.

Leverage Refinement Crews

Regular large solution refinement is necessary to ensure readiness. The complexity of a large solution warrants greater effort and participation than Solution Management can cover. And the number of key decisions grows in direct proportion to the size of a solution.

Our experience shows that roughly 10 percent of those who participate in large solution development should participate in a regular refinement cadence. If the total participation is 450 people, then 45 representatives from across ARTs or solution areas should set aside time for weekly refinement iterations.

Backlog refinement for a large solution requires more capacity than a typical backlog refinement session. The refinement crews determine a cadence of planning, executing, and demonstrating the refinement work. One-week iterations, for example, help drive focus on refinement to ensure readiness.

We also discovered that refinement crews of six to eight people should swarm refinement work within iterations. These groups are usually formed based on individual skills and their representation within stakeholder groups. The alignment of crews with dimensions or skillsets is determined during the planning of refinement iterations. The goal is always to move the Funnel item to the next refinement maturity level in the next iteration.

Our experience says that each refinement crew requires at least three to four core participants. The other crew members can come from stakeholder organizations outside the Solution Train.
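The staffing guidance above reduces to simple arithmetic. The 10 percent rule and the crew size of six to eight come from this post; the function name, the default crew size of seven, and the rounding choices are illustrative.

```python
import math

def refinement_staffing(total_participants: int, crew_size: int = 7) -> tuple:
    """Return (refinement participants, number of crews) for a large solution.

    Roughly 10 percent of solution development participants join the
    regular refinement cadence; crews of six to eight swarm the work.
    """
    participants = round(total_participants * 0.10)  # ~10% join refinement
    crews = math.ceil(participants / crew_size)      # crews of 6-8 people
    return participants, crews

print(refinement_staffing(450))  # -> (45, 7): 45 people in about 7 crews
```

For the 450-person example in the text, this yields the 45 refinement representatives mentioned above, organized into roughly seven crews.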

Readiness progress must be reviewed on a regular cadence alongside Solution Train progress. Progress can be represented in the Solution Kanban between the Funnel and Backlog stages, as shown in Figure 2. In our example, these stages replace the Analyzing state provided as a starting point in SAFe.

Figure 2. Refinement Stages in Solution Kanban

The organization must also allow each refinement step to vary over time, as it makes sense for the solution. For example, as the development of the solution progresses toward a releasable version, the architecture should stabilize. Therefore, the readiness of the backlog item in the architecture dimension should progress very quickly, if not skip some readiness steps. As solutions approach a major release, the contributors' capacity can shift from readiness to execution of the current release or readiness for the next release.

Because refinement happens on a regular cadence of iterations (weekly, for example), the refinement crews should be empowered to make these decisions in refinement iteration planning.

Employ Dynamic Agility

So is there a definitive template of dimensions with levels and a step-by-step process for determining the DoR? Not quite. And we don’t think that a prescriptive process is best for most organizations.

Instead, we advocate for using the organizational skill of dynamic agility.

As the size and complexity of a solution grow, so do the number and type of variables: compliance type, hardware types, skills required, size of the development organization, size of the enterprise/business, specialization of customer types, and so on. This complexity is augmented by company culture challenges, workforce turnover, and technology advancements in emerging industries.

Individuals’ motivation and innovation suffer when they get lost in the morass of complexity. When things don’t get done, more employees are added to help fix the problem. This workforce growth only magnifies the complexity again.

In contrast, the organizational skill of dynamic agility stimulates autonomy, mastery, and purpose for individuals within teams, teams-of-teams, and large solutions.

Consider the House of Dynamic Agility represented in Figure 3.

Figure 3. House of Dynamic Agility

How can dynamic agility be applied to large solution refinement? DoR identification and maintenance of its dimensions and levels happen through a regular cadence of the right events. How often should these occur, for how long, and who should attend? What elements will represent and communicate the DoR? What roles are best suited to own and facilitate the management and use of DoR over time? How will collaboration across the organization happen most efficiently for maximum benefit? These questions are best determined in the context of the large solution.

Conclusion

Large solutions require a balance of preparation and execution to achieve an optimal flow of value. Conducting backlog refinement in preparation for a large solution planning conference and PI Planning lets decomposed work items be implemented without the risk of rework. Avoiding over-specification in refinement allows ARTs to innovate and deliver within the guardrails of refinement. Enabling large solutions to leverage dynamic agility builds ownership, collaboration, and efficiency in a large-scale endeavor.

Look for the next post in our series, coming soon.

About Cindy VanEpps, Project & Team, Inc.

Cindy VanEpps, SAFe® Program Consultant Trainer (SPCT)

From crafting space shuttle flight design and mission control software at Johnson Space Center to roles including software developer, technical lead, development manager, consultant, and solution developer, Cindy has an extensive repertoire of skills and experience. As a SAFe® Program Consultant Trainer (SPCT) and Model-based Systems Engineering (MBSE) expert, her thought leadership, teaching, and consulting rely on pragmatism in the application of Agile practices.

About Wolfgang Brandhuber, Project & Team, Inc.

Chief Scrum Master and Agile Head Coach in various Agile environments

Dr. Wolfgang Brandhuber has been a Scrum Developer, Product Owner, Scrum Master, Chief Scrum Master, and Agile Head Coach in various Agile environments. His passion is large solutions. Since the advent of the large solution level in the Scaled Agile Framework in 2016, he has set up solution trains and helped them improve their complex systems. Of his 18 years as a professional consultant, he has spent more than 16 in the Agile world and more than nine with SAFe. Among other certifications, he is a certified SAFe® Program Consultant Trainer (SPCT), a Kanban University Trainer (AKT), and an Agility Health Trainer (AHT).

About Malte Kumlehn, Project & Team, Inc.


Malte helps deliver complex ecosystems, people, Cloud, AI, and data-powered digital transformations toward business agility. He pioneers intelligent operating models for portfolios with large solutions as a SAFe® Fellow, advisory board member, and executive advisor in this field. He guides executives in developing the most challenging competencies that allow them to deliver breakthrough results through Lean-Agile at scale. His experience has been published by Accenture, Gartner, and the Swiss Association for Quality over the last ten years.

Learn more about Project & Team.

How to Measure Team Performance: A Scrum Master Q+A

Assessing your team’s agility is an important step on the path to continuous improvement. After all, you can’t get where you want to go if you don’t know where you are. But you probably have questions: How do you measure a team’s agility, anyway? Who should do it, and when? What happens with the data you collect, and what should you do afterwards?

To bring you the answers, we interviewed two of our experienced scrum masters, Lieschen Gargano and Sam Ervin. Keep reading to learn their recommendations for successfully running a Team and Technical Agility Assessment.

Q: How does SAFe help teams measure their agility, and why should I care? 

Measure and Grow is the Scaled Agile Framework's approach to evaluating agility and determining what actions to take next. Measure and Grow assessment tools and recommended actions help organizations and teams reflect on where they are and learn how to improve.

The SAFe® Business Agility Assessment measures an organization's overall agility across seven core competencies: Team and Technical Agility, Agile Product Delivery, Enterprise Solution Delivery, Lean Portfolio Management, Lean-Agile Leadership, Organizational Agility, and Continuous Learning Culture.


The SAFe Core Competency Assessments measure each of these core competencies on a deeper level. For example, the Team and Technical Agility (TTA) Core Competency Assessment helps teams identify areas for improvement, highlight strengths worth celebrating, and baseline performance against future growth. It asks questions about how your team operates. Do team members have cross-functional skills? Do you have a dedicated PO? How are teams of teams organized in your agile release trains (ARTs)? Do you use technical practices like test-driven development and peer review? How does your team tackle technical debt?

For facilitators, including scrum masters, the Team and Technical Agility Assessment is a great way to create space for team reflection beyond a typical retrospective. It can also increase engagement and buy-in for the team to take on actionable improvement items.

Q: Who should run a Team and Technical Agility Assessment? 

Running assessments can be tricky. Teams might feel defensive about being “measured.” Self-reported data isn’t always objective or accurate. Emotions and framing can impact the results. That’s why SAFe recommends that a scrum master or other trained facilitator run the assessment. A scrum master, SPC, or agile coach can help ensure that teams understand their performance and know where to focus their improvement efforts. 

Q: When should I do this assessment?

It’s never too early or too late to know where you stand. Running the assessment for your team when you’re first getting started with an agile transformation will help you target the areas where you most need to improve, but you can assess team performance at any time. 

As for how frequently you should run it … it’s probably more valuable to do it on a cadence—either once a PI or once a year, depending on the team’s goals and appetite for it. There’s a lot of energy in seeing how you grow and progress as a team, and it’s easier to celebrate wins that are demonstrated through documented change over time than through general sentiment.

Q: Okay, how do I prepare for and run it?

The agility assessment tools are available free to SAFe members and customers at the Measure and Grow page on the SAFe Community Platform. There you can choose from tools created for us by our partners, AgilityHealth and Comparative Agility.

Before you start the Team and Technical Agility Assessment, define your team’s shared purpose. This will help you generate buy-in and excitement. If the team feels like they’re just doing the assessment because the scrum master said so, it won’t be successful. They have to see value in it for them, both as individuals and as a team. 

Some questions we like to ask to set this purpose include, “What do we want it to feel like to be part of this team, two PIs from now?” And, “How will our work lives be improved when we check in one year from now?”

There are two ways you can approach running this assessment. Option #1 is to have team members take the assessment individually, and then get together to discuss their results as a group. Option #2 is to discuss the assessment questions together and come to a consensus on the group’s answers.

When we’ve run this assessment, we’ve had team members do it individually so we could focus our time together on review and actions. If you do decide to run it asynchronously, it’s important as a facilitator to be available to team members in case they have questions before you review your answers as a team.

Q: What else should I keep in mind?

We like to kick off the assessment with a meeting invitation that includes a draft agenda. Sending this ahead of time gives everyone a chance to prepare. You can keep the agenda loose so you have flexibility to spend more or less time discussing particular areas, depending on how your team chooses to engage with each question.

Q: Is the assessment anonymous? 

Keeping the answers anonymous is really helpful if you want to get more accurate results. We like to be very clear upfront that the assessment will be anonymous, so that team members can feel confident about being honest in their answers. 

For example, with our teams, we not only explained the confidentiality of individuals’ answers but also demonstrated in real time how the tool itself works so that the process would feel open and transparent. We also made it clear that we would not be using the data to compare teams to each other, or for any purpose other than gaining a shared understanding of where we are and selecting improvement items based on the team’s stated goals.

Q: Then what? What do I do with the results?

Once you’ve completed the assessment using one of the two approaches, you’ll want to review the sections one by one, showing the aggregate results and allowing the team to notice their top strengths and top areas for improvement. Your job as facilitator is NOT to tell them what you think based on the results; it’s to help guide the team’s own discussion as they explore the answers. This yields much more effective outcomes!

One thing one of us learned in doing the assessment was how much we disagreed on some things. For example, even with a statement as simple as, “Teams execute standard iteration events,” some team members scored us a five (out of five) while others scored us a one. We treated every score as valid and sought to understand why some team members scored high and others low, just as we do when estimating the size of a user story. During this conversation, we learned an important fact: the product owner thought the iteration was executed in a standard way because she was the one executing it. But team members gave that statement a low score because they weren’t included in much of the decision-making. There was no consensus understanding of what “standard iteration events” meant to the team.

This prompted a conversation about why the team wasn’t always included in how the iteration was executed. We talked about the challenge of aligning schedules to share responsibility for decision-making in meetings. And we talked about the impact of team members not having the opportunity to contribute.

As a result, the assessment did more than help us see where we needed to improve; it showed us where we had completely different perspectives about how we were doing. It prompted rich conversations that led to meaningful progress.

Q: Okay, I ran the assessment; now what? What are the next steps?

With your assessment results in hand, it’s now time to take actions that help you improve. For each dimension of the Team and Technical Agility Assessment, SAFe provides growth recommendations to help teams focus on the areas that matter most and prioritize their next steps. You should: 

  • Review the team growth recommendations together to generate ideas
  • Select your preferred actions (you can use dot voting or WSJF calculations for this; SAFe® Collaborate has ready-made templates you can use)
  • Capture your team’s next steps in writing: “Our team decided to do X, Y, and Z.” 
  • Follow through on your actions, so that you’re connecting them to the desired outcome
  • Check in on your progress at the beginning of iteration retrospectives

Finally, you’ll want to use these actions to set a focus for the team throughout the PI, and check in with business owners at PI planning on how these improvements have helped the organization make progress toward its goals.

Q: I’m ready! How do I get started? 

Fantastic. Just visit the Measure and Grow page at the SAFe Community Platform to choose your assessment tool. While you’re there, you can watch the video for tips or download the Measure and Grow Toolkit for play-by-play guidance. As you’re running the assessment, use the SAFe Collaborate templates to guide the discussion and identify actions and next steps. 

Have fun!

About the authors


Lieschen is a product owner and former scrum master at Scaled Agile. She’s also an agile coach and conflict guru—thanks in part to her master’s degree in conflict resolution. Lieschen loves cultivating new ideas and approaches to agile to keep things fresh and exciting. And she’s passionate about developing best practices for happy teams to deliver value in both development and non-technical environments. Fun fact? “I’m the only person I know of who’s been a scrum master and a scrum-half on a rugby team.”

Sam is a certified SAFe® 5.0 Program Consultant (SPC) and serves as the scrum master for several teams at Scaled Agile. His recent career highlights include entertaining the crowd as the co-host of the 2019 and 2020 Global SAFe® Summits. A native of Columbia, South Carolina, Sam lives in Denver, CO, where he enjoys CrossFit and Olympic weightlifting.


To Accelerate Impact, Measure Team Performance and Cohesion

This post is part of an ongoing blog series where Scaled Agile Partners share stories from the field about using Measure and Grow assessments with customers to evaluate progress and identify improvement opportunities.

As organizations move from team-level agile to enterprise agility, predictive analytics and statistical insights play an increasingly important role in improving how organizations operate. Gartner predicts that this year, AI will create 6.2 billion hours of worker productivity globally, resulting in $2.9 trillion of business value. The rationale for this increased focus on data-driven insights is clear: while business environments continue to grow more complex and uncertain, the need for fast decision-making and agility has never been greater. 

By identifying potential problems before they become organizational challenges and applying proprietary algorithms to large amounts of data, we can identify patterns and direct organizational attention where it matters. But the insights must be shared in a way that helps change leaders improve their decision-making. 

To make complex statistical analysis useful, it should be presented in a way that inspires action.

The Impact Matrix: Measuring performance and cohesion

This challenge is the motivation behind the Impact Matrix, a canvas that immediately shows how teams are doing along two essential vectors of team potency: performance and cohesion. Understanding where teams stand helps change leaders quickly recognize challenges, prioritize efforts, and develop improvements.

Let’s take a closer look at a sample Impact Matrix report and explore how it can accelerate an organization’s transformation efforts.


As illustrated in this example, the teams (represented as dots) in an organization’s portfolio are positioned on the canvas based on their relative scores across two vectors: performance and cohesion.

Depending on a respective team’s score and relative position, we can quickly identify a theme of focus, categorize a strategic approach, and pinpoint essential questions that leaders should consider when deciding next steps. 

Amplify: High performance, high cohesion (green zone)

Teams in the Amplify quadrant are performing at a relatively high level and there are no major disconnects between the team members. Organizations benefit from observing these teams, understanding what makes them perform consistently, and trying to amplify these norms across the broader organization. Some helpful questions to ask include, “To what degree is the environment enabling teams to perform at this level?” “What role does management play in empowering these teams to do so well?” and, “How can coaching help these teams sustain—and even exceed—their current levels?”

Align: High performance, low cohesion (yellow zone)

When teams are in the Align quadrant they are performing well, but there are significant disagreements and disconnects between team members. Organizations benefit from keeping a close eye on teams in this quadrant, as a lack of team cohesion is a leading indicator of deteriorating performance. Questions to consider for teams in this context include, “Are certain team members dominating conversations?” “Is there sufficient psychological safety so all team members can feel comfortable speaking up?” and, “Is there a clear purpose that team members can rally around?”

Mitigate: Low performance, low cohesion (red zone)

Teams in the Mitigate quadrant are indicating they need help: they’re not only performing poorly, but they’re also disconnected. Organizations benefit from listening to and engaging with these teams to help alleviate their challenges. Questions that may be helpful in this context include, “What are immediate actions we can take to ease the current situation?” “How can we better understand why the team feels challenged?” and “How can the organization give the team a safe environment to work out challenges?”

Improve: Low performance, high cohesion (yellow zone)

Teams in the Improve quadrant usually don’t remain there for long. These teams are performing relatively poorly, but they’re aware of their challenges—and typically, they take steps to improve their situation. Organizations benefit by helping these teams accelerate their improvement efforts and providing them with the necessary resources. Questions these teams should ask include, “What steps can the team take to start alleviating current challenges?” “How can the organization help?” and, “What insights do the data give us about where to start?”
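The quadrant logic described above can be sketched in a few lines of code. This is an illustrative model only: it assumes each team’s performance and cohesion scores are normalized to a 0-to-1 scale with 0.5 as the midpoint between “low” and “high,” which is an assumption of this sketch, not something the Impact Matrix prescribes.

```python
# A minimal sketch of the Impact Matrix quadrant logic.
# Assumption (not from the original): scores are normalized to 0.0-1.0,
# and 0.5 is the midpoint between "low" and "high" on each vector.

def impact_quadrant(performance: float, cohesion: float, midpoint: float = 0.5) -> str:
    """Classify a team into one of the four Impact Matrix quadrants."""
    high_perf = performance >= midpoint
    high_cohesion = cohesion >= midpoint
    if high_perf and high_cohesion:
        return "Amplify"   # green zone: sustain and spread what works
    if high_perf:
        return "Align"     # yellow zone: watch for eroding cohesion
    if high_cohesion:
        return "Improve"   # yellow zone: aware of challenges, likely to recover
    return "Mitigate"      # red zone: needs immediate support

# Hypothetical teams, plotted as (performance, cohesion) pairs
teams = {"Team A": (0.8, 0.9), "Team B": (0.7, 0.2), "Team C": (0.3, 0.1)}
for name, (perf, coh) in teams.items():
    print(name, impact_quadrant(perf, coh))
```

In practice, a tool like Comparative Agility computes these scores from assessment data; the point of the sketch is simply that the quadrant is a pure function of the two vectors.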

Conclusion

By leveraging data and sophisticated analytics, the Impact Matrix helps change leaders accelerate their transformation efforts by focusing their work where it matters, pointing them in the right direction, and ultimately supercharging their ability to lead organizational change. Although data and meaningful analytics are insufficient to give you all the answers you need, they can help you ask better questions and complement your overall transformation strategy.

Do it yourself: Run the Impact Matrix on your release train or portfolio today to get a comprehensive picture of how teams are performing, and find out immediately where you can provide the most value to your organization. Activate your free Comparative Agility account on the SAFe Community Platform.

Matthew Haubrich is the Director of Data Science at Comparative Agility. Passionate about discovering the story behind the data, Matt has more than 25 years of experience in data analytics, survey research, and assessment design. Matt is a frequent speaker at numerous national and international conferences and brings a broad perspective of analytics from both public and private sectors.



We’re Giving More Than a Donation for Pride Month – Agility Leadership

I wanted to share a learning moment I and my colleagues at Scaled Agile had recently. June is Pride Month, and some employees requested that we modify our logo to include the rainbow. This request led to an internal debate about whether altering our logo was a trivial act or a meaningful symbol.

People raised valid points. “Others are doing it. Why aren’t we showing our support?” and, “We don’t do enough externally to support the Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, Asexual (LGBTQIA+) community, so changing our logo feels like an empty gesture.” Ultimately, we decided not to modify our logo but instead encourage an employee-driven campaign that the company could share on its social channels.

Personally, I saw the request to alter our logo as a non-issue. Here’s why: As an openly gay male executive at Scaled Agile, I lead one of our largest global regions. No one has ever questioned my capabilities, and I’ve always felt accepted. As a leader here, I have opportunities all the time to lead by example. And I consistently get feedback from employees that they appreciate my approach. People who know me professionally and personally know I don’t have a “work Brendan” that’s different from my “personal Brendan.” My customers know this too. I’ve always been proud of this, and I feel totally supported in this regard at Scaled Agile. 

Scaled Agile participates in Pledge 1% Colorado, and every year we donate a significant part of our time and profits to lots of good causes. While we haven’t yet focused on the LGBTQIA+ community, we do give back to many other underrepresented communities through volunteering and donations. Few companies of our size have matched our commitment to giving back. Our company was founded by a strong team, and we’ve never wavered in our support for the gay community.

Early on in Scaled Agile’s existence, we chose to hire the best talent. And we ended up with a large and enthusiastic LGBTQIA+ employee base. I’m here to say that you can find a place to hang your rainbow hat here with us. Fostering a welcoming workplace where LGBTQIA+ people feel safe, supported, and trusted is giving back, and it’s worth getting loud about. I’m fortunate that I’ve always found these qualities in my employers; I vet them in that regard. Providing an environment where LGBTQIA+ people can grow their skills in a welcoming way is worth more than any donation we could make to an LGBTQIA+ organization. 

Many young LGBTQIA+ people struggle and wonder whether they’ll have a safe future. Showing them that we can thrive and choose whatever career path we want is very important to me. There are LGBTQIA+ adults who go to work every day living a tale of two selves: they are fearful, and rightfully so. When people are forced to hide who they are, they miss out on the right to be their authentic selves, and out of preservation, they show up as a different self. I’ve seen the pain this causes. I’m committed to continuing to play a strategic role in growing this company so that more people can enjoy a safe, fun, and respectful workplace. As a member of the LGBTQIA+ community, I think this is the best giveback we can provide.

I’m a big proponent of providing donations to communities in need. You hope your money goes to the right people at the right time for the right reasons, and you trust that the organization is using your funds wisely. But when a company contributes to the LGBTQIA+ community directly, by hiring us, no questions asked, and providing us with an amazing, supportive team of colleagues and customers, that elicits a tremendous feeling of pride in me.

It can be risky for leaders like me to pen posts like this because they’ll stick with you forever. But leading by example means being vulnerable. We should celebrate who we all are together as well as the fact that our company is having a big impact by offering more than just words or donations. I’ll participate in developing our more concrete LGBTQIA-focused initiatives, and in the meantime, we’ll keep on giving.

About Brendan Walsh

As an active member of the Colorado tech startup community, Brendan has enjoyed growing some of the most successful Colorado-based companies for 25 years and counting. He lives in Denver, along with his partner of 16 years, Aaron. The two have had the privilege of living abroad for several years and always looked forward to bringing their life experiences back to Colorado. Their four-legged, rescued son, Rex, rules the house—just to be clear.


Three Lessons I Learned in My First Year as a Product Owner – Agility Planning

My First Year as a Product Owner

I had managed marketing teams before, but being a product owner (PO) of an Agile marketing team was a completely new concept. As a team member, I was fortunate to spend a year watching POs do the job, which gave me a leg up. But I never really appreciated the intricacy of the position until I became one. Looking back at a year in the role, here are three key lessons I’ve taken from this experience.

The Scrum Master Is a PO’s Best Friend

Stop trying to do it all by yourself. You can’t, and you don’t have to. The scrum master is your co-leader. They don’t just run retros; they’re your sounding board and partner.  

Consider this: scrum masters spend their entire day thinking about how to support the team. Not the customer, not the executives—the team. So, listen to them. When they give you constructive criticism, listen. If they give you advice, listen. Scrum masters are often the ones at the back of the room watching everyone’s body language and unspoken communication while you’re busy thinking about the stories and features. They can catch things you don’t, so listen to them.

Planning Is Hard but Don’t Give Up

A year ago, you would find me crying after each iteration planning. Somehow we would start at 270 percent of capacity and be lucky if we got down to 170 percent: almost twice as much work planned as we could ever physically complete. If our planned capacity was ridiculous, our predictability was nonexistent. One iteration we’d complete 120 percent, the next 50 percent; who knew what you were going to get from us.
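The two figures in this story, planned load as a percentage of capacity and predictability (completed work as a percentage of planned work), can be computed with a couple of helper functions. The point values below are made-up examples for illustration, not our team’s actual numbers.

```python
# Illustrative sketch of the two iteration metrics discussed above.

def load_percent(planned_points: float, capacity_points: float) -> float:
    """Planned work as a percentage of team capacity (>100 means overcommitted)."""
    return 100.0 * planned_points / capacity_points

def predictability_percent(completed_points: float, planned_points: float) -> float:
    """Completed work as a percentage of what the team planned."""
    return 100.0 * completed_points / planned_points

# A team with capacity for 30 points that plans 81 is at 270% load.
print(load_percent(81, 30))            # 270.0
# Completing 15 of 30 planned points gives 50% predictability.
print(predictability_percent(15, 30))  # 50.0
```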

But we stuck with it.

We invested in iteration planning and backlog refinement. We went back to basics, agreeing on the definition of a “1” so we could do relative sizing. We started planning poker, where everyone on the team had a say in how to size stories, even if they personally were not doing the work.  And we started getting more serious and explicit about what we could and couldn’t accomplish inside two weeks.

A year later, I beam with pride. We’re a predictable and high-performing team. When we tell another team we can deliver something within an iteration, it’s the truth. It’s not a gut check, and employees don’t have to work insane hours to make it happen.

Pro Tip: If you’re struggling with iteration planning, I strongly recommend downloading the Iteration Planning Facilitator Checklist on the SAFe Community Platform. There’s also a good instructional video on the Team Events page.

PI Planning Is Not A Drill

I usually start thinking about PI Planning in iteration four. I don’t have features, I don’t know what the pivots will be, but I’m already thinking about what conversations I have to have to get my team ready. I’ve already got my finger in the air to sense the direction of the proverbial wind. My scrum master and I spend a lot of time thinking about preparing the team for PI Planning, creating space for exploration, and making sure we discuss every possible dependency, so there aren’t surprises later.

Virtual PI Planning adds another level of complexity. It’s absolutely critical that I have everything organized for my team and me, documented, and ready to go before we log in. The team knows where to find information, what the marketing objectives are, and which teams we need to sync with to plan our work.

Are you a PO? What lessons have you learned? What do you wish you knew when you started? Join the conversation in the SAFe Product Owners/Managers Forum on the SAFe Community Platform.

About Hannah Bink

Hannah Bink heads the Marketing Success team at Scaled Agile. She has nearly 15 years of B2B marketing experience and studied business at Pennsylvania State University. Prior to Scaled Agile, Hannah spent the majority of her career in telecommunications and healthcare sectors, running global marketing divisions. She is also author of the “Musings of a Marketeer” blog, and lives in Denver, Colorado.


How Vendors Can Apply Customer Centricity When Organizing Around Value

In this post, we look at how a B2B vendor should organize around value when building products that are used to support its customers’ business operations. A common example might be a vendor building a CRM system.

Organizing Around Value

Many organizations are structured around functional silos—such as business, system engineering, hardware, software, testing/QA, and operations. These structures exist because they support specialization and allow organizations to grow and manage their people effectively, which is why so many organizations are set up this way. And many persist in this siloed structure even when they start their journey toward business agility.

For example, they create Agile teams that map to specialized components or subsystems. Similarly, they create Agile Release Trains around entire departments or functions. From a change management perspective, it’s easier to keep the current structure and apply Agile ways of working on top of it. But value doesn’t flow in silos. In many cases, these so-called Agile teams or teams of teams struggle to deliver valuable increments because what’s valuable requires collaboration across the silos. Additionally, teams struggle to see how their work builds up to something valuable for the customer. 

Business agility and digital transformation are all about speed of learning and value creation in the face of a dynamic, changing environment. The classic organizational structure isn’t optimized for this speed, which is why SAFe® introduces the value stream network as part of a dual operating system that runs parallel to the organizational hierarchy.

The value stream network within a dual operating system.

Development value streams (DVSs) are the organizational construct used by SAFe to create this value stream network. DVSs are where the essential activities of defining, implementing, and supporting innovative, digitally enabled solutions occur. Defined correctly, DVSs are able to deliver valuable business solutions on their own with minimal dependencies on other parts of the business.

Alongside the DVSs are operational value streams (OVSs), which describe the steps needed to deliver value to the organization’s customers. Examples might include providing a consumer loan or provisioning a software product. Seen this way, the DVS has one purpose: to create profitable OVSs. It does this either by creating the systems that the OVS relies on to operate effectively or by building the products that the OVS sells. With this in mind, understanding how the work is done in the OVSs helps us understand what value looks like, how it flows from demand to delivery in our context, and how we might organize our DVSs to support this.

What’s the Right Approach to Defining Operational Value Streams?

Identifying the OVS is a relatively straightforward exercise for a technology/development organization trying to organize effectively around the value that the wider organization is delivering to customers. People supporting systems that are used when providing these services (digital or physical) can easily apply customer-centric thinking and identify an OVS oriented around the needs of the real external customer.

Example of a fulfillment OVS.

It gets tougher to map an OVS when you’re a vendor selling your product/solution for use as part of another organization’s operational activities. A great example is an independent software vendor (ISV). Other examples include vendors building and selling cyber-physical systems such as medical devices and manufacturing equipment. 

Consider vendor ACME Corp, which provides banks and financial institutions with banking systems. ACME Corp is building an AI-powered loan underwriting system that will fit into their banking systems portfolio. What OVS should ACME Corp model when considering how it might organize around value?

Many SAFe practitioners would suggest that ACME Corp model an OVS that focuses on how it would market, sell, and operate its solution.

Vendor IT folks supporting systems like CRM and ERP find this OVS a useful way to model the business process they’re supporting. With this OVS in mind, they can make sure they organize around the whole buyer journey. And they can apply systems thinking to explore what features to introduce to the systems supporting this process.

The problem with this approach is that this OVS reflects mainly the vendor and buyer journey perspectives. It doesn’t provide any information on how the solution will be used or the kind of work it will support. An alternative approach is to use the real customer’s experience/journey as the OVS: basically, take the same perspective that an internal technology organization would when building systems for these customers.

Both the buyer-journey OVS and the customer-centric journey OVS exist. The question is: which of them is more useful to focus on? Remember that we map OVSs in order to build a hypothesis about what an effective DVS looks like. In this example, both OVS perspectives can be useful.

The customer-centric fulfillment OVS, focusing on the solution context within which the ISV’s product lives, is the perspective that product development/engineering should focus on: this is where the systems/products/solutions that they create exist. This perspective would be more relevant to people building the products the vendor is selling because it would get them closer to their customers. It would also help them apply systems thinking to which features can really generate value for these customers and for the enterprise serving them.

Emphasize Customer Centricity as Part of Value Stream Identification

The example above illustrates why vendors can find it daunting to figure out which OVS to focus on. Going down the software-product OVS path often leads to confusion and a lack of guidance because it’s disconnected from how the products are used and from the solution context. A common move vendors make at this point is to fall back to organizing around products. Being able to explore, build, deploy, release, and operate/maintain a product can be a significant improvement for some organizations.

The problem with this structure is that it still has silos. And once we look at the value the vendor is trying to create, we might see a lot of dependencies between these silos. The management challenge is to connect the right silos—those that need to collaborate to deliver the value that the organization’s strategy is pointing toward. 

Leveraging customer centricity using the customer’s fulfillment OVS can help the vendor transcend product silos and maximize the value created by its product portfolio. Many vendors we work with that started with ARTs and DVSs oriented around products find themselves with heavy coordination overhead across different DVSs and ARTs when executing on strategic themes and portfolio initiatives.

Going back to our example: the AI-powered underwriting product actually supports multiple steps in the customer-centric OVS and requires new features across a range of the vendor’s products. Maximizing the value of AI-powered underwriting requires collaboration and coordination with the groups developing these products. If all of these products are built by different DVSs, this coordination will be slow and painful. If the vendor organizes around value and brings together the right people to get AI-powered underwriting integrated into the different products, time-to-market and quality will improve. People will also feel more motivated and engaged because they can stay focused and effective.

Using a customer-centric OVS is key to understanding your real solution context. This can enable you to organize effectively to minimize dependencies and enable collaborations that streamline the customer journey, which is essentially the goal of most products: to help a business better serve its customers.

Example of a vendor creating a DVS modeled around a customer-centric OVS.

When a DVS is created to support a customer-centric OVS, the organization can use techniques including value stream mapping and design thinking to innovate “in the Gemba—where the real value flows.” When this DVS includes everyone needed to explore, build, deploy, and support solutions that cut across the customer-centric OVS, we’ve truly created a network operating system that’s organized around value. And we’ve taken a huge step toward enabling real business agility. 

Join our webinar on June 9, 2021 with SAFe Fellow Andrew Sales. You’ll learn tactical advice and tips to identify operational and development value streams that help optimize business outcomes. We hope to see you there!

About Yuval Yeret

Yuval Yeret is the head of AgileSparks (a Scaled Agile Partner) in the United States, where he leads enterprise-level Agile implementations. He’s also the steward of The AgileSparks Way and of the firm’s SAFe, Flow/Kanban, and Agile Marketing practices. Yuval is a SAFe Program Consultant Trainer (SPCT5), a Scrum.org Professional Scrum Trainer (PST), an internationally recognized Kanban trainer, a thought leader, a recipient of the Lean/Kanban Brickell Key Award, and a frequent speaker at industry conferences.


Winning the Customer with Experience Architecture – Agility Planning

As the post-digital economy begins to boom and the worlds of business process and technology come together, it’s time to think about how we optimize the whole from a unified, customer-centric perspective. Some organizations have begun to master the idea of experience architecture, whereas others are just beginning.

During my years consulting, I’ve had the opportunity to work with complex system architectures, such as the APIs and data structures across multiple federal agencies that manage annual earnings and death records for every person in the United States. I’ve also experienced complex business architectures responsible for moving passengers, aircraft, and cargo around the world in a safe and predictable manner.

What was obvious to me in these and other scenarios was that we could not treat the disciplines of business architecture and technical architecture as independent variables, especially if these organizations hoped to keep up with the speed of innovation. In early experiments, I hypothesized that by pairing business architects with their peer application architects, we could better design experiments to achieve business outcomes that were efficient and technically sound. My hypothesis was only partially proven; pieces of the equation were still missing.

In later experiments, I treated the various architects in the value stream as an Agile team composed of all the architectural perspectives we needed to deliver the solution. Those perspectives included business architecture, application architecture, systems architecture, data architecture, information security, and even Lean Six Sigma Black Belts to help keep the group focused on flow efficiency. That experiment had some cool outcomes, though I came to realize one obvious hole: the lack of consideration for the people in the system. We needed a different skill set. We needed experience architecture.

Curated Experiences and Powerful Moments

In their book The Power of Moments, Chip and Dan Heath give many examples of how people remember exceptionally good and exceptionally poor experiences. The authors illustrate how average experiences are largely forgettable. Similarly, experiences that are repeated merge together, so customers develop a general perception of them but don’t remember any one experience in particular.

My family visits Disney. A lot.

Before I met my wife, I’d been to Disney World twice in my life: once when I was eight and again as a teenager. I have fond memories of each trip and can remember specific moments. These were positive experiences that stood out.

Since meeting my wife, I’ve been to Disney an average of twice annually, and as many as four times in a single year. Each of those experiences has been positive, but I can’t articulate why. I know that I enjoy Animal Kingdom and the Avatar ride in Pandora. I know that the Magic Kingdom gives me anxiety and that you can get prosecco at Epcot. But unlike the trips of my youth, which remain memorable, not a single visit as an adult stands out. The experiences have merged.

My Disney experience is what most customers experience as they interact with our value streams. The customer forms a general feeling and only talks about the experience if it was exceptionally good or exceptionally bad.

Need proof? Check out the reviews on Amazon or Google. You’re likely to see mostly very positive and very negative reviews without much in the middle. There is power in moments. The moments are what we remember and they can be curated when an organization makes investments in experience architecture.

Map Value Streams and Understand the Experience

Similar to the steps of value stream identification and business architecture, the first step in articulating a customer experience is to map the operational value stream. With alignment on how the business operates, the next step in understanding the customer experience is to visualize the technology that supports the operation and the development value streams that maintain the technology. Speaking of value streams, check out this cool webinar where I talk about them with Danny Presten.

With the value streams mapped, the next step is to embark on a journey to optimize the whole by eliminating technical and operational debt. With the help of business architecture, we can leverage the time focused on improvement to begin identifying opportunities for large-scale improvement in operational throughput. Additionally, the organization can begin investing in capability modeling with the goal of running more experiments for strategy implementation, faster.

With the operational value stream mapped, the underlying architecture understood, and a commitment made to relentless improvement, we can now begin exploring the customer experience.

Map the Experience

Now that we’re ready to map the customer experience, we begin by seeking to better understand the customer. SAFe® advocates design thinking as a framework for customer centricity, making use of personas, empathy maps, and experience maps. The art of experience mapping follows best practices similar to those used in other forms of value stream mapping. The distinct difference is that we engage with customers to understand the journey from their perspective.

Below is an example of an experience map that depicts the experience of an online public learner in the SAFe ecosystem. At the top, you’ll notice the phases of the customer journey, followed by the operational value stream. We continue by seeking to understand the customer’s goal within each component of the value stream, the touchpoints that Scaled Agile has with the customer, and finally, the customer’s happiness after having completed the operational component.

Similar to other types of value stream mapping, with the customer experience articulated, we can now start on a path to relentlessly improve the customer experience and curate memorable moments.
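One way to picture the structure described above is as a simple record per operational value stream step. The field names and sample entries below are illustrative assumptions for the sketch, not Scaled Agile’s actual experience map.

```python
# A sketch of an experience-map row: one record per OVS step, capturing
# the customer's goal, the organization's touchpoints, and sentiment.
from dataclasses import dataclass

@dataclass
class ExperienceMapEntry:
    phase: str           # phase of the customer journey
    ovs_step: str        # operational value stream step
    customer_goal: str   # what the customer wants at this step
    touchpoints: list    # where the organization meets the customer
    happiness: int       # customer sentiment after the step, e.g. 1-5

# Hypothetical entries for an online learner's journey
experience_map = [
    ExperienceMapEntry("Discover", "Find a course", "Pick the right class",
                       ["website", "search"], 4),
    ExperienceMapEntry("Learn", "Attend training", "Gain usable skills",
                       ["instructor", "courseware"], 5),
    ExperienceMapEntry("Certify", "Take the exam", "Pass on the first try",
                       ["exam portal"], 2),
]

# Low-happiness steps are candidates for improvement and curated moments.
pain_points = [e.ovs_step for e in experience_map if e.happiness <= 3]
print(pain_points)  # ['Take the exam']
```

Walking the map this way makes the improvement backlog fall out naturally: the lowest-happiness steps are where a curated moment pays off most.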

Example experience map for an online public learner in the SAFe ecosystem.

Curate Unforgettable Moments

The Heath brothers explain the power of moments as a key theme in their book. For me, the true power of moments became evident when I purchased a new home in July of 2020. Veterans United Home Loans has made a significant investment in its customer experience and has taken advantage of the power of moments. The proof? The fact that nearly a year later I am blogging about my mortgage experience.

If you’ve ever bought a home, you can probably empathize when I say that it’s a stressful experience. In any mortgage transaction, there are two particularly stressful phases for future homeowners: approval and, more notably, underwriting. Through its work in experience mapping, Veterans United was able to recognize this and curate moments to help ease the stress a little. When I received the approval for my home loan, my mortgage broker, Molly (do you remember the name of your mortgage broker?), sent me a pair of Veterans United socks.

Yes, socks. They weren’t the best quality, they were kind of corny, but they made me smile and I’m still talking about them. Moment curated. 

When I cleared underwriting, the curated moment was a little more personal. Molly had done her homework and knew that I like to barbecue. So, she sent me a nice set of outdoor cooking utensils. As you sit there and ponder the ROI of socks and cooking tools, remember that you now know about Molly and Veterans United. ROI achieved.

What low-cost, high-impact moments can you curate for your customers? How can you turn an otherwise forgettable experience into something that people remember for years to come? These actions are key to winning in the post-digital economy. Consumers want to know your organization is human. They want to know that you care. What can you do to help make that connection?

Experience Architecture: Conclusion

Success in the post-digital economy will require business agility and a clear focus on the customer. Experience architecture is something organizations should employ to better understand the customer so they can release on demand, as determined by the market and customer.

If you’re an experienced experience architect, consider sharing your stories in our General SAFe Discussion Group on the SAFe Community Platform. To learn more about working with varying architectural disciplines, while maximizing the amount of work not done, and embracing a just enough, just-in-time approach, check out these architectural runway articles.

About Adam Mattis

Adam Mattis is a SAFe Program Consultant Trainer (SPCT) at Scaled Agile with many years of experience overseeing SAFe implementations across a wide range of industries. He’s also an experienced transformation architect, engaging speaker, energetic trainer, and a regular contributor to the broader Lean-Agile and educational communities. Learn more about Adam at adammattis.com.


Measuring Agile Maturity with Assessments – Agile Adoption

Stephen Gristock is a PMO Leader and Lean Agile Advisory Consultant for Eliassen Group, a Scaled Agile Partner. In this blog, he explores both the rationale and potential approaches for assessing levels of Agility within an organization.

A Quick Preamble

For every organization that embarks upon it, the road to Agile adoption can be long and fraught with challenges. Depending on the scope of the effort, it can be run as a slow-burn initiative or a more frenetic and rapid attempt to change the way we work. Either way, like any real journey, unless you know where you’re starting from, you can’t really be sure where you’re going.

Unfortunately, it’s also true that we see many organizations go through multiple attempts to “be Agile” and fail. Often, this is caused by a lack of understanding of the current state or a conviction that “we can get Agile out of a box.” This is where an Agile Assessment can really help: it provides a baseline that can act as a starting point for Agile planning, or simply enough information to adjust our course.

What’s in a Word?

We often hear the refrain that “words matter.” Clearly, that is true. But sometimes we humans have a tendency to overcomplicate matters by relabeling things we aren’t comfortable with. One example within the Agile community is our reluctance to use the term “Assessment.” To many Agilists, this simple word has a negative connotation. As a result, we often see alternative phrases such as “Discovery,” “Health-check,” or “Review.” Perhaps it’s the uncomfortable proximity to the word “Audit” that sends shivers down our spines! Regardless, the Merriam-Webster dictionary defines “assessment” as:

“the act of making a judgment about something”

What’s so negative about that? Isn’t that exactly what we’re striving to do? By providing a snapshot of how the existing organization compares against an industry-standard Agile framework, an Assessment can provide valuable insight into what is working well, and what needs to change.

The Influence of the Agile Manifesto

When the Agile movement was in its infancy, thought leaders sought to encapsulate the key traits of true agility within the Agile Manifesto. One of the manifesto’s principles emphasizes continuous reflection and improvement:

“At regular intervals, the Team reflects on how to become more effective, then tunes and adjusts its behavior accordingly”

Of course, this is key to driving a persistent focus on improvement. In Scrum this most obviously manifests itself in the Retrospective event. But improvement should span all our activities. If used appropriately, an Agile Assessment may be a very effective way of providing us with a platform to identify broad sets of opportunities and improvements.

Establishing a Frame of Reference

Just like Agile transformations themselves, all Assessments need to start with a frame of reference that shapes the associated steps of scoping, exploration, analysis, and findings. Otherwise, the whole endeavor is likely to reflect the subjective views and perspectives of the Assessor(s), rather than a representation of the organization’s maturity against a collection of proven best practices. We therefore need to ensure that our Assessments leverage an accepted framework against which to measure the target organization. The selected framework provides us with a common set of concepts, practices, roles, and terminology that everyone within the organization understands. Simply put, we need a benchmark model against which to gauge maturity.

Assessment Principles

In the world of Lean and Agile, intent is everything. To realize its true purpose, an Assessment should be conducted in observance with the following overriding core principles:

  • Confidentiality: all results are owned by the target organization
  • Non-attribution: findings are aggregated at an organizational level, avoiding reference to individuals or sub-groups
  • Collaboration: the event is imbued with a spirit of openness and partnership; this is not an audit
  • Action-oriented: the results should provide actionable items that contribute toward building a roadmap for change

Also, to minimize distraction and disruption, Assessments are typically designed to be lightweight and minimally invasive.

Assessment Approaches

Assessments need to be tailored to fit the needs of the organization, but there are some common themes and patterns we use to plan and perform them. The process for an archetypal Assessment event will often encompass these main activities:

  • Scoping and planning (sampling, scheduling)
  • Discovery/Info gathering (reviewing artifacts, observing events, interviews)
  • Analysis/Findings (synthesizing observations into findings)
  • Recommendations (heatmap, report, debrief)
  • Actions/Roadmap

Overall, the event focuses on taking a sample-based snapshot of an organization to establish its level of Agile Maturity relative to a predefined (Agile) scale. Often, findings and observations are collected or presented in a Maturity Matrix which acts as a tool for generating an Agile heatmap. Along with a detailed Report and Executive Summary, this is often one of the key deliverables which is used as a primary input to feed the organization’s transformation Roadmap.
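To make the Maturity Matrix and heatmap idea concrete, here is a minimal sketch of aggregating raw per-team scores into a color-coded, organization-level heatmap. The dimension names, the 1–5 scale, and the color thresholds are illustrative assumptions, not part of any standard assessment model; note that the output honors the non-attribution principle by dropping team names:

```python
# Hypothetical sketch: aggregating assessment scores into a Maturity Matrix
# "heatmap." Dimensions, the 1-5 scale, and color thresholds are illustrative.
from statistics import mean

# Raw observation scores per team, per dimension (1 = ad hoc, 5 = optimizing)
scores = {
    "Team Alpha": {"Ceremonies": 4, "Backlog Health": 2, "Automation": 3},
    "Team Beta":  {"Ceremonies": 3, "Backlog Health": 3, "Automation": 1},
}

def heatmap(scores):
    """Aggregate scores per dimension; non-attribution: no team names in output."""
    dims = {}
    for team_scores in scores.values():
        for dim, score in team_scores.items():
            dims.setdefault(dim, []).append(score)

    def color(avg):
        # Illustrative thresholds for the heatmap rating
        return "red" if avg < 2.5 else "amber" if avg < 3.5 else "green"

    return {dim: (round(mean(vals), 1), color(mean(vals)))
            for dim, vals in dims.items()}

for dim, (avg, rating) in heatmap(scores).items():
    print(f"{dim}: {avg} ({rating})")
```

A real Assessment would of course use many more dimensions and richer evidence, but the shape of the deliverable is the same: one aggregated rating per practice area, feeding the report and the transformation Roadmap.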

Modes of Assessment

Not all Assessments need to be big affairs that require major planning and scheduling. In fact, once a robust baseline has been established, it often makes more sense to institute periodic cycles of lighter-weight snapshots. Here are some simple examples of the three primary Assessment modes:

  • Self-Assessment: have teams perform periodic self-assessments to track progress against goals
  • Peer Assessments: institute reciprocal peer reviews across teams to provide objective snapshots
  • Full Assessment: establish a baseline profile and/or deeper interim progress measurement
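For the self-assessment mode, tracking progress against goals reduces to comparing periodic snapshots against the established baseline. A minimal sketch, with hypothetical dimension names and scores:

```python
# Hypothetical sketch: per-dimension progress between two assessment snapshots.
baseline = {"Ceremonies": 2, "Backlog Health": 2, "Automation": 1}
current  = {"Ceremonies": 3, "Backlog Health": 2, "Automation": 2}

def progress(baseline, current):
    """Delta per dimension since the baseline; positive means improvement."""
    return {dim: current[dim] - baseline[dim] for dim in baseline}

print(progress(baseline, current))
# {'Ceremonies': 1, 'Backlog Health': 0, 'Automation': 1}
```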

Focus on People—Not Process and Tools

Many organizations get seduced into thinking that off-the-shelf solutions are the answer to all their Agile needs. However, even though a plethora of methods, techniques, and tools exists for assessing, one of the most important components is the Assessor. Given the complexities of human organizations, the key to any successful assessment is the ability to discern patterns, analyze, and make appropriate observations and recommendations. This requires an Assessor who is technically experienced, knowledgeable, objective, collaborative, and who, above all, exercises common sense. Like almost everything else in Agile, these skills are acquired through experience. So, choosing the right Assessor is a major consideration.

Go Forth and Assess!

In closing, most organizations that are undergoing an Agile transformation recognize the value of performing a snapshot assessment of their organization against their chosen model or framework. By providing a repeatable and consistent measurement capability, an Assessment complements and supports ongoing Continuous Improvement, while also acting as a mechanism for the exchange and promotion of best practices.

We hope that this simple tour of Assessments has got you thinking. So what are you waiting for? Get out there and assess!

For more information on assessments in the SAFe world, we recommend checking out the Measure and Grow article.

About Stephen Gristock

Specializing in Agile-based transformation techniques, Stephen has a background in technology, project delivery and strategic transformations acquired as a consultant, coach, practitioner, and implementation leader. Having managed several large Agile transformation initiatives (with the scars to prove it), he firmly believes in the ethos of “doing it, before you teach/coach it.” He currently leads Eliassen Group’s Agile advisory and training services in the NY Metro region.


How I Prepared to Teach My First Remote SAFe Class – SAFe Training

In March 2020, I co-taught my first SAFe® class. I made a big course Kanban board on the wall with each lesson and designed flip charts for feedback. I printed out the entire trainer guide (trees, RIP) and took physical notes on each page and lesson I was accountable for presenting. I printed and cut out all of the features and stories for the PI Planning simulation, divided up the pennies, and organized the room with pens and sticky notes.

I still have the “business executive” visor I like to show off to friends. Little did we know, those few days teaching that course were our last days in the office together.

In March 2021, I co-taught my first remote SAFe class. I didn’t print out or physically organize a single thing, but I did spend a lot of time preparing: I’d say three to four times as much. This time it was browser tabs, online tools, email messages, and files. And since this was my first time teaching any subject remotely, I had a lot to learn and explore.

Luckily, I wasn’t completely alone in my exploration. The SAFe® Community Platform centralized a lot of the resources and information I needed to make the class a success. 

Scaled Agile-provided Preparation

Course enablement. Just as with in-person teaching, mastering the content before teaching it is vital. Listening to SAFe experts discuss the intent of each lesson and subsequently passing the exam was a great (and mandatory) first step.

Remote Trainer Badge. Taking this learning plan helped prime my mind for teaching in a remote context. It gave me confidence and allowed me to see the opportunities of teaching remotely rather than just its limitations. I got tips from some of SAFe’s best trainers on creative ways to teach, appropriate adjustments, and reframed expectations. For example, even with a pre-course webinar to prepare your students and yourself for the tools and technology you’ll use in class, you should still have a plan A, plan B, and plan C, because anything can happen.

The SAFe® Virtual Classroom. With Virtual Classroom, I didn’t need to find a collaboration tool, buy a subscription, rebuild all of the activities, and have my students register for it. In one click and with no extra effort, my activities were set. Thank goodness for Virtual Classroom! I could spend my precious time elsewhere instead of tediously recreating activities and adding, copying, and pasting every user story in the PI Planning simulation.

Knowledge-check questions. At the beginning of every trainer guide, there’s a link to a set of quiz questions associated with each lesson written in the style of the certification exam. Right now, it’s still a bit tedious to transfer all of the knowledge-check questions and answers to a polling tool, but this ended up being a highlight for several of our students. It was a great review of each lesson and was a good litmus test to give confidence that the students were learning and retaining information. 

Self-guided Preparation

Reviewing each slide. Getting very comfortable with the content and flow of the course is important to me. This largely means going through each slide and adding notes for stories, metaphors, and analogies—no trees harmed this time. Taking the time to get creative with the content enabled me to set up jokes and prepare realia props to surprise and delight students.

Preparing each activity. This may seem tedious and redundant, but really getting clear on the activities and exactly how they will be performed set both me and my students up for success. The virtual space can be confusing sometimes, so getting crystal clear on resources, breakout rooms, timeboxes, and objectives is key, especially when there are a few ways to run activities. 

Virtual audience engagement research. This means Google searching and YouTube browsing about how to make a remote class effective and fun. I wanted to get suggestions from experts in the general business of video conferencing, from webinars to interactive courses. I learned about alternatives to slide decks, relevant icebreakers, and online tools to keep the class on track. 

Was the class 100% perfect? No. But I went in feeling prepared, taking advantage of several available resources. I took risks and tried new things. And ultimately, I learned from the experience.

I discovered that remote SAFe teaching is nothing to be afraid of. For many people like me, it’s simply something new, something different, and something with which to experiment, have fun, and fail fast. In the words of one of my favorite professors, “The best teachers are the ones who try.” So, get caught trying.

About Emma Ropski

Emma is a certified SAFe 5 Program Consultant and scrum master at Scaled Agile. As a lifelong learner and teacher, she loves to illustrate, clarify, and simplify helpful concepts to keep all teammates and students of SAFe engaged.
