Scrum Inc. is the premier provider of Scrum training and consulting. We help companies and individuals dramatically increase productivity and quality. Follow this blog to get information on Scrum training, consulting and much more.
We live in an age of disruption. A time when entire industries can be upended by a single, thinly staffed start-up. With customers demanding near-constant innovation and the concept of brand loyalty on the wane, more and more executives are wondering if making the entire company an Agile enterprise is the key not just to surviving but to thriving.
Dr. Jeff Sutherland and Darrell Rigby explore this question in the cover story of the May-June issue of the Harvard Business Review. Together, the co-authors mine their decades of experience, research, and understanding to fill the article with expert insights and relevant case studies.
In this podcast, Host Tom Bullock talks with both men about their article. Sutherland and Rigby also delve deeper into how executives can scale agile across an organization. They discuss everything from budgeting and compensation, to common pitfalls, and keys to a successful implementation.
Jeff Sutherland is the co-creator of Scrum and founder of Scrum Inc.
Darrell Rigby is a partner with Bain and Company and head of its Global Innovation Practice.
Co-authors Jeff Sutherland and Darrell Rigby begin their Harvard Business Review cover story with this acknowledgment, “By now most business leaders are familiar with agile innovation teams.”
Most, but not all.
This supplemental edition of the Agile at Scale podcast is all about the teams. But not just an explanation of the basics. Host Tom Bullock asks Jeff Sutherland and Darrell Rigby to explain the latest thinking about agile teams.
Janelle Monae is a hard artist to put into a box. Hip Hop? R&B? An Afrofuturist like Sun Ra? Is she like Prince or Annie Lennox or Lauryn Hill? Maybe an apt comparison would be David Bowie, a genre and gender rethinking of the future. Maybe she is the future. I probably have nothing original to say, except she is a remarkable talent making sui generis musical commentary on the current moment and mode.
Which made my discovery of this Radical Reads piece list of her favorite books reinforce for me that Scrum is a universal human accelerant, whether you are working on software, rocket ships, AI, or really interesting music.
“One of the last books that we read as a company was Scrum: The Art of Doing Twice the Work in Half the Time. We have some incredible people working with Wondaland Records, and we knew that if we were going to be releasing five artists, including myself, we’d need to figure out a way to make sure we were on schedule. We wanted to find the quickest ways to get quality results we were all happy with. It solves problems in how we write music and how we run the company. It’s the way people use it in the tech industry, inspired by the tech world. It justifies the way we run Wondaland Records.” -JM
Not many people have Isaac Asimov, Octavia Butler, Ray Kurzweil, and Malcolm X on their list of favorite authors. I do, and to be honest, I thought I was the only person with that particular brand of eclecticism. It is an honor to be included on the Electric Lady's bookshelf.
I was asked to work with a global architecture firm developing parts of Apple’s new campus. This campus is built for scaling Scrum. Open, flexible team rooms with minimal furniture ring the building, and every five teams meet together in an open room for the Scrum of Scrums and MetaScrum. Every five of those groups meet in an open room for the Scrum of Scrum of Scrums and MetaMetaScrum. And in the center of that sits the cafeteria, to maximize cross-pollination and chance meetings.
Previously, I gave input into the interior build-out of the Bill and Melinda Gates Foundation Campus, and at that time was surprised by how many single-person cubicles were requested by many employees.
With 20 years of research (Stanford) and data from more than 4,000 companies (CA Associates), we now know the fastest teams do what Apple does: “swarm”, or work together on one piece of work until it is done. This requires a backlog of work chunked so the whole team can work on it together, split not by discipline or phase but by module or value. That, in turn, creates the demand for the open floor plan. But an open floor plan simply doesn’t make sense for companies that have not incented cross-functional teams and re-split their backlogs into slices of customer-visible value or swappable modules.
Walk into any of the five wealthiest companies in the world and you see massive caverns of beautiful, comfortable, flexible open space, inhabited by roving bands of self-organizing, high-performing teams with furniture on wheels, if they have furniture at all. Walk into many struggling companies and you see a tragically hilarious mix of open space and cube farms, with people fleeing the open space and struggling to find the peace and quiet to shelter from interruptions while they work through individually assigned, task-sized, skill-specific work items. Here an office floor plan adaptation of Conway’s law is perfectly visible: the structure of the work dictates the best-fit structure of the workplace.
So what to do?
Best I have seen to date: if the work is split for individual work, the more private space the better. If the work is split such that an entire cross-functional team is required to call the work “done”, the team demands an open space to do that work and even the miniature “phone rooms” and bookable conference rooms sit unused.
Specific implementations I’ve seen be successful in the fastest teams I have had the pleasure to work with or around:
Power drops from the ceiling for rapid prototyping or computing.
Two-tiered rolling desks that are load rated to >200kg. The top tier often becomes a pedestal for a bolted-down large screen to show whatever is most important to the team and stakeholders. The lower “desk” is a high seat as often as it is a laptop stand or anvil.
Rolling double-sided whiteboards often become sound deflectors as much as scrum boards.
A cluster of bean-bag chairs and a cushy easy-clean couch every so often is invaluable as the team’s energy sinks and rises during the cadence of their sprint.
Some portable white noise speakers around these areas are also helpful.
The building itself is typically the biggest and most industrial possible, where people don’t worry about hitting something on the floor with a hammer or drilling something onto a wall. The building should provide access to all the comfortable human I/O on a regular grid or distance (bathrooms, garbage/recycling/compost, kitchen or snack shelves, printers/scanners, tool cribs, supply shelves). The most significant infrastructure investment seems to be holding the entire space from floor to ceiling at near the ideal temperature, which usually means a grid of ceiling fans and HVAC vents offset from the ceiling power drops.
I hope that helps.
If you have a great work space, please share it with the community at large. I would love to see you upload photos of your awesome team space or workspace and tag our Twitter handle @scruminc or share your data and experience in the comments section below.
There is no broad agreement on what constitutes artificial intelligence (AI). Still, this much is clear: we have already entered the age of AI and business. Just ask Amazon, Google, Apple, Facebook, or really any company that is looking to replace algorithms with something much more complex, responsive, even predictive of customer and market wants and demands.
As more and more organizations look to machine learning to gain a competitive advantage, they’re also looking at the best ways for their AI labs to operate. And no, we’re not just talking about their technology stack. Scrum is the best way to create AI.
In fact, machine learning systems may be the optimal platform to achieve all the benefits of Scrum.
Video: Why Scrum is the Best Way to Create AI (Vimeo)
Scrum Thrives In Complex Systems
Scrum operates best in complex environments where small changes to the system can create surprising and unknown behaviors. Just such environments are at the heart of any machine learning project.
A small problem with the code or even a poorly tagged data set can cause the AI to learn a bad pattern, or go entirely in the wrong direction.
So it's actually extremely important, when both computation time and cost are high, to identify mistakes early because even a small problem left unaddressed may compound and invalidate months of lab time. That is a very expensive problem to have.
Machine Learning and Rapid Inspect and Adapt Cycles
Whether lab teams are using Keras, Tensorflow, CUDA, or something else, they all face a similar set of problems not addressed by their technology stack. Are they prioritizing development? Are they testing their AI and delivering updates continuously? In short, are they and their AI continuously delivering, learning, and improving?
These are the same set of issues regular software teams face. And Scrum has been empirically proven to address these issues.
In fact, a machine learning lab may be the optimal place to gain all of the boosted productivity associated with Scrum teams.
AI projects inherently generate a wide range of test-set data that the lab can observe in real time. Scrum’s focus on rapid inspect-and-adapt cycles means AI Scrum teams can watch how their AI is progressing. More importantly, because these teams inspect and adapt quickly and at a regular cadence, they find errors early, make changes, and rapidly adapt their machine learning system to ensure it is always improving.
Scrum and AI was the subject of a conversation between Joe Justice, President of Scrum@Hardware, and Alex Sutherland, Scrum Inc.’s Chief Technology Officer. They were interviewed by Scrum Inc.’s Tom Bullock.
"Naturally, leaders who have experienced or heard about agile teams are asking some compelling questions. What if a company were to launch dozens, hundreds, or even thousands of agile teams throughout the organization? Could whole segments of the business learn to operate in this manner? Would scaling up agile improve corporate performance as much as agile methods improve individual team performance?"
These questions about corporate performance are posed in the newly published Harvard Business Review article "Agile at Scale". Scrum@Scale is the answer. It is designed for strategic agility that drives stock market performance, not just faster IT groups.
Scrum Gathering 2018 Kicks Off With Exciting Announcement About Scrum@Scale
Great partnerships just make sense from the very beginning. And that is how we feel about the new joint venture between the Scrum Alliance and Scrum@Scale.
Announced at the sold-out 2018 Scrum Gathering in Minneapolis, Minnesota, the partnership will focus on training, coaching and promoting Dr. Jeff Sutherland’s framework for Agile transformation via scaling Scrum across entire organizations.
Classes for both trainers and practitioners have been underway for some time and the numbers of both continue to grow. We have upcoming classes everywhere from Boston and Bogota, to Charlotte, Copenhagen, Minneapolis, and more. You can find a list and register for classes at scrumatscale.com.
You can also learn more about Scrum@Scale by exploring our case study library and reading the Scrum@Scale Guide. More on the announced joint venture can be found in our press release.
The Scrum and Agile community lost a giant this weekend. Mike Beedle was a close friend and inspiration to many of us. Our thoughts are with his family.
"Mike was an amazing and magical guy that could take a new idea like Scrum and not only build hyper-productive teams but deliver a hyper-productive company! He is irreplaceable in the Scrum community and he will be missed greatly."
The process efficiency of executing a story on a Scrum team is the most important metric for team performance, because a team can easily double velocity in one sprint by driving process efficiency up over 50%. An Indian team asked me what KPIs they should use and I told them just use process efficiency. They drove it to 80% within three days and on the fourth day had completed all work for a two-week sprint.
Process efficiency is defined as the real work time divided by the calendar time to get to done. The required data is easily available in any Scrum tooling. What we want to see is average process efficiency for stories completed in a sprint in real time. We want to abandon hours as a reporting tool for Scrum teams as data on over 60,000 teams in a Rally survey shows that the slowest teams use hours. The fastest teams use small stories, no tasking, and no hourly estimation. How can we estimate process efficiency for these teams?
Here is a simple way to calculate process efficiency for one story. If the velocity of the team is 50 and the story is 5 points then the real work time for the story can be estimated to be 5/50 of a sprint. If the story is started at the beginning of the sprint and finished at the end of the one week sprint then it uses 5 days of a 5-day sprint or 1 sprint. If we divide 5/50 by 1 we get 10%. Make that number more than 50% and you will double velocity.
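The calculation above can be sketched in a few lines of code. This is a minimal illustration of the worked example in the text; the function name and signature are my own, not part of any Scrum tooling.

```python
def process_efficiency(story_points, velocity, calendar_days, sprint_days):
    """Estimate process efficiency for a single story.

    Real work time is approximated as the story's share of the team's
    velocity, expressed in sprints. Calendar time is the elapsed time
    from started to done, also expressed in sprints.
    """
    real_work_sprints = story_points / velocity
    calendar_sprints = calendar_days / sprint_days
    return real_work_sprints / calendar_sprints

# The worked example from the text: a 5-point story on a team with
# velocity 50, started and finished within a 5-day (one-week) sprint.
pe = process_efficiency(story_points=5, velocity=50,
                        calendar_days=5, sprint_days=5)
print(f"{pe:.0%}")  # → 10%
```

A story that spends most of the sprint blocked or waiting has a long calendar time relative to its real work time, which is exactly what drives this number down.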
Our Webside Scrum team is implementing this for our company and there are a few questions that come up:
Do we just count business work periods in the denominator? What about weekends?
We use an interrupt buffer. How do we handle interrupts?
Webside process efficiency is over 100% because we have lots of small stories that get implemented really fast. Should we use a weighted average so that larger stories count more?
As we implement this I will update the blog on our approach. Even discussing how to implement a process efficiency metric has caused my Scrum team to introduce more discipline into backlog management and execution of stories.
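One way to explore the weighted-average question raised above is to weight each story's efficiency by its point size, so larger stories count more. This is a sketch of one possible approach, not a method the team has settled on; the function and data are illustrative.

```python
def average_process_efficiency(stories, velocity, sprint_days, weighted=True):
    """Average process efficiency over a sprint's completed stories.

    stories: list of (points, calendar_days) tuples, one per story.
    When weighted=True, each story's efficiency contributes in
    proportion to its point size; otherwise all stories count equally.
    """
    total, total_weight = 0.0, 0.0
    for points, calendar_days in stories:
        efficiency = (points / velocity) / (calendar_days / sprint_days)
        weight = points if weighted else 1.0
        total += efficiency * weight
        total_weight += weight
    return total / total_weight

# Hypothetical sprint: two small fast stories and one large slow one.
stories = [(1, 1), (2, 1), (8, 5)]
print(average_process_efficiency(stories, velocity=50, sprint_days=5))
```

With weighting, a pile of tiny stories that finish same-day no longer dominates the average, which addresses the over-100% distortion described above.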
Frequently there are great debates about the use of the Fibonacci sequence for estimating user stories. Estimation is at best a flawed tool but one that is necessary for planning work.
User story estimation is based on Department of Defense research in 1948 that developed the Delphi technique. The technique was classified until the 1960s (there are dozens of papers on the topic at rand.org). Basically, the RAND researchers wanted to avoid the pressure toward group conformity that typically led to bad estimates, so they determined that estimates had to be made in secret. Initially, the estimates would be far apart because people had different perceptions of the problem, so the researchers had participants discuss the high and low estimates after estimating in secret, then estimate in secret again. In the original papers at rand.org you can read the demonstrations of convergence.
RAND researchers then studied the effect of the set of numbers estimators can choose from and found that a linear sequence gave worse estimates than an exponentially increasing set of numbers. There are some recent mathematical arguments for this, for those interested. The question then, if you want the statistically best estimate, is which exponentially increasing series to use. The Fibonacci sequence is almost, but not quite, exponential, and it has the advantage of being the growth pattern seen in many organic systems, which is why it repeats throughout nature. People are very familiar with it and use it constantly in choosing sizes of clothes; tee shirt sizes, for example, are Fibonacci. Since some developers are averse to numbers (a really strange phenomenon for those working with computers), they can use tee shirt sizes, and their estimates are easily translated to numbers.
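The near-exponential character of the scale is easy to verify: consecutive Fibonacci ratios converge on the golden ratio (about 1.618), so each step up the scale multiplies the estimate by roughly the same factor. A small sketch, with an illustrative (not canonical) tee-shirt mapping:

```python
def fibonacci_scale(n):
    """First n terms of the common Fibonacci estimation scale: 1, 2, 3, 5, 8, ..."""
    scale = [1, 2]
    while len(scale) < n:
        scale.append(scale[-1] + scale[-2])
    return scale[:n]

scale = fibonacci_scale(8)  # [1, 2, 3, 5, 8, 13, 21, 34]

# Ratios between consecutive terms converge on the golden ratio,
# which is why the sequence is nearly, but not exactly, exponential.
ratios = [b / a for a, b in zip(scale, scale[1:])]

# An illustrative size-to-number mapping for number-averse estimators;
# the specific assignment here is an assumption, not a standard.
TEE_SHIRT = {"XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8, "XXL": 13}
```

Estimates given in sizes can then be translated to numbers for velocity math without anyone ever saying a number aloud.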
Microsoft repeated this research in recent years in an award-winning IEEE paper, and as a result has abandoned hourly estimation on projects. See Laurie Williams, Gabe Brown, Adam Meltzer, and Nachiappan Nagappan, *Scrum + Engineering Practices: Experiences of Three Microsoft Teams*, IEEE Best Industry Paper Award, 2011 International Symposium on Empirical Software Engineering and Measurement.
So the Agile community has converged on the Fibonacci as the sequence to use. Unfortunately, many agile teams do not use it properly and try to get everyone to agree on one Fibonacci number, which gives you mathematically and experientially provably bad estimates through forced group conformity. This is the very thing the RAND researchers invented the Delphi technique to avoid.
Over and over again, researchers have shown that hourly estimates have very high error rates. This is true even if the user is an expert. It’s the tool that’s the problem. If you want to practice based on evidence, relative size estimates simply deliver a much more accurate estimate.
The first Scrum team finished its first Sprint 24 years ago last week. My goal then, as it continues to be, was high-performance teams. In the decades since, I have been solely focused on that. How do we enable people to accomplish more, live better lives, and fundamentally change the trajectory of their success?
Millions of people globally are now meeting every day for their Daily Scrum. It is astonishing how Scrum has reshaped the world of work across many disciplines and beyond software. The Standish Group data shows that Agile projects are three times less likely to completely fail than waterfall projects and four times more likely to succeed. This is what is driving companies to want to become “Agile.”
Currently, the existential threat to Scrum is “Bad Scrum.” So I have spent the last few years codifying the best practices for scaling Scrum, thinking about what works and what doesn’t, and putting a name on it: Scrum@Scale.
Scrum@Scale is the framework for organizations to iteratively develop the best way for Scrum to work in their context. So I’ve kept it simple. The Scrum@Scale Guide is just a few pages. Just like with Scrum, and the Scrum Guide, it’s free. You don’t need to ask my permission. Grab it, use it, and share your knowledge and your experience.
Each and every Scrum team is different, even teams within the same organization. They have their own culture, ways of working, successes, failures, their own context. But they follow a common framework. And within that framework they iteratively develop novel solutions to the problems they are trying to solve.
To spread good Scrum, and to have the impact we want to have on work, we need more people than just Scrum Inc. teaching it. We had a pilot last year with Angela Johnson of the Co-Lead Team and learned a ton. We are posting her classes on the Scrum Inc. website as well as scrumatscale.com. Last week, we graduated our first class of Scrum@Scale trainers. There are 17 of them.
You can read about the process of becoming a trainer including all of the qualifications and benefits on our site scrumatscale.com. I do want to point out one thing we do require. To become a Scrum@Scale trainer you have to have scaled Scrum and you have to submit a case study of that work. What did you learn? What was hard? What worked? And that case study has to be shared. Not just with Scrum Inc. Not with other Scrum@Scale trainers. With the world. We want the world to hear about our trainers’ victories, and for those stories to become reference points in a global conversation.
We’ve already had hundreds of people take our Scrum@Scale classes, many of them coaches and trainers. The consistent feedback we’ve gotten is that this is a codification of the best scaling practices they had been applying for years. We want to foster a community where the people and organizations transforming themselves learn from each other.
We’ll be posting all of the case studies from this first trainer class in the next few weeks at scrumatscale.com. We want to create a living library of the myriad ways to effectively scale Scrum within a common framework. A library that is there whether you are at a Fortune 100 company in the US or at a hyper growth startup in Hyderabad.
As you might know, I stepped down as CEO of Scrum Inc. in January. One of the main reasons was so I could focus my efforts on spreading Scrum and Scrum@Scale to as many people as possible whose lives will be better for it.