Blog by Johanna Rothman, expert in Managing Product Development. I help you identify your problems and seize the opportunities you know exist, but can't find yet. I provide assessments, workshops and training, coaching, speaking, and facilitation as part of my packaged services. This blog will help you understand a little about what I can provide.
I've been working with some clients who are trying to find the magic way to slice and dice their project portfolios. Their organizations treat the software people (IT or Engineering) as a shared service. That means the software people “service” the rest of the organization. While the organization might use products and product lines, the software people work in projects.
The big question these people have is: How do you assess all the projects?
Let me first discuss why organizing by projects makes sense for some of my clients.
Work in Projects to Minimize, Start, and Finish
Working in projects is not necessarily bad. I've said in the past that as soon as we have more work than teams to do the work, we need to switch from one product to another. I like the idea of using the project as a container for that switch.
In the image in this post, I tried to show how that might work for one team with a primary product and “other” work, to manage product capabilities and value over time.
Imagine one product with three releases: Projects 1, 2, 3. For any number of reasons, the software people need a break before the next project for this product. (The software people need feedback. The users might need time to change how they work. Maybe the project was a series of experiments.)
The team works on Projects 1, 2, 3 for the duration of each of those projects. In between, the team works on “other” work. That work might be other projects for different products. It might be periodic work, such as quarterly reports. It might even be emergency projects, or ad hoc work. Not everything is a project.
Whatever the work is, the work offers value to the customers, and by extension, the organization.
Products vs Value Streams
I'm going to rant a little about the term “value stream.” I'm not sure why people use that word instead of products. Yes, I'm showing my ignorance, and asking for your help to educate me.
I think of the product and the value stream as the same thing:
Both products and the value stream create capabilities for the user. (At some point, there is a user, even if you sell to a distributor.)
Both products and the value stream need to release so that the user can use the work.
Any work that's done but not released is inventory. (See Knowing When You Release Value.) Software inventory is not an asset. That's because software inventory tends to age and not be useful by the time you finally release it.
I happen to find it more useful to think about what's in the product that the customer can use vs. what's not in the product yet. (If you have other useful ways to think about this notion of product vs value stream, do let me know. I'd rather be embarrassed about something I didn't understand, instead of leading my clients astray.)
In my experience, the more people talk about value streams, the more they think the stream never gets interrupted. The stream is continuous.
But these clients of mine work in a “shared service,” centralized department. They must interrupt these streams so they have a shot of doing the work everyone needs. That's why I like thinking about products, with projects as a container for this work.
Flow Work Through Product-Based Teams
I don't use the words “epic” or “theme.” I talk about feature sets. That means the Reports module is a set of features. So is the Search product. So is the Email feature set. I don't differentiate between what they are, because they are all more than one small story.
If we can agree that a team can become expert on the Reports, Search, or Email, then the team can finish the projects for their product/feature set.
By finish, I mean release something of value to the users. That's why I find the word “project” useful as a container for:
“Here's what/the value we want to release now, for these customers. We'll do another project, with other features/value for those customers, later. In the meantime, as the customers learn to use these features, we'll move to another product or other work.”
Projects, as a container for work, tend to nudge the team to finish the work. In my experience, the project-as-container also nudges the user, managers, whomever to stop adding more work. The team finishes projects. They tie a bow on that work. They can move to other work.
When you have small projects, it's easier to assess the projects. I'll address assessing the various projects in Part 2.
I've led various project kickoffs over the years. Back in the closer-to-waterfall days, we had to introduce ourselves to each other. We could then move to the project purpose and release criteria. Now that agile teams stay together, we can change the kickoff to more project-specific work.
I don't want to do too little—in the project or in the kickoff—to be useful. I live the tension between helping the team deliver something valuable as quickly as possible vs. the ability to evolve the product as quickly as possible.
Many new-to-agile-thinking teams want everything defined upfront. I help them see we can use iterative and incremental thinking on everything for the project.
Here are my assumptions:
The team has already created some working agreements in the past.
The team uses those working agreements and knows how to change them, if necessary.
If the team has not worked together in the past, I offer a couple of options:
Spend a one-hour timebox (or less) to draft working agreements and plan to revisit the agreements in a week or earlier.
See how they work and perform little kaizens every day for about 15 minutes to refine how they work.
I'm not a big fan of creating working agreements before you do any work. I do like to revisit and refine (that inspect and adapt thing!) the working agreements when the team or the work changes.
That said, here's how I've facilitated what I'll call “agile” kickoffs in the past:
Write the project vision as a team so we know the purpose and where we're headed.
Write the release criteria as a team so we know when we're done.
Verify that we know the constraints, drivers, and floats for our project, so we know how to make tradeoffs.
These are all part of the project charter. I tend to timebox the charter to 60-90 minutes. We don't need to be perfect. We need to know who the product is for, what our first deliverables might be, and when we're done. We can iterate on this, too.
You might need more than that. I've also included these activities, again as a team. I tend to timebox these activities, too. Think 30-60 minutes for each of these:
Create a risk list. I often ask this question, “What will we do when we realize we don't have enough test automation to know the state of the product after each build?”
Start architecture thinking. I ask the team either to create several low-cost spikes, prototypes, or experiments so we can learn, or to start a paper vision of the architecture.
We now know where we're headed in terms of who our users are, what success looks like, and what done means. We might be done. We might need one more thing: the stories for the first backlog.
Write Stories as a Team
Some people advocate the PO generating the initial backlog of stories. I used to advocate that. I don't any longer.
I now recommend that as soon as the team has spent an hour or so in kickoff (maybe two hours if you do everything here), they work as a team to create that first backlog of stories. How many stories? Six to ten.
If the team uses iterations, and they have a shot of finishing one story a day, ten stories is enough. If the team uses flow, they might only need six or seven stories.
When the team has to collaborate to define the working skeleton or the first valuable work, they learn a lot more about their working agreements and their possible risks than they could by merely discussing those agreements or risks in the abstract.
Charter as a Team
When agile teams charter their work, they are more likely to create their successful agile project. The more collaborative the kickoff, the more likely the team will continue that way.
I have also seen teams succeed when the PO brought their product/project vision to the team and the team workshopped that vision for their project.
Do let me know if you have used other activities for your agile project kickoffs. I find the frame of “how little” thinking helps me a lot here.
A colleague asked me about the kinds of documentation the team might need for their stories. He wanted to know what a large geographically distributed team might do. What was reasonable for the stories, the epics, and the roadmap? How little could they do for requirements documentation?
I start with the pattern of Card, Conversation, Confirmation when I think about requirements and how much documentation. My guideline: if I can't write the story large enough to read on the front of the card, the story is too large. I don't use larger cards. I break the story. That's one way to create minimum documentation: break the story into usable pieces.
As with all interesting questions, this depends on the team's context. Here are ways to think about your context for how to create minimum requirements documentation:
How much does the team collaborate?
Too many distributed teams operate as people in silos, handing work off to each other. If you don't have enough hours of overlap, it's true, you need to hand off work.
If the team hands off work, they're working in resource efficiency, not flow efficiency. And, they need a lot of description of what the requirements are because they aren't collaborating as a team.
Are the requirements phrased as tasks or problems to solve?
Let's assume that the product owner writes the requirements, and probably alone. (That's not a good idea, but I think it's this team's reality.)
Note that when the PO writes alone, the PO is not following the pattern of Card, Conversation, Confirmation. The reason that pattern exists is to create collaboration between the customer and the team. Because this team already doesn't have collaboration time, it's quite difficult to use an agile approach that depends on collaboration.
If the PO writes stories or epics as problems to solve, the PO manages some of the documentation effort. That's because the PO is at least halfway answering the “why is this important” question. If the PO adds acceptance criteria to each story and epic, and the team agrees on release criteria at the various levels, the team might be close to the minimum documentation they need.
If the PO writes functional requirements, the team has no idea why this is a valuable requirement. They can't use their problem-solving skills to refine and understand the real problem to solve.
My experience: teams who don't understand the real problem require a lot more documentation.
How available is the product owner?
If the PO is not very available to the team (distributed or not), the team needs much more documentation.
Agile approaches assume a collaborative approach, including with the PO. Why does anyone think any agile approach would work without collaboration?
Who needs to see roadmaps and for what reasons?
Too often, I see larger organizations think of roadmaps as guarantees of when they will get which feature. This is legacy thinking, where the organization thinks they won't get a chance to replan until this project is over.
No, agile projects can release as often as they can get to done. I like one-day or smaller stories. New teams often have to work to get there. But why demand a roadmap more than a few months in advance? (See my roadmaps series.)
Base Requirements in Collaboration
My colleague works in an organization where the teams are doing their best to use an agile approach, often Scrum. The managers think an agile approach is a Silver Bullet. It's not. An agile transformation is a cultural change and requires that the managers become active participants. Mere “allowance” of an agile approach is not enough.
The question my colleague didn't ask: is an agile approach right for this team? Maybe. Maybe not. A team who hands off work to each other, as in the image at the top of this post, barely collaborates.
We wrote a lot about this in From Chaos to Successful Distributed Agile Teams. Teams who are not organized for team-based collaboration—never mind customer-based collaboration—rarely benefit from an agile approach. I often recommend an incremental approach instead.
How much documentation does your team need for its requirements (for anything)? It depends on your context. These questions might help.
When Esther and I wrote Behind Closed Doors: Secrets of Great Management, we didn't really think one-on-ones were a secret. But, managers weren't conducting the one-on-ones regularly. The managers canceled for other “higher priority” meetings.
The first of my modern management books is about how managers manage themselves. Part of that management is how and when they decide to conduct one-on-ones.
At first, I wasn't sure I really needed to include that chapter. But, way too many managers still don't work to find good times for one-on-ones. And, they cancel the one-on-ones.
I'm not just talking about the one-on-one between a manager and person doing the knowledge work. I'm also talking about a more senior manager conducting one-on-ones with his or her management team members. Managers deserve one-on-ones, too.
I have two rules about one-on-ones:
Create a regular cadence for the one-on-one, at least once every two weeks. Longer than that and you're not creating that trusting relationship. You don't gain the organizational information you need.
Find a time that works for both of you. That means understanding how the other person needs to organize his or her time.
I recommend you read Paul Graham's Maker's Schedule, Manager's Schedule. If you have not yet considered how you block (or don't) your time, that essay might help you see your options.
I've been working at the intersection of the project portfolio and the product roadmaps. (You can tell because of the various posts about information persistence.) Here's what I find when I work with my clients:
They have years' worth of projects in the project portfolio.
They have years' worth of ideas in various states of description in what they're calling product roadmaps.
They have years' worth of defects in the defect tracking system.
All these possibilities create a cognitive load when people attempt to assess their work. Or, even find the ideas.
I've advocated the use of a parking lot for years. Some of my clients think they're using a parking lot. But, they continue to assess the work in the parking lot.
No, the point of a parking lot is so you don't have to look at it.
I've started to take a more aggressive approach. I suggested to a client that they delete—yes, erase from their project portfolio tool—any project over three months old. I suggested they limit all roadmaps to not more than three months. And, I suggested that they delete anything over three months old in their defect tracking system.
You should have seen their faces. Horror. Disgust. Fear.
One person grinned. I asked that person what she thought.
“Great idea. If we see those ideas pop up again, we can add them the way I add to my closet. I live in a small apartment. If I buy something new, I have to get rid of something old.”
The more stuff we have, the more difficult it is to manage all that stuff. Even if you use a parking lot.
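If your portfolio or defect tool can export items with a last-touched date, that three-month cutoff is easy to automate. Here's a minimal sketch; the item names and the "last_updated" field are hypothetical, and the archive list stands in for the backup copy, so nothing is truly deleted:

```python
from datetime import date, timedelta

CUTOFF = timedelta(days=90)  # roughly three months

def archive_stale_items(items, today=None):
    """Split a backlog into (keep, archive) by last-touched date.

    Each item is a dict with a 'last_updated' date. Anything older
    than the cutoff goes to the archive (the backup), not the trash.
    """
    today = today or date.today()
    keep, archive = [], []
    for item in items:
        if today - item["last_updated"] > CUTOFF:
            archive.append(item)
        else:
            keep.append(item)
    return keep, archive

# Hypothetical portfolio entries.
backlog = [
    {"name": "Reports revamp", "last_updated": date(2019, 9, 1)},
    {"name": "Search fix", "last_updated": date(2019, 2, 14)},
]
keep, archive = archive_stale_items(backlog, today=date(2019, 10, 1))
```

If an archived idea matters, it will pop up again, and you can add it back the way my client adds to her closet.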
A Cleaning Experiment
The other people in the meeting were a little dubious. I asked what they might consider as a small experiment. They decided on these actions:
Copy everything so they had a backup of it. Nice way of managing risk.
Delete, delete, delete.
Work for a couple of weeks in the teams, pulling from the product backlog. See what they finished.
Assess the project portfolio with just the various projects in progress at the end of a month. See if they could stop anything because they'd done enough.
At the end of the first week, several teams had added back some of what they called technical debt. The POs—by agreement—removed some features so the teams could fix the problems. Much grumbling.
However, because the teams fixed their problems, their cycle time decreased. That meant the teams were able to make faster progress on the remaining features.
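Cycle time, as used here, is simply the finish date minus the start date for each work item. A minimal sketch in Python, with invented dates rather than the client's data, shows how a team might compare the before and after:

```python
from datetime import date
from statistics import mean

def cycle_time_days(items):
    """Average cycle time in days: finish date minus start date per item."""
    return mean((item["finished"] - item["started"]).days for item in items)

# Hypothetical work items, before and after fixing the problems.
before = [
    {"started": date(2019, 6, 3), "finished": date(2019, 6, 17)},  # 14 days
    {"started": date(2019, 6, 5), "finished": date(2019, 6, 21)},  # 16 days
]
after = [
    {"started": date(2019, 7, 1), "finished": date(2019, 7, 8)},   # 7 days
    {"started": date(2019, 7, 2), "finished": date(2019, 7, 10)},  # 8 days
]
```

If the average drops, as it did for these teams, the fixes paid for themselves in faster progress on the remaining features.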
That work cycle (rediscover old problems, fix them, improve overall cycle time) occurred several more times for each team. They used the experiment loop to see if they made more progress.
Most of the teams did. One team realized they had not thought enough about the order of the work. They had to bring work they'd originally thought was six months out back to the present. However, they discovered that just one week in.
And the project portfolio? The meetings were much faster. The decisions were crisper, because the group wasn't considering work outside this relative time period.
A month in, the project portfolio people discovered an opportunity. Because they hadn't planned so far in advance, they were able to capitalize on that opportunity.
This has been my experience in my business and with several of my clients. It might not work for everyone. What have you got to lose?
If you're not ignoring the parking lots, delete everything and start over again. If you can't delete “everything,” use cycle time to consider not much more than three months of work.
Planning and replanning doesn't get the work done. Execution gets the work done. Execution without cognitive load is much better than execution with that load.
See what you can do—maybe even experiment!—to clean your backlogs so you don't have cognitive load every time you look at all the work.
I was thinking of starting with managing the organization issues. The way organizations attempt to “manage performance” and do (or do not) manage the project portfolio has such an effect on how people can manage.
But, here's the kicker.
If you can't manage yourself, should you even consider managing other people? Probably not.
That's why I started here. I don't even have a cover yet. No editing. If you don't like to read books in progress, don't buy this book.
However, if you'd like to see how to manage yourself:
Recognize and avoid micromanagement
See when to delegate and how
See the myth of the indispensable employee and what else to do
And if you like reading books in progress, yes, please do get this book.
I'm planning on releasing the books as a set of three: manage yourself (this one), manage others, and manage the organization. And, yes, I have no idea if this is the real title. I'm stressing my writing perfection rules, but I would like more feedback sooner.
I thought that was brilliant. She went on to explain that when we talk about “debt” managers think they have dials to manage the debt.
Uh oh. Wrong.
When managers think in cost accounting terms, such as mortgages, they think they can:
Predict the cost of maintaining that debt, in both money and time. (Do nothing to pay off or manage that debt.)
Predict the cost of paying off that debt, in both money and time. (Do something. Often add more people.)
But technical debt isn't like a mortgage, where you pay off the loan and end up with something that often increases in value.
If you don't pay off the technical debt and/or if you allow the debt to increase, you might end up with something with less value than you started with.
In normal circumstances, we expect houses with mortgages to at least maintain their value—if not increase in value—over the life of the debt.
If we're smart about car loans, we expect we have many more years of life in the car once we're done paying off the car loan.
Software isn't like houses or cars.
A software product is the instantiation of what the team has learned up until now. See Why Do We Estimate, Anyway? When the team learns more and releases that learning, the product changes. We hope we created more value. We don't always do so.
The more we allow our technical debt (and I prefer Doc Norton's term of cruft) to persist, the more it affects the rest of the code and the product:
We can no longer easily add functionality. Cycle time increases.
The builds might become circular. Cycle time increases.
The tests at every level become more complex or they miss edge cases. Cycle time increases.
Worse, keep the technical debt and everyone starts agitating for a rewrite to avoid developing anything else for that particular product.
Technical debt is not a cost we have not yet paid off on a product that increases in value over time. Instead, technical debt—cruft—decreases our ability to add more value to the product over time. That's the increase in cycle time problem.
The technical debt/cruft does not help us rewrite or rearchitect the product better the next time. The insufficiencies in the code and tests make it much more difficult to see what to change.
Since too many managers think about technical debt as a kind of mortgage, let me offer another frame for that debt: the balloon mortgage on a house.
In both kinds of mortgages, you pay the same amount a month. The difference is at the end of the balloon period. At the end of the balloon, you owe a lump sum payment of the entire remainder of the loan. Contrast that to a normal mortgage, where the bank expects you to continue paying the loan for the rest of the loan period.
Here's an example: Assume you take on debt of $100,000 at about 4.5%. Both loan types mean you pay about $506.69 each month. The difference: with a 7-year balloon mortgage, you owe roughly $87,000 as a lump sum at the end of year 7. With the 30-year regular mortgage, you owe roughly that same $87,000 at that point, but the bank expects you to continue paying over the remaining 23 years. Much of the interest is front-loaded.
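To make that arithmetic checkable, here's a standard amortization sketch in Python. The 4.5% annual rate is my assumption: it's the rate that produces roughly the $506.69 monthly payment on a $100,000, 30-year loan.

```python
def monthly_payment(principal, annual_rate, years):
    """Standard amortized payment: P*r / (1 - (1+r)**-n)."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def balance_after(principal, annual_rate, years, months_paid):
    """Remaining principal after a number of monthly payments."""
    r = annual_rate / 12
    pay = monthly_payment(principal, annual_rate, years)
    return principal * (1 + r) ** months_paid - pay * ((1 + r) ** months_paid - 1) / r

# Assumed: $100,000 loan, 4.5% annual rate, 30-year amortization.
payment = monthly_payment(100_000, 0.045, 30)        # about $506.69 a month
balloon = balance_after(100_000, 0.045, 30, 7 * 12)  # lump sum owed at year 7
```

By this calculation, the lump sum owed at the end of a 7-year balloon is about $87,000, and a regular 30-year mortgage carries the same remaining balance at that point; only the expectation of repayment differs.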
It's all about the expectation of repayment.
In software, we expect we can continue to add features and support the product with tests. If we can't, the cruft becomes a balloon payment where we need to start a rewrite or re-architecture with not much notice.
Technical debt is not a loan. Technical debt incurs a Cost of Delay for future work. You borrow against the future to keep cruft in the product now.
Technical debt/cruft creates a drag on everything the organization might do.
I don't run my business that way. Should you? Maybe the first step is to rename it so managers don't think that kind of debt is like a mortgage.
At the Influential Agile Leader workshop earlier this year, I led a session about scaling and how you might think about it. I introduced the topic and explained that “scaling” might not be the answer. My experience is that when people use frameworks for larger efforts, they experience these unexpected side effects:
The framework actions often require more manager-type people to control the actions of others.
The framework creates less throughput, not more.
One of the participants asked, “But what if we have to scale?”
“Have to?” I asked.
“Our managers think that's the only way to get out of our current problems.”
Okay. Notice that management defined this one solution to the problem as opposed to considering the entire system. Or, management didn't consider how they got there.
I asked the participant if the managers had used a retrospective or did any investigation into root causes before they decided on scaling.
He said he wasn't sure, but he suspected they had not.
I asked if any of the teams succeeded at using an agile approach at the team level. No, none of them had.
My First Scaling Experience
My very first job out of school was a development role on a very large telecommunications system for the Department of Defense. We implemented wireless, broadcast, the equivalent of webinars, all kinds of stuff. Back in 1977.
When I started on the program, we had a couple hundred people on the program. We were only 10-12 person-years behind. At one point, about a month or two into the job, I asked my boss, “Why don't we hire those people now, and when they're useful, they can ‘make up' the time?”
He sighed. “We tried that. That's why all you new grads are here.”
Six months later, I suggested, “Let's get 150 people off this program. All I do is undo the work they did, because they're wrong. They don't have the information they need when they need it. I would be much faster if I could choose this team (and I named about a dozen people), and we worked alone.”
My boss leaned back in his chair. “What would the other people do?”
“It doesn't matter. Pay them to not come to work. We would be faster.”
My boss didn't like that. Even when I showed him some delays, he didn't like it.
I didn't have the words for the ideas of Cost of Delay, nor of flow efficiency. I didn't know how to map our cycle time. I knew that if we didn't have to hand off, we would be faster.
Even back in 1977, we worked by feature set. Yes, in vertical slices. We integrated as we proceeded. We made progress. And, at the end of the 18 months I was there, we were now 30 person-months behind.
We had the handoff and cycle time delays with scaling. Granted, we used paper back then to manage our requirements, but we still couldn't keep the information current.
Problems I See Now with Scaling
Often, management decides they need scaling because they want “more with less.” As in less time.
Or, management delayed the start of this project so they want to “make up time.”
There's an easy way to manage those problems without using scaling: use a small team or two to work off a ranked backlog. Make sure you think about the backlog in “how little” terms, rather than how much.
“Too much time” is often a result of management dithering. Or of not seeing delays.
Then there are the actual large effort problems. We probably could not have developed that telecommunications system with 12 people. Not back then. Maybe not even now. Twelve people didn't have enough domain expertise. However, we probably could have limited the number of people to between 50 and 60, and we would have finished faster.
That said, I've worked on large programs where the teams were small and we had few controlling people and few control points.
Here are problems I've seen with scaling as an answer, agile approach or otherwise:
People on the program can organize themselves under these conditions:
The teams understand the domain.
The teams understand the customers' problems.
They can deploy/release by themselves.
They (or someone who understands the customer needs) rank the work.
I suggested these ideas to this colleague. He said, “Management will never go for that. They still want to control things.”
Agile Scaling is the Answer to Whose Problems?
When I hear about scaling frameworks, I wonder who the customers are. I'm pretty sure the frameworks are the answer to managers who want the results of an agile approach. However, these managers often don't want to change themselves and change the culture for an agile approach.
More controllers and more control points do not create an agile culture. Nor do they often create the product results the management wants.
When I hear, “We have to scale,” I wonder who the “we” really is. I wonder what they want from scaling.
Agile “scaling” is rarely the answer to the organization's problems. More often, the answer is descaling. Helping managers visualize the cycle time might be a first step.
I wrote a series about agile scaling a while ago. You might want to read it.