Tips for ScrumMasters of Distributed Teams w/ Jessica Wolfe - YouTube

This week Jessica Wolfe and Dave Prior respond to a question about how to be an effective ScrumMaster when you are not in the same location as your team. To complicate matters even more, try stepping into the ScrumMaster role in place of an SM who was colocated with part of the team.  If that doesn’t seem challenging enough—try two Product Owners working with the team.

Here is the question:

My Scrum team consists of 4 devs in San Diego and 3 devs and 2 POs remotely (I know, breaking a rule right there having 2 POs and multiple projects assigned to one team). The previous ScrumMaster for our team was located in San Diego and was able to have actual face time with that portion of the team about 2 times a week on average: Tuesdays for Daily Scrum and Wednesdays for Sprint Review, Sprint Retrospective and Sprint Planning (and, on off weeks, Daily Scrum and Backlog Refinement). While we do everything virtually (Sprint Backlog, Product Backlog, screenshare ceremonies encouraging video but not requiring it), the in-person day was something the team welcomed. It was a rallying day, and I believe it allowed the SM a better check on the morale/heartbeat of over half the team – seeing them in person, having easier one-on-one time available if needed. I have now taken over as SM for this team and another team (the other team is all remote and they never met in person regularly), so I’m wondering if you might have any suggestions to foster that same closeness, continue to keep a close pulse on the team, and provide a safe environment for openness and collaboration while serving the team from across the country.

Audio Version Only

Tips For ScrumMasters of Distributed Teams w/ Jessica Wolfe - SoundCloud
(25 minutes, 10 seconds)

Links from the Podcast

If you are curious about Jason Kelce’s speech at the Eagles parade, here you go:

Contacting Jessica

If you’d like to contact Jessica you can reach her at:

Contacting Dave

If you’d like to contact Dave you can reach him at:

Send Us Your Questions

If you have a question you’d like to submit for an upcoming podcast, please send it to dave.prior@leadingagile.com

Upcoming Classes

And if you are interested in taking one of our upcoming Certified Scrum Master or Certified Scrum Product Owner classes, you can find all the details at https://www.leadingagile.com/our-gear/training/

The post Tips for ScrumMasters of Distributed Teams w/ Jessica Wolfe appeared first on LeadingAgile.


By now, everyone has seen the “test automation pyramid” a thousand times or more. You can find countless illustrations of it online.

The base comprises a large number of automated checks of small scope. As we ascend, we check progressively larger chunks of code and we need relatively fewer cases than in the layer below.

The core idea is to get as much value from each check as we can with the least investment of time, money, and effort. Checks higher on the pyramid involve more resources and more interfaces than checks lower on the pyramid, so they are inherently more expensive. The more we can learn through less-expensive, lower-level checks, the better off we are.

When working with organizations that are only now beginning to fill the gaps in test automation, there’s always a lot of discussion about tools. There’s a tendency for people to want to avoid throwing numerous different tools into the mix, as that will make the environment harder to understand and maintain, and increase the chances that Thing One won’t work properly alongside Thing Two.

That’s sound thinking, but it can be taken beyond the point of diminishing returns. In the context of automated checking, different tools may provide the best results depending on where we are on the pyramid.

Microtests

All the illustrations of the pyramid that I’ve seen so far show “unit tests” at the base. With contemporary development practices, the base is really even smaller than that; it’s microtests.

Microtests are written and maintained by programmers. They are the basis of test-driven development (TDD). A single example will exercise just one logical path through just one very small-scale unit of code. In order to do this, the example has to be written in the same language as the code under test. (It’s possible someone has created a low-level testing tool that contradicts that statement, but as a general rule it’s the reality on the ground.)

So, if you had some C# code like this:

using System;

namespace Prime.Services
{
    public class PrimeService
    {
        public bool IsPrime(int candidate) 
        {
            // 2 is the only even prime, so handle it before the even-number check.
            if (candidate < 2)
            {
                return false;
            }
            if (candidate == 2)
            {
                return true;
            }
            if (candidate % 2 == 0)
            {
                return false;
            }
            int boundary = (int)Math.Floor(Math.Sqrt(candidate));
            for (int i = 3 ; i <= boundary ; i += 2)
            {
                if (candidate % i == 0)
                {
                    return false;
                }
            }
            return true;
        } 
    }
}

then you would want to have a microtest for each logical path through the method, IsPrime. Maybe something like this:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Prime.Services;

namespace Prime.UnitTests.Services
{
    [TestClass]
    public class PrimeService_IsPrimeShould
    {
        private readonly PrimeService _primeService;

        public PrimeService_IsPrimeShould()
        {
            _primeService = new PrimeService();
        }

        [DataTestMethod]
        [DataRow(-1)]
        [DataRow(0)]
        [DataRow(1)]
        public void ValuesLessThan_2_AreNotPrime(int value)
        {
            var result = _primeService.IsPrime(value);

            Assert.IsFalse(result, $"{value} is not prime");
        }

        [DataTestMethod]
        [DataRow(4)]
        [DataRow(80)]
        [DataRow(2000)]
        public void ValuesDivisibleBy_2_AreNotPrime(int value)
        {
            var result = _primeService.IsPrime(value);

            Assert.IsFalse(result, $"{value} is not prime");
        }

        [TestMethod]
        public void Value_81_IsNotPrime()
        {
            Assert.IsFalse(_primeService.IsPrime(81), "81 is not prime");
        }

        [TestMethod]
        public void Value_13_IsPrime()
        {
            Assert.IsTrue(_primeService.IsPrime(13), "13 is prime");
        }
    }
}

Similarly, if you had some COBOL code to determine the next invoice date, like this:

       
           2000-NEXT-INVOICE-DATE.  
           EVALUATE TRUE
               WHEN FEBRUARY 
                    PERFORM 2100-HANDLE-FEBRUARY
               WHEN 30-DAY-MONTH
                    MOVE 30 TO WS-CURRENT-DAY
               WHEN OTHER 
                    MOVE 31 TO WS-CURRENT-DAY
           END-EVALUATE              
           MOVE WS-CURRENT-DATE TO WS-NEXT-INVOICE-DATE
           .

then you would want one microtest for each logical path through the paragraph, 2000-NEXT-INVOICE-DATE. Maybe something like this:

                TESTCASE "IT DETERMINES THE NEXT INVOICE DATE IN A 30-DAY MONTH" 
           MOVE "20150405" TO WS-CURRENT-DATE
           PERFORM 2000-NEXT-INVOICE-DATE
           EXPECT WS-NEXT-INVOICE-DATE TO BE "20150430"
       
           TESTCASE "IT DETERMINES THE NEXT INVOICE DATE IN A 31-DAY MONTH" 
           MOVE "20150705" TO WS-CURRENT-DATE
           PERFORM 2000-NEXT-INVOICE-DATE
           EXPECT WS-NEXT-INVOICE-DATE TO BE "20150731"
               
           TESTCASE "IT DETERMINES THE NEXT INVOICE DATE IN FEB, NON LEAP"
           MOVE "20150205" TO WS-CURRENT-DATE
           PERFORM 2000-NEXT-INVOICE-DATE
           EXPECT WS-NEXT-INVOICE-DATE TO BE "20150228"
       
           TESTCASE "IT DETERMINES THE NEXT INVOICE DATE IN FEB, LEAP"
           MOVE "20160205" TO WS-CURRENT-DATE
           PERFORM 2000-NEXT-INVOICE-DATE
           EXPECT WS-NEXT-INVOICE-DATE TO BE "20160229"

A point to take from these examples is that when we need to isolate a small section of a unit of code, exercise just that section, and make assertions about the behavior that section exhibits in isolation from the rest of the code, then we have to write our automated checks in the same language as the code under test. A Python or Ruby program can’t directly look at the result of a C# method or a COBOL paragraph. A testing tool in Python or Ruby would have to check the validity of the method or paragraph indirectly, as part of a larger-scope check. That way lies madness.

That’s the basis of the reasoning that test cases should be written in the same language as the code under test. The question is: Does the same reasoning apply at higher levels of the test automation pyramid?

The nature of UI and API checks

It’s common for people to say things like, “We’re a Java shop, so we need to use Java-based testing tools even for high-level checks against APIs and UIs.” It makes sense to write microtests in Java for applications written in Java. But is that the right choice for writing automated checks against a web page, a mobile device, a CICS screen, a command-line interface, a SOAP API, or a RESTful API?

What about organizations in which applications are written in more than one language? What we often see in those cases is that the technical staff divide into “camps” and endlessly debate the choice of tools. “Our team writes microservices in Java, so we need to use [for instance] RestAssured for API checks,” versus “Our team writes microservices in C#, so we need to use [for instance] SpecFlow for API checks.” Now you have to maintain two code bases of automated checks.

The thing is, API checks are not the same thing as microtests. They don’t assert the results of individual methods in the application. They assert the results of service invocations, usually over HTTP. When you invoke a service over HTTP, like when you do a Google search, do you need to know what language the service was written in?

Similarly, checks against mobile apps, web pages, CICS screens, and command-line interfaces don’t know or care about the programming languages used in the applications behind the APIs. There’s no technical reason that such checks have to be written in the same language as the code under test.

In fact, forcing the issue and requiring all testing tools to be in the same language as applications can easily cause more harm than good.

Knowledge gaps (and fear of same)

The second most common reason people want to stick to a single language for all automated checks is that the technical staff may not be familiar with other languages. “Our developers know Java very well, but they don’t know Ruby. Therefore, they must use [for instance] Cucumber-JVM rather than Cucumber.”

The people who raise this concern are usually in one of two groups: (a) non-technical managers who (apparently) assume the human brain has capacity to learn exactly one programming language, and (b) programmers who are (unfortunately) close to the wrong end of the bell curve (although awesome.)

There are two fundamental logical errors behind this concern.

First, the technical staff already knows and uses multiple programming languages and related tools, such as scripting languages, markup languages, job control languages, and tools for configuring, integrating, building, running, packaging, and deploying their code. Even if the application as such is written in just one language, the technical staff has to use a range of different tools to work with the code base.

Second, competent programmers enjoy learning new languages. They entered the field in the first place because they enjoy solving problems and creating software. The programmers will be happy to have the opportunity to learn (a) testing skills and (b) new languages and tools.

Fit for purpose

Ideally, we’d like to use whatever automated checking tools make sense for each category of checking we need to perform, and for each layer of the test automation pyramid. We’ve already seen the necessity to write microtests in the same language as the code under test.

For UI and API checks, we want to choose tools that give us good functionality and flexibility for checking UIs and APIs; not necessarily for asserting the results of individual Java or C# methods and so forth. It’s a different use case.

Testing tools add value, but aren’t magic

There’s no magic involved in accessing a service over HTTP. To illustrate, let’s access a service using *nix command-line programs that are commonly installed. We don’t want to imply that this is a great way to create a large suite of executable checks that will be maintained for years. The purpose is only to show that there’s no need to write API checks in the same programming language that was used to write the system under test.

As of the date of publication, there’s a sample microservice on Heroku we can use for this demonstration. It’s called rpn-service, and it is a Reverse Polish Notation (RPN) calculator. Using curl to invoke the service and jq to see what it returns, we get the following:

curl -s 'http://rpn-service.herokuapp.com' | jq '.'

And the result:

{
  "usage": [
    {
      "path": "/calc/*",
      "description": "pass values in postfix order, like this: /calc/6/4/5/+/*. To avoid conflicts with URL strings, use \"d\" instead of \"/\\\" for division."
    }
  ]
}

So, when we invoke the service with no arguments it returns usage help. We can see the JSON response document contains a key “usage” that has an array of entries with one entry. Let’s check to ensure the “description” entry contains the text, “pass values in postfix order”:

curl -s 'http://rpn-service.herokuapp.com' | jq '.usage[0].description' | perl -wnE'say /pass values in postfix order/g'

That gets us:

pass values in postfix order

Wrapping that in a bash script, we can check to see that the regex finds a match and call that a ‘pass’.

#!/bin/bash

if [[ $(curl -s 'http://rpn-service.herokuapp.com' | jq '.usage[0].description' | perl -wnE'say /pass values in postfix order/g') ]]; then
  echo 'pass'
else
  echo 'fail'
fi

You can see that we don’t need any special testing tools to check the result of an API call, and that we don’t need to write our executable checks in the same programming language as the system under test.

Good testing tools add value beyond that, of course. They help us with organizing test cases, hiding ugly details under the covers, and running subsets of test suites based on criteria that we define, such as long- vs. short-running cases or cases pertaining to particular application features. They’re also generally easier to live with than command-line programs and shell scripts.

The point is that the idea that tests and application code must be written in the same programming language is a myth, or perhaps merely a fear.

Considerations for choosing API testing tools

Different organizations have different needs and often have unique technical environments. The following considerations are often relevant in larger corporate IT organizations:

  1. Services may be hosted in-house or in the cloud, and may reside on a range of different platforms. These often include some flavor of Linux (usually Red Hat Enterprise or Suse), some flavor of Unix (usually IBM AIX or HP-UX, and sometimes Solaris even though it is being phased out), an enterprise platform that exposes a Unix-like shell (HP NonStop/Tandem, IBM zOS), and/or some flavor of Microsoft Windows.
  2. The majority of software developers working in large corporate IT shops use Microsoft Windows development systems. Some use Apple OSX or some flavor of Linux. Those who do mobile development as well as API development most likely use Apple OSX.
  3. API test suites may be quite large, often containing thousands of cases. At various points in the development cycle, subsets of these cases must be executed, but not the entire suite. Different criteria for grouping and selecting test cases may apply.
  4. API checks must be executable in an interactive mode as well as being scriptable for inclusion in a CI build.
  5. Services are most often invoked using RESTful or SOAP-based standards. In older IT shops, there may be service-like interfaces exposed internally, based on older interfaces that may be non-standard.
  6. One function of automated test suites is to provide documentation of the system under test. In contrast with conventional documentation, executable documentation that is maintained in sync with production code can never be out of date or inaccurate.
  7. Different kinds of testing provide different kinds of value. Both example-based and property-based testing are generally advisable.

Considerations 1 and 2 suggest we want tools that are platform-agnostic. In a pure Microsoft shop, VSTS and SpecFlow and friends might be fine. Most corporate environments are technically heterogeneous. Tools built on cross-platform languages like Java (e.g., RestAssured, JBehave, Cucumber-JVM), Ruby (e.g., Cucumber), JavaScript (e.g., Cucumber-JS), or Python (e.g., Behave) may be more suitable. Tools that run as separate applications may be good choices, as well (e.g., SoapUI, FitNesse). Developers can use them on their Windows development boxes, and the same tools can run on various target platforms.

Considerations 3 and 4 suggest we want tools that provide straightforward ways to organize and re-organize test cases, and to select subsets of the test suite for execution based on any criteria we want to define.

Consideration 5 suggests we want tools that can handle SOAP, REST, and custom APIs without too much trouble.

Consideration 6 suggests we want tools that support test case definition in a form that is understandable to both technical and non-technical stakeholders.

Consideration 7 suggests we want tools that can support example-based and property-based testing.
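
To make consideration 7 a little more concrete: property-based testing checks a general rule against many generated inputs instead of a handful of hand-picked examples. Here is a minimal, hand-rolled sketch in Ruby against the RPN service used earlier in this article. It assumes, hypothetically, that /calc returns a JSON document with a result field; the article only shows the usage response, so treat the field name and response shape as placeholders rather than the service’s documented contract. A real property-based testing library would add input-generation strategies and shrinking of failing cases, but the principle is the same.

require 'net/http'
require 'json'
require 'uri'

BASE_URI = 'http://rpn-service.herokuapp.com'

# Invoke the service with two operands and "+" in postfix order and return the
# numeric result. The "result" field name is hypothetical.
def rpn_add(a, b)
  body = Net::HTTP.get(URI("#{BASE_URI}/calc/#{a}/#{b}/+"))
  JSON.parse(body)['result'].to_i
end

# Property: for any pair of non-negative integers, the service's "+" result
# equals ordinary integer addition.
failures = []
100.times do
  a = rand(0..1_000)
  b = rand(0..1_000)
  failures << [a, b] unless rpn_add(a, b) == a + b
end

puts failures.empty? ? 'pass' : "fail, counterexamples: #{failures.first(3).inspect}"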

Expressing examples with the Given-When-Then pattern

Any behavioral checks will specify preconditions, an action against the system under test, and expected postconditions. The Given-When-Then pattern is widely used to state preconditions (Given), actions (When), and postconditions (Then) for behavioral examples.

Gherkin is a popular language for expressing examples based on the Given-When-Then pattern. Here is the same example we used above for checking the usage help for the RPN calculator service:

Feature: Reverse Polish Notation calculator service

Scenario: As a person, I want to know what I can do with the RPN service.

Given I want to know how to call the RPN service
When I invoke the RPN service
Then I receive usage documentation

Here’s a snippet of code for Cucumber-JVM that runs the examples (omitting boilerplate Java code):

Given("^I want to know how to call the RPN service$", () -> {
    valuesToPush = EMPTY_STRING;
});
          
When("^I invoke the RPN service$", () -> {
    jsonResponse = get(RPN_SERVICE_BASE_URI + valuesToPush);
});
          
Then("^I receive usage documentation$", () -> {
    assertTrue(jsonResponse
        .getBody()
        .getObject()
        .getJSONArray("usage")
        .getJSONObject(0)
        .getString("description")
        .startsWith("pass values in postfix order"));
});

Most of the tools mentioned above can parse Gherkin examples, and the code to execute them is roughly similar to this Java example.
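
For comparison, here is a rough sketch of the same three steps in Cucumber’s Ruby flavor, using only the standard library for the HTTP call. The surrounding project setup (Gemfile, support files, loading rspec-expectations) is omitted, so treat it as an illustration rather than a complete step-definition file.

require 'net/http'
require 'json'
require 'uri'

RPN_SERVICE_BASE_URI = 'http://rpn-service.herokuapp.com'

Given(/^I want to know how to call the RPN service$/) do
  @values_to_push = ''
end

When(/^I invoke the RPN service$/) do
  body = Net::HTTP.get(URI(RPN_SERVICE_BASE_URI + @values_to_push))
  @json_response = JSON.parse(body)
end

Then(/^I receive usage documentation$/) do
  # Assumes rspec-expectations is loaded, which is the usual Cucumber setup.
  expect(@json_response['usage'][0]['description'])
    .to start_with('pass values in postfix order')
end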

Considerations for choosing UI testing tools

If we assume once again that we need to support a large corporate IT environment, then the considerations listed above for API checking tools also apply to UI checking tools. UI checking can be considerably more complicated than API checking. Additional considerations include:

  1. Timing issues – different elements of a web page may be served asynchronously. Network delay can be variable and unpredictable.
  2. Browser implementation differences – different browsers, different versions of the same browser, and behaviors of a browser on different platforms create complications in defining stable and reliable automated checks.
  3. Responsive design issues – with responsive design, the elements on a web page change position, shape, or size, and may disappear altogether, when the user re-sizes the window or depending on the state of the user’s interaction with the application.
  4. Accessibility features – special features to support the needs of people with different kinds of disabilities must be handled.
  5. Internationalization and localization – UI checks must handle internationalization features (under the covers) and/or localized content.
  6. Mobile devices – automated checking tools must be able to access various kinds of mobile devices and device simulators.
  7. Legacy UIs – automated checking tools must be able to access command-line applications, WinForms, Java Swing, Tandem/HP Pathway, and other legacy UIs, as well as emulating IBM 3270 and 5250 terminals (in a platform-agnostic way).

Example-based testing provides concrete behavioral examples for checking deterministic results. It can also check nondeterministic (statistical) results.

Examples are readable by all stakeholders, for purposes of system documentation, when written as Given-When-Then scenarios or in a tabular form. A tool that supports both formats would be preferable to one that supports only one format.

Best of breed?

To minimize the number of different tools in the environment, it would be preferable to find a tool that handles API and UI checking equally well. In the author’s experience, the tool that offers the widest range of options and simplest customization path is the Ruby implementation of Cucumber.

Here is a set of examples for a trivial “Hello, World!” application that illustrates the tabular form of Gherkin:

Feature: Say hello
  As a friendly person
  I want to say hello to the world
  So everyone will be happy

  Scenario Outline: Saying hello
    Given I meet someone who speaks <language>
    When I say hello
    Then the greeting is <greeting>

    Examples:
    | language | greeting              |
    | English  |  "Hello, World!"      |
    | Spanish  |  "¡Hola, mundo!"      |
    | Japanese |  "こんにちは世界"       |
    | Russian  |  "Здравствуйте, мир!" |

The Ruby code to make this executable looks like this (omitting boilerplate code and helper methods):

Given(/^I meet someone who speaks (.*?)$/) do |language|
  visit_page HelloworldPage
  @language = language_key language
end

When(/^I say hello$/) do 
  @current_page.selector = @language
end

Then(/^the greeting is "(.*?)"$/) do |greeting|
  expect(@current_page.greeting).to include greeting
end

Ruby is a cross-platform language. Cucumber itself is a lightweight framework for running examples. Ruby libraries known as gems provide add-on functionality. Gems exist to support a wide range of API standards, markup languages, terminal emulation, assertions and mocks, and ancillary features such as formatting test case output, logging, dealing with web timing issues, and taking screenshots. In addition, practical support for property-based testing is already available. Ruby is an easy language to learn and configuring Cucumber to use various gems is straightforward.

By including the appropriate gems in the test project, it’s no more difficult to run examples against iOS apps, Android apps, or IBM CICS applications than it is to exercise a Web-based “Hello, World!” app.

Cucumber with Ruby is also a logical choice for organizations that support services offering multiple methods of invocation, as the code for setting up preconditions and specifying expected postconditions can be re-used.

The post Which tools should you choose for UI and API testing? appeared first on LeadingAgile.


Many contemporary business solutions take the form of a set of microservices that interact with one another in somewhat unpredictable ways in a dynamically-managed elastic cloud (or cloud-like) environment. This is a rather different architectural pattern than those from previous years, such as model-view-controller client/server solutions or batch extract-sort-edit-update solutions. Does this architectural pattern have characteristics that should lead us to reconsider our design and development methods?

To answer that question, let’s consider what we know about the reliability of such systems, and the factors that tend to assure high reliability. Then we may be able to identify techniques or methods that help us build reliable solutions.

Simple Testing in the Small

From Yuan et al:

  • …simple testing can prevent most critical failures in distributed data-intensive systems.
  • While professionals usually test that their…code works when things are going well, they rarely test that it does the right thing when something goes wrong. Adding just a few such tests during development would prevent a lot of pain downstream.

This finding is consistent with the widely-held idea in software circles that a “testing mindset” is one of the distinguishing characteristics of a software developer as opposed to a programmer or coder. A programmer tends to “test” their code to make sure it will work under ideal conditions. They tend to assume it’s “someone else’s problem” to create those ideal conditions. A tester tends to look for unplanned behaviors or behaviors outside the design parameters of the code. A software developer does both (among other things).

The Interaction of the Parts of a System

From Russell Ackoff (paraphrased): The behavior of a system depends on the interaction of its parts, not on the characteristics of each part in isolation.

This observation is consistent with a point of view about unit testing that is held by many proficient and successful software developers: To be meaningful and useful, testing must exercise the behavior of components interacting with one another. Compared to that, testing of individual code “units” in isolation offers little value. For a good presentation of this perspective, see “Why Most Unit Testing Is Waste,” by James Coplien.

Therefore, it seems worthwhile to consider:

  • Thorough testing of non-happy-path scenarios “in the small” is a strong hedge against runtime errors “in the large” (Data science and software engineering). This derives from the finding that simple testing of the behavior of small components under error conditions helps avoid errors in complex systems built from those components.
  • Rigorous control of interactions between services is a strong hedge against apparently-random emergent behavior in complex systems such as microservice fabrics (Russell Ackoff lecture on systems thinking). This derives from the observation that the interactions of the parts of a system have significant effects on the overall behavior of that system.

TDD as a Way to Implement Yuan’s Findings

Bill Caputo has made interesting observations about the purpose or goal of “unit testing” as it applies to test-driven development (TDD). He concludes TDD is a technique for developing a specification incrementally, with tight feedback loops involving developers and stakeholders. This contrasts with the conventional view among practitioners that TDD is a software design technique, as well as with the common misunderstanding on the part of non-practitioners (and novice practitioners) that TDD is a testing technique.

Robert Martin’s Transformation Priority Premise is a result of his many years of experience in applying TDD. It proposes a sequence in which we ought to expand the functionality of code under development, guided by microtests. The approach offers a way for us to emerge a low-level design for a unit of code that performs the required functionality in the simplest way and with the least risk of overengineering. It is a way of using TDD as a design technique.

Based on Caputo’s observation, we know we can use TDD as a way to develop a clear specification for what a unit of code should do. Based on Martin’s work, we know we can use TDD as a way to guide an emergent design for a unit of code. The implication is that TDD may be a practical mechanism to ensure each small unit of code does what it is meant to do and has high reliability under various error conditions. In other words, TDD is a way to apply the finding of Yuan et al that thorough small-scale testing of each unit of code contributes significantly to reliability at scale when the unit is included in a complex solution. To produce this form of value through TDD, developers must cultivate a testing mindset to complement their programming mindset.
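
As a concrete illustration of that testing mindset, here is a small, invented Ruby/Minitest sketch in which the error-path microtests come first and the happy path last; the class and its rules are made up for this example and are not drawn from Yuan et al. or the article.

require 'minitest/autorun'

# The unit under test: a tiny parser that must do "the right thing when
# something goes wrong" -- here, reject malformed monetary amounts explicitly
# rather than silently returning garbage.
class AmountParser
  def parse(text)
    raise ArgumentError, 'amount is missing' if text.nil? || text.strip.empty?
    raise ArgumentError, "amount is not a number: #{text}" unless text.strip =~ /\A-?\d+(\.\d{1,2})?\z/
    value = Float(text)
    raise ArgumentError, 'amount must not be negative' if value.negative?
    value
  end
end

class AmountParserTest < Minitest::Test
  def setup
    @parser = AmountParser.new
  end

  # Error paths first: these are the checks most often found missing.
  def test_rejects_nil_input
    assert_raises(ArgumentError) { @parser.parse(nil) }
  end

  def test_rejects_non_numeric_input
    assert_raises(ArgumentError) { @parser.parse('ten dollars') }
  end

  def test_rejects_negative_amounts
    assert_raises(ArgumentError) { @parser.parse('-5.00') }
  end

  # Happy path last.
  def test_parses_a_well_formed_amount
    assert_in_delta 12.34, @parser.parse('12.34'), 0.001
  end
end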

Design by Contract as a Way to Support Systems Thinking in Solution Design

Design by Contract (DbC) is an approach to development that focuses on defining the interactions between components and enforcing the rules of those interactions (the contract) at runtime. The Eiffel programming language was created to support this approach directly through language constructs, and DbC is described on the website of the Eiffel language.

Each interaction between software components involves a client that needs a benefit from a supplier. In order to request the benefit from the supplier, each client has certain obligations. The obligations are called preconditions. Clients are responsible for ensuring the preconditions are true; suppliers may assume the preconditions are true. Suppliers are responsible for guaranteeing postconditions are true. Postconditions are things that must be true after the benefit has been returned to the client. Clients then must assume the postconditions are true.

The Eiffel language is explicitly designed to support DbC. However, DbC can be used with any programming language provided developers follow certain conventions.
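
Since DbC can be followed by convention in any language, here is a rough Ruby sketch of what those conventions might look like; the account example and its contract are invented for illustration and are not taken from the Eiffel documentation.

# Design by Contract conventions in plain Ruby (no Eiffel, no special library).
class InsufficientFundsError < StandardError; end

class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
  end

  # Contract for withdraw:
  #   Preconditions  (client's obligations): amount > 0, amount <= balance
  #   Postcondition  (supplier's guarantee): new balance == old balance - amount
  def withdraw(amount)
    # Preconditions: enforced at runtime, as DbC prescribes.
    raise ArgumentError, 'amount must be positive' unless amount.positive?
    raise InsufficientFundsError, 'balance too low' if amount > @balance

    old_balance = @balance
    @balance -= amount

    # Postcondition: the supplier checks its own guarantee before returning.
    raise 'postcondition violated: balance not reduced by amount' unless @balance == old_balance - amount

    @balance
  end
end

# The client may assume the postcondition holds; the supplier assumed the
# preconditions held. For example:
#   Account.new(100).withdraw(30)  #=> 70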

DbC was originally conceived as a way to assure reliable object-oriented programs. Note the similarities between the interactions between client and supplier objects in an OO program, and the interactions between clients and services in a microservices fabric. We can see that a DbC approach to microservices design is a practical way to support systems thinking in the design of microservices.

DbC is not a “testing” technique and should not be mistaken for one, because the guarantees of preconditions, postconditions, and invariants (not described here) are enforced at runtime, not at build time.

Supporting Techniques

In an entertaining and enlightening talk, Greg Wilson explores What we know about software development and why we believe it’s true. Wilson mentions several practices and techniques we’ve come to depend on to gain confidence in the correctness of our code.

It turns out the one practice that actually helps is an old one: Code review. It turns out, as well, the one metric that actually correlates with code quality is an old one: Number of lines of source code.

Notwithstanding our fondness for (and habits around) other techniques and other metrics, these two factors seem to have the greatest correlation with reliable systems.

So it may be interesting to ask:

  • What is a practical and efficient way to gain the benefits of code review?
  • What is a practical and efficient way to ensure our code modules are of small size?

First, here’s some more information about effective code reviews:

  • It doesn’t help to have multiple reviewers. The second through nth reviewers add nothing beyond what the first reviewer finds (in the studies cited).
  • A reviewer grows mentally tired after about one hour. When the amount of code to be reviewed is more than can be inspected in depth in one hour or less, the reviewer will overlook errors.

We tend to favor Lean Thinking in crafting our software delivery processes. The traditional way to implement code reviews is to assign a technical lead or team lead to review each team member’s code. This creates a bottleneck in the delivery process as developers wait for a senior team member to become available to conduct a review. These pauses or halts in the delivery pipeline amount to waste, according to Lean Thinking.

Is there a way to provide for code review without introducing a pause or halt in the delivery pipeline? Fortunately, there is a simple way: Pair programming. One of the effects of pair programming is continuous code review. It’s a low-cost technique that has been shown to reduce defects by as much as 86% with very little additional overhead as compared with solo programming (in the range of 0% to 15% additional time, according to the seminal study of pair programming by Alistair Cockburn and Laurie Williams at the University of Utah).

You might protest that every team member may not be skilled enough to review code. It may be the case that only the technical lead has that level of skill. Fortunately, it turns out that everyone on a technical team can learn the skill to review code, and it doesn’t take very long.

Is there a way to ensure code units don’t grow too large to be reviewable in less than an hour? Fortunately, pair programming offers a mechanism for this, as well. When a code unit seems to be growing beyond a “reasonable” size, a colleague is on hand to make that observation and to suggest appropriate refactoring of the code. Pair programming tends to reduce the tendency of developers to carry on with their design ideas without pausing to assess what they are doing.

Conclusion

When we want to assure high confidence in the reliability of a complex solution that comprises many microservices that are dynamically instantiated and destroyed by an elastic runtime environment, it seems reasonable to apply a short list of key techniques:

  • The cultivation of both a testing mindset and a programming mindset
  • Test-driven development of small building blocks of code
  • Design by Contract of interfaces between software components
  • Pair programming to provide continuous code review and “sanity checks” during development

The post Key Development Techniques for Building Microservice-Based Solutions appeared first on LeadingAgile.


As organizations continue to grow and evolve in their practice of Agile, what they expect from Agile transformation coaches continues to evolve as well. Ten years ago, an Agile coach was someone who had enough experience working with Agile to help others pick up the basic habits and avoid some common mistakes. Today, we need a lot more.

In this episode of SoundNotes, Mike Cottmeyer talks with Dave Prior about how the needs and expectations of coaching have changed over the last 10 years. During the interview Mike explains how LeadingAgile has evolved its understanding of the specific areas of skill and expertise the company needs to focus on when talking with transformation coaches during the interview process. Dave and Mike also discuss how coaches can address some of these gaps, and why it is so important to understand what type of coaching you are passionate about and how to leverage that.

The Evolving Role of an Agile Coach with Mike Cottmeyer - SoundCloud
(61 minutes, 5 seconds)

Also, Mike is really digging Oasis right now.

Contacting Mike

If you’d like to contact Mike you can reach him at:

Contacting Dave

If you’d like to contact Dave you can reach him at:

If you have a question you’d like to submit for an upcoming podcast, please send it to dave.prior@leadingagile.com

And if you are interested in taking one of our upcoming Certified Scrum Master or Certified Scrum Product Owner classes, you can find all the details at https://www.leadingagile.com/our-gear/training/

The post The Evolving Role of an Agile Coach with Mike Cottmeyer appeared first on LeadingAgile.


When attempting to attain an objective or key result, people often refer to key performance indicators, leading indicators, and lagging indicators. Unfortunately, a lot of people don’t know the difference between them or how to use them to their benefit. This post should provide some clarity about the differences.

What is a Key Performance Indicator (KPI)

Indicators are statistical values that measure current conditions and forecast trends and outcomes. A Key Performance Indicator is a measurable value that demonstrates how effectively a company is achieving key business objectives. Examples of business objectives range from predictability, early ROI, and innovation, to lower costs, quality, and product fit. In basic analysis, we use both kinds of indicators: lagging indicators quantify current conditions, while leading indicators provide insight into the future.

What is a “Lagging Indicator”

Lagging indicators are typically “output” oriented. They are easy to measure but hard to improve or influence.  A lagging indicator is one that usually follows an event. The importance of a lagging indicator is its ability to confirm that a pattern is occurring.  Here is an example: Many organizations have a goal to deliver some kind of scope on a release date.  Items Delivered is a clear lagging indicator that is easy to measure.  Go look at a list of items that are done and delivered.

But how do you reach your future release objective of items delivered? For delivering product predictably, there are several “leading” indicators, described below.

What is a “Leading Indicator”

These indicators are easier to influence but hard(er) to measure. I say harder because you have to put processes and tools in place in order to measure them.  When you start building product, a lot of what you will understand and build will emerge over time. You don’t know exactly what the level of effort is, until you finish. And if you are like me, given shifting priorities and dependencies, your lagging indicator is a moving target.  If you use leading indicators, you can see if you’re tracking in the right direction. You can use the leading indicators to make changes to your behavior or environment while there is still time.

Diminishing ready backlog indicates we have less clarity on upcoming deliverables. An unstable delivery team indicates we don’t have accountability to meet our commitment. Unstable velocity indicates we lack measurable progress that can forecast our completion by the release date.

Examples of Leading Indicators for Product Teams

Now let’s imagine you are managing the product development division of your company and your goal is to meet the release commitment you made to your customers. The outcome is easy to measure: You either finished the items you committed to or not. But how do you influence the outcome? What are the activities you must undertake to achieve the desired outcome?

For example: Make sure there is enough ready backlog that the delivery team does not start working on “unready” work.  Make sure your team members are available when needed and not being shared with other teams. Ensure the team is remediating bugs as they go and not waiting until the end of the release to fix them.  Look out ahead of the delivery team and mitigate any business, organizational, or technical risks that may delay them.

These can translate into the following “leading” indicators:

  • Amount of ready backlog
  • Percentage of team availability
  • Velocity variability – the deviation of velocity as a percentage of its mean (see the sketch below)
  • Amount of outstanding bugs
  • Number of known blockers
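
One common way to compute the velocity indicator referenced in the list above is as a coefficient of variation: the standard deviation of recent sprint velocities expressed as a percentage of their mean. Here is a small Ruby sketch with made-up numbers.

velocities = [21, 25, 18, 23, 22, 19]   # story points completed in recent sprints

mean = velocities.sum.to_f / velocities.size
variance = velocities.sum { |v| (v - mean)**2 } / velocities.size
std_dev = Math.sqrt(variance)

variability_pct = (std_dev / mean) * 100.0

puts format('mean velocity: %.1f, variability: %.1f%%', mean, variability_pct)
# A rising variability percentage suggests the release forecast is becoming less
# trustworthy -- a leading signal, visible before the release date (the lagging
# indicator) is missed.
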
Examples of Leading Indicators for Services Teams

Now let’s imagine your goal is to be compliant with SLAs (service level agreements) you agreed to with your customers. For instance, the maximum allowed time to resolve critical priority incidents is 48 hours. The outcome (lagging indicator) is easy to measure: You either resolve your incidents in 48 hours or not. Again, ask yourself, how do you influence the outcome? What are the activities to achieve the desired outcome?

For example: Make sure staff start working on incidents immediately when they occur. Make sure that incidents are assigned to the right people with the right skillset and that this person isn’t already overloaded with other work.

These can translate into the following “leading” indicators:

  • Percentage of incidents not worked on for 2 hours
  • Percentage of open incidents older than 1 day
  • Average backlog of incidents per agent
  • Percentage of team availability
  • Percentage of incidents reopened more than 3 times

Begin measuring these indicators on a daily basis and focus on improving them. If you do, your organization is much more likely to reach its objectives. I have often seen organizations treat the leading indicators as the goal and measure of success. This is misguided. The objectives are the lagging indicators, whatever they are. The goal for leading indicators is to improve them over time, to positively impact the lagging indicators.

The post Leading and Lagging Indicators appeared first on LeadingAgile.


Who Should be a ScrumMaster? Who Should be a Product Owner? - YouTube

We frequently get questions from clients who are transitioning to Agile when they begin working through the challenge of determining which roles are the best fit for individuals in their organization. Since many of our clients begin their Agile journey by taking on Scrum, the question usually shows up as:

Who should I make a Product Owner and who should I make a ScrumMaster?

While there is no locked down definition of who should transition into what role, there are some standard patterns that appear across organizations. A lot of it though depends on how an organization views the individuals in those positions and what level of responsibility they are given by the company.

In this short video, Dave Prior, LeadingAgile’s resident Certified Scrum Trainer, offers some advice and guidance with respect to sorting out which individuals in your organization should move into a Product Owner role and who is a better fit for ScrumMaster.

Audio only version:
Who Should be a ScrumMaster? Who Should be a Product Owner? With Dave Prior - SoundCloud
(5 minutes, 59 seconds)

Contacting Dave

If you’d like to contact Dave you can reach him at:

If you have a question you’d like to submit for an upcoming podcast, please send it to dave.prior@leadingagile.com

And if you are interested in taking one of our upcoming Certified Scrum Master or Certified Scrum Product Owner classes, you can find all the details at https://www.leadingagile.com/our-gear/training/

The post Who Should be a ScrumMaster? Who Should be a Product Owner? appeared first on LeadingAgile.


Douglas Adams’ The Hitchhiker’s Guide to the Galaxy used to be required reading for software developers. Ah, the good old days, when developers shared a sort of cultural literacy! A strange sort, maybe, but a sort nonetheless. Any group of developers could recite quotes from the story on request, or in response to work-related situations that would have engendered panic, if not for the soothing words on the cover of the Guide: “Don’t panic.” Quotes like this one:

“The way [the Nutri-Matic machine] functioned was very interesting. When the Drink button was pressed it made an instant but highly detailed examination of the subject’s taste buds, a spectroscopic analysis of the subject’s metabolism and then sent tiny experimental signals down the neural pathways to the taste centers of the subject’s brain to see what was likely to go down well. However, no one knew quite why it did this because it invariably delivered a cupful of liquid that was almost, but not quite, entirely unlike tea.”

Younger software development professionals have little awareness of the classic science fiction and comedic material that shaped the thinking of earlier generations of practitioners. Few are able to quote Monty Python dialog from memory, or succinctly communicate the salient characteristics of a problem simply by naming a Twilight Zone episode.

It will come as no surprise this can sometimes lead to problems in the application of robust software development techniques.

For example, I’ve noticed many software developers claim to be advocates of test-driven development (TDD) and insist they use TDD in their own work, and yet the way they build code is almost, but not quite, entirely unlike TDD.

What’s TDD anyway?

If you ask 10 people what TDD means, you might get 30 different answers. Most of the answers will be internally consistent and some of them will be practical. A few may even resemble TDD to an extent.

FWIW I’ll share what I think TDD means. YMMV. But first let me talk about this other thing for a minute.

Emergent design

There’s an approach to software development whereby we evolve the low-level design of the code incrementally. As it takes shape, the code itself “tells” us how it should be designed, if we would but listen.

Listening to code in this way requires a trained ear. We have to learn how to listen to code, just as we have to train our ears for music or foreign languages.

Or maybe it would be better to say we have to train our noses. People like to talk about code smells. A code smell is a structural pattern in source code that leads us to suspect the design could be improved. It doesn’t necessarily mean there’s a design issue; it’s just a questionable pattern that often points to a design issue, just as an unusual smell in your house might point to a dangerous gas leak or might be nothing more horrible than your neighbor’s cooking.

If we aren’t too sure what those patterns look like, we won’t be too sure what our code is trying to tell us.

And if we don’t know what the code is trying to tell us, we won’t know which refactorings to use to improve the design.

I often work with developers who can stare at a 2,000 line method in Java or C# and feel no anxiety whatsoever. The code is speaking, but they don’t hear the message.

It sort of reminds me of listening to music with my dog. She lies down peacefully when Mozart is on. She curiously investigates the speakers when Yasuhiro Yoshigaki is improvising on old auto parts. She flees in terror at the sound of George Crumb’s Black Angels. But in no case does she relate to the music on a deep level. Sometimes the sounds stimulate a response in her, but she doesn’t understand music. I’ll bet she wouldn’t react at all to a 2,000 line method in Java or C#.

Anyway, this thing about letting the code tell us what it wants to look like and incrementally letting the design evolve is often called emergent design.

You can Google the phrase. Go ahead. I’ll wait.

So, you probably found this Wikipedia article: Emergent Design, which shows the term is not limited to software development but has broader application.

Relevant to software development, you probably found this write-up from ThoughtWorks, opinions from advocates of emergent design like this one, and criticisms of the approach like this one. So you can get a sense of what it means, when it might be useful, and when it might not be useful. All good.

Let’s set aside the arguments for and against emergent design and take it as a “given” for purposes of this article. How would we guide the emergence of the low-level design of a software module or component? You can probably think of several ways to do this. The method that is most often used is test-driven development.

TDD as a way to guide emergent design

In this context, we’re talking about building small-scale components of a software solution. We do it by expressing concrete examples of the desired behavior of a piece of code in an executable form of very limited scope. These small examples are called microtests.

The TDD cycle – red, green, refactor – is used to drive out an implementation for the desired behavior of the code. “Red” means that the executable statement of a desired behavior does not exhibit the expected result. “Green” means that it does so. The words reflect the colors in which failing and passing examples are usually represented by unit testing tools. “Refactor” means to clean up the code, which we prefer to do incrementally rather than building up a mass of technical debt, so that the task does not become burdensome and so that the code is kept in an understandable state at all times.

Starting with very simple examples, as the suite of microtests is built up, an appropriate low-level design emerges. A noted proponent of the approach, Robert “Uncle Bob” Martin, explains that as the examples become more specific, the implementation becomes more generic. In other words, as we add more and more discrete examples, we are guided to write a more and more general implementation, capable of handling all the defined cases properly.

Many detractors of TDD point out that it’s possible for people to forget to include all the relevant examples, resulting in a fragile or incomplete implementation. This is more a problem with people forgetting things than an objective criticism of TDD or any other technique or method. After all, software doesn’t do our thinking for us. Well, not yet, anyway.

Uncle Bob has worked out a list of transformations the code undergoes as the design emerges. By favoring the simplest transformation necessary to cause a microtest to pass, we can guide the emergent design toward an appropriate form.
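
To show that progression in miniature, here is an invented leap-year example in Ruby/Minitest (it is not taken from Uncle Bob’s material); the comments trace how each successively more specific example would push the implementation toward the more general final form.

require 'minitest/autorun'

# Step 1 (red -> green): the first example, leap_year?(2016) == true, could be
# passed by simply returning true -- the simplest transformation.
# Step 2: leap_year?(2015) == false forces a conditional on divisibility by 4.
# Step 3: leap_year?(1900) == false forces the century rule.
# Step 4: leap_year?(2000) == true forces the 400-year rule.
# Each new, more specific example pushes the implementation toward the more
# general form below; incremental refactoring keeps it readable along the way.
def leap_year?(year)
  return true  if (year % 400).zero?
  return false if (year % 100).zero?
  (year % 4).zero?
end

class LeapYearTest < Minitest::Test
  def test_ordinary_leap_year
    assert leap_year?(2016)
  end

  def test_ordinary_non_leap_year
    refute leap_year?(2015)
  end

  def test_century_is_not_a_leap_year
    refute leap_year?(1900)
  end

  def test_fourth_century_is_a_leap_year
    assert leap_year?(2000)
  end
end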

Best or good enough?

I hesitate to claim we end up with the “best” design because that would involve writing the solution in every possible way and then judging the various implementations by some criteria that everyone would agree with. You can probably see a couple of challenges with this idea. The first challenge is to think of every possible implementation.

I haven’t met anyone who has gotten beyond that first challenge on the way toward discovering the “best” design for any software solution. If such people exist at all, then they will face the second challenge: Getting everyone to agree on the criteria by which to determine the “best” design. Therefore, I doubt anyone actually knows what the “best” design for any given solution might be, even if some people believe they do.

Short of absolute perfection, I’m pretty happy with having a practical way to discover an appropriate and practical design that doesn’t go too far (that is, helps me avoid overengineering) and doesn’t overlook anything important (that is, helps me think of significant edge cases). I’ve found TDD helpful in those ways. YMMV.

TDD by any other name would smell as sweet

Well, that was a pretty long-winded answer to “What’s TDD anyway?” What I was going to say is that TDD means (to me) to repeat the red-green-refactor cycle in very small increments, following a logical progression of transformations to guide the emergence of a practical and appropriate low-level design for a software component without overlooking important edge cases, and keeping the design “clean” at all times through incremental refactoring.

A key point about all this is to be sure and write a microtest that defines a piece of behavior before you implement that behavior. The D in the middle of TDD stands for “driven.” The driver of a car sits in the front seat, not the rear. (Once when I used that metaphor, a person in the room showed me a picture of a car that had been rigged for back-seat driving. Clever.) Anyway, it’s fundamental to TDD that the only reason to write a line of implementation code is to make a red example turn green. That’s kind of hard to do if you’ve already written the implementation before you write the example.

If you’re doing something different from that, it’s fine. There’s no law that says we have to develop software in any particular way. The problem is calling whatever you’re doing “TDD” when you aren’t doing that stuff I just said. It’s no more meaningful than calling a carburetor from a 1948 Ford pickup truck a “banana.”

Here’s a corollary (of the “if P then Q doesn’t imply if Q then P” variety) to the sweet-smelling assertion:

Any other thing by the name TDD doesn’t smell so good

I’ve encountered quite a few developers over the years who insisted they were strong proponents and dedicated practitioners of TDD. They begin their work by laying out a fairly detailed low-level design on paper (or pixels). Then they write a bunch of “skeleton” source modules. Finally, they use the red-green-refactor cycle to help themselves fill in the blanks in the skeleton source modules. Or they use a sort of green-green-never-refactor cycle, which they label “TDD” for some reason.

The rest of this paragraph contains material some readers may find disturbing. Feel free to skip it, or to ask your children to leave the room while you read it. Don’t say I didn’t warn you! Here goes: On many occasions, I’ve witnessed experienced TDD practitioners demonstrate or teach TDD by making hard-and-fast assumptions about how the solution is destined to emerge, and to begin by writing some “initial” production code before they write the first microtest. In December I saw a demonstration of the Bowling Kata in which the facilitator first created C# classes for Game and Frame, and an empty method for Roll. He blatantly did all that without writing a single microtest to drive it. I’m very sorry if reading that upset you, but it had to be said. Okay, it’s safe to invite your children back into the room now.

More recently, I’ve learned there’s a school of thought about development people call “reverse TDD,” or words to that effect. They’ll go through a fairly long monologue to describe it, if you ask them to. Sometimes they’ll do so even if you don’t ask them to. Or if you ask them not to. “Reverse TDD” basically means writing unit tests after writing the implementation. Not sure how the abbreviation “TDD” fits with that, but there you have it.

If Douglas Adams were still around and observed these activities, he might well call them “almost, but not quite, entirely unlike TDD.”

Why?

Why do people feel the need to label any old pseudo-random approach to software development “TDD?” Why does that term mean so much to them? As mentioned above, there are many ways to write code and none of them is “wrong” or “evil.” As long as you’re happy with your work and with yourself, it’s all good. Some methods might take more time or carry a higher risk of overdesign or error, but eventually, given sufficient time, money, and frustration, a working solution can be produced using pretty much any approach. So, why insist on calling just about everything “TDD?”

If I had to guess (and I do, as I’m not a psychologist), I’d guess one reason for this phenomenon is that TDD is a very popular buzz-term these days. Everyone likes to be associated with popular buzz-terms. Therefore, “whatever I do can be labeled [insert-popular-buzz-term-here] because I say so.”

Even if my guess isn’t wrong (and it might be), it doesn’t fully explain these almost but not quite entirely unlike TDD forms of TDD. The problem isn’t entirely due to developers’ misunderstanding of the technique or their eagerness to qualify for a popular label. Many tutorials and explanations of TDD explicitly advise developers to write production code before they write a failing microtest. This example from Microsoft is representative: Getting Started With Test-Driven Development.

But even that can be explained in terms of my guess at the psychological motivation. Companies and others who have something to “sell” are very keen to be associated with popular buzz-terms. They may or may not understand what those buzz-terms mean. They do understand that people will buy stuff that is associated with popular buzz-terms.

So, where do I think I’m headed with all this rambling nonsense? Just this: Things have names and definitions. If you change the Thing to such an extent that its basic characteristics no longer conform with its definition, then you really ought to come up with a new name. The Thing is no longer what it was. Calling it by the old name will only confuse people who actually know what the old name means.

Even if you furrow your brow and affect a professorial manner, they’ll know you aren’t using the buzz-term correctly. Trust me. I’ve tried it.

Variations of TDD

Am I claiming, dogmatically, that any deviation from these rules invalidates the label “TDD?” No. There are at least a couple of well-known variations on TDD that still satisfy the basic criteria.

Classic style TDD follows the pattern described above. It’s very helpful when we need to let a low-level design emerge for any sort of algorithmic implementation. Classic TDD is also known as the Detroit school of TDD, as it was devised by people working in the city of Detroit. It’s the Kent Beck, Uncle Bob, Ron Jeffries et al way of doing TDD (not that it’s the only way they know). The microtests tend to be agnostic about implementation details and to focus on the observable outputs of the units of code under test.
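
A rough illustration may help. The sketch below shows what a first classic-style microtest and the minimal production code to pass it might look like, written in Python (the demo mentioned earlier was in C#) and borrowing hypothetical names from the Bowling Kata; it isn’t anyone’s canonical solution.

```python
import unittest

# Red: write the microtest first and watch it fail, because Game doesn't exist yet.
class GutterGameTest(unittest.TestCase):
    def test_all_gutter_balls_score_zero(self):
        game = Game()
        for _ in range(20):
            game.roll(0)
        # Classic style: assert only on observable output, not on internals.
        self.assertEqual(game.score(), 0)

# Green: the simplest production code that makes the test pass.
class Game:
    def __init__(self):
        self._rolls = []

    def roll(self, pins):
        self._rolls.append(pins)

    def score(self):
        return sum(self._rolls)

if __name__ == "__main__":
    unittest.main()
```

The next failing microtest (scoring a spare, say) would force the design to grow a little, with refactoring between each red-green pair.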

Mockist style TDD takes the approach of defining interfaces for the key domain concepts of the solution under development and building up the solution using the red-green-refactor cycle, with mocks defined for components the code under test collaborates with. It’s very helpful when developing a solution characterized by many interactions between domain objects. Mockist style TDD is also known as the London school of TDD, as it was devised by people working in the city of London. It’s the Nat Pryce, Steve Freeman et al way of doing TDD (not that it’s the only way they know). The test cases tend to know more about the underlying implementation than when classic style TDD is used, at least to the extent of interactions with collaborating objects.
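
Again as a rough, hypothetical sketch (Python’s unittest.mock, with made-up names like OrderProcessor and a payment-gateway collaborator), a mockist-style microtest specifies the interaction with the collaborator rather than checking an end state:

```python
from unittest import TestCase, main
from unittest.mock import Mock

# Hypothetical production code: the unit under test talks to a payment gateway
# through an interface; the real gateway is never touched by the microtest.
class OrderProcessor:
    def __init__(self, gateway):
        self._gateway = gateway

    def place_order(self, order_id, amount):
        self._gateway.charge(order_id, amount)

class OrderProcessorTest(TestCase):
    def test_placing_an_order_charges_the_gateway(self):
        gateway = Mock()  # stands in for the collaborator's interface
        OrderProcessor(gateway).place_order("A-42", 99.95)
        # Mockist style: the assertion is about the interaction, not the end state.
        gateway.charge.assert_called_once_with("A-42", 99.95)

if __name__ == "__main__":
    main()
```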

Practitioners of TDD routinely switch between these styles, as well as taking short-cuts that they advise their students not to take. Rarely will you see anyone follow a single style rigidly. Beginners are advised to take baby steps to an extreme degree so that they can internalize the technique and get a gut feel for how it influences emergent design. Once beyond that initial learning phase, it’s okay to be more flexible. It’s better to follow the steps closely until you get a sense of how far you can safely flex.

Sometimes beginners make the mistake of thinking the practitioner who’s showing them TDD wants them to stay rigid forever. That isn’t the case. It’s a way of learning. Don’t skip it. If you’re a beginner with TDD, then you don’t know enough to judge when and how far it’s safe to flex.

When a variation becomes another song altogether

That mockist style thing I mentioned…it sounds a lot like writing skeleton source modules and then filling them in using the red-green-refactor cycle, doesn’t it? I wonder if that indicates there aren’t hard-and-fast boundary lines between concepts; there might be gray areas or opportunities for people to apply judgment. Hmm.

It turns out that even when we want to use emergent design for some aspects of a solution, we don’t often use it for all aspects. Portions of a solution might be straightforward examples of well-known design patterns or reference architectures. There’s limited value in pretending we know nothing about them and forcing ourselves to drive out an emergent design for every little thing.

Also, emergent design doesn’t usually mean no up-front design at all, unless we’re experimenting or learning about a domain that’s unfamiliar to us. When building code intended for production, we typically perform some amount of upfront design.

One lightweight development method that’s consistent with the Agile Manifesto is called Feature-Driven Development (FDD). A buzz-term that came out of the FDD community is JEDI, or Just Enough Design Initially. Another popular design approach is called Domain-Driven Design (DDD), devised by Eric Evans. Scott Ambler defined yet another lightweight design approach he calls Agile Modeling.

All these methods, and many more, can be used to elaborate a highly detailed domain model or a very lightweight one. It’s up to the user. Even the Unified Modeling Language (UML) and the Rational Unified Process (RUP) can be used to produce a comprehensive up-front design or a minimal one.

Ideally, we’d like to find the optimal place to meet in the middle, between just enough top-down, up-front design and the beginning of bottom-up emergent design. That optimal place will vary by context. Understanding the context is up to us, and is not a question of methods or tools.

Maybe another reason some people insist on calling everything they do “TDD” is because they assume they have to use a single approach for all their work, and they’re looking for an umbrella term. TDD isn’t what they’re looking for. It only means what it means. It doesn’t mean what it doesn’t mean.

There’s nothing wrong with combining different techniques to achieve our goals in a given context. Lately, I’ve been finding value in combining London school TDD with another technique known as Design by Contract as an approach to developing microservices for a cloud environment. For driving out the design of individual microservices, I like to use Detroit school TDD. And I’m happy to use frameworks and libraries for boilerplate stuff. Everything has its place.
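
To make that combination a little more concrete, here is an illustrative sketch, not the actual approach or code behind the work described above. Design by Contract states an operation’s preconditions and postconditions explicitly; London school tests would then drive the interactions between such services using mocks, as sketched earlier. All names below are made up, and plain assertions stand in for a contract library such as icontract.

```python
# Illustrative only: a hand-rolled contract on a hypothetical service operation.
class ReservationService:
    def __init__(self, total_seats):
        self._available = total_seats

    def reserve(self, count):
        # Preconditions: callers must ask for a positive number of seats we still have.
        assert count > 0, "precondition: count must be positive"
        assert count <= self._available, "precondition: not enough seats available"
        before = self._available
        self._available -= count
        # Postcondition: the available count decreased by exactly the amount reserved.
        assert self._available == before - count, "postcondition violated"
        return self._available
```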

Practice makes good enough

TDD is a learned skill, and we improve with learned skills through mindful practice.

But…what? Just “good enough?” Doesn’t it go, “Practice makes perfect?”

Well, if the old saying is true and “perfect is the enemy of good,” then it follows logically that “good enough” is better than “perfect.” Therefore, if you’re a perfectionist, you ought to be aiming for “good enough.” Aiming for “perfect” would make you a less-than-perfect perfectionist, by definition. (Norman, coordinate.)

Code dojos and other hands-on activities are a great way for practitioners to learn about and experiment with different approaches to software development. Most programming katas can be approached in a variety of ways and can be used to compare and contrast different approaches given different sets of assumptions.

Personally, I like to have as many tools in my toolbox as possible and to cultivate a sense of when to use each tool. There’s no substitute for hands-on practice to learn about various techniques and to gain a sense of when and how to apply them.

Parting words

This isn’t meant to be a crash course on TDD. I just wanted to say that TDD means what it means and doesn’t mean what it doesn’t mean. I guess in that regard it sort of resembles a lot of other words and phrases; at least, the ones that mean what they mean and don’t mean what they don’t mean.

You don’t have to call everything you do “TDD” just to sound up to date or whatever. TDD is (or can become) one tool in your kit. If you write implementation code before you write examples, you aren’t “improving” or “extending” or “adapting” TDD in any sense whatsoever. It just isn’t TDD, in exactly the same sense that a carburetor isn’t a banana.

The post Almost, but not quite, entirely unlike TDD appeared first on LeadingAgile.

The Problem with Agile Specifications

We’ve all been there. Sitting in an uncomfortable feature review where each point of feedback is being taken as an attack on the Delivery Team, who promptly retorts with, “The specifications weren’t clear.” After the meeting, the Program Team and Delivery Team each retreat to their respective corners and discuss what went wrong.

The Program Team feels like they’re being hung out to dry by a Delivery Team who says they don’t want to be order takers, yet immediately shirks any responsibility by just “doing what they are told.”

Their answer: “The Delivery Team needs to start owning the solution.”

On the other hand, the Delivery Team feels like they’re being hung out to dry by a Program Team who’s expecting them to read their minds and hit a constantly moving target.

Their answer: “The Program team needs to better define the specifications up front.”

What would effectively solve this impasse?

The Goal that Agile Teams Can’t See

I often use a golf metaphor when I coach teams who encounter this problem. Success in golf comes from the ability to reliably “hit the green” and set up an easy putt for par. No one expects to get a hole-in-one every time, or even most of the time. Good golfers evaluate the course and intentionally play a series of shots that set them up with the best opportunity to make a birdie, with an easy par as backup. They’re seeking the approach shot that will give them the ideal position on the green to make the birdie attempt.

We know we’ll have to deal with hazards from time to time.

Consistently hitting the green in the right location to set up a birdie opportunity is the key to high performance on the golf course. Likewise, teams that plan their feedback “shots” have the best chance of “hitting the green” and making birdie or par predictably; in other words, delivering value consistently.

The most valuable feedback you can get is working, tested code in production, in front of the customer or user the software is intended to serve. Short of that, aim for working, tested code in an environment as close to production as you can get, one that can provide similar feedback. Remember, everything we build is based on a hypothesis: “If we do X, we will solve Y problem and generate Z business impact.”

The only way we will know whether the hypothesis was correct is once that code is in production and being used by the intended audience.

The obvious question, then, is: “How do we get the most effective feedback the soonest?”

If we assume that, in Agile, working, tested code is the primary measure of progress, then I equate that to “hitting the green.” Getting the Feature into PROD as promised is like playing the round at or under par. You need to be consistently “hitting the green” and giving the team great chances to make “birdie.” There are always going to be hazards and bunker shots along the way; that’s just the reality in golf and in life. Better to have to scramble occasionally than to have to do it regularly to avoid “missing the cut.”

Which brings us back to our meeting. The real reason the meeting went south was unclear expectations, specifically in the shared understanding of acceptance criteria and the nature of the “feedback shots” we’re making.

Note the differences in each team’s focus in their responses: one is focusing on the “what” and the other on the “how.” Neither seems to be looking at the overall outcome, or “why.” The Delivery Team reacts to the Product Owner (PO) Team’s critiques as a pejorative observation of “how” (e.g., “We don’t like the way you did this”). The PO Team is seeing the “working code” for the first time, so they may very well be saying, “We don’t like the way you did this.” However, it might be: “Now that we see what was built, we realize what we asked for needs to be done a bit differently.” The Program Team feels their value in the meeting is to give feedback so that the feature adds value to the business.

Have Clear Specifications and Expectations

To get the benefits we’re looking for from a collaborative working relationship between the Program and Delivery Teams, we must make the specifications and expectations clear.

Specifically, what feedback are we trying to generate and, just as importantly, what does “done” look like at this point in time?

When we talk about specifications, we should only go as far as needed to set up the Delivery Team to “hit the green” (i.e., to deliver working, tested software that will generate the necessary feedback at this point).

When we review in-progress features, the expectation is that the Delivery Team will come up with a solution that is “on the green” and a short putt away from being in the hole. The goal of the review is to collect the feedback that represents the work of the “short putt.”

If the Delivery Team is constantly “missing the green,” you work as needed to get a better shared understanding of the outcomes and feedback desired, expressed in the form of acceptance criteria.

Once the Delivery Team is consistently hitting the green, you may consider backing off the specifications a bit to give the team a little more latitude to creatively solve the problem.

Predictability means the Program Team consistently gets the Delivery Team into position to “hit the green.” The Delivery Team should have a preponderance of short putts, not have to scramble (long putts) or hit out of the hazard to “save par.”

The post Agile Specifications appeared first on LeadingAgile.


Sprint Report Basics: What Should You Be Tracking with Jessica Wolfe - YouTube

This week Jessica and Dave take a look at the Sprint Report template Dave uses in his CSM and CSPO classes. Using the report as a starting point, Jessica and Dave talk through the most valuable data points for new Scrum Teams as well as additional variables which are an important part of tracking the information that matters as your team learns to work together and gets better at delivering value for their customer.

If you’d like an audio-only version of the podcast, you can listen to that here:

Sprint Report Basics: What Should You Be Tracking? With Jessica Wolfe - SoundCloud
(1114 secs long, 19 plays)Play in SoundCloud

Links from the Video

If you’d like to download an Excel template of the report Dave and Jessica discuss in the video, you can find it here

In the video, Jessica references Derek Huether’s GQM blog posts. You can find one of them here.

Contacting Jessica

If you’d like to contact Jessica you can reach her at:

Contacting Dave

If you’d like to contact Dave you can reach him at:

If you have a question you’d like to submit for an upcoming podcast, please send them to dave.prior@leadingagile.com

And if you are interested in taking one of our upcoming Certified Scrum Master or Certified Scrum Product Owner classes, you can find all the details at https://www.leadingagile.com/our-gear/training/

The post Sprint Report Basics: What Should You Be Tracking? With Jessica Wolfe appeared first on LeadingAgile.


Objectives and Key Results (OKR) is a popular leadership process for setting, communicating, and monitoring goals and results in organizations on a regular schedule, usually quarterly. The intent of OKRs is to link organization, team, and personal objectives in a hierarchical way to measurable results or outcomes, focusing all efforts on making measurable contributions.

Why OKRs are important

In a Harvard Business Review survey, only 55% of middle managers can name one of their company’s top five priorities. When the leaders charged with explaining strategy to their people are given five chances to list their company’s strategic objectives, nearly half fail to get one right. This isn’t anything new. Andrew Grove first wrote about Objectives and Key Results (OKR) in his book High Output Management (1983), stating, “A successful MBO system needs only to answer two questions: Where do I want to go? How will I pace myself to see if I’m getting there?”

Grove was actually referring to OKR when he referenced MBO (Management By Objectives) in his book. Knowing where you want to go provides the Objective. How you will pace yourself to see if you are getting there gives you milestones, or Key Results.

Later, Franklin Covey arrived at a similar strategy with The 4 Disciplines of Execution. (Have wildly important goals and a single measure of success)

Qualities of Objectives
  • Ambitious
  • Qualitative
  • Actionable
  • Time Bound
Qualities of Key Results
  • Measurable and Quantitative
  • Makes the objective achievable
  • Time Bound
OKR example for a start-up company raising a funding round

Company Objective 1
Finish raising capital for growth needs within 6 months [47% complete]

Key Results 1-4

  1. Email and phone 100 venture capital and seed funds (65 VCs contacted) [65%]
  2. Get at least 30 follow-up contact meetings or conference calls (15 follow-up meetings completed) [50%]
  3. Solicit at least 3 term sheets of our minimum required terms (1 term sheet solicited) [33%]
  4. Close an investment round with a minimum $10 Million pre-money ($4 million raised) [40%]

Individual Objectives for Key Result 1
The objective is to email and phone 100 venture capital and seed funds. This will be distributed across one or more individuals. Then they will own individual key results.

Individual Key Results

Note that completion of these lower-level key results rolls up to the higher-level key result; a rough sketch of the roll-up arithmetic follows the list.

  1. Bob Smith research and identify 100 VC and seed funds (100 VC and seed funds identified) [100%]
  2. John Doe email or phone 4 VC or seed funds every week (3 VC’s contacted this week) [75%]
  3. Bob Smith research and identify 50 Angel Investors (25 Angel Investors identified) [50%]
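
For readers who want to see the roll-up rule as code, here is a minimal sketch. It assumes a simple unweighted average, which happens to match the company-level figures above (65%, 50%, 33%, and 40% average to roughly 47%). The article doesn’t prescribe an aggregation rule, and real OKR tooling may weight key results differently; note, for instance, that Key Result 1’s 65% comes from a raw count (65 of 100 VCs contacted) rather than an average of the individual results.

```python
# Roll up key-result completion into an objective's completion percentage,
# assuming a simple unweighted average (one possible rule, not the only one).
def objective_progress(key_result_percentages):
    return sum(key_result_percentages) / len(key_result_percentages)

company_key_results = [65, 50, 33, 40]  # Key Results 1-4 from the example above
progress = objective_progress(company_key_results)
print(f"Company Objective 1: {progress:.0f}% complete")  # -> 47% complete

# The same figure could feed the "done at 70-80%" convention described below.
print("done" if progress >= 70 else "in progress")  # -> in progress
```
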
How to implement an OKR
  1. List 3 objectives you want to strive for at each level (company, program, individual).
  2. For each objective, list 3-4 key results to be achieved (lower-level objectives become higher-level key results).
  3. Communicate objectives and key results to everyone within the company.
  4. Identify metrics (via GQM) that will communicate progress toward completion.
  5. Update each result at a predefined cadence on a 0-100% completion scale.
  6. Consider an objective done when its results reach 70-80%.
  7. Review OKRs regularly and set new ones.
Does monitoring OKR and goal progress promote goal attainment?

A journal article* from 2016 supports the suggestion that monitoring goal progress is a crucial process between setting and attaining the goal. Also, progress monitoring had larger effects on goal attainment when the outcomes were reported or made publicly available, and when the information was physically recorded.

*Psychological Bulletin, Vol 142(2), Feb 2016, 198-229

The post An Introduction to OKR: Objectives and Key Results appeared first on LeadingAgile.
