
In this episode of SoundNotes, Mike Cottmeyer and Dave Prior discuss how organizational Transformation continues to evolve. The conversation offers a brief preview of some of the things Mike will be covering at his Agile 2018 session: “Agile Transformation Explained” http://sched.co/EUF1

During the interview, Mike and Dave discuss how the skill sets needed to lead and shepherd those Transformations have grown to include much more than what we would have expected from an Agile coach several years ago. They also explore how we continue to refine our understanding of what happens during organizational Transformation and how we can better prepare our clients for the changes they will experience. 

Listen on SoundCloud: Transforming the Transformation w/ Mike Cottmeyer

LeadingAgile at Agile 2018

Mike Cottmeyer’s session “Agile Transformation Explained” will take place on Tuesday, August 7 at 3:45 PM.  You can learn more about it here: http://sched.co/EUF1

Paul Argiry’s session “Addressing Your CFO’s Concerns to an Enterprise-Wide Agile Transformation” will take place on Thursday, August 9 at 9:00 AM. You can learn more about it here: http://sched.co/EU93

John Tanner’s session “Agile Metrics – The GQM Approach” will be held on Wednesday, August 8 at 10:45 AM. You can learn more about it here: http://sched.co/EUCr

Contacting Mike

If you’d like to contact Mike you can reach him at:

Contacting Dave

If you’d like to contact Dave you can reach him at:

If you have a question you’d like to submit for an upcoming podcast, please send it to dave.prior@leadingagile.com

And if you’re interested in taking one of our upcoming Certified ScrumMaster or Certified Scrum Product Owner classes, you can find all the details at https://www.leadingagile.com/our-gear/training/

The post Transforming the Transformation w/ Mike Cottmeyer appeared first on LeadingAgile.


The idea of a self-organizing team has been promoted strongly since the Agile movement started to gain popularity following the publication of the Agile Manifesto in 2001. The list of principles that support the four tenets of the Manifesto includes a brief mention of the idea: “The best architectures, requirements, and designs emerge from self-organizing teams.”

It’s noteworthy that there is no definition of “best,” nor of “self-organizing teams,” in the document. In fairness to its authors, however, we should remember that the definition of manifesto is “a written statement declaring publicly the intentions, motives, or views of its issuer,” according to Merriam-Webster. So, it isn’t strictly necessary for the authors of a manifesto to define terms or express things precisely. By definition, a manifesto is a subjective expression of general views.

Clearly, one of the general views of the authors of the Agile Manifesto is that self-organizing teams tend to produce good results. Okay, then: What do we mean by self-organizing team?

What is a Self-Organizing Team?

I’ve touched on the subject in the past in a light-hearted way, in two complementary posts, The micromanager’s guide to self-organizing teams, and The self-organizing team’s guide to micromanagers. Let’s take a slightly more serious look at the question.

It’s not easy to find a clear and generally-agreed definition of self-organizing team. People tend to leap directly to topics like “how to lead a self-organizing team” or “how to form/grow/coach a self-organizing team” or “misconceptions about self-organizing teams” without actually defining the term.

In an article in Forbes, Steffan Surdek offers three key characteristics (paraphrased here):

  • has a certain level of decision-making authority
  • is working toward meeting their emerging vision
  • takes ownership of how they work and continuously evolves

Okay, that’s a start. But what is “a certain level?” And aren’t all teams working toward meeting their emerging vision, whether self-organized or not? And aren’t “takes ownership of how they work” and “continuously evolves” two different things? If they “own” how they work, then isn’t it up to them whether to “evolve?” The purpose of that article is to discuss misconceptions about self-organizing teams, and yet it begins with a rather shaky definition of the term. Let’s keep looking.

The Scrum Alliance, an organization that supports an Agile-aligned method known as Scrum, offers an explanation of self-organizing teams, written by Nitin Mittal. Paraphrasing some of the key points about self-organizing teams from that piece:

  • They pull work for themselves
  • They manage their work
  • They don’t require “command and control”
  • They communicate more with each other than with their ScrumMaster
  • They aren’t afraid to ask questions
  • They continuously enhance their own skills

A common theme about continuous improvement seems to be emerging. We’ll soon see that the original idea of self-organizing teams doesn’t include that point. But ultimately we’ll see how the idea ties back to self-organizing teams in the context of Agile software development.

Here we have the notion that a self-organizing team manages its own work — deciding which work items to pull rather than awaiting assignment by a manager or team lead; deciding who on the team will work on which items; communicating with each other more than with a team lead; openly asking questions to clarify their understanding. This gives us a clue about what Surdek may have intended when he wrote, “a certain level of decision-making authority.” Is this the “level?”

Mike Cohn, a respected thought leader in the Agile community, attempts to clarify the meaning of “self-organizing team” in his article, The Role of Leaders on a Self-Organizing Team. He writes, “Self-organizing teams are not free from management control. Management chooses for them what product to build or often chooses who will work on their project, but they are nonetheless self-organizing. Neither are they free from influence.”

Further clarifying this point, Cohn quotes the 1986 article by Takeuchi and Nonaka, “The New New Product Development Game,” in which they wrote, “subtle control is also consistent with the self-organizing character of project teams.” He also quotes from The Biology of Business by Philip Anderson: “Self-organization does not mean that workers instead of managers engineer an organization design. It does not mean letting people do whatever they want to do. It means that management commits to guiding the evolution of behaviors that emerge from the interaction of independent agents instead of specifying in advance what effective behavior is.”

Now we’re getting somewhere. Clearly, the authors of the Agile Manifesto didn’t make up the term out of thin air. It was already a “known thing,” at least in some circles. Once released into the wild, however, the definition of self-organizing team seems to have drifted.

What is a Self-Organizing Team Not?

The pre-Agile definition doesn’t appear to have included anything about continuous improvement. It seems to focus on allowing teams to figure out the best way to get their work done. Nothing more. There’s nothing about self-management, for instance. Team members still have managers. These teams don’t handle their own human resources issues or their own financials. They don’t decide what the organization should build; they just build it.

There are those in the Agile community who insist the role of the manager is obsolete; in their view, an organization built around self-organizing teams has no need of the management function at all. They call not only for self-organizing teams, but for self-managing teams.

I’m unconvinced.

There’s a commonplace among agilists these days that a development team needs to understand, and even participate in defining, the business value of the software they are building or supporting. I will suggest this is not necessarily true. In a very small organization, it’s possible for development team members to be aware of, and even to participate in, the elaboration of “customer value.”

In larger organizations, however, development teams are so far removed from the strategic planning of the enterprise that their understanding of “value” can never be anything more than whatever their key stakeholders tell them it is. In addition, very few people who work at the hands-on level possess the skills necessary to carry out business analysis and planning at the enterprise level. Even if they were invited to the table, they would have relatively little to contribute.

I can imagine some software engineers taking umbrage at that comment. After all, they are so smart that they can figure out anything, right? And there are plenty of Agile coaches who like to say companies are “dysfunctional,” despite their 10 billion dollar a year operations. But history has not been kind to the concept of workers controlling the means of production.

The twentieth century saw a number of attempts at real self-management. These ranged from union-organized cooperatives in Germany, to cooperatives in the Pacific Northwest timber industry in the US, to the steel industry in the eastern US, to the only genuine attempts (maybe) at Communist-style worker ownership in the former Yugoslavia. The only one that has succeeded in growing and sustaining itself is the Mondragon cooperative in the Basque region of Spain, which has some unique characteristics compared to the other examples.

The bottom line appears to be that those who understand how to get the work done in the best way may not be equally well-suited to running the company. Actually, there’s an excellent chance they’re not.

Thus, the original idea that a self-organizing team can take ownership of how the work is done makes sense, while the idea that they can also help to define business value does not make sense in all circumstances. That said, it’s been my experience that when we understand the value of our work to customers, we tend to feel more dedication and motivation, and greater satisfaction upon achieving a goal, than when we are merely carrying out assigned tasks with no context. It is helpful for a self-organizing team to be included in discussions of customer value, even if the individual team members have little to contribute to such discussions.

Natural Boundaries of Self-Organization

Something fundamental is missing from all these various definitions and explanations: What does it mean to self-organize?

If we accept that self-organization means a team determines how best to carry out the work, then what are the boundaries of the decisions the team can make?

Will the team employ pair programming or mob programming? Yes, that’s in scope. Will the team use test-driven development? Yes, that’s in scope. Will the team work five eight-hour days, four ten-hour days, or use some other arrangement for work hours? Yes, that’s in scope. Will the team perform exploratory testing together on a regular basis? Yes, that’s in scope. Will the team members teach one another their various skill sets while completing work items? Yes, that’s in scope. Can team members work remotely? Yes, that’s in scope. Will the remediation of technical debt be considered part of the team’s definition of done for work items? Yes, that’s in scope. Will the team apply the concept of continuous improvement? Yes, that’s in scope.

Will the team members receive a bonus? No, that’s not in scope. Will the team determine which strategic initiatives are likely to yield a positive return on investment for the corporation? No, that’s not in scope. Will the team define the organization’s policies with regard to discrimination or harassment? No, that’s not in scope. Will the team decide which market segments the company ought to focus on? No, that’s not in scope. Will the team decide that they are eligible for additional vacation time beyond the organizational standard? No, that’s not in scope. Will the team decide that this particular team member should be fired from the company? No, that’s not in scope.

Will the team decide whether this particular job candidate is hired and placed on the team? That may be in scope. Will the team decide that this particular team member should be removed from the team? That may be in scope. Will the team define their own dress code? That may be in scope. Will the team decide which continuous integration server to use? That may be in scope. Will the team have full control of test environments and deployment activities? That may be in scope.

Anything that pertains directly to how the work is done is in scope for a self-organizing team to decide, unless there are organizational considerations that trump their decision. For instance, an autonomous team in a small organization can and should choose its own toolchain for continuous delivery, while a team in a larger enterprise may be constrained to use the toolchain that has been selected for the organization, for consistency in operations.

Some additional matters may be in scope for a self-organizing team, depending on context. For instance, if the team feels the organization’s formal anti-harassment guidelines are too loose, they could craft a team agreement that defines tighter guidelines for their own use; but they could never do the reverse of that, and loosen organizational anti-harassment guidelines for themselves.

The same rule of thumb applies to technical practices, as well. An organization may publish a guideline to the effect, “We encourage teams to use test-driven development when feasible.” A self-organizing team has the right to include a statement in their team agreement to the effect, “Our team will use test-driven development as a baseline technical practice.” However, they don’t have the right to say, “Our team categorically refuses to use test-driven development.” A self-organizing team can tighten, but not loosen, organizational guidelines for themselves.

Self-Organization, Self-Management, and Self-Assembly

I once coached a team that wanted to push the boundaries of self-organization. Bit by bit, they took on increasing responsibility for their own work. At one point, most of the team members felt that a certain colleague was not pulling his weight. They asked their administrative manager if they could take ownership of that situation, too. She said yes.

She and I discussed the matter and agreed that the team was not going to enjoy what was about to happen. We also agreed it would be important and useful for them to experience it.

The team devoted a retrospective to the question of that colleague. Collaborating with him, they crafted an improvement plan for him to follow, and he agreed to it. They gave him six weeks to get aligned and up to speed. Four weeks later, he gave his two weeks’ notice and left the company.

The team decided they never again wanted to deal with a human resources issue. They had discovered their own limits. They embraced self-organization whole-heartedly and effectively. When it came to self-management, they learned that they did not wish to “own” it.

For me there’s a lesson here: You don’t really know what your limits are until you exceed them. Then you know you want to step back a bit. So, go ahead and exceed your limits. It will be uncomfortable, but it won’t be the end of the world, and you will have learned something valuable that cannot be learned in any other way.

At the other end of the scale, there are teams that never rise to the level of self-organization in a meaningful sense. They sit together, they work together, they pull work, and all that good stuff. But they never truly self-organize.

The Swiss consultant Joseph Pelrine considers a spectrum of team organization that spans self-assembly, self-organization, and self-management. The team I mentioned above dipped their collective toe into self-management waters and didn’t like it. Pelrine notes that many teams never even reach the level of self-organization.

In 2013, I described a conference session that Pelrine facilitated in 2009, in which we explored this concept. Unfortunately, I don’t know of a more direct source of information about the concept. In any case, the idea is that people can self-assemble on the fly when necessary, but such assembly only becomes self-organization when it results in some lasting change in behavior.

Pelrine used the situation of riding an elevator to illustrate the idea. We made a square on the floor out of painter’s tape and pretended it was an elevator. As the elevator stopped on various floors, participants in the session got on and off.

In the debrief, Pelrine called attention to the fact that people automatically shifted position to allow others to get on the elevator, or to make space for riders to get off when the elevator reached their floor. He called this self-assembly.

People made adjustments to allow a task to get done, but they did not form working relationships or modify their general practices as a result. For those reasons, self-assembly was insufficient for self-organization.

So it is, too, with teams. You can put people in a work space together and call them a team, but if team formation doesn’t occur and the way people work doesn’t change, then the result isn’t self-organization. This ties back to the Agile notion that a self-organizing team “evolves” or “continuously improves.” Lacking this sort of progressive change, what we’re looking at is merely self-assembly.

Conclusion

Some general observations about team self-organization emerge from this exploration:

  • Putting people together in an elevator or a work space doesn’t make them a team
  • Telling people they are a self-organizing team doesn’t make them one
  • Self-organization is not the same as self-management
  • Teams can own the ways in which they carry out their work, but they don’t define what the work must be
  • If working together doesn’t result in relationships and behavioral change, the work group is not a team
  • Teams can define stricter rules for themselves than the organization requires, but they can’t loosen the organization’s rules as applied to themselves

The post Limits of a Self-Organizing Team appeared first on LeadingAgile.


Way back in the Dark Ages, it was common for programmers to receive phone calls in the wee hours of the morning to fix production issues in their applications. I remember joining a team in the late 1970s that supported an application used worldwide by a large corporation. When I started, the application suffered an average of 140 production incidents per month. The team resolved to drive that number down. After a few months, we had reduced the average number of incidents to 11 per month.

Experiences like that one led programmers of the era to adopt a philosophy we called defensive programming. The motivation was selfish. We were not driven by a fanatical dedication to quality. We were not driven by the 21st century buzzword, “passion.” We were driven by the simple biological imperative to get a good night’s sleep.

The Age of the Mainframe

Defensive programming consists of learning and using guidelines for software design and coding that tend to minimize the frequency and severity of problems in production. Back in the day, our quest for a good night’s sleep led us to build applications that were reliable and resilient; applications that could gracefully handle most unexpected inputs and unplanned partial outages of systems on which they were dependent, most of the time, without waking us up in the middle of the night.

We built batch applications that could be restarted at any step without losing data. We built “online” applications that could, within limits, self-recover from common problems, and that could (at least) properly record what happened and notify us before the situation went too far. We built systems that could judge (well, follow rules about) which errors were truly critical and which ones could be logged and dealt with in the morning.
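
To make that idea concrete, here’s a minimal sketch of the “log it or wake someone up” decision those systems encoded. It’s written in Ruby purely for illustration; the class name, the error categories, and the pager hook are all invented rather than taken from any real system.

require 'logger'

# Hypothetical nightly batch step: transient problems are logged and dealt
# with in the morning; anything unrecognized is escalated immediately.
class NightlyFeedStep
  RECOVERABLE = [Errno::ETIMEDOUT, Errno::ECONNRESET].freeze

  def initialize(logger: Logger.new($stdout), pager: ->(msg) { puts "PAGE ON-CALL: #{msg}" })
    @logger = logger
    @pager  = pager
  end

  def process(records)
    records.each do |record|
      begin
        post(record)
      rescue *RECOVERABLE => e
        # Transient problem: record it, keep going, fix it during business hours.
        @logger.warn("record #{record[:id]} deferred: #{e.class} #{e.message}")
      rescue StandardError => e
        # Unknown failure: capture enough context to diagnose, then escalate.
        @logger.error("record #{record[:id]} failed: #{e.class} #{e.message}")
        @pager.call("nightly feed halted at record #{record[:id]}")
        raise
      end
    end
  end

  private

  def post(record)
    # Stand-in for the real work (an HTTP call, a database write, etc.).
    raise Errno::ETIMEDOUT if record[:flaky]
  end
end

NightlyFeedStep.new.process([{ id: 1 }, { id: 2, flaky: true }, { id: 3 }])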

We just wanted to sleep through the night for a change. What we got were high availability, reliability, recoverability, and happier customers. And sleep, too.

The Age of the Webapp

The day came when the software community more-or-less forgot about defensive programming. Many systems have been built (I’m thinking largely of webapps, but not exclusively) that break easily and just die or hang without offering much information to help us figure out what went wrong and how to prevent it happening again.

Recovery came down to restoring data from a backup (if any) and restarting the application. Typically, this was (and still is) done without any root cause analysis. The goal is to get the thing up and running again quickly. Some people just expect the systems they support to fail periodically, and the “fix” is nothing more than a restart.

This may be a consequence of rushing the work. There’s a desire for rapid delivery. Unfortunately, rather than applying Systems Thinking, Lean Thinking, Agile thinking, sound software engineering principles, mindful testing by well-qualified testers, automated failover, and automated system monitoring and recovery tools, there’s been a tendency to use the same methods as in the Old Days, only pushing people to “go faster.” This generally results in more bugs delivered in less time.

Another cause might be the relatively high dependency on frameworks to generate applications quickly. It’s possible to produce robust code using a framework, but it requires a certain amount of hand-rolled code in addition to the generated boilerplate code. If we “go with the flow” and accept whatever the framework generates for us, we may be inviting problems in production.

It’s become a sort of game to take photos of obvious system failures and post them online. The ones I’ve seen in real life have included gas pumps, ATMs, and large advertising displays in shopping areas displaying Microsoft Windows error screens, and an airport kiosk displaying an HTTP 500 error complete with Java stack trace. The stack trace in particular is probably very helpful for vacationing families and traveling business people.

Some examples are rather beautiful, in their own way. Here’s just one example I found online:

The fact these errors are displayed to the general public demonstrates a level of customer focus on the part of the development teams. Not a good level, but a level.

It’s amusing until one reflects on the potential consequences when a large proportion of software we depend on for everyday life is of this quality.

I guess that worked out okay for a generation of developers. There don’t seem to be many “war stories” of late nights, long weekends, or midnight wake-up calls from this era except in connection with manual deployment procedures. Manual deployment procedures are a pretty dependable recipe for lost sleep. But other than release nights, folks are apparently getting a good night’s sleep.

Sadly, today’s programmers seem to accept all-night “release parties” as the norm for software work, just as we did back in the Bad Old 1980s. When they take the view that this is normal and to be expected, it probably won’t occur to them to do anything about it. The curious and illogical pattern of forgetting or explicitly discarding lessons learned by previous generations of developers afflicts the current generation just as it did our own (well, I mean my own).

The Age of the Cloud

Then the “cloud” era came along. Now applications comprise numerous small pieces (services or microservices) that run inside (or outside) containers that live on dynamically-created virtual machines in an “elastic” environment that responds to changes in usage demand. Cloud infrastructures make it seem as if an application is long-lived, while in reality servers are being destroyed and re-created under the covers all the time.

Developers of microservices have no control over how developers of client applications will orchestrate the services. Developers of client applications have no control over how the microservices will be deployed and operated.

Cloud computing is evolving faster than previous paradigms did. Cloud, fog, and mist computing combined with the Internet of Things (IoT) and artificial intelligence (AI) add up to even more very small services interacting with one another dynamically in ways their developers cannot predict.

When two IoT devices equipped with AI come within radio range of one another, and they figure out how to interact based on algorithms they devised on their own, how can we mere humans know in advance all the possible edge cases and failure scenarios? What if it isn’t two, but twenty thousand IoT devices?

The opportunities for late-night wake-up calls are legion. Resiliency has become a Thing.

Challenges for Resiliency

When a system comprises numerous small components, it becomes easier to gain confidence that each component operates correctly in isolation, including exception scenarios. The trade-off is that it becomes harder to gain confidence that all the interrelated, interdependent pieces work correctly together as a whole.

With that in mind, I think we ought to be as rigorous as we can with the things we can control. There will be plenty of issues to deal with that we can’t control. Prevention isn’t enough, but it’s a strong start. Here are some things for managers, programmers, and testers to keep in mind.

Prevention – Programmers

The most fundamental way to ensure high quality when developing new code or modifying existing code is to learn and use generally-accepted good software design principles. Some of these principles are general, and apply to any language and any programming model. These are things like separation of concerns and consistent naming conventions. Building on those, people have identified good practices within each programming “paradigm,” where paradigm means things like Object-Oriented, Functional, Procedural, and Declarative Programming.

Object-Oriented Programming has guidelines like the single responsibility principle and the open-closed principle; Functional Programming has guidelines like referential transparency and avoiding hidden side-effects; Procedural Programming has guidelines like putting higher-level logic first in the source file, followed by detailed routines, and having a single exit from a subroutine or called block. Obviously those are not comprehensive descriptions. I’m just trying to set some context.
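
To make one of those guidelines concrete, here’s a tiny, made-up Ruby example of “avoid hidden side-effects.” The first method quietly mutates its argument; the second is referentially transparent, returning a new value and giving the same answer every time it’s called with the same inputs.

# Hidden side-effect: the caller's data is silently changed.
def apply_discount!(line_items, rate)
  line_items.each { |item| item[:price] *= (1 - rate) }
end

# Referentially transparent: inputs are left alone; the output depends only on them.
def discounted_total(line_items, rate)
  line_items.sum { |item| item[:price] } * (1 - rate)
end

items = [{ price: 10.0 }, { price: 20.0 }]
puts discounted_total(items, 0.1)  # => 27.0
puts discounted_total(items, 0.1)  # => 27.0, and items is unchanged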

One of the most basic and general guidelines is the principle of least surprise (or least astonishment). Doing things in the most “standard” or “vanilla” way, given the programming paradigm and language(s) in use, will result in fewer “surprises” in production than will a “clever” design.

Paying attention to consistency across the board helps, too. Handle exceptions in a consistent way, provide a consistent user experience, use consistent domain language throughout the solution, etc. When you modify existing code, follow the patterns already existent in the code rather than introducing a radically different approach just because it happens to be your personal preference. (This doesn’t apply to routine, incremental refactoring to keep the code “clean,” of course; don’t add to that long conditional block just because it’s already there).

We can’t absolutely prevent any and all production issues when our solutions comprise massively distributed components that know nothing at all about one another until runtime. That isn’t an invitation to take no precautions at all, however. Some things programmers can do to minimize the risk of runtime issues in a world of cloud-based and IoT solutions:

  • Be disciplined about avoiding short-cuts to “meet a date;” buggy software isn’t really “done” anyway, no matter how quickly it’s released.
  • Emphasize system qualities like replaceability and resiliency over traditional qualities like maintainability. Treat your code as a temporary, tactical asset. It will be replaced, probably in less time than the typical production lifespan of traditional systems in the past.
  • Make effective use of frameworks and libraries by understanding any inherent limitations or “holes,” especially with respect to security, interoperability, and performance.
  • Learn and use appropriate development methods and guidelines to minimize the chance of errors down the line, such as Specification By Example to ensure consistent understanding of needs among stakeholders, Detroit School Test-Driven Development for algorithmic modules, London School Test-Driven Development for component interaction, Design By Contract to help components self-check when in operation, and others.
  • Make full use of the type systems of the languages in which you write the solution. Don’t define domain entities as simple types like integer or string.
  • Self-organize technical teams on a peer model rather than the Chief Programmer model, to help ensure common understanding and knowledge across the team and to increase the team’s bus number.
  • Standardize and automate as many routine, repetitive tasks as you can (running test suites, packaging code into deployable units, deploying code, configuring software, provisioning environments, etc.).
  • Don’t be satisfied with writing a few happy-path unit checks. Employ robust testing methods such as mutation testing and property-based testing as appropriate (a minimal, hand-rolled property check follows this list). You might be surprised at how many “holes” these tools can find in your unit test suite. It is perfectly okay if you write ten times as much “test” code as “production” code. There is a risk of writing some redundant test cases, but it would be far worse to overlook something important. Err on the side of overdoing it.
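
Here is the hand-rolled property check mentioned above. It’s only a sketch with an invented function under test; a real project would more likely use a property-based testing library, but the shape of the idea is the same: generate lots of random inputs and assert that invariants hold for every one of them.

# Function under test (made up for this example).
def normalize_whitespace(text)
  text.split.join(' ')
end

def random_text
  chars = [' ', "\t", "\n"] + ('a'..'e').to_a
  Array.new(rand(0..40)) { chars.sample }.join
end

# Properties: no runs of whitespace in the output, and normalizing twice
# gives the same result as normalizing once (idempotence).
1_000.times do
  input  = random_text
  output = normalize_whitespace(input)
  raise "run of whitespace for #{input.inspect}" if output =~ /\s{2}/
  raise "not idempotent for #{input.inspect}"    unless normalize_whitespace(output) == output
end
puts 'properties held for 1,000 random inputs'
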
Prevention – Testers

A study I’ve cited before on this forum, from the University of Toronto, by Ding Yuan and several colleagues, Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems, suggests there is good value in making sure each small component that will become part of a distributed system is very thoroughly tested in isolation.

The good news is we have tools and techniques that enable this. The bad news is relatively few development organizations use them.

For those who specialize in testing software, there may be another bit of bad news (or good, depending on how you respond to it). The complexity of software has reached the point that purely manual methods can’t provide adequate confidence in systems. There are just too many variables. Even relatively routine functional checking of well-understood system behaviors requires so many test cases that it’s not reasonable to cover everything through manual test scripts.

As the general complexity of software solutions continues to increase, testers have to devote more and more time to exploration and discovery of unknown behaviors. They just don’t have time to perform routine validation manually.

That had become true before the advent of cloud computing. Now, and in the future world of high connectedness, that reality is all the more unavoidable.

That means testers no longer have a choice: They must acquire automation skills. I understand there is some angst about that, and I suppose you already know my thoughts on the subject. There’s no sense in sugar-coating it; you’ve got to learn something new, whether it’s automation skills or a different occupation altogether.

Demand for people who know how to change horse-shoes will never again be what it was in the 19th century. Similarly, 21st-century software can’t be supported properly with 20th-century testing methods.

When “test automation” started to become a Thing, it was mainly about writing automated functional checks. Since then, development teams have put automation to work to drive functional requirements (using practices called Behavior-Driven Development, Specification by Example, or Acceptance Test Driven Development) and to support “real” testing activities.

My prediction is that the proportion of “test automation” we do for functional checking will decline, while the proportion we do in support of “manual” testing will increase, because of the rising complexity and dynamic operation of software solutions. The growing area of artificial intelligence also has implications for software testing, both for testing AI solutions and for using AI tools to support testing.

The level of technical expertise required to use this kind of automation far exceeds that necessary for automating conventional functional checks. People who specialize in testing software will need technical skills more-or-less on par with competent software engineers. There’s just no getting around it.

Some things testers can do to minimize risk in a dynamic cloud and IoT world:

  • Understand that rigorous and thorough testing of individual software components has been shown to correlate strongly with high software quality. Don’t assume you “can’t” find problems at the unit level that might manifest when components are integrated. Push as much testing and checking as low in the stack as you can, as a way to minimize the cost and overhead of tests higher in the stack.
  • Automate any sort of predictable, routine checking at all levels. Your time is too valuable to waste performing this sort of validation manually.
  • Engage the rest of your team in exploratory testing sessions on a regular basis. Teach them how to do this effectively. Share your knowledge of software testing fundamentals.
  • Learn automation skills and keep up with developments in technology. When you learn something new, consider not only how to test the new technology, but also how the new technology might be used to support testing of other software.
Beyond Prevention

The preventive measures suggested above are things we can do before our solutions are deployed. But such measures will not guarantee a massively-distributed, highly dynamic solution always behaves according to its design intent. We need to “bake in” certain design characteristics to reduce the chances of misbehavior in production.

We want to be as clear as possible about defining the behaviors of APIs at all levels of abstraction, from method calls to RESTful HTTP calls. Our unit test suite gives us a degree of protection at build time, and using Design By Contract gives us a degree of protection at run time. Nothing, however, gives us a guarantee of correctness.

Contracts vs. Promises

In a 2002 piece entitled The Law of Leaky Abstractions, Joel Spolsky observed that abstractions over any implementation can leak details of that implementation. He gives the example of TCP, a reliable protocol built on top of IP, an unreliable protocol. There are times when the unreliability of IP “leaks” through to the TCP layer, unavoidably. Most, if not all, abstractions have this characteristic.

The observation applies to a wide range of abstractions, including APIs. An API can be thought of as an abstraction over an implementation. By intent, we expose aspects of the code’s behavior that we want clients to use. With Design By Contract, we can enforce those details at run time.
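
As a rough illustration of what run-time enforcement can look like, here’s a bare-bones Design By Contract sketch in plain Ruby. The service and its rules are made up for this example, and a real codebase might use a contracts library rather than hand-rolled guard clauses, but the principle is the same: preconditions state what the caller promises to send, and postconditions state what the API promises to return.

class ContractViolation < StandardError; end

# Hypothetical invoice API with explicit pre- and postconditions.
class InvoiceService
  def add_line_item(invoice, description, quantity, unit_price)
    # Preconditions: what the caller promises us.
    require_that(!description.to_s.strip.empty?, 'description must be present')
    require_that(quantity.is_a?(Integer) && quantity > 0, 'quantity must be a positive integer')
    require_that(unit_price >= 0, 'unit price must not be negative')

    before_count = invoice[:line_items].size
    invoice[:line_items] << { description: description, quantity: quantity, unit_price: unit_price }

    # Postcondition: what we promise the caller in return.
    require_that(invoice[:line_items].size == before_count + 1, 'exactly one line item added')
    invoice
  end

  private

  def require_that(condition, message)
    raise ContractViolation, message unless condition
  end
end

invoice = { line_items: [] }
InvoiceService.new.add_line_item(invoice, 'Widget', 3, 9.99)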

But that level of enforcement does not prevent clients from creating dependencies on the implementation hiding behind the API. Titus Winters observed, “With a sufficient number of users of an API, it does not matter what you promised in the contract, all observable behaviors of your interface will be depended on by somebody.” He called this Hyrum’s Law, after fellow programmer Hyrum Wright.

If this is already true in the “ordinary” world, just imagine how much more true it will be once AI-equipped IoT devices and other intelligent software is the norm. An AI could hammer on an API in millions of different ways in a few seconds, with no preconceptions about which interactions “make sense.” It would discover ways to interact with the API that the designers never imagined, creating dependencies on side-effects of the underlying implementation.

One of the most basic ways to cope with the complexity of massively-distributed services is the idea of Consumer-Driven Contracts. Based on the idea of consumers that request services of providers, the model allows for contracts to be either Provider-Driven or Consumer-Driven. These contracts have the following characteristics:

Contract         Open/Closed  Complete    Number    Authority          Bounded
Provider         Closed       Complete    Single    Authoritative      Space/Time
Consumer         Open         Incomplete  Multiple  Non-authoritative  Space/Time
Consumer-Driven  Closed       Complete    Single    Non-authoritative  Consumers
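
One way to picture a consumer-driven contract is as an executable expectation owned by the consumer. The sketch below is a simplified, hand-rolled illustration with invented field names; real teams would typically use a contract-testing tool. The point is that the consumer states exactly which fields and types it relies on, and deliberately says nothing about the rest of the provider’s response.

# The consumer's expectations of the provider, expressed as data.
CONSUMER_CONTRACT = {
  'id'     => Integer,
  'status' => String,
  'total'  => Numeric
}.freeze

def satisfies_contract?(response, contract)
  contract.all? { |field, type| response.key?(field) && response[field].is_a?(type) }
end

# Anything beyond the contracted fields is invisible to this check, so the
# provider remains free to change it.
provider_response = { 'id' => 42, 'status' => 'open', 'total' => 99.5, 'internal_flag' => true }
puts satisfies_contract?(provider_response, CONSUMER_CONTRACT)  # => true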

Clear definitions of expectations on both sides might sound like a good way to ensure distributed systems behave according to design, but there are limitations. Even the original write-up about this approach, cited above, recognizes this:

No matter how lightweight the mechanisms for communicating and representing expectations and obligations, providers and consumers must know about, accept and adopt an agreed upon set of channels and conventions. This inevitably adds a layer of complexity and protocol dependence to an already complex service infrastructure.

Provider contracts, consumer contracts, Design By Contract…these all depend on some defined manner of interaction actually occurring at run time. Just as with contracts between humans, the contract itself does not guarantee performance. Contracts are more like “promises” than guarantees. If the client adheres to the contract, the service promises to deliver a result.

Software design patterns based on the idea of “promises” have emerged. These can help us build slightly more reliable distributed solutions than the idea of “contracts” as such when requests and responses occur asynchronously, and when the components involved are not part of a closed system but are dynamically discovered at run time. The idea is summarized nicely in an answer to a question on Quora provided by Evan Priestley in 2012:

A promise (or “future”) is an object which represents the output of some computation which hasn’t necessarily happened yet. When you try to use the value in a promise, the program waits for the computation to complete.
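
A toy version of that idea, sketched in plain Ruby on top of a thread, might look like the following. A production system would use a proper concurrency library; this only shows the shape: start the work now, and block only at the moment the value is actually needed.

# A minimal "future": the computation starts immediately on another thread.
class Future
  def initialize(&computation)
    @thread = Thread.new(&computation)
  end

  def value
    @thread.value   # waits here only if the computation hasn't finished yet
  end
end

price_lookup = Future.new do
  sleep 1           # stands in for a slow remote call
  { sku: 'A-100', price: 24.95 }
end

# ...other work proceeds while the lookup runs...

puts price_lookup.value[:price]   # blocks until the result is available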

It would be challenging enough to test for this sort of behavior with conventional solutions; consider what it could be like in a world of autonomous, mobile, AI-driven IoT devices and AI-enabled client applications. Imagine a service fabric with a registry of services that perform particular types of functions. Assume the services follow the rule of thumb that they should be stateless.

A human-defined algorithm in a piece of client code might use the registry to discover an available service to perform operations on, say, an invoice. Having discovered the service, the client would call APIs to add line items to an invoice, apply sales tax to the amounts, and apply customer loyalty discounts to the total price.

An AI-based client would very likely operate in a different way. It could explore the behavior and reliability of all the available services pertaining to invoices. It might determine that Service A is highly dependable for the function of adding line items to an invoice; Service B is dependable for calculating sales tax; and Service C is dependable for applying customer loyalty rules. It decides to invoke specific APIs exposed by different services for each function, based on empirical data regarding “performance to promise” in the near-term past. Which services are most likely to fulfill their promises?..


This episode of SoundNotes focuses on Definition of Ready. A few weeks ago, Dave Nicolette created a post in Field Notes that grew out of an ongoing debate in the Agile community about whether having a Definition of Ready helps or harms our ability to deliver value for the customer.

The episode begins with Dave Nicolette explaining Definition of Ready—within the context of the LeadingAgile model. After that, he and Dave Prior discuss and debate the pros and cons of DoR from their respective backgrounds (Developer vs. PM).

Listen on SoundCloud: Why You Need a Definition of Ready w/ Dave Nicolette

Links From The Podcast

Contacting Dave Nicolette

Contacting Dave Prior

If you’d like to contact Dave you can reach him at:

If you have a question you’d like to submit for an upcoming podcast, please send it to dave.prior@leadingagile.com

And if you’re interested in taking one of our upcoming Certified ScrumMaster or Certified Scrum Product Owner classes, you can find all the details at https://www.leadingagile.com/our-gear/training/

The post Why You Need a Definition of Ready w/ Dave Nicolette appeared first on LeadingAgile.



In a piece posted here some time ago, Works on My Machine, I offered some suggestions on how to ensure our local development environment doesn’t have leftover configuration from a previous project that might affect our current project. One suggestion was to use an online coding environment rather than adding more and more configuration settings to a longstanding local system.

I listed two such environments with the caveat that this was a relatively new option, and we should “[e]xpect to see these environments improve, and expect to see more players in this market.” One that I listed at the time was C9.io, which is very good. Since the piece was published, C9.io has been purchased by Amazon and is now commercial. If that’s okay for you, then great. If you’re looking for something free of charge and non-intrusive, I’ve come across a couple of alternatives:

Supports several languages, Github integration, Travis, and similar integration

Supports several Node.js frameworks for webapp development

Keep in mind this is a rapidly-changing segment, so keep your eyes open for even better alternatives as well as for changes in the “free” status of these services.

Happy coding!

The post Online Coding Environments appeared first on LeadingAgile.


Often when working with clients I am asked about smaller batch sizes…mostly in the context of “How is that even possible? Look at what’s on our plate already.”

Admittedly that’s a tall order. Everyone is working as hard as they can; unfortunately, nothing is fundamentally moving through the system. Typically, the result at the end of the quarter is that most, if not all, of the myriad commitments end up significantly delayed or missed outright.

There are plenty of apparently valid “reasons” for this being the case. To put it politely, the delivery teams are in a constant reactive mode rather than a responsive one, reacting to whoever happens to be complaining the loudest or can escalate to the most senior executive.

It’s frustrating for the teams and certainly annoying for the stakeholders who are depending on the work to be completed. Consequently there is more pressure to double down and start work earlier: “Better start now or they will NEVER get it done.” The result is too much Work in Process (WIP).

Everyone is “crowding the turnstile” at once!

Output v Outcomes

A couple weeks ago I heard one of my fellow LeadingAgile consultants say something that really resonated with me. He talked about the difference between “output” and “outcome”. We should be more concerned about the latter than the former.

Certainly we have to produce “output” to generate “outcomes”. But while we’re at it why not do both?

“Learn to produce output consistently that produces the best chance of achieving your desired outcomes.”

So how do you become “more consistent” when the engine is “clogged”? How do you pragmatically compare value when it’s hard to articulate?

First, the “System of Delivery” (i.e., the ability of the organization to turn the “ask” into working, tested software) needs to be predictable; that means it needs to get unclogged. Second, since more often than not the “ask” is greater than the System of Delivery’s capacity, we could choose to limit ourselves to working on the highest-value things.

Obvious? Yes. Easy to do? Well…….

Here’s the reality. It IS actually pretty easy to do if you’re willing to take the time and engage in this vital, yet often ignored conversation.

The Challenge

Here is the challenge: the team has multiple requesters vying for a constrained capacity (I’ll limit this discussion to that situation; most organizations don’t have unlimited money, time, or resources). Every “ask” is “valuable” from the requester’s standpoint. Oftentimes the “ask” is articulated in the form of a WHAT (and maybe a HOW as well). Buried in the request is a hypothetical OUTCOME that will result from the requested “ask” (output).

The conversation that needs to be had should address these two fundamental questions:

  1. How valuable are these requests in relation to each other?
  2. Which of these are most likely to drive the desired valuable OUTCOME?

Without going into the details, let me walk you through how a recent client conversation resulted in “unclogging the engine.” The “how to do this” will be the subject of a future blog post from my friend.

The “ask” to the team, when teased apart, turned out to be 100+ “valuable” items. These were sorted into three buckets: Valuable, High Value, and Highest Value, with a “no more than” constraint placed on the two highest-value buckets.

The result looked something like this:

Setting aside the lower two buckets, the Highest Value items were arranged left to right, with the highest of the high on the left and the lowest on the right. When finished, the result looked like this:

When asked to qualitatively “size” the value, a distribution emerged that spanned from 100 to 1, looking something like this:

When the team’s capacity was considered and applied starting on the left, they found that they ran out of capacity quickly. In fact, they could realistically commit to only a portion of the Highest Value bucket! They couldn’t even touch the other two buckets!

Here is the “Ah Hah!” moment ….

Least Valuable

The “1” on THIS list is by definition individually more valuable than ANY individual item in the other two buckets! If we drew the value graphically like a Pareto chart it might look like this:

The VAST amount of the value is contained in a very small portion of the “ask”!
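
A back-of-the-envelope calculation with made-up numbers (not the client’s actual data) shows how skewed such a distribution can be:

# 100 requests scored from 100 down to 1, highest first.
values = (1..100).map { |rank| 100.0 / rank }

total = values.sum
puts format('top 10 items carry %.0f%% of the total value', 100 * values.first(10).sum / total)   # ~56%
puts format('bottom 50 items carry %.0f%% of the total value', 100 * values.last(50).sum / total)  # ~13%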

Why then are we wasting time and effort even discussing these items let alone putting effort into them at this moment?

This is a very powerful technique to unclog the engine! All the (relatively) lower value items are a distraction and taking needed focus away from delivering the higher value items.

This process resulted in …

  1. A renegotiated commitment in light of the shared understanding of (outcome-focused) value relative to the current capacity (the buckets and order being vetted by the Product Owner and key stakeholders).
  2. The means to evaluate the impact if additional capacity or resources were made available
  3. A prioritized backlog of items to pull through the System of Delivery
  4. A technique for slotting new requests against pending items in the backlog as they emerge

The post It’s All Relative appeared first on LeadingAgile.


This is the final installment in a series of posts that walks you through test-driving a microservice and setting up a working continuous delivery pipeline to deploy it to the cloud automatically. Hail to you who have survived Parts 1 through 3!

In Part 1 we set the stage for our project and you received a homework assignment to sign up for several online services.

In Part 2 we configured our version control, dependency management, and run management facilities and started to get familiar with our development environment.

In Part 3, we test-drove the initial thin vertical slice of our application.

Now it’s time to complete the rest of the delivery pipeline: Continuous integration, static code analysis, and automated deployment. We did the application development work in Part 3. From here on out, it’s all configuration work. You became a programmer in Part 3. Now you’ll become an infrastructure engineer. (Well, sort of. Don’t get a big head.)

Step 8: Configure Continuous Integration

There are two things to do to get continuous integration working with your microservice. First, tell Travis that you want to connect your Github repository. You do this by flipping a switch on a Travis web page that looks like this:

Next, add a configuration file to your Github project. In the project root directory, create a file named .travis.yml (yes, the name starts with a dot). Put the following data in the file:

notifications:
  email:
    recipients:
    - youremail@something.com
    on_success: change
    on_failure: change
language: ruby
rvm:
- 2.3.1
branches:
  only: master
install:
- bundle install
script:
- export PLAY_URL=http://0.0.0.0:4567
- rackup -P rackup.pid -p 4567 -o 0.0.0.0 &
- rake integration
- kill `cat rackup.pid`

Be sure to put your actual email address under recipients, rather than “youremail@something.com”.

==> Commit! <==

Once this is set up, each push to Github will initiate a build and test run on Travis. Here’s an excerpt of typical output from this, displayed on the Travis website.

Step 9: Configure Static Code Analysis

Now let’s add support for running static code analysis and test coverage analysis with CodeClimate. Sign into CodeClimate using the free (Open Source) account you created. Follow the steps to connect your microservice Github project to CodeClimate (see Adding Your First Repo). It will automatically analyze your project and take you to a report page.

That’s nice, but what you really want is to include the static code analysis and code coverage in your seamless, automated CI/CD pipeline. (This bit is considered part of CI rather than CD.) CodeClimate gives you some information about connecting their service with the continuous integration service of your choice (see Adding Travis-CI Test Coverage).

The key is the .travis.yml file you created to tie Travis-CI into your pipeline. Once you have connected your Github repo with CodeClimate, you can add some specifications to the .travis.yml file to cause Travis-CI to pull in CodeClimate static code analysis and reporting.

You’ll need the token that CodeClimate generates for you when you connect your Github repo. Documentation is here: Finding Your Test Coverage Token.

env:
  global:
    - CC_TEST_REPORTER_ID=your-test-coverage-token-goes-here
notifications:
  email:
    recipients:
    - davenicolette@gmail.com
    on_success: change
    on_failure: change
language: ruby
rvm:
- 2.3.1
before_script:
  - curl -L https://codeclimate.com/downloads/test-reporter/test-reporter-latest-linux-amd64 > ./cc-test-reporter
  - chmod +x ./cc-test-reporter
  - ./cc-test-reporter before-build
branches:
  only: master
install:
- bundle install
script:
- export PLAY_URL=http://0.0.0.0:4567
- rackup -P rackup.pid -p 4567 -o 0.0.0.0 &
- rake integration
- kill `cat rackup.pid`
after_script:
  - ./cc-test-reporter after-build --exit-code $TRAVIS_TEST_RESULT

==> Commit! <==

The Travis-CI build will show the CodeClimate actions, but the actual report will not appear in the Travis-CI log output. To see the report, visit the CodeClimate website.

Step 10: Automated Deployment to Production

We’re almost home. There’s just one more piece to the pipeline: Automated deployment to production when the CI build and tests are successful. You set up a free account on Heroku, and that will be the production environment for your microservice.

First, define your microservice to Heroku.

Download the Heroku command line app to your Code Anywhere container (see documentation):

wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh

Log into Heroku from the command line:

heroku login 

When prompted, enter the userid and password you created when you signed up for Heroku.

Install the Travis-CI command line gem (see documentation):

gem install travis -v 1.8.8 --no-rdoc --no-ri

Use this command to get an API key, encrypt it, and add it to your .travis.yml file (documentation):

travis encrypt $(heroku auth:token) --add deploy.api_key

Automatic deployment is triggered by the continuous integration server, and we set it up by adding some specifications to the .travis.yml file. The details are documented on the Travis-CI site at Deployment to Heroku.

When you’ve added the deploy section to .travis.yml, it will look similar to this:

deploy:
  provider: heroku
  app: playservice
  api_key:
    secure: your secure API key, generated by the travis encrypt command

Create a file in the root directory of your project named Procfile containing this line:

web: rackup -p $PORT

That will be the command Heroku uses to start your web server. Don’t specify any ports or other arguments as you do in your development environment on Code Anywhere. Heroku will use the setting of the PORT environment variable, which it controls.

==> Commit! <==

And thus the moment of truth arrives. If all these various bits and pieces have been defined correctly, the push you just did to Github will trigger the entire pipeline, and you’ll be able to run your microservice in production (Heroku) by accessing a URL such as playservice.herokuapp.com/v1.0.0. You can watch the build on Travis-CI, and when it completes with success you can try your URL on Heroku.

Conclusion

If you’re coming from a non-technical role, and/or your technical skills are rusty, then I hope you’ve gained an appreciation for what it takes to do test-driven development and to set up a CI/CD pipeline for continuous delivery. This exercise has been a relatively simple example of those things, but still a pretty realistic one.

If you’re coming from a programming background, then I hope you’ve picked up some practical information about the “ops” side of devops. Similarly, if you’re coming from an infrastructure background, then I hope the section of the exercise that involved test-driven development was informative.

Part of the point of this exercise is to underscore the fact that CI and CD have moved quickly from somewhat arcane and “advanced” practices to commonplace, baseline expectations for software development and delivery. In addition, the cloud-based services and tooling available to support these things have matured very rapidly indeed. They are currently usable enough that a person need not be a deep expert in technical matters to build a simple application and set up automated testing, static code analysis, and deployment.

If you’re involved with software development and delivery in a technical role, the day is fast approaching when you won’t be able to get away with lacking these skills. If you’re still writing code without tests, think about it. If you’re still doing functional testing or “checking” manually, think about it. If you’re still configuring and provisioning servers manually, think about it.

The post Build a CI/CD Pipeline in the Cloud: Part Four appeared first on LeadingAgile.


If you’re working at an organization that is applying Agile practices across the enterprise, chances are that somewhere early on in the Transformation a tool was selected to help the teams manage their work and, hopefully, to provide management with some kind of visibility into the work being done.

One of the unfortunate truths about the tools is that while they are capable of doing a lot of things, most companies do not invest the time in setting the tool up to work for them. This often leads to teams struggling to adjust their practices to fit the tool, and to management not getting the visibility it needs into how the work is going.

It doesn’t have to be this way. 

The tools can add a lot of value, but you have to get them set up right for that to happen.

In this episode of SoundNotes, Jessica Wolfe shares a story about how she was able to help one organization adjust the tool to provide the business with the information the CEO needed to understand how work was progressing across the portfolio and how it was tying back to the company’s strategic objectives. 

As they dig into the story, Jessica and Dave explore what kinds of information the tools are able to provide, and how that can help your organization understand what is happening at a level of detail that includes strategic value, financial metrics, risk, and much more.

She also manages to completely overturn Dave’s long-held mistrust of anything other than Post-its and Sharpies.

Configuring Agile Tools to Work for You w/ Jessica Wolfe - SoundCloud
(2430 secs long, 7 plays)Play in SoundCloud

Contacting Jessica

If you’d like to contact Jessica you can reach her at:

Contacting Dave

If you’d like to contact Dave you can reach him at:

Send Us Your Questions

If you have a question you’d like to submit for an upcoming podcast, please send them to dave.prior@leadingagile.com

Upcoming Classes

And if you are interested in taking one of our upcoming Certified ScrumMaster or Certified Scrum Product Owner classes, you can find all the details at https://www.leadingagile.com/our-gear/training/

The post Configuring Agile Tools To Work For You w/ Jessica Wolfe appeared first on LeadingAgile.


This is Part 3 of a four-part series of posts that walks you through setting up a working continuous delivery pipeline in the cloud.

In Part 1 we set the stage for our project and you received a homework assignment to sign up for several online services.

In Part 2 we configured our version control, dependency management, and run management facilities and started to get familiar with our development environment.

In this installment, we’ll test-drive the first thin vertical slice of application functionality.

In Part 4, we’ll build out the rest of the CI/CD pipeline.

Review the Story

We were just about to start test-driving our application. Let’s review our first Story before we proceed:

Story:

In order to validate architectural assumptions 
We want to see a simple transaction flow all the way through the system

Acceptance criteria: 

When a user submits a "Hello" request 
Then the system responds with "Hello"

We glossed over what it means for the user to submit a request, and we haven’t discussed any details about how we want the service to respond to that request. Let’s clarify.

We’ve been talking about “saying hello,” but that’s a fairly general statement. What do we actually want the request to look like, and what do we want the microservice to return?

The core principle of engineering, found on signs in labs the world over, is: “Don’t do anything stupid on purpose.” This principle applies equally to software development.

We’ll do plenty of stupid things by accident, so there’s no sense in piling on. In the context of validating our architectural assumptions for an Internet-based microservice, that means balancing two goals:

  • We want to honor the ancient wisdom of programmers, YAGNI (You Ain’t Gonna Need It). When we try to anticipate future requirements and code for them in advance to “save time,” we invariably discover, when the future finally arrives, that the requirements are different from our prediction, and we end up doubling our work ripping out what we wrote before. So we want to build only what’s really needed right now. We used to build for the future innocently, with the best of intentions, all the time. Now we know better, so if we do it again we’ll be violating the core principle of engineering.
  • We want to avoid going overboard with YAGNI. There are certain things we know will be needed in a viable cloud-based continuous delivery pipeline. There are certain things we know a microservice must support. We can’t predict what specific requests our customers will want us to support in the future, but we can predict some of these common things.

So, we know the microservice API has to be versioned. Why not define “saying hello” to mean that the microservice knows how to respond to an inquiry such as, “Dear microservice, do you understand version 1.0.0?” And it should package the response as a JSON document.

With that in mind, let’s say we want the RESTful URI to look like this:

http://[server][domain][:port]/v1.0.0/

…and we want the response document to look like this:

{
  "service": "playservice",
  "version": "1.0.0",
  "status": "supported"
}

We’ll need to handle the case when the request is improperly formatted, too. For purposes of validating our architectural assumptions, I’m going to say the “happy path” case will be sufficient. You may disagree. This is a judgment call that pertains to the balance between doing something stupid and taking YAGNI too far. There isn’t a single “right” answer.

Step 6: Write the Hello Functionality

By convention, Ruby applications are usually structured with separate directories for the production code and the test code. The production directory is usually called app or lib. The test directory may be called test or spec, depending on which unit testing tools are used. We’re using Rspec, and the convention is to name the test directory spec. Let’s create those directories now. They are subdirectories of playservice (or whatever the root directory of your project is called).

cd playservice
mkdir app 
mkdir spec

Until now, we’ve been doing configuration work. Now we’re going to do software development work. We’ll start by creating specs (short for “specifications”) that describe the behavior we want to see from our “hello” function.

The specs are executable, and they will “fail” (display error messages) when the application does not behave according to expectations. When we have wrangled the software into submission, the specs will report “success.” At that point, we can clean up whatever mess we may have made in the course of making it work.

Then we’ll repeat the whole sad business over and over again until we’ve finished the application. That’s test-driven development (TDD) in a nutshell. TDD is generally regarded as the “proper” way to develop code that we type in with our own fingers, as opposed to assembling pre-built building blocks or using a code generator.

We could just throw a couple of lines of code together that spit out the JSON document we’re looking for, and call it a day. That would adhere to the YAGNI principle. But it would be stupid, because we know a couple of things about software.

One of the basic things about software is separation of concerns. We’re talking about multiple concerns here. The logic to recognize “version 1.0.0” and provide the response data is one concern. The logic to package that data in the form of a JSON document and return it to the requester is a different concern. So we know we’ll need two pieces of software to complete this Story, if we want to do it in a way that helps us validate our architectural assumptions, as opposed to some sloppy, random, hacky way.

Note: Some people like to call “basic things about software” by the name, “software engineering principles.” It sounds better, I guess.
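To make the separation of concerns concrete, here’s a rough sketch of how the two pieces might eventually fit together. This is only a preview under assumptions, not code we’re writing yet; we’ll test-drive the real versions in the steps that follow.

require 'sinatra'
require 'json'
require_relative 'handler'

# Sketch only: the route (one concern) delegates to Handler (the other concern)
# and packages the returned hash as a JSON document.
get '/v1.0.0/' do
  content_type :json
  Handler.new.default.to_json
end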

There are little tricks or tips for using any tool effectively. I’m not going to ask you to go and learn about Rspec on your own. When using Rspec, it’s often useful to define some common things in a file named (by convention) spec_helper.rb. Let’s create that file for our project now, while things are still simple.

Create a new file and enter this data into it:

# Put the app directory on the load path so specs can require the production code
$LOAD_PATH.unshift File.expand_path('../app', __FILE__)

require 'rspec'

Now save the file as playservice/spec/spec_helper.rb.

The logic that recognizes the version number, etc., doesn’t need to “know” it’s running as part of a microservice. It only needs to know that when it receives a string value that it recognizes, it returns a few other string values. The simplest implementation will return the three values the microservice will need in order to populate the response document, as described above. Let’s express that behavior in a spec. Create a new file with the following contents and save it as playservice/spec/handler_spec.rb:

require_relative "../app/handler"

describe 'playservice: ' do 

  before(:example) do 
    @handler = Handler.new
  end

  context 'verifying version support: ' do
     it 'reports that version 1.0.0 is supported' do 
       expect(@handler.default).to eq({
         "service" => "playservice",
         "version" => "1.0.0",
         "status" => "supported"})     
     end
  end
end

We can run our specs using Rspec as follows. Note we have to be in the project root directory when we do this.

rspec spec/handler_spec.rb 

You should see a result like this:

$ rspec spec/handler_spec.rb
F

Failures:

  1) playservice:  verifying version support:  reports that version 1.0.0 is supported
     Failure/Error: @handler = Handler.new

     NameError:
       uninitialized constant Handler
     # ./spec/handler_spec.rb:4:in `block (2 levels) in '

Finished in 0.00689 seconds (files took 0.27559 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/handler_spec.rb:8 # playservice:  verifying version support:  reports that version 1.0.0 is supported

This is good! It’s telling us (in its own special way) that the behavior we’re looking for hasn’t been implemented yet. And that’s the truth! It’s very useful to have specs that tell us the truth about our code. Otherwise, we’d just be swatting flies in the dark.

The output is telling us the following:

  • “uninitialized constant Handler” is Ruby’s way of saying, “AFAIK there’s no such thing as Handler”. We haven’t created Handler yet, so this is exactly where we expect to be at this point.
  • “rspec ./spec/handler_spec.rb:8 # playservice:  verifying version support:  reports that version 1.0.0 is supported” is Rspec’s way of saying, “I detected the problem at line 8 in file handler_spec.rb in a block labeled ‘reports that version 1.0.0 is supported’, which is inside a block labeled ‘verifying version support’, which is inside a block labeled ‘playservice’”. You’ll be grateful for that level of detail when you’ve built up an application that has many spec files containing many blocks.

Our next step is to create a Ruby class named Handler that will know how to produce the expected output. I won’t ask you to learn Ruby instantaneously. A key thing to know is that a Ruby class is written in Camel Case while the name of the file that contains the source code is written in Snake Case.

I can hear you saying, Wait a minute! What do animals have to do with anything?

Here’s a phrase written in Camel Case, like a Ruby class name:

ThisCouldBeARubyClass

…and here’s the same phrase written in Snake Case, with the .rb suffix as if it were a Ruby source file name:

this_could_be_a_ruby_class.rb

Here’s our Handler class. Create another new file with these contents and save it as app/handler.rb (the spec above refers to it with require_relative "../app/handler").

class Handler 
  def default 
    { 
      "service" => "playservice",
      "version" => "1.0.0",
      "status" => "supported"
    }
  end
end 

Now when we run the spec again, we get this output:

$ rspec spec/handler_spec.rb
.

Finished in 0.00743 seconds (files took 0.13385 seconds to load)
1 example, 0 failures

In case you’re coming to this from a non-programming background, that was test-driven development, right there. You just did TDD. Granted, in most cases it takes many more of these little steps to build up a useful amount of code, but that was genuine TDD. You’re a programmer now! Don’t tell your friends, or every time you go to a party they’ll ask you to fix their personal computer.
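If a later Story calls for supporting more than one version, the next microtest might look something like the sketch below, added inside the existing describe block. Note that for_version is a hypothetical method name; Handler doesn’t have it, and the example would fail until you write it, which is exactly how the TDD cycle continues.

  context 'rejecting unsupported versions: ' do
    it 'reports that version 9.9.9 is not supported' do
      # for_version is hypothetical; writing this spec first would drive out the method
      expect(@handler.for_version('9.9.9')).to eq({
        "service" => "playservice",
        "version" => "9.9.9",
        "status" => "unsupported"})
    end
  end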

==> Commit! <==

Step 7: Construct the Initial Playservice Application

The second piece of logic we need is the piece that routes the request to the version responder and packages the output from that method as a JSON document to send back to the requester.

The code that receives the request and returns the response isn’t an isolated Ruby method. It’s the actual microservice code. We can’t test-drive it with a microtest example that is completely self-contained. We have to test-drive it at the “integration” level, with the web server running.

I can hear you saying, Wait a minute! What do you mean, “level?” You never said anything about “levels” before! Are you going to keep adding more and more stuff?

That’s really two questions. Re Question #1: There are multiple levels of testing (or checking) and multiple levels of test-driving. Microtests are the base. They’re the smallest cases, and they have no dependencies on any code outside of themselves.

The next level up from there might be called “unit” or “component” or “integration” or something like that, depending on whom you ask. Those test cases may have some external dependencies, and they exercise a larger chunk of the application than the microtests do. We have to take one step up from microtests to test-drive the microservice code itself.

Re Question #2: Yes.
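In this project we’ll distinguish the levels with Rspec tags. Here’s a sketch of what a tagged example looks like; the real one appears later in this post.

# Tagged as integration-level; the Rake configuration below uses this tag
# to run integration examples separately from the untagged microtests.
it 'says that it supports version 1.0.0', :integration => true do
  # ...the body exercises the running microservice over HTTP...
end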

We ran our initial microtest case by executing Rspec directly on the command line. It’s good to know how to do that. In a “real” project, we would use a build tool to run builds and tests. For Ruby, the standard build tool is Rake. Rake uses a configuration file that it expects to find in the project root directory and it expects to be named Rakefile. Let’s create a Rakefile for our project now.

We’ll configure Rake so that we can run microtests and integration tests separately. Create a new file and put this data in it:

require 'rspec/core/rake_task'

# The 'spec' task runs everything except examples tagged :integration (the microtests)
RSpec::Core::RakeTask.new(:spec) do |t|
  t.rspec_opts = "--tag ~integration"
  t.pattern = Dir.glob('spec/**/*_spec.rb')
end

# The 'integration' task runs only the examples tagged :integration
RSpec::Core::RakeTask.new(:integration) do |t|
  t.rspec_opts = "--tag integration"
  t.pattern = Dir.glob('spec/**/*_spec.rb')
end

# Running plain 'rake' runs the microtests
task :default => :spec

Let’s do a quick check to see that we can execute the same spec as we did before, but running Rake instead of Rspec directly. Try this command:

rake

We could have run rake spec, but because we defined spec to be the default Rake task, we don’t have to type that much. That’s handy, as we’ll normally run the microtest-level specs far more frequently than anything else. The result should look like this:

$ rake
/home/cabox/.rvm/rubies/ruby-2.1.2/bin/ruby -I/home/cabox/.rvm/gems/ruby-2.1.2/gems/rspec-support-3.7.1/lib:/home/cabox/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.7.1/lib /home/cabox/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.7.1/exe/rspec spec/handler_spec.rb --tag ~integration
Run options: exclude {:integration=>true}
.

Finished in 0.00267 seconds (files took 0.18323 seconds to load)
1 example, 0 failures

That means our single microtest example produced the same result as it did when we ran it directly with Rspec.

Now let’s try running “integration tests.”

rake integration 

This time, the result looks like this:

$ rake integration
/home/cabox/.rvm/rubies/ruby-2.1.2/bin/ruby -I/home/cabox/.rvm/gems/ruby-2.1.2/gems/rspec-support-3.7.1/lib:/home/cabox/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.7.1/lib /home/cabox/.rvm/gems/ruby-2.1.2/gems/rspec-core-3.7.1/exe/rspec --pattern spec/\*\*\{,/\*/\*\*\}/\*_spec.rb --tag integration
Run options: include {:integration=>true}

All examples were filtered out

Finished in 0.0004 seconds (files took 0.16691 seconds to load)
0 examples, 0 failures

It’s telling us that no examples were executed. That’s because we haven’t written an integration test yet.

==> Commit! <==

You’ve probably noticed that we keep switching around among different activities like infrastructure setup, tool configuration, application coding, testing, and test automation. That’s pretty normal these days. The era when each of those little tasks was carried out by a separate team is rapidly waning. Some specialization is useful, but excessive specialization tends to slow things down. One thing I hope you’re learning from this exercise is that these various tasks aren’t so terribly difficult that any of them really requires a deep expert for the majority of routine work (note the caveat: the majority of routine work).

Now let’s test-drive the microservice call that returns the JSON document we defined earlier. We’ll mark that example as “integration” so we can run it separately from “unit” checks. Create a new file with the following contents and save it as spec/version_check_spec.rb:

require 'json'
require 'rspec'
require 'rest-client'

describe 'playservice: ' do
  context 'verifying version support: ' do 
    it 'says that it supports version 1.0.0', :integration => true do
      response = RestClient.get 'http://0.0.0.0:4567/'
      expect(JSON.parse(response)['service'])
        .to eq('playservice')
    end
  end
end  

Don’t worry about the details if you’re not into Ruby, but do notice a couple of key things about that file:

First, on the line that starts with the keyword it, there’s a specification “:integration => true”. This is how Rake will know to choose this example when running “integration” tests.

Second, notice there are some new “require” statements. We’ll need to update our Gemfile and run bundle install to pick up these additional dependencies. The reason for them is to access the microservice and to interpret the JSON response document we expect the microservice to return.

Gemfile should now contain:

source 'http://rubygems.org'

gem 'sinatra', '1.4.8'
gem 'thin'
gem 'json'

group :test do
  gem 'rspec'
  gem 'rest-client'
end

Our microservice will need the json gem in production, but our project is not a “rest client”. The Rspec examples will need to act as a REST client, so the rest-client gem is needed only for running tests.

Remember to run bundle install after updating Gemfile, and remember to…

==> Commit! <==

Now we’ll use Rack to start the web server and we’ll try our integration test:

rackup -p 4567 -o 0.0.0.0 &

We’re specifying port 4567, the conventional default port for Sinatra applications. We’re telling the web server to listen on host 0.0.0.0, as that’s the default for Code Anywhere. Later you’ll see that we don’t use these settings for the production deployment.

If everything is in order, you should see something like this appear on the console:

$ rackup -p 4567 -o 0.0.0.0 &
[1] 1148
cabox@box-codeanywhere:~/workspace/playservice$ Thin web server (v1.7.2 codename Bachmanity)
Maximum connections set to 1024
Listening on 0.0.0.0:4567, CTRL+C to stop

One of the tabs Code Anywhere opened when you created the connection to your container shows information about the container, including the URL to use to access services running in it. Find that URL, then open a browser tab and enter it with the port number and path info appended, like this:

http://playcontainer-davenicolette339440.codeanyapp.com:4567/v1.0.0

That’s the URL where we’ll access our microservice. But we don’t want to hard-code that value in our spec file. It will be different in different environments. We’re in our development environment now, but we’ll also be running the specs in the continuous integration environment. In keeping with 12 Factor design guidelines for microservices, we want the URL to be provided through an environment variable. In the development environment, we set it from the command line:

export PLAY_URL=http://playcontainer-davenicolette339440.codeanyapp.com:4567

…and we modify the spec to read the environment variable:

require 'json'
require 'rspec'
require 'rest-client'

describe 'playservice: ' do
  context 'verifying version support: ' do 
    it 'says that it supports version 1.0.0', :integration => true do
      response = RestClient.get "#{ENV['PLAY_URL']}/v1.0.0/"
      expect(JSON.parse(response)['service'])
        .to eq('playservice')
    end
  end
end
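As an optional convenience (not part of the original exercise), the spec could fall back to the local development address when PLAY_URL isn’t set, so you don’t have to export the variable in every new shell:

# Optional sketch: default to the local rackup address when PLAY_URL is not set
base_url = ENV.fetch('PLAY_URL', 'http://0.0.0.0:4567')
response = RestClient.get "#{base_url}/v1.0.0/"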

Now when we run the integration tests, we see that the service doesn’t know what we’re talking about:

rake integration
$ rake integration
13.64.149.219 - - [12/Feb/2018:10:51:56 -0500] "GET /v1.0.0/ HTTP/1.1" 404 513 0.0015
F

Failures:

  1) playservice:  verifying version support:  says that it supports version 1.0.0
     Failure/Error: response = RestClient.get "#{ENV['PLAY_URL']}/v1.0.0/"

     RestClient::NotFound:
       404 Not Found

Finished in 0.07062 seconds (files took 0.83018 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/version_check_spec.rb:7 # playservice:  verifying version support:  says that it supports version 1.0.0

This is where we expect to be at this point, as we haven’t written the microservice yet. Let’s do that now. Create a file with the following contents and save it as app/playservice.rb.

require 'sinatra'
require 'thin'

get '/v1.0.0/' do
  'Nothing to see here'
end

Obviously, Nothing to see here is not correct. We’re taking small steps to test-drive the solution. We’re currently moving from ‘404’ (not found) to ‘found it, but it gives the wrong answer’. That’s part of the TDD process.

After restarting the web server and running rake integration again, we get the following result:

$ rake integration
13.64.149.219 - - [12/Feb/2018:11:01:01 -0500] "GET /v1.0.0/ HTTP/1.1" 200 19 0.0155
F

Failures:

  1) playservice:  verifying version support:  says that it supports version 1.0.0
     Failure/Error:
       expect(JSON.parse(response)['service'])
         .to..

Agile methods aren’t just for software anymore. Actually, they haven’t been just for software for quite a while now. That said, the range of companies and industries that are exploring team-based, collaborative, iterative, and incremental approaches to doing their work is rather breathtaking. Agile is truly going mainstream. The question at hand is: Can we apply team-based Agile straight out of the box in a non-software context? Can we take our scaled Agile approaches and apply them without modification? Mike Cottmeyer’s experience is that most of the principles and patterns apply, but sometimes the practices and frameworks need modification for a particular context.

The post [Live] Faster Food and a Better Place to Sleep: Exploring Agile in Non-IT Domains – Live from Agile Dev West appeared first on LeadingAgile.
