Category Archives: Agile

Agile #scale

A few years ago I would have had difficulty mentioning failure and Agile software development in the same breath. On the heels of the ever-popular manifesto and effective practices such as XP and Scrum, Agile adoption grew, and the more it grew, the more software developers and managers felt empowered to beat the long and dismal history of software failure.


Now there’s increasing evidence to suggest that Agile software development and Agile management practices have finally earned the interest and attention of larger organizations, the same organizations that usually find comfort hiring from a pool of 400,000 management professionals carrying the widely recognized PMP industry certification. This certification, the Project Management Professional, is a leading certification for project managers offered by the Project Management Institute (PMI). The certification’s popularity makes the PMI very influential in establishing the culture and practice of management within larger organizations. The PMI has now turned its attention to Agile.

But in the spirit of Agile’s promotion of continuous feedback and adjustment, I must report that I’ve encountered quite a few challenges scaling Agile in larger organizations. Some of these challenges are structural, others cultural, and so it’s time for me to adjust my own tune on the realities of adopting Agile in such environments.

The following are four challenges confronting Agile practitioners in larger organizations:

  1. “System of reporting” differs from the “system of production” – The corporate hierarchy (i.e. the “system of reporting”) makes difficult the self-organization and cross-functional focus required for successful Agile teams.
  2. Financial cycles differ from management cycles, which differ from project cycles – An excellent article by Jim Highsmith covers the temporal challenges an iterative approach introduces when the organization thinks and acts on a quarterly and yearly basis.
  3. Definition of done – Procurement, budgeting and yearly reviews all necessitate a formal understanding of when the project will finish. You may even reach consensus on a scope and date to appease management, but the first release plan that extends past the terms of this definition may present problems.
  4. Rewarding individuals over teams – Yearly corporate performance review programs focus on the individual, yet Agile makes no provisions for this kind of evaluation; in fact, it can be detrimental (pdf) to the team’s trust and self-organization.

What challenges have you encountered scaling Agile in larger organizations? How are you overcoming them?

 


Technology of Doing

Over the course of the past twenty years or so, the software development community has created or sought axioms, metaphors, techniques, approaches, analogies, processes and other practices (sometimes borrowing them from automobile manufacturing) that render software development work more productive and the worker more effective.  These practices continue to influence work across organizations, teams, and individuals and their recent rise to prominence in other knowledge worker disciplines supports the notion that software developers are in fact “pioneers in knowledge work”, as was suggested by Watts Humphrey.

Many of these practices resulted from the need to improve the effectiveness of software development efforts after the general ineffectiveness experienced with projects that followed the traditional management and engineering mindset.  As other forms of work continue to evolve to depend more and more on the effective application of specialized knowledge,  we may find that what has proven effective for the software development community (e.g. Agile) may be equally so when applied to other work disciplines.

Background

About ten years ago I started implementing my vision for a repository of knowledge worker tools and practices that could help promote this cross-pollination of practices.  At the time I referred to it as “Metaframeworks”, and the idea was to organize, document and digitally capture these popular practices so they could be effectively studied, referenced, mixed and matched across disciplines.  Examples of the practices I set out to capture were all those defined under compound software development methods such as Scrum and Extreme Programming. With Metaframeworks, however, I wanted to capture the practices emerging from many knowledge worker disciplines, not just software development.

So as the ‘need-to-know’ culture of Web 1.0 began its transition toward the ‘need-to-share’ culture of Web 2.0, I started looking for ways to add more structure to this repository and to introduce new ways of sharing the knowledge and information it captured.  At this point the project was renamed ‘CGuide’ and supported a new hierarchical classification of practices, with the classification scheme borrowed directly from Google Directory.  The repository had also moved from my local hard drive to Amazon’s online Simple Storage Service (S3), and while CGuide’s predictable structure and online accessibility made it easier to find and navigate to a relevant practice, something was missing: a common metamodel and associated metadata to capture the key characteristics of these practices.  This metadata would be critical in promoting the development of semantic tools capable of searching through the directory of practices, for example.

Which brings me to the third generation of this repository.  This new phase operates in a web increasingly dominated by social media technologies such as Twitter and Facebook, but also increasingly limited by the traditional keyword-based search tools of Web 1.0 and 2.0.   As our collective maturity in using the Internet increases, along with the tsunami of data it generates, users are demanding more relevant results for their increasingly sophisticated queries.  Back in 1999 the inventor of the World Wide Web, Tim Berners-Lee, put it this way:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.

– Tim Berners-Lee, 1999

His ambitious vision is turning out to be the seed for the Web’s next generation.  In the spirit of evolving this repository with the times of the Internet, I am moving it to Metaweb’s excellent structured data platform known as Freebase, with the vision of turning it into the world’s largest open linked data repository of knowledge worker practices.

The Project

Technology of Doing includes a comprehensive dataset of knowledge worker practices.  You can start using this dataset by visiting http://technologyofdoing.freebase.com.   Practices are methods, concepts or phenomena that draw on a large body of science and/or experience and provide an effective way to achieve a set of objectives.  For example, ‘Pair-programming‘ is a type of knowledge worker practice prevalent in software development, ‘DIKW‘ is a type of practice found in knowledge management, and ‘Strengths-based Selection‘ is a practice adopted by business management specialists. Each of these practices can be associated with one or more objectives (e.g. improve productivity) as well as one or more practitioner strengths (e.g. empathy).
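
To make the metamodel idea concrete, here is a minimal sketch in Ruby of the kind of metadata a practice entry might carry and the kind of strengths-based query it enables. The field names and values are entirely hypothetical, not the actual Freebase schema:

```ruby
# Hypothetical practice entries; field names and values are
# illustrative only, not the real Technology of Doing schema.
PRACTICES = [
  { name: "Pair programming", discipline: "Software development",
    objectives: ["Improve quality"], strengths: ["Communication"] },
  { name: "DIKW", discipline: "Knowledge management",
    objectives: ["Structure knowledge"], strengths: ["Analytical thinking"] },
  { name: "Strengths-based selection", discipline: "Business management",
    objectives: ["Improve productivity"], strengths: ["Empathy"] },
]

# A strengths-based query: which practices suit a given practitioner strength?
def practices_for(strength)
  PRACTICES.select { |p| p[:strengths].include?(strength) }
           .map    { |p| p[:name] }
end

puts practices_for("Empathy").inspect  # prints ["Strengths-based selection"]
```

With a shared metamodel like this, the same query works across disciplines, which is exactly what a keyword search over unstructured pages cannot do.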

Semantic Possibilities

With the structure and metadata in place, the development of semantic tools capable of offering strengths-based practice selection to individuals, teams and organizations alike is now possible.  One example is the Boost Practices application, which individuals can use to better align their work practices with their natural talents.

Check back throughout 2011 as I cover more of the Technology of Doing.


Is BDD the solution to the tester’s dilemma?

We, testers, are coming late to the agile party. I would argue that the role of the programmer is first class in Extreme Programming, the role of the team lead first class in Scrum, and the role of the architect first class in Feature Driven Development (FDD). I know, I am oversimplifying; of course each of these roles requires a diversity of skills to fit an ever-changing job description, and some even got a name change (Scrum Master).

But what about testers…

I know of a team in which every member is called a programmer, not a developer or a team member. Do you believe it? It is true, but isn’t that strange?

What if I said “I know of a team in which every member is called a tester”? Would you believe it?

Brian Marick was one of the original authors of the Agile Manifesto and the lone self-defined tester in the group.  He also made big contributions to the redefinition of our role, and others followed the path, but we have had what I consider the (re)foundational book, Agile Testing, for less than six months now.

We are still in transition as a community. This transition is sometimes caused by a lack of understanding of Agile development and sometimes by the lack of breakthrough innovations.  There is some re-thinking going on, both in the broader sense (see Brian Marick on Micro-Scale Retro-Futurist Anarcho-Syndicalism) and in a more restricted sense (there is a group, AA-FTT – Agile Alliance Functional Test Tools – that is trying to advance the state of the art; you can see the discussions on the mailing list, and there are a number of workshops [1][2]).

So it might be too early to have a solution to the tester’s dilemma, but we still have to test and deliver products today. What is the best we can do right now?

Let’s discuss a promising proposition: BDD and Cucumber, as presented in The RSpec book: Behaviour Driven Development with RSpec, Cucumber and friends.

But first we need a little history and background.

Test driven development (TDD)

TDD was a wonderful idea that came to us in the XP package. It is widely accepted as a good practice but isn’t so widely used.  Why?  TDD had the positive effect of raising quality, both as a concern of the team and in the resulting product. But the four-letter word (test) caused a lot of friction and misunderstanding.  The friction often appears as programmers’ resistance to doing the “tester’s job”.  The misunderstanding could be caused by the way people learn new concepts: if the words are known, they usually assume that the concept is the same (see the Sapir-Whorf hypothesis).

That’s why, instead of Agile Development, Brian turned to Micro-Scale Retro-Futurist Anarcho-Syndicalism (reading that, you most certainly won’t have a clue what he is talking about, so you will go and check) and Kent Beck turned to Responsible Development (he prefers the connotation of responsible; being responsible has some cost attached; being agile, well… who doesn’t want to be agile?).  As a result of getting TDD wrong, some practitioners focus on testing the code when they should be paying attention to the design. This loss of focus leads to bad practices; for example, a developer defining tests that depend on the implementation (internal structure) instead of the behaviour of the object.
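
As an illustration (my own toy example, not from any of the books mentioned), compare a behaviour-focused check with an implementation-coupled one:

```ruby
# A toy Stack whose internal representation could change at any time.
class Stack
  def initialize; @items = []; end
  def push(x);    @items << x; self; end
  def pop;        @items.pop; end
  def empty?;     @items.empty?; end
end

s = Stack.new
s.push(42)

# Behaviour-focused test: push then pop returns what was pushed,
# leaving the stack empty. It survives any internal rewrite.
raise "behaviour broken" unless s.pop == 42 && s.empty?

# An implementation-coupled test would instead peek at internals:
#   s.instance_variable_get(:@items) == []
# and would break the moment @items became, say, a linked list.
puts "behaviour example passed"
```

The first check still describes what the object does for its callers; the second describes how it happens to do it today.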

Acceptance Test Driven Development (ATDD) …

You might like to know that software development and theater companies have many things in common: “Good theater companies routinely produce innovative products under extreme deadline pressure (opening night)”. Or at least, they could have. To get those results (reliable innovation), you have to build products through what Rob Austin and Lee Devin called Artful Making, instead of Industrial Making. This is not a case of wishful thinking; you need some preconditions to use Artful Making. One of them: quick, low-cost, high-quality iteration, with a product at the end of each iteration (yes, this sounds very Lean).

One impediment to Artful Making is the language/conceptual gap between the user and the development team. If we speak different languages, we incur translation time and cost, and we risk injecting errors every time we cross the gap… in either direction. For instance, during requirements elicitation and during acceptance testing.

Wouldn’t it be nice if we could have requirements stored in such a way that:

  • They are understandable to both the users and the development team (communication).
  • They can be defined (written) before the product is built (test).
  • The product can be quickly and easily (automatically) validated against them (test).

Enter Fit: Framework for Integrated Test. What if we let the user write the requirements as example tables in Word, Excel or HTML pages? Then, with a minimum of glue code and a test runner, we can test the application against them.

It was a great leap forward, both in communication and in testing. It has been followed by FitLibrary and FitNesse, which adds a wiki as the front end (instead of HTML).
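
To see the Fit idea without the HTML machinery, here is a toy sketch in plain Ruby. The `discount` rule is a hypothetical application under test, and the hand-written rows stand in for the example table a user would write:

```ruby
# Application code under test (hypothetical business rule):
# orders of 100 or more get a 10% discount.
def discount(amount)
  amount >= 100 ? amount * 0.9 : amount
end

# The "example table" a user might write in Word/Excel/HTML,
# reduced here to rows of input and expected output.
table = [
  { amount: 50,  expected: 50.0  },
  { amount: 100, expected: 90.0  },
  { amount: 200, expected: 180.0 },
]

# Glue code + test runner: check every row against the application.
table.each do |row|
  actual = discount(row[:amount])
  raise "row #{row} failed, got #{actual}" unless actual == row[:expected]
end
puts "all example rows passed"
```

The user owns the rows; the team owns the one-line rule and the thin glue in the middle. That division of labor is the whole Fit proposition.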

A lot of agile teams use some incarnation of Fit/FitLibrary/FitNesse.  Some problems were reported:

  • Many end users don’t take ownership of the tests. Team members (usually the testers) act as scribes and translate user/team conversations to HTML/wiki.
  • When the number of test cases scales up, test case development becomes awkward. A wiki or a set of HTML pages is not a good development environment. As a developer, you have to manage two models of thinking (Fit/FitNesse and your application’s programming language and environment).

…And Behaviour Driven Development (BDD)

BDD is an evolution of TDD and ATDD proposed by Dan North (video).

We can get rid of the Test in TDD and call it Behaviour Driven Development. But why behaviour? Because it is better to focus the programmer on the most important aspect of objects: their behaviour. It is also built upon the idea of a ubiquitous language, as in Domain Driven Design.

To complete the name/concept switch away from Test, the executable specifications are called examples. So specifications of code behaviour are called code examples, and the higher-level specifications are called application examples.

At the requirements/acceptance level we define application behaviour; at the design/programming level we define object behaviour. In both cases you follow the red/green/refactor cycle, giving two concentric cycles. The outer cycle is the ATDD part, the inner cycle is the TDD part, but both work together. These are the steps; numbers in parentheses refer to the figure extracted from The RSpec Book from The Pragmatic Bookshelf.

  1. Write a scenario with Cucumber (1)
  2. Write a step definition
  3. Run and Watch it fail (2)
    1. Write a failing code example with RSpec (3)
    2. Get the example to pass (4)
    3. Refactor (5)
    4. Repeat 3.1 – 3.3 until Cucumber step is passing
    5. Repeat 2 – 3 until Cucumber scenario is passing (6)
    6. Refactor (7)

Building the application in this way promotes outside-in development. Models appear as solutions to the acceptance criteria. This has a few implications: it is easier for the client to write acceptance tests (as examples), the model is more relevant to everyone on the team, and it requires techniques that support incremental development (like mocks).  The bottom line of BDD is “writing software that matters”, and it follows three principles:

  • It’s all behaviour. This behaviour should be described in a common (ubiquitous) language that is understandable by both the business and technical people on the team.
  • Deliver stakeholder value. Brian Marick would say “focus on the cheese”.
  • Enough is enough. This is true for upfront planning, design, etc., but also for automation: don’t try to automate everything.

There are many tools implementing BDD ideas with different approaches and contexts (i.e. languages). In Ruby, the tool stack includes RSpec, Cucumber, and Webrat. I will only drill into Cucumber, which is the BDD tool most relevant to end users and (non-technical) testers.

Cucumber

I will not try to explain the use of Cucumber here. Following the excellent Ruby/Rails tradition of webcasts and podcasts, you can check some Cucumber resources here, and of course check the Cucumber site and The RSpec Book.

Disclaimer: Please consider that I am reading a beta version of The RSpec Book, a nice feature of Pragmatic Bookshelf, and I haven’t researched Cucumber as much as I would like, so I might be missing many features. I just want to share some impressions as a newbie user.

Just to agree on vocabulary, this is a short example (login.feature):

Each feature file contains:

Feature: Login
In order to use the site functionality
As a guest user
I want to log in

Scenario: I try to access without logging in
Given I am not logged in
When I go to the taskboard number 1
Then I should be on the login screen

  • One feature, with a free description (not parsed) that can take the traditional story format (As… I want… So that I can…) or another format that changes the order to put more emphasis on the value obtained (In order to… As a… I want…). The latter format is closer to the order of the steps.
  • One or more scenarios; each scenario has a name and a number of steps. Each step begins with one of five keywords: Given, When, Then, And, and But.
    • Given: defines the context, the preconditions.
    • When: is the action or event that happens to the application.
    • Then: is the expected result.
    • And/But: can be used after Given/When/Then to add steps with the same meaning as the previous one.

Feature files are written in a very simple DSL (Feature/Scenario/Given/When/Then/And/But) called Gherkin. Like Python and YAML, Gherkin is a line-oriented language that uses indentation to define structure.
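
Under the hood, Cucumber maps each step line to a Ruby block through a regular expression. This toy sketch (not Cucumber itself, just my illustration of the mechanism) runs the login scenario above:

```ruby
# A toy step registry: each entry pairs a regexp with a block.
STEPS = []

def define_step(pattern, &block)
  STEPS << [pattern, block]
end

def run_step(line)
  # Strip the Gherkin keyword, then find the first matching pattern.
  text = line.sub(/\A\s*(Given|When|Then|And|But)\s+/, "")
  pattern, block = STEPS.find { |p, _| p.match?(text) }
  raise "Undefined step: #{line}" unless pattern
  block.call(*pattern.match(text).captures)
end

# Hypothetical step definitions for the login scenario.
define_step(/\AI am not logged in\z/) { @logged_in = false }
define_step(/\AI go to the taskboard number (\d+)\z/) do |n|
  @page = @logged_in ? "taskboard #{n}" : "login"
end
define_step(/\AI should be on the (\w+) screen\z/) do |screen|
  raise "expected #{screen}, got #{@page}" unless @page == screen
end

[ "Given I am not logged in",
  "When I go to the taskboard number 1",
  "Then I should be on the login screen" ].each { |line| run_step(line) }
puts "scenario passed"
```

Real Cucumber adds parsing of Feature/Scenario structure, reporting, tags and much more, but the regexp-to-block mapping is the essence of a step definition.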

My first impression, coming from Fit/FitNesse, is that it is not so different; I mean, you can also have a Do fixture that reads almost like English. Almost.

Examples written in natural language (English, but many other languages are supported) are a big selling point. And you don’t lose the table (data-driven) approach (Scenario Outlines).

It is far easier to edit (just plain text), and it seems more integrated with the development environment. I really like the ability to switch easily between application examples (feature files) and code.

Working in a Ruby environment also allows a quicker and simpler cycle, which I think could help when pairing with a user.

Sadly, I don’t have experience with large sets of features. One of the problems with Fit/FitNesse is that it is more difficult to refactor examples than to refactor code. And sometimes you unwillingly break the examples by changing apparently innocuous HTML. I don’t know how scaling and refactoring are managed in Cucumber (apart from Scenario Outlines).

One downside of plain text is the loss of styling and hyperlinking. Styling could be replaced by convention over configuration and some viewer, but without hyperlinking we lose the ability to arrange examples as a nice snippet of documentation.

Final comments

In the BDD double cycle, you can extend the use of Cucumber to make code examples or extend RSpec to make application examples.

The question that comes to my mind is: why wouldn’t developers just use RSpec and keep the toolset simpler? After all, they don’t need English if they can read Ruby.

In my experience, the cost of maintaining the glue code is only acceptable when there are full-time team members who are not fluent in the main programming language (say Java or Ruby). That could be the user or somebody working as the user’s scribe (an analyst or tester).

I see that Cucumber has many good points, and superficial use is not enough to judge it. I would like to give it a deeper try. Do you have a Rails project? Call me 😀

Will the tester’s role in agile teams become that of user scribe for functional tests? Yes, probably. But not only that. Exploratory testing is much needed, and a devil’s advocate is useful for the team, for instance regarding coverage, test automation and risk analysis.  The Agilar Taskboard team (Francisco Tufró & Marcos Navarro) allowed me to use their product as a test bed for Cucumber.  Thanks!

About the author: Juan Gabardini has more than 20 years of experience in IT management, IT product development, teaching and consulting in financial services, retail, telecommunications, medical equipment and manufacturing sectors.

Juan is currently focused on testing and coaching the Agile way, and building an Argentinian and Latam Agile community (Agile Open Tour and Ágiles 200x).

Juan is a member of IEEE, the Agile Alliance and the Scrum Alliance, and holds degrees in Systems Analysis and Electronic Engineering from the University of Buenos Aires, where he also teaches. You can find him (in Spanish) at http://softwareagil.blogspot.com and http://www.agiles.org


Tester’s Dilemma

The testing of software products, or of any software artifact, after the fact (i.e. inspection to find defects, versus inspection to prevent defects) adds no value to the business providing the product or to the customer using the software.  In this regard we can say that testing is waste.

What!? Should I stop testing, then?

Well, no, but let’s look at it from a couple of perspectives.

Am I cheating my customer?

The story goes like this:

  • My customer wants a high-quality product
  • Because we are human, we make mistakes and therefore no software is bug-free
  • We test in order to catch problems before they reach the user.

So far so good, but now comes the difficult part: determining how much to test is an inexact science.  I could test infinitely, driving up project costs all the while. I could also let the client decide how much to test (I am agile, after all), but beware! His decisions could result in tons of technical debt in the form of untested code. Or I could carefully explain the needs, alternatives and tradeoffs of the various testing approaches and how much automation to incorporate; this would all be a piece of cake. In fact, I have just convinced him to pay for the testing, and now you tell me that testing is waste. Am I making my customer pay for waste?

We are professionals that take pride in our work. We strive not only to be effective but to also be efficient. Our aim is to perform only those activities that contribute to the final outcome. Thinking we are wasting customer money is disturbing. Just bear with me a little longer.

Not my fault, this is a new concept

I could just tell my customer “I’m sorry but I just realized that testing is a waste.”

Truthfully, the idea of testing as something to avoid has been with us for quite a long time. In Software Process Improvement (SPI), the story of “testing is waste” goes as follows.

To build a piece of software, we first gather requirements, analyze them, design the solution, program and implement the design, and finally deliver it. Now suppose that the user finds a defect after using the product. How much would it cost to fix?

Let me introduce the cost of quality (CoQ) and its taxonomy. CoQ covers both the costs incurred due to a lack of quality and the costs incurred in the achievement of quality. Cost due to lack of quality is caused by failures before (internal) and after (external) delivery, plus the cost of fixing the defect. Cost of achieving quality includes appraisal (inspection) and prevention costs.

Some defects can be mapped to a problem that originates during the programming or design stages. Could we have detected them closer to the moment they originated? How much would we have spent in that case?

Barry Boehm published in 1981 a much-cited study saying that a change in the requirements phase costs 1x, in analysis 10x, in design 100x, and so on. In Boehm’s context (bureaucratic environments, projects with fixed scope, mostly the US Government and its contractors), most changes come from defects.  Taking quality upstream, then, makes economic sense.
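
In round numbers, the argument looks like this (my own illustrative figures, mirroring the shape of Boehm’s curve, not his actual data):

```ruby
# Boehm-style cost-of-change multipliers, keyed by the phase in which
# the defect is caught. Phase list and base cost are illustrative only.
MULTIPLIER = { requirements: 1, analysis: 10, design: 100, maintenance: 1000 }
base_cost  = 50 # hypothetical cost (in, say, dollars) to fix during requirements

MULTIPLIER.each do |phase, m|
  puts "caught in #{phase}: $#{base_cost * m}"
end
# A defect that costs $50 to fix in requirements costs $50,000 in maintenance.
```

Whatever the exact multipliers, the compounding is what makes the upstream-quality argument so persuasive.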

CoQ was used to justify Big Requirements Up Front and the whole BxUF family, where x can also be analysis, design, etc. If you do it right from the beginning, what could go wrong?  This is the infamous waterfall approach.

But you as well as everyone else know that the universe throws us curve balls all the time in the form of changing requirements. As a result, we have to iterate. Iteration means rework, and rework is bad, but what can we do?

What about automating the whole development process itself? You know, Computer-Aided Software Engineering (CASE) and all that stuff. Old-fashioned, and it will surely have some problems, but remember: the goal is to make it right from the start. This would reduce the CoQ.

You have to be politically careful in attacking CoQ; it is used as an economic justification for SPI initiatives, including CMMI projects, but also to justify agile methods.

The exact factors influencing each phase in the cost-of-change curve are not collectively agreed upon, but Alistair Cockburn argues that the curve holds even in XP.  So the tool is not bad per se, but it can be used to justify wildly different solutions.

The machine that changed the world

You are probably thinking “Moving testing upstream seems reasonable; we want to make every task as efficient as possible. That still doesn’t mean that the testing will add value!”

Sometimes we find ourselves arguing just because we assign different meanings to the same words. Value, waste… let’s agree on what they mean to me. But first, I will digress.

Japanese car manufacturers (starting with Toyota) used a set of principles and practices called the Toyota Production System, now known as Lean Production, which radically changed the way production is organized. The Lean principles are:

  • Eliminate waste
  • Amplify learning
  • Decide as late as possible
  • Deliver as fast as possible
  • Empower the team
  • Build integrity in
  • See the whole

Just-In-Time and stop-the-line (Jidoka) practices, which originate from Lean Production, were counter-intuitive when first proposed, but nowadays those and other Lean Production practices are widely accepted and considered to have had such a positive impact that they are chronicled in The Machine That Changed the World.

But Lean Production is about manufacturing, and software development is not manufacturing. Does Lean Production apply to software product development? Not directly; you must take a look at The Toyota Product Development System to see how the principles are applied. And even then, there are differences between developing cars and developing software.

You may already know Lean Software Development and Mary and Tom Poppendieck’s work applying Lean principles to software development. This is a pretty new area, still in motion, but it is already having a positive impact on the Agile community, triggering many interesting discussions.


Value Stream Mapping

In Lean, optimization must be global (“See the whole” principle).

One of the Lean tools for detecting opportunities for improvement maps the flow of a “piece” through the value-adding system (from concept to cash), noting the time spent by the piece in each step of the flow and whether any queue of pieces or work in process (WIP) exists. The flow’s end-to-end time is called the lead time. With the map in hand, you can identify improvements by applying two principles: eliminate waste, and shorten the end-to-end time (deliver as fast as possible). The nice thing is that shortening the lead time forces you to eliminate waste.

What is a piece in software development? You can think of a new piece of functionality, from the moment it is conceived to the moment a user starts using it in a production environment.

After identifying all kinds of waste, you can measure the efficiency of your process as the ratio between the time spent on value-adding tasks and the lead time.
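
As a worked example (all figures invented for illustration), the efficiency calculation for a small value stream:

```ruby
# Toy value-stream map: each step records its value-adding (touch) time
# and its waiting/queue time, in hours. All numbers are made up.
steps = [
  { name: "Analysis",    touch: 4,  wait: 40 },
  { name: "Development", touch: 16, wait: 8  },
  { name: "Testing",     touch: 6,  wait: 72 },
  { name: "Deployment",  touch: 1,  wait: 24 },
]

lead_time  = steps.sum { |s| s[:touch] + s[:wait] } # end-to-end time: 171h
touch_time = steps.sum { |s| s[:touch] }            # value-adding time: 27h
efficiency = touch_time.to_f / lead_time            # about 0.16

puts format("lead time: %dh, efficiency: %.0f%%", lead_time, efficiency * 100)
```

Note where the hours went: most of the lead time in this sketch is queueing, which is why attacking the waits shortens lead time far more than speeding up the work itself.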

Eliminating waste is a never-ending activity; processes will never reach 100% efficiency.

Waste …

Waste is everything that doesn’t add value as perceived by the user.

There are many kinds of waste, and some common patterns: every queue and waiting step is waste, and so is every piece of rework or product not used or sold. In software, seven types of waste have been identified:

  • Partially Done Work
  • Extra Features
  • Lost Knowledge
  • Handoffs
  • Task Switching
  • Delays
  • Defects

But even if defects can be classified as waste (they are both a queue and a signal that rework is needed), you can still argue that detecting them through testing is not waste.

No, no. You must work in such a way that defects don’t happen in the first place (the “build quality in” principle). And if you can’t do that, at least you must build in increments so small that you detect a defect almost as it happens, and fix it (and its cause) immediately. You know where I’m going: Test-Driven Development (TDD) and Acceptance Test-Driven Development (ATDD). The catch is that testing after the fact (inspection after the defect occurs) is waste, while testing to prevent the defect is not.

This is easier to understand in manufacturing, but what about software development?

There is consensus that TDD is not testing but a design activity whose output can be executed and validated automatically, so it is not waste. But ten years ago most of us thought that a good set of UML diagrams was the best representation of the design, and that there was value in keeping it updated.  Who knows: in ten years we might know the “proper” way of doing design, and the tests associated with TDD will be waste, used only by dinosaurs.

… and Value

Software testing is a measurement of the qualitative attributes of the product. Once tested, you’ll have a better understanding of the product, but from the customer’s point of view, it is still the same product.

In some cases the customer may request and pay for a particular measurement (test). Is this a case where testing adds value? The answer is no. It is more likely a lack of trust that causes the customer to request and pay for this wasteful behavior.

After all, what good is there in testing?

Some care must be taken: saying that something is waste doesn’t mean that you can just get rid of it. After analyzing your organization’s value stream map, you might find some quick wins, like removing duplicated tasks (e.g. testing done both by the development teams and by an external test team), but often you can remove waste only after changes occur in other activities or tasks. Take, for example, the lack-of-trust situation: removing tests is only possible once the customer trusts us, and that will take time. Other common examples: TDD or ATDD can decrease the amount of testing needed at the end, but you have to change how you manage requirements and how you design.

Is every test waste?

Brian Marick proposes a classification of tests into four quadrants. On one dimension, tests either “support the team” or “critique the product”. On the other, they are either “technology-facing” or “business-facing”.

Most (or all?) tests that support the team can potentially be automated and written test-first. If you find yourself doing this kind of test after the fact, or manually, it’s waste.

But what about tests that critique the product? While doing them (e.g. exploratory testing) you are learning the emergent properties of the product and finding ways to improve it. It is just too easy to say that those are not waste but value-adding activities. Don’t. Read Agile Testing, by Lisa Crispin and Janet Gregory. Run some experiments. Some of those tests could be done before having a product.

I will lose my job!

If testing is in fact waste, what does that mean for us poor testers? It can’t be true, or at least not always, or not everybody will do the right things.

I’m trying to shake you a little, but is your current way of developing software (“process” has earned a bad name lately) really that bad?

I think that usually the answer is no. It is the best way … until now. You can always improve it.

Don’t give up hope, read Testers: The hidden resource by Lisa Crispin and Janet Gregory. There are many ways in which a person coming with a testing background could contribute to an agile team.

Conclusion

Quality is paramount in agile development; we need quality to have effective and efficient teams. But quality is not testing. Testing is commonly done by people with a great commitment to quality, but it is the least efficient moment to add quality to a product.

The team needs quality champions, but those people (testers like me) can feel out of place on an agile team. As testers, we need to leverage our skills and help the team find ways to build in quality early on. As testers, we have to work hard toward a team that doesn’t need after-the-fact testing, cannibalizing our own expertise the same way that successful companies cannibalize their products.

We are living in interesting times; we are in the middle of a paradigm change (not to mention some financial problems). If you define your value as a professional based on your ability to do a particular task, you are at risk. I try to identify how I contribute to software development at a higher level, and I keep learning. I hope that this is the answer!

What is your answer?

(Juan’s follow up article provides a potential answer to this dilemma.)

About the author: Juan Gabardini has more than 20 years of experience in IT management, IT product development, teaching and consulting in financial services, retail, telecommunications, medical equipment and manufacturing sectors.

Juan is currently focused on testing and coaching the Agile way, and building an Argentinian and Latam Agile community (Agile Open Tour and Ágiles 200x).

Juan is a member of IEEE, the Agile Alliance and the Scrum Alliance, and holds degrees in System Analysis and Electronic Engineering from the Buenos Aires University, where he also teaches. You can find him (in Spanish) at http://softwareagil.blogspot.com and http://www.agiles.org

Moving People Around

Effective Destabilization

Software practitioners familiar with the rules of extreme programming know that moving people around, the practice of continually assigning team members to work on different parts of the solution or to serve in different team roles, can effectively shuttle knowledge between team members, helping to ensure a common level of understanding in both the problem and solution domains.

Moving people around serves as a force that effectively destabilizes the team and continually engages its members in establishing a new comfort zone. In the process, team members cross-train and improve their skills, are deterred from complacency, become more aware of project status, and become more responsive to project risk. In this series, Moving People Around, I’ll discuss the challenges and opportunities that resulted from my own experience with a similar destabilizing force as I moved from a colocated team setting in New York City to a teleworking setting in Buenos Aires, Argentina. I’ll discuss how our team overcame the resulting challenges and amplified the opportunities to ensure the successful delivery of our software projects.

Surging Ahead

Prior to my move, our team was composed of four members: three superb programmers and one tech lead/hands-on project manager (me). Our efforts were focused on delivering a .NET analytics platform, which was in beta release.

We regularly incorporated agile software development practices including frequent software deliverables, daily meetings, daily interaction between business people and developers, continuous integration, no overtime, refactoring, and face-to-face team conversations.

Following the move, it was clear that we had also improved our practice of collective code ownership while becoming better at moving people around. We lost the benefits that come with face-to-face team conversations, but overall our system had matured into a production release and we signed our first customer.

The immediate impact of my move was surprising. In those first few months, we experienced a team surge that saw us sprinting through four development iterations, invigorating the project and, more importantly, introducing breakthrough features that singled out the product in the marketplace.

I believe this surge was directly related to the positive effects of my move to a teleworking environment, and I wonder whether the same intensity could have been achieved had the move not occurred. The catalysts behind this surge included:

  • Eliminating commute time gave me more time to focus on developing the product.
  • The team raised its performance level in order to prove effectiveness in a virtual team setting.
  • A three-hour time-zone difference allowed us to benefit from “Follow the Sun” development.
  • Daily group chat sessions eased the recalcitrance and shyness that sometimes occurred in face-to-face conversations, empowering all members to speak up.  The result was continuous feedback that improved our team intimacy, awareness of project risk, and productivity.

Counter Balancing

After this initial surge, and as the year progressed, some of the challenges facing teleworkers and virtual teams began to set in. Specifically, team intimacy began to suffer as the lack of face-to-face meetings, coupled with the transition of part of the team to separate projects, began to disconnect and disorient us all.

To counter these effects, we:

  • Introduced a weekly meeting with business leads.
  • Transitioned my original role as tech lead to a colocated member, allowing me to better focus on clearing obstacles that were preventing colocated members from maximizing their productivity and brilliance.
  • Increased my visits to the office.
  • Decreased project velocity to achieve a more sustainable pace.

Subtle adjustments also helped counter the effects of my move to a remote work environment. For example, I could sense a collective relief from business leads (as well as increased call volumes) once my Argentine ring tone was changed to sound like the standard US ring tone.

Conclusion

Looking back, I do believe the initial team surge was directly related to the benefits agile promotes in moving people around. The intensity of this surge was largely a result of our response to the changing circumstances. As challenges arose, it was critical to address each quickly and decisively. In my new role as a teleworker, I can confirm the importance of continually reaching out to all stakeholders and making them feel comfortable that you are accessible during their work hours. The move to a teleworking environment also gave me time to focus exclusively on writing code while transitioning my role as tech lead to a colocated member. In the process, we improved our practice of collective code ownership.

While this move served as an extreme case of the challenges and opportunities that result when you move people around, more traditional examples would show team members owning different portions of the system or fulfilling different team roles at different times in the development lifecycle. When effectively managed, the benefits will be distributed across individuals, their team, and the stakeholders they serve.

This concludes the series Moving People Around.

Agile on Wall Street

Throughout this past year I’ve presented various articles highlighting the high-performance computing requirements on Wall Street, where latency and temporal constraints are closely tied to profit and compliance.

As many of you know, The Techdoer Times is also focused on topics surrounding highly productive teams. Our day-to-day experience with Agile software development makes us biased toward the productivity benefits of this approach, especially on Wall Street, where project success is rooted not only in a team’s ability to satisfy functional requirements, as with most technology solutions, but also in its ability to satisfy the complicated non-functional temporal requirements behind the industry’s performance and compliance needs.

We’re happy to share our nascent thoughts surrounding the cultural, managerial and engineering benefits of Agile on Wall Street at Agiles2008 – the first ever Latin American conference on Agile Development methodologies.

Our goal is to raise awareness of Agile for projects heavy in non-functional requirements.

A PDF-formatted presentation can be downloaded here. As always, we welcome your comments or suggestions by email at techdoer@gmail.com.

Standing Up For Agile – People, Process, Technology

In my previous post, I suggested that Agile’s success is rooted in the creative responses that result from embracing continuous feedback and satisfying the right customer needs.

In this third and final part to the Standing Up For Agile series, I’ll address Agile’s current criticisms and also take a look at the emerging challenges in software development.

The NaySayers

Since their inception, agile methodologies have been heavily criticized by industry practitioners. Common complaints include the following:

  • lack of discipline
  • lack of focus towards non-functional requirements
  • doesn’t work with teams composed of junior-level people
  • doesn’t support thorough architectural design and planning
  • makes scheduling impossible

As a practitioner of agile methodologies, I can relate to these concerns. The non-functional high-performance requirements on Wall Street represent the types of complex requirements that naysayers claim are not adequately addressed by agile software development efforts. In addition, it is true that junior-level people, lacking the experience and knowledge to make quick decisions in a constantly evolving process, add significant risk to the overall result.

Perhaps the real challenge in succeeding with Agile, or any software development process, is understanding the true characteristics of the people, process and technology in the problem and solution domains to be able to tailor an approach that inherits the appropriate elements from the many software development methodologies.

Moving forward, Agile’s popularity over existing methodologies will continue to grow because agile practices are purposely designed to address the key differences between software development versus other engineering disciplines.  The same cannot be said for predictive methodologies like the Waterfall approach.

Standing Up For Agile

Agile acknowledges that many stakeholders don’t know what they want until they see what they don’t want, or see what they would like more of.

Frequent deliverables give stakeholders something to build on.

Agile acknowledges that requirements change.

Test-driven development focuses junior-team members and provides regression testing capabilities for everyone.

Continuous integration synchronizes parallel efforts, minimizes the time between failures, and builds team morale and momentum by promoting an always-buildable system.

A system metaphor serves as a figurative construct that unites stakeholders’ understanding of the problem and the solution behind it.

No overtime keeps team members productive, lucid and motivated.

Continuous customer interaction ensures the right customer needs are solved.

Agile, like computer science, acknowledges that complexity can be addressed with divide-and-conquer approaches, by treating development as a series of releases, each consisting of multiple iterations.

Self-organization attracts the best programmers and project managers who tend to avoid command-and-control organizations.

Daily meetings promote accountability throughout the project’s life ensuring all team members carry their weight.

Refactoring addresses quality issues resulting from minimal planning and promotes frequent deliverables.

Pair programming drives up productivity and quality of code.

Collective code ownership promotes coding standards, improving code maintainability, facilitating moving people around and pair programming.

Looking Ahead

For better or worse, the reality is that more and more software development is occurring in virtual teams.  For Agile to continue to grow and mature it must successfully address the challenges of software development in these distributed settings.

This concludes our series Standing Up For Agile.  For more information please contact me at techdoer@gmail.com.

Standing Up For Agile – Continuous Change

In my previous post, I pondered whether the overwhelming marketing hype surrounding the Agile movement has tainted its early success and diluted its purpose.  I don’t believe this is the case.  Agile methodologies are very much at the forefront of software development and should continue building on this early success.

In part two of the Standing Up For Agile series, I’ll present the key elements behind Agile’s success in the early 21st century.

Thriving on Change

Agile’s success is rooted in its ability to embrace change and thrive on it.  Agile teams act as ‘engines’ whose fuel is the continuous feedback loop coming from daily meetings, changing stakeholder needs, continuous testing, and continuous integration to name a few.  In his book, Becoming a Technical Leader, Gerald Weinberg defines an effective MOI model for leading change and solving problems.  This model is built on Motivating and justifying action, Organizing to support action, and Innovating towards the desired outcome.

In some respects Agile methodologies represent an instance of Weinberg’s model.  Let me explain how.

Motivation

Constantly changing stakeholder requirements can certainly demotivate team members.  How does a software development methodology promote change while keeping managers, developers, and testers motivated?  The key seems to be in minimizing the time required to discover and react to change.  Agile methodologies promote:

  • Daily Meetings – motivate through accountability
  • Continuous Integration – synchronizes parallel efforts and  keeps the system buildable at all times
  • Test-Driven Development – provides quick pulse on progress
  • Frequent Customer Interaction – keeps everyone focused on the right set of needs
  • No Overtime – keeps team members productive, lucid, and motivated.

These elements help mitigate the risk that progress will drift too far off target and, more importantly, ensure that team progress is always visible and focused on addressing the true needs of the customer. The same cannot be said for the heavyweight upfront planning that occurs in predictive methodologies like Waterfall or RUP, where the lack of frequent stakeholder feedback only increases the probability that the plan is not aligned with the most relevant needs of the customer.

Changing scope and requirements can demotivate team members if they conclude that a large portion of completed work was time wasted.  Agile prevents this by collapsing the time required to discover and adjust to change and instead motivates by ensuring team efforts are always addressing the most relevant needs of the customer, which only increases the chances for customer satisfaction.

Organization

Agile promotes small, self-organized, self-empowered teams for good reason. They represent an efficient and effective organization of disciplines that can best satisfy customer needs. Agile represents a branch of nonlinear management and thus inherits similar criticisms, including:

  • Only works with senior-level developers
  • Requires too much cultural change to adopt
  • Can lead to more difficult contractual negotiations

These are legitimate concerns that reinforce the notion that implementing agile software development requires a lot of work.    Both Agile and nonlinear management implicitly tell us that in this era of knowledge work, where no hierarchy exists between managers and programmers, managers must adapt and evolve to promote and empower self-organizing teams within their organizations.

Innovation

Weinberg’s MOI model also suggests that innovation depends on the flow of ideas, understanding of the problem, and ensuring quality. Agile’s promotion of daily meetings, frequent customer interaction, no overtime, pair programming, moving people around, continuous integration and test-driven development effectively addresses all of these aspects of innovation.

With daily meetings and moving people around, team members continuously challenge and stretch their individual and collective understanding of the problem and solution domains. With frequent customer interaction, team members are educated on the subtle characteristics of the problem domain, knowledge typically locked in the heads of domain experts, which increases the chances of an effective solution and overall customer satisfaction. Promoting no overtime is a good way to keep team members lucid and motivated during productive hours. Test-driven development promotes higher-quality deliverables by focusing developers and testers on the solution’s ability to satisfy requirements at any point during the project lifecycle.

With these elements in place, agile teams are empowered to continuously address the most relevant functional and non-functional requirements in the most creative and effective way.

In part 3 of the series Standing Up For Agile, I’ll cover the criticisms of agile methodologies, as well as the emerging challenges in software development and how Agile can effectively address them.

Standing Up For Agile

The Techdoer Times is going global, Buenos Aires, Argentina to be exact.  I’ll be giving a presentation at the first ever Latin American conference on Agile Software Development Methodologies in late October.

Introduction

It takes a lot of effort to be Agile, yet the whole notion of a conference dedicated to Agile software development first caused me to raise doubts about its continued importance. Did the Agile movement merely rebrand common elements that already existed and should be part of any software development approach, or did it pave the way for a new way of thinking about software development? I’m sure I’m not the first software development practitioner who has tuned out the nauseating hype and misguided support for Agile methodologies in recent years. During this time, businesses and individuals used Agile’s popularity as a marketing tool to promote their reinvention and ability to keep up with the times. It was only a matter of time before these organizations confronted the reality that stand-up meetings alone couldn’t bring the success sought through Agile’s adoption.

Agile Manifesto

Seven years ago, the thought of teaching the benefits of Agile software development would have evoked real passion from me, and perhaps from many of you as well. Back then, the Agile Manifesto provided a much-needed antidote to the fear that the dot-com collapse would bring with it a return to less empowering and less effective software development approaches.

New Era of Software Delivery

The reality is that the Agile movement identified and continues to promote key elements that are extremely effective in this new era of software delivery. This new era is defined by three key characteristics. First, software delivery attracts a wider audience composed of creative and business disciplines, not just the traditional engineering/management specialists. Second, software standards and technology have evolved to permit rapid assembly and delivery of “mashups”, as we’ve become accustomed to in the Web 2.0 world. For better or worse, many non-internet-related software schedules are held accountable to this standard. Third, the historic failures of software projects remain fresh in the minds of practitioners.

Agile software development may have rebranded long-standing good software practices, but it also introduced many of its own, and more importantly, clearly communicated them to a wider audience.

This three part series will present the benefits delivered by Agile software development methodologies in recent years and suggest ways it can continue to address software development issues in the near future.   Part 2 will cover the importance of embracing change in the agile development lifecycle and part 3 will address the criticisms of agile development as well as restate the key elements of agile moving forward.

Clarifying Requirements – Verification vs. Validation

In part 1 I discussed how user needs are the core of a good requirement.  The restaurant example  highlighted the key differences between needs, features and requirements.  In part 2 I showed how features, or the range of options and constraints across people, process and technology, will allow analysts and engineers to thread these needs through the requirements specification process.  In part 3, I’ll talk about the categories of a requirement, the difference between their validation and verification and conclude with a brief summary of the linguistic features behind a good requirement.

First, it is important to distinguish between user requirements and system requirements.  User requirements are typically written without any reference to a solution, technology or implementation approach.  Instead they are centered on the needs of the user in the problem domain, as well as the characteristics and context behind these needs.  For example, a requirement may incorporate elements from an overarching project vision statement.

System requirements, on the other hand, contain references to technologies and solutions but ultimately should be written so that the functional behavior and non-functional characteristics of the solution are specified without describing how they should be implemented. Use cases, for example, are system requirements rooted in the solution domain that describe how the system is used and how it should behave and respond to user input.

Requirements Pyramid – http://www.ibm.com

System requirements can also be classified as functional or non-functional in nature.  This distinction is an ongoing source of much confusion.  Functional requirements describe the behavior of the system whereas non-functional requirements describe the characteristics of this behavior (click here for examples of non-functional requirements from our Agile on Wall Street presentation).   Indicating that a system should first authenticate a user’s credentials before allowing him to proceed to the reservation booking page is an example of a functional requirement.   Indicating that this process should take no longer than one second is an example of a non-functional requirement.
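The booking example above can be sketched as two separate checks, one per requirement type. Everything in this sketch (the `authenticate` function, the in-memory credential store, the one-second budget) is hypothetical, invented to mirror the example:

```python
import time

# Hypothetical stand-in for the credential check described above.
def authenticate(username, password):
    valid_users = {"alice": "secret"}  # a real system would query a user store
    return valid_users.get(username) == password

# Functional check: does the system behave correctly?
def verify_functional():
    return authenticate("alice", "secret") and not authenticate("alice", "wrong")

# Non-functional check: does that same behavior meet its one-second latency budget?
def verify_non_functional(max_seconds=1.0):
    start = time.perf_counter()
    authenticate("alice", "secret")
    return (time.perf_counter() - start) <= max_seconds
```

Note that the two checks exercise the very same behavior; only the question being asked of it differs, which is the heart of the functional/non-functional distinction.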

Verification vs. Validation

The differences between verification and validation in the software context are similar to those between efficiency and effectiveness in the business context. For our purposes, validation refers to ensuring the right needs are being met.  In part 1, I discussed the importance of correctly defining the user group from which needs will be elicited.   Successful software projects will prioritize the needs of this group so they can answer the validation question – Are we doing the right thing?

Verification, on the other hand, refers to the process of ensuring the correct implementation of these needs.  In other words, verification answers the question – Are we doing the thing right?

Any project that begins the software development phase with a validated set of requirements, and finishes the software deployment phase by verifying this set, is on the brink of satisfying customer demands.

Linguistic Characteristics

Quite simply, any stated requirement should satisfy the following criteria: it should be written so that it is unambiguous, concise, and complete.

This concludes the series on Clarifying Requirements.  If you have any questions or comments on this article please email techdoer@gmail.com.
