Test-driving Agility into Software

Chris Holland, Director, Engineering, TriNet

The Agile Manifesto was drafted in 2001 by thought leaders in software engineering who met at The Lodge at Snowbird ski resort in the Wasatch Mountains of Utah. The meeting was the culmination of their collective recognition that legacy methodologies had failed to address the ever-present spectre of "change" in business.

Software exists to bring business value into the world. "Change" is arguably a defining characteristic of business and, by extension, of the software that drives it.

While the manifesto did not advocate for any specific methodology, its authors were adept practitioners of several methodologies which aimed to "respond to change" on two fronts:

• From an organizational standpoint, with methodologies such as Scrum.

• From an engineering standpoint, with methodologies such as extreme programming (XP).

Nearly two decades later, many enterprises purporting to have undergone an "agile transformation" still fail to ship software at a sustained pace.

What happened?

Project management ecosystems were quick to put together Scrum seminars and certifications. They sold these products to enterprises with wild promises of highly functional teams shipping superior products.

The Agile Manifesto's 12 principles clearly call out the importance of "continuous attention to technical excellence". Engineering teams, however, failed to recognize the importance of adopting methodologies and practices that would put their software in a position to embrace change.

As a result, we arrived at "agile teams" shipping software iterations rife with "technical debt", with so-called "sprints" turning into "marathons" of failed execution. As technical debt piles up, it becomes exponentially more difficult to change a system without breaking existing functionality.

As I trudged through these challenges over 22 years of shipping software, embracing various principles of XP significantly improved our ability to ship better software on a more sustained basis. Among XP's practices, the most impactful has been software testing.

Let's explore two types of activities:

• Test-driving software, also known as test-driven development (TDD)

• Testing software which has already been written

I consider both activities complementary and of equal importance.

TDD is a software engineering practice. First, we write a failing test representing a small unit of functionality, then we write just enough code to make the test pass. Then we refactor. This workflow is also known as "Red-Green-Refactor".
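
As a minimal sketch of that loop, consider the following Python example using the built-in unittest module; the discount function, its test, and the whole domain are hypothetical illustrations rather than code from any real project:

```python
# Red: this test is written first, before apply_discount exists, and fails.
# Green: the function below is just enough code to make the test pass.
# (The discount example is purely illustrative.)
import unittest

def apply_discount(total, percent):
    return round(total * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

if __name__ == "__main__":
    # Refactor: with the test green, internals can be cleaned up safely.
    unittest.main()
```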

At first this may seem counter-intuitive. Why would we write a test for code which has not yet been written? Of course it's going to fail! In practice, however, this gets us into a highly productive state of "flow", because we can focus on one thing, and one thing only: making that test pass. Once we're done writing the code, we no longer have to ask ourselves whether or not our code works as intended: we *know* it works as intended, as evidenced by the fact that our test passes.

Without this discipline, we are left guessing whether our code works, and we fall back on tedious manual testing.

This process can easily be repeated across all of the components of an application. Almost effortlessly, we progressively build a full regression-test suite as a by-product of writing software. The suite can be run continually and automatically: should we introduce a bug, a test fails immediately and tells us precisely what broke.

A big part of TDD is "refactoring". Simply put, we constantly enhance components such that they might be more reusable across an application. In a TDD workflow, refactoring is generally risk-free and effortless because our tests help us catch and fix any breaking change. Better component reusability is key to averting technical debt, which in turn makes software easier to adapt to evolving business needs.
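
A rough sketch of what such a refactoring might look like, continuing the hypothetical example above: duplicated currency rounding is pulled into one reusable helper while the existing tests stay untouched and keep passing, which is what makes the change low-risk.

```python
# Refactoring sketch: behavior and tests are unchanged, only internals move.
import unittest

def _to_money(value):
    # Extracted helper, now reusable anywhere amounts are rounded.
    return round(value, 2)

def apply_discount(total, percent):
    return _to_money(total * (1 - percent / 100))

def add_tax(total, rate):
    return _to_money(total * (1 + rate / 100))

class MoneyTests(unittest.TestCase):
    # These tests existed before the refactor and still pass after it.
    def test_discount(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_tax(self):
        self.assertEqual(add_tax(100.00, 8.5), 108.50)

if __name__ == "__main__":
    unittest.main()
```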

With all this said, code written in a test-driven way is only as good as our tests, and those tests are only as good as our current understanding of a given problem. We will miss some things. And that is perfectly okay.

This is where "testing software which has already been written" comes in. Software engineers can do this, but at its core it is a quality assurance or quality engineering (QE) function. QE's role is to verify that business objectives are being met and to discover less obvious cases. When QE finds a bug, a software engineer can track down the root cause, replicate the errant behavior as a failing test, modify the code to make that test pass, then ship the fix back to QE. In turn, QE can verify that the issue was resolved. The test written by the software engineer ensures that this specific bug does not get reintroduced at a later point: it becomes part of the regression suite.
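
A sketch of that loop, again using the hypothetical discount example: the bug and its fix are invented for illustration, but the shape of the workflow is the point. The failing test reproduces QE's finding, the fix makes it pass, and the test then lives on in the regression suite.

```python
# Hypothetical QE report: a discount above 100% produced a negative total.
import unittest

def apply_discount(total, percent):
    # Fix: clamp the discount so the total can never go negative.
    percent = min(max(percent, 0), 100)
    return round(total * (1 - percent / 100), 2)

class DiscountRegressionTest(unittest.TestCase):
    def test_discount_over_100_percent_does_not_go_negative(self):
        # This test failed against the old code, reproducing QE's finding;
        # it now passes and keeps the bug from being reintroduced.
        self.assertEqual(apply_discount(50.00, 120), 0.00)

if __name__ == "__main__":
    unittest.main()
```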

Experience shows that if I test something once, I'll be testing it again. As such, automation matters. When working with QE, I tend to carve out at least half of their time for writing automated browser tests with Selenium, to avoid compounding manual testing as we introduce more features.
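
A minimal sketch of such a browser check using Selenium's Python bindings; the URL, form field names, and expected page title are placeholders rather than a real application, and a local browser driver is assumed to be available.

```python
# Automating a previously manual login check so it can run on every build.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes Chrome and a matching driver are installed
try:
    driver.get("https://example.test/login")
    driver.find_element(By.NAME, "email").send_keys("qa@example.test")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    # The assertion replaces a human eyeballing the post-login page.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```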

QE's testing complements software engineers' test-driven development process in that it uncovers missing assumptions and incomplete or erroneous understandings. In my experience, software engineers ship about a tenth as many bugs to QE when following a TDD workflow, and the bugs that are uncovered are generally more... "interesting". Finally, I find that regression bugs become extremely rare.

In conclusion, if we are to execute on the Agile Manifesto's vision, such that we may adequately respond to change in the software that drives business, it is critically important that, per the manifesto's principles, we commit ourselves to "technical excellence". Achieving this excellence means constantly improving our software for increased reusability of its components, as well as shipping new features without breaking existing functionality.

In my experience, a blended approach that embraces test automation, test-drives new code, and tests existing code has been the most impactful game-changer in our ability to sustainably ship software.

And in the end, this should be the true measure of agility.
