Automated acceptance tests to overcome requirements misunderstandings in distributed teams

A year ago, I was part of an integration team whose role was to implement new functionality in a travel web site based on a back end that was developed overseas. The functionality provided users with discounts when they selected more than one product in their shopping basket. It may seem simple at first glance, but the business rules surrounding these discounts were anything but simple. They were based on dates (discounts were only available on certain dates, for instance), product combinations (two products combined may give a discount, but adding a third product may trigger another discount, in which case the first one must be discarded) and destinations (some discounts were available only for certain destinations).

Before starting development, we were given a business analysis document that described these business rules, and a QA team checked whether or not we had implemented them correctly. This functionality was crucial for the company, and we needed to be 100% sure that it was bug free… OK, say 99% (perfect does not exist).

As we started to deliver the first iterations, the QA team started to raise bugs almost every day, and these bugs had four causes:

  1. Misunderstanding of the business rules by the QA team
  2. Misunderstanding of the business rules by the development team
  3. Bugs in the discount calculations at the back end level
  4. Data issues (all the rules were configured in the back end, so if someone changed them we would naturally get different results, and sometimes QA reported these as bugs)

The back end team was delivering a new version every week that was supposed to contain bug fixes and new functionality. However, it also contained a few, not to say many, regressions.

This situation was frustrating. We were under the impression that no progress was being made, and fingers started pointing in all directions: the development team blamed the overseas team, the overseas team blamed the business analysts, and the latter blamed the QA team.

We found out that the situation was due to one main factor: a lack of shared understanding of the requirements.


We decided, in a way, to “automate” the communication and comprehension of the requirements.

A senior developer and I started to write a kind of domain-specific language (DSL). Basically, it was a set of utility classes that abstracted our domain object model and the interaction with the back end. These classes had methods such as ‘AddFlightTo(string destination, DateTime departureDate, DateTime returnDate)’ and ‘AddHotel(…)’, and the most important method was ‘CalculateDiscount’. In the meantime, the QA team, along with the business analysts, were asked to write the requirements as acceptance tests rather than business rules. We (the developers) asked them to provide us with tables whose columns contained input values and the expected results expressed as numbers (discount percentage and discount amount). By expressing the requirements this way, we reduced the room for misinterpretation of the business rules.
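To make that description concrete, here is a minimal C# sketch of what such a DSL wrapper could have looked like. Only ‘AddFlightTo’, ‘AddHotel’ and ‘CalculateDiscount’ come from the text above; the ‘Booking’ class name, the ‘Discount’ result type, the placeholder prices and the simple two-product rule are illustrative assumptions (the real rules lived in the overseas back end).

```csharp
using System;
using System.Collections.Generic;

// Result of a discount calculation, as expressed in the BA/QA tables:
// a percentage and an absolute amount.
public class Discount
{
    public decimal Percentage { get; set; }
    public decimal Amount { get; set; }
}

// Hypothetical sketch of the DSL wrapper; in the real system these calls
// were forwarded to the third-party back end.
public class Booking
{
    private readonly List<string> products = new List<string>();
    private decimal total;

    public Booking AddFlightTo(string destination, DateTime departureDate, DateTime returnDate)
    {
        products.Add("Flight:" + destination);
        total += 300m; // placeholder price; real prices came from the back end
        return this;
    }

    public Booking AddHotel(string destination, int nights)
    {
        products.Add("Hotel:" + destination);
        total += 100m * nights; // placeholder price
        return this;
    }

    public Discount CalculateDiscount()
    {
        // Illustrative rule only: 10% off when the basket combines two or
        // more products. The actual rules depended on dates, combinations
        // and destinations, and were configured in the back end.
        decimal pct = products.Count >= 2 ? 10m : 0m;
        return new Discount { Percentage = pct, Amount = total * pct / 100m };
    }
}
```

The fluent style (each method returning the booking itself) let a test read almost like the analysts’ tables.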

Using the DSL made writing automated tests far easier and much less time-consuming. We could write a bunch of scenarios according to the acceptance tests provided by the BA and QA teams and automate them using MSTest. These tests were scheduled to run as part of our nightly build, which made detecting regression bugs much easier, and we no longer deployed a new version of the software that did not pass all the tests. (Note that we already had unit tests, but they did not validate these requirements, as the requirements were implemented in the third-party back end.)
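A row of an acceptance-test table could then be turned directly into an automated test. Below is an illustrative sketch using MSTest’s data-driven attributes; the ‘BookingStub’ class, the destinations, dates and expected figures are all invented stand-ins for the real DSL and the analysts’ tables.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Minimal stand-in for the DSL so this example compiles on its own;
// the names, prices and the 10% two-product rule are hypothetical.
public class BookingStub
{
    private int products;
    private decimal total;

    public BookingStub AddFlightTo(string destination, DateTime dep, DateTime ret)
    { products++; total += 300m; return this; }

    public BookingStub AddHotel(string destination, int nights)
    { products++; total += 100m * nights; return this; }

    public (decimal Percentage, decimal Amount) CalculateDiscount()
    {
        decimal pct = products >= 2 ? 10m : 0m;
        return (pct, total * pct / 100m);
    }
}

[TestClass]
public class DiscountAcceptanceTests
{
    // Each DataRow mirrors one line of the BA/QA acceptance-test table:
    // inputs, then the expected discount percentage and amount.
    [DataTestMethod]
    [DataRow("Barcelona", 7, 10, 100)] // flight + 7 hotel nights
    [DataRow("Rome", 0, 0, 0)]         // flight alone: no discount
    public void Discount_MatchesAcceptanceTable(string dest, int nights,
                                                int expectedPct, int expectedAmount)
    {
        var booking = new BookingStub()
            .AddFlightTo(dest, new DateTime(2010, 6, 1), new DateTime(2010, 6, 8));
        if (nights > 0) booking.AddHotel(dest, nights);

        var discount = booking.CalculateDiscount();

        Assert.AreEqual((decimal)expectedPct, discount.Percentage);
        Assert.AreEqual((decimal)expectedAmount, discount.Amount);
    }
}
```

Because each test row maps one-to-one to a table row, a failing test pointed either at a bug or at a disputed requirement, never at an ambiguity.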

We also provided the compiled automated tests to the overseas team, ensuring that they always had the latest version. The back end team started to run them before delivering a new version, discovering regressions and requirement misunderstandings far earlier than before. More importantly, they no longer delivered bugs (especially regression bugs), or at least delivered far fewer regressions, which had an impact not only on productivity, since we always had a relatively stable back end version, but also on the back end team’s reputation. Note that, as a safety net, we validated back end deliveries by running our test battery; if the number of failures was not acceptable, the new version was simply not installed on our servers, avoiding situations where developers and testers were blocked by an unstable version.

Once all this was in place, we started to see less frustration and more progress on the project. Of course, we still had some communication issues and occasional misunderstandings of business rules, but nothing comparable to the situation before the automated acceptance tests. We basically reduced the number of bugs from about a dozen per delivery to something like two or three; it was a huge shift.


Even though we were not a TDD-oriented team and we never practiced acceptance-test-driven development, we found that a test is much more efficient at validating our understanding of the requirements than lengthy discussions and meetings. Automating these tests took two developers less than two weeks of effort but saved us much more than that investment; it also restored confidence between the different technical stakeholders.

The acceptance tests became the contract that bound the teams together, and automating them removed any room for misunderstanding. Either a test passed and we could deploy, or it failed and we had either to fix the bug or to discuss the test itself with the QA and BA teams. In both cases, the situation was clear.

