Unit vs. Acceptance Testing (Part 2)
By now, my post on unit tests has influenced you so much that you have 100% unit test coverage. And yet, some functional bugs still creep in and you still get the dreaded “It’s a bug because you didn’t build what I wanted” bug.
So let’s take a look at acceptance tests and how they can help you improve your external quality and your compliance with product requirements.
NOTE: You can find the code discussed in this post at this GitHub repository
Executable Documentation
Consider how requirements typically flow through a team today:
- Your product person writes up a “spec” for some new functionality in the form of wireframes, workflow diagrams and/or text describing what they want you to do.
- The QA person takes the requirements and writes up a test plan, probably in the form of a spreadsheet with steps, expected results, an empty spot to fill in with the actual results, and a PASS/FAIL status.
- You – the engineer – take the specs and write the code, using TDD the entire time.
- If you’re lucky, you or your QA person updates the existing automated tests – most likely JUnit tests using Selenium WebDriver to simulate user interaction – to test your changes.
And yet, with all of that work there are bound to be bugs caused by feature confusion: someone in this chain is going to misunderstand a requirement. Best case scenario, your product person will catch the misunderstanding and you can “fix it” during the sprint. Worst case, a customer calls to complain that the new feature isn’t working as expected and now you have a high-priority production bug.
But why? Firstly, because the requirements had to go through several transformations: from spec –> test plan –> code –> JUnit test. “Don’t repeat yourself” isn’t only a good principle in writing code, it’s also a good principle in working effectively as a team. Every time someone writes their own version of an understanding that should be universal, you’re inviting confusion to set in.
Secondly, even with traditional automated web tests there’s no easy way to map a failed test (e.g., an Element not found exception in testUserSearchesForFooOnHomePage) to a requirement (e.g., “As a web user, I can search for products by name to browse the current product catalog”). Again, you’re stuck having to do translations: the QA person gets the failed test report, manually walks through the steps, and has to file a bug.
The solution: Gherkin!
[code language=”text”]
Feature: Some terse yet descriptive text of what is desired
  In order to realize a named business value
  As an explicit system actor
  I want to gain some beneficial outcome which furthers the goal

  Scenario: Some determinable business situation
    Given some precondition
    And some other precondition
    When some action by the actor
    And some other action
    And yet another action
    Then some testable outcome is achieved
    And something else we can check happens too
[/code]
Gherkin is a structured, domain-specific language suited for describing how a system works by enumerating preconditions, actions, and results for a given scenario.
The main benefit of Gherkin is that it is both structured (we’ll see the true benefits of that later) and very readable. It’s so readable that once it’s created, anyone can run through the steps and validate that the system is acting as expected. It’s documentation that you can execute!
Automated Acceptance Testing
So your product person starts writing Gherkin-styled specs and your QA person uses them as a test plan, so there’s nothing for you to do. Right?
WRONG! No soup for you, 2 years!
TDD isn’t limited to just unit tests; it actually begins with automated acceptance tests. The very first failing test you write should be an acceptance test based on the product requirements. The benefit of TDD with automation is immediate feedback, which is what drives the highest quality. But while unit tests help you achieve internal quality (i.e., simple cohesive objects, clean interfaces, removal of duplication), automated acceptance tests help you achieve the highest external quality (see this excellent post for more information on external vs. internal quality).
So how can you create automated acceptance tests from these Gherkin-defined requirements? Cucumber, cucumber, cucumber. To the bat cave!
src/test/resources/stories/product_search.feature
[code language=”text”]
Feature: Product search
  As an anonymous user
  I want to search on Amazon’s homepage
  So that I can find books about clean coding

  Scenario: Searches for a specific book and sees results
    Given an anonymous user
    When I go to Amazon Home Page
    And I search for "The Clean Coder"
    Then the products results page displays a list of results containing product
      """
      The Clean Coder: A Code of Conduct for Professional Programmers
      """

  Scenario: Searches for an author and sees results
    Given an anonymous user
    When I go to Amazon Home Page
    And I search for "Bob Martin"
    Then the products results page displays a list of results containing product
      """
      The Clean Coder: A Code of Conduct for Professional Programmers
      """
[/code]
src/test/java/acceptance/steps/AmazonSearchSteps.java
[code language=”java”]
package acceptance.steps;

import acceptance.pages.AmazonHomePage;
import acceptance.pages.ProductSearchResultsPage;
import cucumber.api.java.After;
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AmazonSearchSteps {

    // Cucumber creates a fresh instance of this class for each scenario,
    // so every scenario gets its own browser and page objects.
    WebDriver webDriver = new FirefoxDriver();
    AmazonHomePage homePage = new AmazonHomePage(webDriver);
    ProductSearchResultsPage resultsPage = new ProductSearchResultsPage(webDriver);

    @Given("^an anonymous user$")
    public void setupAnonymousUser() throws Throwable {
        // No setup needed: an anonymous user is just a fresh, unauthenticated browser session.
    }

    @When("^I go to Amazon Home Page$")
    public void goToAmazonHomePage() throws Throwable {
        homePage.go();
    }

    @When("^I search for \"([^\"]*)\"$")
    public void searchForTerm(String term) throws Throwable {
        homePage.searchForProduct(term);
    }

    @Then("^the products results page displays a list of results containing product$")
    public void assertSearchResultsContainsResultForProduct(String productName) throws Throwable {
        // The doc string (""") in the feature file is passed in as the productName argument.
        resultsPage.assertResultsContainProduct(productName);
    }

    @After
    public void closeBrowser() {
        // Quit the browser after each scenario so Firefox instances don't pile up.
        webDriver.quit();
    }
}
[/code]
With a feature file describing the functionality and step definitions defining what setup, action, or assertion to execute for each step, you have all the pieces you need for automated testing.
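The step definitions stay readable because they delegate all of the Selenium details to page objects (AmazonHomePage and ProductSearchResultsPage). Those classes live in the linked repository; the sketch below is only an illustration of the shape they might take, and the URL, locator, and method bodies are my assumptions rather than what the repo necessarily uses.
src/test/java/acceptance/pages/AmazonHomePage.java (sketch)
[code language=”java”]
package acceptance.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Illustrative page object: it hides the "how" of driving the home page behind
// intention-revealing methods so the step definitions read like the Gherkin steps.
// The real class in the linked repo may use different locators.
public class AmazonHomePage {

    private final WebDriver webDriver;

    public AmazonHomePage(WebDriver webDriver) {
        this.webDriver = webDriver;
    }

    // Open the Amazon home page in the browser
    public void go() {
        webDriver.get("http://www.amazon.com");
    }

    // Type the term into the search box (the locator name is an assumption) and submit the form
    public void searchForProduct(String term) {
        webDriver.findElement(By.name("field-keywords")).sendKeys(term);
        webDriver.findElement(By.name("field-keywords")).submit();
    }
}
[/code]
ProductSearchResultsPage would follow the same pattern: an assertResultsContainProduct(String) method that inspects the rendered results (even something as blunt as checking webDriver.getPageSource() for the expected title) and fails the scenario when the product isn’t there.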
If you downloaded the Gradle Java project I linked to, you can run the gradle cucumber task, which runs the cucumber-jvm acceptance test runner.
[code language=”text”]
jmistral:~/projects/acceptance-testing-blog (master)$ gradle cucumber
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:jar UP-TO-DATE
:assemble UP-TO-DATE
:compileTestJava UP-TO-DATE
:cucumber
Feature: Product search
As an anonymous user
I want to search on Amazon’s homepage
So that I can find books about clean coding
Scenario: Searches for a specific book and sees results # stories/product_search.feature:6
Given an anonymous user # AmazonSearchSteps.setupAnonymousUser()
When I go to Amazon Home Page # AmazonSearchSteps.goToAmazonHomePage()
And I search for "The Clean Coder" # AmazonSearchSteps.searchForTerm(String)
Then the products results page displays a list of results containing product # AmazonSearchSteps.assertSearchResultsContainsResultForProduct(String)
"""
The Clean Coder: A Code of Conduct for Professional Programmers
"""
Scenario: Searches for an author and sees results # stories/product_search.feature:15
Given an anonymous user # AmazonSearchSteps.setupAnonymousUser()
When I go to Amazon Home Page # AmazonSearchSteps.goToAmazonHomePage()
And I search for "Bob Martin" # AmazonSearchSteps.searchForTerm(String)
Then the products results page displays a list of results containing product # AmazonSearchSteps.assertSearchResultsContainsResultForProduct(String)
"""
The Clean Coder: A Code of Conduct for Professional Programmers
"""
BUILD SUCCESSFUL
Total time: 40.639 secs
[/code]
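If you would rather hook the features into a plain JUnit run (so your IDE or CI server picks them up without a custom Gradle task), cucumber-jvm also ships a JUnit runner. Here is a minimal sketch; the class name is mine, and the feature/glue paths assume the layout shown above.
src/test/java/acceptance/RunAcceptanceTests.java (sketch)
[code language=”java”]
package acceptance;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

// Hypothetical JUnit entry point: JUnit delegates to Cucumber, which reads the
// feature files under "features" and matches each step against the step
// definitions found in the "glue" package.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/stories", // where the .feature files live
        glue = "acceptance.steps")                // package containing the step definitions
public class RunAcceptanceTests {
}
[/code]
With that in place, an ordinary test run (gradle test, or the JUnit runner in your IDE) should execute the same scenarios, assuming cucumber-junit is on the test classpath.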
Some Parting Thoughts…
One of the axioms I left you with in the first part of this series was: “If it’s easy to test, it’s easy to write, and leads to a better design”. And you know what, it also applies to acceptance tests and the effect they have on your coding.
Bad coding practices produce bad code, but so do convoluted and complex product requirements. By forcing you and your product person to clearly define users, preconditions, actions, and results early on, you’re more likely to produce a product your users will enjoy because it will be simple to use. And an easy-to-use product is an easy one to develop.
Until next time, I hope you enjoy this series and keep on testing!