A common way to describe requirements on Agile projects is through user story mapping and the user stories it produces. This post will not cover that process, but rather the process of taking an existing set of user stories and leveraging them within our development workflow to ensure that an application is built as accurately and efficiently as possible. To that end, we will set up tools (Rails, RSpec, Capybara, FactoryGirl, and Guard, to be precise) for implementing our app using behavior-driven development. Structuring our work this way gives us much better odds of producing robust, low-defect code that delivers on the requirements we set out to build.
If you would like to skip ahead and look at the resulting code, you can head straight over to the demo repo for this blog post and download everything there.
Imagine a hypothetical social media site for rich cats. Let’s call these wealthy felines “Cash Cats”. We can safely assume a very short list of user stories that will be relevant regardless of the direction that the app takes.
- As a Cash Cat, so that I can use the site's account functionality, I can sign up for a Cash Cats account when I first visit the site.
- As a Cash Cat, so that I can get back to my account information, I can log into my Cash Cat account.
- As a Cash Cat, so that I can keep track of my growing wealth, I can record my cash when logged in.
Let's start out by laying down a fresh Rails app:

```shell
$ rails new cash_cats
```
Then add the testing gems to the Gemfile:

```ruby
group :development, :test do
  gem "rspec-rails"
end

group :test do
  gem "capybara"
  gem "capybara-webkit"
  gem "database_cleaner"
  # For dummy data
  gem "factory_girl_rails"
end
```
From the command line, we bundle, set up RSpec, and remove the (now) unused `test` directory:

```shell
$ bundle
$ bundle exec rails generate rspec:install
$ rm -rf test
```
Next, modify the generated Rails spec helper (`spec/rails_helper.rb`) to use both Database Cleaner and Capybara Webkit. The Database Cleaner boilerplate shown below can be found in that project's README:
```ruby
RSpec.configure do |config|
  # Other stuff.

  config.use_transactional_fixtures = false

  config.before(:each) do |example|
    # Truncation for JavaScript-driven specs, transactions otherwise.
    DatabaseCleaner.strategy = example.metadata[:js] ? :truncation : :transaction
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }
  end

  # Maybe some more other stuff.
end
```
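Since the `:js` metadata check above implies JavaScript-driven specs, the spec helper also needs Capybara pointed at the Webkit driver. A minimal sketch, assuming `capybara-webkit` is in the bundle:

```ruby
# spec/rails_helper.rb (excerpt)
require "capybara/webkit"

# Run any spec tagged with js: true through capybara-webkit
# instead of the default rack_test driver.
Capybara.javascript_driver = :webkit
```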
At this point, running `rake` from the root of the project should produce output indicating that RSpec ran, albeit with `0 examples` as the result. One last bit of cleanup before we move on is to update the generators in the application config so that they use RSpec instead of Minitest:
```ruby
class Application < Rails::Application
  # …a bunch of other stuff.

  config.generators do |g|
    g.hidden_namespaces << "test_unit"
    g.test_framework :rspec, fixture: false
  end

  # blah blah blah more stuff.
end
```
Now, when we run a generator that creates a test, it will use RSpec and FactoryGirl instead of Minitest and fixtures. Additionally, we hide the `test_unit` generator namespace so that it doesn't muddy up the help output when `rails g` is run without any arguments.
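As a quick sanity check (the model name `Cat` here is just a hypothetical example, not part of the app we are building), generating a model should now produce an RSpec spec and a FactoryGirl factory rather than a Minitest file and a fixture:

```shell
$ bundle exec rails g model Cat name:string
# Expect the generated files to include something like:
#   app/models/cat.rb
#   db/migrate/<timestamp>_create_cats.rb
#   spec/models/cat_spec.rb
#   spec/factories/cats.rb
```

You can remove the experiment afterwards with `bundle exec rails destroy model Cat`.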
To test-drive this cat party, we will write out a handful of feature specs, then work on getting them to pass. A method I have found helpful when working with a fairly well-defined set of features is to write out a number of them ahead of time as placeholder specs. This acts both as a to-do list of sorts and as an indicator of progress. I also find that it helps me keep a high-level picture of the current application component in mind.
Let's make two feature groups:

```shell
$ bundle exec rails g rspec:feature login_and_authentication
$ bundle exec rails g rspec:feature recording_munny
```
…and add a handful of specs to them:

```ruby
RSpec.feature "Login And Authentication", type: :feature do
  it "can register for an account"

  context "after creating an account" do
    it "can log into my account"
  end
end
```

```ruby
RSpec.feature "Recording Munnies", type: :feature do
  context "when logged in" do
    it "can add munny to my total and show it off on my profile"
  end
end
```
You may notice that the format of these specs fairly closely matches that of the user stories. This is intentional: the goal is to map the specs back to the stories as closely as possible. Running `rake` should now display three pending specs.
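For reference, with `--format documentation` enabled in `.rspec`, the pending run reads almost like the user stories themselves (abridged; exact wording varies between RSpec versions):

```
Login And Authentication
  can register for an account (PENDING: Not yet implemented)
  after creating an account
    can log into my account (PENDING: Not yet implemented)

Recording Munnies
  when logged in
    can add munny to my total and show it off on my profile (PENDING: Not yet implemented)

3 examples, 0 failures, 3 pending
```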
With our mini feature suite in place, we are just about ready to drive full-speed ahead toward Internet-dominating MVP-dom. But first, let's stop and make one final improvement to our test cycle. Running `rake` manually is great and all, but wouldn't it be even better if we could automate that a bit? Let's add `guard-rspec` to the mix to do just that:

```ruby
group :development, :test do
  gem "guard-rspec", require: false
end
```
Now bundle, initialize the Guard gem, and start it up:

```shell
$ bundle
$ bundle exec guard init
$ bundle exec guard
```
If all goes as expected, saving a spec file should now trigger a test run for only that file. Keep in mind that this works only for files suffixed with `_spec`, which is the default for generated specs. Give it a try by opening one of the two feature spec files and saving it. There are a number of other Guard settings that can be tweaked to focus on failed tests, use Spring, and so on, but we will skip those for the sake of this walkthrough.
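For orientation, the `guard init` step above generates a Guardfile whose core looks roughly like this (a sketch; the exact watch rules depend on the guard-rspec version):

```ruby
# Guardfile (excerpt)
guard :rspec, cmd: "bundle exec rspec" do
  # Re-run a spec whenever it is saved — this is where the
  # `_spec` suffix requirement comes from.
  watch(%r{^spec/.+_spec\.rb$})
  # Map application files to their corresponding specs.
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
  # A change to the helpers re-runs everything.
  watch("spec/rails_helper.rb") { "spec" }
end
```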
That’s it for the first part of this blog series. In the follow-up to this post, we’ll go about implementing the actual code to get these feature specs passing. There’s technically enough in place at this point to allow the reader to continue with the implementation as an exercise. Otherwise, drop by again to see everything come together.