swalchemist

Tag Archives: TDD

Beyond Scrum

28 Friday Jun 2024

Posted by Danny R. Faught in technology

≈ 5 Comments

Tags

Kanban, scrum, TDD, XP

At the last few companies I worked for, my organizations happened to follow a similar agile methodology. It worked really well for us, but I don’t have a name for it. Maybe you can help name it. For now, I call it “Beyond Scrum.”

I’ve followed the Scrum methodology on a handful of teams in the past, including all of the ceremonies (known colloquially as the daily standup, sprints, sprint planning, the demo, and the retro). I was even a certified scrum master.

But over time, some of the ceremonies seemed more useful than others. Sprints (a.k.a. iterations) in particular created friction without providing any benefit. At the end of a sprint, there would almost always be stories that either had to be split, taking partial “credit” in the current sprint and the rest in later sprints, or pushed wholesale into the next sprint without getting any recognition for the work done in the current sprint. We could get better at fitting stories into one iteration, but you know what makes more sense? Don’t arbitrarily chop our work up into sprints and then fret about whether the last few stories fit into the time at the end of the sprint.

My recent teams evolved into more of a Kanban sort of flow, where we would finish a story then pull the next story off of the backlog. There was no sprint backlog, just the overall project backlog. I don’t really know how much of Kanban we followed, because I don’t know much about Kanban. But I do like the continuous flow of starting a new story as soon as people were available to take on something new. Caveat: one team used a project management tool to groom at least a week’s worth of work at a time, which looked on the surface like a 1-week sprint, but the reality was that they always wanted at least a little more than a week of stories so they wouldn’t run out and have to groom more stories mid-week. Any stories that weren’t finished at the end of the week would flow smoothly into the next week with no anxiety about what sprint it was a part of.

Speaking of stories, story sizing seemed to lose its value. With no sprints, we didn’t need to calculate velocity in order to know how many stories we could cram into a sprint. Discussing size may have had value in terms of making sure the team understood the scope of a story, but ultimately the teams didn’t care what T-shirt size or Fibonacci number was put on each story. What did still matter, though, was writing good stories that had clear acceptance criteria and weren’t too large. Large stories that couldn’t be finished in a week were difficult to manage. But writing small stories was often difficult to do. Sometimes we’d split a story after seeing that it was taking a long time, or we’d just march on for two or three weeks and get it done.

My last few teams were working on infrastructure software–something that a product owner might have difficulty relating to customer-visible features. So we found that the product owner role wasn’t very active or useful. Typically we would have a manager or designated person on the team take care of accepting stories, which was often rather informal. Often we wouldn’t even have clearly written acceptance criteria, and the acceptance process was frequently a rubber stamp that didn’t provide any value to the team. In a similar vein, the sprint demo didn’t make much sense. We might demo individual stories as needed to facilitate story acceptance, but with the diminished participation of the product owner, we often skipped the demos. In their place, we might demo significant new features or newly introduced technologies to our team as needed. One team had a weekly time slot for this and other technical discussions.

Besides Kanban, another methodology we borrowed from was Extreme Programming (XP). We didn’t strive to follow all of the essential XP elements, so it would be disingenuous to say we were an XP team. But we did follow the most commonly known XP practices, test-driven development and pair programming. Another element of XP is the 40-hour workweek, and we did pretty well at that one too. Many of the other elements were there, like collective code ownership and continuous integration. But not the iterations and not much of the iteration planning.

We kept some of the other Scrum ceremonies. The daily standup was still useful, especially with a remote team. There were experiments with going more lightweight with an asynchronous standup in a chat tool, and in the other direction with adding a daily stand-down. And the retro was still popular, at least once every two weeks but more likely once every week. It wasn’t hard to find recent things to celebrate or improve on.

So there you have it – the elements of “Beyond Scrum” that were remarkably similar at two different companies. Maybe many companies have evolved toward something similar? Let me know if any of this sounds familiar to you.

feature image photo credit: Tom Natt (CC BY-NC 2.0)

Test-driving fizzbuzz in bash

01 Saturday Jun 2024

Posted by Danny R. Faught in technology

≈ 1 Comment

Tags

TDD

Here’s one for the programmers in my audience. At a recent software crafters meetup, someone brought up the fizzbuzz coding exercise, and how funny it would be to code it in bash. Examples of solutions in bash were easy to find, but I didn’t see any that included unit tests. So I tried a test-driven (TDD) solution for fizzbuzz in bash. Here’s how it went.

I updated Bats using homebrew on my Mac. There is now a GitHub organization serving as a home for Bats. I uninstalled an older Bats version I already had, made sure to remove the old tap (“kaos/shell”), and reinstalled from its new home using the brew instructions:

$ brew install bats-core
...
==> Installing bats-core
==> Pouring bats-core--1.11.0.all.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
...

Pay attention to those errors (the “Error:” label was in red on my terminal, but it was buried in a large amount of other log output). I needed to follow the instructions in the error output before I had the new bats-core in my path:

brew link --overwrite bats-core

I also wanted to use the bats-assert library, which depends on bats-support, so I ran:

$ brew tap bats-core/bats-core
$ brew install bats-support
$ brew install bats-assert

The fizzbuzz exercise asks for 100 lines of output, printing out the numbers 1 to 100, with these modifications: if the number is divisible by 3, print “fizz” instead of the number. If the number is divisible by 5, print “buzz”, and if it’s divisible by both 3 and 5, print “fizzbuzz”. Rather than try to process 100 lines of output in each test, I planned to write a function to return one number in the sequence. I started with this test in a test subdirectory:

#!/usr/bin/env bats

load '/usr/local/lib/bats-support/load.bash'
load '/usr/local/lib/bats-assert/load.bash'

setup() {
  source "../fizzbuzz.bash"
}

@test "fizzbuzz 1 returns 1" {
  run fizzbuzz 1
  assert_success
  assert_output 1
}

I created a dummy function in fizzbuzz.bash so the test could run:

#!/usr/bin/env bash

function fizzbuzz {
  # ":" is the bash no-op builtin
  :
}

And now the test does its job:

✗ fizzbuzz 1 returns 1
   (from function `assert_success' in file /usr/local/lib/bats-assert/src/assert_success.bash, line 42,
    in test file fizzbuzz-test.bash, line 12)
     `assert_success' failed

   -- command failed --
   status : 1
   output :
   --

1 test, 1 failure

Getting the test to pass was easy enough:

function fizzbuzz {
  echo 1
}

Not shown below is the satisfying green color on the last line showing 0 failures (it was red before):

$ ./fizzbuzz-test.bash
fizzbuzz-test.bash
✓ fizzbuzz 1 returns 1

1 test, 0 failures

Next I added a test to the test script to triangulate and force a more useful implementation:

@test "fizzbuzz 2 returns 2" {
  run fizzbuzz 2
  assert_success
  assert_output 2
}

The first test passes, and the new one fails as expected. A simple change to the function makes both tests happy:

function fizzbuzz {
  local num="$1"
  echo "$num"
}

Now finally a test that makes it interesting:

@test "fizzbuzz 3 returns fizz" {
  run fizzbuzz 3
  assert_success
  assert_output fizz
}

Some simplistic logic gets the test to pass:

function fizzbuzz {
  local num="$1"

  if [[ "$num" = 3 ]]; then
    echo fizz
    return
  fi

  echo "$num"
}

So we triangulate again:

@test "fizzbuzz 6 returns fizz" {
  run fizzbuzz 6
  assert_success
  assert_output fizz
}

And that drives a full solution for the “fizz” part of the problem:

function fizzbuzz {
  local num="$1"

  if [[ $((num % 3)) = 0 ]]; then
    echo fizz
    return
  fi

  echo "$num"
}

On to the “buzz” part of the challenge:

@test "fizzbuzz 5 returns buzz" {
  run fizzbuzz 5
  assert_success
  assert_output buzz
}

Here’s what the test output looks like now:

$ ./fizzbuzz-test.bash
fizzbuzz-test.bash
✓ fizzbuzz 1 returns 1
✓ fizzbuzz 2 returns 2
✓ fizzbuzz 3 returns fizz
✓ fizzbuzz 6 returns fizz
✗ fizzbuzz 5 returns buzz
   (from function `assert_output' in file /usr/local/lib/bats-assert/src/assert_output.bash, line 194,
    in test file fizzbuzz-test.bash, line 37)
     `assert_output buzz' failed

   -- output differs --
   expected : buzz
   actual   : 5
   --

5 tests, 1 failure

Again I did a simple implementation that would require triangulation to complete:

function fizzbuzz {
  local num="$1"

  if [[ $((num % 3)) = 0 ]]; then
    echo fizz
  elif [[ $num = 5 ]]; then
    echo buzz
  else
    echo "$num"
  fi

}

One more test to triangulate:

@test "fizzbuzz 10 returns buzz" {
  run fizzbuzz 10
  assert_success
  assert_output buzz
}

A little tweak makes the test pass:

function fizzbuzz {
  local num="$1"

  if [[ $((num % 3)) = 0 ]]; then
    echo fizz
  elif [[ $((num % 5)) = 0 ]]; then
    echo buzz
  else
    echo "$num"
  fi

}

Now finally, one more test for the “fizzbuzz” output:

@test "fizzbuzz 15 returns fizzbuzz" {
  run fizzbuzz 15
  assert_success
  assert_output fizzbuzz
}

My solution was unsatisfying, but it worked:

function fizzbuzz {
  local num=$1
  if [[ $((num % 3)) != 0 && $((num % 5)) != 0 ]]; then
    echo "$num"
    return
  fi

  if [[ $((num % 3)) = 0 ]]; then
    echo -n fizz
  fi
  if [[ $((num % 5)) = 0 ]]; then
    echo -n buzz
  fi
  echo
}

I was happier after a refactor to remove the redundant logic:

function fizzbuzz {
  local num="$1"
  local output=""

  if [[ $((num % 3)) = 0 ]]; then
    output=fizz
  fi
  if [[ $((num % 5)) = 0 ]]; then
    output="${output}buzz"
  fi
  if [[ -z "$output" ]]; then
    output="$num"
  fi

  echo "$output"
}
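To see it in action outside of Bats, here’s a quick standalone sanity check (the final function is repeated so the snippet runs on its own):

```shell
#!/usr/bin/env bash
# Standalone copy of the final fizzbuzz function for a quick spot check
function fizzbuzz {
  local num="$1"
  local output=""

  if [[ $((num % 3)) = 0 ]]; then
    output=fizz
  fi
  if [[ $((num % 5)) = 0 ]]; then
    output="${output}buzz"
  fi
  if [[ -z "$output" ]]; then
    output="$num"
  fi

  echo "$output"
}

# Spot-check the four interesting cases
fizzbuzz 1    # prints 1
fizzbuzz 3    # prints fizz
fizzbuzz 5    # prints buzz
fizzbuzz 15   # prints fizzbuzz
```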

There are far more concise solutions out there, but I like the readability of mine. Now, as for getting 100 lines of output when running the script directly, I tacked this onto the end of the script. It meets that requirement without affecting how the unit tests work with the fizzbuzz function:

if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
  for i in $(seq 1 100); do
    fizzbuzz "$i"
  done
fi

I tested this chunk of code manually rather than try to automate a test for it.

$ ./fizzbuzz.bash
1
2
fizz
4
buzz
fizz
7
...

And there you have it, a testable fizzbuzz in bash. Some people add a few extra rules to continue the fizzbuzz exercise, and I’m confident that the tests would help support any further additions to the code.

Further reading: I first wrote about the Bats unit test framework in “Going bats with bash unit testing”.

Going bats with bash unit testing

05 Wednesday Aug 2020

Posted by Danny R. Faught in technology, testing

≈ 2 Comments

Tags

bash, shell script, TDD, unit test

My team is committed to Test-Driven Development. Therefore, I was struck with remorse recently when I found myself writing some bash code without having any automated unit tests. In this post, I’ll show how we made it right.

Context: this is a small utility written in bash, but it will be used for a fairly important task that needs to work. The task was to parse six-character record locators out of a text file and cancel the associated flight reservations in our test system after the tests had completed. Aside: I was also pair programming at the time, but I take all the blame for our bad choices.

We jumped in doing manual unit testing, and fairly quickly produced this script, cancelpnrs.bash:

#!/usr/bin/env bash

for recordLocator in $(egrep '\|[A-Z]{6}\s*$'|cut -d '|' -f 2)
do 
  recordLocator=$(echo -n $recordLocator|tr -d '\r')
  echo Canceling $recordLocator
  curl "http://testdataservice/cancelPnr?recordLocator=$recordLocator"
  echo
done

The testing cycles at the command line started with feeding a sample data file to egrep. We tweaked the regular expression until it was finding what it needed and filtering out the rest. Then we added the call to cut to output the record locator from each line, and then put it in a for loop. I like working with bash code because it’s so easy to build and test code incrementally like this.
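That interactive cycle is easy to reproduce. Here’s a standalone sketch with made-up sample lines run through the same pipeline (I’ve used `grep -E` with `[[:space:]]` as a portable stand-in for `egrep` and `\s`):

```shell
# Made-up lines resembling the input file, piped through the same extraction:
# keep only lines ending in a pipe plus a six-letter record locator,
# then cut out the record locator field
printf 'some header line\nCheckin2Bags_Intl|LZYHNA\nother noise\nCheckin2Bags_TicketNum|SVUWND\n' |
  grep -E '\|[A-Z]{6}[[:space:]]*$' | cut -d '|' -f 2
# prints:
# LZYHNA
# SVUWND
```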

After feeling remorse for shirking the ways of TDD, I remembered having some halting successes in the past with writing unit tests for bash code. We installed bats, the Bash Automated Testing System, then wrote a couple of characterization tests as penance:

#!/usr/bin/env bats

# Requires that you run from the same directory as cancelpnr.bash

load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'

scriptToTest=./cancelpnrs.bash

@test "Empty input results in empty output" {
  run source "$scriptToTest" </dev/null

  assert_equal "$status" 0
  assert_output ""
}

@test "PNRs are canceled" {
  function curl() { 
    echo "Successfully canceled: (record locator here)"
  }
  export -f curl

  run source "$scriptToTest" <<EOF
	                        Thu Apr 02 14:23:45 CDT 2020
Checkin2Bags_Intl|LZYHNA
Checkin2Bags_TicketNum|SVUWND
EOF

  assert_equal "$status" 0
  assert_output --partial "Canceling LZYHNA"
  assert_output --partial "Canceling SVUWND"
}

We were pretty pleased with the result. Of course, the test is a good deal more code than the code under test, which is typical of our Java code as well. We installed the optional bats-support and bats-assert libraries so we could have some nice xUnit-style assertions. A few other things to note here–when we’re invoking the code under test using “source”, it runs all of the code in the script. This is something we’ll improve upon shortly. We needed to stub out the call to curl because we don’t want any unit test to hit the network. This was easy to do by creating a function in bash. The sample input in the second test gives anyone reading the test a sense for what the input data looks like.
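The curl stub works because bash resolves a function before an external command of the same name. A tiny standalone illustration (the URL here is made up):

```shell
#!/usr/bin/env bash
# A function named after an external command shadows it for callers in this shell
function curl() {
  echo "stubbed: $*"
}

# Any code that calls curl now hits the stub instead of the network
curl "http://testdataservice/cancelPnr?recordLocator=ABCDEF"
# prints "stubbed: http://testdataservice/cancelPnr?recordLocator=ABCDEF"
```

Running `unset -f curl` would put the real command back if you needed it again in the same shell.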

Looking at the code we had, we saw some opportunity for refactoring to make the code easier to understand and maintain. First we needed to make the code more testable. We knew we wanted to extract some of the code into functions and test those functions directly. We started by moving all the cancelpnrs.bash code into one function, and added one line of code to call that function. The tests still passed without modification. Then we added some logic to detect whether the script is being invoked directly or sourced into another script, and it only calls the main function when invoked directly. So when sourced by the test, the code does nothing but defines functions, but it still works the same as before when invoked on the command line. We changed the existing tests to call a function rather than just expecting all of the code to run when we source the code under test. This transformation was typical of any kind of script code that you would want to unit test.
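The sourced-versus-executed detection can be seen in isolation with a throwaway script (the file and function names here are made up for illustration):

```shell
#!/usr/bin/env bash
# Write a tiny script that uses the same guard, then exercise it both ways
cat > /tmp/guard-demo.bash <<'EOF'
#!/usr/bin/env bash
main() {
  echo "main ran"
}

# Call main only when the script is executed directly, not when sourced
if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
  main
fi
EOF
chmod +x /tmp/guard-demo.bash

source /tmp/guard-demo.bash   # defines main but prints nothing
/tmp/guard-demo.bash          # prints "main ran"
```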

At this point, following a proper TDD process felt very similar to the development process in any other language. We added a test to call a function we wanted to extract, and fixed bugs in the test code until it failed because the function didn’t yet exist. Then we refactored the code under test to get back to “green” in all the tests. Here is the current unit test code with two additional tests:

#!/usr/bin/env bats

# Requires that you run from the same directory as cancelpnrs.bash

load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'

scriptToTest=./cancelpnrs.bash
carriageReturn=$(echo -en '\r')

setup() {
  source "$scriptToTest"
}

@test "Empty input results in empty output" {
  run doCancel </dev/null

  assert_equal "$status" 0
  assert_output ""
}

@test "PNRs are canceled" {
  function curl() {
    echo "Successfully canceled: (record locator here)"
  }
  export -f curl

  run doCancel <<EOF
	                        Thu Apr 02 14:23:45 CDT 2020

Checkin2Bags_Intl_RT|LZYHNA
Checkin2Bags_TicketNum_Intl_RT|SVUWND
EOF

  assert_equal "$status" 0
  assert_output --partial "Canceling LZYHNA"
  assert_output --partial "Canceling SVUWND"
}

@test "filterCarriageReturn can filter" {
  doTest() {
    echo -n "line of text$carriageReturn" | filterCarriageReturn
  }

  run doTest

  assert_output "line of text"
}

@test "identifyRecordLocatorsFromStdin can find record locators" {
  doTest() {
    echo -n "testName|XXXXXX$carriageReturn" | identifyRecordLocatorsFromStdin
  }

  run doTest

  assert_output $(echo -en "XXXXXX\r\n")
}

You’ll see some code that deals with the line ending characters “\r” (carriage return) and “\n” (newline). Our development platform was Mac OS, but we also ran the tests on Windows because the cancelpnrs.bash script also needs to work in a bash shell on Windows. The script ran fine under git-bash on Windows, but it took some tweaking to get the tests to work on both platforms. There is surely a better solution to make the code more portable.
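One tweak we didn’t make, but that I suspect would help (my assumption, not something we tested on our Windows setup): `printf` interprets backslash escapes consistently across shells, where `echo -en` is a bash-ism, so the carriage return could be captured like this:

```shell
# printf handles \r portably, unlike echo -en, so this could replace
# the carriageReturn assignment in the test file
carriageReturn=$(printf '\r')
```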

We installed bats from source and committed it to our source repository, and followed the instructions to install bats-support and bats-assert as git submodules. We’re not really familiar with submodules and not entirely happy with having to do a separate installation of the submodules on every system we clone our repository to (we have to run “git submodule init” and “git submodule update” after cloning, or else remember to add the option “--recurse-submodules” to the clone command).

Running the tests takes a fraction of a second. It looks like this:

$ ./bats test-cancelpnrs.bats 
 ✓ Empty input results in empty output
 ✓ PNRs are canceled
 ✓ filterCarriageReturn can filter
 ✓ identifyRecordLocatorsFromStdin can find record locators

4 tests, 0 failures

Here is the current refactored version of cancelpnrs.bash:

#!/usr/bin/env bash

cancelEndpoint='http://testdataservice/cancelPnr'

doCancel() {
  for recordLocator in $(identifyRecordLocatorsFromStdin)
  do
    recordLocator=$(echo -n $recordLocator | filterCarriageReturn)
    echo Canceling $recordLocator
    curl -s --data "recordLocator=$recordLocator" "$cancelEndpoint"
    echo
  done
}

identifyRecordLocatorsFromStdin() {
  egrep '\|[A-Z]{6}\s*$' | cut -d '|' -f 2
}

filterCarriageReturn() {
  tr -d '\r'
}

if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
  doCancel
fi

There are two lines of code not covered by unit tests. Because the one test that hits the loop body in doCancel stubs out curl, the actual curl call is not tested. Also, the doCancel call near the bottom is never exercised by the unit tests. We ran manual system tests with live data as a final validation, and don’t see a need at this point to automate those tests.

So there you go – no more excuses!
