It’s Still a Wonderful Career

A reader discovered my article from 2018, “It’s a Wonderful Career,” where I said this about software testing jobs: “Get out!” and “Don’t transition to doing test automation.” He asked if I still feel that way now, and the answer is “yes.” Here’s an update on my thoughts.

I’ve been a programmer for most of my life, but a majority of my career has been clustered around software testing. That shifted in 2017 when I took a job as a developer. Since then I’ve flirted with software quality in a 6-month role coaching some test automation folks who were building their programming skills, but otherwise I’ve stayed on the developer track. In my developer roles, I have worked with production code as well as a wide variety of automated tests, plus documentation and release pipeline automation. There have been no testing specialists involved at all.

In my post “Reflections on Boris Beizer” I briefly mentioned how testing as a specialty role has waxed and waned. It was perhaps the 1980s when “tester” became a common role that was distinct from software developers. Fast forward to 2011 when Alberto Savoia gave his famous “Test is Dead” keynote.

I first wrote about the possible decline of the testing role in 2016 in “The black box tester role may be fading away.” I suggested that testing might be turning into low-pay manual piecework (think Amazon Mechanical Turk), but I don’t see any evidence now that that’s coming true. I mentioned the outsourcing company Rainforest QA. A company by the same name now offers no-code AI-driven test automation instead.

I followed up in “What, us a good example?” where I wrote about companies that aren’t evolving their roles, and yet they’re surviving, but they’re going to have to survive without me. My expectations have evolved.

I know that many people are still gainfully employed as testing and test automation specialists. I can’t fault them for doing what works for them. And I’ll admit that it’s still tempting to return to my comfort zone and focus on testing again. Maybe I can shed some light on why I’m resisting that temptation. It’s pretty simple, really.

As a developer, my salary has grown tremendously. There are multiple factors involved here, including moving to a different company a few years ago. But the organization I’m working in has no testing specialists, so this opportunity wouldn’t have been available to me if I were applying for a job as a tester. I have a sense there are many more jobs open for developers than for testers out there, especially with a salary that I would accept now, and I’m curious if anyone has any numbers to back that up.

I’m not having any issues with people not respecting my contribution to the product. I don’t have to justify myself as overhead – I have a direct impact on a product that’s making money. And I still do all aspects of testing in addition to developing, maintaining, and releasing products.

In “The Resurrection of Software Testing,” James Whittaker recently described the decline of the testing role as being even more precipitous than I thought. And he also says that it needs to make a comeback now because AI needs testing specialists. He’s organizing meet-ups near Seattle to plot a way forward. I don’t have an opinion on whether AI is going to lead to a resurgence in testing jobs. Instead I’m focusing on an upcoming conference where I’m going to hone my development skills.

And that’s really where I stand on a lot of discussions about testing jobs – no opinion, really. I don’t benefit by convincing testing specialists to change their career path, but I’m happy to share my story if it helps anyone navigate their own career.

One thing I do ponder–there are still so many organizations out there that employ testing specialists, and I might end up working as a developer for one of them some day. How strange that would be if it comes to pass.

[credit for feature image: screen shot from the video of Alberto Savoia’s “Test is Dead” presentation at the 2011 Google Test Automation Conference]

The Rise of the Tabulators

In this installment of “Jerry’s Story,” we’ll get some background on the first IBM machine that Jerry Weinberg learned how to operate – though it wasn’t the first computer he programmed. See the home page for Jerry’s Story for the other installments.

In Jerry Weinberg’s first week working at IBM in June 1956, he learned how to use a data processing machine, the IBM Type 607 Electronic Calculating Punch. This type of machine is also called a tabulator, unit record equipment, or an electric accounting machine. But what you’re not likely to hear it called is a computer. Permit me, if you will, to explore the roots of this machine.

In the 1880s, Herman Hollerith built the first punch card tabulator. He established a company that, together with a few other makers of business machines like time clocks and computing scales, would eventually merge into the International Business Machines Corporation. As we follow the evolution of tabulators, we’ll focus on IBM’s innovations, though IBM did have competitors producing tabulators of their own.

The core feature of the first tabulator was essentially one thing: counting. Like a hand-held tally counter, it could add 1 to a count repeatedly. It had forty counters that could each count up to 9,999. The machine could be wired to recognize a certain spot in a punched card and increment one of the counters when a card was presented that had a hole in that spot. When a batch of cards was done, the operator would read the values from the relevant counters and write them down by hand.

Many of the basic components of the first tabulator weren’t new – punch cards had been used for automated looms, and in a similar way, paper tape was used for automated musical instruments like player pianos. Mechanical counters had been used in many different forms, though it’s possible that Hollerith’s design was unique. Cash registers already had printers. We already had mechanical calculators. The innovation was being able to handle a large number of tabulations (and later, more complex calculations) much more quickly than ever before.

Hollerith Census Tabulator
Hollerith Census Tabulator, likely a replica. Photo credit: Erik Pitti

In 1886, the first tabulator was put to use in the Baltimore Department of Health. The components were electro-mechanical. A manually operated card reader was built in. The cards were prepared using a keyboard-based punch. Next to the tabulator was a sorter, where a lid over one of the bins would automatically pop open so the operator could manually place a card in the correct bin.

Customer needs called for improved capabilities, and in 1889, the Hollerith Integrating Tabulator was able to add two numbers as well as tabulate. Subtraction didn’t become a feature until 1928. Multiplication followed quickly in 1931, first on the IBM 601 Multiplying Punch. It took six seconds to multiply two eight-digit whole numbers. Then in 1946, the IBM 602 Calculating Punch could do division.

Among other innovations worth noting was the automatic card feed in 1900, followed the next year by the Hollerith Automatic Horizontal Sorter, allowing a machine to process a batch of cards without anyone needing to move the cards through it by hand.

In 1906, control panels (also called “plugboards”) were added that allowed operators to change the way a tabulator worked by moving plugs to different sockets on the panel. Before then, changing the way the punch card data was tabulated required re-soldering the wires connected to the counters. Then, perhaps as early as 1928, IBM introduced removable control panels, which could be prepared while not connected to the machine, greatly reducing the downtime when changing the function of the machine. Operators could maintain a library of prepared control panels.

By 1920, printers had been developed, at first able to print only numbers. This freed operators from having to manually write down the values shown on the many dials. Somewhere around 1933 came the first alphabetic tabulator, able to print out words in addition to numbers.

In 1946, the evolution of IBM’s 600 series continued with the 603 Electronic Multiplier, which used 300 large vacuum tubes. It has been called the world’s first mass-produced electronic calculator, though only about 20 units were built. Two years later, the IBM 604 Electronic Calculating Punch was a much improved model, with more than 1000 miniature vacuum tubes. It would sell more than 5000 units. Later that year, the IBM 605 Electronic Calculator, a slightly modified 604, was released. There doesn’t seem to have been a model “606” from IBM.

That brings us up to 1953 and the release of the IBM 607 Electronic Calculating Punch that Jerry would get his hands on a few years later. It was similar to the 604, and it included a memory unit. It was able to read data, but not program instructions, from punch cards. According to the Columbia University Computing History web site, the 607 weighed a little more than 2 tons, occupied 36 square feet of floor space, and had a heat load of 26,000 BTU per hour.

The printer Jerry used with the 607 was an IBM 402 Accounting Machine. The 402 could print 43 letters and numbers on a line, followed by up to 45 numbers on the right side of the line. It was introduced in 1948. This can get a bit confusing, as the function of the 402 overlapped somewhat with the 607’s. Later he used the IBM 407 as a printer.

Jerry described the machines he used with the 607 and how he configured them:

The 402 was a ‘tabulating’ machine. The wiring boards allowed formatting of the input and the output. I could add up numbers from successive cards and print totals, but no other calculations except by tricks. Like, you could multiply by 2 by adding, or take 1 percent of a number by displacing wires by two places.

The keypunch was ‘programmed’ with a code punch card.

The verifier was a fixed function machine that compared a punched card with the supposedly duplicate key strokes.

The sorter had no programming except by what the operator chose to do, which was basically by choosing a card column to sort on and the handling of the card for successive sort runs. You could only sort on one column at a time, and for alpha sorting you had to sort twice on the same column, if I remember correctly. We didn’t do a lot of alpha sorting because it was a PIA, so wherever possible, we used numeric codes.

The others [607 and reproducing punch] were wired program machines.

Following the 607 in 1957 was the IBM 608 Transistor Calculator, fully transistorized with no vacuum tubes. In 1960, the IBM 609 Calculator improved on this by adding core memory. This was the end of the line for the 600 series at IBM. But the IBM 407 wasn’t withdrawn from marketing until 1976.

Coming up next, a look at the history of the machine Jerry ran his first program on, the IBM 650, and programming in general.


References

I’ve chased many squirrels in the last few years trying to produce this installment. I finally realized that this installment was trying to be an entire book of its own. I’m satisfying that itch in a small way by cutting out a great deal of scope, and covering tabulators here, and computers in the next installment, then going back to focusing on Jerry.

Below are some of the resources I’ve used in producing this installment. I apologize for not having good enough notes at this point to be able to footnote all of the facts with the relevant reference.

IBM 405: Alphabetical Accounting Machine
Predecessor to the IBM 402 (despite the higher model number) – The IBM 405: Alphabetical Accounting Machine, introduced in 1934. Photo credit: IISG. Published in De Heerenveense Koerier, May 22, 1947.

Beware Groupthink

Why you should own your opinions

I know you can’t tell from the looks of it, but I’ve been hanging around here a lot lately, working on a post that wants to turn into a book of its own. I’m making good progress getting it tamed into a reasonable length. But first, inspired by LinkedIn posts from James Bach and from Jon Bach and subsequent comments, I want to explore another idea rattling around in my head.

I’m going to talk about three examples where members of a community were accused of groupthink of some sort. In many cases, some people observing the communities say that they see cult-like behavior. I’d prefer not to use the derogatory term “cult” here for a couple of reasons. One, because cult leaders actively encourage groupthink and blind obedience, and I don’t see that happening in these cases (even if their followers are picking up some of these traits). And also, because real cults have done a lot of damage, such as permanently ripping families apart. Let’s not equate that with what I’m talking about here.

Example 1: I learned a lot from the author and consultant Jerry (Gerald) Weinberg. I am one of his followers. People outside his circle often don’t understand the level of devotion that many of his followers exhibited during his life and afterward. Someone even coined a term for it: “Weinborg”, which many of us have adopted for ourselves. (If you don’t get the reference, look up the fictional “Borg” in the Star Trek universe – we have been assimilated).

I attended three of Jerry’s week-long workshops. Every time I’ve been through an intense experiential training, it has been a deeply moving experience. That’s true of Jerry’s workshops, plus other experiential trainings I’ve attended (several Boy Scout training sessions come to mind, for example). Once you’ve recovered, you want more. But you can’t effectively explain to someone who wasn’t there why it was so moving. In fact, for many of these trainings, you can’t give away too many of the details, or you may ruin the experience for someone who attends later.

There are likely many other behaviors among Jerry’s followers that looked odd to outsiders. Perhaps we would invoke his laws by name, like “Rudy’s Rutabaga Rule”. Or we might reference “egoless programming” and point to the book where Jerry wrote about it. We might get ourselves into trouble, though, if we recommended that people follow his ideas without being able to explain them ourselves. “Go read his book” isn’t very persuasive if we can’t give the elevator speech ourselves to show the value in an idea.

Early in my career, a wise leader cautioned me to build opinions of my own rather than constantly quoting the experts. That has been a guiding principle for me ever since, and one that I hope has steered me away from groupthink.

Example 2: James Bach is a consultant and trainer who has influenced a lot of people in the software testing community, along with his business partner. I have learned a lot from James, and I continue to check in with him periodically, though I have never chosen to join his community of close followers. Incidentally, he has also been influenced by Jerry Weinberg.

James has grown a community of people who agree on some common terminology, which streamlines their discussions. It gets interesting, though, when someone uses that terminology outside that community without explaining what it means to them. I remember attending a software quality meetup that advertised nothing indicating that it was associated with James Bach or his courses. But then I heard the organizers and some attendees use terminology that I recognized as originating from James. It’s been several years since the meetings I attended, but I think I remember them presenting other ideas that closely align with what James teaches, not always identifying where they came from or why they recommended them. I vaguely remember that I stood up once or twice and told them that I hadn’t accepted some of those ideas, and I don’t recall the discussion going very far.

If a group has an ideology that they expect participants to adopt as a prerequisite for participating, that’s fine, but it needs to be explicit. Otherwise, they need to be prepared to define their terms and defend their ideas.

Example 3: I participate in the “One of the Three” Slack forum and often listen to its associated podcast created by Brent Jensen and Alan Page. They have spoken out about James Bach and his community a few times. At one point, some participants piled on to some negative comments that seemed to have no basis other than “because Alan and Brent said so.” I called them out for groupthink, not unlike the very thing they were complaining about. Fortunately, I think they got the message.


I remember talking about the “luminary effect” with author and consultant Boris Beizer years ago. This is where people hesitate to challenge an expert, especially a recognized luminary in their field, because of their perceived authority on a topic. But in fact, all of the experts I’ve mentioned love for you to give them your well-reasoned challenges to any of their ideas. Granted, the more they love a debate, the more exhausting and intimidating it can be to engage with them. They are smart, after all, and you need to do your homework so you can competently defend your ideas – that’s not asking too much, right? In fact, one of the best ways to get their respect is to challenge them with a good argument. I just hope that a few of my ideas here will survive their scrutiny.

In this post I’m talking about some controversial people and some controversial topics. Where I’ve stayed neutral here, I’ve done so very deliberately, and though I have some further opinions unrelated to the topics I’m discussing, I’m not going to muddy this post with them.

Further reading – Beware Cults of Testing by Jason Arbon.

Code Whines

Software developers, have you had this experience? You start to fix a bug or add a feature to some existing code, and you have a hard time working with the code because it’s poorly designed. It might not have decent unit tests. It might be full of code smells like long functions, poor naming, and maybe even misspelled words in names and comments. It’s really difficult not to complain about the state of the code you have to work with. If you’re pairing like I often do, you’ll complain to your pair. Or maybe you’ll whine about it to your whole team.

I’m going to make a case for developers to tone down the whining.

I can find peace when I’m annoyed by software that’s hard to maintain by remembering Boulding’s Backward Bias from Jerry Weinberg’s book The Secrets of Consulting: “Things are the way they are because they got that way.” Jerry attributed this to his mentor, the economist Kenneth Boulding. It was possibly inspired by biologist D’Arcy Wentworth Thompson, who said, “Everything is the way it is because it got that way.”

Boulding’s Backward Bias, in a tautological sort of way, reminds us to consider the potentially complex history that got us to where we are now. Weinberg points out “There were, at the time, good and sufficient reasons for decisions that seem idiotic today.” And, he says, the people who created the problems might still be around, and they might be in a position of authority. This leads to what Weinberg calls Spark’s Law of Problem Solution: “The chances of solving a problem decline the closer you get to finding out who was the cause of the problem.”

So resist the urge to track down who committed the code you’re concerned about. But do try to put yourself in their shoes when they were writing the code. Let’s consider a number of possible factors that could lead to awful code.

  • Maybe the developer wanted to do better, but they had constraints that prevented them from doing so. Common examples are schedule pressure or not thinking they have permission to write unit tests. Frequently I’ve seen that these constraints are imaginary; management probably wants developers to take the time to do it right the first time, but the developer nevertheless puts pressure on themselves to finish faster.
  • Maybe the developer was inexperienced at the time they developed the code, and there wasn’t enough technical leadership oversight to notice and correct the problems.
  • Maybe the developer thoughtfully chose a different design standard than the one that you’re judging the code by.
  • Maybe the developers who worked with the flawed code after it was initially written didn’t feel empowered to improve the design.

When dealing with organizational issues, you might want to learn about the history of how you got here. But with internal code design decisions, I find that it’s sufficient to understand that there probably were good reasons for them without knowing what the reasons actually were. Granted, if you can’t figure out why a particular feature works the way it does, that may require some historical investigation, and that’s beyond what I’m discussing here.

Complaining wastes time and distracts from getting the work done. What if some of the people who wrote that code hear your complaints? Some people are good-natured about such criticism, but not everyone is likely to appreciate these complaints about their work.

Typically when I start working with problematic code, I’ll grumble about it either out loud or to myself, but then I’ll get to work on it. My approach was heavily influenced by Michael Feathers’ book Working Effectively with Legacy Code. I will do enough refactoring to make sure I can write unit tests to cover the code I’m working with. I might do additional refactoring to make the code more readable. But I have to make tough choices about how deep to go with improving the code, or else I wouldn’t ever get much work done. I think I’ve done pretty well with this.

When I asked about this on Twitter, some of the responses indicated deeper issues than hearing whining about bad code.

There was a report about developers who said the code was too far gone to fix. There was some discussion about code ownership – it’s better to talk about how the team’s code has problems, rather than complaining about the output of one specific person. There was even a mention of a developer who wouldn’t fix the code because it was written by someone else. A few people didn’t think the complaints were much of a problem, and they suggested having a dialog with the original authors to get help improving the code.

Have you or will you ever write code that isn’t perfect? Could your own code be the subject of someone else’s complaints? Surely it will be, and my hope is that the team will focus on making whatever improvements are necessary to get the job at hand done effectively without worrying about why the code is harder to work with than they’d like.

Computer Jobs in the 1950s

In this installment of “Jerry’s Story,” we’ll take a quick look at the computer job market when Jerry Weinberg started his career, plus a peek at his first project at IBM. Refer to the home page for Jerry’s Story to see the other installments.

When Jerry applied for a job at IBM in early 1956, he was answering a job ad in Physics Today. He said this was the first computer job ad he ever saw. I found what was most likely the ad he saw, which he might have seen in either the January or March 1956 issue of Physics Today. I looked through the 1955 and 1956 Physics Today archives and can give a bit of context around what it may have been like to look for a computer job at the dawn of the computer age.

The term “programmer” was not very common in these job ads in the mid-1950s. Surprisingly, one job ad from an unidentified company in the Gulf South region mentioned programmers. It was in the context of a “computer-analyst” who was expected to be able to supervise a team of programmers for a magnetic drum computer. Other job titles that involved directly supporting or using computers included “machine operator,” “draftsman,” “engineer,” “designer,” “mathematician,” “physicist,” “scientist,” and of course, IBM’s “applied science representative.” That same unidentified company, amazingly, also sought experienced candidates: “Knowledge of digital computer techniques desirable but not essential.” National Cash Register was looking for a senior electronic engineer with a master’s degree “and minimum of 2 years digital computer experience.”

At least one job ad didn’t make it clear whether they were talking about working with human computers or machines. In the 1950s, it still wasn’t unusual for someone to be employed as a “computer” doing manual calculations (as Jerry had done while he was at the University of Nebraska).

The wording in the ads in this era didn’t necessarily encourage diversity. Melpar was looking for engineers, saying it was “an opportunity for qualified men to join a company that is steadily growing.” IBM, in its applied science representative ad, used phrases like “For the mathematician who’s ahead of his time…” “This man is a pioneer, an educator…” and “You may be the man….” One ad even gave an acceptable maximum age for applicants.

One mystery that remains is why Jerry hadn’t noticed a computer-related job ad sooner. I found ads for data processing and computer jobs in publications such as Scientific American and Popular Science as early as 1952. At least five different companies placed job ads that mentioned computers in Physics Today in 1955 and early 1956. One, in March 1955, was an IBM ad similar to the one that got Jerry’s attention in early 1956. Though Jerry was a voracious reader, he had missed reading about several opportunities to fulfill his dream of working with computers. We can presume that by the time he got to college, he was no longer able to absorb all available information around him like he could when he was sitting at the breakfast table reading everything on the cereal box. I did notice that the frequency of the mentions of computer jobs was much higher in 1956 than even 1955, so the odds of one of them getting noticed were going up over time.

A few ads mentioned both analog and digital computers, including ads from General Electric and Melpar. In June 1956, a Honeywell Aeronautical Division ad said, “Several unusual positions are open in our Aeronautical Research Department… Experience or interest is desirable in digital and analog computing…” Jerry’s first programming project for a client involved writing a program to replace an old analog computer.


analog computing machine
Jerry said the analog computers he helped replace looked similar to this one that was in the Engine Research Building at the Lewis Flight Propulsion Laboratory. Photo credit: NASA/Fred Lingelbach, 1949.

There were two room-sized electronic analog computers that were used to analyze hydraulic networks for city water systems in the U.S., one in Oregon and one in New York. They were built using resistors, and they solved systems of non-linear algebraic equations. To use one, you had to travel to one of the two locations and spend several days setting it up for a single calculation, all of which would cost thousands of dollars. IBM was tasked with replacing these analog computers with a program that could run on any IBM 650.

Jerry partnered with civil engineer Lyle Hoag on the project. He said the two were essentially doing pair programming and test-first development as they replicated the analog computer’s features. Though the 650 wasn’t appreciably smaller than the analog computer, we can surmise that the program could run much more quickly and cheaply than its predecessor, and it could run anywhere there was an IBM 650 installation.

In his 2009 blog post “My Favorite Bug,” Jerry wrote about how this project produced his first and favorite bug. When the program passed all of the tests they had written, the pair brought in a small real-world problem to solve. After waiting two hours with no result, they were about to abort the program, when finally it started printing the results. This spurred them to make improvements in the program’s usability.

The experience led to Jerry’s first published article: “Pipeline Network Analysis by Electronic Digital Computer” [paywalled] (Lyle N. Hoag and Gerald Weinberg, May 1957, Journal of the American Water Works Association, vol. 49, no. 5). He hadn’t yet decided to use his middle initial in his “author name.” Jerry told me he got some unexpected fame from the article–

I had training in electrical engineering as part of my physics education, so I was familiar with networks and flow equations. As the article points out, the same program (modified) could be used for all sorts of network flow. But most of the civil engineering was provided by my partner, Lyle Hoag.

Many years later, I was way up north in Norway up in the fjords teaching a class in programming or something, and some student came up to me at the first break and said ‘Are you the famous Gerald Weinberg?’ I had published a few books by that time. I asked him, ‘Which book?’ ‘It’s not your book,’ was the answer, ‘it’s your program for hydraulic networks. Civil engineers everywhere use this, and they all know your name.’ It’s the only civil engineering paper I ever wrote. My partner became a famous civil engineer. He’s got quite a reputation; they named a few awards for him.

Going bats with bash unit testing

My team is committed to Test-Driven Development. Therefore, I was struck with remorse recently when I found myself writing some bash code without having any automated unit tests. In this post, I’ll show how we made it right.

Context: this is a small utility written in bash, but it will be used for a fairly important task that needs to work. The task was to parse six-character record locators out of a text file and cancel the associated flight reservations in our test system after the tests had completed. Aside: I was also pair programming at the time, but I take all the blame for our bad choices.

We jumped in doing manual unit testing, and fairly quickly produced this script, cancelpnrs.bash:

#!/usr/bin/env bash
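# Read test output from stdin, extract the six-character record locators
# that follow a '|', and cancel each reservation in the test system.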

for recordLocator in $(egrep '\|[A-Z]{6}\s*$'|cut -d '|' -f 2)
do 
  recordLocator=$(echo -n $recordLocator|tr -d '\r')
  echo Canceling $recordLocator
  curl "http://testdataservice/cancelPnr?recordLocator=$recordLocator"
  echo
done

The testing cycles at the command line started with feeding a sample data file to egrep. We tweaked the regular expression until it was finding what it needed and filtering out the rest. Then we added the call to cut to output the record locator from each line, and then put it in a for loop. I like working with bash code because it’s so easy to build and test code incrementally like this.
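
To give a flavor of that cycle, here’s roughly what the incremental steps looked like at the command line. The file sample.txt and its contents here are hypothetical stand-ins for our real data file:

$ egrep '\|[A-Z]{6}\s*$' sample.txt
Checkin2Bags_Intl|LZYHNA
Checkin2Bags_TicketNum|SVUWND

$ egrep '\|[A-Z]{6}\s*$' sample.txt | cut -d '|' -f 2
LZYHNA
SVUWND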

After feeling remorse for shirking the ways of TDD, I remembered having some halting successes in the past with writing unit tests for bash code. We installed bats, the Bash Automated Testing System, then wrote a couple of characterization tests as penance:

#!/usr/bin/env bats

# Requires that you run from the same directory as cancelpnrs.bash

load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'

scriptToTest=./cancelpnrs.bash

@test "Empty input results in empty output" {
  run source "$scriptToTest" </dev/null

  assert_equal "$status" 0
  assert_output ""
}

@test "PNRs are canceled" {
  function curl() { 
    echo "Successfully canceled: (record locator here)"
  }
  export -f curl

  run source "$scriptToTest" <<EOF
	                        Thu Apr 02 14:23:45 CDT 2020
Checkin2Bags_Intl|LZYHNA
Checkin2Bags_TicketNum|SVUWND
EOF

  assert_equal "$status" 0
  assert_output --partial "Canceling LZYHNA"
  assert_output --partial "Canceling SVUWND"
}

We were pretty pleased with the result. Of course, the test is a good deal more code than the code under test, which is typical of our Java code as well. We installed the optional bats-support and bats-assert libraries so we could have some nice xUnit-style assertions. A few other things to note here – when we’re invoking the code under test using “source”, it runs all of the code in the script. This is something we’ll improve upon shortly. We needed to stub out the call to curl because we don’t want any unit test to hit the network. This was easy to do by creating a function in bash. The sample input in the second test gives anyone reading the test a sense of what the input data looks like.
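
In case the function-shadowing trick isn’t familiar, here it is in isolation – a minimal sketch, separate from our real tests. In bash, a function takes precedence over an external command with the same name, and export -f makes the function visible to child bash processes as well:

curl() { echo "stub curl called with: $*"; }
export -f curl

# Prints "stub curl called with: http://testdataservice/cancelPnr"
# instead of hitting the network.
curl "http://testdataservice/cancelPnr"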

Looking at the code we had, we saw some opportunity for refactoring to make the code easier to understand and maintain. First we needed to make the code more testable. We knew we wanted to extract some of the code into functions and test those functions directly. We started by moving all the cancelpnrs.bash code into one function, and added one line of code to call that function. The tests still passed without modification. Then we added some logic to detect whether the script is being invoked directly or sourced into another script, so that the main function is called only when the script is invoked directly. When sourced by the test, the script does nothing but define functions, but it still works the same as before when invoked on the command line. We changed the existing tests to call a function rather than just expecting all of the code to run when we source the code under test. This transformation would be typical for any kind of script code that you want to unit test.
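
Here’s a minimal sketch of that detection pattern on its own – main is a placeholder name here, and the real version appears at the bottom of the refactored script below:

#!/usr/bin/env bash

main() {
  echo "doing the real work"
}

# When the script is executed directly, BASH_SOURCE[0] and $0 refer to the
# same file, so main runs. When the script is sourced, they differ, so only
# the function definitions are loaded.
if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
  main
fi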

At this point, following a proper TDD process felt very similar to the development process in any other language. We added a test to call a function we wanted to extract, and fixed bugs in the test code until it failed because the function didn’t yet exist. Then we refactored the code under test to get back to “green” in all the tests. Here is the current unit test code with two additional tests:

#!/usr/bin/env bats

# Requires that you run from the same directory as cancelpnrs.bash

load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'

scriptToTest=./cancelpnrs.bash
carriageReturn=$(echo -en '\r')

setup() {
  source "$scriptToTest"
}

@test "Empty input results in empty output" {
  run doCancel </dev/null

  assert_equal "$status" 0
  assert_output ""
}

@test "PNRs are canceled" {
  function curl() {
    echo "Successfully canceled: (record locator here)"
  }
  export -f curl

  run doCancel <<EOF
	                        Thu Apr 02 14:23:45 CDT 2020

Checkin2Bags_Intl_RT|LZYHNA
Checkin2Bags_TicketNum_Intl_RT|SVUWND
EOF

  assert_equal "$status" 0
  assert_output --partial "Canceling LZYHNA"
  assert_output --partial "Canceling SVUWND"
}

@test "filterCarriageReturn can filter" {
  doTest() {
    echo -n "line of text$carriageReturn" | filterCarriageReturn
  }

  run doTest

  assert_output "line of text"
}

@test "identifyRecordLocatorsFromStdin can find record locators" {
  doTest() {
    echo -n "testName|XXXXXX$carriageReturn" | identifyRecordLocatorsFromStdin
  }

  run doTest

  assert_output $(echo -en "XXXXXX\r\n")
}

You’ll see some code that deals with the line ending characters “\r” (carriage return) and “\n” (newline). Our development platform was Mac OS, but we also ran the tests on Windows because the cancelpnrs.bash script also needs to work in a bash shell on Windows. The script ran fine under git-bash on Windows, but it took some tweaking to get the tests to work on both platforms. There is surely a better solution to make the code more portable.

We installed bats from source and committed it to our source repository, and followed the instructions to install bats-support and bats-assert as git submodules. We’re not really familiar with submodules and not entirely happy with having to do a separate installation of the submodules on every system we clone our repository to (we have to run “git submodule init” and “git submodule update” after cloning, or else remember to add the option “--recurse-submodules” to the clone command).
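
Spelled out, the incantations look like this (the repository URL is a placeholder):

$ git clone http://example.com/our-repo.git
$ cd our-repo
$ git submodule init
$ git submodule update

# or, equivalently, in a single step:
$ git clone --recurse-submodules http://example.com/our-repo.git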

Running the tests takes a fraction of a second. It looks like this:

$ ./bats test-cancelpnrs.bats 
 ✓ Empty input results in empty output
 ✓ PNRs are canceled
 ✓ filterCarriageReturn can filter
 ✓ identifyRecordLocatorsFromStdin can find record locators

4 tests, 0 failures

Here is the current refactored version of cancelpnrs.bash:

#!/usr/bin/env bash

cancelEndpoint='http://testdataservice/cancelPnr'

doCancel() {
  for recordLocator in $(identifyRecordLocatorsFromStdin)
  do
    recordLocator=$(echo -n $recordLocator | filterCarriageReturn)
    echo Canceling $recordLocator
    curl -s --data "recordLocator=$recordLocator" "$cancelEndpoint"
    echo
  done
}

identifyRecordLocatorsFromStdin() {
  egrep '\|[A-Z]{6}\s*$' | cut -d '|' -f 2
}

filterCarriageReturn() {
  tr -d '\r'
}
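
# Run doCancel only when this script is executed directly; when it's
# sourced (as the tests do), just define the functions.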

if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
  doCancel
fi

There are two lines of code not covered by unit tests. Because the one test that exercises the loop body in doCancel stubs out curl, the actual curl call is not tested. Also, the doCancel call near the bottom is never exercised by the unit tests. We ran manual system tests with live data as a final validation, and don’t see a need at this point to automate those tests.

So there you go – no more excuses!

Dusty Old Articles

Cover of Software QA magazine, Vol 2 No 3

It’s been too long. Hello again. I’m still working on the next installment of Jerry’s Story. I’m going to restart it once again, and some day I’m going to get it right. Meanwhile, I dug up a list of my articles, presentations, and other content from 1995 – 2009, both self-published and otherwise, and found working URLs for them (I appreciate archive.org so much!). I’m putting them here mostly for future reference, but if you do delve in, please let me know if any of the links still don’t work.

One bonus if you scroll all the way down–my very first feature article, from 1987.

There are also some posts on my Tejas Software Consulting Newsletter blog that I may republish here. Many of the articles linked below have outdated contact information. I’d love to hear from the modern you either in a comment on this post or via Twitter.

So with no further ado, in reverse order of musty outdatedness, here is my long list of nostalgia.


Testers from Another Planet
StickyMinds.com column, January 26, 2009

January 2009 StickyMinds SoundByte podcast
I was interviewed about the “Testers from Another Planet” column here.

Unit vs. System Testing-It’s OK to be Different
StickyMinds.com column, October 20, 2008

Real-World Math
StickyMinds.com column, July 14, 2008

Peeling the Performance Onion
StickyMinds.com column, April 21, 2008. Coauthored by Rex Black.

StickyMinds SoundByte, late April 2008
I was interviewed for this podcast, talking about the Peeling the Performance Onion column.

March 2008 Gray Matters podcast (mp3)
Jerry McAllister interviewed me for this podcast, where we talked about the testingfaqs.org Boneyard and the strength of the worldwide test tools market.

Bisection and Beyond
StickyMinds.com column, January 7, 2008

Communicating with Context
StickyMinds.com column, October 15, 2007. Coauthored with Michael Bolton.

Synthesize Your Test Data
StickyMinds.com column, July 20, 2007

Do You Work in IT?
Last Word column, Better Software magazine, June 2007

Toolsmiths—Embrace Your Roots
StickyMinds.com column, April 2, 2007

Challenges of the Part-Time Programmer
StickyMinds.com column, January 8, 2007

Hurdling Roadblocks
StickyMinds.com column, September 21, 2006

Get to the Point
StickyMinds.com column, June 26, 2006. Coauthored with Rick Brenner.

Wreaking Havoc With Simple Tests
StickyMinds.com column, April 1, 2006

A Look at Command Line Utilities
Better Software magazine Tool Look article, March 2006

Exploratory Load Testing
StickyMinds.com column, January 9, 2006

A Bug Begets a Bug
StickyMinds.com column, October 3, 2005

A Look at PerlClip
Better Software Tool Look article, September 2005 (free membership required)

Five Minutes Ahead of the Boot
StickyMinds.com column, July 11, 2005

After the Bug Report
StickyMinds.com column, April 4, 2005

Not Your Father’s Test Automation: An Agile Approach to Test Automation
StickyMinds.com column, January 10, 2005. Co-authored with James Bach.

Browser-Based Testing Survey
Open Testware Reviews, December 2004

Seeking the Inner Ring
Tejas Software Consulting Newsletter, December 2004/January 2005, vol. 4 #6

Technology Bulletin: Web Link Checkers
Open Testware Reviews, November 2004

Free Test Tools are Like a Box of Chocolates
Invited presentation at STARWest 2004, November 2004

Test Tools for Free: Do you get more than you pay for?
Keynote presentation at the TISQA Test Automation Today Conference and Expo, October 2004

Keyword-Driven Testing
StickyMinds.com column, November 8, 2004

How to Make your Bugs Lonely: Tips on Bug Isolation
Pacific Northwest Software Quality Conference, October 2004

The Consultant as Human
Tejas Software Consulting Newsletter, October/November 2004, vol. 4, #5

Open Source Development Tools: Coping with Fear, Uncertainty, and Doubt (PDF slides)
2004 Better Software Conference

Keywords are the Key to Good Automation (PDF slides)
DFW Mercury Users Group, September 14, 2004

Being Resourceful When Your Hands are Tied
StickyMinds.com column, August 30, 2004. Co-authored by Alan Richardson.

Software I Don’t Install
Tejas Software Consulting Newsletter, August/September 2004, vol. 4, #4

Review: MWSnap
Open Testware Reviews, August 2004

Stress Test Tools Survey
Open Testware Reviews, July 2004

Meaningful Connections
StickyMinds.com column, July 26, 2004

Alphabet Soup for Testers
Tejas Software Consulting Newsletter, June/July 2004, vol. 4, #3

Review: Mantis
Open Testware Reviews, May 2004

A Testing Career in 3-D
StickyMinds.com column, May 25, 2004

Linking up on LinkedIn
Tejas Software Consulting Newsletter, April/May 2004, vol. 4, #2

Interviewing the Interviewer: Turning the tables on Vipul Kocher
Tejas Software Consulting Newsletter, April/May 2004, vol. 4, #2

Book review: The Book of VMware
StickyMinds.com Books Guide, April 5, 2004

Testing, Zen, and Positive Forces
StickyMinds.com column, March 15, 2004

My Mentor: The Internet
Better Software magazine, February 2004

Review: FitNesse
Open Testware Reviews, February 2004

Out of the Terrible Two’s
Tejas Software Consulting Newsletter, February/March 2004, vol 3, #6

Scripting Language Survey
Open Testware Reviews, January 2004

Books in the Pipe
Tejas Software Consulting Newsletter, December 2003/January 2004, v3 #6

Let’s Hear It for the Underdogs
Better Software, 2004 Tools Directory, December 2003

Review: The Grinder
Open Testware Reviews, December 2003

Screen Capture Tools Survey
Open Testware Reviews, October 2003

Technology Bulletin: System Call Hijacking Tools
Open Testware Reviews, October 2003

What Is This “Testing” Thing?
Tejas Software Consulting Newsletter, October/November 2003, v3 #5. Republished on StickyMinds as “Dear Aunt Fern.”

Data Comparator Survey
Open Testware Reviews, September 2003

Review: QMTest
Open Testware Reviews, September 2003

Test Design Tool Survey
Open Testware Reviews, August 2003

Review: Holodeck Enterprise Edition (Trial Version)
Open Testware Reviews, August 2003

Trip Report from a USENIX Moocher
Dallas/Fort Worth Unix User’s Group, July 2003, reprinted in the Tejas Software Consulting Newsletter, August/September 2003, v3 #4

Review: JUnit
Open Testware Reviews, July 2003

Diving in Test-First
developer.*, July 26, 2003

Populating the Boneyard
Tejas Software Consulting Newsletter, June/July 2003, v3 #3

Testware for Free: Adventures of a Freeware Explorer
StickyMinds.com column, May 19, 2003

GUI Test Driver Survey
Open Testware Reviews, May 2003

Review: InstallWatch
Open Testware Reviews, May 2003

Unit Test Tool Survey
Open Testware Reviews, April 2003

Review: Sclc Metrics Tool
Open Testware Reviews, April 2003

Convex is dead, long live Convex
Tejas Software Consulting Newsletter, April/May 2003, v3 #2

Black Box Test Driver Survey
Open Testware Reviews, March 2003

Review: OpenSTA (Open System Testing Architecture)
Open Testware Reviews, March 2003

Bug Report: But It’s a Feature!
STQE Magazine, March/April 2003

Defect Tracking Tools Survey
Open Testware Reviews, February 2003

Review: ALLPAIRS Test Case Generation Tool
Open Testware Reviews, February 2003

Two Years at Tejas
Tejas Software Consulting Newsletter, February/March 2003, v3 #1

Yes Virginia, We Do Want Our Software to Work
Tejas Software Consulting Newsletter, December 2002/January 2003, v2 #6

Mini-Review – How to Break Software
Tejas Software Consulting Newsletter, December 2002/January 2003, v2 #6

Strawberry Jam and Self-Esteem: A Review of More Secrets of Consulting
Tejas Software Consulting Newsletter, October/November 2002, v2 #5

The Dark Underbelly of Certification
Tejas Software Consulting Newsletter, August/September 2002, v2 #4

A Survey of Freeware Test Tools (zipped pdf slides, 420K)
Quality Week 2002 quickstart tutorial

The Making of an Open Source Stress Test Tool (paper)
slides in zipped pdf format
Quality Week 2002 track session

Event-Driven Scripting (html slides)
Presented at the July 12, 2001 meeting of the DFW Unix Users Group.

What Flavor is Your Freeware?
Tejas Software Consulting Newsletter, June/July 2002, v2 #3

Test-First Maintenance: A Diary
Dallas/Fort Worth Unix Users Group newsletter, June 2002

A Lesson in Scripting (pdf)
STQE magazine, Mar/Apr 2002

Book Review: Mastering Regular Expressions
Dallas/Fort Worth Unix Users Group newsletter, March 2002

A Year at Tejas
Tejas Software Consulting Newsletter, February/March 2002, v2 #1

A Bug Tracking Story  (pdf)
Slides from my presentation at the ASEE Software Engineering Process Improvement Workshop, 2002

Use Your Resources
Tejas Software Consulting Newsletter, December 2001, v1 #9

Boom! Celebrating a Successful Test
Tejas Software Consulting Newsletter, October 2001, v1 #8

“Reference Point: Testing Applications on the Web” (book review)
STQE magazine, November/December 2001

Book review – The Art of Software Testing
Tejas Software Consulting Newsletter, September 2001, v1 #7

Scripts on My Tool Belt (PowerPoint slides)
Fall 2001 Software Test Automation Conference.

Tools, Tools, Everywhere, but How Do I Choose?
Tejas Software Consulting Newsletter, August 2001, v1 #6, and also a letter from the Technical Editor on StickyMinds.com.

What’s on Your Tool Belt?
Tejas Software Consulting Newsletter, July 2001, v1 #5

Software Quality Notes: Test Tools for Free
Dallas/Fort Worth Unix Users Group newsletter, June 2001

Book Review: Load Testing for eConfidence
Tejas Software Consulting Newsletter, v1 #4, June 2001

Watts Humphrey on Teams
Tejas Software Consulting Newsletter, May 2001, v1 #3

Performance Testing Terms – the Big Picture
Tejas Software Consulting Newsletter, April 2001, v1 #2

Setting up a Mailing List with Subscribe Me Lite
Dallas/Fort Worth Unix Users Group newsletter, April 2001

Consumer protection efforts threatened by UCITA
Dallas-Fort Worth TechBiz, April 30, 2001.

Software Quality Notes — A Return to Fundamentals in the 00’s
Dallas/Fort Worth Unix Users Group newsletter, March 2001, reprinted in the Tejas Software Consulting Newsletter, March 2001, v1 #1

My observations on UCITA in Texas
Dallas/Fort Worth Unix Users Group newsletter, February 2001

Seamless Risk Management from Project to CEO (PowerPoint slides)
February 12, 2001 meeting of the IEEE Dallas Section Consultants Network.

Developing Your Professional Network
The Career Development column in the January/February 2001 issue of Software Testing and Quality Engineering magazine. References from the article are available on the STQE web site.

Position Statement: Is Software Reliability an Oxymoron?
Panel Session at the Technology Business Council Software Roundtable, Richardson Chamber of Commerce, October 26, 2000

Book Review: The Cathedral & the Bazaar
Dallas/Fort Worth Unix Users Group newsletter, April 2000

Asynchronous Improvement: a cst-improve experience report
Presented at the ASEE Annual Software Engineering Process Improvement Workshop, February 19, 2000

Book Review: Open Sources
Dallas/Fort Worth Unix Users Group newsletter, October 1999

Getting Published, or, Help Bring Software Engineering out of the Dark Ages and Help Your Career Too
The Career Development column in the July/August 1999 (Volume 1, Issue 4) Software Testing and Quality Engineering magazine.

Book Review: Managing Mailing Lists
Dallas/Fort Worth Unix Users Group newsletter, October 1998

The PIT Crew: A Grass-Roots Process Improvement Effort
Presented at the Software Test, Analysis, and Review conference, May 1998.

Software Defect Isolation
Co-authored with Prathibha Tammana. Presented at the High-Performance Computing Users Group, March 1998, and InterWorks, April 1998.

Integrating Perl 5 Into Your Software Development Process
Co-authored with Orion Auld. Presented at the High-Performance Computing Users Group, March 1998, and InterWorks, April 1998.

The Jargon of J. Random Hacker
Dallas/Fort Worth Unix Users Group newsletter, November 1997

Usenetters Leaving the Homeland
Dallas/Fort Worth Unix Users Group newsletter, July 1997

Book Review – The FAQ Manual of Style
Dallas/Fort Worth Unix Users Group newsletter, June 1997

Experience with OS Reliability Testing on the Exemplar System: How we built the CHO test from recycled materials (slides)
Presented at Quality Week ’97 and the May 16, 2000 meeting of the IEEE Reliability Society, Dallas Chapter.

The Scourge of Email Spam
Dallas/Fort Worth Unix Users Group newsletter, January 1997

What Color is Your Surfboard?
Dallas/Fort Worth Unix Users Group newsletter, December 1996

Hunting for Unix Knowledge on the Internet
Dallas/Fort Worth Unix Users Group newsletter, November 1996

Tester’s Toolbox column: The Shell Game
Software QA magazine, pp. 27-29, Vol. 3 No. 4, 1996
Using Unix shell scripts to automate testing. Basic information about the available shells on Unix and other operating systems.

Tester’s Toolbox column: Toward A Standard Test Harness
Software QA magazine, pp. 26-27, Vol. 3 No. 2, 1996
The TET test harness and where it fits into the picture.

Tester’s Toolbox column: Testing Interactive Programs
Software QA magazine, pp. 29-31, Vol. 3 No. 1, 1996
A concrete example of using expect to automate a test of a stubborn interactive program.

Tester’s Toolbox column: Using Perl Scripts
Software QA magazine, pp. 12-14, Vol. 2. No. 3, 1995
The advantages of using the perl programming language in a test environment, help in deciding whether to use perl and which version to use.

Apple Kaleidoscope
Compute! magazine, pp. 111-112, issue 91, Vol. 9, No. 12, December 1987
I was interviewed about this article for “The Software Update” episode of the Codebreaker podcast, released December 2, 2015.

Jerry at Berkeley

In this installment of “Jerry’s Story,” we’ll continue the tale of Jerry Weinberg’s education. Refer to the home page for Jerry’s Story to see the other installments.

While he was finishing his undergraduate degree in Lincoln, Nebraska, Jerry decided that he still had much more to learn. He applied to six graduate schools: Harvard University, Princeton University, Stanford University, the University of Chicago, Massachusetts Institute of Technology, and the University of California, Berkeley. All but Stanford accepted him and offered a fellowship. Despite his acceptance by five schools, the rejection from Stanford bothered him for some time – he was still sensitive about the awards he was cheated out of in high school. Later, when he realized how small Stanford was compared to the others, he had a better understanding of why they might not have had room for him.

UC Berkeley was his first choice, because two of his physics professors at the University of Nebraska had strong connections there. They could arrange a job for him to supplement his fellowship, and they could help him get into a program to earn his master’s and PhD simultaneously. He accepted the invitation from Berkeley. Shortly after that, he received an acceptance letter from MIT, also offering a job in their computing lab. He very much wanted to go to MIT so he could work with computers, but he was afraid that someone at Berkeley would tell MIT that he had reneged on his acceptance and then MIT would reject him. On later reflection, he realized that this thinking was naive. But he would become much fonder of the Western region of the U.S. than the East Coast, so he was probably happier in California than he would have been in Massachusetts. The computers would come soon enough.

Jerry moved to Berkeley, California in 1955 with his wife Pattie. Shortly thereafter, in September, their first child, Chris, was born. They had some typical first-time parent worries. They were in what was essentially a one-room apartment, so Chris slept in a crib not far away from them. The first night they brought him home, they worried all night about whether he was still breathing. They would drift off to sleep, then both of them would wake with a start because they couldn’t hear him breathing. But he was fine – Chris kept breathing, and he slept a lot better than his parents did.

When Chris was two weeks old, they took him to a pediatrician for his first checkup. I’ll share with you the conversation I had with Jerry about how that went –

Jerry: So we go in and we had about thirty pages of handwritten questions for our pediatrician.

Danny: Oh my Lord. Poor doctor.

And they were prioritized.

Thank goodness for that.

We knew we probably couldn’t get to all of them so we had the most important one first. What do you think the first question was? Two weeks.

So many possibilities.

You’ll never guess it.

Well you mentioned breathing so I guess I just have to say, “How do you make sure he keeps breathing without staying up all night?”

No the first question was when should we get him his first pair of shoes.

Wow.

I’ll never forget this, I wish I had a video of this. And he’s this wise old guy and he says, ‘Well that’s an important question. Because you know if your kid gets to high school and he’s barefoot, the other kids are gonna mock him, it’s going to destroy him psychologically.’ I remember the answer, it was just wonderful.

Great answer!

And we just put away the rest of the questions. It was so good, that was one of the great learnings of my life.

Jerry started his coursework in Physics. This included working with a particle accelerator called the “Bevatron” at Lawrence Berkeley National Laboratory, which overlooked the UC Berkeley campus. The Bevatron had only begun operation the previous year. He set up experiments to try to simulate cosmic ray events. About 90% of the work involved stacking lead bricks to build a shelter from the particle beam. The researchers didn’t carry any kind of radiation detection with them, and Jerry worried later about whether the beam in the accelerator had caused him any harm. Records show that proper shielding may have only been installed later.

The Bevatron was used for some groundbreaking work around this same time, but we don’t know whether Jerry was involved with any of it. In 1955, the existence of the antiproton was proven using the Bevatron, which earned a Nobel Prize for two people. The antineutron was discovered there in 1956. The work for either of these could have overlapped with the 1955–1956 school year when Jerry was working with the Bevatron, and cosmic ray experiments like he was doing may have been relevant to the antiproton work.

Interior of the Bevatron without shielding in place, 1956. Photo credit: Berkeley Lab.

In less than a year, Jerry had passed the necessary exams and finished the experiments for his thesis, which concerned a mysterious bump in a cosmic ray energy graph. But he never finished writing his thesis. Early in 1956, Jerry saw an ad in Physics Today that changed everything for him. Here’s the text of it, in part–

FOR THE MATHEMATICIAN
who’s ahead of his time

IBM is looking for a special kind of mathematician, and will pay especially well for his abilities.

This man is a pioneer, an educator—with a major or graduate degree in Mathematics, Physics, or Engineering with Applied Mathematics equivalent.

You may be the man.

If you can qualify, you’ll work as a special representative of IBM’s Applied Science Division, as a top-level consultant to business executives, government officials and scientists. It is an exciting position, crammed with interest, and responsibility.

Employment assignment can probably be made in almost any major U.S. city you choose. Excellent working conditions and employee-benefit program.

Other ads that IBM placed that year were more clear that the job involved computers, but this one did include a picture of a computer room with a caption talking about data processing. You can imagine the appeal – the chance to finally work with computers, a promise of a good salary, and a choice of where to live. He had the right degree. He happened to be male, which the ad strongly implied was an important factor. Jerry applied for the job. 

Jerry and Pattie were almost out of money. His fellowship covered his tuition. Wedding gifts and a small amount of savings were covering the rest. They had no health insurance to help pay for Chris’ birth, and now their second child was on the way. Jerry borrowed $400 from his father, the only time in his life he had to borrow from him. Though they were down to their last penny, he would be able to pay it back soon.

Jerry got an offer to start at IBM on June 15. He told the university he was leaving, and his fellowship was terminated. His advisor cried after hearing the news – Jerry needed perhaps only two months more to complete his thesis to earn his doctorate. He did leave UC Berkeley with a master’s degree in Physics as a consolation prize. When I asked Jerry if he had any regrets about leaving, he answered, “only my regret that I’m finite, and can’t do everything I’m interested in.”

He had also applied for an engineering job at Boeing in Seattle, which led to a second offer. This job did not involve working with computers, but the salary was more than twice as much as IBM was offering. Plus, he could start a few weeks earlier, which was important, because his fellowship money was gone and he was broke. But the computers were calling him. Jerry told IBM that if he couldn’t start a few weeks earlier, he would go to Boeing instead. IBM said “Yes” and Jerry accepted their offer.

It’s hard to tell whether Jerry was bluffing about going to Boeing. The part of the decision that was easy for him was leaving the university. He said, “I realized that the PhD would be irrelevant to my life, and I wouldn’t learn anything new completing the thesis. My favorite expression about education I think is by Mark Twain, who said ‘I was always careful never to let my schooling interfere with my education.’” (The Quote Investigator gives compelling reasons why Grant Allen is more likely the originator of this aphorism.)

Going to college for him was all about what he could learn, and only peripherally about earning a degree. His desire to always be learning extended beyond his schooling. This influenced all of his decisions about how he spent his time, including his decision to walk away from a chance to double his salary at Boeing and work for IBM instead. 

If he saw opportunity that didn’t involve learning, he was likely to turn it down. And if he was doing something that didn’t allow him to learn at a sufficient pace, he would tend to stop that activity. But how did he judge whether he was learning fast enough? Jerry told me, “It’s just a feeling. Like how do you know you’re hungry?”

Years later, Jerry did earn a doctorate, but that part of the story will be easier to understand after exploring his role as a programmer.

The next installment is: Computer Jobs in the 1950s.

It’s a Wonderful Career

As I sit here listening to Christmas music, I’m giving myself the gift of extra time to write. I want to respond to something Paul Maxwell-Walters recently tweeted:

If there is such a thing as a Tester’s Mid-Life Crisis, I think I may be in the middle of it….

He followed it up with an interesting blog post–The Fear of Age and Irrelevancy – On the Tester’s Midlife Crisis (1)

Paul cited the director of a counseling center who said mid-life crises are likely to happen from age 37 through the 50s. Paul, approaching his 40s, worries that his crisis is here. As I see my 50s getting large on the horizon, I don’t know if my crisis has passed, is still coming, or will never come. I was actually around Paul’s age when my consulting business dried up and I ended my 16-year run in software testing. Four years later, though, I went back to my comfort zone, and had four consecutive short stints in various testing jobs.

That last testing job morphed into a development job. I’m very happy with my current employer for encouraging that path to unfold. Over the years, I have fervently resisted several opportunities to move into development, some of them very early in my career. I had latched onto my identity as a tester and staunch defender of the customer, and I wouldn’t let it go.

Paul wrote:

I have also come across people around my age and older who are greatly dissatisfied or apathetic with testing. They feel that they aren’t getting anywhere in their careers or are tired of the constant learning to stay relevant. They feel that they are being poorly treated or paid much less than their developer colleagues even though they all work in the same teams. They hate the low status of testing compared to other areas of software development. They regret not choosing other employers or doing something else earlier.

That’s surely the story of any tester’s career. Low status, low pay, slow growth. I embraced it, because I loved the work and loved what it stood for. The dissatisfaction seems to be more common now than it used to be, though. My advice, which you will know if you’ve been reading things on my blog like “The black box tester role may be fading away”, is: get out! Don’t transition to doing test automation. Become a developer, or a site reliability engineer, or a product owner, or an agile coach, or anything else that has more of a future. I think being a testing specialist is going to continue to get more depressing as the number of available testing jobs slowly continues to dwindle.

Because I’m writing this on Christmas Eve, I want to put an It’s a Wonderful Life spin on it. What if my testing career had never been born? In fact, what if the test specialist role had never been born?

Allow me to be your Angel 2nd Class and take you back to a time when developers talked about how to do testing. Literature about testing was directed toward developers. What if no one had worried about adding a role that had critical distance from the development process? What if developers had been willing to continue being generalists rather than delegating the study of testing practices to specialists, while shoving unit testing into a no-man’s land no one wanted to visit?

And what if I could have gotten over the absolute delight I got from destroying things and started creating things instead? I’m sure I’d be richer now, and I’d have better design skills. But alas, I’m not actually an Angel 2nd Class, and more to the point, I haven’t dug up enough historical context to really play out this thought experiment. But I’ll try to make a few observations. Within the larger community of developers, I might not have been able to carve out a space to start a successful independent consulting practice, which I dearly loved doing as a tester. Maybe I wouldn’t have developed the appreciation for software quality that I have now. Maybe I wouldn’t have adopted Extreme Programming concepts as readily as I did, which has now put me in a very good position process-wise, even if I’m having to catch up my enterprise design and architecture skills.

How about not having any testers in the first place? Maybe the lack of critical distance would have actually caused major problems. Maybe the lack of a quality watchdog would have allowed more managers to actually execute their bad decisions. And maybe those managers would have been driven out of software management as a result. Would the lack of a safety net have actually improved the state of software management by natural selection, and even allowed some companies with inept executives to die a necessary death? I think I’m hoping for too much here, and perhaps being too brutal on Christmas Eve.

It has been a wonderful career. It could have been a different career, but I’m just glad that it has taken me to where I am now. Paul, I wish you a successful outcome from your mid-career crisis. I realize that my advice to get out is much easier said than done.

Reflections on Boris Beizer

Another one of my mentors is gone – I got the news that Boris Beizer passed away on October 7, 2018. I’d like to pause to share some of my recollections of Boris. If you knew him, I would love to hear your stories, too.

I think my first introduction to him was reading his book Software Testing Techniques. It was published before the software testing specialist role was common. I was working as a software test engineer, and I was a bit confused by the book’s point of view. I discovered that Boris and most of the other authors who wrote about software testing at the time were participating in the comp.software.testing Usenet newsgroup. This was likely in 1994, give or take a year. I was amazed that I could interact with the people who “wrote the book” on software testing. So I joined in, and I learned a lot more than I would have just from their books. Somewhere along the way, Boris explained that Software Testing Techniques was written for programmers, and suddenly it made a lot more sense to me. When I wrote the frequently asked questions list for the newsgroup, I used quite a bit of material from Boris to flesh it out.

In 1995, I set up the swtest-discuss email list, which Mark Wiley and I conceived as a place to discuss operating system testing with a few colleagues we knew. The list grew to 500 subscribers and the topic area greatly expanded. Some people liked how we could enforce a better signal-to-noise ratio than what we had on comp.software.testing. Boris participated on the list. But some people felt that his tone was too abrasive. I’ve forgotten the details of the social dynamics that were in play so long ago. Some people moved on to other forums where Boris wasn’t invited. I realize I can’t make everyone happy. And Boris clearly didn’t care to.

My participation on Usenet got the attention of Dr. Edward Miller, the conference chair for the Quality Week conference. He invited me to join the conference’s advisory board that chose the papers that would be included. I was flabbergasted. I was still practically a kid. But Dr. Miller was certain he wanted me on the board. So I accepted. I joined a distinguished group of industry experts and academics, including Boris Beizer, who was a prominent industry consultant and also still acted like an academic, having earned one of the first-ever PhDs in computer science.

I traveled to the Quality Week conference in San Francisco, which was held at the Sheraton Palace. I remember going to the dinner that the advisory board was invited to during the conference each year as a thank-you for our efforts. I wasn’t sure how I was going to get to the restaurant as I stood on the curb in front of the hotel with Boris and other board members, many of them smartly dressed, and me in my business casual. Then Boris hailed a limo. What? I didn’t know then that you could hail a limo, but that’s how several of us got to the restaurant. Edward and Boris and the rest accepted me as one of their own, despite my inexperience and casual mode of dress.

Some of the specific things I remember from Boris include the Pesticide Paradox, which taught us that test suites lose their effectiveness over time. His software bug taxonomy inspired many discussions, and I even helped him research the origin of the word “bug.” He taught me that if I can model any aspect of a program as a graph, I can use that graph to guide my testing. And not long ago, at a talk I was giving, someone in the audience reminded me of the fabulous poem “The Running of Three-Oh-Three.” Boris published it at the very beginning of Software Testing Techniques, “with apologies to Robert W. Service.” It remains the best poem about software testing that I’ve seen. I’ve only now bothered to figure out the link to Robert Service; it seems that Boris’ inspiration was Service’s poem “The Cremation of Sam McGee,” published in 1907.
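
If you haven’t seen graph-guided testing in action, here’s a minimal sketch of the idea in Python. The login-flow model, state names, and actions are hypothetical illustrations of mine, not from Boris’ books; the point is only that once you’ve drawn the graph, deriving tests becomes mechanical – for example, one test case per transition, each reached by the shortest path from the start state.

# A minimal sketch of graph-guided testing. The login-flow model below is
# a hypothetical example: states are nodes, user actions are edges, and
# every edge becomes the target of one test case.
from collections import defaultdict, deque

# Each transition is (from_state, action, to_state).
transitions = [
    ("logged_out", "enter_valid_credentials", "logged_in"),
    ("logged_out", "enter_bad_credentials", "logged_out"),
    ("logged_in", "log_out", "logged_out"),
    ("logged_in", "session_timeout", "logged_out"),
]

graph = defaultdict(list)
for src, action, dst in transitions:
    graph[src].append((action, dst))

def shortest_path(start, goal):
    """Breadth-first search for the shortest action sequence from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, dst in graph[state]:
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [action]))
    return None  # goal unreachable from start

# One test case per transition: drive the system to the edge's source
# state, then exercise the edge itself.
for src, action, dst in transitions:
    prefix = shortest_path("logged_out", src)
    if prefix is not None:
        print(" -> ".join(prefix + [action]))

Covering every edge at least once, as this sketch does, is only the simplest coverage criterion; Boris’ writing goes much deeper, into paths, loops, and flow models.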

Boris must have been in high demand. He told me at one point that he sold his services in 1-week blocks for US$20,000. Any shorter time than that wasn’t worth his attention. He told me later that he had enough “f-you money” to be very selective about which clients he took. He is credited with changing the industry in ways I don’t even understand, because the transformation was well underway when I joined the scene. With his brash nature, he made enemies along the way. But I didn’t like to choose sides. I have learned both from Boris and many of the people who steered clear of him.

I am especially proud of the inscription that Boris wrote in my copy of his book Black Box Testing:

Boris Beizer inscription

However, after I read the book, I had to report to him that I really didn’t like it. He explained that the publisher had assigned him an inexperienced editor who made a wreck of the book. I sure learned a lesson about dealing with publishers.

I found out at some point that Boris had written two fiction books under the pseudonym Ethan I. Shedley. They were both out of print, but I found a used copy of The Medusa Conspiracy. I started reading the book but didn’t finish it. I probably don’t have the generational context to be able to appreciate it.

Ever since Boris retired some time ago, I’ve wondered if we would ever hear about him again. Last February, I felt an urge to check on him. I no longer had a working email address for him (he seemed to change his email account regularly), but his phone number was easy to find in his Usenet signature. Dialing a phone is a quaint thing nowadays, but I was determined. Sure enough, someone picked up the phone, and when she asked who was calling, I had to hastily summarize who Boris was to me. She summoned him to the phone and we had a nice talk. I mentioned that I’m writing a biography, and as soon as it came out of my mouth, I felt that awkward sensation that I’ve felt a few times before: that I was talking to someone who may merit a biography of their own, yet they hadn’t made the cut. Boris mentioned his last book, “Software Quality Reflections.” I still didn’t have a copy (it may have been an e-book), and I think the only way to get one was to get it straight from him. I sent an email to his new address to request it, as he asked me to do, but I never got an answer.

For more about Dr. Beizer, see the interview in the May 13, 1985 issue of Computerworld. This was before he started his consulting practice, and there’s a great picture of him. You’ll also find his resume here. Other remembrances have been posted by Jerry Durant, Simon Mills, Bob Binder, and Rex Black. Here is his obituary.

We’ve come full circle, with Boris ushering in the age of the testing specialist, and now as he makes his exit, testing efforts are shifting right back to the developers he originally addressed. I think his goals are well-stated in the dedication that he wrote in Software Testing Techniques. I’ll let him have the last word–

Dedicated to several unfortunate, very bad software projects for which I was privileged to act as a consultant (albeit briefly). They provided lessons on the difficulties this book is intended to circumvent and led to the realization that this book is needed. Their failure could have been averted—requiescat in pace.