
swalchemist

Category Archives: testing

Use Your Unit Tests, or Else

04 Thursday Sep 2025

Posted by Danny R. Faught in software-development, testing

≈ 2 Comments

Tags

programming, software-development, testing

I’ve been talking to a few people about Test-Driven Development, which has me thinking about what important factors for success might be overlooked. Here’s a big one: you have to use the unit tests. Otherwise, they’ll rot on the vine and the whole TDD effort will be in jeopardy.

Here’s what I look for – you want to have a CI pipeline that automatically runs all of the unit tests and fails the build if any test case fails. You shouldn’t be able to release the software (without an emergency override from senior management, perhaps) if the unit tests aren’t all passing. If you don’t have a CI pipeline yet, you can still put this safeguard into your build process, and ideally into your version control system so you can’t make a code change if the tests aren’t all passing.
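
To make that gate concrete, here’s a minimal sketch in bash (the language used later in this archive). Note that run_unit_tests is a hypothetical stand-in for whatever actually runs your suite – bats, mvn test, pytest, and so on:

```shell
#!/usr/bin/env bash
# Hypothetical build gate: the build may proceed only if the whole suite passes.
# "run_unit_tests" is a placeholder for your real test runner.
run_unit_tests() {
  return 0   # placeholder: simulate a passing suite
}

if run_unit_tests; then
  echo "Tests green - build may proceed"
else
  echo "Tests red - build blocked" >&2
  exit 1
fi
```

The same check dropped into a pre-push git hook gives you the version control safeguard: git rejects the push whenever the hook exits nonzero.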

If you already have failing unit tests (you might say “the tests are red”), the first step is to get them to pass. Maybe this means fixing bugs in your production code, or more likely, fixing broken tests. But in a lot of cases, the most practical thing to do is to disable or delete the troublesome tests. You can put tasks in your backlog to fix them if you want to, but if they’re failing now, they’re not helping you, and you should get your tests back to “green” quickly. Only then is it feasible to introduce a mandate that they have to stay green.
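
A sketch of what “disable it, but keep it visible” can look like in plain bash – the test name and backlog note here are made up (bats, which appears later in this archive, has a built-in skip command for the same purpose):

```shell
#!/usr/bin/env bash
# Sketch: an explicitly skipped test announces itself and passes,
# keeping the suite green while the cleanup task stays visible in the output.
test_cancel_handles_empty_file() {
  echo "SKIP: test_cancel_handles_empty_file (broken since parser change; see backlog)"
  return 0
  # The original assertions stay below, unreachable until the skip is removed.
  [ -z "$(cat </dev/null)" ]
}

test_cancel_handles_empty_file
```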

Do you see a failed unit test run and instinctively hit the button to re-run it because the tests are likely to pass after a few tries? Does your CI already do this automatically? Flaky failures are less likely in unit tests than in higher order tests, but they do happen. It’s also fairly common that tests called unit tests are really higher order tests (a topic we can explore later). This is a tough problem to deal with. I recommend that you take measures to identify which tests are flaky, and also keep metrics on which builds encountered one or more flaky tests. You might choose to remove flaky tests, but when you do, you also remove the possibility that they might start failing consistently and thus demand the investigation of a bug in production code. It might make sense to take a long-term look at improving the design of your troublesome test suites, because flaky tests have a cost that you bear over time.
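
One crude but effective way to start gathering flakiness data is to re-run a single suspect test many times and count the failures. In this sketch, run_one_test is a stand-in that simulates a coin-flip test:

```shell
#!/usr/bin/env bash
# Sketch: quantify flakiness by repetition. A healthy test fails 0 times out of 20;
# anything else deserves investigation before you trust (or delete) the test.
run_one_test() {
  (( RANDOM % 2 == 0 ))   # placeholder: a "test" that fails about half the time
}

failures=0
runs=20
for (( i = 0; i < runs; i++ )); do
  run_one_test || failures=$(( failures + 1 ))
done
echo "$failures failures out of $runs runs"
```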

Do you have no unit tests at all? It’s time to get started! You only need one in order to update your build process and get people used to having green tests. Really, just one.
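
That first test really can be trivial. Here’s a sketch in plain bash, with a made-up function under test:

```shell
#!/usr/bin/env bash
# The entire starting suite: one test of one function. Wire its exit code into
# the build so that a failure fails the build, then grow the suite from here.
greet() { echo "hello"; }

test_greet() {
  [ "$(greet)" = "hello" ] || { echo "FAIL: test_greet" >&2; exit 1; }
  echo "PASS: test_greet"
}

test_greet
```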

Why is all of this important? As soon as failing tests become normalized, you lose track of how much bit rot has happened in the tests. The technical debt quickly builds to the point that the most dedicated TDD adherents end up spending a large fraction of their time fixing problems introduced by other developers so that they can get the tests back to green and get back to their work. The pressure to ship the next code change will soon overwhelm everyone, and then your developers aren’t doing TDD at all.

If you’re not going to use the tests, don’t add them to your code base.

Design Evolutions Doomed to Never Finish

28 Wednesday Aug 2024

Posted by Danny R. Faught in technology, testing

≈ Leave a comment

When I’m maintaining a code base, sometimes I’ll get a big idea about a redesign that will improve the code. And because I strongly prefer to make changes in small steps, this means that the new design may be done in only a small part of the code for some time, living alongside the old design it’s replacing. In fact, the old design may never be completely replaced. How do we feel about that?

I’ll point to a concrete example. I was working with the open source CloudFoundry app called UAA. There were a few tests for the web UI that were relevant to some changes I needed to make to the code, and I saw that the tests had some duplicated code that could be cleaned up. Rather than simply extract the duplicated code to some common methods, I decided to introduce the Page Object Pattern (I’ve had a lot of success with this pattern on other projects, but I won’t go into all the details of how it works here). You can see the first two tests that I redesigned in this pull request: Rearchitect two integration tests to use page objects.

After that pull request was merged, we had two tests using page objects, but many more that did not. I had agreement from my team that this was a good change, but we then had a mix of different designs in the tests. Even after later converting several more tests in the same test class to use page objects (current version: SamlLoginIT.java), as of now, not all of the tests in the class are using this design, and there are several other unconverted test classes as well. It’s now almost a year later with little further progress, and I’m no longer working on the team. The redesign will probably not be complete across all of the relevant tests for the lifetime of the repository.

We had a great discussion about this at a Legacy Code Rocks meetup (thanks Scott, Chris, Sarosh, Steve, Jacob, and Jeremy). The incomplete evolution seems to be a very common experience for developers (I’ve encountered it myself several times). The general feeling was that doing them is the right decision, even knowing that the rest of the code may never catch up. It was suggested that we should document these design choices so that in the future, maintainers will know which of the existing design approaches should be used for new code and further refactoring. Finding the best place to put this kind of documentation and keep it up to date can be a challenge, however.

So my advice is to leave the code you’re working on better than you found it, even if you don’t clean up all the other code.

Thanks to Bruce Ricard for making some great contributions to the UAA redesign mentioned above and for inspiring this post.

featured photo credit: spaceamoeba, CC BY-NC-ND 2.0

Four Years after the 343

05 Friday Jul 2024

Posted by Danny R. Faught in career, testing

≈ 1 Comment

Almost four years ago, in 2020, I joined Alan Page for a podcast episode, ABT 343 – Danny Faught, which was part of a series where he would interview some of the listeners of the AB Testing podcast. I was recently asked whether I still stand behind what I said. I had to go back and listen to the episode to answer that question. My answer is, pretty much, “yes”, except I might end up not taking my own advice. Here’s my followup on the interview.

First of all, I was talking about how I was happy about working as a software developer and I did not recommend being a testing specialist. I find myself between jobs right now, and I’m considering various job openings where I could help developers get more directly involved in building test automation for the code they’re developing. Perhaps that would get me back into the business of being a testing specialist? I’m not going to overthink it. Strike that, yes, I am going to overthink it.

Another thing we talked about was being a software generalist. I’ve been asking around trying to learn how to market myself as a generalist. After not getting any help with that, I found this help: “A Career Guide for the Recovering Software Generalist” (along with several other great posts on that site, and a book). I’ve taken “generalist” out of my LinkedIn headline and decided to take more of a stand on what types of things I like to do.

We talked about my attempts to write a biography of Jerry Weinberg. I’ve only posted two more installments since then, but I was recently at a writer’s meetup working on the next one, so I’m still plugging along on it very slowly.

I mentioned having discussions on Twitter. I haven’t tweeted for about a year and a half. I miss it, but I just can’t support it any more. Instead, I’ve been getting to know Mastodon (I’m @swalchemist@fosstodon.org). I’ve found that I’m getting better engagement on LinkedIn, actually, so I’ve been more active there too.

Listening to that episode again, I like to hear how much we also learn about Alan. I hope you’ve tried DuoLingo again, Alan, or if you’ve used another language learning app, I’m curious which one. My DuoLingo streak is 1,704 days now. (Join me! I’m @swalchemist.)

It’s Still a Wonderful Career

17 Sunday Mar 2024

Posted by Danny R. Faught in career, testing

≈ 1 Comment

A reader discovered my article from 2018, “It’s a Wonderful Career,” where I said this about software testing jobs: “Get out!” and “Don’t transition to doing test automation.” He asked if I still feel that way now, and the answer is “yes.” Here’s an update on my thoughts.

I’ve been a programmer for most of my life, but a majority of my career has been clustered around software testing. That shifted in 2017 when I took a job as a developer. Since then I’ve flirted with software quality in a 6-month role coaching some test automation folks who were building their programming skills, but otherwise I’ve stayed on the developer track. In my developer roles, I have worked with production code as well as a wide variety of automated tests, plus documentation and release pipeline automation. There have been no testing specialists involved at all.

In my post “Reflections on Boris Beizer” I briefly mentioned how testing as a specialty role has waxed and waned. It was perhaps the 1980s when “tester” became a common role that was distinct from software developers. Fast forward to 2011 when Alberto Savoia gave his famous “Test is Dead” keynote.

I first wrote about the possible decline of the testing role in 2016 in “The black box tester role may be fading away.” I suggested that testing might be turning into low-pay manual piecework (think, Amazon Mechanical Turk), but I don’t see any evidence now that that’s coming true. I mentioned the outsourcing company Rainforest QA. A company by the same name now offers no-code AI-driven test automation instead.

I followed up in “What, us a good example?” where I wrote about companies that aren’t evolving their roles, and yet they’re surviving, but they’re going to have to survive without me. My expectations have evolved.

I know that many people are still gainfully employed as testing and test automation specialists. I can’t fault them for doing what works for them. And I’ll admit that it’s still tempting to go to my comfort zone again and go back to focusing on testing. Maybe I can shed some light on why I’m resisting that temptation. It’s pretty simple, really.

As a developer, my salary has grown tremendously. There are multiple factors involved here, including moving to a different company a few years ago. But the organization I’m working in has no testing specialists, so this opportunity wouldn’t have been available to me if I were applying for a job as a tester. I have a sense there are many more jobs open for developers than for testers out there, especially with a salary that I would accept now, and I’m curious if anyone has any numbers to back that up.

I’m not having any issues with people not respecting my contribution to the product. I don’t have to justify myself as overhead – I have a direct impact on a product that’s making money. And I still do all aspects of testing in addition to developing, maintaining, and releasing products.

In “The Resurrection of Software Testing,” James Whittaker recently described the decline of the testing role as being even more precipitous than I thought. And he also says that it needs to make a comeback now because AI needs testing specialists. He’s organizing meet-ups near Seattle to plot a way forward. I don’t have an opinion on whether AI is going to lead to a resurgence in testing jobs. Instead I’m focusing on an upcoming conference where I’m going to hone my development skills.

And that’s really where I stand on a lot of discussions about testing jobs – no opinion, really. I don’t benefit by convincing testing specialists to change their career path, but I’m happy to share my story if it helps anyone navigate their own career.

One thing I do ponder: there are still so many organizations out there that employ testing specialists, and I might end up working as a developer for one of them some day. How strange that would be if it comes to pass.

[credit for feature image: screen shot from the video of Alberto Savoia’s “Test is Dead” presentation at the 2011 Google Test Automation Conference]

Beware Groupthink

05 Monday Feb 2024

Posted by Danny R. Faught in technology, testing

≈ 1 Comment

Why you should own your opinions

I know you can’t tell from the looks of it, but I’ve been hanging around here a lot lately, working on a post that wants to turn into a book of its own. I’m making good progress getting it tamed into a reasonable length. But first, inspired by LinkedIn posts from James Bach and from Jon Bach and subsequent comments, I want to explore another idea rattling around in my head.

I’m going to talk about three examples where members of a community were accused of groupthink of some sort. In many cases, some people observing the communities say that they see cult-like behavior. I’d prefer not to use the derogatory term “cult” here for a couple of reasons. One, because cult leaders actively encourage groupthink and blind obedience, and I don’t see that happening in these cases (even if their followers are picking up some of these traits). And also, because real cults have done a lot of damage, such as permanently ripping families apart. Let’s not equate that with what I’m talking about here.

Example 1: I learned a lot from the author and consultant Jerry (Gerald) Weinberg. I am one of his followers. People outside his circle often don’t understand the level of devotion that many of his followers exhibited during his life and afterward. Someone even coined a term for it: “Weinborg”, which many of us have adopted for ourselves. (If you don’t get the reference, look up the fictional “Borg” in the Star Trek universe – we have been assimilated).

I attended three of Jerry’s week-long workshops. Every time I’ve been through an intense experiential training, it has been a deeply moving experience. That’s true of Jerry’s workshops, plus other experiential trainings I’ve attended (several Boy Scout training sessions come to mind, for example). Once you’ve recovered, you want more. But you can’t effectively explain why it was so moving to someone who wasn’t there. In fact, for many of these trainings, you can’t give away too many of the details, or you may ruin the experience for someone who attends later.

There are likely many other behaviors among Jerry’s followers that looked odd to outsiders. Perhaps we would invoke his laws by name, like “Rudy’s Rutabaga Rule”. Or we might reference “egoless programming” and point to the book where Jerry wrote about it. We might get ourselves into trouble, though, if we recommended that people follow his ideas without being able to explain them ourselves. “Go read his book” isn’t very persuasive if we can’t give the elevator speech ourselves to show the value in an idea.

Early in my career, a wise leader cautioned me to build opinions of my own rather than constantly quoting the experts. That has been a guiding principle for me ever since, and one that I hope has steered me away from groupthink.

Example 2: James Bach is a consultant and trainer who has influenced a lot of people in the software testing community, along with his business partner. I have learned a lot from James, and I continue to check in with him periodically, though I have never chosen to join his community of close followers. Incidentally, he has also been influenced by Jerry Weinberg.

James has grown a community of people who agree on some common terminology, which streamlines their discussions. It gets interesting, though, when someone uses that terminology outside that community without explaining what it means to them. I remember attending a software quality meetup that advertised nothing indicating that it was associated with James Bach or his courses. But then I heard the organizers and some attendees use terminology that I recognized as originating from James. It’s been several years since the meetings I attended, but I think I remember them presenting other ideas that closely align with what James teaches, not always identifying where they came from or why they recommended them. I vaguely remember that I stood up once or twice and told them that I hadn’t accepted some of those ideas, and I don’t recall the discussion going very far.

If a group has an ideology that they expect participants to adopt as a prerequisite for participating, that’s fine, but it needs to be explicit. Otherwise, they need to be prepared to define their terms and defend their ideas.

Example 3: I participate in the “One of the Three” Slack forum and often listen to its associated podcast created by Brent Jensen and Alan Page. They have spoken out about James Bach and his community a few times. At one point, some participants piled on to some negative comments that seemed to have no basis other than “because Alan and Brent said so.” I called them out for groupthink, not unlike the very thing they were complaining about. Fortunately, I think they got the message.


I remember talking about the “luminary effect” with author and consultant Boris Beizer years ago. This is where people hesitate to challenge an expert, especially a recognized luminary in their field, because of their perceived authority on a topic. But in fact, all of the experts I’ve mentioned love for you to give them your well-reasoned challenges to any of their ideas. Granted, the more they love a debate, the more exhausting and intimidating it can be to engage with them. They are smart, after all, and you need to do your homework so you can competently defend your ideas – that’s not asking too much, right? In fact, one of the best ways to get their respect is to challenge them with a good argument. I just hope that a few of my ideas here will survive their scrutiny.

In this post I’m talking about some controversial people and some controversial topics. Where I’ve stayed neutral here, I’ve done so very deliberately, and though I have some further opinions unrelated to the topics I’m discussing, I’m not going to muddy this post with them.

Further reading – Beware Cults of Testing by Jason Arbon.

Going bats with bash unit testing

05 Wednesday Aug 2020

Posted by Danny R. Faught in technology, testing

≈ 2 Comments

Tags

bash, shell script, TDD, unit test

My team is committed to Test-Driven Development. Therefore, I was struck with remorse recently when I found myself writing some bash code without having any automated unit tests. In this post, I’ll show how we made it right.

Context: this is a small utility written in bash, but it will be used for a fairly important task that needs to work. The task was to parse six-character record locators out of a text file and cancel the associated flight reservations in our test system after the tests had completed. Aside: I was also pair programming at the time, but I take all the blame for our bad choices.

We jumped in doing manual unit testing, and fairly quickly produced this script, cancelpnrs.bash:

#!/usr/bin/env bash

for recordLocator in $(egrep '\|[A-Z]{6}\s*$'|cut -d '|' -f 2)
do 
  recordLocator=$(echo -n $recordLocator|tr -d '\r')
  echo Canceling $recordLocator
  curl "http://testdataservice/cancelPnr?recordLocator=$recordLocator"
  echo
done

The testing cycles at the command line started with feeding a sample data file to egrep. We tweaked the regular expression until it was finding what it needed and filtering out the rest. Then we added the call to cut to output the record locator from each line, and then put it in a for loop. I like working with bash code because it’s so easy to build and test code incrementally like this.
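
Those command-line cycles look something like this, using a sample line in the same format as the test data shown later in the post:

```shell
# Step 1: confirm the pattern matches a line ending in a pipe plus six capitals.
printf 'Checkin2Bags_Intl|LZYHNA\n' | grep -E '\|[A-Z]{6}\s*$'

# Step 2: bolt on cut to extract just the record locator field.
printf 'Checkin2Bags_Intl|LZYHNA\n' | grep -E '\|[A-Z]{6}\s*$' | cut -d '|' -f 2
# prints: LZYHNA
```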

After feeling remorse for shirking the ways of TDD, I remembered having some halting successes in the past with writing unit tests for bash code. We installed bats, the Bash Automated Testing System, then wrote a couple of characterization tests as penance:

#!/usr/bin/env bats

# Requires that you run from the same directory as cancelpnrs.bash

load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'

scriptToTest=./cancelpnrs.bash

@test "Empty input results in empty output" {
  run source "$scriptToTest" </dev/null

  assert_equal "$status" 0
  assert_output ""
}

@test "PNRs are canceled" {
  function curl() { 
    echo "Successfully canceled: (record locator here)"
  }
  export -f curl

  run source "$scriptToTest" <<EOF
	                        Thu Apr 02 14:23:45 CDT 2020
Checkin2Bags_Intl|LZYHNA
Checkin2Bags_TicketNum|SVUWND
EOF

  assert_equal "$status" 0
  assert_output --partial "Canceling LZYHNA"
  assert_output --partial "Canceling SVUWND"
}

We were pretty pleased with the result. Of course, the test is a good deal more code than the code under test, which is typical of our Java code as well. We installed the optional bats-support and bats-assert libraries so we could have some nice xUnit-style assertions. A few other things to note here – when we’re invoking the code under test using “source”, it runs all of the code in the script. This is something we’ll improve upon shortly. We needed to stub out the call to curl because we don’t want any unit test to hit the network. This was easy to do by creating a function in bash. The sample input in the second test gives anyone reading the test a sense for what the input data looks like.
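
The curl stub works because a bash function with the same name as an external command shadows it for everything running in that shell. A minimal illustration, with a made-up URL:

```shell
#!/usr/bin/env bash
# A function named "curl" wins the name lookup over the external binary,
# so code under test that calls curl hits this stub and never the network.
curl() {
  echo "stub curl called with: $*"
}

curl "http://example.test/cancelPnr?recordLocator=ABCDEF"
# prints: stub curl called with: http://example.test/cancelPnr?recordLocator=ABCDEF
```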

Looking at the code we had, we saw some opportunity for refactoring to make the code easier to understand and maintain. First we needed to make the code more testable. We knew we wanted to extract some of the code into functions and test those functions directly. We started by moving all the cancelpnrs.bash code into one function, and added one line of code to call that function. The tests still passed without modification. Then we added some logic to detect whether the script is being invoked directly or sourced into another script, and it only calls the main function when invoked directly. So when sourced by the test, the code does nothing but defines functions, but it still works the same as before when invoked on the command line. We changed the existing tests to call a function rather than just expecting all of the code to run when we source the code under test. This transformation was typical of any kind of script code that you would want to unit test.
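
The invoked-versus-sourced detection boils down to a single comparison. Here is the idiom in isolation, with a placeholder main function:

```shell
#!/usr/bin/env bash
# main() runs only when the file is executed directly; sourcing the file
# (as a test would) loads the definitions without running anything.
main() {
  echo "doing the real work"
}

# BASH_SOURCE[0] is always this file; $0 is whatever was invoked.
# The -ef test is true when both names refer to the same file.
if [ "${BASH_SOURCE[0]}" -ef "$0" ]; then
  main
fi
```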

At this point, following a proper TDD process felt very similar to the development process in any other language. We added a test to call a function we wanted to extract, and fixed bugs in the test code until it failed because the function didn’t yet exist. Then we refactored the code under test to get back to “green” in all the tests. Here is the current unit test code with two additional tests:

#!/usr/bin/env bats

# Requires that you run from the same directory as cancelpnrs.bash

load 'test/libs/bats-support/load'
load 'test/libs/bats-assert/load'

scriptToTest=./cancelpnrs.bash
carriageReturn=$(echo -en '\r')

setup() {
  source "$scriptToTest"
}

@test "Empty input results in empty output" {
  run doCancel </dev/null

  assert_equal "$status" 0
  assert_output ""
}

@test "PNRs are canceled" {
  function curl() {
    echo "Successfully canceled: (record locator here)"
  }
  export -f curl

  run doCancel <<EOF
	                        Thu Apr 02 14:23:45 CDT 2020

Checkin2Bags_Intl_RT|LZYHNA
Checkin2Bags_TicketNum_Intl_RT|SVUWND
EOF

  assert_equal "$status" 0
  assert_output --partial "Canceling LZYHNA"
  assert_output --partial "Canceling SVUWND"
}

@test "filterCarriageReturn can filter" {
  doTest() {
    echo -n "line of text$carriageReturn" | filterCarriageReturn
  }

  run doTest

  assert_output "line of text"
}

@test "identifyRecordLocatorsFromStdin can find record locators" {
  doTest() {
    echo -n "testName|XXXXXX$carriageReturn" | identifyRecordLocatorsFromStdin
  }

  run doTest

  assert_output $(echo -en "XXXXXX\r\n")
}

You’ll see some code that deals with the line ending characters “\r” (carriage return) and “\n” (newline). Our development platform was Mac OS, but we also ran the tests on Windows because the cancelpnrs.bash script also needs to work in a bash shell on Windows. The script ran fine under git-bash on Windows, but it took some tweaking to get the tests to work on both platforms. There is surely a better solution to make the code more portable.

We installed bats from source and committed it to our source repository, and followed the instructions to install bats-support and bats-assert as git submodules. We’re not really familiar with submodules and not entirely happy with having to do a separate installation of the submodules on every system we clone our repository to (we have to run “git submodule init” and “git submodule update” after cloning, or else remember to add the option “--recurse-submodules” to the clone command).

Running the tests takes a fraction of a second. It looks like this:

$ ./bats test-cancelpnrs.bats 
 ✓ Empty input results in empty output
 ✓ PNRs are canceled
 ✓ filterCarriageReturn can filter
 ✓ identifyRecordLocatorsFromStdin can find record locators

4 tests, 0 failures

Here is the current refactored version of cancelpnrs.bash:

#!/usr/bin/env bash

cancelEndpoint='http://testdataservice/cancelPnr'

doCancel() {
  for recordLocator in $(identifyRecordLocatorsFromStdin)
  do
    recordLocator=$(echo -n $recordLocator | filterCarriageReturn)
    echo Canceling $recordLocator
    curl -s --data "recordLocator=$recordLocator" "$cancelEndpoint"
    echo
  done
}

identifyRecordLocatorsFromStdin() {
  egrep '\|[A-Z]{6}\s*$' | cut -d '|' -f 2
}

filterCarriageReturn() {
  tr -d '\r'
}

if [ "${BASH_SOURCE[0]}" -ef "$0" ]
then
  doCancel
fi

There are two lines of code not covered by unit tests. Because the one test that hits the loop body in doCancel stubs out curl, the actual curl call is not tested. Also, the doCancel call near the bottom is never exercised by the unit tests. We ran manual system tests with live data as a final validation, and don’t see a need at this point to automate those tests.

So there you go – no more excuses!

Dusty Old Articles

26 Sunday Jul 2020

Posted by Danny R. Faught in technology, testing

≈ Leave a comment

Cover of Software QA magazine, Vol 2 No 3

It’s been too long. Hello again. I’m still working on the next installment of Jerry’s Story. I’m going to restart it once again, and some day I’m going to get it right. Meanwhile, I dug up a list of my articles, presentations, and other content from 1995 – 2009, both self-published and otherwise, and found working URLs for them (I appreciate archive.org so much!). I’m putting them here mostly for future reference, but if you do delve in, please let me know if any of the links still don’t work.

One bonus if you scroll all the way down–my very first feature article, from 1987.

There are also some posts on my Tejas Software Consulting Newsletter blog that I may republish here. Many of the articles linked below have outdated contact information. I’d love to hear from the modern you either in a comment on this post or via Twitter.

So with no further ado, in reverse order of musty outdatedness, here is my long list of nostalgia.


Testers from Another Planet
StickyMinds.com column, January 26, 2009

January 2009 StickyMinds SoundByte podcast
I was interviewed about the “Testers from Another Planet” column here.

Unit vs. System Testing-It’s OK to be Different
StickyMinds.com column, October 20, 2008

Real-World Math
StickyMinds.com column, July 14, 2008

Peeling the Performance Onion
StickyMinds.com column, April 21, 2008. Coauthored by Rex Black.

StickyMinds SoundByte, late April 2008
I was interviewed for this podcast, talking about the Peeling the Performance Onion column.

March 2008 Gray Matters podcast (mp3)
Jerry McAllister interviewed me for this podcast, where we talked about the testingfaqs.org Boneyard and the strength of the worldwide test tools market.

Bisection and Beyond
StickyMinds.com column, January 7, 2008

Communicating with Context
StickyMinds.com column, October 15, 2007. Coauthored with Michael Bolton.

Synthesize Your Test Data
StickyMinds.com column, July 20, 2007

Do You Work in IT?
Last Word column, Better Software magazine, June 2007

Toolsmiths—Embrace Your Roots
StickyMinds.com column, April 2, 2007

Challenges of the Part-Time Programmer
StickyMinds.com column, January 8, 2007

Hurdling Roadblocks
StickyMinds.com column, September 21, 2006

Get to the Point
StickyMinds.com column, June 26, 2006. Coauthored with Rick Brenner.

Wreaking Havoc With Simple Tests
StickyMinds.com column, April 1, 2006

A Look at Command Line Utilities
Better Software magazine Tool Look article, March 2006

Exploratory Load Testing
StickyMinds.com column, January 9, 2006

A Bug Begets a Bug
StickyMinds.com column, October 3, 2005

A Look at PerlClip
Better Software Tool Look article, September 2005 (free membership required)

Five Minutes Ahead of the Boot
StickyMinds.com column, July 11, 2005

After the Bug Report
StickyMinds.com column, April 4, 2005

Not Your Father’s Test Automation: An Agile Approach to Test Automation
StickyMinds.com column, January 10, 2005. Co-authored with James Bach.

Browser-Based Testing Survey
Open Testware Reviews, December 2004

Seeking the Inner Ring
Tejas Software Consulting Newsletter, December 2004/January 2005, vol. 4 #6

Technology Bulletin: Web Link Checkers
Open Testware Reviews, November 2004

Free Test Tools are Like a Box of Chocolates
Invited presentation at STARWest 2004, November 2004

Test Tools for Free: Do you get more than you pay for?
Keynote presentation at the TISQA Test Automation Today Conference and Expo, October 2004

Keyword-Driven Testing
StickyMinds.com column, November 8, 2004

How to Make your Bugs Lonely: Tips on Bug Isolation
Pacific Northwest Software Quality Conference, October 2004

The Consultant as Human
Tejas Software Consulting Newsletter, October/November 2004, vol. 4, #5

Open Source Development Tools: Coping with Fear, Uncertainty, and Doubt (PDF slides)
2004 Better Software Conference

Keywords are the Key to Good Automation (PDF slides)
DFW Mercury Users Group, September 14, 2004

Being Resourceful When Your Hands are Tied
StickyMinds.com column, August 30, 2004. Co-authored by Alan Richardson.

Software I Don’t Install
Tejas Software Consulting Newsletter, August/September 2004, vol. 4, #4

Review: MWSnap
Open Testware Reviews, August 2004

Stress Test Tools Survey
Open Testware Reviews, July 2004

Meaningful Connections
StickyMinds.com column, July 26, 2004

Alphabet Soup for Testers
Tejas Software Consulting Newsletter, June/July 2004, vol. 4, #3

Review: Mantis
Open Testware Reviews, May 2004

A Testing Career in 3-D
StickyMinds.com column, May 25, 2004

Linking up on LinkedIn
Tejas Software Consulting Newsletter, April/May 2004, vol. 4, #2

Interviewing the Interviewer: Turning the tables on Vipul Kocher
Tejas Software Consulting Newsletter, April/May 2004, vol. 4, #2

Book review: The Book of VMware
StickyMinds.com Books Guide, April 5, 2004

Testing, Zen, and Positive Forces
StickyMinds.com column, March 15, 2004

My Mentor: The Internet
Better Software magazine, February 2004

Review: FitNesse
Open Testware Reviews, February 2004

Out of the Terrible Two’s
Tejas Software Consulting Newsletter, February/March 2004, vol 3, #6

Scripting Language Survey
Open Testware Reviews, January 2004

Books in the Pipe
Tejas Software Consulting Newsletter, December 2003/January 2004, v3 #6

Let’s Hear It for the Underdogs
Better Software, 2004 Tools Directory, December 2003

Review: The Grinder
Open Testware Reviews, December 2003

Screen Capture Tools Survey
Open Testware Reviews, October 2003

Technology Bulletin: System Call Hijacking Tools
Open Testware Reviews, October 2003

What Is This “Testing” Thing?
Tejas Software Consulting Newsletter, October/November 2003, v3 #5. Republished on StickyMinds as “Dear Aunt Fern.”

Data Comparator Survey
Open Testware Reviews, September 2003

Review: QMTest
Open Testware Reviews, September 2003

Test Design Tool Survey
Open Testware Reviews, August 2003

Review: Holodeck Enterprise Edition (Trial Version)
Open Testware Reviews, August 2003

Trip Report from a USENIX Moocher
Dallas/Fort Worth Unix Users Group, July 2003, reprinted in the Tejas Software Consulting Newsletter, August/September 2003, v3 #4

Review: JUnit
Open Testware Reviews, July 2003

Diving in Test-First
developer.*, July 26, 2003

Populating the Boneyard
Tejas Software Consulting Newsletter, June/July 2003, v3 #3

Testware for Free: Adventures of a Freeware Explorer
StickyMinds.com column, May 19, 2003

GUI Test Driver Survey
Open Testware Reviews, May 2003

Review: InstallWatch
Open Testware Reviews, May 2003

Unit Test Tool Survey
Open Testware Reviews, April 2003

Review: Sclc Metrics Tool
Open Testware Reviews, April 2003

Convex is dead, long live Convex
Tejas Software Consulting Newsletter, April/May 2003, v3 #2

Black Box Test Driver Survey
Open Testware Reviews, March 2003

Review: OpenSTA (Open System Testing Architecture)
Open Testware Reviews, March 2003

Bug Report: But It’s a Feature!
STQE Magazine, March/April 2003

Defect Tracking Tools Survey
Open Testware Reviews, February 2003

Review: ALLPAIRS Test Case Generation Tool
Open Testware Reviews, February 2003

Two Years at Tejas
Tejas Software Consulting Newsletter, February/March 2003, v3 #1

Yes Virginia, We Do Want Our Software to Work
Tejas Software Consulting Newsletter, December 2002/January 2003, v2 #6

Mini-Review – How to Break Software
Tejas Software Consulting Newsletter, December 2002/January 2003, v2 #6

Strawberry Jam and Self-Esteem: A Review of More Secrets of Consulting
Tejas Software Consulting Newsletter, October/November 2002, v2 #5

The Dark Underbelly of Certification
Tejas Software Consulting Newsletter, August/September 2002, v2 #4

A Survey of Freeware Test Tools (zipped pdf slides, 420K)
Quality Week 2002 quickstart tutorial

The Making of an Open Source Stress Test Tool (paper)
slides in zipped pdf format
Quality Week 2002 track session

Event-Driven Scripting (html slides)
Presented at the July 12, 2001 meeting of the DFW Unix Users Group.

What Flavor is Your Freeware?
Tejas Software Consulting Newsletter, June/July 2002, v2 #3

Test-First Maintenance: A Diary
Dallas/Fort Worth Unix Users Group newsletter, June 2002

A Lesson in Scripting (pdf)
STQE magazine, Mar/Apr 2002

Book Review: Mastering Regular Expressions
Dallas/Fort Worth Unix Users Group newsletter, March 2002

A Year at Tejas
Tejas Software Consulting Newsletter, February/March 2002, v2 #1

A Bug Tracking Story (pdf)
Slides from my presentation at the ASEE Software Engineering Process Improvement Workshop, 2002

Use Your Resources
Tejas Software Consulting Newsletter, December 2001, v1 #9

Boom! Celebrating a Successful Test
Tejas Software Consulting Newsletter, October 2001, v1 #8

“Reference Point: Testing Applications on the Web” (book review)
STQE magazine, November/December 2001

Book review – The Art of Software Testing
Tejas Software Consulting Newsletter, September 2001, v1 #7

Scripts on My Tool Belt (PowerPoint slides)
Fall 2001 Software Test Automation Conference.

Tools, Tools, Everywhere, but How Do I Choose?
Tejas Software Consulting Newsletter, August 2001, v1 #6, and also a letter from the Technical Editor on StickyMinds.com.

What’s on Your Tool Belt?
Tejas Software Consulting Newsletter, July 2001, v1 #5

Software Quality Notes: Test Tools for Free
Dallas/Fort Worth Unix Users Group newsletter, June 2001

Book Review: Load Testing for eConfidence
Tejas Software Consulting Newsletter, v1 #4, June 2001

Watts Humphrey on Teams
Tejas Software Consulting Newsletter, May 2001, v1 #3

Performance Testing Terms – the Big Picture
Tejas Software Consulting Newsletter, April 2001, v1 #2

Setting up a Mailing List with Subscribe Me Lite
Dallas/Fort Worth Unix Users Group newsletter, April 2001

Consumer protection efforts threatened by UCITA
Dallas-Fort Worth TechBiz, April 30, 2001.

Software Quality Notes — A Return to Fundamentals in the 00’s
Dallas/Fort Worth Unix Users Group newsletter, March 2001, reprinted in the Tejas Software Consulting Newsletter, March 2001, v1 #1

My observations on UCITA in Texas
Dallas/Fort Worth Unix Users Group newsletter, February 2001

Seamless Risk Management from Project to CEO (PowerPoint slides)
February 12, 2001 meeting of the IEEE Dallas Section Consultants Network.

Developing Your Professional Network
The Career Development column in the January/February 2001 issue of Software Testing and Quality Engineering magazine. References from the article are available on the STQE web site.

Position Statement: Is Software Reliability an Oxymoron?
Panel Session at the Technology Business Council Software Roundtable, Richardson Chamber of Commerce, October 26, 2000

Book Review: The Cathedral & the Bazaar
Dallas/Fort Worth Unix Users Group newsletter, April 2000

Asynchronous Improvement: a cst-improve experience report
Presented at the ASEE Annual Software Engineering Process Improvement Workshop, February 19, 2000

Book Review: Open Sources
Dallas/Fort Worth Unix Users Group newsletter, October 1999

Getting Published, or, Help Bring Software Engineering out of the Dark Ages and Help Your Career Too
The Career Development column in the July/August 1999 (Volume 1, Issue 4) Software Testing and Quality Engineering magazine.

Book Review: Managing Mailing Lists
Dallas/Fort Worth Unix Users Group newsletter, October 1998

The PIT Crew: A Grass-Roots Process Improvement Effort
Presented at the Software Test, Analysis, and Review conference, May 1998.

Software Defect Isolation
Co-authored with Prathibha Tammana. Presented at the High-Performance Computing Users Group, March 1998, and InterWorks, April 1998.

Integrating Perl 5 Into Your Software Development Process
Co-authored with Orion Auld. Presented at the High-Performance Computing Users Group, March 1998, and InterWorks, April 1998.

The Jargon of J. Random Hacker
Dallas/Fort Worth Unix Users Group newsletter, November 1997

Usenetters Leaving the Homeland
Dallas/Fort Worth Unix Users Group newsletter, July 1997

Book Review – The FAQ Manual of Style
Dallas/Fort Worth Unix Users Group newsletter, June 1997

Experience with OS Reliability Testing on the Exemplar System: How we built the CHO test from recycled materials (slides)
Presented at Quality Week ’97 and the May 16, 2000 meeting of the IEEE Reliability Society, Dallas Chapter.

The Scourge of Email Spam
Dallas/Fort Worth Unix Users Group newsletter, January 1997

What Color is Your Surfboard?
Dallas/Fort Worth Unix Users Group newsletter, December 1996

Hunting for Unix Knowledge on the Internet
Dallas/Fort Worth Unix Users Group newsletter, November 1996

Tester’s Toolbox column: The Shell Game
Software QA magazine, pp. 27-29, Vol. 3 No. 4, 1996
Using Unix shell scripts to automate testing. Basic information about the available shells on Unix and other operating systems.

Tester’s Toolbox column: Toward A Standard Test Harness
Software QA magazine, pp. 26-27, Vol. 3 No. 2, 1996
The TET test harness and where it fits into the picture.

Tester’s Toolbox column: Testing Interactive Programs
Software QA magazine, pp. 29-31, Vol. 3 No. 1, 1996
A concrete example of using expect to automate a test of a stubborn interactive program.

Tester’s Toolbox column: Using Perl Scripts
Software QA magazine, pp. 12-14, Vol. 2. No. 3, 1995
The advantages of using the perl programming language in a test environment, help in deciding whether to use perl and which version to use.

Apple Kaleidoscope, Compute! Magazine, pp. 111-112, issue 91, Vol. 9, No. 12, December 1987.
I was interviewed about this article for “The Software Update” episode of the Codebreaker podcast, released December 2, 2015.

It’s a Wonderful Career

24 Monday Dec 2018

Posted by Danny R. Faught in technology, testing

≈ 4 Comments

As I sit here listening to Christmas music, I’m giving myself the gift of extra time to write. I want to respond to something Paul Maxwell-Walters recently tweeted:

If there is such a thing as a Tester’s Mid-Life Crisis, I think I may be in the middle of it….

He followed it up with an interesting blog post–The Fear of Age and Irrelevancy – On the Tester’s Midlife Crisis (1)

Paul cited the director of a counseling center who said mid-life crises are likely to happen from age 37 through the 50s. Paul, approaching his 40s, worries that his crisis is here. As I see my 50s looming large on the horizon, I don’t know if my crisis has passed, is still coming, or will never come. I was actually around Paul’s age when my consulting business dried up and I ended my 16-year run in software testing. Four years later, though, I went back to my comfort zone, and had four consecutive short stints in various testing jobs.

That last testing job morphed into a development job. I’m very happy with my current employer for encouraging that path to unfold. Over the years, I have fervently resisted several opportunities to move into development, some of them very early in my career. I had latched onto my identity as a tester and staunch defender of the customer, and I wouldn’t let it go.

Paul wrote:

I have also come across people around my age and older who are greatly dissatisfied or apathetic with testing. They feel that they aren’t getting anywhere in their careers or are tired of the constant learning to stay relevant. They feel that they are being poorly treated or paid much less than their developer colleagues even though they all work in the same teams. They hate the low status of testing compared to other areas of software development. They regret not choosing other employers or doing something else earlier.

That’s surely the story of any tester’s career. Low status, low pay, slow growth. I embraced it, because I loved the work and loved what it stood for. The dissatisfaction seems to be more common now than it used to be, though. My advice, which you will know if you’ve been reading things on my blog like “The black box tester role may be fading away“, is: get out! Don’t transition to doing test automation. Become a developer, or a site reliability engineer, or a product owner, or an agile coach, or anything else that has more of a future. I think being a testing specialist is going to continue to get more depressing as the number of available testing jobs slowly continues to dwindle.

Because I’m writing this on Christmas Eve, I want to put an It’s a Wonderful Life spin on it. What if my testing career had never been born? In fact, what if the test specialist role had never been born?

Allow me to be your Angel 2nd Class and take you back to a time when developers talked about how to do testing. Literature about testing was directed toward developers. What if no one had worried about adding a role that had critical distance from the development process? What if developers had been willing to continue being generalists rather than delegating the study of testing practices to specialists, while shoving unit testing into a no-man’s land no one wanted to visit?

And what if I could have gotten over the absolute delight I got from destroying things and started creating things instead? I’m sure I’d be richer now. I’d have better design skills now. But alas, I’m not actually an Angel 2nd Class, and more to the point, I haven’t dug up enough historical context to really play out this thought experiment. But I’ll try to make a few observations. Within the larger community of developers, I might not have been able to carve out a space to start a successful independent consulting practice, which I dearly loved doing as a tester. Maybe I wouldn’t have developed the appreciation for software quality that I have now. Maybe I wouldn’t have adopted Extreme Programming concepts as readily as I have, which has now put me in a very good position process-wise, even if I’m having to catch up on my enterprise design and architecture skills.

How about not having any testers in the first place? Maybe the lack of critical distance would have actually caused major problems. Maybe the lack of a quality watchdog would have allowed more managers to execute their bad decisions. And maybe those managers would have been driven out of software management. Would the lack of a safety net have actually improved the state of software management by natural selection, and even allowed some companies with inept executives to die a necessary death? I think I’m hoping for too much here, and perhaps being too brutal on Christmas Eve.

It has been a wonderful career. It could have been a different career, but I’m just glad that it has taken me to where I am now. Paul, I wish you a successful outcome from your mid-career crisis. I realize that my advice to get out is much easier said than done.

Reflections on Boris Beizer

23 Tuesday Oct 2018

Posted by Danny R. Faught in testing

≈ 4 Comments

Another one of my mentors is gone – I got the news that Boris Beizer passed away on October 7, 2018. I’d like to pause to share some of my recollections of Boris. If you knew him, I would love to hear your stories, too.

I think my first introduction to him was reading his book Software Testing Techniques. It was published before the software testing specialist role was common. I was working as a software test engineer, and I was a bit confused by the book’s point of view. I discovered that Boris and most of the other authors who wrote about software testing at the time were participating in the comp.software.testing Usenet newsgroup. This was likely in 1994, give or take a year. I was amazed that I could interact with the people who “wrote the book” on software testing. So I joined in, and I learned a lot more than I would have just from their books. Somewhere along the way, Boris explained that Software Testing Techniques was written for programmers, and suddenly it made a lot more sense to me. When I wrote the frequently asked questions list for the newsgroup, I used quite a bit of material from Boris to flesh it out.

In 1995, I set up the swtest-discuss email list, which Mark Wiley and I conceived as a place to discuss operating system testing with a few colleagues we knew. The list grew to 500 subscribers and the topic area greatly expanded. Some people liked how we could enforce a better signal-to-noise ratio than what we had on comp.software.testing. Boris participated on the list. But some people felt that his tone was too abrasive. I’ve forgotten the details of the social dynamics that were in play so long ago. Some people moved on to other forums where Boris wasn’t invited. I realize I can’t make everyone happy. And Boris clearly didn’t care to.

My participation on Usenet got the attention of Dr. Edward Miller, the conference chair for the Quality Week conference. He invited me to join the conference’s advisory board that chose the papers that would be included. I was flabbergasted. I was still practically a kid. But Dr. Miller was certain he wanted me on the board. So I accepted. I joined a distinguished group of industry experts and academics, including Boris Beizer, who was a prominent industry consultant and also still acted like an academic, having earned one of the first PhDs ever granted in computer science.

I traveled to the Quality Week conference in San Francisco, held at the Sheraton Palace. I remember going to the dinner that the advisory board was invited to during the conference each year as a thank-you for our efforts. I wasn’t sure how I was going to get to the restaurant as I stood on the curb in front of the hotel with Boris and other board members, many of them smartly dressed, and me in my business casual. Then Boris hailed a limo. What? I didn’t know then that you could hail a limo, but that’s how several of us got to the restaurant. Edward and Boris and the rest accepted me as one of their own, despite my inexperience and casual mode of dress.

Some of the specific things I remember from Boris include the Pesticide Paradox, which taught us that test suites lose their effectiveness over time. His software bug taxonomy inspired many discussions, and I even helped him research the origin of the word “bug.” He taught me that if I can model any aspect of a program using a graph, I can use that graph to guide my testing. And not long ago, at a talk I was giving, someone in the audience reminded me of the fabulous poem “The Running of Three-Oh-Three.” Boris published it at the very beginning of Software Testing Techniques, “with apologies to Robert W. Service.” It remains the best poem about software testing that I’ve seen. I’ve only now bothered to figure out the link to Robert Service; it seems that Boris’ inspiration was Service’s poem “The Cremation of Sam McGee,” published in 1907.
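That graph lesson is concrete enough to sketch in code. What follows is a minimal illustration of the idea, not anything from Boris’ books: the state names, the `edge_coverage_paths` helper, and the path-building strategy are all my own invention. It models a toy program as a directed graph and derives test paths that together exercise every transition at least once:

```python
from collections import deque

def shortest_path(graph, src, dst):
    """BFS for a node path from src to dst (inclusive), or None if unreachable."""
    if src == dst:
        return [src]
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        for succ in graph.get(node, []):
            if succ not in prev:
                prev[succ] = node
                if succ == dst:
                    # Walk the predecessor chain back to src, then reverse it.
                    path = [succ]
                    while prev[path[-1]] is not None:
                        path.append(prev[path[-1]])
                    return path[::-1]
                queue.append(succ)
    return None

def edge_coverage_paths(graph, start):
    """Return test paths from `start` that together traverse every edge."""
    uncovered = {(u, v) for u, succs in graph.items() for v in succs}
    paths = []
    while uncovered:
        u, v = min(uncovered)  # deterministic choice of the next target edge
        prefix = shortest_path(graph, start, u)
        if prefix is None:
            raise ValueError(f"edge ({u}, {v}) is unreachable from {start}")
        path = prefix + [v]
        # Every edge the path happens to walk counts as covered.
        for a, b in zip(path, path[1:]):
            uncovered.discard((a, b))
        paths.append(path)
    return paths

# A toy model: states of a media player and its allowed transitions.
player = {
    "idle": ["running"],
    "running": ["paused", "done"],
    "paused": ["running"],
}
for p in edge_coverage_paths(player, "idle"):
    print(" -> ".join(p))
```

Each printed path is a scenario to drive through the system under test. When the model changes, regenerating the paths refreshes the suite, which is one modest countermeasure to the Pesticide Paradox.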

Boris must have been in high demand. He told me at one point that he sold his services in 1-week blocks for US$20,000. Any shorter time than that wasn’t worth his attention. He told me later that he had enough “f-you money” to be very selective about which clients he took. He is credited with changing the industry in ways I don’t even understand, because the transformation was well underway when I joined the scene. With his brash nature, he made enemies along the way. But I didn’t like to choose sides. I have learned both from Boris and many of the people who steered clear of him.

I am especially proud of the inscription that Boris wrote in my copy of his book Black Box Testing:

Boris Beizer inscription

However, after I read the book, I had to report to him that I really didn’t like it. He explained that the publisher had assigned him an inexperienced editor who made a wreck of the book. I sure learned a lesson about dealing with publishers.

I found out at some point that Boris had written two fiction books under the pseudonym Ethan I. Shedley. They were both out of print, but I found a used copy of The Medusa Conspiracy. I started reading the book but didn’t finish it. I probably don’t have the generational context to be able to appreciate it.

Ever since Boris retired some time ago, I’ve wondered if we would ever hear about him again. Last February, I felt an urge to check on him. I no longer had a working email address for him (he seemed to change his email account regularly), but his phone number was easy to find in his Usenet signature. Dialing a phone is a quaint thing nowadays, but I was determined. Sure enough, someone picked up the phone, and when she asked who was calling, I hastily had to summarize who Boris was to me. She summoned him to the phone and we had a nice talk. I mentioned that I’m writing a biography, and as soon as it came out of my mouth, I felt that awkward sensation that I’ve felt a few times before, that I was talking to someone who may merit a biography of their own, but yet they hadn’t made the cut. Boris mentioned his last book, “Software Quality Reflections.” I still didn’t have a copy (it may have been an e-book), and I think the only way to get one is to get it straight from him. I sent him an email to his new email address to request it as he asked me to do, but I never got an answer.

For more about Dr. Beizer, see the interview in the May 13, 1985 issue of Computerworld. This was before he started his consulting practice, and there’s a great picture of him. You’ll also find his resume here. Other remembrances have been posted by Jerry Durant, Simon Mills, Bob Binder, and Rex Black. Here is his obituary.

We’ve come full circle, with Boris ushering in the age of the testing specialist, and now as he makes his exit, testing efforts are shifting right back to the developers he originally addressed. I think his goals are well-stated in the dedication that he wrote in Software Testing Techniques. I’ll let him have the last word–

Dedicated to several unfortunate, very bad software projects for which I was privileged to act as a consultant (albeit briefly). They provided lessons on the difficulties this book is intended to circumvent and led to the realization that this book is needed. Their failure could have been averted—requiescat in pace.

Wikipedia: overcoming the difficulties in joining an online community

21 Wednesday Feb 2018

Posted by Danny R. Faught in technology, testing

≈ 1 Comment

I have given myself a challenge that I’m enjoying right now: immersing myself more deeply in the Wikipedia community. The culture that has developed around the people who add to and edit Wikipedia bears some resemblance to the culture we see among users of services like Facebook, Twitter, and Slack, but it’s much more complex. I aim with this article to convey a sense of the richness an online community can develop, and the frustration an outsider can feel, much as with a physical community. Maybe I’ll convince you to be a Wikipedia editor too.

First I’ll mention another online community that encourages its members to make contributions that everyone can benefit from. I made an attempt to become a productive contributor on Stack Overflow, a web site for asking and answering questions about computer programming.

Stack Overflow uses a reputation system, where various contributions you make will increase your reputation score. Additional features become available as your reputation grows. I thought that a good reputation on Stack Overflow would be something I could add to my resume. I answered a few questions, and was especially able to build reputation when I answered questions that no one else had. But I had trouble finding questions that hadn’t already been thoroughly answered within minutes of being posted. And I was chastised a few times for not strictly following the rules when I posted answers, which was disheartening when I had put effort into answering the question. I understood that a community should have rules, but I lost interest before I learned enough of them to be productive on the platform. I went back to only reading the content on the site, which, like many programmers, I do often.

I created my Wikipedia account way back in 2003. According to the contribution log, which shows almost every detail from the beginning, this must have been because I wanted to add a new article about load testing. I had made some edits to other pages without an account, but I needed to have an account to create a new article. That article is still there today, and I’m happy to see that after hundreds of edits, some of my original phrasing is still there.

Wikipedia newbies would be wise to hold off on creating new articles, however. I have added a total of seven articles. Of those, two have been converted to redirects to other articles with a broader scope, and one was deleted. I created an article about Brian Marick in 2007. Amazingly, it lasted until 2016 before someone claimed that he was not notable and eventually got it deleted. Recently, someone created an article about Janet Gregory, and within 24 hours, someone started a proposal to delete it. It was deleted 8 days later. For many topics, perhaps especially for articles about people, it’s quite difficult to prove that they meet Wikipedia’s notability guidelines. It must be sad when someone sees that they’ve been declared non-notable. I will probably not be adding new articles any time soon.

My long tenure on Wikipedia has caused others to assume that I’m well-versed in the rules of the road, but with my off-and-on interest in contributing, I’m just beginning to absorb the elaborate set of rules and conventions that editors are subjected to. In response to an edit I made to the article on Jerry Weinberg that was not up to snuff, one editor told me “Surely you know by now that ‘he told me so himself’ is not considered to be adequate sourcing for anything here…” Unlike a traditional encyclopedia with articles written by trusted experts, everything on Wikipedia is expected to be backed up by published independent sources.

I find it rewarding to make edits that improve the content on Wikipedia. Doing the research to justify the edits helps me to build my knowledge. I can refer to the article later when I want to refresh my knowledge of a subject, and so can anyone else. But the benefits are greatly reduced if the changes aren’t accepted by the community. In the case of my edits on the Jerry Weinberg article, I swallowed my ego and asked how I could change my approach. This led to a healthy discussion, and I was successful when I used a different approach. It’s often not easy to engage in this sort of discussion when I’m still smarting from having someone erase my work.

I recently decided to get more involved with Wikipedia because of efforts by Noah Sussman and Alan Page. Noah put a lot of effort into improving the Wikipedia article on software testing. Then Walter Görlitz, another volunteer Wikipedia editor, removed a big edit that Noah made to the article, an action that Wikipedia calls a “revert.” A heated discussion ensued, and little progress was made on overhauling the article like Noah wanted to. Alan issued a call for help to figure out how to effectively improve the article (A call to action: let’s fix the Wikipedia page on software testing), and I and several others joined a discussion on a chat group outside of Wikipedia to strategize.

I decided to take an agile approach: make small changes and watch how the broader Wikipedia community responds. A few of us have also branched out and looked at the many other articles related to software testing and found that, as a group, they aren’t very well coordinated or well written. We’ve made numerous small changes to these articles with a great deal of success, though we haven’t made substantial progress on the original goal of completely revamping the software testing article. I’m starting to make somewhat larger edits now, and I hope others do too.

Instead of wondering how Walter would react to my edits, I decided to engage with him directly, so I invited him to open a dialogue. I’ve seen that he puts a great deal of effort into improving a large number of Wikipedia articles and reverting vandalism on them. Walter tells me that the small edits I’ve made successfully have improved my reputation with him, and he is now less likely to revert the changes I make. Unlike Stack Overflow, where reputation is tracked in one place, on Wikipedia your reputation is earned individually with each editor.

I’ve found that the best way to get consensus on Wikipedia is to use the “talk page” feature on the site itself, so everyone who is following the changes on the page has an opportunity to respond. In fact, the Wikipedia community prefers for discussions like this to take place on Wikipedia; otherwise some editors may get suspicious that one person is trying to recruit a cabal of “meatpuppets” to artificially amplify their influence. Our chat group is very loosely coordinated, and the edits we make are all individual decisions, so I’m not worried that we’ll raise this suspicion, especially now that we’re using the talk pages more.

The response to an edit can vary significantly based on who is watching for changes on the article. Many areas don’t happen to have anyone watching closely, so whether the edit is useful or not, it may stay around for years. If someone feels strongly about maintaining the quality of a particular article, you’ll be held to a higher level of scrutiny. I’ve also found that newly added information may be held to a higher standard than what is already in an article. Once I added some information to an article without including a citation to back it up, and the information was removed, despite the fact that most of the information already in the article was also not referenced. It was just easier for the editor who did it to revert a recent edit than to address the broader problem. You can avoid this if you scrutinize your own contributions very carefully.

If you’re afraid you’ll have difficulty getting edits on Wikipedia to stick, take this to heart, from the instructions to “Be Bold” with your edits:

Think about it this way: if you don’t find one of your edits being reverted now and then, perhaps you’re not being bold enough.

Now if you’ll excuse me, that load testing article has become a bit of a mess.

My thanks to Simon Morley and Walter Görlitz for helping me improve this article.
