Advice on Requirements

The IEEE has studied this topic from all angles for decades. It’s worthwhile to relay some of that to you so your “Dev Wanted” proposal comes to a mutually beneficial outcome.

For those who want to read more about this, just google “IEEE, SRS requirements specification”. Or ping me directly; I can address any questions you might have involving Systems Engineering (which is what this subject falls under).

Requirements First

First, start with a list of requirements. I don’t touch a software project without writing (or being given) requirements.

They don’t have to be gold-plated requirements (meaning: held back until “perfect”, since there is no such thing as “perfect requirements”. Software changes, it evolves. Software is squishy.)

Still, I demand requirements up front (either with my help in writing them, or my help in analyzing them).

It doesn’t really matter if you adopt a SCRUM model, Spiral Development, Waterfall, Agile, and so on.

The key is to find a process that works. Some success can be had with SCRUM. Some success can be had with a lighter version of it I call “Spec a little, Build a little, Test a little, repeat”.

In that model, you write a few requirements, you build them, you test them. Add more requirements. Repeat.

What is a requirement?

A requirement is the following:

  • Requirements are uniquely identified. That means they have an ID. If the text of the requirement changes, the ID changes; never re-use requirement IDs. If only punctuation changes, or if a small edit does not alter the functionality in any meaningful way, then the author and user can agree to leave the ID intact. Otherwise, any change to the text implies a new requirement ID, aka RID.
  • One sentence. A requirement is only one sentence. It can be a long sentence, it can have lists, but it’s only one sentence.
  • A requirement contains the word shall.
  • A requirement never defines HOW something is done. It defines WHAT is done.
  • A requirement never uses the words and, or, not, should, would, could, may, might, must, “etc”, or support. Here’s why:
    • and indicates two or more things to do. Just write two or more requirements. Never use and.
    • or is worse. Do this or that? Which one fails and which one passes? Boolean logic says true or false is still true, so it cannot be tested. or is bad, un-testable language in a requirement.
    • not is the grand-daddy of all things worst about requirements. “The X shall not do something”. OK, to test this I hit it with a sledgehammer. There, it ain’t going to do something. Did I test it? Yep. Did it pass? Yep. Is that what we intended? Nope. not (and negative words like it) cannot be tested. Back to Discrete Math: “if p then q”. If p is false… you know the rest of the story.
    • could, would, should. Weasel words. Hmm I “could” update the database, or not. I “would” update the database, or not. I “should” update the database, or not. See how this goes? It’s not a requirement. It’s a suggestion. We cannot test suggestions.
    • may, might, must. These are also suggestions. must is not as strong as shall. shall is the word you are using.
    • etc If a requirement ever says etc, the requirement is not finished. It has to be complete, concise, and correct, and be mutually understood. My idea of etc and your idea of etc are different. If they are different, then the intents are different, and this leads to bad things in test. If a requirement cannot be tested, how can money change hands after the development is done?
    • support can come in many forms. Good thoughts, a greeting card, a donation, you get the idea. support is not testable.
  • A requirement defines one thing, one idea, one concept. If it spans multiple ideas and concepts, make more requirements.
  • A requirement is testable. If no test can be devised to measure if the functionality works as intended, it isn’t a requirement.
  • Requirements have intent that is agreed upon among all parties. The intent envisioned by the author of the requirement matches the intent understood by the developer of the capability.

Useful phrases and their meaning

When you use the phrase “The X software shall provide the capability”, it usually means that the software called “X” provides the functional capability through a command/tool, through an API only, or through both.

How does that look?

If the requirement was:

“The software shall provide the capability to update configuration file records based on key-value pairs”

Then the software itself (the Design Solution) may be a simple tool, like:

java -jar MyTool "key" "value"

And the result is that the data element identified by “key” is updated to the new “value”.

On the other hand, if the intent of the requirement was an API, then the Software provides a .jar file with an interface like:

MyTool.update("key", "value")

and subsequently, the user of the software leverages the API, not some tool. Or, the intent of the requirement is both the tool and the API.
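One possible Design Solution that satisfies both intents is a single class exposing the API method plus a thin main() wrapper for the command-line tool. This is only a sketch: the class and method names mirror the hypothetical MyTool above, and an in-memory map stands in for the configuration file records.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one class satisfies both the API intent and the tool intent.
public class MyTool {
    // Stand-in for the configuration file records.
    private static final Map<String, String> records = new HashMap<>();

    // API entry point: callers update a record directly.
    public static void update(String key, String value) {
        records.put(key, value);
    }

    public static String get(String key) {
        return records.get(key);
    }

    // Tool entry point: `java -jar MyTool "key" "value"` lands here.
    public static void main(String[] args) {
        if (args.length != 2) {
            System.err.println("usage: java -jar MyTool <key> <value>");
            return;
        }
        update(args[0], args[1]);
        System.out.println(args[0] + " updated");
    }
}
```

Either entry point exercises the same update logic, which keeps the tool and the API from drifting apart.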

The author of the requirement can make that clear with a simple requirement:

“The Software shall provide an API for capabilities identified by the requirements written as ‘shall provide the capability’.”

Typically, when an API is not desired, then the phrase to use is simply:

“The X software shall update the database records when a player logs in”

Here, the “provide the capability” phrase is missing. Usually this means we don’t expect an API; the software just does what it says it will do. It goes back to the initial design of the requirements. Is your client expecting an API, functional behavior, or both? Work it out beforehand so there is no confusion later.

Dealing with multiple things.

Sometimes a requirement deals with multiple things. How do you write it and avoid using or, and, and the dreaded phrase etc?

Like so:

“The Software shall provide the capability to persist data in the following external data storage mechanisms:
a) MySQL
b) H2
c) MongoDB
d) Flat-file”

One sentence.
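A list-style requirement like that maps naturally onto one testable case per enumerated item. This sketch is hypothetical: the enum names come from the requirement’s list, and a map stands in for the real storage backends.

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch only: each item in the requirement's list becomes one testable case.
public class PersistenceSketch {
    // The four mechanisms enumerated in the requirement.
    public enum Store { MYSQL, H2, MONGODB, FLAT_FILE }

    // Hypothetical persist call; real backends are stood in by a map.
    private static final Map<Store, String> saved = new EnumMap<>(Store.class);

    public static boolean persist(Store store, String data) {
        saved.put(store, data);                 // pretend write
        return data.equals(saved.get(store));   // pretend read-back check
    }
}
```

Because the requirement enumerates its cases, the test can simply iterate the list; nothing is left to an “etc”.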

The dreaded “etc” phrase

Never use it. Etc cannot be tested. Hey Steve, what does the requirement do? It does “etc”. Oh, OK. Sounds good. You cannot ever test etc.

There is a workaround for things like that:

Use TBD

TBD means To Be Determined, as you know. During the negotiation of what the requirements do, the TBDs are boiled down to actual things. When the requirements are done, the TBDs are gone, replaced with actual words that mean something.

Use TBR

TBR means To Be Reviewed. This is as close as you can get to a placeholder in a requirement. Say, for example, the requirement reads:

“The Software shall provide the capability to disconnect a player after 900 (TBR-1) seconds of inactivity.”

Anything wrong with this requirement?

Yes, the word “inactivity”. Why? Because there’s no definition of what that means. Does being inactive mean not moving? Not eating? Not mining? Not walking? What?

The TBR-1 is like a mini-requirement. Notice it’s numbered; all TBRs are numbered with a unique ID. Further, there is still a guess at the timeout (900 seconds), but the value is marked TBR because it may need to change upon further review, hence TBR.

By the way, the better phrase would be:

“Unless the interactive behavior of the player is any of the following:
a) moving
b) AFK is false
c) Using a block via left-click
d) Using a block via right-click
e) Using a GUI
f) TBD
then the Software shall provide the capability to disconnect a player after a timeout of 900 (TBR-1) seconds”

Remove the cause for acting on the player from the core statement. The core idea is that the player is disconnected. The list enumerates the things that are tested, and the TBD gives the author and user a place to agree later on what the full list is.
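That enumerated activity list and the TBR-1 timeout could translate into code along these lines. This is a hypothetical sketch: the names are invented, the TBD item is omitted, and the list is read as “any one behavior makes the player interactive”.

```java
// Sketch: the requirement's activity list becomes explicit checks,
// and the TBR-1 value becomes one named constant that is easy to change on review.
public class InactivityPolicy {
    public static final long TIMEOUT_SECONDS = 900; // TBR-1 placeholder value

    // A player is "interactive" if any behavior in the requirement's list holds.
    public static boolean isInteractive(boolean moving, boolean afk,
                                        boolean usingBlockLeftClick,
                                        boolean usingBlockRightClick,
                                        boolean usingGui) {
        return moving || !afk || usingBlockLeftClick || usingBlockRightClick || usingGui;
    }

    // The core idea of the requirement: disconnect after the timeout, unless interactive.
    public static boolean shouldDisconnect(long idleSeconds, boolean interactive) {
        return !interactive && idleSeconds >= TIMEOUT_SECONDS;
    }
}
```

Notice that changing TBR-1 after review touches exactly one constant, which is the point of marking the value in the requirement.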

Last bits of advice on requirements

No matter how you write your requirements, just write them. Write down what is supposed to happen. Write down how the requirements are tested. Write down what the expectations are for each requirement.

If I explained any more, this would become an essay on Systems Engineering, which I won’t write.

Requirements First

Testing

After requirements are written, but before they are implemented, the test phase can start. How on earth can that happen, you ask?

Tests test requirements; they don’t test software, per se. Said another way, a test validates that the software does the thing that the functional requirement says the software does. So, technically, a test can be developed without the actual software. Only when the software is done, and the test procedure (designed from the requirement) is married to the actual produced software, can a result be found.

Look at what would happen if you wrote the test procedures AFTER the software was completed. First, the test writer would be tempted to tailor the test to the software and not to the requirement of the software. This means any error or misunderstanding of the requirement on the part of the developer is passed directly into the test. The test will probably say “PASS” and the developer will think all is fine. Yet the test was mated to the software and not the requirement, so there’s still (technically) no way to validate the requirement.

Write the test descriptions and procedures before the software is completed. Write them based on the requirements.
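Writing the test before the software can be sketched as a procedure coded against an interface that has no real implementation yet. Everything here is hypothetical: the interface, the names, and the stub standing in for the unfinished software.

```java
// Sketch: the test is written against the requirement (expressed as an interface),
// before any real implementation exists.
interface ConfigUpdater {
    void update(String key, String value);
    String get(String key);
}

public class RequirementTest {
    // Test procedure derived from the requirement
    // ("update configuration file records based on key-value pairs"),
    // not from any particular implementation.
    public static boolean verifyUpdateRequirement(ConfigUpdater software) {
        software.update("port", "25565");
        return "25565".equals(software.get("port"));
    }
}
```

When the real software arrives, it is plugged in where the stub was, and the procedure runs unchanged against it.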

Another problem caused by waiting to write tests is not being done, or not knowing when you’re done. This usually shows up as a test group/team just “thinking up hard ways to test the software” without any connection to the requirements. Because those tests are not tethered to requirements, the creative “what if” can extend out endlessly and very likely will not cover all the functional requirements laid down.

The result is a gap in test coverage. Some of the requirements won’t be tested, and yet the test team will beam with pride that they hit the software with every way they could think of to break it. They failed.

A few more words on Testing

There are different kinds of tests. White-box, black-box, grey-box come to mind. But the kinds of tests that functional requirements need are functional tests. When testing functionally, there are three main kinds:

  • The Demonstration Method
  • The Test Method
  • The Analysis Method

Here is how they differ:

Demonstration

To demonstrate something, you do it. I can watch a demonstration of a free-throw in basketball. I set up my chair, sit and wait, I say “shoot the ball”, and the player shoots the free-throw. I cannot time-shift this. If I look away, I cannot see the demonstration, and therefore the demonstration failed. Demonstration is witnessed, in real-time, as it happens.

A GUI requirement is prime for Demonstration. A lot of other requirements can be tested via the Demonstration method. The key is that it happens live, and unscripted, and it cannot be time-shifted.

Test

The Test method is very much like Demonstration with one small caveat. It can be time-shifted. I can write a test program to call API methods and then return later (after coffee) to inspect the results. That’s time-shifted. I can delay looking at the responses AFTER the test has concluded. I do not need to witness the test.

The other key difference between Test and Demonstration is that Test may use external equipment or software to enable the test. For example, a test SpongeForge server may be required to test the functionality. That’s considered external software. The SpongeForge mod isn’t being tested, but it’s a prerequisite to the test. More abstractly, Test is the method to use when external equipment is required for the test to work. MOST tests of functional requirements are done via the Test method.

The same basketball shot test can be tested in Test method by setting up a camera and computer (external equipment and software) and I can program the computer to activate the camera to signal the player when to shoot the ball (over and over). Then I can write software to review the shots, or even better put more external equipment to detect the ball going down the net, detect if the shot was made from the free-throw line, detect if the ball given to the player had the right weight and density, etc… And ALL of it time-shifted.
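The time-shifted character of the Test method can be sketched as two phases: a run phase that exercises a (hypothetical) API and records each outcome, and an inspection phase performed at any later time. All names here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a time-shifted test: results are recorded as the test runs,
// then inspected later. No live witness is required.
public class TimeShiftedTest {
    private final List<String> log = new ArrayList<>();

    // Phase 1: exercise the (hypothetical) API under test, recording each outcome.
    public void run() {
        for (int i = 0; i < 3; i++) {
            boolean ok = callApiUnderTest(i);
            log.add("call " + i + ": " + (ok ? "PASS" : "FAIL"));
        }
    }

    // Stand-in for the real API call being tested.
    private boolean callApiUnderTest(int input) {
        return input >= 0;
    }

    // Phase 2: performed later ("after coffee"), against the recorded log only.
    public List<String> inspectResults() {
        return log;
    }
}
```

The log is the artifact: whoever reviews it never needs to have watched the test execute.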

Analysis

This is much more difficult to do, but it’s required at times when a simple Test isn’t enough to validate the requirement. Let’s suppose the requirement was:

“The software shall provide the capability to limit player login to 10 (TBR-1) players per minute when:
a) Network bandwidth has passed 50% utilization
b) Memory usage has passed 50% utilization”
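As written, the functional part of that requirement could be sketched as a simple guard. This is hypothetical: the names are invented, the utilization figures are supplied by the caller, and the list is read here as either condition triggering the limit.

```java
// Hypothetical sketch of the login-limit requirement. The caller supplies
// current utilization figures; the guard enforces 10 (TBR-1) logins per minute.
public class LoginLimiter {
    public static final int MAX_LOGINS_PER_MINUTE = 10; // TBR-1 placeholder value

    // Returns true when a new login is allowed under the requirement.
    public static boolean allowLogin(int loginsThisMinute,
                                     double networkUtilization,
                                     double memoryUtilization) {
        boolean constrained = networkUtilization > 0.50 || memoryUtilization > 0.50;
        return !constrained || loginsThisMinute < MAX_LOGINS_PER_MINUTE;
    }
}
```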

Now, it could be tested if the test software causes network utilization and memory usage to be elevated; it could therefore be tested via the Test method. But it may require a bit more. The Analysis method means that, given the requirement, the test designer goes off and studies the model of player login, memory usage, and network usage and devises an abstract model of what could happen, on paper. They compute by hand all the factors involved and construct a working but hypothetical situation, on paper, of the situation in theory. Then, based on that theory of operation, they make a test AGAINST THE MODEL. They run the test against the model and record results. THEN they repeat the same test on the actual software developed from the requirement AND COMPARE results.

Does the model of what should happen mimic the actual behavior of the actual software? If not, why not? Results between the model results and the actual software-under-test results are COMPARED.

That is the Analysis method. Very few software requirements undergo that level of rigor. Things that deal with sensors, telemetry, physical phenomena, and the like are prime examples of what requires Analysis.

The Analysis method of the basketball shot test is to go study Newton’s laws of motion, the effects of air resistance, turbulence from rotating bodies, etc… and devise a model on paper of how much thrust (and what angle and direction) is required to get the ball into the net. Study chaos theory about the fluctuations that can occur as the ball rattles in the rim, etc… Devise a model of a human player with muscles and bones, and use 3D transformations to mimic the motion of a human putting the ball in motion as previously described, and so on. Then run the test on the model. Then run the same Test method with a human player and COMPARE body motions and results at the net. It’s possible. It’s pedantic. But it’s called the Analysis method for a reason.

Testability

Finally, a word about testability. Suppose the requirement was:

“The software shall provide the capability to compute the number pi”

What’s wrong with this requirement? A few things.

a) pi is an irrational number; its decimal expansion never ends and never repeats. Because software must complete in a definite amount of time, a function to “compute pi” exactly would run forever. That is not testable, because the test will never finish. It will never produce a number result, because it would take infinite time to find all of pi.

b) Computing is a design solution. Who says the design solution must compute anything? We know from Mathematics that pi is an irrational, non-repeating number, but we typically use approximations to some pre-defined precision. We sometimes just accept 3.14, or some other number of digits, as a suitable approximation. 8 digits? 10? Whatever the precision, the point is the number has a definite value, not an infinitely long one. Given that, why does it have to compute pi? It can be hard-coded. Hard-coding pi, computing pi, or guessing pi are ALL design solutions. Functional requirements don’t define HOW, only WHAT.

A better way to write it:

“The software shall provide the capability to retrieve an approximate value for pi to 8 (TBR-1) digits of precision.”

Then the user and customer can negotiate what TBR-1 should be and settle on a good number for the needs of the software. The “retrieve” verb doesn’t force the functionality to compute anything. It just says that getting the number pi is what the functional requirement does. It can get it from a constants table, compute it, or guess it. Exactly HOW the value is retrieved is not in the requirement; it is a design solution.
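A retrieval-style Design Solution for that rewritten requirement might look like this sketch. The digit count stands in for TBR-1, and the hard-coded constant is just one of several equally valid design solutions (computing it or looking it up in a table would satisfy the requirement just as well).

```java
// Sketch: "retrieve" leaves the design open; here the value is simply hard-coded.
public class PiProvider {
    // More digits than any plausible TBR-1 value will ask for.
    private static final String PI = "3.14159265358979323846";

    // Returns pi truncated to `digits` digits after the decimal point.
    public static String retrievePi(int digits) {
        return PI.substring(0, 2 + digits); // "3." plus the requested digits
    }
}
```

The test for the requirement checks the retrieved value against the agreed precision, without caring how the value was obtained.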

Requirements sound boring. They sound like overkill. But, as stated earlier, the IEEE has studied this problem, and the summary judgement is that bad requirements lead to bad software. Bad requirements lead to late software. It’s still possible to write bad software from good requirements, but suspicious and potentially buggy software is a guaranteed result of bad requirements.

Summary

  • Write requirements first
  • Agree on the intent before starting development
  • Spec a little, build a little, test a little. It will save time and money.
  • If a test cannot be devised for a requirement, the requirement is invalid, start over.
  • Know how to measure when you are done by tethering all requirements to tests so a test result can be obtained for each functional part of the requirements.
  • Agree up front if the language of requirements means an “API” or a capability without “API”.
  • Requirements are WHAT, not HOW. Design Solutions are “HOW”. Requirements drive Design Solutions.

Good luck.
