Blog

Simple Factors for Risk Based Software Testing

By Rex Black

We work with a number of clients to help them implement risk based testing.  It's important to keep the process simple enough for broad-based participation by all stakeholders.  A major part of doing so is simplifying the process of assessing the level of risk associated with each risk item.

To do so, we recommend that stakeholders assess two factors for each risk item:

  • Likelihood.  Upon delivery for testing, how likely is the system to contain one or more bugs related to the risk item?
  • Impact.  If such bugs were not detected in testing and were delivered into production, how bad would the impact be?

Likelihood arises primarily from technical considerations, such as the programming languages used, the bandwidth of connections, and so forth.

Impact arises from business considerations, such as the financial loss the business will suffer, the number of users or customers affected, and so forth.
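
To make this concrete, here is a minimal sketch of how the two factors might be combined into a single risk score for prioritization.  The 1-to-5 scales, the sample risk items, and the simple likelihood-times-impact product are illustrative assumptions for the example, not a prescribed formula.

    # A minimal sketch: combine likelihood and impact into a risk score.
    # The 1-5 scales, sample items, and the product formula are
    # illustrative assumptions, not a prescribed method.

    risk_items = [
        # (risk item, likelihood 1-5, impact 1-5) -- hypothetical values
        ("Payment processing fails under load", 4, 5),
        ("Report footer misaligned in print view", 3, 1),
        ("User session lost after password change", 2, 4),
    ]

    def risk_score(likelihood, impact):
        """Higher scores mean test earlier and more thoroughly."""
        return likelihood * impact

    # List the items in descending order of risk.
    for name, likelihood, impact in sorted(
            risk_items, key=lambda item: risk_score(item[1], item[2]),
            reverse=True):
        print(f"{risk_score(likelihood, impact):>2}  {name}")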

We have found that project stakeholders can use these two simple factors to reach consensus on the level of risk for each risk item.  These two factors are also sufficient to achieve the benefits of risk based testing that I discussed in an earlier post.  Using more than two factors tends to make the process overly complicated, and it often causes project teams' attempts to implement risk based testing to fail.

— Published


Software Testing Strategies

By Rex Black

It's important for project teams to select the right mix of test strategies for their projects.  In our training, outsourcing, and consulting work, we typically see one or more of the following strategies applied:

  • Analytical strategies, such as risk-based testing and requirements-based testing;
  • Model-based strategies, such as performance testing based on statistical usage profiles;
  • Methodical strategies, such as checklists of important functional areas or typical bugs;
  • Process- or standard-compliant strategies, such as IEEE 829 documentation and agile testing approaches;
  • Dynamic or heuristic strategies, such as the use of software attacks or exploratory testing;
  • Consultative strategies, such as asking key project stakeholders about the critical quality characteristics;
  • Regression testing strategies, such as automated testing at the unit or graphical user interface levels.

It's important to remember that strategies may be combined: test managers should carefully select, tailor, and apply all of the strategies that they can use effectively and efficiently on a given project.

— Published


What Is Software Quality?

By Rex Black

Since software testing is an assessment of quality, this question is not a theoretical one.  I suggest using a definition from J. M. Juran, one of the quality gurus who helped Japan achieve its astounding progress in the last 60 years.  Juran wrote that "quality is fitness for use. Features [that] are decisive as to product performance and as to 'product satisfaction'... The word 'quality' also refers to freedom from deficiencies... [that] result in complaints, claims, returns, rework and other damage. Those collectively are forms of 'product dissatisfaction.'"  Since satisfaction revolves around the question of who is (or isn't) satisfied, it's clear that Juran is referring to the satisfaction of key stakeholders.

Okay, that's pretty straightforward.  So, how do we think about quality?  I suggest there are three ways to approach this question:

  • Outcomes: What outcomes do we enjoy if we deliver a quality product or service?  Clearly, we would have customer satisfaction, conformance to requirements, etc.
  • Attributes: What attributes must the product or service have to deliver quality? These would include functionality, performance, security, etc.
  • Means: What must we do to build those attributes into the product?  These activities would include good requirements, good design, good testing, etc.

So, if we understand what quality is, and how to think about it in these three ways, then we can approach the assessment of quality properly.

— Published


Four Simple Rules for Good Software Testing Metrics

By Rex Black

Here are four simple rules for good software testing metrics:

  1. Define a useful, pertinent, and concise set of quality and test metrics for a project.  Ask yourself, for each metric, "So what? Why should I care about this metric?"  If you can't answer that question, it's not a useful metric.
  2. Avoid too large a set of metrics.  For one thing, large collections of charts and tables create a lot of ongoing work for the test team or manager, even with automated support.  For another, a large set leads to "more data, less information," as the volume of (sometimes apparently inconsistent) metrics becomes confusing to participants.
  3. Ensure uniform, agreed interpretations of these metrics.  Before you start using the metrics, educate everyone who will see them about how to evaluate them, in order to minimize disputes and divergent opinions about measures of outcomes, analyses, and trends.
  4. Define metrics in terms of objectives and goals for a process or task, for components or systems, and for individuals or teams.  Instead of starting with the metric and looking for a use for it, start with clearly defined objectives, define effectiveness, efficiency, and elegance metrics for those objectives, and set goals for those metrics based on reasonable expectations.
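
As a small illustration of rule 4, here is a sketch that starts from an objective (catch most defects before release), defines one effectiveness metric for it, and sets a goal.  Defect detection percentage is a commonly used effectiveness metric; the specific counts and the 90% goal below are hypothetical.

    # Rule 4 in miniature: objective first, then metric, then goal.
    # The counts and the 90% goal are hypothetical.

    def defect_detection_percentage(found_in_test, found_in_production):
        """Effectiveness: share of total defects caught before release."""
        total = found_in_test + found_in_production
        return 100.0 * found_in_test / total if total else 100.0

    GOAL = 90.0  # hypothetical goal agreed with stakeholders

    ddp = defect_detection_percentage(found_in_test=180, found_in_production=15)
    status = "met" if ddp >= GOAL else "not met"
    print(f"DDP: {ddp:.1f}% (goal {GOAL}%) -> {status}")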

While simple to state, these rules are important.  My associates and I find that, almost every time there is a test results reporting problem in an organization, one or more of these rules is being violated.  Follow these rules for better software test metrics, and thus more effective software test management.

— Published


In Search of the Elusive Software Test End Date

By Rex Black

One of the great frustrations for testers, test managers, and other project team managers is the elusive test execution completion date.  It always seems to take longer than you'd think to get done with test execution, especially on larger projects. 

So, how do you predict when you'll be done executing the tests?  Part of the answer is knowing when you'll have run all the planned tests once.  This involves knowing three things:

  1. Total estimated test time (the sum of the estimated effort for all planned tests).
  2. Total person-hours of tester time available per week.
  3. The percentage of time per day spent executing tests by each tester (as opposed to being involved in other activities like meetings, updating test cases, etc.).

Dividing the total estimated test time by the effective test execution hours available per week gives the minimum time required to finish test execution.
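
Here is a minimal sketch of that calculation; all of the numbers are hypothetical.

    # Minimum test execution time from the three figures above.
    # All numbers are hypothetical.

    total_test_effort_hours = 400   # 1. sum of estimated effort for all planned tests
    tester_hours_per_week = 120     # 2. total person-hours of tester time per week
    execution_fraction = 0.5        # 3. share of tester time spent executing tests

    effective_hours_per_week = tester_hours_per_week * execution_fraction
    weeks = total_test_effort_hours / effective_hours_per_week
    print(f"Minimum test execution time: {weeks:.1f} weeks")  # 6.7 weeks here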

However, test execution can last longer than this, because the other part of the answer is when you’ll have found the important bugs and confirmed those bugs to be fixed.  This involves using historical data (or extremely good guesses) to determine four things:

  1. The total number of bugs you'll find.
  2. The bug find rate at various stages of test execution.
  3. The bug fix rate at various stages of test execution.
  4. The average bug closure period (i.e., the time from initial discovery to final resolution).

Obviously, solid historical data from similar past projects really helps with this kind of estimation.  Our clients who have such data, along with formal defect removal models, can get quite accurate, often predicting the end date of test execution to within plus or minus 10%, even on test execution efforts that last over six months.
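
As a rough sketch of such a projection, the following loop plays out weekly find and fix rates until all expected bugs are found and confirmed fixed.  The totals, rates, and closure lag are invented for illustration, in the spirit of the spreadsheet mentioned below.

    # Rough sketch of projecting the bug find-fix tail.
    # All rates and totals are hypothetical.

    total_bugs_expected = 150                          # 1. total bugs you'll find
    weekly_find_rates = [40, 35, 25, 20, 15, 10, 5]    # 2. tapering find rate
    weekly_fix_rate = 30                               # 3. development fix capacity
    closure_weeks = 1                                  # 4. average confirmation lag

    found = fixed = week = 0
    while fixed < total_bugs_expected and week < 52:   # 52-week safety cap
        rate = weekly_find_rates[week] if week < len(weekly_find_rates) else 0
        week += 1
        found = min(total_bugs_expected, found + rate)
        fixed = min(found, fixed + weekly_fix_rate)

    print(f"All bugs found and confirmed fixed by week {week + closure_weeks}")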

For a simple spreadsheet that can serve as a starting point for predicting bug find-fix duration, you can take a look at this one, from the RBCS Advanced Library.

— Published


Risk Based Testing: The Elevator Pitch

By Rex Black

As some of you might know, I'm a big proponent of risk based testing.  (For example, see the podcasts and videos on the RBCS Digital Library here.)  In fact, a major RBCS client--I can't mention their name but the odds are good that you own one or more of their products--just told us that an entire division of their enormous company is adopting risk based testing, based on their understanding of the technique from our Advanced Test Manager course.

When test professionals first learn about risk based testing, one important question that often comes up is, "How do I convince skeptical testing stakeholders (outside of the test team) that risk based testing of our software is smart?"  You can give them a whole lecture in response to this question, but long answers tend to produce a severe case of MEGO ("my eyes glazed over") in non-test people. 

In business, people talk about the "elevator pitch."  If you haven't heard this phrase, here's what it means: You have a powerful executive in an elevator with you.  She's getting off in about 10 floors.  You have just a few seconds to convey to this powerful person some important piece of information.  Start talking.

So, if you find yourself in an elevator, a conference room, or an office with an influential testing stakeholder, and you want to convince them to support your efforts to implement risk based testing, here's the elevator pitch:

  • Risk based testing runs tests in risk order, which gives the highest likelihood of discovering the most important defects early (“find the scary stuff first”).
  • Risk based testing allocates test effort based on risk, which is the most efficient way to minimize the residual quality risk upon release (“pick the right tests out of the infinite cloud of possible tests”).
  • Risk based testing measures test results based on risk, which allows the organization to know the residual level of quality risk during test execution, and to make smart release decisions (“release when risk of delay balances risk of dissatisfaction”).
  • Risk based testing allows, if the schedule requires, dropping tests in reverse risk order, which reduces the test execution period with the least possible increase in quality risk (“give up tests you worry about the least”).

All of these benefits allow the test team to operate more efficiently and in a targeted fashion, especially in time-constrained and/or resource-constrained situations.
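
To see the first and last of those points in code form, here is a minimal sketch that runs tests in descending risk order and, when the time budget runs out, drops the lowest-risk tests first.  The tests, scores, and budget are hypothetical.

    # Minimal sketch: run tests in risk order; drop in reverse risk order
    # when time runs short. Tests, scores, and budget are hypothetical.

    tests = [
        # (test name, risk score, estimated hours)
        ("Login lockout after repeated failures", 20, 3),
        ("Monthly statement totals", 16, 5),
        ("Profile photo upload", 6, 2),
        ("Help page links", 2, 1),
    ]

    budget_hours = 8

    planned, dropped, spent = [], [], 0
    for name, score, hours in sorted(tests, key=lambda t: t[1], reverse=True):
        if spent + hours <= budget_hours:
            planned.append(name)
            spent += hours
        else:
            dropped.append(name)

    print("Run (riskiest first):", planned)
    print("Dropped (least risky):", dropped)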

— Published


Top Ten Factors that Increase Cost and/or Duration of Software Testing

By Rex Black

Are you a test manager estimating a new software or systems test project?  Here's a list of ten factors that will make the testing cost more, take longer, or both. 

  1. Complexity (process, project, technology, organization, test environment, etc.).
  2. Lots of test, quality, or project stakeholders.
  3. Many subteams, especially geographically separated ones.
  4. Need to ramp up, train, and orient a growing team.
  5. Need to assimilate or develop new testing tools, techniques, and technologies.
  6. Custom hardware.
  7. Need to develop new test systems, especially automated testware.
  8. Need to develop highly detailed, unambiguous test cases (a.k.a. scripts and procedures).
  9. Tricky timing of component arrival, especially for integration testing and test development.
  10. Fragile test data (e.g., time-sensitive).

Don't forget about these factors when you do your estimate.  Just one of these factors can cause some real pain and suffering when the consequences arise.

— Published


Welcome to RBCS' Blog on Software Testing

By Rex Black

Hi all--

After years of hesitation, RBCS is finally initiating a blog on software testing.  We've hesitated in large part because I've always joked--and only half in jest--that the word "blog" seems to rhyme with "blab" given most blog content. 

However, as the cliché goes, better to light a candle than to curse the darkness.  In this blog, I'm going to focus on sharing immediately useful ideas about software testing, rather than merely opinions, bloviations, or jeremiads.  I'm also going to talk about what you tell me you want to hear about, so feel free to send me requests.

In the next couple of days, I'll start with a series of blog postings on some key management concepts related to software testing.  I'll be happy to see comments on those, and we can use the postings as the start of a discussion of each of these concepts.

Regards,
Rex Black
President, RBCS, Inc.

— Published



Copyright © 2017 Rex Black Consulting Services.
All Rights Reserved.