A Guide to EMC Test Software Validation

Complying with ISO 17025 Edition 2017, Section 7

Software has assimilated itself into almost every aspect of our lives. It resides in our homes, vehicles, phones, and workspaces. We find it in our televisions, speakers, light switches, and on and on. It is everywhere. Resistance to software's assimilation is futile. It makes our lives easier.

Perhaps not coincidentally, the negative consequences of software-related incidents have increased drastically in the past few years. One of the most widely publicized examples of software "gone bad" is the software contribution to the Boeing 737 Max malfunctions, which led to two fatal crashes in 2018 and 2019, a case in which the software's reported performance appears to have contributed directly to aircraft falling out of the sky. These and other incidents have not just gained the public's attention. They have also served as a catalyst for changes within the industry, such as the use of quality management systems (QMS) to demonstrate that software does what it is designed to do.

But concerns about software performance pre-date these recent incidents. Nearly twenty years ago, the National Institute of Standards and Technology (NIST) published a document titled "The Economic Impacts of Inadequate Infrastructure for Software Testing." According to Table ES-4 in that document, an inadequate software testing infrastructure cost the U.S. economy nearly $60 billion annually at that time.

In fact, I found a number of companies that track the cost of software incidents and their economic impact. One such company is Tricentis, a software testing company (https://www.tricentis.com). In its most recently available "Software Fail Watch" report, published in 2018, the company estimates lost revenue related to software failures in 2017 at about $1.7 trillion, certainly not a trivial amount!

Relevance

Our increased awareness of unintended software issues and the consequences of bad software design have prompted efforts to clarify software validation requirements, such as those found in Section 7.11 of the 2017 edition of ISO 17025, Testing and Calibration Laboratories. However, the addition of Section 7.11 to the standard does not represent a new requirement per se. There have always been requirements in ISO 17025 to validate test methods and procedures, as well as requirements to ensure that computer software meets testing requirements for accuracy, range, repeatability, etc. (Indeed, one would be unable to validate anything without including the test software within the process validation!) The 2017 edition of ISO 17025 simply increased the visibility of software validation requirements in the standard, in response to the growing problem of poor software performance.

But all of this leads to a single question: how do you know whether your test software is actually behaving the way you think it should? The objective of this article is to help the reader better understand how to develop evidence that test software is performing within its intended design.

Software Validation/Verification: Some Definitions

You prove your test software is performing within design parameters through the use of a validation/verification (V/V) process. A software V/V process is simply the gathering of data to build a reasonable body of evidence (confidence) that your test software is performing within expected parameters and documenting your results. This would typically include a manual calculation of the test software equations, recorded evidence that test parameters were set correctly, proven test cases, system checks, etc.
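
As an illustrative sketch only (ISO 17025 does not prescribe any particular format), the kind of evidence listed above could be captured in a simple record structure like the following. The field names and example values are my own, purely hypothetical choices.

```python
# One way (of many) to record a single piece of V/V evidence. Field names are
# illustrative only; ISO 17025 does not prescribe this structure.
from dataclasses import dataclass

@dataclass
class VVRecord:
    check_description: str   # e.g., manual recalculation of a software equation
    expected_value: float    # value from the hand calculation or the standard
    observed_value: float    # value reported by the test software
    tolerance: float         # allowed deviation from the expected value
    performed_by: str
    date: str

    def passed(self) -> bool:
        """A check passes when the observed value is within tolerance of the expected one."""
        return abs(self.observed_value - self.expected_value) <= self.tolerance

# Example entry: documenting a manual recalculation of a corrected-level result.
record = VVRecord("Manual recalculation of corrected level at 100 MHz",
                  expected_value=18.0, observed_value=18.2, tolerance=0.5,
                  performed_by="J. Engineer", date="2020-01-15")
print(record.passed())
```

A stack of records like this, whether kept in a spreadsheet, a database, or a paper logbook, is exactly the kind of documented evidence an auditor will ask to see.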

There is one caveat here. I’m making the assumption that your V/V process checks produced satisfactory results. If not, you’re responsible for documenting your process deficiencies, initiating corrective actions and following your established QMS procedures regarding noncompliance findings. The more robust your V/V process, the greater the degree of confidence you’ll have that your test software is behaving the way it should.

Software-based V/V processes can be as simple as a logbook recording a test case and its results, providing a source that you can reference during audits. The rigor of your V/V process could also be at the other end of the spectrum, in which you attempt to create every possible scenario and document the results. But, no matter how much you test your software, reaching 100% confidence is impractical, and may well be impossible to achieve.

My wife, Bobbi, is convinced there are people out there who will always find a way to break something. (Of course, she's not referring to me!) It may be that some folks just have a knack for finding errors, but where is the dividing line between too little and too much testing? That question is beyond the scope of this article, which is merely intended to show you where you can find the tools (sources) to create your own software V/V process. You get to decide where to draw the line.

Some V/V Process Resources

There is a host of material readily available on the internet, in commercial books, and from organizations to help guide your software V/V process from concept through final design. Much of the relevant material comes from outside the electromagnetic compatibility (EMC) field, but that doesn't matter much since the V/V process is essentially the same.

If you're thinking of the Deming Circle (or Cycle, or Wheel), which championed the "Plan-Do-Check-Act" approach, you're on the right path. However, I would add one more element, "observe," as in "Observe-Plan-Do-Check-Act" or OPDCA. You can find more information about the OPDCA model at Foresight University (http://www.foresightguide.com/shewhart-and-deming).

If you’re comfortable checking out this and other resources and proceeding on your own, you can stop reading this article. But let me provide my own brief guide to the V/V process.

A Brief Guide to Applying V/V Processes

The first V/V process consideration is the type of software you use. Is it a commercial off-the-shelf (COTS) product with limited or no ability to customize the test process, or is it a modified off-the-shelf (MOTS) program in which the test process is more open to modification? (Two examples of MOTS products are National Instruments' LabVIEW and ETS-Lindgren's TILE!.) Or is it custom-made software in which every aspect has been created to meet your exacting specifications? Your answer can directly affect the V/V process you choose.

Evaluating COTS Software Using the Black Box V/V Method

We will start with the easiest approach. If you have COTS software, then I recommend using the "black box" V/V method. If performed correctly, this method allows you to use the standard system checks you are already required to perform, which will serve to validate your hardware and setup as well as your software processes. You generate known good inputs, measure them with calibrated instruments, record your results, and then compare the recorded results with the expected standard requirements, applying the standard's specified tolerances for frequency, amplitude, etc.

To illustrate, let's use MIL-STD-461G radiated emissions test RE102, above 30 MHz, as an example. First, you replace the receive antenna in the system setup with a calibrated signal source. The inputs are test conditions, test limits, transducer correction factors, receiver measured data over frequency, and a known good signal from a calibrated source. RE102 requires the system check target amplitude to be the test limit minus six decibels (test limit – 6 dB). The actual calibrated signal source settings are a little different. The system check signal generator output target amplitude base equation is:

System Check Target Signal Generator Amplitude
   = (Test Limit – 6 dB) – Antenna Correction Factor

Your system variability (tolerance) is required to be within +/- 3 dB of the system check target. The test conditions depend on the frequency range being measured, which in turn depends on the antenna you are using. The base equation for the final or corrected level is:

Final (Corrected) Level
   = Receiver Recorded Value + Antenna Correction Factor
   + Signal Path Insertion Loss – External Preamplifier gain (if required)

The amplitude results of the system check should be within +/- 3 dB of the system check target which, as discussed earlier, is the test limit – 6 dB. The antenna correction factor effectively cancels, since you add the antenna correction back in through your corrected (final) level equation, ideally using the same calculation you used during the system check. The same calculation also applies to the ambient and equipment under test (EUT) frequency sweeps. This verifies not only the process but also the test calculations and software control.
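
To make the arithmetic concrete, here is a minimal sketch of the two equations and the tolerance check in Python. The function names, the 24 dBuV/m limit, the 15 dB antenna factor, and the other numbers are illustrative assumptions of mine, not values taken from MIL-STD-461G.

```python
# Minimal sketch of the RE102 system check arithmetic described above.
# All names and numeric values are illustrative, not quotations from MIL-STD-461G.

def system_check_target(test_limit_dbuv_m: float, antenna_factor_db: float) -> float:
    """Signal generator target: (test limit - 6 dB) - antenna correction factor."""
    return (test_limit_dbuv_m - 6.0) - antenna_factor_db

def corrected_level(receiver_dbuv: float, antenna_factor_db: float,
                    path_loss_db: float, preamp_gain_db: float = 0.0) -> float:
    """Final level = receiver reading + antenna factor + path insertion loss - preamp gain."""
    return receiver_dbuv + antenna_factor_db + path_loss_db - preamp_gain_db

def system_check_passes(corrected_dbuv_m: float, test_limit_dbuv_m: float,
                        tolerance_db: float = 3.0) -> bool:
    """The corrected system check result must be within +/- 3 dB of (test limit - 6 dB)."""
    return abs(corrected_dbuv_m - (test_limit_dbuv_m - 6.0)) <= tolerance_db

# Example with made-up numbers: 24 dBuV/m limit, 15 dB antenna factor, 2 dB path loss.
limit, af, loss = 24.0, 15.0, 2.0
target = system_check_target(limit, af)    # generator setting for the system check
final = corrected_level(1.8, af, loss)     # 1.8 dBuV is an assumed receiver reading
print(target, final, system_check_passes(final, limit))   # 3.0 18.8 True
```

Note that because the same antenna correction factor is subtracted when setting the generator and added back when correcting the reading, any error in that factor cancels in the system check, exactly as described above.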

Unfortunately, we are not finished. We completed the system check's target, frequency and amplitude V/V, but these did not cover test conditions. However, the test condition validation is much easier, and is simply a matter of recording the frequency sweep measurement test conditions and comparing them against the test standard. A simple photograph of the receiver during the sweep can be used to record the start frequency, stop frequency, resolution bandwidth, frequency step size, frequency dwell, sweep time and the detector used. The photograph can then be reviewed against the standard's test conditions, and you have now completed a black box V/V process for MIL-STD-461G, RE102.
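
If you prefer a checklist you can rerun, the same comparison can be scripted. In this sketch the "required" values are placeholders I made up, not conditions quoted from MIL-STD-461G; substitute the values from the applicable table of the standard.

```python
# Sketch of a test-condition check for the black box V/V record.
# The REQUIRED values are placeholders only; take the real ones from the standard.

REQUIRED = {
    "start_frequency_MHz": 30.0,        # placeholder
    "stop_frequency_MHz": 200.0,        # placeholder
    "resolution_bandwidth_kHz": 100.0,  # placeholder
    "detector": "peak",                 # placeholder
}

# Values transcribed from the photograph of the receiver taken during the sweep.
RECORDED = {
    "start_frequency_MHz": 30.0,
    "stop_frequency_MHz": 200.0,
    "resolution_bandwidth_kHz": 100.0,
    "detector": "peak",
}

def compare_conditions(required: dict, recorded: dict) -> list[str]:
    """Return a list of discrepancies between required and recorded settings."""
    return [
        f"{key}: required {req}, recorded {recorded.get(key)}"
        for key, req in required.items()
        if recorded.get(key) != req
    ]

discrepancies = compare_conditions(REQUIRED, RECORDED)
print("Test conditions verified" if not discrepancies else discrepancies)
```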

Evaluating MOTS Software Using the White Box V/V Method

The white box V/V method is best suited for MOTS and custom-created software. Although you could use the black box V/V method with custom software, I don't recommend it. The black box method could save time if everything goes according to plan (a green-light schedule), but it has the same disadvantage as the waterfall software design method: the feedback (test results) is delivered well downstream, and any necessary design modifications end up costing you more time and more money.

I highly recommend using a "check early and check often" philosophy for MOTS and custom software. The difference between the black box and white box methods is accessibility. With the black box method, you do not have access to the inner workings of the software; you simply monitor the results of its operation and report your findings. With the white box methodology, you have access to virtually all aspects of the software, and can exercise the test software's inner operation and verify its performance.

MOTS V/V requirements pertain to functions or routines you’ve created or modified. There will be a point at which you won’t be able to modify the software, since the software manufacturer is responsible for ensuring proper software operation and likely limits access to the software’s basic functions. Typically, this would include instrument drivers, basic EMC/EMI functions and maintenance actions.

You could use the black box V/V method for any MOTS functions you cannot change. Changes you make to the software should be verified and validated prior to release, remembering always that validation and verification is simply creating evidence that the software is adhering to your process and the applicable standard. The basic differences between the black box and white box methods are the level at which you are testing and the focus on the functions/routines you modified or created. With the white box method, you control the lower-level software functions and verify their performance directly. Remember that software V/V is itself a process.

Let me offer an example in which you create a limit selection routine within MOTS software. You would open the routine, operate the function, and verify the results. It takes little more effort than that. It sounds simple until you have a few hundred or more modifications to observe and validate; the complexity lies in the sheer volume of items you may need to verify.

You could take it one step further by creating test cases in which the user intentionally enters incorrect information to see how the software responds. Good software should provide error-handling routines. And don't forget that you have some of the same tools available that you do when applying the black box method; the standard's required system checks are useful tools to prove that the software is doing what it is supposed to do.
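
As a sketch of what such test cases might look like, here is a hypothetical limit selection routine and two pytest-style checks, one with a good input and one with a deliberately bad one. The routine, its names, and the limit values are all invented for illustration; they do not come from any standard or vendor product.

```python
# Hypothetical white box test cases for a user-created limit selection routine.
# select_limit() and the values in LIMITS_DBUV_M are invented for illustration.

LIMITS_DBUV_M = {"example_test_A": 24.0, "example_test_B": 44.0}

def select_limit(name: str) -> float:
    """Return the emission limit for the named test; reject unknown names loudly."""
    try:
        return LIMITS_DBUV_M[name]
    except KeyError:
        raise ValueError(f"Unknown limit selection: {name!r}")

def test_known_limit_is_returned():
    assert select_limit("example_test_B") == 44.0

def test_bad_input_is_rejected():
    # Seeded error case: the routine must fail loudly, not silently return a wrong limit.
    try:
        select_limit("example_test_Z")
    except ValueError:
        pass
    else:
        raise AssertionError("invalid selection was silently accepted")

# The checks can be run with pytest, or directly:
test_known_limit_is_returned()
test_bad_input_is_rejected()
print("limit selection checks passed")
```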

Ideally, the person who created or modified the software should not be the individual tasked with validating it. You are best served by having someone with a different perspective test the MOTS software, since the person who created or modified it knows its intricacies, and that familiarity will likely result in a lower level of rigor in detecting errors. The goal is to "bulletproof" the product before it is released.

Evaluating Custom Software Using the White Box V/V Method

You can apply the same white box V/V method described previously to custom-created software. I recommend that development testing be part of your design process, and that you test and record routines as you build them. There are differences between software design development testing and software V/V, and the biggest question is when within the custom software design process to apply the V/V method. The V/V process typically takes place after the custom software design freeze and before product release. Although formal V/V comes late, the software should still be tested as the design moves forward; remember the "test early and test often" philosophy.

I must reiterate that testing during design development is not part of the V/V process. It is part of maturing the product, which is part of the design process. The development test results should also be documented for posterity. They could be stored in and drawn from a "lessons learned" database, which may help you meet other QMS standard requirements.

I recommend performing a risk analysis for custom software, creating a test case table based on the results of that analysis, and then V/V testing each test case. Seed the test cases with intentional errors as well as known good variables. Remember that the goal is to ensure the product is performing within expectations, and that some people are geniuses when it comes to breaking things. Conduct the test cases and record the results, keeping in mind that any noncompliant results require a failure analysis and corrective actions.
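
A risk-based test case table can be as simple as a few rows you iterate over and record. The risks, inputs, and expected outcomes below are invented examples only; yours should come from your own risk analysis and the applicable standard.

```python
# Illustrative risk-based test case table for a custom test routine.
# Every risk, input, and expected outcome here is a made-up example.

TEST_CASES = [
    {"risk": "wrong limit applied",
     "input": {"limit_dbuv_m": 24.0},
     "expected": "results judged against 24.0 dBuV/m"},
    {"risk": "negative dwell time accepted",
     "input": {"dwell_s": -1.0},
     "expected": "input rejected with an error message"},
    {"risk": "sweep exceeds antenna range",
     "input": {"stop_MHz": 10_000},
     "expected": "operator warned before the sweep starts"},
]

for case in TEST_CASES:
    # Execute each case against the software under test, record the actual result
    # alongside the expected one, and route any noncompliant result into failure
    # analysis and corrective action per your QMS.
    print(f"RISK: {case['risk']:<32} INPUT: {case['input']}  EXPECTED: {case['expected']}")
```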

Conclusion

The importance of software verification/validation has increased as software has infiltrated almost every aspect of our lives. QMS standards and processes are responding with heightened scrutiny, and the 2017 edition of ISO 17025 devotes an entire section to software verification/validation, further expanding the need to provide evidence that software is performing within its expected behavior. Meeting the software V/V process requirements is not extremely painful with the proper awareness. It is simply a matter of recording evidence that the software is functioning within its design.
