Over the last few years it has become evident to me that there is a clear need for a vetting process that allows EMC professionals to select test software based on their needs. In this article, I will begin by describing the software types and software characteristics that need to be quantified, and then present a scoring method to compare various products. Software selection is a process and, since it is a process, a visual tool can be used to aid the reader. The process tool I will use is called a Turtle Diagram.
In case you are unaware of the Turtle Diagram process, the body of the turtle presents the process name. The mouth of the turtle is used to identify inputs. The legs are used to show the methods/documentation, measurements, resources, and personnel. The output is the um… let’s call it the tail. It is shown in Figure 1.
For the purposes of this article, I have relabeled the parts of the basic Turtle Diagram. Inputs will be the type of software, Measurements will be the cost of the product, Methods/Documentation will be standards, Resources will be the instrument drivers the software supports, and Personnel will be the software support. The output will be the results of the process, displayed in table form. The modified Turtle Diagram is shown in Figure 2.
Let’s begin by describing the two types of commercially available test software. They are commercial off-the-shelf (COTS) software, and modified off-the-shelf (MOTS) software. These terms are in accordance with standards of the American Association for Laboratory Accreditation (A2LA)1. However, I prefer to think of these products as “black box” and “white box” software.
Black box (i.e., COTS) software products perform a specific test, usually for a dedicated test standard. They are an excellent choice if you are regularly performing the same standardized testing with little or no variation. Black box software products are also relatively easy to use. Black box measurement processes are typically invisible to the operator, who can see the instrument settings and review measurement results but cannot directly control the measurement process itself.
Black box software products are also difficult to modify. Requests for modification are typically sent directly to the original equipment manufacturer (OEM), who then constructs (or reconstructs) the software instrument drivers and measurement capabilities to meet the operator’s specific requirements.
White box (i.e., MOTS) software, on the other hand, is the polar opposite of black box software. It is easy to modify but tends to be complex to operate. Test processes are entirely controlled by the operator, who can create, observe and even modify specific measurements. However, the operator must have a greater level of measurement process knowledge to run white box software accurately and effectively.
In the end, black box and white box software products each have their own unique benefits, and you will need to determine which software type represents the best fit with your specific testing and measurement requirements.
Now we’ll move into the right front leg of our Turtle Diagram to evaluate costs, one of the primary concerns for any purchase. Here, we need to evaluate exactly what our cash will purchase for us. It sometimes comes as a surprise that what we thought was included in the purchase price is actually an “extra” that’s only available for an additional charge. If we want to avoid these surprises, we need the answers to a few questions. For example, what is the cost per license? What options are included, and not included, in the published cost? What start-up support and initial training is included within the purchase? What are the maintenance fees? Answering these questions in advance of the final purchase decision should help to reduce your sticker shock.
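The answers to those cost questions can be folded into a simple multi-year total-cost-of-ownership estimate. The sketch below is a minimal illustration; all figures and option names are hypothetical placeholders, not quotes from any vendor.

```python
def total_cost_of_ownership(license_fee, options, training,
                            annual_maintenance, years):
    """Estimate the multi-year cost of a test software purchase.

    All inputs are illustrative placeholders, not vendor pricing:
    license_fee        -- published per-license cost
    options            -- dict of extra-cost options and their prices
    training           -- start-up support and initial training fees
    annual_maintenance -- recurring maintenance fee per year
    years              -- planning horizon in years
    """
    return (license_fee + sum(options.values())
            + training + annual_maintenance * years)

# Hypothetical figures over a five-year horizon
cost = total_cost_of_ownership(
    license_fee=12_000,
    options={"report generator": 1_500, "extra instrument drivers": 2_000},
    training=3_000,
    annual_maintenance=1_800,
    years=5,
)
print(cost)  # 27500
```

The point of the exercise is that the “extras” and the recurring maintenance fees, not the published license price, often dominate the long-term cost.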
Standards and Documentation
The left front leg of our Turtle Diagram is devoted to product documentation and standards. Of course, we want to know the regulatory compliance or standards issues that the software is designed to help us assess. But we’re not just talking about the international standards that the software is designed to test to. We’re also enquiring about the standard software development practices that were used to develop the software product. Specifically, has the software been developed using proven quality methods,2 and has a proven process been used to verify and validate the final product? Is the OEM familiar with the Software Engineering Body of Knowledge (SWEBOK)?3 The extent to which a developer follows industry-standard software development practices is a good indicator of the quality of the final product. If the software manufacturer does not know the standards or cannot describe the process used to develop their product, you can expect a higher probability of software errors.
Next, let’s take a look at support considerations. The first question to ask is whether there is local support. Issues can usually be resolved more quickly if technical support is available from within the same hemisphere as your location. You’ll also want to enquire about global support, since many companies have test laboratories around the world. Next, you’ll want to know what type of support is available. A good software engineer may know how to write code, but may be less knowledgeable about EMC issues, and that could result in the need for additional time (and patience!) in resolving issues. The best technical support is most likely to come from a software development firm that has a mixture of both software developers and EMC engineers on their support staff or, even better, support engineers who have been trained in both software development and EMC-related issues.
Another support item that should be considered is software system maintenance. One of the benefits of so-called black box software is that it reduces the maintenance burden. However, maintenance is a requirement for every type of test software product. Those software development companies that allocate resources for product maintenance are likely to provide better, more advanced technology over the long term.
Test software is designed to communicate with and control test equipment to perform specific operations, conduct the necessary calculations, and generate an output. The number of instruments with which a specific test software product can communicate needs to be considered. In addition, it is important to know what type of instrument communication protocols the software can support, and whether your test software only supports legacy protocols that are likely to be obsolete in the near future. In this same vein, does the test software include instrument drivers dedicated to just one instrument manufacturer or for multiple instrument manufacturers? If your testing laboratory is equipped with various instruments from different manufacturers, your test software product must be able to handle them. Finally, how are new instrument drivers planned and created, and does the software development firm have access to or partnerships with those instrument companies designing new or advanced equipment?
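To make the multi-vendor driver question concrete, here is a minimal sketch of how a test software package might map instrument models from different manufacturers onto one common driver interface. Every class, model name, and command string below is hypothetical, invented for illustration; no actual product’s driver architecture is being described.

```python
# Sketch of a vendor-neutral driver registry. All names are hypothetical.
class SpectrumAnalyzerDriver:
    """Common interface that every analyzer driver must implement."""
    def set_center_frequency(self, hz: float) -> str:
        raise NotImplementedError

class VendorADriver(SpectrumAnalyzerDriver):
    def set_center_frequency(self, hz: float) -> str:
        return f":FREQ:CENT {hz}"        # modern SCPI-style syntax

class VendorBDriver(SpectrumAnalyzerDriver):
    def set_center_frequency(self, hz: float) -> str:
        return f"CF {hz / 1e6} MHZ"      # legacy proprietary syntax

# Registry keyed by instrument model (hypothetical model names)
DRIVERS = {"VENDOR_A_SA100": VendorADriver, "VENDOR_B_SA9": VendorBDriver}

def driver_for(model: str) -> SpectrumAnalyzerDriver:
    """Return a driver instance for the given instrument model."""
    return DRIVERS[model]()

# The test sequence stays the same no matter which vendor's box is racked
print(driver_for("VENDOR_A_SA100").set_center_frequency(1e9))
print(driver_for("VENDOR_B_SA9").set_center_frequency(1e9))
```

The design point is that the test sequence talks to the common interface, so adding a new manufacturer means adding one driver class, not rewriting the test.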
It is time to take a breath and wrap everything into one tight package, so that we can make a purchasing decision based on the data we’ve collected. The competing test software products need to be compared, and each product’s characteristics need to be assessed in the context of your laboratory’s unique requirements. In the end, you should select the test software product that best meets those requirements.
Table 1 illustrates a software comparison table based on the Turtle Diagram evaluation that we’ve presented here.
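One simple way to reduce such a comparison table to a single number per product is a weighted score across the four Turtle Diagram legs. The criteria below follow the categories discussed in this article, but the weights, ratings, and product names are hypothetical; you would substitute your own laboratory’s priorities and evaluation data.

```python
# Weighted scoring across the Turtle Diagram legs. Weights and 1-5
# ratings are illustrative only, not evaluations of real products.
WEIGHTS = {"cost": 0.30, "standards": 0.25, "drivers": 0.25, "support": 0.20}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings into one score using WEIGHTS."""
    return round(sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS), 2)

# Hypothetical candidates rated 1 (poor) to 5 (excellent)
products = {
    "Product X": {"cost": 3, "standards": 5, "drivers": 4, "support": 4},
    "Product Y": {"cost": 5, "standards": 2, "drivers": 3, "support": 3},
}
for name, ratings in products.items():
    print(name, weighted_score(ratings))  # X: 3.95, Y: 3.35
```

In this illustrative example, the cheaper product scores lower overall because the weights reflect a laboratory that values standards compliance and driver coverage, which is exactly the kind of trade-off the table is meant to expose.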
All business decisions should be knowledge based, and that knowledge should be derived from data collected using proven methods and tools. As an engineer colleague of mine always said, “Conclusions without data are opinions. Conclusions drawn from data are facts.” The need for data-driven decisions is even greater when a financial investment is involved, since capital is always a finite resource. Hopefully, this article has provided a method that will help to ensure that your test software purchases represent the best fit with your company’s needs.
The references, standards and helpful documents mentioned in this article include:
“R214—Specific Requirements: Information Technology Testing Laboratory Accreditation Program,” the American Association for Laboratory Accreditation, July 13, 2010. http://www.a2la.org/requirements/17025_IT_req.pdf.
Software Quality Engineering,
“Description of the SWEBOK Knowledge Area Software Engineering Process (Version 0.9),” Khaled El-Emam. 2001, National Research Council of Canada, Institute for Information Technology NRC, Canada. Available at http://nparc.cisti-icist.nrc-cnrc.gc.ca/npsi/ctrl?action=shwart&index=an&req=8914095&lang=en.
“Software Validation in Accredited Laboratories: A Practical Guide,” Gregory D. Gogates, June 7, 2010. ftp://ftp.fasor.com/pub/iso25/validation/adequate_for_use.pdf.
“Software Training and Consulting (SQE Training: Testing
More information on software quality engineering can be found at the American Society for Quality
(www.asq.org); the American Software Test Qualification Board, Inc.
(www.astqb.org); the Society for Quality Engineering (www.sqe.org), and at ETS-Lindgren, TILE Support (https://support.ets-lindgren.com/TILE/).
Jack McFadden is an EMC Systems Engineer with ETS-Lindgren in Cedar Park, Texas. His responsibilities include EMC test system design and integration. McFadden is an iNARTE certified EMC engineer as well as an iNARTE certified EMC technician with over 25 years’ experience in EMC test systems and software development. He is a certified tester foundation level (CTFL) per the American Software Testing Qualifications Board, Inc. (ASTQB). McFadden can be reached at Jack.McFadden@ets-lindgren.com.