
Time-Saving Effects of FFT-Based EMI Measurements

In the world of RF and microwave testing, the measurements required for EMI are among the most complex and time-consuming, since they incorporate a wide variety of specific tests that must be performed across a broad range of frequencies. They typically require not only many hours of test time, but often even more for configuring and reconfiguring the test setup.

Fortunately, advances in the signal processing abilities of test equipment have reduced test time over the years. The real improvements, however, are the result of enhanced measurement software, greater integration, automation of the test process, and the increasing acceptance of time-domain techniques based on the Fast Fourier Transform (FFT), for example in preview measurements of the disturbance spectrum. Together, these advances are steadily making the EMC measurement process faster, more efficient, and more accurate.

Over the years, there has been a continuing trend toward greater automation of test environments that includes more fully integrating the elements of the test system, and the EMC measurement domain has benefitted from this as well. Generally speaking, any technique that can reduce the amount of human intervention can reduce errors caused by manually reading and recording measurement results. Automation also verifies and maintains the integrity of measurement settings to ensure repeatable results, and produces a verifiable test environment.


At the software level, the “must have” list of features required by EMC software is a long one, the most fundamental being the ability to completely collect, evaluate, and document RFI voltage, power, and field strength in accordance with current standards. However, the complexity of EMC measurements also makes it essential that software have two clearly defined methods of operation. The first allows less-experienced users to obtain reliable, repeatable results using predefined, standards-compliant test routines, and the second allows “veterans” to specify custom values for every parameter in order to define their own test routines.

Virtually all commercial EMC software provides these capabilities to varying degrees, along with the ability to be updated as standards evolve. Today’s EMC software is typically based on Microsoft Windows, which makes it possible to create a familiar user environment that eliminates the need to navigate the nuances of proprietary software developed in-house.

FFT Benefits and Challenges

One of the most recent and promising developments in EMC testing is the use of time-domain scanning methods based on the FFT technique to identify the disturbance spectrum. This approach has demonstrated its ability to reduce preview measurement time by a factor of 1000 or more. It is currently being evaluated by standards committees to determine whether it should be included in forthcoming modifications, but its viability has already been proven in a variety of measurement situations. To understand the benefits of this technique, it is necessary to compare it to the conventional approach, the stepped-frequency scan.

To measure an unknown disturbance quantity in the frequency domain, a test receiver or analyzer must be tuned through a frequency range as quickly as possible. Ideally, this would result in a refresh rate that produces a stationary spectrum display (each frequency sweep being no longer than about 20 ms). In reality, the measurement must account for the settling time of the resolution bandwidth and the signal timing of the disturbance signals, which can be continuous or pulsed narrowband signals, or continuous or intermittent broadband disturbances.

For intermittent disturbances, proper adjustment of the measurement time is essential. Annex B of CISPR 16-2-1 through 16-2-3 contains a table of fastest scan rates, from which the minimum sweep times in Table 1 for each of the CISPR bands can be calculated. Since these are minimum sweep times, they may increase depending on the type of disturbance, even with the quasi-peak detector.



Table 1:  Minimum CISPR 16 sweep times for peak and quasi-peak detection

Since nearly all commercial standards use quasi-peak detection for compliance with a specific limit, EMI tests usually apply time-saving procedures such as minimizing the number of quasi-peak measurements. In addition to CISPR 16, MIL-STD-461 requires minimum measurement times for analog measurement receivers and minimum dwell times for synthesized measurement receivers (Table 2). For equipment whose operation may produce emissions at infrequent intervals, times for frequency scanning must be increased to capture them.


Table 2:  Bandwidth and measurement time specified by MIL-STD-461F

Preview measurements in accordance with commercial or automotive standards use a “max. peak” detector to first identify all frequencies at which emissions approach the limit values. Applying quasi-peak detection to only the detected critical frequencies significantly decreases the time needed for the final measurement. However, preview measurements can still take hours because they must cover the range from 30 MHz to at least 1 GHz.

To reliably detect a pulse-like disturbance, the observation time per frequency point must be at least as long as the reciprocal of its pulse repetition rate. In addition, disturbance measurements must always be made at the maximum level (i.e., the worst-case emission), which usually requires repositioning the antenna and the device under test.

For example, scanning 30 MHz to 1 GHz with an IF bandwidth of 120 kHz and a step width of 40 kHz to measure the entire spectrum (without gaps and with sufficient measurement accuracy) produces 24,250 measurement points. With a dwell time of 10 ms per frequency point, the total measurement time for a single preview scan is about 4 minutes. This time must be multiplied by a factor of 20 or more to account for turntable positioning, antenna height adjustment, and antenna polarization switching.
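The arithmetic behind this estimate can be sketched in a few lines of Python; the span, step width, dwell time, and 20x setup multiplier are the values quoted above:

```python
# Preview-scan time estimate: 30 MHz to 1 GHz, 40 kHz steps, 10 ms dwell.
f_start_hz = 30e6
f_stop_hz = 1e9
step_hz = 40e3
dwell_s = 10e-3

n_steps = int((f_stop_hz - f_start_hz) / step_hz)  # 24,250 frequency steps
scan_time_s = n_steps * dwell_s                    # 242.5 s, about 4 minutes

# A 20x multiplier for turntable/antenna positioning turns one 4-minute
# preview scan into well over an hour of total test time.
total_time_min = 20 * scan_time_s / 60             # roughly 80 minutes
```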

Using a spectrum analyzer instead of a test receiver does not overcome the problem because the time of a single sweep must be long enough for at least one disturbance pulse event to fall into the instrument’s resolution bandwidth at each frequency.

For repetitive sweeps with maximum hold on the trace display, the observation time must continue until the spectrum becomes stable, and a continuous broadband signal will require many fast sweeps to show the envelope of the broadband spectrum. Spectrum analyzers usually allow fewer sweep points than test receivers, so they may not provide enough frequency resolution to measure radiated emissions, making time-consuming partial sweeps necessary.

Conventional EMI measurement systems can only measure the signal within the resolution bandwidth within a stated measurement time, whereas FFT-based time-domain EMI measurement systems allow a much wider part of the observed spectrum to be analyzed simultaneously. This is because the EMI test receiver samples successive sections of spectrum at the IF with a bandwidth of several megahertz rather than only 120 kHz, and each “subspectrum” is calculated simultaneously with a specific resolution using FFT.
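A rough sketch shows why this matters: one wide-IF FFT capture replaces many stepped dwells at once. The 7 MHz section width below is an assumed example; the actual capture bandwidth is instrument-dependent.

```python
# How many stepped-scan frequency points one wide-IF FFT capture replaces.
step_hz = 40e3          # stepped-scan step width for a 120 kHz IF bandwidth
fft_section_hz = 7e6    # assumed width of one FFT "subspectrum" (illustrative)

points_per_capture = fft_section_hz / step_hz  # 175 stepped dwells at once
```

Before any other overhead, each capture therefore covers on the order of a hundred times more spectrum per dwell than a stepped scan.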

Time-Domain Scan Considerations

However, when applying the time-domain technique, steps must be taken to ensure that all types of signals that can appear in a disturbance spectrum are correctly detected, even intermittent signals with a very low pulse repetition frequency. If these signal types are not considered, the frequency spectra calculated by the FFT may be displayed incorrectly in both level and frequency.

Theoretically, an exact calculation of the frequency spectrum of a time-domain signal would require an infinite period of observation, and it would be necessary to know the signal amplitude at every point in time. In practice, these requirements are unrealistic, so the FFT aided by digital signal processing is used instead. Analog-to-digital conversion turns the continuous input signal into an amplitude- and time-discrete signal, and applying the FFT limits the signal observation time to a finite (and practical) amount. This means that calculation of the frequency spectrum requires only a finite number of discrete samples in the time domain, a process called “windowing.”

If the length of this window does not exactly correspond to an integer multiple of the periods of the frequencies contained in the input signal, it results in spreading or leakage of the spectral components away from the correct frequency and an undesirable modification of the total spectrum. The generation of spectral components that are not present in the original time-domain signal is known as the “leakage effect,” and is most severe when a simple rectangular window is used. The best way to reduce this effect is to choose a suitable window function that minimizes spreading.
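The leakage effect can be reproduced with a few lines of NumPy. This is purely an illustration of the windowing principle, not the receiver's actual signal processing:

```python
import numpy as np

# A 100.5 Hz tone falls midway between FFT bins, so a rectangular window
# spreads its energy across the spectrum; a Hann window suppresses the
# spread at the cost of a wider main lobe.
n = 1024
fs = 1024.0                        # sample rate in Hz -> 1 Hz bin spacing
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 100.5 * t)  # tone midway between bins 100 and 101

rect = np.abs(np.fft.rfft(x))                  # rectangular (no) window
hann = np.abs(np.fft.rfft(x * np.hanning(n)))  # Hann window

def leakage_db(spec, guard=5):
    """Largest component more than `guard` bins from the peak, dB rel. peak."""
    k = int(np.argmax(spec))
    far = np.concatenate([spec[:k - guard], spec[k + guard + 1:]])
    return 20 * np.log10(far.max() / spec.max())

rect_leak_db = leakage_db(rect)  # roughly -20 dB: severe spreading
hann_leak_db = leakage_db(hann)  # far lower leakage floor
```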

The spectrum calculated by the FFT is a discrete frequency spectrum consisting of individual frequency components at the so-called “frequency bins,” which are determined by the FFT parameters. The original spectral response can only be observed at the discrete frequency bins, and there may be higher amplitudes in the original signal spectrum at frequencies between two adjacent bins. The amplitude error this causes is called the “picket fence effect” (Figure 1) and is also characteristic of conventional stepped-frequency scans.
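The picket fence effect is easy to demonstrate: with a rectangular window, a tone exactly on a bin and a tone midway between two bins read about 3.9 dB apart (an illustrative sketch, not a receiver measurement):

```python
import numpy as np

# Same tone amplitude, measured on-bin vs. midway between two bins.
# With a rectangular window, the mid-bin peak reads about 2/pi of the
# on-bin peak, i.e. roughly a 3.9 dB "scalloping" loss.
n = 1024
t = np.arange(n)
on_bin = np.abs(np.fft.rfft(np.cos(2 * np.pi * 100.0 * t / n))).max()
mid_bin = np.abs(np.fft.rfft(np.cos(2 * np.pi * 100.5 * t / n))).max()

scallop_db = 20 * np.log10(mid_bin / on_bin)  # about -3.9 dB amplitude error
```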


Figure 1:  A description of the picket fence effect

Time-domain measurement techniques employing the FFT on intermittent disturbance signals require careful attention to certain system parameters so that all disturbance signals are detected and measurement accuracy is maintained. For example, when an impulse-type disturbance signal is captured by the Gaussian-type FFT window, the signal amplitude may be reduced at the window edges. To minimize this error while also ensuring that no signal is missed, EMI test receivers that employ the time-domain scan include an overlap of the window function in the time domain.

Such receivers usually provide two settings for the step mode of the time-domain scan: an “Auto CW” mode and an “Auto Pulse” mode. In Auto CW mode, the overlap in the time domain is about 20%, which allows narrowband signals to be analyzed as quickly as possible. The Auto Pulse mode provides more than 90% overlap and is intended for broadband-impulsive and mixed signals. It ensures that even very short impulse signals at the edge of the Gaussian-type time-domain window are calculated without significant amplitude error. With this much window overlap, only a small amount of ripple remains in the time domain, resulting in only a small measurement error.
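The effect of window overlap on a short impulse can be sketched as follows. The Gaussian window width is an assumed illustrative value, not a published receiver parameter:

```python
import numpy as np

# A short impulse is weighted by the window's value at its arrival time,
# so the worst case lands midway between adjacent window centers. More
# overlap (a smaller hop between windows) keeps that midpoint weight
# close to the window peak.
n = 4096
sigma = n / 6.0  # assumed Gaussian width (illustrative)
w = np.exp(-0.5 * ((np.arange(n) - n / 2) / sigma) ** 2)  # peak 1.0 at center

def worst_case_drop_db(overlap):
    hop = n * (1.0 - overlap)        # samples between window centers
    midpoint = int(n / 2 + hop / 2)  # impulse midway between two centers
    return 20 * np.log10(w[midpoint])

drop_cw_db = worst_case_drop_db(0.20)     # ~20% overlap: large droop possible
drop_pulse_db = worst_case_drop_db(0.90)  # ~90% overlap: sub-dB ripple
```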

The worst-case amplitude error for such receivers is typically 0.4 dB for the lowest point of the amplitude ripple referred to the maximum pulse amplitude, and the resulting average error is 0.09 dB, a theoretical value for a minimal pulse width. The real error depends on the pulse duration and is usually even smaller. When performing the time-domain scan with CISPR weighting detectors (i.e., quasi-peak), correct detection of single pulses requires the data rate of the internal digital signal processing to be high enough to accommodate the IF bandwidths used, and a 90% overlap of the FFT windows is essential for proper quasi-peak detection.

Analog filtering in the signal path influences the frequency response of a time-domain scan, and non-ideal correction of the analog filters in the RF and IF signal paths of the test receiver adds to overall measurement uncertainty. The bandwidths of the preselection filters become narrower as frequency decreases (for example, 2 MHz bandwidth at 8 MHz vs. 80 MHz bandwidth at 500 MHz).

To minimize the influence of the preselection filters’ frequency response, the receiver reduces the bandwidth for the time-domain scan accordingly, from 7 MHz to 150 kHz for example, depending on the scan range, and compensates for the frequency response of the analog IF filters.

Comparing Stepped-Frequency and Time-Domain Scans

In frequency bands A to E, the CISPR 16-1-1 standard specifies bandwidths and tolerance masks for the IF filters used in disturbance measurements to commercial standards. In contrast, MIL-STD-461 defines 6-dB bandwidths in decimal steps that must be met with a 10% tolerance. Any deviation from the specified tolerances causes amplitude errors.

To verify IF selectivity, a time-domain scan with max. peak detection was performed for sinusoidal test signals. A single measurement is insufficient for a complete verification because the spacing of adjacent frequency bins is set to one-quarter of the IF bandwidth (Figure 2). The test was therefore repeated, increasing the start frequency of the time-domain scan step-by-step in small increments, and all received frequency points were then merged into a single trace (Figure 3).


Figure 2:  IF selectivity using the time-domain scan


Figure 3:  Measured IF selectivity for CISPR bands A, B, and C/D

At lower levels, the inherent noise of the receiver limits the dynamic range and is specified as the displayed average noise level (DANL). At higher levels, the nonlinearity of mixers and amplifiers limits the measurement range and is characterized by the 1-dB compression and third-order intercept points. The 1-dB and 3-dB sensitivities are defined analogously: they are the levels at which the signal-to-noise ratio is just high enough that the noise-induced measurement error does not exceed 1 dB or 3 dB, respectively. Dynamic range usually specifies the usable level range between the 1-dB sensitivity and the 1-dB compression point.
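The 1-dB and 3-dB sensitivity figures follow from a standard power-addition model of noise (an assumption here, not a value from any particular receiver's specification): when uncorrelated noise adds to the signal, the displayed level rises by 10·log10(1 + N/S) dB.

```python
import math

# S/N ratio needed so that additive noise raises the displayed level by
# at most err_db:  10*log10(1 + N/S) = err_db  =>  S/N = 1/(10**(err/10) - 1)
def snr_for_error_db(err_db):
    return 10 * math.log10(1.0 / (10 ** (err_db / 10.0) - 1.0))

snr_1db = snr_for_error_db(1.0)  # ~5.9 dB S/N for <= 1 dB noise-induced error
snr_3db = snr_for_error_db(3.0)  # ~0 dB: noise power equals signal power
```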

A measurement using a pulse generator for CISPR bands C and D compares the frequency responses of the stepped-frequency and time-domain scans (Figure 4). An exact evaluation of receiver measurement uncertainty is not possible with this measurement, and it does not consider errors caused by the cable and pulse generator, such as frequency response, matching, and long-term stability. However, it nonetheless shows that the differences between the two scan types are negligible.


Figure 4:  Overall frequency response of the test receiver for the time-domain scan (blue) and stepped-frequency scan (black) including the frequency response of the CISPR pulse generator

Figures 5a through 5d show the measured frequency response of the CISPR bandwidths of 200 Hz, 9 kHz, and 120 kHz, and the MIL-STD IF bandwidths of 100 kHz and 1 MHz, for both the time-domain scan and the stepped-frequency scan. The traces match very well and comply with the requirements of the standards. Table 3 shows that the time-domain scan offers a higher dynamic range than the stepped-frequency scan, largely independent of IF bandwidth.


Figures 5a, 5b, 5c and 5d:  Comparison of resolution bandwidths
with stepped-frequency scan (blue) and time-domain scan (green)
for the CISPR 16 and MIL-STD 461 standards

Table 3:  Dynamic range comparison showing that the time-domain scan offers a higher dynamic range than the stepped-frequency scan

The evaluation of measurement times was based on the frequency bands for EMI measurement in accordance with CISPR 25 (EN 55025) for automotive products and other military and commercial standards. CISPR 16-2 requires the measurements to be long enough that at least one signal from the disturbance source is detected. For this comparison, the measurement time per frequency step for the commercial standards was set to 10 ms or 20 ms to correctly detect impulsive disturbances down to a pulse repetition rate of 100 Hz or 50 Hz, respectively.

For measurements to MIL-STD-461, the measurement time was set according to Table 2. The measured values show that the time-domain technique considerably reduces the time required to perform a frequency scan even when using quasi-peak weighting and a dwell time of 1 s. The exact reduction depends on the IF bandwidth.

In short, the FFT-based time-domain scan for preview measurements allows EMI testing to be performed in accordance with CISPR 16 orders of magnitude faster than with a stepped-frequency scan, and the measurement uncertainty of the two approaches is nearly identical. However, the stepped-frequency scan remains a proven, widely accepted method, so it makes a great deal of sense to combine the frequency- and time-domain techniques throughout the design and certification process to ensure the best possible results.

Summary

FFT-based time-domain scanning, along with the increasingly formidable capabilities of EMC software and greater process integration and automation, is transforming the EMC measurement process. As commercial and military standards evolve, these benefits will become more and more important, as will their ability to make the process easier for designers who already have a “full plate” of measurements necessary to bring a product to market.

 
