Problems That Can Arise in a Working EMC Laboratory, and How Pre-test Verifications Can Help

This article describes some of the everyday issues that can arise in a working EMC test laboratory which may affect the quality of the measurements made, and illustrates these with real-life examples that demonstrate the importance of robust pre-test verifications. The main focus is on emissions testing, as this is perhaps the area where most problems can occur without being detected. The article also looks at how using various types of reference source during pre-test verifications can help identify those problems and prevent invalid measurements being made.

Test laboratories, in particular those accredited to quality standards such as ISO 17025, engage in regular checks to ensure that their equipment and test setups are working correctly. Once established and proven, however, a test setup may be altered as individual items are replaced or reconfigured, or where equipment is shared and moved around between tests. Variation can then creep into the results, whether through additional wear and tear on connectors and cables, through the setup being configured incorrectly, or through the equipment itself being damaged in transit or misused.

Equipment in an EMC test laboratory is calibrated at periodic intervals, typically on an annual basis, although the actual intervals may vary. This confirms that the equipment is operating within its published specifications and also, in the case of non-adjustable items such as cables or antennas, gives a set of values or factors that are necessary to correctly interpret the measurements subsequently made using that equipment. Such calibrations are effectively a snapshot of the equipment’s performance, which may degrade over time. The more sophisticated equipment may include self-calibration functions, but these may not be as comprehensive as a full calibration and are unlikely to verify the test system as a whole.

The human factor cannot be overlooked either: for example, applying the wrong settings or configuration for a particular test, or applying the wrong correction factors for the equipment used. Laboratory test procedures aim to minimize the chances of this happening (e.g. checklists) or the likelihood of wrong results being released (e.g. counter-signing), while certification schemes such as iNARTE and other proficiency test programs help promote professionalism and attention to detail. Even so, mistakes can still happen.

These and other influences on the quality of measurement results mean there is a need to ensure a degree of confidence in the test environment, the test equipment and the way it is set up before it is used to measure the characteristics of an unknown Equipment Under Test (EUT). This can be achieved with regular pre-test verification checks, which already feature in some test standards such as EN 61000-4-3 and Defence Standard 59-411.


Examples of problems that may be encountered in a test laboratory

Equipment failure

As with all complex equipment, a myriad of problems may affect the performance of EMC test equipment. RF test environments also face a subtle problem in that they may exhibit only degraded performance rather than an outright failure mode. In this context, equipment failure may be defined in terms of expected behaviour, namely some event leading to a complete or partial change in equipment characteristics.

Physical damage
As well as potential mechanical problems, where physical knocks and other damage may overtly affect the performance of the equipment, RF connections can also exhibit degraded performance rather than an outright open- or short-circuit failure mode. RF connectors are particularly vulnerable where high-frequency work pushes the use of ever-shrinking connector sizes. Problems can arise from the damage caused by snagging a cable and compromising its screen or shield at the connector ferrule, or from parametric errors caused by tightening the connector to the incorrect level of torque. The former can sometimes be seen as external signals leaking into otherwise “sealed” anechoic room signal paths, whilst the latter may manifest as frequency-dependent nulls appearing in the system response.

Equipment that is moved around is vulnerable to damage, and the spontaneous and unwanted influence of gravity could earn a chapter on its own.

Electrical damage
In common with most electrical equipment, incorrect power supplies, electrostatic discharges and transient overvoltages can cause a catastrophic failure of the equipment. While this can usually be detected, there are situations where the failure could be overlooked, for example where damage to an amplifier results only in reduced gain. High-frequency radiated emissions testing often uses a preamplifier in the receive path between the antenna and the receiver, and for best signal-to-noise performance this should be placed as close to the antenna as possible. Physically, this could mean that the amplifier is situated underneath the ground plane or flooring on which the antenna mast is positioned, where it cannot be seen or easily examined; a loss in gain of a few dB may therefore go undetected for some time unless the system is tested as a whole, with knowledge of the characteristics of the other equipment in the signal path. For emissions testing, this loss of gain would naturally lead to lower signal strengths being recorded. For immunity testing, which operates as a closed-loop system in which the E-field is monitored by a frequency-insensitive probe, maintaining a fixed field intensity using an amplifier with reduced gain may result in the power being distributed across harmonics of the intended signal. This would reduce the field intensity at the intended signal frequency (under-testing) whilst simultaneously exposing the EUT to unwanted threat signals at higher frequencies (over-testing).
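
As a rough illustration of this last point, the following minimal Python sketch models an amplifier output stage as a tanh soft-clipper (a common idealization, and an assumption here, as are all the numbers) together with a closed-loop levelling routine that raises the drive until the broadband RMS output, i.e. what a frequency-insensitive field probe would report, reaches the target. With a healthy output stage nearly all the power stays at the fundamental; with roughly 3 dB of lost headroom, a measurable fraction shifts into harmonics:

    import numpy as np

    T = np.linspace(0.0, 1.0, 4096, endpoint=False)   # one period of the CW tone

    def waveform(drive, sat):
        # CW tone through a soft-clipping (tanh) amplifier; damage to the
        # output stage is modelled as a reduced saturation level 'sat'
        return sat * np.tanh(drive * np.sin(2 * np.pi * T) / sat)

    def fundamental_share_db(target_rms, sat):
        # Bisect the drive level until the broadband RMS output (what a
        # frequency-insensitive field probe reports) meets the target,
        # then return the fraction of RF power left at the fundamental
        lo, hi = 0.0, 100.0
        for _ in range(60):                 # RMS rises monotonically with drive
            mid = 0.5 * (lo + hi)
            if np.sqrt(np.mean(waveform(mid, sat) ** 2)) < target_rms:
                lo = mid
            else:
                hi = mid
        spec = np.abs(np.fft.rfft(waveform(hi, sat)))
        return 10 * np.log10(spec[1] ** 2 / np.sum(spec[1:] ** 2))

    for sat in (1.0, 0.7):                  # healthy vs. degraded output stage
        print(f"sat = {sat}: fundamental = {fundamental_share_db(0.6, sat):.2f} dB of total")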

Similarly, overloaded inputs and unloaded outputs may also cause partial failure or loss of performance, such as selectively burned-out attenuator banks on analyzer inputs or amplifiers with reduced gain due to soft breakdown of the output drive stage.

Old age or extended “normal” use
Even in the absence of abuse, equipment aging can cause drifting characteristics that may be masked by self-calibration routines. In the immediate term, this may invalidate a calibration after a relatively short period, e.g. a self-calibration performed at power-up on test equipment that takes several minutes to stabilize. If the rate of long-term drift increases, there may come a point at which the annual recalibration cycle needs to be shortened.

Example equipment failure – mains PSU
In this example, Figure 1 shows the output voltage from a 230 V 50 Hz a.c. power supply used in an EN 61000-3-3 mains flicker test setup. A fault in the power supply stability led to a slow oscillation in the voltage produced that was greater in magnitude than the flicker disturbance being measured. However, because the frequency of oscillation was so low, this showed up in the test results only as an increased dmax value and did not translate through to the short-term flicker disturbance value Pst.


Figure 1: Output voltage from a faulty supply used for mains flicker testing


Repeatability and consistency

The reasonable aim for repeat measurements of the same EUT would be for them to lie within the stated measurement uncertainty of the test, assuming of course that the EUT is itself stable. Well thought-out and comprehensive test procedures help to improve the repeatability of such measurements by clearly defining the parameters that could cause such changes: cable layouts, EUT position and (if applicable) antenna position. These definitions can be supported by physical constraints such as cable guides and winding formers, by documentation templates such as checklists, and by procedural templates such as pre-programmed test routines.

Where there is room for interpretation or ambiguity in the process that could lead to variation in the results, the test setup or procedure may need to be more rigorously defined. For example, adapting a radiated emissions test procedure covering 30 MHz to 1 GHz to cover measurements above 1 GHz may need to take into account the increased accuracy with which the EUT must be placed in order to achieve consistent results, and the fact that bore-sighting the receive antenna becomes increasingly important. The detail with which such variables need to be controlled is especially important where the test setup is changed through multipurpose use or regular reconfiguration, such as using the same test chamber for performing conducted and radiated emissions and immunity testing on the same EUT.

As well as the effects of damage and aging on test equipment already discussed, the environment poses a threat to the stability of the measurement system. Threats to the infrastructure exist due to weather, temperature variation and proximity to other equipment; EMC test instruments are not exempt from EMC, after all. Test setups using chambers or open test sites inevitably feature some hard-to-see signal cabling that may not be subject to regular checks. Again, aging or wear and tear can affect the integrity of the test environment, a good example being the carbon-loaded foam absorber in a fully anechoic room (FAR), which is fragile and easily damaged. A few absorber tips knocked off in passing may not contribute a significant error, but the effect would clearly be cumulative.

Example test setup problem 1 – Repeatability and consistency
This example is from a fully anechoic room that was used for both radiated emissions and immunity testing, and so underwent occasional changes to the arrangement of support equipment and absorber placement. The room was known to have a more rippled response from around 600 MHz upwards compared to a similar room, but what was not known was the cause, or whether there was any variability in the magnitude. It was noted that an EMC-hardened camera was used during immunity testing, and that a little-used floor-level patch panel in the chamber wall at the EUT end of the room had not been covered with ferrite. Moving the camera and covering the patch panel with spare ferrite tiles together had an effect of a couple of dB, enough to raise concerns about the accuracy of the measurements being made where the permissible uncertainty for the overall system is only a few dB.


Figure 2: Example of the effect of reconfiguring a Fully Anechoic Room


The greatest variation was noted when it was found that using a plastic table instead of a wooden one to support the EUT resulted in a reduction of around 3 dB in the null at 820 MHz; both the plastic and wooden tables used in the laboratory have since been replaced by low-density polystyrene blocks, as required by the latest version of certain test standards.

Example test setup problem 2 – environment
The results in Figure 3 were taken from a radiated emissions test using an Open Area Test Site (OATS). Apart from the ambient transmissions from taxis and mobile phones, radio stations and other broadcasters, the response was expected to be relatively smooth. Ripples in the response were observed at the lower end of the spectrum, which raised concerns and triggered an investigation into the cause. There had been a fair amount of rain the previous day, and a quick search found that a pair of N-type connectors coupling the antenna cable to the underground cable run had become contaminated by water and fine mud particles. Cleaning the connectors made a significant improvement, and they were replaced to be certain of no longer-term issues.


Figure 3: Example of environmental effect on an Open Area Test Site measurement


These examples further highlight the fact that problems may produce symptoms that are frequency dependent or otherwise restricted to particular cases, and that it is necessary to exercise as much of the system as possible when making pre-test checks.


Equipment out of calibration

Regular calibration is a key requirement of quality management systems such as ISO 17025 and ISO 9001. When it comes to applying the results to the test operation, a calibration gives only a snapshot of the equipment's performance, so consideration of the detail of the calibration is needed, as well as a strategy for monitoring changes that may occur between calibrations.

It is worth considering what is meant by an item of equipment being “out of calibration”. To the manufacturer of the equipment it may mean that the equipment is operating outside its acceptable specification. To the user it may also mean that the equipment is operating outside its expected specification or that it has gone beyond its expected calibration period.

For example, a signal generator may have a quoted level accuracy of within ±1 dB and also a level flatness of ±1 dB across its frequency range, suggesting that a worst-case error of ±2 dB (with relevant statistical weighting) be fed into the measurement uncertainty budget. However, the former might only be calibrated at a single frequency, and the latter at a single output level. A comprehensive calibration of the equipment at the factory might check across a range of both settings to ensure that each section of the attenuator bank maintained its value across the frequency range; a comprehensive recalibration might repeat this to ensure that a single attenuator bank has not developed a frequency-dependent value due to, say, a cracked chip resistor.
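
To make the statistical weighting concrete, one common treatment (following the usual GUM approach, and assuming each ±1 dB limit describes a rectangular distribution, which is an assumption here) converts each limit to a standard uncertainty, combines them by root-sum-of-squares and expands with a coverage factor:

    u_{\mathrm{acc}} = u_{\mathrm{flat}} = \frac{1\ \mathrm{dB}}{\sqrt{3}} \approx 0.58\ \mathrm{dB}

    u_c = \sqrt{u_{\mathrm{acc}}^2 + u_{\mathrm{flat}}^2} \approx 0.82\ \mathrm{dB}

    U = k \, u_c = 2 \times 0.82 \approx 1.6\ \mathrm{dB} \qquad (k = 2,\ \text{approximately 95\% confidence})

so the contribution to the budget is rather smaller than the naive ±2 dB worst case.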

Example calibration problem
This example shows the response of two receivers used for emissions measurements in a fully anechoic room.

During pre-test checks using a broadband reference generator, two steps in the response, at 275 MHz and 720 MHz, could clearly be seen from one of the receivers, both showing some frequency dependency and giving an accumulated error of up to 4 dB at the higher frequencies. The receiver in question was quarantined and sent for recalibration, although no error was found under the recalibration procedure and it was returned with the same response.


Figure 4: Example of an anomalous receiver response


In this case, the equipment is reported to be still within the manufacturer's specification, even though it can clearly be seen that two calibrated devices could give results that differ by more than would be acceptable for the budgeted measurement uncertainty. To carry on using the equipment, either an “error factor” could be applied (in the case of a known, predictable and stable variation) or the uncertainty budget increased (in the case of a known but unpredictable variation).


Operator error

The final problem area to consider here is the human factor. Everyone makes mistakes, and in spite of training, experience and competencies mapped under quality management systems, the occasional error slips through. These include using the wrong settings or setup for a particular test, or leaving gaps in record-keeping during testing that lead to confusion over which correction factors to apply. Errors become increasingly likely when multipurpose test equipment is reconfigured, when new and unfamiliar tests are being introduced (for example, >1 GHz testing), or when changes to the test standard have not been implemented.

Managing the risks associated with measurements can be helped by a robust proficiency testing plan that periodically exercises all aspects of the test system, including the personnel, using blind testing of known artefacts. Inter-laboratory proficiency testing can also help to improve situations where internal consistency does not translate to consistency between different laboratories.

Errors may also be the result of unclear instructions that are open to interpretation or do not sufficiently specify certain criteria. Arguably this is not operator error, but a deficiency in the test procedure or process. It is unfortunately also possible that operator indifference may be the cause, which might be helped through training.

Operator error – example
One example encountered involved a pre-test check on an open area test site that threw up a rippled response similar to that shown in Figure 3. After checking the cables and connectors for water ingress and finding nothing untoward, it was noticed that the ripples were much less pronounced on the vertically polarized test scan. The root cause was then quickly traced to a 1.5 m metal bar used to secure the second, rear-facing doors of the EUT hut, which the operator had forgotten to remove that morning.


Verification Sources for Pre-test Checks

The purpose of pre-test verification

The purpose of a verification test is to reduce the risk of problems such as those described above being overlooked, and thus ensure a degree of confidence in the test environment, the test equipment and the way it is set up before it is used to measure unknown EUT.

A distinction needs to be made between verification tests and calibrations. Calibrations will require absolute values to be known at some point; for example, to calibrate an emissions test setup it will be necessary to be able to compare the measured signal level reported by the test equipment against a known signal level. A typical verification strategy would be to measure the output from a reference source following a full calibration of the test equipment, setup and environment, and then use this as a baseline measurement (with uncertainty budget considered) for subsequent pre-test, daily or weekly checks. This strategy, being a relative test, exploits the absolute accuracy of the initial site calibration and only requires the verification source to be both stable and strong enough to avoid signal-to-noise issues.
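
By way of illustration, the comparison step of such a strategy amounts to very little code. The following minimal Python sketch (the file names, column layout and 2 dB tolerance are hypothetical choices, not from the article) compares today's reference-source sweep against the stored baseline and flags any frequency where the deviation exceeds the chosen limit:

    import numpy as np

    # Baseline captured immediately after a full calibration of the test
    # equipment, setup and environment; columns: frequency (Hz), level (dBuV)
    baseline = np.loadtxt("reference_source_baseline.csv", delimiter=",")
    today = np.loadtxt("reference_source_today.csv", delimiter=",")

    TOLERANCE_DB = 2.0   # chosen with the test's uncertainty budget in mind

    delta = today[:, 1] - np.interp(today[:, 0], baseline[:, 0], baseline[:, 1])
    bad = np.abs(delta) > TOLERANCE_DB
    if bad.any():
        for f, d in zip(today[bad, 0], delta[bad]):
            print(f"CHECK FAILED at {f / 1e6:8.2f} MHz: {d:+.2f} dB from baseline")
    else:
        print(f"Pre-test check passed; worst deviation {np.abs(delta).max():.2f} dB")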

An example of this would be in determining the previously shown effect on emissions measurements caused by the table supporting the EUT. CISPR 16-1-4:2010 Section 5.5.2 requires this effect to be considered and describes the process for calculating it. The full calibration procedure uses a signal generator and biconical antenna to produce an E-field both with and without the support table present, and uses the difference to calculate a value to be applied in the measurement uncertainty budget. Once this has been done, subsequent verification checks can be carried out, using a reference generator fitted with a rod antenna mounted horizontally on the table, to give a quick and strong indication of any variation that might occur due to using a different table or altering its position between EUT tests.

Aside from any requirements directly stated in the test standard, to be of benefit to the laboratory the pre-test verification has four main criteria to meet:

  • must be accurate to within the order of the test measurement uncertainty
  • must be repeatable
  • should exercise as much of the complete test setup as possible
  • should ideally be quick to perform, to minimize the effective downtime of the test facility

Immunity tests may be considered simpler than emissions testing, simply because the tester is looking for a gross response from a system given a known and quantified stimulus, unlike emissions testing, which is looking for a quantifiable response from an unknown stimulus. The electrostatic discharge (ESD) test standard EN 61000-4-2 requires a pre-test verification to be carried out, which involves checking that the ESD generator gun discharges into a spark gap. Fast transient and surge generators can be checked using little more than a digital storage scope to verify that the pulses produced have the correct rise/fall times, magnitude and repetition characteristics. Pre-test verification for radiated immunity tests to EN 61000-4-3 has been discussed previously [In Compliance, September 2012] and remains difficult to achieve comprehensively in practice without being significantly simplified. The remainder of this article will therefore focus on emissions test setups.

For emissions testing, substituting the EUT in a test setup with a stable reference signal or disturbance source prior to the test proper addresses these requirements, with the added benefit of minimizing extra setup or reconfiguration time. During verification checks a full or partial test can be carried out depending on the time available, which may be only a few minutes in a busy commercial environment. The results can then be checked against the baseline levels to give a degree of confidence that the setup is functioning normally. The results from these tests should also be saved and used to monitor long-term trends in the test system performance or environment, and to provide evidence to accreditation authorities that ongoing checks and balances are being performed.
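
Keeping that longer-term record need not be elaborate. As a minimal sketch (the file name and fields below are hypothetical), each pre-test check can append one line to a running log, which can later be plotted or inspected for slow drift:

    import csv, datetime, pathlib

    LOG = pathlib.Path("verification_trend_log.csv")   # hypothetical trend log

    def log_check(site: str, worst_deviation_db: float) -> None:
        # Append today's pre-test verification result so that slow drift
        # in the system response can be spotted across weeks or months
        is_new = not LOG.exists()
        with LOG.open("a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(["date", "site", "worst_deviation_dB"])
            writer.writerow([datetime.date.today().isoformat(), site,
                             f"{worst_deviation_db:+.2f}"])

    log_check("FAR-1", 0.8)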


Reference signal sources: a comparison

A reference signal source used for verification purposes should be easy and quick to set up, should be stable, should provide a clear indication of the system performance and should, ideally, cover the full frequency range of the test under consideration.

Methods for generating stable signals over a wide frequency range include:

  • Adjustable signal generators
  • Harmonic (comb) generators
  • Continuous (statistical white noise) generators

One possible method is to use a calibrated radio frequency signal generator coupled into the measurement system, recording the measured signal at different frequencies. This can provide a very flexible solution, but the signal generator needs to be set up and adjusted; together with the associated cabling, this may require significant user (or software) input to provide a working system.

An alternative is to use a purpose-built device. Typically these are broadband signal generators that are designed to have known, stable characteristics, and that generate feature-rich signals (usually comb or noise) in order to exercise as much of the test range as possible at the same time.

The choice between different types of reference signal for verification of radiated or conducted test environments is important from the point of view of checking as many aspects of the setup as possible within the time available. Broadband stochastic noise provides a continuous output throughout the spectrum, which helps to avoid any frequency-related features being overlooked (Figure 5). The random nature of the noise means that it can also be used to distinguish between filter bandwidths and different types of detectors.


Figure 5: Example noise reference source output

Alternatively, comb signal sources provide discrete frequency components that allow the accuracy of frequency measurements to be checked, and also allow greater signal-to-noise separation to be achieved by reducing the measurement bandwidth (Figure 6). Where both noise and comb signals are available, this gives greater flexibility and a broader scope of verification tests than single-function noise or comb types. Some considerations for the use of each type of generator are discussed in the following section.


Figure 6: Example comb reference source output


It is also useful for the reference source to represent characteristics of the EUT, if it is to indicate variations in the test environment that may affect EUT testing. For example, perfect isotropic radiators unaffected by nearby artifacts rarely exist in real equipment. Wooden, plastic or polystyrene supports may in practice be used interchangeably to accommodate the different equipment being tested in a FAR or Semi-Anechoic Chamber (SAC), so using a rod antenna laid across the EUT support table or stand will help indicate any interaction between the support and the equipment's enclosure, cabling or internal circuit traces. Such variations may not always have been fully accounted for in the test's uncertainty budget, although CISPR 16-1-4:2010 now includes a requirement to do so.


Examples of using reference sources

Broadband noise reference signals
A benefit of the noise output is that, because the spectral output is continuous, the presence of any features or defects in the system response can be observed across the frequency range of interest without anything being missed. The equipment can also be set to take readings with any step size.

Figure 3 (previous section) shows the benefit of using continuous noise, where the ripples associated with a failed connector on an Open Area Test Site (OATS) are instantly noticeable compared to the previously established baseline response. Even if no previous response had been available for comparison, the rippled response would have been sufficient to cause concern regarding the operation of the OATS.

The graphs below also show another feature of noise, namely that it will produce different readings on an analyzer or receiver depending on the type of detector (Figure 7) and the measurement bandwidth used (Figure 8). This property of noise can be exploited during verification to allow these parameters to be quickly evaluated. As a rule of thumb, a peak detector will give the maximum response and the average detector the lowest, with quasi-peak in between; a simple simulation of this behaviour is sketched after Figure 8. Knowing this can help dispel some of the confusion that can arise from the different options appearing on a receiver or spectrum analyzer, such as the different averages available to the operator (for example, average voltage detector, average power detector, or the average of n peak-detector sweeps).


Figure 7: Response of peak, quasi-peak and average detectors to a noise signal



Figure 8: Response of varying measurement bandwidths to a noise signal
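
As an illustrative aside, the detector rule of thumb can be reproduced with a toy simulation. The Python sketch below is a rough model only: the quasi-peak detector is simplified to a fast RC charge and slow RC discharge (time constants loosely inspired by CISPR Band B, an assumption here), applied to the envelope of a noise-like signal. The three readings come out in the expected order:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 1e6                                  # detector input sample rate, Hz
    env = np.abs(rng.normal(size=200_000))    # envelope of a noise-like signal

    def quasi_peak(env, fs, tc_c=1e-3, tc_d=160e-3):
        # Toy quasi-peak detector: fast charge toward the envelope,
        # slow discharge toward zero (illustrative time constants only)
        y, held = 0.0, 0.0
        for v in env:
            if v > y:
                y += (v - y) / (tc_c * fs)
            else:
                y -= y / (tc_d * fs)
            held = max(held, y)
        return held

    db = lambda x: 20 * np.log10(x)
    pk, qp, av = env.max(), quasi_peak(env, fs), env.mean()
    print(f"peak {db(pk):+.1f} dB, quasi-peak {db(qp):+.1f} dB, average {db(av):+.1f} dB")
    # Expect peak > quasi-peak > average, as described above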


Something to consider when using a noise output is that measurements may require averaging over a number of sweeps and/or video filtering in order to reduce the “noisiness” of the displayed level and thus extract the mean amplitude. Note also that, for a noise signal, the signal-to-noise ratio is unaffected by a change in measurement bandwidth (although it is affected by the level of attenuation used), so reducing the measurement bandwidth to increase signal-to-noise separation will not work.
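
The effect of sweep averaging is easy to demonstrate. In the minimal sketch below (a Rayleigh-distributed envelope is assumed as a stand-in for detected noise), the point-to-point spread of the displayed trace shrinks roughly as 1/sqrt(n) as more sweeps are averaged, while the mean level stays put:

    import numpy as np

    rng = np.random.default_rng(1)
    sweeps = rng.rayleigh(scale=1.0, size=(64, 501))  # 64 sweeps, 501 points each

    for n in (1, 4, 16, 64):
        trace = sweeps[:n].mean(axis=0)               # trace averaging over n sweeps
        print(f"{n:2d} sweeps: mean = {trace.mean():.3f}, spread (std) = {trace.std():.3f}")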

Broadband comb reference signals
Unlike noise, a comb signal is based on a narrow pulse waveform which, when examined in the frequency domain, appears as a broadband signal containing the harmonics of the repetition rate of the pulse.
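
This relationship can be seen directly with a short numpy experiment (the sample rate, repetition rate and pulse width below are arbitrary illustrative values): a train of 2 ns pulses repeating at 10 MHz transforms into a picket fence of spectral lines at exact multiples of 10 MHz.

    import numpy as np

    fs, f_rep = 1.0e9, 10.0e6       # sample rate and pulse repetition rate, Hz
    n = 100_000                     # 100 us of signal -> 10 kHz FFT bins
    t = np.arange(n) / fs
    x = ((t % (1.0 / f_rep)) < 2e-9).astype(float)   # 2 ns pulses -> wide comb

    spec = np.abs(np.fft.rfft(x)) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    mask = spec[1:] > 0.5 * spec[1:].max()           # strong lines, DC excluded
    pickets = freqs[1:][mask]
    print(pickets[:5] / 1e6)        # -> [10. 20. 30. 40. 50.] MHz, and so on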

The main advantage of using a comb signal is that both ambient noise and signal output can be viewed simultaneously. The output does not require averaging or filtering to determine the signal level and therefore a quick visual check of the level using a peak detector can be made. The energy that is present in the output spectrum is contained within the comb pickets and so the output frequency range is not limited to the same extent as for noise generators, where the spectrum is continuous. This is one reason why wide frequency range devices operating up to many GHz are predominantly harmonic generators.

The measured output level of a comb signal does not vary as much when changing the detector type or resolution bandwidth compared with the noise source, provided this bandwidth contains only one spectral peak, and so the comb signal is less helpful when it comes to verifying detector and bandwidth operation. However, the continuous-wave (CW) nature of the individual pickets in the signal allows both the frequency and amplitude accuracy of the test equipment to be verified, which cannot be achieved using noise.

Because the signal produced has gaps in between the pickets, the noise floor of the system is visible. Hence, if the measurement bandwidth is decreased, the signal-to-noise ratio (SNR) is increased (Figure 9). The downside is that if these gaps are too large, sharp resonances or other narrow-band phenomena may not be seen. As can also be seen from Figure 9, a stable CW signal with low residual frequency modulation can also be used to define the shape of the measurement bandwidth filter, which is of value in checking that multiple-pole filters are correctly tuned and aligned together.
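
The size of this effect follows from the fact that the noise power admitted by the filter scales with its bandwidth while the CW picket's power does not, so, as a standard rule of thumb (assuming flat broadband noise):

    \Delta\mathrm{SNR} = 10 \log_{10}\!\left(\frac{B_1}{B_2}\right)\ \mathrm{dB}

For example, stepping from a 120 kHz to a 9 kHz measurement bandwidth should improve the signal-to-noise separation by around 10 log10(120/9), i.e. roughly 11 dB.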


Figure 9: Effect of resolution bandwidth on signal to noise ratio with CW signal


It is important to note that, when measuring a comb signal, the analyzer step size and start/end frequency must be set to include the relevant harmonic peaks. A comb generator producing signals derived from a commonly available reference oscillator of, say, 64 MHz will produce signals that are harmonics of that frequency, many of which risk being missed by a receiver making spot-frequency measurements every 5 MHz using a narrow measurement bandwidth, as might be used as a pre-test check.
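
A quick sanity check in Python makes the point (frequencies in MHz; the 64 MHz reference and 5 MHz grid are the values quoted above): only the pickets whose frequency happens to be a common multiple of both numbers fall on the measurement grid.

    # Which 64 MHz comb pickets land exactly on a 5 MHz measurement grid?
    f_ref, step, f_max = 64, 5, 1000                 # all in MHz
    pickets = range(f_ref, f_max + 1, f_ref)
    hits = [f for f in pickets if f % step == 0]
    print(hits)    # -> [320, 640, 960]: the other 12 pickets are simply missed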

Harmonics and flicker
The reference sources discussed so far are applicable to most common radiated and conducted emissions EMC test environments, but the reasons given for carrying out pre-test verification apply to the other corners of the test laboratory as well.

In addition to measuring the radio frequency interference from the mains power port of an EUT as part of the conducted emissions test, lower-frequency distortion of the supply current and voltage may also need to be evaluated. It would also be of benefit to verify the equipment used to carry out mains current harmonic distortion (EN 61000-3-2) and voltage “flicker” (EN 61000-3-3) tests; however, fully exercising the measuring equipment is only practical during periodic calibrations.

The availability of proprietary sources for verifying and monitoring the performance of harmonics and flicker measuring equipment is limited, and some pre-test verifications have been performed using homebrew solutions based on half-wave rectifiers and resistive loads to generate harmonic-rich load currents. The stability of such loads is called into question when temperature-sensitive or non-linear devices, such as filament lamps, are used [9]. In addition, half-wave rectifiers generate predominantly even-order harmonics, whereas most mains-powered electronic equipment employs AC to DC voltage conversion using full-wave rectification to feed a reservoir capacitor, a topology that generates predominantly odd-order harmonics. This may be significant when assessing the results, or when using the pre-test to exercise any standards-based software used both to run the test setup and to automatically assess the performance of the EUT.
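
The even/odd distinction is easy to verify numerically. The sketch below assumes idealized waveshapes (a half-wave rectified sine for the resistive load, and narrow alternating-polarity current pulses near the voltage peaks for the bridge-plus-reservoir-capacitor case) and prints the first few harmonic amplitudes relative to the fundamental:

    import numpy as np

    n = 10_000
    t = np.linspace(0.0, 1.0, n, endpoint=False)     # one mains cycle
    v = np.sin(2 * np.pi * t)

    i_half = np.maximum(v, 0.0)              # half-wave rectifier, resistive load
    i_full = np.sign(v) * (np.abs(v) > 0.9)  # bridge + reservoir capacitor:
                                             # current pulses near each peak

    def harmonics(i, k=7):
        s = np.abs(np.fft.rfft(i)) / n
        return np.round(s[1:k + 1] / s[1], 3)        # relative to the fundamental

    print("half-wave:", harmonics(i_half))           # strong even-order content
    print("full-wave:", harmonics(i_full))           # odd-order content only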

For voltage disturbance measurements, the specification of the flickermeter described in EN 61000-4-15 makes it difficult to predict the expected value of flicker from a known waveshape without extensive calculation and analysis. Hence the verification for this test is probably best left as a simple repeatability exercise using a stable source of disturbance.


References

  1. ISO/IEC 17025:2005 – General requirements for the competence of testing and calibration laboratories
  2. EN 61000-4-3:2006 +A1:2008 – Electromagnetic compatibility (EMC) – Part 4-3: Testing and measurement techniques – Radiated, radio-frequency, electromagnetic field immunity test. Includes Appendix I, concerning the calibration and performance of the E-field probes used.
  3. Defence Standard 59-411 Part 3, Issue 1: 2007 – Electromagnetic Compatibility – Part 3: Test Methods and Limits for Equipment and Sub Systems. Reprinted incorporating Amendment 1: 2008
  4. CISPR 16-1-4:2010 +A1:2012 – Specification for radio disturbance and immunity measuring apparatus and methods – Part 1-4: Radio disturbance and immunity measuring apparatus – Antennas and test sites for radiated disturbance measurements
  5. ISO 9001:2008 – Quality management systems – Requirements
  6. EN 61000-4-3:2006 +A2:2010 – Electromagnetic compatibility (EMC) – Part 4-3: Testing and measurement techniques – Radiated, radio-frequency, electromagnetic field immunity test
  7. EN 61000-3-2:2006 +A2:2009 – Electromagnetic compatibility (EMC) – Part 3-2: Limits – Limits for harmonic current emissions (equipment input current ≤ 16 A per phase)
  8. EN 61000-3-3:2008 – Electromagnetic compatibility (EMC) – Part 3-3: Limits – Limitation of voltage changes, voltage fluctuations and flicker in public low-voltage supply systems, for equipment with rated current ≤ 16 A per phase and not subject to conditional connection
  9. Hall K, “Accuracy of flickermeters – round robin results”, Interference Technology EMC Directory & Design Guide 2006, p30–35, 2006
  10. EN 61000-4-15:2011 – Electromagnetic compatibility (EMC) – Part 4-15: Testing and measurement techniques – Flickermeter – Functional and design specifications
  11. Examples of outputs from verification equipment, http://www.yorkemc.co.uk/instrumentation

Mark Anslow
is a Senior Engineer in the Test Instrumentation Department at York EMC Services Ltd where he is primarily involved in the design of reference signal generators. He has 12 years of experience in electronics design with a specialism in RF engineering. He can be reached at mark.anslow@yorkemc.co.uk.

Dave Cullen
is the Test Instrumentation Manager of York EMC Services Ltd. and holds BEng (Hons) and MSc degrees in electronic engineering and product development respectively. He has 30 years' experience in the manufacture and design of electronic test equipment, focussing with York EMC Services on reference signal generators for verifying and validating EMC test environments. He can be reached at dave.cullen@yorkemc.co.uk.
