1940 to the Present
This article documents the role of margin demonstrations in system-level EMC testing: how and why they were first instituted, and how they have evolved over time. In this context, system-level invariably means at the level of integration of some sort of vehicle designed to transport men and/or materiel or armament on land, at sea, in the air, or in space. While equipment-level EMI limit margins form a part of this discussion, it is more for contrast than as the subject.
Background – Historical
Margin requirements and demonstrations evolved from a practice of performing radio frequency interference (rfi) performance checks on each vehicle exiting the production line. In WWII, every wheeled Army vehicle coming off the production line had to be checked for interference-free operation. Each vehicle was placed in a screen room built for that purpose, and a manufacturer had to have enough screen rooms available to support the volume of production (Cofield 1990). The radios were tuned to frequencies across each band, listening for noise, or typically, squelch break. After the war, as the number of radios increased in all manner of platforms, the requirement of testing every single unit was scaled back to ten units successfully tested interference-free. If ten in a row had quiet radios without interference, the design and workmanship were considered proven, and further testing was not necessary. If any of the ten vehicles had issues, they had to be fixed, and then ten vehicles with the fix installed had to pass the requirement.
By 1950, MIL-I-6051 required two aircraft in a row to meet the radio interference-free requirement. If one had an issue, then the rework had to be evaluated on the next two aircraft off the production line.
In part, this reflected the increasing number of radios installed as time went by. MIL-I-6051A, issued in 1955, maintained this two-in-a-row success requirement.
Originally, demonstrating interference-free radio operation was a simple matter of tuning to a few channels in each (octave) radio band and listening for squelch break in a quiet (radio signal and radio interference free) area. This was effective because all noise sources were broadband, broadband noise has a time-varying amplitude, and these radios were amplitude-modulated receivers. Today, the vast majority of unintentional man-made noise sources are unmodulated narrowband signals, radios utilize all sorts of modulation schemes and receive data as much as or more than voice communications, and the original technique doesn’t work at all.
What is done now (since at least the 1990s) at system-level is to instrument the receiver antenna output and examine what couples in on a spectrum analyzer, looking for signals above the victim radio’s noise floor, or comparing observed signals to the minimum level broadcast signal that is expected, and applying a margin that provides for a required baseband signal-to-noise ratio, bit error rate, or other measure of desired signal quality.
This is all described in detail in MIL-STD-464C, section 5.2.4 and supporting appendix paragraphs (section 5.2 and supporting appendix material in earlier versions of the standard).
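As a minimal sketch of that survey logic (all frequencies, levels, and thresholds below are illustrative assumptions, not values from any standard), the comparison reduces to a few lines:

```python
# Hypothetical spectrum analyzer survey at the receiver antenna port:
# (frequency in MHz, measured coupled level in dBm). Illustrative only.
survey = [(121.5, -118.0), (243.0, -102.0), (305.7, -95.0)]

min_expected_signal_dbm = -100.0  # assumed minimum-level broadcast signal
required_snr_db = 10.0            # assumed baseband signal quality requirement
margin_db = 6.0                   # required safety margin

# A coupled emission must sit below the minimum expected signal by the
# required SNR plus the demonstration margin.
threshold_dbm = min_expected_signal_dbm - required_snr_db - margin_db

for freq_mhz, level_dbm in survey:
    status = "PASS" if level_dbm <= threshold_dbm else "FAIL"
    print(f"{freq_mhz:7.1f} MHz: {level_dbm:7.1f} dBm vs {threshold_dbm:.1f} dBm -> {status}")
```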
MIL-I-6051B (23 January 1959) and MIL-E-6051C (17 June 1960) both required 6 dB safety margins in lieu of testing multiple platforms. This was a continuation of the trend to reduce the number of tests as platforms became electronically more complex and required more time per test.
MIL-I-6051B, introducing a margin demonstration requirement, gives an example:
4.3.5 No malfunctioning. The requirement of “no malfunctioning” shall be considered to have been met when the sum of all extraneous electromagnetic energy that may be introduced into the most critical point of a subsystem is six db (sic) below that desired input which would produce operation, actuation, or functioning of the subsystem or equipment. Detailed test methods, instrumentation, monitoring point, and test procedures applicable to the functional usage of the particular subsystem shall be outlined in the test plan specified herein. For example, the key test point in a guidance subsystem is that relay which actuates a hydraulic valve for control purposes. In this case, an ammeter in the relay circuit, indicating no more than half the current required for operation, would be the no-malfunction limit.
This same paragraph is repeated verbatim in MIL-E-6051C. In September 1967, the 6 dB margin was augmented by a 20 dB margin for electro-explosive devices (EED) when MIL-E-6051D superseded MIL-E-6051C.1 EEDs are instrumented by measuring the heat rise in the bridgewire compared to the temperature resulting from direct current in the bridgewire at the advertised no-fire level.
Analysis – Two Key Points
Two aspects of the above margin discussion bear further scrutiny.
The first is that neither of these margin demonstrations has anything to do with equipment-level EMI performance as specified in MIL-STD-461 or forerunner standards such as MIL-I-6181. Nor do they measure any rf quantities on the aircraft. It was the functionality of the critical circuit itself that was being measured. But doing so did require that the necessary instrumentation not itself perturb the circuit in such a way as to increase the amount of EMI coupling into the circuit.
One can see that the instrumentation example from MIL-I-6051B would not perturb the circuit at all, since a wire to a relay coil was unlikely to be shielded in the first place. Inserting a d’Arsonval movement (permanent magnet, moving coil) ammeter would not affect the rf coupling properties in any way. If a cable were shielded, however, the instrumentation would need to preserve the rf integrity of the original wiring. In the EED example, an instrumented EED would replace the actual unit, so that the cable was undisturbed, and the instrumented EED would be a form and fit exact replacement for the original.
The second point is about the types of circuits to which these standards referred. Even though the relay and EED circuits cited in these standards are operationally very different – used for very different applications – there is one way in which they are quite similar, and that is in the way they are both very well suited for margin demonstration. Both of these circuits are termed “discretes,” meaning in the non-operational state the circuit input is ideally zero (volts, amps, watts…). When circuit operation is desired, a potential/current/power is applied which vastly exceeds the level required to operate the circuit. In particular, a typical EED might be spec’d at 1 Amp no-fire, 5 Amp all-fire, with a 1 ohm bridgewire to which 28 Vdc is applied when it is time to fire. We can apply 0.1 Adc (20 dB down from no-fire current) to the bridgewire, measure the temperature rise, and use that as our pass/fail criterion as the platform is cycled through a typical mission simulation, and/or exposed to the external EME. Or we can install a 0.1 Amp fuse plus fuse instrumentation to announce if the fuse is blown and avoid the complication of monitoring temperature rise. We can do this because the normal applied power to this circuit is zero due to safety measures (safe/arm requirements) that short both wires to ground when the circuit is inactive.
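A quick sketch of that count-down, using the figures just quoted; the bridgewire dissipation comparison is added here only to illustrate how small the monitored quantity becomes:

```python
# Figures quoted in the text: 1 Amp no-fire, 1 ohm bridgewire, 20 dB margin.
no_fire_amps = 1.0
bridgewire_ohms = 1.0
margin_db = 20.0

# Counting the margin down from the no-fire current (dB as a current ratio):
monitor_amps = no_fire_amps * 10 ** (-margin_db / 20)        # 0.1 A
print(f"pass/fail reference current: {monitor_amps:.1f} A")

# Bridgewire dissipation, the quantity the thermal instrumentation senses:
p_no_fire_w = no_fire_amps ** 2 * bridgewire_ohms            # 1 W
p_monitor_w = monitor_amps ** 2 * bridgewire_ohms            # 10 mW
print(f"dissipation: {p_no_fire_w:.1f} W at no-fire, {p_monitor_w * 1e3:.0f} mW at monitor level")
```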
The relay coil similarly is at low or no potential when it is in its normal state (NC or NO). We can count down 6 dB from the value at which the relay manufacturer guarantees it will change state, or from some arbitrary point just below that. If we are using an ammeter, we can measure or compute the current through the relay with that potential applied, and use that as our pass/fail criterion.
Circuit Sensitization – Another Method Available for Non-linear Circuits
The original approach discussed above works well for any such discrete type circuit, but things get rapidly more complicated when the nominal state is other than a zero-level signal.
One might consider a digital bit stream to be composed of two logical states, but the problem is that the duration of the low-level signal is too short to draw any conclusions about the coupled noise in the absence of the intentional signal. If there is more than 6 dB margin between a nominal logical high and the lowest possible guaranteed logical high state, then one can attenuate the bit stream by 6 dB, and if the data bus functions properly – bit error rate (BER) within tolerance – then a 6 dB margin has been demonstrated that way. The 6 dB attenuation must be carefully placed: it cannot be placed adjacent to the device receiving the signal, because the attenuator would then attenuate both bit stream and noise.
The following example serves to illustrate proper placement on a databus. One of the author’s first EMC-related tasks was to design a 6 dB sensitization circuit for a MIL-STD-1553 data bus. This involved providing the 6 dB insertion loss in a shielded, controlled-impedance twisted-pair cable, but the real challenge (workmanship and parts cost) was to provide the proper connectors to mate with the aircraft-installed bus, and a brass enclosure to provide shielding and ease of manufacture of this one-of-a-kind box. Figure 1 shows that the box was installed adjacent to the point where the bus controller tapped into the bus, and thus it attenuated all commands from the controller to the bus terminals, and also reduced terminal responses to the controller.2 By reducing the intentional signal levels 6 dB, the theory was that successful MIL-STD-1553 operation (acceptable bit error rate) would be demonstrated with the required margin.
This approach was deemed superior to actually monitoring the traffic on the bus with a databus monitor and/or oscilloscope, because as noted there is no way to monitor just the noise in the presence of the signal. Such an approach has to be tailored to the specific circuit. The MIL-STD-1553 signal was differential, with the signal swinging +/- 15 Volts from zero, and 7.5 Volts from zero was still a “one,” or “true.” But if the same approach had been tried with a TTL level of 5 Volts, the 6 dB attenuated signal would have been very close to the undefined range, especially if the signal had to travel down a very long cable. And of course such intentional signal attenuation won’t work at all with a purely linear circuit in which specific information is directly proportional to the signal level, so it doesn’t work with any type of analog signal.
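To put numbers on that comparison (the 2.0 V TTL minimum guaranteed high is a standard figure, not taken from the text):

```python
def attenuate_db(volts, db):
    """Voltage remaining after the stated attenuation (dB as a voltage ratio)."""
    return volts * 10 ** (-db / 20)

# MIL-STD-1553 case from the text: 15 V swing, 7.5 V still a valid "one".
print(f"1553: {attenuate_db(15.0, 6.0):.1f} V")  # ~7.5 V, still a valid logic level

# TTL case from the text: a nominal 5 V high attenuated 6 dB lands near
# the 2.0 V minimum guaranteed high, i.e., close to the undefined region.
print(f"TTL:  {attenuate_db(5.0, 6.0):.1f} V")   # ~2.5 V
```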
Another example where signal sensitization is useful is radio receivers. A radio link is characterized by specifying the rf input required to achieve a specific level of baseband signal quality. For audio outputs, the baseband output is defined as signal-to-noise ratio (SNR, or (S+N)/N). AM radios have traditionally been spec’d when the audio (S+N)/N ratio is 10 dB, although this is threshold quality for listening purposes. If an AM radio link were determined to need a 6 dB margin demonstration, then the rf noise measured at the antenna connection point would be 6 dB lower than the rf level required to yield that baseband 10 dB (S+N)/N, across the entire tunable range of the radio.
For the purpose of illustration, here’s a numerical example. A performance criterion for certain airborne radios operating between 30 – 400 MHz is that they be able to copy a signal 35 nautical miles from the transmitter site, when flying 1500 feet above ground level. Typical transmitter power is 10 watts. In the absence of specific aircraft receive antenna specifications, we assume 0 dBi gain (the antennas used are broadcast pattern, so at quarter-wave height they would have 5.15 dBi gain, but aircraft antennas are notoriously inefficient due to compromises relative to aerodynamic properties). Putting all that together, using the one-way form of the radar range equation (i.e., the Friis link equation), we get the plot of Figure 2, where the radios’ sensitivity is also plotted.
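A sketch of that computation, treating the link as one-way free space with the stated 10 W, 35 nmi, and 0 dBi assumptions (cable losses and real antenna patterns ignored):

```python
import math

# Assumed link parameters from the example: 10 W transmitter,
# 35 nautical mile range, 0 dBi gain at both ends.
P_TX_DBM = 10 * math.log10(10.0 / 1e-3)   # 10 W -> 40 dBm
RANGE_KM = 35 * 1.852                      # 35 nmi -> ~64.8 km

def received_power_dbm(freq_mhz):
    """One-way free-space link: Pr = Pt + Gt + Gr - FSPL."""
    fspl_db = 20 * math.log10(RANGE_KM) + 20 * math.log10(freq_mhz) + 32.45
    return P_TX_DBM + 0.0 + 0.0 - fspl_db

for f in (30, 100, 225, 400):
    print(f"{f:4d} MHz: {received_power_dbm(f):6.1f} dBm at the receive antenna")
```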
If it had been determined that communication at some or all of these frequencies was critical, then we could presumably count down 10 dB from the red curve for the required signal-to-noise ratio, and another 6 dB for margin, and use that number as our pass/fail criterion in the spectrum analyzer noise floor survey. But it isn’t that easy. Communication antennas tend to be mounted along the top or belly of an aircraft. If the aircraft is in level flight, then the transmit and receive antennas will be lined up for maximum reception. But what if the aircraft is maneuvering such that the aircraft antenna axes are pointing towards or away from the transmit antenna? Those directions are nulls for broadcast pattern antennas, so that must be taken into account for a critical communication. In practice the noise floor or sensitivity of the radio is the baseline for comparing coupled interference signals, but that doesn’t support any sort of a margin calculation: if a signal is above the noise floor or sensitivity level, it is already a potential rfi problem in that mode of thinking, and if there is no signal above the noise floor, then there is no way to calculate a margin. Only if the sensitivity is used as a baseline, and measurements can be made below it, is a margin determination possible. And that margin is artificial in that, absent detailed antenna pattern data, we don’t know the real minimum level signal from the transmitter.
For spread spectrum communication band radios or frequency hoppers, another approach is possible. Encode a transmit signal to occupy as much of the available radio spectrum as possible, and radiate that signal at a level 6 dB lower than the lowest expected signal throughout a mission. Note that if the signal can be set up to occupy the entire available radio spectrum, then a single test suffices for the entire band. Such a test has to be set up in a deterministic manner; one cannot allow a software-defined radio (SDR) to intelligently avoid noisy portions of the spectrum. That would defeat the purpose of the EMC test.
Finally, if a radio only has one receive frequency, it is possible to transmit the signal it is to receive at a level 6 dB below the lowest expected in use, and then put the platform through its paces (mission sequence) ensuring that the radio baseband output maintains whatever SNR or BER is required both before and during the operation of the balance of the platform. This approach is particularly useful with spacecraft having a single frequency used to communicate with Earth. It avoids any need to instrument the spacecraft, something that program managers discourage because of increased risk of damaging flight hardware.
The point is that circuits other than pure discretes are difficult to instrument, and direct monitoring becomes quite problematic. Margin demonstration must be tailored to the circuits of interest.
The Modern Approach
With the trend to controlling more functions electronically and the electronics being digital in nature, there are both more critical circuits than previously, and they cannot be instrumented as simply as envisioned in these early EMC specifications. In the years since that MIL-STD-1553 attenuator was built, MIL-STD-1553 interfaces have evolved from requiring three separate printed circuit boards to a chipset, and from a pair of dedicated front panel connectors (bus A and B) for the shielded twisted pairs to just two more twisted shielded pairs in a bundle, so that instrumenting that circuit today would require the box to pass through the balance of the wires/wire pairs in a large cable while attenuating only the 1553 pair. Something else is needed.
Monitoring ripple on the power bus and comparing it to bench-level EMI qualification levels, such as MIL-STD-461 CS101, illustrates the modern approach.
On a power bus, there is noise inherent on the leads due to ripple caused by the power generating system, as well as load-induced effects. On the typical military platform, power source ripple is controlled by MIL-STD-704 for aircraft, MIL-STD-1275 for ground vehicles and MIL-STD-1399, section 300A for ships. Susceptibility limits for loads on that power are controlled by MIL-STD-461 CS101, which is written so as to provide a 6 dB margin with respect to MIL-STD-704. Load-induced effects are controlled by MIL-STD-461 requirements CE101 and CE102. It is common to verify that a 6 dB margin exists between ripple on the fully integrated platform and the susceptibility limits to which loads have been qualified. This shorthand method for checking that any critical circuits are not upset by excessive power bus ripple is often mistakenly interpreted to imply that the power supplied to an equipment is itself a critical circuit. Nothing could be further from the truth. Clearly the presence or absence of power is critical, but that is hardly an EMI issue. Tolerances on bus potentials are quite wide, for instance 22 – 29 Vdc for nominal 28 Vdc power under MIL-STD-704.3
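A minimal sketch of that margin check, with placeholder numbers standing in for the actual CS101 limit and a measured platform ripple spectrum:

```python
# Hypothetical frequency points (Hz) with the CS101-style level (dBuV) to
# which loads were qualified, and the ripple measured on the integrated
# platform. All values are placeholders, not the actual CS101 limit.
qualification_limit_dbuv = {400: 130.0, 5000: 120.0, 50000: 110.0}
measured_ripple_dbuv     = {400: 118.0, 5000: 116.0, 50000:  90.0}

REQUIRED_MARGIN_DB = 6.0

for freq, limit in qualification_limit_dbuv.items():
    margin = limit - measured_ripple_dbuv[freq]
    verdict = "OK" if margin >= REQUIRED_MARGIN_DB else "MARGIN SHORTFALL"
    print(f"{freq:6d} Hz: margin {margin:5.1f} dB -> {verdict}")
```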
Instrumenting the power bus is easier than instrumenting the critical circuits powered by the secondary sides of power supplies fed by platform primary power, but that does not make the primary power a critical circuit. In fact, power bus ripple is a culprit, not a victim, and it is only because we have previously at the bench-level quantified the response to this culprit stimulus that measuring power bus ripple is at all useful. A concrete example is the power supplied to flight control avionics: it is the circuitry controlling the flight surfaces that is critical; the primary power input is simply a coupling path to the critical circuit. If power were indeed a critical circuit – if the precise value of the bus potential were critical, as is the reference dc input to an analog-to-digital converter – then the power bus potential itself would still not be the quantity to be measured. The circuit to instrument would be the error feedback loop summing junction that regulates the bus potential, and we would check that under nominal conditions that circuit was at least 6 dB away from a value which would cause any change in the bus potential.
The basic concept is to not instrument the actual critical circuit, but instead to compare the platform culprit noise level to that to which the critical circuit has been qualified at the bench-level (MIL-STD-461 CS101, CS114, RS103, or equivalent such requirements in RTCA/DO-160, or automotive EMI standards). This approach is used with electronic flight and engine controls. It rests on an assumption not present in the original historical margin demonstration: that only the response to intentional rf transmitters is important. The original MIL-I-6051B wording quoted above referred to the totality of all electromagnetic noise that might couple to a circuit, as depicted in a notional sense in Figure 3. That would certainly include crosstalk between cables, and between various circuits within a single cable bundle. In today’s world crosstalk at the cable level is not considered an issue worthy of margin demonstration, because cabling techniques have evolved from the WWII and post-WWII era, as depicted in Figure 4.
In Figure 4, the steatite-bead-loaded single wire above ground carrying the radio signal (either a low-level signal when receiving, or a very high potential signal when transmitting) is supported above structure on porcelain standoffs and is isolated by itself to minimize crosstalk as either a victim or culprit. Today that cable would be coaxial, not a single wire above ground, and it would be much closer to a host of other wires and cables. The modern platform doesn’t have the luxury (or necessity) of providing the sort of physical isolation shown in the photo, and isn’t worried about crosstalk when even the leakage from transmit signals has to meet a stringent radiated emission limit. And of course the difference in crosstalk between a bare wire over ground terminated in a near open-circuit (vacuum tube grid) and 50 ohm coax is a large number even when calculated in dB!
But margins still must be demonstrated against the full external electromagnetic environment to which the vehicle could be exposed, and the design of modern cabling takes that and stringent radiated emission requirements into account. For the record, stringent radiated emission limits are nothing new; they were imposed on the equipment installed in Figure 4, and at more stringent levels than today, due to that unshielded antenna lead acting as an antenna inside the aircraft. The difference is that the vast majority of the cable circuits weren’t susceptible, and attempts were made to segregate noisy cables from the few susceptible ones. The problem in the WWII installation is illustrated in Figure 5, from a 1946 EMC design handbook.
Crosstalk has been completely eliminated as a coupling path for EMI at the level of cables installed in a military platform. There are some possible exceptions with scientific payloads or other very low-level signals associated with various sensors used on satellites and aircraft.4 The remaining mechanism for causing rfi is that of high-level (intentionally transmitted) electromagnetic fields coupling (largely) to cables installed in the platform.
If the coupling of electromagnetic energy via intentionally transmitted radio waves were not an issue, it would be possible through the following means to demonstrate a 6 dB margin on critical circuits. A current probe could measure the totality of all common mode currents on a cable bundle, and then at each frequency twice that much could be injected via a bulk cable injection technique like that of MIL-STD-461 CS114. Because such currents are due to time domain waveforms, a frequency domain scan of these common mode currents doesn’t tell the whole story. A time domain measurement could also be made, and a margin applied to that as well, although that would be a more complex undertaking.
Luckily (from a test engineer’s point-of-view) we needn’t do all that, because the external electromagnetic environment imposed by MIL-STD-464 or an FAA HIRF certification on any modern platform will couple much larger currents to those same cables, and the coupled current at any single frequency will exceed the total time domain current inherent to the cable left to itself. It remains only to gauge the amount of current coupled to said cables when the platform is exposed to the specified electromagnetic environment (EME). The platform is illuminated by a low-level electromagnetic field, and coupled currents on cables connected to critical circuits are measured. The coupled current, scaled to the actual threat level, is compared to the connected equipment MIL-STD-461 CS114 (alternatively RTCA/DO-160 section 20 CS) level of qualification. If the measured and scaled current due to field illumination is lower than the qualification level by the required 6 dB, then proper margin has been demonstrated. If the proper margin does not exist, the equipment must be retested at a level that is 6 dB higher than the scaled current from the illumination measurement. Such testing is performed up to 200 MHz in U.S. military applications, and 400 MHz in commercial aerospace (Carter 1990).
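A sketch of the scaling arithmetic, with every value an illustrative assumption; the same linear scaling applies to the zone-field measurements described next, with the field ratio in place of the current ratio:

```python
import math

# Low-level swept-current result scaled to the full threat environment.
# Every number here is an illustrative assumption, not a standard value.
illumination_v_per_m = 1.0        # low-level illumination field
threat_v_per_m = 200.0            # specified EME at this frequency
measured_ma = 2.0                 # bulk current measured during illumination
qualification_ma = 1000.0         # CS114-style level the equipment passed

# Coupling is linear, so scale the measured current by the field ratio.
scaled_ma = measured_ma * (threat_v_per_m / illumination_v_per_m)  # 400 mA

margin_db = 20 * math.log10(qualification_ma / scaled_ma)
print(f"scaled drive: {scaled_ma:.0f} mA, margin: {margin_db:.1f} dB")
print("PASS" if margin_db >= 6.0 else "retest equipment 6 dB above scaled drive")
```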
This technique is available because it is relatively easy to inject current on cables at the required levels. At higher frequencies this test method falls apart, due to standing waves, and direct illumination of the platform is required. Even here, it is possible to do a scaled test, so that a low-level field illuminates the platform, and different sections of interest are instrumented to measure the field in that zone. Scaling the measured values based on the actual EME then yields a prediction of what the equipment in the zone would see, and that can be compared to the equipment-level EMI requirement (MIL-STD-461 RS103 or RTCA/DO-160 section 20 RS). Reverb methods are often used to ensure that the point at which the field is measured is not in a constructive peak or destructive hole. Frequency stirring as opposed to a physical paddle is best suited to what amounts to a shielding effectiveness test of the platform structure.
RTCA/DO-160 section 20 RS contains limits above 200 V/m, especially pulsed levels as high as 7000 V/m. The MIL-STD-461 RS103 limit contents itself with 200 V/m. Given that MIL-STD-464 contains some environments north of 10 kV/m, it is reasonable to expect that the airframe won’t always provide enough shielding effectiveness to get the external EME below the 200 V/m equipment qualification level. What to do? A knee-jerk response to such margin issues is to demand bench-level RS qualification at levels 6 dB above the limit in the standard, but this is rarely useful when the bench-level limit is 200 V/m; most test facilities can’t generate such intense fields. It is a possible approach when the external EME is low enough that test facilities can in fact generate levels 6 dB above it. This is often the case for the on-orbit environment for spacecraft. But even in low earth orbit (LEO), there are “tent-pole” ground-based transmissions at much higher levels in specific locations.5
The issue of spacecraft qualified to less than 5 V/m in LEO occasionally exposed to much higher field intensities provides a framework in which to introduce and discuss the large but unquantified statistical margins inherent in how we do business across the EMC discipline.
Assume a LEO EME of 200 V/m in some frequency band. LEO is at least 100 miles from any ground-based transmitter, and any ground-based transmitter will be limited to somewhere around 10 MW transmit power based on dielectric breakdown in even pressurized waveguide. A calculation of the electric field 100 miles from a 10 MW transmitter with isotropic gain yields 0.1 V/m. Getting that level up to 200 V/m requires antenna gain of 66 dBi. The half-power beamwidth of a 66 dBi antenna is about 0.002 radians, or 0.1 degrees. The illuminated spot diameter at 100 mile distance is about 0.2 miles, which is traversed at orbital velocity (5 miles per second) in 40 milliseconds. Does it make sense to qualify spacecraft equipment to 200 V/m in this band, when the baseline requirement is 1 or 2 Volts per meter? Also consider the default modulations used in MIL-STD-461 and RTCA/DO-160 section 20. If it were countenanced to perform a 200 V/m qualification, it would be sensible to limit the exposure to 40 ms. And given that consideration, one must ask what type of circuit could respond in that time period, how it would recover when the exposure was over, and what the system-level effects of a 40 ms exposure are, even with malfunctions once per orbit.
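Before turning to those questions, the arithmetic behind the numbers just quoted can be reproduced in a few lines (the pencil-beam gain-to-beamwidth relation G ≈ 4π/θ² is a standard approximation assumed here):

```python
import math

P_TX_W = 10e6                     # ~10 MW, waveguide breakdown limit per the text
DIST_M = 100 * 1609.34            # 100 miles in meters
TARGET_V_M = 200.0                # postulated on-orbit field
ORBITAL_MI_S = 5.0                # orbital velocity, miles per second

# Far-field intensity from an isotropic source: E = sqrt(30*P*G)/d
e_iso = math.sqrt(30 * P_TX_W) / DIST_M
print(f"isotropic field at 100 mi: {e_iso:.2f} V/m")     # ~0.1 V/m

# Gain needed to raise that to 200 V/m (field scales as sqrt(G)).
gain = (TARGET_V_M / e_iso) ** 2
print(f"required gain: {10 * math.log10(gain):.1f} dBi")  # ~66 dBi in round numbers

# Pencil-beam width for that gain: G ~ 4*pi / theta**2.
theta_rad = math.sqrt(4 * math.pi / gain)
spot_mi = theta_rad * 100.0                               # spot diameter at 100 mi
dwell_ms = spot_mi / ORBITAL_MI_S * 1000
print(f"beamwidth: {theta_rad:.4f} rad, spot: {spot_mi:.2f} mi, dwell: {dwell_ms:.0f} ms")
```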
The answers to these questions depend on the characteristics of the circuits exposed to the EME and the impact on the mission. Consider the attitude control system. The spacecraft is in a stable orbit, and the attitude control system can be a very slow circuit. Regardless of the rate at which the control system samples and corrects, it could easily be programmed to ignore a short perturbation such as 40 ms, and maintain previous attitude until again receiving data that “makes sense.” In stark contrast, if the platform is a high performance fighter aircraft designed to be dynamically unstable in flight to maximize its maneuverability, it will most certainly not be able to ignore attitude and heading data for 40 ms, and its fly-by-wire control system must in fact be designed to fly through such an environment without even momentary upset.6 The topic discussed in this paragraph is often treated in a non-EMI sense as the response to a single event upset (SEU), and it is quite possible that investigation of the designed-in response to an SEU will instruct as to whether an EMI-type measurement is even necessary.
So how we specify EME requirements and associated margins must be tailored to the type of platform and type of functions a specific platform utilizes, and the setting of margins is definitely not a one-size-fits-all affair.
A recurring issue with EMEs and margins is the statistical probability of encountering a specified EME. The higher the tent pole, the rarer it is. Back when RTCA/DO-160 section 20 was being updated from levels around 1 V/m to the present day levels of 200 V/m and up, a common refrain was that if these new (at the time) proposed levels were real, then aircraft would be dropping out of the sky all the time, because they were all qualified to much lower levels.
Clearly that wasn’t happening, and that pointed to the tent pole nature of such levels on the one hand, but also the fact that qualification to a limit does not mean failure at just above that limit. A device might in fact be immune to a 200 V/m exposure, but if only qualified at 1 V/m, that is all that can be knowledgeably stated about it.
So it turns out that a lot of the margin we actually have is statistical in nature; it results from the application of a world-wide EME to platforms that in most cases will never see the tent-pole environments, and also from the fact that the qualification levels of subsystems and equipments are not failure levels, but simply qualification levels.
To close this topic out, let’s look at what it takes to get to a real-world 200 V/m EME. Let’s look first at a typical high power, lower frequency emitter: an AM radio station broadcast antenna. By U.S. law, these are limited to a maximum of 50 kW EIRP, and that is for just a few stations designated “clear channels.” A far-field calculation predicts that a separation of only 6 meters from the antenna is needed to limit exposure to 200 V/m. Unless we are a ground vehicle, we really aren’t interested in near field effects, because if we are flying within 6 meters of a vertical structure, we have bigger issues than EMI. The point is, we will not see 200 V/m from that type of source. Ditto for an FM tower, where the maximum allowable EIRP is 100 kW. Even a helicopter hovering near such towers will not be close enough to see 200 V/m, unless again it is in much worse trouble than EMI. Now clearly we can posit a high power radar connected to a high gain dish and see 200 V/m several hundred meters away, but the coupling efficiency and the low duty cycle combine to make this sort of threat generally less hostile to critical circuits such as fly-by-wire and electronic engine controls. All electric field intensities are not created equal!
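A sketch of those far-field calculations, using E = sqrt(30·P·G)/d with the quoted EIRP figures:

```python
import math

def distance_for_field(eirp_watts, target_v_per_m):
    """Far-field distance at which the stated EIRP produces the target field."""
    return math.sqrt(30 * eirp_watts) / target_v_per_m

print(f"50 kW AM clear channel: {distance_for_field(50e3, 200):.1f} m")   # ~6 m
print(f"100 kW FM station:      {distance_for_field(100e3, 200):.1f} m")  # ~9 m
```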
Problem Margins
There is a “built-in” margin that is usually not factored into the larger equation of system-level HIRF and EMP robustness. The CS114/CS116 bulk cable injection limits in MIL-STD-461 are based on the exposure of very long cables to far field plane-wave illumination along their length. How long? The 1 MHz low frequency breakpoint in CS114/CS116 corresponds to a 150 meter long cable. If a cable or platform is shorter than that, the breakpoint moves up in frequency in inverse proportion to the length. So a fifteen-meter platform such as a fighter jet or tractor-trailer rig should start rolling off below 10 MHz, not 1 MHz, and at 1 MHz and below the default limit has a built-in 20 dB margin, as computed below. On the other hand, the EMP community is fond of imposing a 32 dB design margin after they have done all their test and analytical work to determine cable drives, and that would more than make up for the 20 dB we just found at low frequencies. And that in turn introduces the topic of “bad margins,” with which we will close out this problem margin topic.
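As a quick check of that breakpoint arithmetic (assuming the breakpoint corresponds to a cable one half-wavelength long, consistent with the 150 meter / 1 MHz correspondence above):

```python
C = 3e8  # speed of light, m/s

def breakpoint_mhz(cable_length_m):
    """Frequency at which the cable is a half-wavelength long."""
    wavelength_m = 2 * cable_length_m
    return C / wavelength_m / 1e6

print(breakpoint_mhz(150))  # 1 MHz  -> the default CS114/CS116 breakpoint
print(breakpoint_mhz(15))   # 10 MHz -> a fighter- or tractor-trailer-size platform
```

One decade of the 20 dB per decade rolloff, between 1 and 10 MHz, accounts for the 20 dB built-in margin.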
It is the author’s opinion that a design requiring a 32 dB safety margin is not a good design. It means the uncertainty is such that the design could be over a thousand times too weak. While the author has not done this type of design and does not know the rationale behind it, common sense seems to dictate that if one qualifies equipments and subsystems to a requirement such as CS116 and performs some sort of scaled drive on the finished platform, there ought to be a pretty good sense of the strength or weakness of the circuits so qualified, without adding a 32 dB fudge factor, er, design margin.
A different approach, which the author also considers a poor practice, is found in spacecraft EMC design. This is tailoring a radiated emission limit to protect a specific radio receiver, and doing so in a non-optimal fashion. Strictly speaking, this is not the type of margin we have been discussing throughout this article, because of the point made at the very beginning: EMC margins verified at system-level integration vs. EMI limits applied at the equipment level. But this poor practice deserves calling out, even if it is an apple thrown in with the pears.
Before we delve into the wrong way to do things, let’s look at the right way. Look at Figure 6, which is RTCA/DO-160 Figure 21-10, a radiated emission limit from section 21.
The basic limit less the notches is quite benign – it is similar to the MIL-STD-461 RE102 limit for equipment installed in very large aircraft, the least stringent of the RE102 aircraft limits. It is relaxed 20 dB from the RE102 limit for aircraft equipment installations outside the protection of a metallic fuselage. But Figure 21-10 contains notches that dip as much as 25 dB below the main limit line, as well as lesser notches that dip 15 – 20 dB below it. With the exception of the last notch at C-band, the lesser notches are generally wider than the deeper notches, and the deepest notches are very narrow. The notches are as wide as they need to be, but no wider. For instance, 108 – 152 MHz covers airport vhf communication and navigation aids; that 44 MHz range is the one over which these radios can tune. The two deepest notches are 51 MHz wide, which means that one way of avoiding having to design to meet these stringent notch limits is to choose clock frequencies whose harmonics don’t fall in-band to the notch.
What this means is that a clock must operate at just above 51 MHz, and not have a harmonic that falls in these two bands. If those two simple criteria are met, the overall design only has to perform to the level of the less stringent notches. And we can take the set of possible clock frequencies whose harmonics are outside the notch, and find a subset of those which work for other notches, if desired; a short screening routine is sketched below. Consider this from the point-of-view of a lifetime design or a production-run design. If we rely on shielding and filtering in order to meet the limit, these are subject to lifetime degradation and, over a large production run, parts substitutions that don’t provide original performance. Eliminating unintentional use of the protected spectrum is a guarantee over the life of the program/platform that there will not be pollution of the protected band. Note that the wider the imposed notch, the lower the probability of finding a useful clock frequency whose harmonics lie outside the notch(es).
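A screening routine of the sort implied above might look like this; the notch edges and candidate frequencies are placeholders, not the actual Figure 21-10 bands:

```python
# Screen candidate clock frequencies against protected notch bands.
notches_mhz = [(108.0, 152.0), (225.0, 276.0)]   # assumed protected bands

def harmonics_clear(clock_mhz, notches, max_mhz=1000.0):
    """True if no harmonic of clock_mhz up to max_mhz falls inside any notch."""
    n = 1
    while n * clock_mhz <= max_mhz:
        harmonic = n * clock_mhz
        if any(lo <= harmonic <= hi for lo, hi in notches):
            return False
        n += 1
    return True

candidates = [52.0, 53.0, 77.0, 95.0, 100.0]
usable = [f for f in candidates if harmonics_clear(f, notches_mhz)]
print("usable clock frequencies (MHz):", usable)  # the winnowing in action
```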
That was the good, now on to the bad: protection of the GPS L1 (1575.42 MHz) and/or L2 (1227.6 MHz) bands. First, note that no MIL-STD-461 RE102 limit has any notches whatsoever. The RE102 limit for aircraft equipment installed outside the protection of a metal fuselage – and therefore having the lowest loss path to an externally mounted GPS antenna – is 46 – 48 dBuV/m over the two bands. The reproduced RTCA/DO-160 Figure 21-10 has a notch to about 50 dBuV/m for L1, but no notch for L2, so about 63 dBuV/m. These measurements are made with a 1 MHz bandwidth in both standards. Spacecraft EMC standards (e.g., AIAA S-121-2009) often take a different approach and carve very deep notches, such as 20 dBuV/m, that are measured using narrower bandwidths. The idea is that GPS reception could be desensitized by a narrow spur. But GPS is a direct sequence spread spectrum signal; it shouldn’t be susceptible to a narrowband clock harmonic – except that the protocol does not suppress the center frequency carrier, and it is indeed very sensitive at that specific frequency, with sensitivity rolling off with increasing separation from the center frequency. And the sensitivity is higher during the satellite signal acquisition stage than after signals are acquired and the receiver is merely maintaining lock.
Figure 7 shows the author’s measurement of his own Garmin Nuvi’s response to rfi after the constellation has been acquired. It is clear that, if one wishes to levy a limit on narrowband clock harmonics at lower levels than the default RE102 limit, this is only necessary (for the device tested) within 2 MHz of the band center frequency. Yet the cited S-121 standard includes a 25 MHz notch band at 20 dBuV/m centered on L1 and L2 and calls that a guard band. And there are other organizations that want even wider “guard bands.” A proper guard band is not imposed at the same level that protects actual radio in-band operation; a guard band is imposed to assure no intentional transmissions near the radio operation band to avoid overload, as was the case with the LightSquared-GPS controversy. An appropriate guard band is just the un-notched RE102 limit; it assures no overload even if an unfiltered wide-band low noise preamplifier is placed between the antenna and the GPS receiver front-end. The math is simple. A GPS antenna will have hemispherical coverage, hence 3 dBi gain maximum, and an antenna factor of 30 dB/m, so that the default RE102 limit of 46 – 48 dBuV/m yields 16 – 18 dBuV (~ -90 dBm) at the preamplifier input. That will not overload even the most sensitive such amplifier.
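That simple math, in executable form (the dBuV-to-dBm conversion assumes a 50 ohm system):

```python
# RE102-level field into a GPS preamplifier, per the figures quoted above.
field_dbuv_m = 46.0         # default RE102 limit in-band to L1/L2 (46-48 dBuV/m)
antenna_factor_db_m = 30.0  # hemispherical-coverage GPS antenna, ~3 dBi peak

# Field (dBuV/m) minus antenna factor (dB/m) gives terminal voltage (dBuV).
v_dbuv = field_dbuv_m - antenna_factor_db_m   # 16 dBuV
p_dbm = v_dbuv - 107.0                        # dBuV -> dBm in a 50 ohm system
print(f"{v_dbuv:.0f} dBuV at the preamp input = {p_dbm:.0f} dBm")  # ~ -90 dBm
```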
So we see here that the “feel good” impulse to “guard” our sensitive receiver ends up generating such a wide band notch that the odds of clock harmonics not falling in-band are decreased, making it less worthwhile to seek out an appropriate clock frequency. We can also see why these default limits (MIL-STD-461 RE102 and RTCA/DO-160 section 21) that control nowhere near the threshold of susceptibility (TOS) of a spur at the center frequency still work: the chances of a clock harmonic being within 2 MHz of the L1 or L2 band center frequency are quite slim to begin with.
Another questionable practice is the application of margins to bench-level (EMI) requirements, such as MIL-STD-461, RTCA/DO-160 and automotive counterparts. As noted, margins are a system-level concern, and the lack of correspondence between a bench-level set-up and the installation on a platform or platforms will dwarf any reasonable applied margin. There can be some utility to a bench-level EMI requirement margin in the sense of a level of assurance that the qualification unit is not a one-of-a-kind “golden unit” that is the only one likely to pass. If the qualification unit passes a limit with some margin, then depending on the level of quality and configuration control during the production cycle, there is some confidence that the production units will also meet the limit, or the intent of the limit.
But this is not the same thing as demonstrating margin at the installed system-level, and can never replace a required system-level margin verification.
All Margin Demonstrations Are Not Created Equal
Aside from any spacecraft-specific mission critical functions, the one subsystem whose 6 dB margin must be verified on every spacecraft is its ability to properly receive an uplink command. It is sometimes the case with space programs that there is risk-based, economic and schedule-based pressure to minimize margin demonstrations at the platform-integration level. A powerful case can be made against verifying uplink margin, but it must be defeated. Common pressures against margin demonstrations run like this:
- The spacecraft is a one-of-a-kind unit, and each and every essential or nonessential connection/disconnection of a cable increases the risk of many sorts of damage. Just moving around the spacecraft increases the risk of damage. If damage occurs that is not immediately apparent, and it only becomes apparent after launch, the damage done is irreparable. Minimizing such risk is a prime responsibility while planning the test program.
- On a one-of-a-kind platform, all costs are associated with that one unit, and cannot be amortized over a fleet of such vehicles, as they are in a military program and much more so for a commercial aircraft development.
- Platform integration necessarily comes at a program’s end, and because programs are almost always bid optimistically with respect to both cost and schedule, it is quite typical that they are behind schedule by then, increasing the program motivation to minimize testing and get the product shipped.
- Finally, the program can often make the case that (for instance) all the equipments met their equipment-level radiated emission requirement with margin in-band to a specific radio receiver (say the sole uplink/downlink radio). So there is no need to retest what has already been verified.
Against all that, the EMC engineer must steadfastly stand in the breach and tell the program that, while the bench-level EMI test results look good and there is no expectation of a problem, that is not the same thing as guaranteeing that no problem will crop up at the integrated system-level. For the exact same reason that we go to great lengths to avoid inducing problems during integration and testing, we must assure ourselves that the uplink command is received with appropriate margin.
We have already discussed various ways to demonstrate the margin. The point is, one way or the other, it has to be done. There are few guarantees in life besides death and taxes, but a close third is that if the uplink command margin were not verified and it turned out there was a problem on-orbit, there would be absolutely zero collective memory that any pressure at all was brought to bear to streamline testing. All blame would devolve onto the EMC engineer working the program. This author once threatened to walk off the job if the demonstration was not made. It is that important.
Margin Demonstration by Analysis?
MIL-STD-464 through revision C says: “Safety critical and mission critical system functions shall have a margin of at least 6 dB. EIDs shall have a margin of at least 16.5 dB of maximum no-fire stimulus (MNFS) for safety assurances and 6 dB of MNFS for other applications. Compliance shall be verified by test, analysis, or a combination thereof.” Most instrumented margin demonstrations on aircraft are limited to the aforementioned EIDs and those functions with a direct impact on the continued safe flight and landing of the aircraft. Mission critical functions are usually ignored when it comes to testing; such margins, when they are treated at all, are demonstrated using analytical techniques.
On the other hand, spacecraft programs attempt to do as much by analysis as possible, eschewing even EED margin demonstrations in many cases. In this latter case, it is extremely important to ensure that the analysis is very straightforward and demonstrates conclusively large margins using worst-case assumptions. An example here might be demonstrating that the coupling of an electromagnetic field to the shielded cable connected to an EED is 16.5 or 20 dB lower than the no-fire stimulus. Multiplying the electric field intensity by 1.5 mA per volt per meter, per the MIL-STD-461 appendix material for CS114, results in a shield current. That shield current multiplied by the known or worst-case enveloped shield transfer impedance results in the common mode coupling to both conductors of the twisted pair. If that common mode potential is lower than the target differential mode signal (including margin), that is a straightforward, convincing demonstration that there won’t be any problems.
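A sketch of that analysis chain; the 1.5 mA per volt per meter figure is from the MIL-STD-461 appendix material cited above, while the transfer impedance, field level, and EED parameters are illustrative assumptions:

```python
# Worst-case EED coupling analysis, per the chain described above.
field_v_per_m = 200.0        # assumed external EME at the frequency of interest
transfer_ma_per_v_m = 1.5    # field-to-shield-current transfer function (CS114 appendix)
zt_ohms = 0.01               # assumed enveloped worst-case transfer impedance x length
no_fire_amps = 1.0           # assumed EED maximum no-fire current
bridgewire_ohms = 1.0
margin_db = 16.5             # MIL-STD-464 EED safety margin

shield_amps = field_v_per_m * transfer_ma_per_v_m / 1000   # 0.3 A on the shield
cm_volts = shield_amps * zt_ohms                           # common mode on the pair

# Target: the no-fire potential reduced by the required margin.
target_volts = no_fire_amps * bridgewire_ohms * 10 ** (-margin_db / 20)

print(f"coupled: {cm_volts * 1e3:.1f} mV vs target: {target_volts * 1e3:.0f} mV")
print("margin demonstrated" if cm_volts <= target_volts else "needs more work")
```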
Conclusion
The whole topic of margins is one where the cookie-cutter approach of levying standardized EMI requirements that vary little, if at all, between different programs and applications simply does not work. Choosing the circuits that need margin verification, and designing that verification to work properly and minimize program impact, requires a unique approach for each different program, as well as demonstrated resolve and courage on the part of the cognizant EMC engineer!
The author wishes to thank the engineers who took the time to review drafts of this article in advance of publication. Any errors of commission or omission are solely the responsibility of the author.
Selected Bibliography
Cofield, Steven, and John O’Neill, “Fifty Years of EMC in the Department of the Army,” ITEM, 1990, p. 326ff.
Carter, N.J., “The Application of Low Level Swept RF Illumination as a Technique to Aid Aircraft EMC Clearance,” ITEM, 1990, p. 236ff.
Endnotes
1. This is two months after the USS Forrestal disaster, in which a Zuni rocket accidentally fired from a parked aircraft hit another parked but fully fueled and loaded fighter aircraft. The resulting devastation cost 134 lives. Some say the Zuni rocket and its firing cable were painted by a shipboard radar. Was the timing between the new EED safety margin requirement and the Forrestal disaster coincidence?
2. The remote terminals see the full coupled noise, as desired, and see the attenuated signal, so they operate at the reduced signal-to-noise ratio, as intended. Note that the 6 dB box attenuates both signals and noise received by the bus controller. This is not ideal, but unavoidable.
3. Another problem crops up if it is deemed necessary to verify a “ripple margin” beyond the CS01/101 frequency range and dedicated above-ground wires are provided for primary power current return (single-point ground power system). In the past, MIL-STD-461A/B/C requirement CS02 was used as the benchmark beyond the CS01/101 range. But CS02 injects and monitors ripple between each current-carrying power conductor and the ground plane, not line-to-line as the bus ripple is measured. So there is an apples-to-oranges disconnect here.
4. EMI requirements for spacecraft flying scientific payloads with low-level sensors do contain crosstalk requirements. This is because these platforms don’t require stringent radiated emission controls below several hundred megahertz, due to sparse use of the electromagnetic spectrum, and therefore stringent radiated emission limits at low frequencies are replaced by stringent crosstalk requirements, which are nevertheless much less stringent than typical radiated emission limits imposed by MIL-STD-461. See for instance sections 2.5.2.1.2 and 2.5.2.2.4 of the Goddard Space Flight Center’s General Environmental Verification Specification. https://standards.nasa.gov/standard/gsfc/gsfc-std-7000
5. It is instructive to look at how the MIL-STD-461 CS116 (EMP) requirement deals with the wide disparity between its cable drive limits and the associated RS105 transient electric field intensity environment. The CS116 limit is 10 Amps between 1 and 30 MHz, and rolls off at 20 dB per decade below 1 MHz. The profile is similar to that for CS114, but the amplitudes are much higher. The RS105 transient field intensity is 50,000 volts per meter. If one uses the same 1.5 mA per volt per meter transfer function for CS116 as for CS114, the maximum CS116 cable drive should be 75 amps, not 10 amps, and this was pointed out during the revision process that resulted in MIL-STD-461F. Estimates for increasing drives from 10 to 75 amps were requested from test equipment manufacturers who had CS116 transient generators on the market, and the results were not pretty. It was decided to keep the 10 amp limit based on the idea that it would suffice for the vast majority of cables, which are protected within platform metallic structure, and that for the small minority of cables not so protected, pulling an extra layer of overbraid over the 10 amp qualified cable configuration would suffice to add the extra required protection of less than 20 dB.
6. Consider that the same issues are at stake with an autonomously driven ground vehicle. It won’t drop out of the sky, but a momentary glitch can cause a catastrophic wreck, and in heavy traffic that single-event upset could cause a chain reaction.