This issue of In Compliance magazine presents the annual ode to Test Equipment. In that vein, this month’s Reality Engineering shares a brief perspective on that topic.
Early ESD
Our very first piece of test gear was a Schaffner NSG 430, initially used to debug ESD problems on an early digital typewriter that had the curious trait of rattling off random characters without fingers involved. The NSG 430 had a sweet upper voltage of 16 kV, a 2 ns rise time and a single-polarity discharge. It was a workhorse in the lab for many years and was once used to settle an altercation between two of my junior engineers.
Things have evolved quite a bit in the measurement and EMC world since then, nowhere more so than in the measurement of mobile devices. Expanded data rates, the drive for optimized spectral efficiency and the requirements for hardware performance have all raised the bar considerably. New technologies are generating test and measurement challenges at frequencies 100 times higher than when dad first brought home the NSG 430.
First generation mobile phones could deliver a few kilohertz of voice using classical frequency modulation. Although it was a bit naughty, we could dial into the mobile bands and listen to conversations using our (now) venerable HP 8568, my first piece of grown-up test gear, with a top frequency of 1 GHz. Most of the conversations were shopping-list instructions from harried commuters and soccer moms. “Pick up some milk on the way home, honey.” Nothing too saucy, except for that one time…
Advanced Mobile Phone System (AMPS) replaced a funky trunked-type system and formed the embryo of the massive mobile system we use today. Adapting features from the public switched telephone network (PSTN), AMPS used Supervisory Audio Tones (SATs) for network control: initiating and routing calls and performing handovers at cellular boundaries.
Channel control was handled by blank-and-burst techniques; that is, voice and data shared the same RF channel. The voice information would be momentarily blanked and control data sent in its place. That was it for data. If you wanted to text someone, you had to stop your car and find a fax machine.
Geez
Early AMPS systems were limited to 10 kilobits per second.1 The prediction for usage was that “a typically large mature system might have up to 100,000 mobile telephones, 50 cell sites and a single telecommunications switching office.” It was, in retrospect, the first generation of mobile functionality, now referred to as 1G.
2G, or the second generation of mobile phones, featured digitally-modulated signaling (which means that analog demodulation on my nearly-mothballed 8568 now yields only a crunchy static when tuned to a cell channel). This means mobile chattering is safe (from me, at least; never mind the many others, including rogue towers operating in the U.S., that are busy intercepting phone calls and hacking into cellular phones2).
Second generation mobile systems evolved to 2.5G, and we continue to move along an arc of new paradigms in mobile networks. Since the late 1990s/early 2000s the Third Generation Partnership Project (3GPP) has been issuing specifications for new mobile functionality. We have now arguably entered the fourth generation of mobile communications; marketeers make much hay of their systems being 4G or LTE.
According to 3GPP, “LTE and LTE-Advanced have crossed the ‘generational boundary’ offering the next generation(s) of capabilities. With their capacity for high speed data, significant spectral efficiencies and adoption of advanced radio techniques, their emergence is becoming the basis for all future mobile systems.”3
So now, billions of mobile phones attach to global networks. Advanced multiplexing and modulations cram more data into a tight spectrum. It is estimated that, by 2017, data from smartphones will exceed 6 exabytes each month!4 That is a whopping 6×10¹⁸ bytes of cat videos, recipe-crawling, online media, Tweeting, friending, un-friending, emailing and—for some—surfing for erotica.
Depending on whom you ask, there is varying agreement on what G we’re in now. (Are we fully 4G? Does LTE-Advanced make it so? And by the way, where did WiMAX fit? 2.5G?) There is certainly no clear line of demarcation; the technologies and the services blend together at the edges. Cool features like MIMO and beam-forming techniques make the coming generation of technologies ever-intriguing.
At any rate, the string of Gs keeps being strummed and the sound of 5G is now being tuned. Some posit that 5G is more of a lifestyle evolution, as opposed to the performance- or specification-driven evolution that was (and is) 3G/4G. That is, the services that will define 5G will drive the technology, rather than data rates and throughput. The notion of 5G also feeds into the concept of the Internet of Things (IoT) and Internet Everywhere, where Machine-to-Machine (M2M) connections will comprise much of the future traffic.
Whatever the outcome, the frequencies and capacities of mobile networks are going to continue to rise. Using advanced Quadrature Amplitude Modulation (QAM) encoding schemes, constellations of 1024 states can apparently achieve a gigabit of data in a single 56 MHz channel.5 You’d figure a gigabit per second would be enough for most, but the lifestyle of the 5Gers will likely demand more.
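As a rough sanity check on that gigabit claim, here is a back-of-envelope sketch (my own arithmetic with assumed link parameters, not figures from the reference): 1024-QAM carries log2(1024) = 10 bits per symbol, and if the symbol rate roughly matches the 56 MHz channel, a single polarization yields on the order of 560 Mb/s; a dual-polarization link, common in point-to-point microwave, lands near the advertised gigabit before coding overhead.

```python
import math

# Back-of-envelope throughput for a 1024-QAM channel.
# Assumptions (mine, not the reference's): symbol rate roughly equals the
# 56 MHz channel bandwidth, and a dual-polarization link doubles capacity.

channel_bw_hz = 56e6      # channel bandwidth
constellation = 1024      # QAM constellation size

bits_per_symbol = math.log2(constellation)       # 10 bits per symbol
rate_single = channel_bw_hz * bits_per_symbol    # ~560 Mb/s per polarization
rate_dual = 2 * rate_single                      # ~1.12 Gb/s with two polarizations

print(f"{bits_per_symbol:.0f} bits/symbol")
print(f"single polarization: {rate_single / 1e6:.0f} Mb/s")
print(f"dual polarization:   {rate_dual / 1e9:.2f} Gb/s (before coding overhead)")
```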
When will 5G arrive? Some say a G occurs every ten years, so if 4G happened around 2010, then 5G may be upon us in 2020. However, elements of 5G (the lifestyle part) are already upon us, notably in the expanding landscape of M2M (your fridge ordering milk from the grocery store—no more having to call your honey on their commute home.)
Backhaul
Jamming that kind of information over the ever-blooming internet infrastructure will require multi-gigahertz bandwidths, mostly in the backhaul/infrastructure, where burst data streams are needed to keep the flow of Instagram pictures going.
Which brings up one of the most fascinating areas of technology development: the millimeter-wave region, for where else can we find such high bandwidths to support such data consumption? For all of the IoT and 5G stuff to run, RF engineers are developing equipment that operates at 50 GHz and above. These upper reaches of the spectrum offer GHz-wide bandwidths. Pretty tempting.
The automotive industry is already using 24 GHz for radar, and microwave point-to-point links have been operating in the tens of GHz for many years. Somewhat new to these bands are other doo-dads such as Level Probing Radars (LPRs). So measurements at these frequencies are not a new thing, but the state of the art needs to evolve to realize the 5G dream.
According to Dehos et al.,4 a “vision of 5G networks beyond 2020 is a heterogeneous network composed of long-/medium-range macrocells that operate in the sub-3 GHz band, small cells (10-50 m radius) that use the sub-6 GHz band and 60 GHz small cells with a target peak capacity of 2-7 Gb/s” [emphasis added]. The small cells are envisioned to be mounted on traffic signs, posts and buildings and to feature “automatic beam-steering and self-organizing features.” The heterogeneity is provided by legacy 2G/3G and LTE equipment for voice and low-latency basic coverage. That is, users will do their thing on the local cell network and the 60 GHz stuff will do the heavy lifting in the background.
Tens of Billions of Hertz
The big challenge with these nether regions of the spectrum is measurement and metrology. Although measurement up there is doable, solid calibration and measurement techniques kinda end at 40 GHz, above which standard analyzers with reasonable sensitivity peter out.
A solution does exist: bolt-on harmonic mixers that take the local oscillator from the analyzer and jam it into a mixer, creating products in the <10 GHz range. These are pumped into the nose of the spectrum analyzer, where they can be resolved, sort of. The process is a bit of a brute-force approach, and the practical challenge is keeping the real signals sorted out from the images and other spurious-type energy developed by the mixer action. Also problematic is the high conversion loss, upwards of 40 dB, which limits the sensitivity of the measurement.
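For the curious, here is a minimal sketch of why the sorting gets tedious (my own illustration with assumed example values, not anything from an actual instrument): every LO harmonic n·fLO folds a slice of the millimeter-wave band down into the analyzer’s IF range, so a single IF reading can correspond to several possible input frequencies.

```python
# Harmonic-mixer frequency planning, illustrative values only (assumed
# 5 GHz LO and a 77.3 GHz signal under test).  Each LO harmonic n*f_lo
# mixes the RF down: f_if = |f_rf - n*f_lo|.  Several harmonics can land
# the same signal inside the usable IF range, and each IF tone maps back
# to more than one candidate RF frequency (signal or image).

f_lo = 5.0e9        # assumed analyzer LO output
f_rf = 77.3e9       # assumed millimeter-wave signal under test
if_max = 10.0e9     # mixer products of interest fall below ~10 GHz

for n in range(10, 20):
    f_if = abs(f_rf - n * f_lo)
    if f_if < if_max:
        print(f"LO harmonic n={n}: IF = {f_if / 1e9:.1f} GHz")

# Working backwards from a single IF tone: a 2.3 GHz product mixed on the
# 15th harmonic could have come from either of two RF inputs.
n, f_if = 15, 2.3e9
print(f"possible RF inputs: {(n * f_lo + f_if) / 1e9:.1f} or {(n * f_lo - f_if) / 1e9:.1f} GHz")
```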
At the recent Telecommunications Certification Body Council (TCBC) training (October 2014), the FCC’s Office of Engineering and Technology (OET) presented a solution for the 75-85 GHz and 92-95 GHz frequency ranges (W-band). This type of solution could be adapted for mm-wave measurements.
In the presentation, FCC personnel discussed the development of a down-converter that can be used at these higher frequencies to bridge the measurement gap. The solution was intriguing (if you’re into this kind of thing). The notion is to wiggle a mixer and create an IF output that can be measured in the 1 to 12 GHz range using a normal spectrum analyzer. This differs from the existing harmonic-mixer solution in that the process doesn’t create a mess of images and has significantly higher sensitivity. It also allows for higher measurement bandwidth.
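To see why a dedicated block down-converter is cleaner, here is another small sketch (the FCC design details are not given here, so the fixed LO value below is purely an assumption for illustration): with a single fundamental-mode LO, every in-band RF input maps to exactly one IF in the 1 to 12 GHz window, and the lone image band can be rejected with front-end filtering rather than untangled after the fact.

```python
# Block down-conversion with a single fixed LO (illustrative only; the
# 74 GHz LO is an assumed value, not the FCC design).  Low-side mixing:
# f_if = f_rf - f_lo, so the 75-85 GHz band maps one-to-one onto 1-11 GHz.

f_lo = 74.0e9                       # assumed fixed local oscillator

for f_rf_ghz in (75, 78, 81, 85):   # example signals in the 75-85 GHz band
    f_if = f_rf_ghz * 1e9 - f_lo
    print(f"{f_rf_ghz} GHz in -> {f_if / 1e9:.0f} GHz out")

# The single image band (f_lo - f_if, i.e. 63-73 GHz here) sits well below
# the band of interest and can be removed with a filter up front.
```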
The device and schematic of the solution are shown in Figures 2 and 3. I want one.
I was content to measure up to 1 GHz in my twenties. Now that I’m moving steadily over the hill, the action is 100 times higher in frequency. Remarkable.
References
1. Z.C. Fluhr and P.T. Porter, “Advanced Mobile Phone System: Control Architecture,” Bell System Technical Journal, January 1979.
2. Nancy K. Friedrich, “Don’t Trust That Tower,” Microwaves & RF, Vol. 53, No. 10, October 2014.
3. http://www.3gpp.org/about-3gpp
4. Dehos et al., “Millimeter-Wave Access and Backhauling: The Solution to the Exponential Data Traffic Increase in 5G Mobile Communications Systems,” IEEE Communications Magazine, Vol. 52, No. 9, September 2014.
5. http://en.wikipedia.org/wiki/Quadrature_amplitude_modulation