How Artificial Intelligence (AI) and Battery Emulation Can Help Reduce Risk
The Internet of Things (IoT) is quickly being adopted for use in mission-critical applications for several reasons. First, the IoT now incorporates increasingly sophisticated technologies, such as artificial intelligence (AI), augmented reality, edge computing, sensor fusion, and mesh networking to tackle problems of increasing difficulty and importance. Second, as recent supply chain challenges have demonstrated, margins for error and delay are slim at best. Third, the demand for increased healthcare, combined with resource scarcity, means many medical services must decrease in cost and become more efficient. Finally, the desire to conserve resources means devices must last longer and perform more reliably.
These trends present numerous business opportunities in fields that serve human health, safety, food production, environmental protection, and other key aspects of human flourishing. As technical challenges grow, each of the 5 Cs + 1 C of the IoT becomes more important, and some of them can use AI as part of the solution.
The 5 Cs + 1 C of the IoT
The term 5 Cs + 1 C of the IoT refers to the key characteristics that apply to all types of devices that utilize the IoT for transmitting and receiving data, as follows:
- Connectivity—Refers to a device’s ability to create and maintain reliable connections, even during roaming. Mission-critical applications cannot accept delayed or lost data.
- Compliance—Means a device meets regulatory requirements for market access. Compliance problems must not delay implementation or lead to a product recall.
- Coexistence—A device’s ability to perform properly in crowded RF bands. Mission-critical devices must avoid packet loss, data corruption, and retries that drain battery charge.
- Continuity—The ability of a device to operate without battery failure. Manufacturers must ensure long battery life, especially in implanted devices and emergencies where AC power is unavailable.
- Cybersecurity—IoT devices and infrastructure must be strong and resilient against cyber threats, including denial of service, contaminated data, or interception of sensitive information. Product development teams can use AI to simulate a variety of malware techniques based on exploits that have revealed vulnerabilities in the past.
- Customer Experience—Ideally, this means that customers enjoy a flawless, optimized experience with intuitive applications that operate seamlessly from end to end on multiple platforms. The challenge is that the number of possible paths through a series of related software applications is virtually limitless, far too many to test comprehensively. Fortunately, AI can once again guide automated test systems based on how recently code has been added, how many defects have been found in particular code sections, and other pertinent factors.
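To make the test-prioritization idea above concrete, here is a minimal sketch of an AI-style risk score that steers automated testing toward recently changed, defect-prone code sections. The fields, weights, and sample data are purely illustrative assumptions, not any vendor's actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class CodeSection:
    name: str
    days_since_change: int  # how recently code was added or modified
    defect_count: int       # defects found in this section historically

def risk_score(section: CodeSection) -> float:
    """Combine change recency and defect history; weights are illustrative."""
    recency = 1.0 / (1.0 + section.days_since_change)  # newer change -> higher risk
    return 0.6 * recency + 0.4 * (section.defect_count / 10.0)

sections = [
    CodeSection("pairing_ui", days_since_change=2, defect_count=7),
    CodeSection("settings_menu", days_since_change=90, defect_count=1),
]

# Schedule test runs for the riskiest sections first.
for s in sorted(sections, key=risk_score, reverse=True):
    print(f"{s.name}: {risk_score(s):.3f}")
```

In practice the score would feed a test scheduler, so limited test time is spent where defects are most likely to surface.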
Increasing Demands on Device Batteries
Ensuring that IoT devices sufficiently address each of these key characteristics increases the demands on batteries. Previously, a simple sensor device might wake up, take a few measurements, transmit data to a hub or access point, and return to sleep. Today’s mission-critical devices might incorporate multiple sensors, microcontrollers, numeric processors, six-axis accelerometers, sensor fusion logic, voltage converters, power management systems, image processors, microphones, multiple radios, memory, encryption processors, and other hardware components that drain battery life.
Furthermore, operating environments are increasingly challenging, with temperature changes, irregular duty cycles, and electromagnetically crowded spectrum. Some operate in locations that are difficult or hazardous to access, and some operate inside animal or human bodies. These factors place unprecedented demands on device batteries.
For medical devices, the quality of a device’s battery life often has health implications. Even in non-critical applications, premature failure can lead to complaints in post-market surveillance monitored by regulatory agencies. Complaints that become excessive or increase patient risk can have huge costs for the manufacturer.
Challenges for Battery Test During Product Development
Battery testing presents several challenges during product development. Using real batteries might seem ideal, but there are limitations associated with real batteries.
Difficulty in Determining Initial State of Charge
Batteries may be fully charged at the factory, but the moment they leave the charger, they begin discharging through internal leakage reactions. This self-discharge rate varies by battery chemistry; lithium-ion cells have a lower self-discharge rate than nickel-cadmium (NiCad) or nickel-metal hydride (NiMH) batteries.1 The discharge rate varies as a function of time and temperature, and this performance loss is sometimes referred to as calendar fade.2 An engineer therefore cannot assume that a new battery is at precisely 100% state of charge.
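A simple compounding model illustrates why the initial state of charge is uncertain after storage. The monthly self-discharge rates below are illustrative placeholders, not datasheet values; real rates also depend strongly on temperature:

```python
def estimated_soc(initial_soc: float, months_in_storage: float,
                  monthly_self_discharge: float) -> float:
    """Compound monthly self-discharge over a storage period.
    Rates are illustrative assumptions, not datasheet figures."""
    return initial_soc * (1.0 - monthly_self_discharge) ** months_in_storage

# Hypothetical rates: Li-ion ~2%/month, NiMH ~15%/month.
li_ion = estimated_soc(1.00, months_in_storage=6, monthly_self_discharge=0.02)
nimh = estimated_soc(1.00, months_in_storage=6, monthly_self_discharge=0.15)
print(f"Li-ion after 6 months: {li_ion:.1%}, NiMH: {nimh:.1%}")
```

Even with these rough numbers, a cell that sat in inventory for six months may start a run-down test well below 100% SoC.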
Variation Within and Across Manufacturing Lots
Like any manufacturing process, battery manufacturing has normal variations. Even within a given lot or date code, batteries vary. There is often additional variability across different factories. This does not mean that manufacturers release batteries that are out of specification, but the tolerances are there for a good reason. Battery run-down tests should be conducted with batteries from different lots acquired at different times.
Variation Due to Recharging
A re-charged battery has different discharge characteristics than a new battery. This effect, known as cycle fade, is due to mechanisms that affect the cathode or anode. For example, in a lithium secondary cell, the anode ages due to graphite exfoliation, electrolyte decomposition, and lithium plating that leads to corrosion. Similarly, the cathode undergoes aging due to several factors, including binder decomposition, oxidation of conductive particles, micro-cracking, and structural disordering.3
You can limit this variability by ensuring that the battery is fully charged and using a battery cycler that conditions the battery by cycling it from fully discharged to fully charged.
The Importance of Battery Emulation
Some test engineers attempt to use a basic DC power supply to emulate a battery for battery run-down testing. A standard power supply, however, does not perform like a battery. Accurate emulation requires a specialized battery emulator that models its output according to a battery profile, using features such as programmable output resistance and fast transient response to behave like a real battery.
For example, a test engineer can use an advanced battery test and emulation solution to quickly and easily profile and emulate a battery’s performance. The engineer can then use this solution to charge or discharge a battery of any chemistry to create a battery model of up to 200 points, with each point including the battery’s open circuit voltage (Voc), series resistance (Ri), and state of charge (SoC).
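Such a profile can be thought of as a table of (SoC, Voc, Ri) points, with interpolation between them. A minimal sketch, assuming a handful of hypothetical model points and linear interpolation (a real instrument's model format and point count will differ):

```python
from bisect import bisect_left

# Hypothetical battery model points: (SoC %, open-circuit voltage V, series resistance ohms).
# Values are illustrative, not taken from any real cell or instrument.
MODEL = [
    (0, 3.00, 0.35),
    (15, 3.45, 0.22),
    (50, 3.70, 0.15),
    (100, 4.20, 0.12),
]

def interpolate(soc: float) -> tuple[float, float]:
    """Linearly interpolate Voc and Ri between the two nearest model points."""
    socs = [p[0] for p in MODEL]
    i = bisect_left(socs, soc)
    if i == 0:
        return MODEL[0][1], MODEL[0][2]
    if i == len(MODEL):
        return MODEL[-1][1], MODEL[-1][2]
    (s0, v0, r0), (s1, v1, r1) = MODEL[i - 1], MODEL[i]
    t = (soc - s0) / (s1 - s0)
    return v0 + t * (v1 - v0), r0 + t * (r1 - r0)

voc, ri = interpolate(32.5)
print(f"At 32.5% SoC: Voc = {voc:.3f} V, Ri = {ri:.3f} ohm")
```

Note how both Voc and Ri change with state of charge; capturing that relationship is what distinguishes a battery model from a fixed power-supply setting.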
Battery emulation is especially important when the test engineer is changing the device’s hardware configuration or firmware program. Without a consistent battery emulation, the engineer cannot know whether the variation in run-down time is due to the intentional modifications or variability in the batteries used to perform the run-down test, as described above. Because battery life is closely related to the other “Cs” of the IoT, any AI techniques that improve overall device operation can also have a positive impact on battery life.
By using such a profile with a battery emulator, the engineer can avoid needing to use an actual battery, thereby eliminating the associated uncertainty and variability. In addition, a battery emulator lets the user quickly set the SoC to any point in the model at the beginning of a test.
For example, the engineer may want to see how the device behaves near the end of battery life by starting the test with the SoC set to 15%. Doing this with an actual battery poses at least three challenges. First, the engineer would have to discharge the battery to the desired SoC, which could take hours, whereas a battery emulator can set the SoC in a fraction of a second. Second, the engineer would have to somehow verify the battery's actual SoC. Third, every charge or discharge cycle changes the battery's behavior due to the cycle fade mentioned previously.
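A simple Voc − I·Ri model shows why testing at 15% SoC matters: series resistance is higher near the end of discharge, so the terminal voltage sags much more during a high-current event such as a radio transmit burst. The model values and currents below are hypothetical:

```python
# Hypothetical model point at 15% SoC (illustrative values, not from a real cell).
VOC_15 = 3.45  # open-circuit voltage, volts
RI_15 = 0.22   # series resistance, ohms

def terminal_voltage(load_current_a: float) -> float:
    """Terminal voltage under load for a simple Voc - I*Ri battery model."""
    return VOC_15 - load_current_a * RI_15

# A transmit burst draws far more current than sleep, so the sag at low
# SoC may cross the device's brownout threshold even though Voc looks fine.
sleep_v = terminal_voltage(0.002)  # 2 mA sleep current
tx_v = terminal_voltage(0.800)     # 800 mA transmit burst
print(f"Sleep: {sleep_v:.3f} V, transmit burst: {tx_v:.3f} V")
```

An emulator set to this model point reproduces the sag instantly and repeatably, something a real cell at an uncertain SoC cannot guarantee.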
Using the Results
The engineer can use the information at states of charge near the end of battery life to thoughtfully degrade device performance and extend device runtime. For example, the engineer could choose to transmit data half as often as usual. In addition to extending battery life, this would alert the user that the battery is running out. The engineer could also decide to transmit only minimum and maximum data values or to only transmit when values change by more than some amount. The engineer could also choose to refuse firmware updates once the SoC falls below a small percentage. There would be little point in having a device battery fail during the middle of a firmware update.
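The graceful-degradation choices above can be sketched as a firmware policy table keyed on SoC. The thresholds, intervals, and field names here are hypothetical assumptions, not any particular device's firmware:

```python
def transmit_policy(soc_percent: float) -> dict:
    """Illustrative firmware policy that degrades gracefully as SoC falls.
    Thresholds and intervals are hypothetical, not from a real device."""
    if soc_percent > 20:
        return {"interval_s": 60, "payload": "full", "allow_fw_update": True}
    if soc_percent > 5:
        # Transmit half as often, send only min/max summaries, and refuse
        # firmware updates that could fail mid-flash on a dying battery.
        return {"interval_s": 120, "payload": "min_max", "allow_fw_update": False}
    # Near-empty: send only alarms, as rarely as possible.
    return {"interval_s": 600, "payload": "alarm_only", "allow_fw_update": False}

print(transmit_policy(15))
```

The longer transmit interval itself doubles as a low-battery signal to the user, matching the behavior described above.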
Battery life is becoming increasingly important as the IoT moves into more mission-critical applications, including connected medical devices. Using real batteries to test these devices introduces many problems during the product development process. Test engineers can instead use advanced battery test and emulation solutions to create detailed, high-resolution battery profiles, use those profiles to emulate the battery and gain fast insights into performance at various states of charge, and then modify firmware to optimize device behavior.