Applying tried-and-true digital verification techniques to the validation of a mixed-signal power management device

Hang around embedded software engineers long enough and the phrases design for test and test-driven development become commonplace. In a world where functionality is ever increasing in complexity, you need to be able to both verify and validate your device’s behavior against its requirements. And while these design practices are well understood in digital systems built from microcontrollers (MCUs) or system-on-chip (SoC) devices, with functional safety driving increasingly distributed architectures, they are just as applicable to low-level, mixed-signal devices. 

Gone are the days when the main MCU of the embedded module is trusted to do everything; in industrial and automotive systems where safety is critical, other devices now exist to help test the main microcontroller and support the safety integrity level (SIL) of the module. These functions vary in complexity, ranging from helping the MCU toggle pins to mitigate stuck-at faults, to verifying complex question-and-answer watchdogs, to monitoring voltages.

Often the next ‘smartest’ device in the system is either another low-level MCU or, in the quest to simplify a bill of materials (BOM), a power management device (a PMIC) with dedicated safety functions. Since these devices lack flash memory and are traditionally analog in nature, their validation is challenging for engineers who have historically focused mostly on transient load response.

To help an engineer develop a test philosophy for such a device, we will focus on a distributed system made up of both an MCU and a PMIC. The article will use this system to demonstrate validation concepts that system designers have been employing for quite some time, the simplest of which is an open loop test philosophy, with its strengths and weaknesses. The closed, or in-the-loop, test philosophy will address those weaknesses and demonstrate how one can easily expand a test setup by including an MCU to model the system or a system device. Taken together, these two test philosophies will shed light on how a design-for-test philosophy can be adopted for a traditionally analog device such as a PMIC. 

Figure 1: Example of open loop (top) and closed loop (bottom) test setups

For both of these approaches, we’ll introduce the languages and methods needed to implement them easily. We’ll also introduce simple yet effective constructs in C that help with test flow and modularity. In the end, the goal is to help you develop design patterns that not only cover the validation of the basic functionality of the application-specific integrated circuit (ASIC) but also emulate system integration tests during validation to address complexity.

Modeling the System

To best approach creating a scalable test architecture for an embedded ASIC, an example system first needs to be defined. After defining it, we address the requirements that apply to the MCU and the PMIC individually, and then the requirements that apply to both devices together, which ultimately make up the core of the system. 

This system, outlined in Figure 2, is made up of a high-end MCU, a power management device, and an external sensor that monitors throttle position. 

Figure 2: Our example throttle by wire system, with some simple interfaces

The MCU is responsible for sampling the throttle position sensor and then managing the air, fuel, and spark of the engine to ensure acceleration is kept constant and smooth; this is a simplified, but typical, throttle-by-wire system. The power management device needs to deliver monitored power to both the MCU and the throttle position sensor, assist in monitoring the MCU and system voltages, and alert the system when it is unable to do so reliably. 

Each active device in this example, the MCU and the PMIC, needs to go through two types of testing before starting its task. The requisite testing includes:

  1. Internal self-testing, in which each device undergoes a self-diagnosis to ensure proper operation. This usually results in an external pin being toggled to let other devices know the state of these tests. This type of test is usually called a built-in self-test (BIST).
  2. External system requirements, which includes a wide variety of system tests that require different devices to properly diagnose interfaces and different peripherals of the microcontroller. These tests are more complex and usually require a communications interface between the devices, which allows the devices to signal when a test occurs, and how it is to occur. 
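As a minimal sketch of the first point, the MCU side of the self-test handshake can be reduced to scanning sampled status-pin levels for the device’s “BIST done” signal. The function name and sampling scheme below are illustrative, not taken from any particular device:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: scan a buffer of sampled status-pin levels and report
 * whether the device asserted its "self-test passed" pin before the sample
 * budget (i.e., the timeout) ran out. */
bool bist_pin_asserted(const bool samples[], uint32_t count)
{
    for (uint32_t i = 0; i < count; ++i) {
        if (samples[i]) {
            return true;   /* device drove its status pin high: BIST passed */
        }
    }
    return false;          /* timed out: self-test never reported success */
}
```

On real hardware the sample buffer would be filled by a timer-driven GPIO read; keeping the check a pure function over the samples makes it trivial to exercise on a host build.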

Taking these two points into account, we’ll modify Figure 2 by identifying the critical interfaces that will be used to help validate not only the PMIC but any system requirements that the PMIC must perform to help test the MCU. 

Figure 3: Our example throttle by wire system, with critical interfaces

In summary, our example system functionality includes requirements such as the ability to:

  • Verify pins at startup to ensure critical low voltage pins are not shorted;
  • Verify analog to digital converter (ADC) functionality; and 
  • Verify that the serial interface and internal registers are working properly. 

At first glance, this seems like quite the task, as the PMIC team is normally focused on only one goal: designing a high-performance regulation loop with an emphasis on power integrity, not on the validation of embedded functions. Because of this, loads and specialized power supplies are the norm for these engineers, but not digital devices such as digital-to-analog converters (DACs) or MCUs. 

When it comes to testing these embedded functions, the focus must be on developing a test setup that allows the PMIC team to ensure that these functions work properly by exercising their pass-fail criteria. Because of the variety of functionality to test, validation can be done in one of the following two ways with low-cost devices that are easy to implement and program:

  • Open loop testing, in which the device is commanded to perform a required action and the device demonstrates the acceptance or failure of that test. This is often done without a ‘plant’ in the system. 
  • Closed, or in-the-loop, testing, in which the device is commanded to perform an action, such as toggle a pin, and a receiving device confirms the action. This is often done with a ‘plant,’ acting as a model of the system.

We’ll start first by explaining some methods for working with the PMIC device in an open loop fashion, where there’s no plant device that the PMIC will work with.

Introduction to an Open Loop Setup

A simple open loop test setup is a valuable tool in a validation engineer’s toolbox, since it allows them to validate communication through an external interface, such as:

  • An input/output pin, or 
  • A serial communications interface, such as I2C or a serial peripheral interface (SPI).

Overall, the goal of an open loop setup is to be able to communicate not only to the device under test (DUT, in this case, the PMIC), but to trigger measurement devices to automate as much of the test sequencing as possible. An example setup is found in Figure 4.

Figure 4: A common open loop bench setup for evaluating a PMIC

Traditionally, higher-end test setups utilize LabVIEW or MATLAB, but Python (a powerful, freely available scripting language) paired with lab equipment capable of receiving digital commands over a standardized interface is perfectly usable. Regardless of how you set this up, creating an environment that allows a user to communicate with and control various devices in sequence is crucial in an open loop setup for automation and repeatability. 

To demonstrate the effectiveness of our open loop test setup and its ability to interface, we’ll examine two system test cases defined by our example system; they are:

  • Test Case 1: A fault reaction and accuracy test, where the DUT is cycled through internal codes which correspond to a limit read by the ADC, and the fault reaction is measured via a GPIO pin.
  • Test Case 2: An external reset command that allows an external device to trigger an internal function inside of the PMIC. In this case, we’re validating to ensure that at least one of the three PMIC rails can be reset via SPI.

Test Case 1: Assessing Fault Reaction and Accuracy of the PMIC Device

This test case focuses on the ability of the PMIC to monitor a voltage and issue a fault reaction within a certain time. The example requires the device to react properly under a swept VIN and load to ensure the accuracy of the converter. To execute this test case, the open loop setup makes use of the following hardware:

  • An SPI-addressable digital-to-analog converter (DAC)
  • A triggerable mixed signal oscilloscope (MSO)
  • A USB to SPI converter 
  • An addressable power supply
  • A programmable load

The sample system is outlined in Figure 5.

Figure 5: A bench setup used for evaluating ADC accuracy and fault response

While this may seem like a simple setup, its power comes from using these instruments together to run and rerun a test sequence via a Python script. With just a few commands in Python, you can program this test setup to:

  • Trigger the MSO on a specific SPI message sent to the DAC;
  • At the same time, sweep the VIN voltage with the power supply; and 
  • Later, command the load to move in and out of the allowable range as you sweep the DAC voltage. 

With this setup, we can easily swap in and out different versions of the PMIC or a new board to perform regression testing and to make sure that the design functions from revision to revision.

Test Case 2: Assessing Internal Behavior of the DUT

As is the case with many PMIC devices that sit in systems that adhere to functional safety standards, most DUTs have internal sensors to monitor various reference points inside the device, and a dedicated GPIO to alert the system of an error in one of those sensors. In this case, we assemble the following equipment:

  • An MSO capable of triggering off an SPI message from the computer to the DUT, and
  • A USB to SPI converter.

The setup is found in Figure 6.

Figure 6: A bench setup used for evaluating fault reaction and response

In both of these simplified cases, the validation engineer gains the ability to create an open loop regression test suite for testing new devices and boards with a common setup. A computer can script these sequences, take screenshots, and log data simultaneously. It is relatively simple to set up, provided your lab equipment is addressable and your validation group either purchases a suitable software license or opts to control the devices via Python. 

However, where this setup falls short is in modeling an actual embedded system. The assessed functionality is limited to the DUT and its actions in response to directed stimuli; it does not include the interaction of the DUT with the accompanying MCU in response to those stimuli. For that, we turn to a closed loop setup, or a system model.

Introducing a Closed, In-The-Loop Setup

Traditionally, validation engineers of ASIC-type devices focus on simply validating the device in an open loop fashion. For example, for a simple regulator, they are most likely concerned with:

  • How does the control loop function in the presence of a load step? Or,
  • How does the controller function with a varying input voltage? 

Both of these examples employ a simple oscilloscope and power supply, with an engineer performing a manual evaluation of the setup. However, in a mixed signal design that is highly integrated with a microcontroller, this approach is limited. To address this, we turn to a model of the system with the MCU in it. There is a wide variety of ways to accomplish this, including:

  • Model in the loop (MIL) or software in the loop (SIL), in which a software model of the system is made and exercised based on requirements; 
  • Processor, or MCU, in the loop (PIL), in which a processor is used to create stimulus and measure reactions based on the requirements of the system;
  • Hardware in the loop (HIL), in which the actual target hardware indicative of the end system is used.

In the rest of this article, we’ll consider a closed loop PIL approach which is typical of a system validation that is now becoming more commonplace at the device level.

An MCU offers the example test system a lot of freedom in how it evaluates the DUT. Among its many advantages:

  • It can offer breakpoints in the code execution, meaning that you can look for complex interactions and break when the MCU encounters them. These can come in the form of common hardware breakpoints or test ‘assertions.’
  • It has built-in peripherals such as high-speed timers, DACs, analog to digital converters, and a wide variety of communications interfaces to help exercise the DUT. 
  • It offers memory that allows the device to buffer results and store them, or print them to a UART for easy data logging.
  • It is largely independent of user interaction, meaning that once programmed it can run without being monitored for long periods of time, and sometimes even indefinitely. 
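The first and third advantages above can be combined into a simple construct: a test ‘assertion’ macro that, instead of halting in a debugger, logs the failing check over the UART and keeps a running tally, so a long unattended run produces a complete log. This is a sketch, not a production macro; `uart_puts` is a stand-in mapped to `fputs` for a host build:

```c
#include <stdio.h>

/* Stand-in for a real UART write; on the target this would push bytes to
 * the UART peripheral, on a host build it just goes to stdout. */
#define uart_puts(s) fputs((s), stdout)

static int test_fail_count;  /* running tally of failed assertions */

/* Test 'assertion': log the failing expression (via the # stringizing
 * operator) and keep executing rather than halting the run. */
#define TEST_ASSERT(cond)                              \
    do {                                               \
        if (!(cond)) {                                 \
            uart_puts("FAIL: " #cond "\n");            \
            ++test_fail_count;                         \
        }                                              \
    } while (0)
```

Because failures accumulate in `test_fail_count` instead of stopping execution, the firmware can run unattended and report a summary at the end of a regression pass.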

Quite possibly the most attractive part of an MCU is that evaluation boards that allow access to the functionality do not necessarily need to be the end target hardware and thus can be much less expensive and easier to program.

In our example throttle-by-wire system, the addition of the microcontroller allows us to introduce more complex test scenarios. Two examples of these more complex scenarios are:

  • Like our open loop fault reaction test, we can now interact with the microcontroller and test fault reaction and recovery of the PMIC/MCU system to an externally triggered fault, while varying the input voltage to the system.
  • We can also test the interaction between the MCU’s internal watchdog and the PMIC’s ability to reset the MCU in the case of a watchdog error.

Figure 7: Modeling our system using a PIL method

Now that we have the microcontroller and a concept of what a closed loop evaluation system can offer us, we need to discuss some strategies that go into creating firmware to facilitate a state driven test environment.

How to Expand Upon a Closed Loop Setup with a Programmable Device

Now that we’ve defined what a closed loop validation system is, our MCU firmware needs to be written so that it takes full advantage of the environment. This means the validation engineer needs to implement design patterns for testing that influence:

  • How the MCU/PMIC interaction controls execution order (here we introduce a concept called state driven testing); and
  • How the MCU/PMIC interaction can implement modularity, such that we’re able to take different execution paths in the same function. This specifically addresses the requirement that a mechanism or function demonstrate both pass and fail behavior.

Taking these into account, we’ll now discuss some embedded C-level constructs that can be used in order to control execution and to address our modularity requirement. Together, these will give us a great amount of flexibility in debugging and regression testing. 

State Driven Test Environment

First, we introduce the concept of a state driven test environment through a common design pattern found in digital and embedded systems: the state machine. A state machine is a design pattern that allows the designer to organize functions and behavior by defining and tightly controlling the states of a system. It is commonly found in the negotiation of Ethernet handshaking and in the internals of a CPU.

An example, found in Figure 8, is a state diagram that outlines the startup and initialization tests between the two devices in our example system, the PMIC and the MCU. 

Figure 8: A state diagram depicting initialization tests run on the MCU in a closed loop model

The diagram conceptually organizes the execution pattern into individual functions, with pass-fail criteria that allow us to control the flow of execution. This concept is extremely powerful in our validation environment because of how the system’s functionality is distributed between the PMIC and MCU. Otherwise, it would be difficult to understand where the PMIC (or any other ASIC, without a debugging environment) is in its internal processes.

This design pattern is borrowed from the Universal Verification Methodology (UVM). With this context, we can define a finite state machine as a computational model used to simulate sequential logic in a ‘stateful’ manner. It abstracts a complex series of events into a series of states that control execution flow.

The implementation is usually done in C and follows a design pattern similar to that in Figure 8. An example of its instantiation is shown in Figure 9.

Figure 9: An example design pattern of a state machine in C

The state machine is in control of the order of execution and driving a test to a specific operating point, an approach best suited for forcing the PMIC/MCU interaction through a series of defined steps with each state transition gated by a test assertion. 

For example, for the system to move from the initialization state to a communications interface state, the observed behavior has to match the expected behavior. If it does, the transition satisfies the ‘pass’ case. Otherwise, the test signals a failure and the ‘fail’ case can be validated.
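A minimal sketch of such a gated transition function is shown below. The state names and their ordering are illustrative, not the article’s actual Figure 9 code; the key idea is that every transition is gated by the pass/fail result of the test run in the current state:

```c
#include <stdbool.h>

/* Illustrative states: initialization, communications-interface check,
 * the main test loop, and a terminal error state. */
typedef enum { ST_INIT, ST_COMMS, ST_RUN, ST_ERROR } test_state_t;

/* Advance the machine one step; the transition taken depends on the
 * pass/fail result of the test executed in the current state. */
test_state_t next_state(test_state_t s, bool test_passed)
{
    if (!test_passed) {
        return ST_ERROR;            /* any failed check routes to error */
    }
    switch (s) {
    case ST_INIT:  return ST_COMMS; /* init passed: verify the interface */
    case ST_COMMS: return ST_RUN;   /* comms verified: enter the test loop */
    case ST_RUN:   return ST_RUN;   /* keep cycling the main tests */
    default:       return ST_ERROR; /* no exit from the error state */
    }
}
```

Keeping the transition logic in one pure function like this makes the execution order easy to audit against the state diagram, and easy to unit test on a host before flashing the MCU.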

Another example, shown in Figure 10, demonstrates how the state machine can control the order of execution to a finite end.

Figure 10: A state diagram depicting the startup sequencing of a PIL based system

The MCU needs to command the PMIC to dynamically change voltages (commonly referred to as dynamic voltage and frequency scaling, or DVFS), require the PMIC to detect a fault that resets the system, and then allow the system to recover from the faulted state. 

By implementing this concept as a state machine, depicted in Figure 11, the system can easily validate both the ‘pass’ case, in which the system recovers, as well as the ‘fail’ case in which the system goes to an error state. 

Figure 11: A state diagram depicting the fault and recovery sequencing of a PIL based system

However powerful state machines are at controlling execution, validating these requirements often means testing both a pass case and a fail case. And while an individual could simply copy their firmware for each case, adopting modularity in the design of their test cases lets them easily reuse work while addressing both modularity and program control. 

How to Simplify the Test Case Implementation in a State Machine 

As we alluded to in the previous section, our validation setup will often need to validate both the pass and fail paths. While there are several ways to address this, one overlooked feature in C is compile-time build options.

In production environments, build options (sometimes referred to as compile switches) are a powerful tool for creating modularity in firmware when building various embedded targets or multiple applications. Inside a disciplined organization, multiple people using the same compile switches can reconfigure an entire application by compiling large sections of code in and out, instead of creating, tracking, and supporting new firmware variants. 

However, in a state driven test environment, the goal is not memory efficiency but the ability to support a wide variety of test cases within a single design pattern. To demonstrate how powerful these switches can be, we present two test cases in which our embedded system model needs to perform a test of:

  • The watchdog (WDT), including both a simple window watchdog function and a challenge-response watchdog; and
  • An external pin toggle, in which the MCU commands the PMIC to toggle a GPIO and the MCU would be forced to acknowledge it.

In the case of the watchdog, the state diagram is depicted in Figure 12. We can easily reuse our main state machine to step the system to the watchdog interface validation test, and then use a compile switch to examine the simple window watchdog case, followed by the challenge-response watchdog case. 

Figure 12: Pseudocode for a WDT test with conditional compilation

Additionally, we could create a pass/fail compile-time switch that would validate the ‘pass’ test case and the ‘fail’ test case inside each function.
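A hedged sketch of what such conditional compilation might look like follows. The switch name and the XOR challenge transform are invented for illustration; a real PMIC defines its own challenge-response scheme and window timing:

```c
#include <stdbool.h>
#include <stdint.h>

/* Build-time selection of the watchdog variant under test; flip to 0 to
 * compile the simple window-watchdog check instead. */
#define WDT_CHALLENGE_RESPONSE 1

#if WDT_CHALLENGE_RESPONSE
/* Invented transform standing in for the PMIC's real challenge scheme. */
uint8_t wdt_expected_answer(uint8_t challenge)
{
    return (uint8_t)(challenge ^ 0xA5u);
}

/* Pass if the MCU's answer matches what the PMIC expects. */
bool wdt_test(uint8_t challenge, uint8_t answer)
{
    return answer == wdt_expected_answer(challenge);
}
#else
/* Window watchdog: the service kick must land inside the open window. */
bool wdt_test_window(uint32_t kick_ms, uint32_t open_ms, uint32_t close_ms)
{
    return kick_ms >= open_ms && kick_ms <= close_ms;
}
#endif
```

Because the same state in the machine calls `wdt_test` either way, the two firmware builds exercise the two watchdog variants without duplicating the surrounding test flow.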

In the case of the external pin, we simply create two test variants:

  • We toggle the pin high, then low, and observe the expected response.
  • We do not toggle the pin at all, simulating a pin stuck fault due to a solder short, and observe the expected response.

In each of these toggleable situations, demonstrated in Figure 13, the fault reaction is observed organically, and the compile switch is used to inject the fault. 

Figure 13: A state diagram depicting the external pin toggle test with an injected fault
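A sketch of the fault-injection switch for the pin test is shown below. The `INJECT_STUCK_PIN` name and the software pin model are illustrative; on the target, `pin_level` would be the real GPIO output register:

```c
#include <stdbool.h>

/* Flip to 1 at build time to simulate a pin stuck-at fault, e.g. a
 * solder short holding the line at a fixed level. */
#define INJECT_STUCK_PIN 0

static bool pin_level;   /* software stand-in for the real GPIO register */

/* The commanded action: in the normal build the pin really toggles; in
 * the fault build the toggle is compiled out and the pin never moves. */
void cmd_toggle_pin(void)
{
#if !INJECT_STUCK_PIN
    pin_level = !pin_level;
#endif
}

/* Observer side of the test: did the level change after the command? */
bool toggle_was_observed(void)
{
    bool before = pin_level;
    cmd_toggle_pin();
    return pin_level != before;
}
```

With `INJECT_STUCK_PIN` set to 1, `toggle_was_observed()` returns false, which is exactly the condition the MCU’s fault-reaction path must detect and report.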

However, this C construct is not without risk. The main dangers of overusing build switches are their complexity and the difficulty of documenting them, especially when they appear in various sections throughout the validation application. Overuse and poor discipline can create a tangle of decisions that is difficult to maintain, let alone hand off to another validation engineer. To combat this, we suggest that you:

  • Refactor the code often as new test cases and functionality are created, with a focus on making each function and test atomic so each compile switch is localized.
  • Document the flow of the test as you develop your validation firmware. Something as simple as a test flow chart, for example, can be invaluable when ensuring the firmware is designed properly. 


Validation of traditional, analog-based devices is becoming much more complex with the advent of highly integrated systems. Digital functions like watchdog timers, pin checking, and ADCs are finding their way into mixed signal devices and their functionality needs to be exercised as rigorously as the analog control loop. Using an open loop test setup, an engineer can get a jump on the validation using some simple scripting tools or more expensive, off-the-shelf ones. But quickly, they may find limitations in that approach, depending upon how deep their validation plan takes them. 

By implementing a state driven test concept in a closed loop system with a model of the target embedded device, you can achieve higher validation coverage and create a method to help identify and fix those hard-to-catch bugs before release! 
