The growth of connected products and the Internet of Things (IoT) presents many challenges and risks to manufacturers and developers. As data breaches continue to grab headlines, manufacturers and consumers alike are thinking about cybersecurity to ensure the safety of a device and user data. Indeed, as technology rapidly evolves, the industry has responded to cybersecurity concerns by creating standards and regulations such as ANSI/UL 2900, IEC 62443, the NIST framework and Common Criteria.

While mitigating cyber threats and assessing products for security is critical, interoperability is another component that must be addressed in an increasingly connected world. The connected world relies on products to exchange, share, and interpret data. Interoperability assessments are essential for both security and performance: they help ensure that systems and products can form an integrated ecosystem and communicate and interact with one another effortlessly.

But interoperability assessments pose their own challenges. Each manufacturer uses its own protocols, so it can be difficult to ensure that final products work together in their intended environments. Future updates, security patches, device upgrades, and user expectations must also be considered.


Standards

One of the biggest challenges when it comes to designing and manufacturing IoT-enabled products is that, unlike cybersecurity standards, there are no set regulations or guidelines in place that focus solely on interoperability. Many within the industry are talking about the importance of interoperability when it comes to IoT-enabled devices, but standards and regulations to address the concern are still missing. The most recent IoT standard issued by ISO/IEC focused on reference architecture, as opposed to interoperability.

With a lack of clear standards addressing interoperability, risk assessment, testing, and action for this component of connected products are left to each individual manufacturer. How exactly can this be navigated? What must be considered and how can interoperability concerns be addressed? There are several things to keep in mind as you attempt to answer these questions.


Risk Considerations

Any IoT-enabled device or connected product can encounter any number of risks once it is brought to market and introduced into its intended environment. From the network to other devices to the overall ecosystem, any single device may come into contact with several other products and components. This makes it extremely important to keep in mind the following factors, each of which has the potential to impact your device:

  • Other devices connected to the network, as well as their software, origins, and reliability
  • Access control through the network and other connected devices
  • Potential disruptions to the connected ecosystem
  • Default credentials or those that are hard-coded
  • Vague or imprecise paths for updating legacy firmware
  • Open ports, which may lead to data breach vulnerabilities
  • Interference from other products, signals or electronics
  • Other devices and networks that have cybersecurity issues or concerns
  • Performance as the device interacts with others in its intended environment

As this list illustrates, many factors must be considered to ensure connected products and networks work together securely, without sacrificing performance. Performance is key to a satisfactory user experience and to a device that successfully meets its goals. In live environments, a device needs to perform well to be a success. A manufacturer that does not take all of this into consideration can seriously damage its brand and reputation by producing faulty products.
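Several of the items in the list above, such as default credentials, unexpected open ports, and a missing firmware-update path, can be screened mechanically early in an assessment. The sketch below is illustrative only: the device-profile fields ("credentials", "open_ports", and so on) are hypothetical and not drawn from any standard schema.

```python
# Illustrative screen for common IoT risk factors. All profile field names
# and finding strings here are invented for the example.

COMMON_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def screen_device(profile):
    """Return a list of human-readable risk findings for one device profile."""
    findings = []
    # Default or hard-coded credentials left in place
    if tuple(profile.get("credentials", ())) in COMMON_DEFAULTS:
        findings.append("default or hard-coded credentials")
    # Open ports that were not part of the expected design
    expected = set(profile.get("expected_ports", []))
    for port in profile.get("open_ports", []):
        if port not in expected:
            findings.append(f"unexpected open port {port}")
    # No documented path for updating legacy firmware
    if not profile.get("firmware_update_url"):
        findings.append("no documented firmware update path")
    return findings

if __name__ == "__main__":
    device = {
        "credentials": ("admin", "admin"),   # factory default never changed
        "open_ports": [22, 23, 80],
        "expected_ports": [80],
        "firmware_update_url": "",           # no update path documented
    }
    for finding in screen_device(device):
        print(finding)
```

Even a simple checklist like this catches the most common issues before formal risk assessment begins; in practice the findings would feed a fuller risk register.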

To address these risks and help ensure the interoperability of connected devices, manufacturers will need to assess and test their products following a model that leads them through a full cycle of risk assessment.


Assessment

In the world of cybersecurity analysis, information security management systems utilize a four-stage model that is also common in business and scientific fields. This model, referred to as “Plan, Do, Check, Act” (PDCA), is an iterative four-stage cycle for continuous improvement. Each step in the cycle offers valuable insights and information to help mitigate risk. Within the information security management realm, the cycle is used in the following way:

  • Plan: Identify improvement opportunities within a product and its systems. Evaluating the current process and pinpointing causes of failures will allow you to mitigate risk and address issues in an effective manner.
  • Do: Conduct evaluations and assessments; collect analytics and data; and document any issues and failures. It is important to keep all the information on hand for redesign considerations, as well as for use in future product development and initiatives.
  • Check: Review and analyze the results from the previous stages. Once the analysis is complete, identify what improvements still need to be made, or confirm whether corrections from previous tests were successfully carried out. If issues remain, return to the “Plan” and “Do” phases until the improvements have been achieved. As with the other steps, documentation is important.
  • Act: Implement changes for areas that did not work and continue any practices that did. Iterate through the PDCA cycle until the product meets its stated requirements.
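The four steps above can be sketched as a loop. This is a minimal, hedged illustration, not a prescribed API: `plan`, `do`, `check`, and `act` are placeholder callables standing in for whatever your organization's actual processes are.

```python
# Minimal sketch of the Plan-Do-Check-Act cycle as a loop over supplied
# callables. The callables and the demo scenario below are illustrative.

def pdca(plan, do, check, act, max_cycles=10):
    """Repeat the cycle until check() reports no remaining issues."""
    for cycle in range(1, max_cycles + 1):
        actions = plan()          # Plan: identify improvement opportunities
        results = do(actions)     # Do: run evaluations, collect data
        issues = check(results)   # Check: analyze results for open issues
        if not issues:
            return cycle          # requirements met
        act(issues)               # Act: implement changes, then re-plan
    raise RuntimeError("requirements not met within the allotted cycles")

if __name__ == "__main__":
    outstanding = ["pairing fails on legacy firmware", "telnet port left open"]
    cycles = pdca(
        plan=lambda: list(outstanding),
        do=lambda actions: actions,
        check=lambda results: results,
        act=lambda issues: outstanding.pop(),  # each cycle resolves one issue
    )
    print(f"requirements met after {cycles} cycles")
```

The essential property is the loop itself: unresolved issues from "Check" always feed back into another "Plan" pass rather than being dropped.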

This same methodology can be valuable when assessing connected products for interoperability concerns. It provides an overall roadmap that allows identification, evaluation, analysis, and implementation to address key issues related to interoperability. It also allows for continued assessment and improvement over time, allowing products to be updated to better meet consumer needs and demands as the market evolves.

The “Plan” and “Do” phases can require a good deal of time and effort. The steps involved will vary depending on your product, its intended use, and the environment in which it will be used. Different considerations apply, for example, to a medical device intended for use in an operating room than to consumer electronics used in the home, and different steps would be taken for a vehicle or large server than for a connected lighting product or tablet.

Planning is the first step in most assessments. Taking that first step and following through with the “Plan, Do, Check, Act” model will allow for a natural process that then leads to active interoperability testing.


Testing

Start the testing process by identifying the test types that suit the product and its identified risks. This might include assessments for performance, security, compatibility, or a combination of these, resulting in an ad hoc testing approach tailored to the product. Each IoT device is unique and requires its own considerations and test plan, so account for the specific product, its intended environment, and its intended use as you develop the plan.

Testing should include a mix of both automated and manual methods, as well as negative testing to complement positive evaluations. Use the identified risks and appropriate tests to develop a full test plan prior to conducting any tests. A test plan should include the objectives, resources, and processes for testing the product, as well as a detailed workflow for completing the evaluations and assessments. Additionally, when developing test plans, it is a good idea to create a repository that can serve as a reference for future products, as well as for future updates or upgrades to the device being tested.
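One lightweight way to make a test plan reusable, as suggested above, is to capture it as structured data in a small repository. The sketch below assumes invented field names and a hypothetical product; it is not a standard test-plan format.

```python
# Capturing a test plan as structured data so it can be stored and reused
# for future products or device updates. Field names are illustrative.

from dataclasses import dataclass, field, asdict

@dataclass
class TestPlan:
    product: str
    objectives: list          # what the testing must demonstrate
    resources: list           # equipment, environments, people
    workflow: list = field(default_factory=list)  # ordered test steps

repository = {}  # product name -> serialized plan, kept for later reference

def register(plan):
    repository[plan.product] = asdict(plan)

if __name__ == "__main__":
    register(TestPlan(
        product="smart-thermostat-v2",   # hypothetical device
        objectives=["interoperate with common home routers"],
        resources=["network simulator", "two reference routers"],
        workflow=["simulation", "usability", "performance",
                  "regression", "security"],
    ))
    print(repository["smart-thermostat-v2"]["workflow"])
```

Because the plan is plain data, the same record can be diffed against the plan for the next firmware revision, which is exactly the "reference for future updates" role described above.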

When it is time to put the test plan into action, there are several evaluations that can be effective in testing for interoperability issues.

  • Simulation/Automation Testing: This method utilizes an imitation, or simulation, of operations to assess how a device’s key characteristics (behavior, function, physical properties) will perform when exposed to certain risks. During the test, a simulation emulates a real environment or hardware. A simulated environment allows you to evaluate scale, security, and reliability while accounting for other devices, traffic, interference, data loads or other concerns. This approach allows you to assess a connected product or app without using real boards or servers, making it effective for large environments. Additionally, it allows for specific test conditions to help identify bottlenecks or errors. While it requires basic programming knowledge, available templates and GUI tools can help replicate specific tests.
  • Usability Considerations: When it comes to IoT-enabled products, usability or human factors can often be overlooked during the prototyping phase. Yet, these factors can be at odds with certain aspects of a product, leading to an unsatisfying user experience and, ultimately, product failure once it hits the market. For this reason, usability evaluations, which account for the end-user and consider human interaction as opposed to machine interaction, can be a valuable tool. This testing includes running assessments for usability within a connected environment to help ensure that a product meets consumer expectations and needs, resulting in an enjoyable, seamless user experience. This will include ensuring the interoperability of the device with other connected products, networks, and overall IoT infrastructure. Often, this can be called out-of-box testing.
  • Performance: Testing a device’s overall performance is one of the most straightforward assessments to make when developing any product. While there is no preset standard protocol for performance testing and evaluation, this assessment consists of validating performance across networks in a simulated real-world environment. Tools such as JMeter, or simply crowdsourcing a large open alpha or beta test, can identify weak points in a product and allow for the collection of real-world data without damaging your brand.
  • Regression Testing: It is important to make sure that previously developed (and tested) software performs the same once it has been altered or interfaces with other software. This is especially important with connected products as they begin to interact with each other. It is also critical when new features are added during the development phase and to ensure products will continue to operate with updates and upgrades in the future. Regression testing plays a key role in identifying new bugs that were created after updates and changes. It also ensures enhancements, configuration changes, and new components will not negatively impact a device or its operations. Regression testing can often be automated and can lead to additional testing depending on the results. Regression passes are essential in having a product that performs as expected while still receiving regular updates.
  • Security Evaluations: Data breaches are an ongoing concern for connected products. By their nature, these devices are vulnerable to cybersecurity issues due to the connected nature of IoT as well as other cyber concerns such as design flaws and unencrypted communications. For these reasons, it is important to evaluate connected products for security issues. This will not only keep data secure but also make sure the product is not infecting other devices in its environment, a key component of interoperability.

In the U.S., ANSI/UL 2900 was adopted in 2017 for cybersecurity purposes. It applies to network connectable products, testing for vulnerabilities, software weakness, and malware. The standard includes:

  • Requirements regarding the risk management process for software developers related to their product;
  • Methods by which a product must be evaluated and tested for the presence of vulnerabilities, software weaknesses and malware; and
  • Requirements regarding the presence of security risk controls in the architecture and design of a product.
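Returning to the regression-testing technique described above, the core idea is simply to re-run the assertions that passed before a change and treat any new failure as a regression. A minimal sketch follows; `parse_reading()` and its `temp=21.5C` wire format are invented stand-ins for any device-facing function that an update might touch.

```python
# Regression-testing sketch: the suite below represents cases that were
# green before an update. A failure after the update means the change broke
# previously working behavior. The function and format are hypothetical.

def parse_reading(raw):
    """Parse a hypothetical 'temp=21.5C' sensor reading into a float."""
    key, _, value = raw.partition("=")
    if key != "temp" or not value.endswith("C"):
        raise ValueError(f"unrecognized reading: {raw!r}")
    return float(value[:-1])

def run_regression_suite():
    # Previously passing cases, re-run verbatim after every change.
    assert parse_reading("temp=21.5C") == 21.5
    assert parse_reading("temp=-3.0C") == -3.0
    try:
        parse_reading("hum=40%")
    except ValueError:
        pass  # unknown keys were rejected before; they still must be
    else:
        raise AssertionError("unknown key should be rejected")

if __name__ == "__main__":
    run_regression_suite()
    print("regression suite: pass")
```

Suites like this are the natural candidate for automation: run them on every firmware build so that updates and new features cannot silently break existing interoperability behavior.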


Review and Analysis

Once the assessment phase is complete, the relevant data from the tests and evaluations must be reviewed and analyzed. The data may show that a product is ready for the next stage of development, possibly even production and distribution. The information may also illustrate the need to continue fine-tuning a product, which, in turn, will require a return to the planning and testing phases. Additional assessments and testing will need to be completed until the data and test results fall within an acceptable range of risk. However, even if that next phase is market distribution, the task of interoperability assessment and testing is not over.
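The review step can be thought of as comparing each test area's residual risk against an acceptance threshold: anything above it loops back to the Plan and Do phases. The areas, scores, and threshold below are invented for illustration; real programs define these in their own risk-assessment criteria.

```python
# Illustrative review step: decide whether to proceed or loop back based on
# residual risk per test area. All numbers here are made up for the example.

def review(results, threshold=0.2):
    """Return the areas whose residual risk still exceeds the threshold."""
    return [area for area, risk in results.items() if risk > threshold]

if __name__ == "__main__":
    results = {"performance": 0.10, "compatibility": 0.15, "security": 0.35}
    rework = review(results)
    if rework:
        print("return to Plan/Do for:", ", ".join(rework))
    else:
        print("acceptable risk range reached; proceed to the next stage")
```

The point is not the arithmetic but the decision structure: a documented, repeatable rule for when results are "acceptable" keeps the review step from becoming a judgment call made differently on every cycle.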

Once a product is on the market, it still requires evaluation for security and interoperability purposes. In a world of constantly changing technology, the product will need to be evaluated to ensure its interoperability status is maintained. Manufacturers and developers must plan on issuing updates, upgrades, and patches on a regular basis to address security and performance. This will help protect against developments in the industry, new software platforms, emerging viruses, malware and other threats, and new technology from competitors that may disrupt existing ecosystems.


Conclusion

From consumer electronics to healthcare to the automotive industry, connected products are part of almost every sector and will only continue to expand. As the IoT grows and more connected products are developed, it is increasingly crucial that these devices coexist in a way that is secure and safe, without sacrificing usability, performance, or the user’s experience. Following a standard evaluation process is one step manufacturers can take to help protect their devices, their reputation, and their brand by offering a product that is secure and provides an effortless, smooth, and enjoyable experience.
