Editor’s Note: The paper on which this article is based was originally presented at the 2019 IEEE International Symposium on Product Safety Engineering in San Jose, CA. It is reprinted here with the gracious permission of the IEEE. Copyright 2019 IEEE.

Introduction

Medical devices are increasingly designed with network interfaces to support interoperability. Interoperability interfaces enable medical devices to be composed into larger medical systems that include infrastructure components supporting networking, composite displays for operators, software applications providing workflow automation, and other elements. In addition, work in the research [11], [28], [8], [22], [36] and standards [1] communities is laying the foundations for safety, security, and risk management approaches for “systems of systems” of medical devices built using “medical application platforms” (MAPs). As defined in [11], a MAP is a safety- and security-critical real-time computing platform for (a) integrating heterogeneous devices, medical IT systems, and information displays via a communication infrastructure and (b) hosting application programs (“apps”) that provide medical utility via the ability to both acquire information from and update/control integrated devices, IT systems, and displays. Consortia [29], [17] are being organized to help support ecosystems of manufacturers [21] that cooperate to build asset bases of reusable components and rapid system development approaches aligned with a particular architecture.

It is sometimes difficult for manufacturers and regulators to use existing safety/security standards to adequately address the above development approaches and device/system characteristics. The primary medical device standards, such as ISO 14971 (risk management), ISO 13485 (quality management), IEC 62304 (medical device software lifecycle processes), and IEC 60601 (safety and essential performance for medical electrical equipment), are focused on conventional monolithic devices and do not explicitly address the unique challenges of interoperability, systems of cooperating components, or platform-based engineering approaches. More recent medical device security technical reports such as AAMI TIR 57, and security standards for connected devices such as UL 2900-1, address single devices with connectivity but do not explore system-of-systems or platform concepts.

An overall challenge is that well-established concepts of risk management, quality management, security, lifecycle processes, and safety/security/essential performance objectives all need to be extended and integrated to address medical device interoperability, interoperable medical systems, and medical application platforms. However, these concepts are for the most part addressed in stove-piped fashion in individual standards (i.e., ISO 14971 addresses risk management, ISO 13485 addresses quality management, etc.), and it is difficult for manufacturers and regulators to see (a) how interoperability issues cut across the current standards space and (b) how existing standards should be brought together to address interoperability-related features.

Figure 1 illustrates the theme of this paper: we argue that to support conformity assessment of safety/security of interoperable medical products, lifecycle process concepts should be enhanced to (a) address the unique aspects of planning, specifying, designing, realizing, and assuring interoperable products, and (b) guide manufacturers in weaving together concepts from existing standards on risk management, quality management, security, etc. Moreover, we argue that concepts such as architecture specifications (e.g., as found in ISO/IEC/IEEE 42010), managed reuse (e.g., as found in ISO/IEC 12207 Section 7.3), and product line engineering concepts (e.g., as found in the ISO/IEC 26550 series) must be utilized in lifecycle processes for interoperable products and that these concepts should receive greater attention in medical device standards development efforts. Multi-organization development (including risk management and assurance), lifecycle activities that guide interactions between organizations, and integration and reuse of components at arbitrary levels of abstraction in the system hierarchy are additional concepts that need to be supported in interoperable product lifecycle processes.

Figure 1: Interoperable product development lifecycle integration concepts

Some justification for our proposed approach is that safety standards such as IEC 61508 and its automotive specialization ISO 26262 use a development lifecycle approach for supporting conformity assessment for safety, where the flow of lifecycle activities indicates how many of the issues in the preceding paragraph should be addressed in a phased fashion as a product is developed.

The specific contributions of this paper are as follows:

  • We discuss concepts for designing lifecycle processes for interoperable medical products that can guide standards development activities, the design of regulatory guidance, and conformity assessment bodies as they develop lifecycle process concepts for this space,
  • We identify a general structure for individual lifecycle activities that we believe is useful for supporting conformity assessment of interoperable medical products,
  • We illustrate why lifecycle activities (which tend to follow a “waterfall” or “V-model” order in existing standards) may need to be presented in an alternative phasing to better support the topology of interoperable systems,
  • We summarize aspects of managed reuse and product line engineering processes that should be considered to address medical application platform concepts.

This paper does not propose a specific set of lifecycle activities. Rather the goal is to raise awareness of issues that might guide the development of lifecycle approaches in current standards efforts such as the AAMI/UL 2800 interoperability safety/security standards family, the AAMI HIT 1000 series, and ongoing efforts in the international standards community to address interoperable products. This goal is similar in spirit to our earlier paper [15] on challenges and directions for addressing risk management in interoperable medical devices and systems.

Lifecycle Stage Structure

As discussed in the introduction, lifecycle process descriptions are not prominently featured in medical device standards. ISO 13485 simply requires that the manufacturer “plan and develop the processes needed for product realization.” (Clause 7.3.2). IEC 62304 requires the manufacturer to document “the PROCESSES to be used in the development of the SOFTWARE SYSTEM” and “the DELIVERABLES of the ACTIVITIES and TASKS” (Clause 5.1.1). Then, the majority of the normative content of IEC 62304 consists of requirements to include various activities within the documented processes. In this way, IEC 62304 does not dictate a particular (set of) processes or development model, but it does require processes to be documented and it constrains the content of the processes (i.e., it requires certain elements to be included). This allows freedom for manufacturers to follow their own processes as appropriate for their products and organization, but it normalizes aspects of the processes deemed important for achieving safety and for supporting safety reviews.

We suggest that emerging interoperability standards take an approach similar to that of IEC 62304 (require processes to be documented, do not mandate particular processes, require certain activities to be accounted for in the documented processes). However, we advocate a more rigorous capture of activities, deliverables of each activity, and traceability between deliverables.

Figure 2 captures some of the important aspects of these suggestions based on the Process Reference Model (PRM) of ISO/IEC 12207 (“Software Lifecycle Processes”) Annex B. The black non-italicized text of Figure 2 is taken from Annex B of 12207; our proposed concepts are captured in the purple italicized text. The ISO/IEC 12207 PRM indicates that each primary activity within a process should have its purpose (not shown) and outputs described. Outputs can include production of an artefact (e.g., a software requirements document, an integration testing plan), a significant change of state (e.g., a security source code vulnerability analysis has been performed on the software and all found vulnerabilities have been removed), and meeting of specified constraints (e.g., release criteria for the software have been satisfied, testing has achieved coverage goals).

Figure 2: Structure of presentation of lifecycle activity

The extent to which existing medical and safety standards format their lifecycle activities according to the PRM varies significantly. For example, IEC 62304 lifecycle requirements do not adhere to the PRM in any significant way – they simply state tasks to be performed in each lifecycle phase (for example, see IEC 62304 Section 5.3 Software Architectural Design). In contrast, Figure 3 presents the template structure of ISO 26262 lifecycle phases, which illustrates a closer alignment with the PRM. ISO 26262-4 Section 7 System Design is a good example instantiation of the template, and it provides a nice point of comparison to the presentation style of a similar topic in IEC 62304 Section 5.3 mentioned above. For each subphase of the lifecycle phase (e.g., a “specification of the technical safety requirements” within the “Product Development: System Level” phase), the “Objectives” section provides a crisp statement of the subphase objectives (usually 2-3 objectives, each written in 1-2 sentences). The “Inputs to this clause/Prerequisites” section lists the ISO 26262 work products from other activities that are required for the current subphase (establishing dependences between subphases, which partially constrain their temporal ordering). “Further supporting information” identifies other optional ISO 26262 work products that might inform the current subphase. The “Requirements and recommendations” section has subsections that give the standard’s normative requirements for the different activities/tasks within the subphase. Finally, “Work Products” lists subphase outputs, i.e., the ISO 26262 work products that the subphase initiates, extends, or completes (accompanied by clause numbers of the sections that pertain to each work product).

Figure 3: Structure of ISO 26262 clauses for lifecycle processes

While in the past it may have been considered “overkill” to adhere to the ISO/IEC 12207 PRM, there are several reasons why we advocate that emerging standards presenting lifecycle processes for interoperable systems adhere to an enhancement of the ISO/IEC 12207 PRM. First, we suggest an enhancement to include an explicit statement of inputs required for the activity (i.e., reflecting dependence on other activities) as done in ISO 26262 (see the section x.3.1 in Figure 3). The inputs would typically be work products that result from earlier activities, along with any other preconditions that need to be met before the current activity can be carried out. In addition, Figure 2 indicates that the work products produced should be explicitly listed among the outputs of each activity. Other explicitly identified outputs might include the specific system element being addressed (e.g., the item, component, system, etc.) along with assurance case elements (discussed later).

It may be useful for the standard being developed to provide a summary enumeration of the various work products or information content that is expected to be produced and controlled across all of the development lifecycle phases. This is the approach taken by AAMI/UL 2800-1 (see Annex C), which also states traceability relationships between the artifacts. Some work products will be proprietary to the manufacturing organization (e.g., planning documents or the details of risk analysis) and evaluated during the conformity assessment process. Other work products (e.g., interface specifications, risk management summaries, or qualifying tests) will be disclosed to other organizations that use the product (e.g., as in AAMI/UL 2800-1 disclosures – see Annex D, or information needed to support IEC 80001 Responsibility Agreements).

Explicit statement of input and output work products is more important in the interoperability space due to the need to coordinate the exchange of information between organizations; the input to an activity carried out by one organization may depend on a work product produced by another organization (e.g., risk analysis of a component may depend on error propagation risk analysis of the platform on which the component is deployed or of a service component on which it relies; design of a component’s interoperability interface may depend on the interface specification of another component with which it intends to interoperate). Hand-offs of information between organizations are a theme of both AAMI/UL 2800‑1 (referred to as Disclosures – Annex D) and AAMI HIT 1000-1. It is important to note that many standards that present lifecycle processes explicitly note that the activities/tasks within the stated processes can occur in any order (or in parallel) as long as the dependences between the activities are observed. This relaxed-order approach, accompanied by an explicit statement of inputs and outputs, allows manufacturers to map the required activities on to their own processes in a flexible way while achieving the rigor indicated by the input/output dependences.
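The relaxed-order idea can be made concrete: if each activity declares its input and output work products, then any topological order of the induced dependence graph is a valid execution order. The following is a minimal sketch (activity and work-product names are invented for illustration, not taken from any standard):

```python
from graphlib import TopologicalSorter

# Hypothetical lifecycle activities, each declaring the work products it
# consumes (inputs) and produces (outputs). Names are illustrative only.
ACTIVITIES = {
    "item_concept_dev":    {"inputs": [],
                            "outputs": ["item_concept"]},
    "item_specification":  {"inputs": ["item_concept"],
                            "outputs": ["interface_spec", "risk_summary"]},
    "item_implementation": {"inputs": ["interface_spec"],
                            "outputs": ["item_impl"]},
    "item_assurance":      {"inputs": ["item_impl", "risk_summary"],
                            "outputs": ["assurance_case"]},
}

def activity_order(activities):
    """Derive a valid execution order from work-product dependences:
    activity B depends on activity A if B consumes a product A produces."""
    producers = {p: name for name, a in activities.items() for p in a["outputs"]}
    graph = {name: {producers[p] for p in a["inputs"] if p in producers}
             for name, a in activities.items()}
    return list(TopologicalSorter(graph).static_order())
```

Activities with no dependence path between them may be scheduled in either order (or in parallel), which is exactly the flexibility the relaxed-order approach grants to manufacturers' own processes.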

Assurance cases are increasingly being required by standards as a means to provide arguments supported by objective evidence that a product achieves its assurance goals. The explicit argument structure of assurance cases aims to make a manufacturer’s product assurance presentation easier to understand and evaluate in conformity assessment. AAMI HIT 1000-1 recognizes the additional utility of assurance cases for communicating product assurance properties between different stakeholders (e.g., a component manufacturer provides an assurance case for the component to an organization integrating the component into a HIT system). The component assurance case is incorporated into and used to justify the HIT system assurance case (see AAMI HIT 1000-1 Section 6 Figure 3). Similarly, AAMI/UL 2800-1 requires release criteria (see AAMI/UL 2800-1 Annex F) to be specified to summarize the primary assurance claims about a product. Accordingly, when designing process activities for interoperable products, it seems useful to consider how each activity contributes to the product assurance case (either in producing part of the argument claims or, as is more often the case, producing objective evidence for previously established claims).

ISO/IEC 15026-1 (Systems and software engineering: Systems and software assurance) Section 9 states the following:

Management of life cycle activities includes handling both the activities directly involving the assurance-related information and the effect that the assurance-related information has on other activities. This management is best performed when the top-level claims are considered from the beginning of concept development, used to influence all activities and systems […] and became an integral part of the overall engineering process. These activities could all be done only if the system and the body of information showing achievement of those claims were being developed concurrently.

That is, ISO/IEC 15026-1 argues that assurance cases should be built incrementally throughout the lifecycle. To support this approach, when defining lifecycle activities for interoperable products, we advocate some explicit accounting of the portions of an assurance case that are produced as an outcome of carrying out a lifecycle activity (see bottom right of Figure 2).
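As a rough illustration of incremental assurance-case construction (the claim/evidence structure below is a deliberate simplification and not the notation of any particular standard), each lifecycle activity could register the claims it establishes and the evidence it contributes, keeping still-unsupported claims visible throughout development:

```python
from dataclasses import dataclass, field

@dataclass
class AssuranceCase:
    """Simplified assurance case: claims keyed by id, each with evidence."""
    claims: dict = field(default_factory=dict)    # claim id -> claim text
    evidence: dict = field(default_factory=dict)  # claim id -> evidence items

    def add_claim(self, cid, text):
        # Typically contributed by specification-oriented activities.
        self.claims[cid] = text
        self.evidence.setdefault(cid, [])

    def add_evidence(self, cid, item):
        # Typically contributed by verification/assurance activities.
        if cid not in self.claims:
            raise KeyError(f"evidence {item!r} targets unknown claim {cid}")
        self.evidence[cid].append(item)

    def unsupported_claims(self):
        """Claims still awaiting objective evidence."""
        return [c for c, ev in self.evidence.items() if not ev]
```

The point of the sketch is the process shape, not the data structure: claims appear early (e.g., during item specification), evidence accumulates as later activities complete, and `unsupported_claims()` shows what the concurrent development of system and assurance body still owes.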

Additional concepts beyond those listed in Figure 2 may prove important. For example, it might be useful to explicitly list possible cross-organization interactions (categorized according to stakeholder type) needed to carry out an activity.

Topology-Oriented Lifecycle Flow

In [14], we noted that existing medical device standards often adopt a simple “topological vocabulary” to describe the abstract architecture of a medical product. For example, IEC 62304 uses the term software item to refer to “any identifiable part of a computer program”, and then has terms for the special cases of software system (an “integrated collection of software items organized to accomplish a specific function or set of functions” – note that the software system itself is a software item) and software unit (a “software item that is not subdivided into other items”). ISO 26262 uses the term item to indicate the units to which conformity assessment will be applied (i.e., items may have further internal structure, but if internal elements are not treated separately in the conformity assessment process then the item is not further decomposed into sub-items). These terms are also used to indicate the granularity at which development lifecycle processes are described, e.g., the development phases recognized by AAMI/UL 2800-1 include the “(software) item development phase”, and the “(software) item integration phase”.

We discussed in [14] that documenting and planning for hierarchical/containment relationships is made more challenging in modern medical systems because a product may be conceived as an interconnected collection of constituent sub-products, but the product itself may be incorporated as a component in a larger product context – sometimes in ways that were not anticipated when the product was produced. In some cases, these notions can be understood using concepts related to “systems of systems”, nested to an arbitrary depth. Accordingly, topological vocabulary for interoperable products needs to be recursive in nature to support the characterization of products with nesting of interoperable components to an arbitrary depth – enabling what may be considered a “system” at one level to be viewed as a “component” at another level.

We, therefore, suggested [14] that for conformity assessment purposes, the term interoperable item or simply item be used for an interoperable product that is either (a) a unit element with respect to assessed interoperability (i.e., it is not decomposed further into interoperable components) or (b) it is an integration of interoperable items with a specific purpose (e.g., it is an integration of interoperable components to form an interoperable medical system). Notice that this definition of item is recursive: an item can include (sub)-items, which in turn can include other items to an arbitrary level of nesting.
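The recursive item definition maps naturally onto a recursive data structure. A minimal sketch (item names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """An interoperable item: either a unit (no sub-items) or an
    integration of (sub)-items, nested to arbitrary depth."""
    name: str
    sub_items: list["Item"] = field(default_factory=list)

    @property
    def is_unit(self) -> bool:
        # Case (a) of the definition: not decomposed further.
        return not self.sub_items

    def flatten(self):
        """Yield this item and all nested (sub)-items, depth-first."""
        yield self
        for s in self.sub_items:
            yield from s.flatten()
```

For example, `Item("icu_system", [Item("hub", [Item("pump_driver")]), Item("monitor_app")])` captures how what is a "system" at one level is simply a "component" at the next level up.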

Conventional presentations of development models such as the V-model, even though they may actually support decomposition to an arbitrary depth, tend to emphasize two levels: a component level and a system level, where the complete functionality to be assessed for safety is known at the system level. Based on the reasoning presented above, we believe that for interoperable products it is more effective to take a slightly more abstract approach and present lifecycle development activities in terms of “item development” (where the item may be occurring at an arbitrary level in an architectural hierarchy) and then consider as options in the lifecycle activity descriptions the special cases where an item is either comprised of (sub)-items or is a unit (no further decomposition). This contrasts with the approach of IEC 62304 (see Section 5) which organizes activities in terms of software units and the software system as a top-level concept (i.e., IEC 62304 does not emphasize a recursive structure, though a careful reading and creative interpretation could accommodate it).

Figure 4 presents one possible arrangement of lifecycle activities that follows the recursive structure of the interoperable item concept described above. Several important activities, such as planning, do not show up explicitly here because the intent of this diagram is to emphasize the key activities of specification, implementation/realization, and assurance. The outer level of the diagram presents item development activities. In the case where an item is an interoperable unit, the inner (sub)-item integration activities are not relevant. However, when the item is comprised of sub-items, the inner item integration activities are followed.

Figure 4: Lifecycle activity flow oriented to abstract interoperable product topology

Note that in interoperable products, getting things to “plug together correctly and talk to each other” is often viewed as an engineering activity distinct from the concept of integrating components to achieve some combined system functionality. For example, one may simply aim to get an interoperable product communicating with a hub or platform without concern for the medical use case (system purpose); indeed, there may be multiple medical use cases supported by the connected components. The suggested treatment of item integration activities (bottom right of Figure 4) as a first-class concept rather than just a subactivity of “system integration” supports these observations.

Within the item development activities, a concept activity and a specification activity lead to the development of a specification of an item’s interoperability capabilities and associated safety and security properties. This includes the notions of the conventional concept phase (e.g., see IEC 61508-1 Table 1 and ISO 26262-3), such as gathering user needs and requirements engineering, but it places a greater emphasis on specifying the interface architecture of the product and decomposing requirements to contracts (interaction constraints) on interfaces. In addition, risk analysis information should also be captured on (or traced to) product interfaces to enable integrators of the product to leverage the risk analysis and risk controls of the product. As noted above, the item implementation phase consists of two cases – the case where the item is a unit or the case where the item consists of sub-items. In either case, the goal of the implementation phase is to produce a product whose behavioral properties and functional safety characteristics conform to the item specification. The item assurance activity demonstrates that an item implementation meets its specification. Ideally, the demonstration is supported by structured arguments and objective evidence in the form of an assurance case.

Within the item integration activities, a concept for the integration and an engineering-oriented architecture description for the internal interoperability contained in the item is developed. This includes developing testing/verification plans for the integration of the sub-items. Sub-items may originate within the manufacturing organization of the item or they may be acquired from external sources. In the case of an externally sourced item, information exchange between the item manufacturer and the sub-item manufacturer is necessary. Internally sourced sub-items are developed by recursively following the item development activities. In both cases, confirmation that the sub-items meet their specifications and that the specifications align with the integration specification of the enclosing item is necessary, but this is especially important for externally acquired sub-items due to the greater potential for misalignment of specifications when products cross organizational boundaries. Finally, the integration assurance activity demonstrates that interactions between sub-items can be carried out as required by the integration specification. As with item assurance, this demonstration is ideally supported by arguments and objective evidence in the form of assurance case elements. The elements of the assurance case presented in the integration activity may be incorporated into the assurance case for the enclosing item.

Product Line and Reuse Processes

When a manufacturer designs a component such as a medical device for interoperability, an implied goal is that the component should be (re)usable in different system contexts. This is especially true in the platform approach to system development, in which domain-relevant infrastructure and services are also designed for reuse across multiple system contexts. The software and systems engineering communities have developed lifecycle processes and development paradigms that specifically target planning and designing for reuse as an activity that is distinct from developing a specific application/system from a collection of reusable assets (see, e.g., [4], [31]).

  • Activities associated with planning for reuse and developing reusable platforms and components are typically referred to as domain engineering. These activities are typically undertaken by a manufacturer of a platform or by a consortium of manufacturers that jointly agree to cooperate to build a platform and to contribute to the collection of reusable assets.
  • Activities associated with using those reusable assets to develop a specific system are called application engineering.

Unfortunately, the distinction between domain engineering and application engineering is not explicitly recognized in most safety standards, including those within the medical device community. As one example of the many gaps this leaves, the absence of such standard content means there are no standard guidelines for performing hazard analysis, designing risk controls, or developing assurance arguments for platform components that by themselves have no specific medical intended use, but would benefit from having these tasks done once and for all and then shared and instantiated in system integration activities across different products built within the platform. In addition, the regulatory pathway for systematic reuse of platform assurance is currently not clear – leaving regulators in doubt as to how much “credit” should be given for a previously used and regulator-approved platform. Moreover, manufacturers and regulators are unclear about the processes to be followed to ensure that platform assurance is being reused properly and not “mis-reused” in a manner that would lead to safety/security problems (see Section 6 in [16]).

Figure 5 (inspired by diagrams of [31]) illustrates the distinct processes of domain engineering and application engineering. Domain engineering processes are associated with planning for reuse, including the development of a platform and its associated reusable asset base. The family of systems to be built using the reusable assets is called the product line. Within the product line, some system components and functions will remain the same across all systems. For example, for a product line associated with a particular medical application platform [11], all systems might be built using the same middleware, the same communication hub, the same process for defining interfaces, etc. These are called the product line commonalities. On the other hand, the systems may differ according to the specific medical devices they include, the specific application logic, the specific intended use, etc. These are called the product line variabilities. The systematic documentation of the commonalities and variabilities of a product line is referred to as the variability model of the product line. In the interoperability context, the points in an architecture at which systems can vary are typically the points where interoperability is designed. For example, to enable the platform to easily support varying sets of medical devices across different systems, the platform will be designed to support network-based interoperability interfaces that enable medical devices to be plugged into and unplugged from the platform.

Figure 5: Domain engineering and application engineering processes
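One way to make the commonality/variability distinction concrete is to record commonalities as fixed bindings and variability points as constrained choices, and then check each concrete system configuration against the model. The following is a sketch only; the platform, device, and application names are invented:

```python
# Commonalities: fixed across every system in the product line.
COMMONALITIES = {"middleware": "map_middleware_v2", "hub": "map_hub"}

# Variability points: each constrains what a concrete system may bind to it.
VARIABILITY_POINTS = {
    "devices": {"infusion_pump", "pulse_oximeter", "capnograph"},
    "app_logic": {"pca_safety_app", "monitoring_app"},
}

def check_configuration(config):
    """Return a list of variability-model violations for one concrete system."""
    problems = []
    for point, allowed in VARIABILITY_POINTS.items():
        chosen = set(config.get(point, set()))
        bad = chosen - allowed
        if bad:
            problems.append(f"{point}: {sorted(bad)} not permitted by the model")
    return problems
```

In a real product line the variability model would be far richer (cross-point constraints, cardinalities, whitelisted combinations), but even this shape shows how domain engineering can pre-compute the space of acceptable systems that application engineering must stay within.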

We argue that adequately addressing safety and security in the context of platform-based reuse and interoperability depends on clearly distinguishing the above concepts in lifecycle activities.

One important justification for this point of view stems from the fact that application engineering directly aligns engineering activities with a system’s medical intended use – and the intended use drives the identification of safety/security-related hazards associated with the intended use as well as much of the top-down risk management process. These “single system intended use” concepts connect easily with the processes and goals of existing medical safety and risk management standards. In contrast, domain engineering involves planning for not just one system with a single intended use, but an entire family of systems with possibly different intended uses that may eventually incorporate the reusable components or infrastructure. Key aspects of domain engineering in a safety/security-critical context include (a) identifying the scope of system intended use across many possible systems, (b) within this scope, identifying generic forms of hazards associated with system contexts and generic forms of faults that arise from the platform and component infrastructure, (c) designing and assuring architectural approaches and safety services that provide general fault identification, fault containment, fault notification, and mitigation solutions, and (d) defining methods and processes by which these general safety/security-related approaches are instantiated so that the previously generated generic assurance can be reused in the context of a specific system.

A second important argument for explicit domain engineering and variability modeling is that it is typically the variabilities in a product line that lead to unanticipated emergent properties as different systems are built. For example, if a common middleware or hub is used across all systems, that middleware can be tested once and for all and the assurance that specific communication capabilities are supported can be reused. However, when the middleware is configured with various medical devices or applications in different systems, one must be careful to assess whether unanticipated interferences between devices and applications arise and contribute to hazardous situations related to the overall system behavior. In particular, the domain engineering safety analysis process should seek to analyze the variability model to determine the possible ways in which unanticipated interferences might arise in different system variations and to design architectural and implementation approaches for the platform that either eliminate unanticipated interferences or detect and notify of them via dynamic checking. Here are some example strategies (that vary according to assurability and effectiveness of controls): the middleware may be designed to ensure that communication associated with one device does not interfere with that of other devices, the possible combinations of devices could be constrained (i.e., the variability reduced) by whitelisting individual devices or sets of devices that can be used together, the current communication latencies on the network could be monitored dynamically to raise an alert if the quality-of-service requirements for application-to-device communication are not being satisfied, etc.
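The last strategy above, dynamic quality-of-service checking, can be sketched as a simple latency monitor (channel names and bounds are invented for illustration):

```python
# Hypothetical per-channel quality-of-service bounds, in milliseconds.
QOS_BOUNDS_MS = {"pump_control": 50, "oximeter_stream": 200}

def check_latencies(observed_ms):
    """Return alert messages for channels violating their QoS bound.

    `observed_ms` maps channel names to currently observed latencies;
    channels without a declared bound are ignored.
    """
    alerts = []
    for channel, latency in observed_ms.items():
        bound = QOS_BOUNDS_MS.get(channel)
        if bound is not None and latency > bound:
            alerts.append(f"ALERT {channel}: {latency} ms exceeds {bound} ms bound")
    return alerts
```

A real platform monitor would of course run continuously and route alerts into the platform's fault notification services; the sketch only shows the shape of the dynamic check that the domain engineering analysis would specify.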

In the standards context, simple notions of reuse processes are presented in Clause 7.3 Software Reuse Processes of ISO/IEC 12207, which defines three different lifecycle processes that address many of the aspects of domain engineering described above: 7.3.1 Domain Engineering Process, 7.3.2 Reuse Asset Management Process, and 7.3.3 Reuse Program Management Process. A much more expansive presentation of product line and reuse concepts is given in the ISO/IEC 26550 series. Neither of these sources addresses safety and security issues, nor are they oriented to conformity assessment. However, they provide valuable standardized content that can form the basis for introducing (a) standardized lifecycle concepts for interoperability-based reuse and medical application platforms and (b) safety and security concepts within product line development.

We advocate that lifecycle concepts in the previous sections (in particular, those sketched for item development/item integration) be complemented by and linked to standardized lifecycle activities, artifacts, and assurance objectives for platform-based interoperable medical systems, drawing on the existing standard sources above for resources. In some areas in this space, there is already a good foundation of work. For example, Habli, Kelly, Oliveira, Braga, Papadopoulos, and colleagues have a sustained line of research related to safety analysis and assurance cases in the context of product lines and platform-based development (e.g., see [9], [6], [5]).

Goals for Development of Lifecycle Processes for Interoperable Medical Devices

In this section, we summarize the discussions in previous sections in a list of goals for the development of lifecycle processes for interoperable medical products. Not all of these issues need to be addressed in detail in the specification of process steps; notes, rationale, and other forms of guidance may be used to lead stakeholders to fully explore/address supporting issues.

Presentation of Process Phases

  • Consider a presentation of lifecycle stages that explicitly identifies information (work products) that are produced in the process of carrying out stages. Indicate how specific clauses/tasks contribute to (initiate, extend, complete, etc.) specific work products (consider ISO 26262 as an example).
  • Consider a presentation of lifecycle stages that explicitly captures work product inputs and outputs to clarify dependences between stages and information that may flow across organizations. Link work products to disclosures and responsibility agreements that indicate the sharing of information across organizations.
  • Consider a presentation of lifecycle stages that explicitly identifies assurance case elements (claims, evidence) planned or produced in each stage. Tie work products to evidence needed to support claims in assurance cases.

Architecture Issues

  • Support the organization, flow, and decomposition of lifecycle stages with vocabulary appropriate for a high-level description of topological relationships between products in an interoperable medical system [14]. The vocabulary should enhance the conventional notions of software system and item, as presented in IEC 62304. Organize lifecycle stages for products and their decomposition based on that vocabulary.
  • Ensure that lifecycle stages and flows are presented in a way that enables products to be addressed at an arbitrary level of an architectural hierarchy (e.g., supporting notions of systems of systems and the idea that, when a product is released, there may be no way of knowing how deeply it will be nested in a broader interoperable medical system context).
  • Incorporate steps leading to the development of a detailed architecture description that captures the details of interoperability interfaces and the structure of internal interoperability in terms of architecture views (e.g., as presented in ISO/IEC/IEEE 42010). Consider concepts from the architectural views defined in the ISO/IEC 10746 standard series on the Reference Model of Open Distributed Processing (RM-ODP) [27].
  • For platform-oriented products [11], [18], [3], [30], incorporate steps leading to the notion of a reference architecture [31, Chapter 11], and steps for establishing traceability from products that are instantiations of the platform to the platform reference architecture.
  • Work to identify and normalize patterns of interaction between interoperable products (e.g., [33], [27, Section 4.4]), so that process steps related to interaction risks, behavior specification, and testing can speak to interaction types with a shared understanding of those types across stakeholders.
  • Incorporate steps that decompose system/product requirements down to interface contracts that capture constraints on interactions between products and indicate the behavior of a product’s interoperability functions and interface-related risk controls. Consider incorporating guidance on using behavioral interface specification languages [13], [26] to precisely capture interaction constraints.
  • Incorporate consideration of design and risk control principles that use resource partitioning (e.g., the emerging use of micro-kernels and hypervisors [3], [10]) and safety architectures [25] to avoid unanticipated interference and emergent properties when individual products are integrated to form a system (an idea that goes back almost forty years to the foundational work of Rushby [35]).
  • Provide guidance on the use of architectural modeling to more precisely characterize medical product architectures (e.g., [12]).
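To make the interface-contract bullet above more concrete, a run-time flavor of such a contract can be sketched in Python. The parameter names, value ranges, and staleness bound below are hypothetical; a behavioral interface specification language [13], [26] would state such constraints declaratively rather than as executable assertions.

```python
# Sketch of a run-time-checked interface contract for a device's
# interoperability function. The SpO2 parameter names and bounds are
# hypothetical, illustrating the pre/postcondition style of constraint that
# a behavioral interface specification language would capture declaratively.

def publish_spo2(value_percent: float, sample_age_ms: int) -> dict:
    # Preconditions: properties consumers of this interface may rely on.
    assert 0.0 <= value_percent <= 100.0, "SpO2 must be a percentage"
    assert sample_age_ms <= 2000, "stale samples must not be published"
    message = {"type": "SpO2", "value": value_percent, "age_ms": sample_age_ms}
    # Postcondition: every published message carries the fields the
    # interface contract guarantees to consumers.
    assert {"type", "value", "age_ms"} <= message.keys()
    return message

msg = publish_spo2(97.5, sample_age_ms=120)
```

In a standards-aligned process, the assertions would be traced to interface-related risk controls documented in the product's architecture description.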

Risk Management Issues

Some of the biggest needs are to help the community develop a better awareness of how the proliferation of interoperable products will necessitate risk management activities to be distributed across organizations [15] and how to address products that may not have a specific medical purpose. Better descriptions of lifecycle processes for interoperable medical products can clarify how distributed risk management tasks are interleaved with other tasks through the product lifecycle.

  • Include steps that address the development of product medical purpose and technical purpose as well as the product’s role in supporting interoperability (i.e., one needs to move beyond the conventional and limited focus on a product’s medical intended use – an infrastructure product may not have a specific medical purpose, but will be (re)used in system(s) with medical purposes – which necessitates risk management of the incorporated infrastructure product). Such information should feed into an expanded version of the product’s intended use description as required in ISO 14971 Section 4.2.
  • Include steps that specify the boundaries of the product and the scope of the product risk management in terms of the product’s architecture description and variability model.
  • Include steps that support risk analysis (including various forms of hazard analysis) for an interoperable product to be performed by the product manufacturer and then results shared (e.g., focusing on risk-related aspects of the interoperability interfaces) to other organizations that integrate the product into an interoperable medical system. Tie the interface-related risk analysis information to interoperability interfaces as documented in the product’s interoperability architecture description. Include steps that help evaluate the extent to which some risk information may be held as proprietary while ensuring that information needed by integrators is not omitted in sharing.
  • Include steps guiding system manufacturers in the use of the risk analysis results of incorporated subproducts and in the assessment of the completeness and trustworthiness of those results.
  • Include guidance that helps stakeholders develop a shared understanding of common faults and errors associated with interoperability and variability mechanisms. Provide guidance on how these notions might drive bottom-up risk analysis of interoperable products and their integration.
  • Include steps leading to the identification of the product’s contribution to risk controls. In the case of platform infrastructure, this may include partial elements of risk controls (e.g., mechanisms for monitoring the timely delivery of data) that are then integrated with application-specific risk controls (e.g., monitoring data delivery information to ensure that a particular control signal for actuation of a patient’s state is carried out in a timely fashion, where the acceptable latencies are determined by the application requirements).
  • Include steps leading to the identification of how an interoperable system’s risk controls may be dependent on the risk controls of subcomponents and the assessment of the reliability specification of the subcomponent risk controls upon which the system depends.
  • Include steps that lead to an assessment of how all the different variabilities within a product (as indicated by its architecture description and variability specification) are addressed in the risk management process.
  • Include guidance on how risk analysis and risk control information may be captured in or traced to the interoperable product’s architecture description [32], [24].
  • Provide an approach that either integrates safety and security risk management into a unified risk management process or that clarifies the dependences and information flow between distinct safety risk management and security risk management processes.
  • Develop steps to ensure the trustworthiness and integrity of the shared risk management information (e.g., in the presence of product evolution/updates – the update cycle of the system may proceed at a different tempo than the update cycles of the incorporated interoperable products).
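The "partial risk control" bullet above can be illustrated with a small sketch in which platform infrastructure monitors delivery latency while the acceptable bound and the response to a violation are supplied by the application. The class, the 250 ms bound, and the callback are hypothetical examples, not prescribed mechanisms.

```python
# Sketch of a platform-level partial risk control (latency monitoring)
# completed by application-specific configuration. The latency bound and
# the alarm callback represent hypothetical application requirements.

class LatencyMonitor:
    """Platform infrastructure: observes message latencies and invokes an
    application-supplied alarm when the configured bound is exceeded."""

    def __init__(self, max_latency_ms: float, on_violation):
        self.max_latency_ms = max_latency_ms   # set by the application
        self.on_violation = on_violation       # application-specific control

    def observe(self, sent_ms: float, received_ms: float) -> bool:
        latency = received_ms - sent_ms
        if latency > self.max_latency_ms:
            self.on_violation(latency)
            return False
        return True

# Application instantiation: a hypothetical actuation loop requires <= 250 ms.
violations = []
monitor = LatencyMonitor(250.0, on_violation=violations.append)
monitor.observe(sent_ms=0.0, received_ms=100.0)   # within bound
monitor.observe(sent_ms=0.0, received_ms=400.0)   # triggers the alarm
```

The design point being illustrated is the split of responsibility: the platform supplies the monitoring mechanism once, and each application closes the risk control by supplying its own acceptable latency and response.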

Quality Management Issues

  • Include steps that link safety management for the interoperable product and all of its variabilities to quality management goals (e.g., in ISO 13485 Section 5).
  • Include steps leading to appropriate defect reporting and monitoring for interoperable products, tied to the architecture description and variability specification of the interoperable product. This includes “reporting out” to stakeholders that may incorporate the product and monitoring of reports from manufacturers that supply constituent products on which the interoperable product depends.
  • Include steps linking the planning of the development process (e.g., in ISO 13485 Section 7) to the development of interoperability architecture descriptions, the planning of assurance case construction, the tracking of shared risk management information, and other aspects of distributed development discussed previously.

Assurance Case Construction

  • Include steps throughout the development lifecycle (as suggested by ISO/IEC 15026-1 Section 9) leading to the planning of assurance case structures, the development of assurance case claims, and the construction of objective evidence supporting those claims. Tie the production of evidence to the work products indicated in lifecycle stage inputs and outputs.
  • While AAMI HIT 1000-1 indicates that assurance cases may be used to share safety/security-related information between stakeholders in integrated medical systems, this may create tension with a manufacturer’s need to protect proprietary information. Develop concepts that help manufacturers identify assurance case elements that need to be disclosed versus information that may be kept private.
  • Distributed development of assurance cases for interoperable products (especially across multiple organizations) inevitably leads to the need for a manufacturer to explicitly identify (a) the specifications/assumptions about other products on which the product under consideration relies and (b) the guarantees that the product provides to other products that incorporate it. Lifecycle processes should include steps that explicitly identify these assumptions/guarantees (and tie them to the notion of Information for Safety in ISO 14971), represent assumptions/guarantees in assurance cases, and ensure that assumptions made by products are discharged (i.e., guaranteed to be satisfied) when products are composed into a system (see [19], [20], [7], [34]). The notion of “safety element out of context” in ISO 26262-10 Section 9 may also provide inspiration.

For further discussion of assurance case considerations in interoperable medical systems, see [37], [23]. The work of Birch et al. on assurance case structures for ISO 26262 may also be useful [2].
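The assumption/guarantee discharge step described above can be sketched as a composition-time check. The component names and property strings below are hypothetical, and a real process would match rich behavioral specifications against one another rather than comparing string labels.

```python
# Sketch of discharging assume/guarantee obligations when products are
# composed into a system. Component names and properties are hypothetical;
# string labels stand in for behavioral specifications.

components = {
    "app": {"assumes": {"data_latency_ms<=250"}, "guarantees": {"alarm_on_stale_data"}},
    "hub": {"assumes": set(),                    "guarantees": {"data_latency_ms<=250"}},
}

def undischarged_assumptions(components):
    """Map each component to any of its assumptions that no component guarantees."""
    all_guarantees = set().union(*(c["guarantees"] for c in components.values()))
    return {name: c["assumes"] - all_guarantees
            for name, c in components.items()
            if c["assumes"] - all_guarantees}

# An empty result means every stated assumption is discharged by a guarantee.
assert undischarged_assumptions(components) == {}
```

This deliberately simplifies (e.g., it does not track which component discharges which assumption), but it conveys the obligation that lifecycle steps for composition would impose.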

Product Line Concepts

The overarching challenge for this topic is that both product line process concepts and safety process concepts are well developed, but to date there has been very little integration of the two in general (and almost none in the medical product domain). Therefore, the primary objective can be simply stated: take product line processes and inject into them the process concerns of the medical space, including quality management and risk management (for both safety and security).

  • Synchronize medical domain product topology vocabulary [14] with vocabulary from the product line space including reference architecture, variability model, commonalities, variabilities, and product instances.
  • Assess how concepts from each medical development lifecycle phase (concept, requirements, design, implementation, verification & validation, etc.) should be generalized to obtain domain engineering activities that address not a single product but a family of products.
  • Include lifecycle steps that establish refinement and traceability links between a product line reference architecture and the architecture of a product instance. Include steps that address criteria for the domain engineering assets (e.g., risk management and assurance artifacts and results for the generic product family) to be instantiated and reused in a particular product. Specifically, platform assurance must not be reused in situations where that is not warranted – one must show that a product properly aligns with a platform before reuse of platform assurance is appropriate.
  • Develop specific risk analysis techniques for reusable assets that can address the issue that a specific intended use may not be known.
  • Develop steps leading to the development, specification, and verification of general-purpose risk controls in platform infrastructure and appropriate instantiation/configuration of those controls for product instances.
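The alignment check called for above (showing that a product instance is a valid configuration of the platform's variability model before platform assurance is reused) can be sketched as follows. The feature names and the "exactly one network transport" constraint are hypothetical illustrations of the kind of rule a variability model would encode.

```python
# Sketch: validate a product instance's selected features against the
# platform's variability model before reusing platform-level assurance.
# Feature names and constraints are hypothetical.

PLATFORM_FEATURES = {"hub", "alarm-relay", "wired-net", "wireless-net"}

def valid_configuration(selected):
    """True only if the selection is a configuration the platform assessed."""
    selected = set(selected)
    if not selected <= PLATFORM_FEATURES:
        return False  # uses a feature the platform never assessed
    transports = selected & {"wired-net", "wireless-net"}
    # Constraints: the hub is mandatory; exactly one network transport.
    return "hub" in selected and len(transports) == 1

assert valid_configuration({"hub", "alarm-relay", "wired-net"})
assert not valid_configuration({"hub", "wired-net", "wireless-net"})
```

In practice such checks would be generated from a feature model, and a failed check would signal that product-specific (rather than reused) risk management and assurance work is required.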

Conclusion

This paper has argued that including content on interoperable product development lifecycle activities in emerging interoperability standards can be a useful means of conveying to manufacturers and conformity assessment bodies how cross-cutting issues that are currently addressed independently in separate standards – quality management, risk management, usability, security controls, architecture specifications, cross-organization information disclosure, and assurance arguments/evidence – should be integrated to address the safety and security of interoperable components, systems, and reusable platform-based infrastructure. This paper has focused on development lifecycle issues, but other dimensions of the entire use lifecycle, such as deployment, operation, and maintenance, clearly need to be addressed as well.

We have identified several issues that interoperability standards development activities should consider carefully:

  • Presenting lifecycle activities in a manner that supports the interoperability challenges including (a) more deliberate tracking of information and work products and dependences that arise between activities and organizations due to production/consumption of work products, and (b) increased emphasis on incremental production of assurance case content throughout lifecycle activities;
  • Rethinking the phasing and flow of lifecycle activities to better accommodate the recursive structure of solution topologies as things trend more towards “systems of systems”;
  • Explicitly incorporating notions of domain engineering and product line engineering to support significant trends to platform-based engineering approaches to medical systems.

Throughout the discussions, we have indicated existing standards content in other domains that may be useful as resources for developing lifecycle-related normative content and conformity assessment concepts for interoperable medical products.

This work was supported in part by the U.S. National Science Foundation (NSF) FDA Scholar-in-Residence award CNS 1565544.

References

  1. ASTM. F-2761: Medical Devices and Medical Systems – Essential safety requirements for equipment comprising the patient-centric integrated clinical environment (ICE) – Part 1: General requirements and conceptual model. Standard, 2009.
  2. J. Birch, R. Rivett, I. Habli, B. Bradshaw, J. Botham, D. Higham, P. Jesty, H. Monkhouse, and R. Palin. “Safety cases and their role in ISO 26262 functional safety assessment.” In F. Bitsch, J. Guiochet, and M. Kaâniche, editors, Computer Safety, Reliability, and Security, pages 154–165, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
  3. T. Carpenter, J. Hatcliff, and E. Y. Vasserman. “A reference separation architecture for mixed-criticality medical and IoT devices.” In Proceedings of the ACM Workshop on the Internet of Safe Things (SafeThings). ACM, November 2017.
  4. P. Clements and L. Northrop. Software Product Lines. Addison Wesley, 2002.
  5. A. L. de Oliveira, R. T. V. Braga, P. C. Masiero, Y. Papadopoulos, I. Habli, and T. Kelly. “Model-based safety analysis of software product lines.” IJES, 8(5/6):412–426, 2016.
  6. A. L. de Oliveira, R. T. V. Braga, P. C. Masiero, Y. Papadopoulos, I. Habli, and T. Kelly. “Variability management in safety-critical software product line engineering.” In New Opportunities for Software Reuse – 17th International Conference, ICSR 2018, Madrid, Spain, May 21-23, 2018, Proceedings, pages 3–22, 2018.
  7. E. Denney and G. Pai. “Towards a formal basis for modular safety cases.” In International Conference on Computer Safety, Reliability, and Security, pages 328–343. Springer, 2015.
  8. L. Feng, A. L. King, S. Chen, A. Ayoub, J. Park, N. Bezzo, O. Sokolsky, and I. Lee. “A Safety Argument Strategy for PCA Closed-Loop Systems: A Preliminary Proposal.” In V. Turau, M. Kwiatkowska, R. Mangharam, and C. Weyer, editors, 5th Workshop on Medical Cyber-Physical Systems, volume 36 of OpenAccess Series in Informatics (OASIcs), pages 94–99, Dagstuhl, Germany, 2014. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
  9. I. Habli and T. Kelly. “A safety case approach to assuring configurable architectures of safety-critical product lines.” In H. Giese, editor, Architecting Critical Systems, pages 142–160, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.
  10. S. Harp, T. Carpenter, and J. Hatcliff. “A reference architecture for secure medical devices.” Association for the Advancement of Medical Instrumentation (AAMI) Biomedical Instrumentation and Technology, September 2018.
  11. J. Hatcliff, A. King, I. Lee, M. Robkin, E. Vasserman, A. MacDonald, S. Weininger, A. Fernando, and J. M. Goldman. “Rationale and architecture principles for medical application platforms.” In ICCPS, 2012.
  12. J. Hatcliff, B. R. Larson, J. Belt, Robby, and Y. Zhang. “A unified approach for modeling, developing and assuring critical systems.” In T. Margaria and B. Steffen, editors, Leveraging Applications of Formal Methods, Verification and Validation. Modeling, pages 225–245, Cham, 2018. Springer International Publishing.
  13. J. Hatcliff, G. T. Leavens, K. R. M. Leino, P. Müller, and M. Parkinson. “Behavioral interface specification languages.” ACM Comput. Surv., 44(3):16:1–16:58, June 2012.
  14. J. Hatcliff and E. Vasserman. “Topological vocabulary for supporting conformity assessment of interoperable medical products.” 2019. Submitted for publication.
  15. J. Hatcliff, E. Y. Vasserman, T. Carpenter, and R. Whillock. “Challenges of distributed risk management for medical application platforms.” In 2018 IEEE Symposium on Product Compliance Engineering (ISPCE), pages 1–14, May 2018.
  16. J. Hatcliff, A. Wassyng, T. Kelly, C. Comar, and P. L. Jones. “Certifiably safe software-dependent systems: Challenges and directions.” In Proceedings of the Future of Software Engineering (ICSE FOSE), pages 182–200, 2014.
  17. ICE Alliance. http://www.icealliance.org.
  18. M. Kasparick, M. Schmitz, B. Andersen, M. Rockstroh, S. Franke, S. Schlichting, F. Golatowski, and D. Timmermann. “OR.NET: A service-oriented architecture for safe and dynamic medical device interoperability.” Journal of Biomedical Engineering/Biomedizinische Technik, 2018.
  19. T. Kelly. “Concepts and principles of compositional safety case construction.” Contract Research Report for QinetiQ COMSA/2001/1/1, 34, 2001.
  20. T. Kelly. “Using software architecture techniques to support the modular certification of safety-critical systems.” In Proceedings of the 11th Australian workshop on Safety critical systems and software-Volume 69, pages 53–65, 2006.
  21. Y. J. Kim, S. Procter, J. Hatcliff, V.-P. Ranganath, and Robby. “Ecosphere principles for medical application platforms.” In IEEE International Conference on Healthcare Informatics (ICHI), 2015.
  22. A. King, S. Chen, and I. Lee. “The MIDdleware Assurance Substrate: Enabling strong real-time guarantees in open systems with openflow.” In IEEE Computer Society Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing (ISORC). IEEE, 2014.
  23. A. L. King, L. Feng, S. Procter, S. Chen, O. Sokolsky, J. Hatcliff, and I. Lee. “Towards assurance for plug & play medical systems.” In Proceedings of the 34th International Conference on Computer Safety, Reliability, and Security – Volume 9337, SAFECOMP 2015, pages 228–242. Springer-Verlag New York, Inc., 2015.
  24. B. Larson, J. Hatcliff, K. Fowler, and J. Delange. “Illustrating the aadl error modeling annex (v.2) using a simple safety-critical medical device.” In Proceedings of the 2013 ACM SIGAda Annual Conference on High Integrity Language Technology, HILT ’13, pages 65–84, New York, NY, 2013. ACM.
  25. B. R. Larson, P. Jones, Y. Zhang, and J. Hatcliff. “Principles and benefits of explicitly designed medical device safety architecture.” Biomedical Instrumentation & Technology, 51(5):380–389, 2017. PMID: 28934584.
  26. B. R. Larson, Y. Zhang, S. C. Barrett, J. Hatcliff, and P. L. Jones. “Enabling safe interoperation by medical device virtual integration.” IEEE Design Test, 32(5):74–88, Oct 2015.
  27. P. Linington, Z. Milosevic, A. Tanaka, and A. Vallecillo. Building Enterprise Systems with ODP. Chapman and Hall, 2011.
  28. MDPnP Program. “OpenICE – Open-Source Integrated Clinical Environment.” 2015. https://www.openice.info.
  29. OR.NET e.V. OR.NET e.V. – safe, secure, and dynamic networking in the OR, 2017.
  30. J. Plourde, D. Arney, and J. M. Goldman. “OpenICE: An open, interoperable platform for medical cyber-physical systems.” In 2014 ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), pages 221–221, April 2014.
  31. K. Pohl, G. Böckle, and F. van der Linden. Software Product Line Engineering: Foundations, Principles, and Techniques. Springer, 2005.
  32. S. Procter and J. Hatcliff. “An architecturally integrated, systems-based hazard analysis for medical applications.” In 2014 Twelfth ACM/IEEE Conference on Formal Methods and Models for Codesign (MEMOCODE), pages 124–133, Oct 2014.
  33. V. Ranganath, Y. J. Kim, J. Hatcliff, and Robby. “Communication patterns for interconnecting and composing medical systems.” In 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2015, Milan, Italy, August 25-29, 2015, pages 1711–1716, 2015.
  34. J. Rushby. Modular certification. Technical report, Computer Science Laboratory, SRI International, Menlo Park, CA 94025, US, September 2001.
  35. J. M. Rushby. “Design and verification of secure systems.” Operating Systems Review, 15, 1981.
  36. E. Y. Vasserman and J. Hatcliff. “Foundational security principles for medical application platforms.” In WISA, 2014.
  37. Y. Zhang, B. Larson, and J. Hatcliff. “Assurance case considerations for interoperable medical systems.” In B. Gallina, A. Skavhaug, E. Schoitsch, and F. Bitsch, editors, Computer Safety, Reliability, and Security, pages 42–48, Cham, 2018. Springer International Publishing.

About The Author

John Hatcliff

Dr. John Hatcliff is a University Distinguished Professor and Lucas-Rathbone Professor of Engineering in the Computer Science Department at Kansas State University. His research addresses challenges in safety- and security-critical systems, interoperable medical systems, component-oriented and platform-based development, and static analysis & verification of critical systems. His recent research has been funded by the US Army, the US Air Force, DARPA, the US Department of Homeland Security, Collins Aerospace, and the Software Engineering Institute.
