Incorporating Ethical Considerations into the Design Process

Prioritizing Responsible Innovation Via End-User Values

We live in a time during which new products and systems are being introduced at a dizzying pace. Many of these products and systems rely on artificial intelligence and machine learning (AI/ML) techniques for analysis, decision-making, and operation, leading to more functional, autonomous, and capable systems that are used in all spheres of human activity.

We can hardly make a phone call or a financial transaction, perform our work, attend classes remotely, travel, watch television, or do much else without using systems that rely on AI, at least to some degree. Because AI-based systems are so prevalent in our daily lives, a level of trust in them is needed, and that means they must reflect our human values and respect regulatory obligations.

However, the growing use of artificial intelligence systems brings with it a growing potential for misuse and mistakes, which can harm users in disparate ways and lead to a lack of trust in these systems. A reason for this is that AI is driven by algorithms that, while invisible to users, deeply affect end-user data, identity, and values, leading to potential ethical conflicts.

Ethics are the broad principles that govern behavior and drive individuals’ interpretations of the world. The ethics related to AI technologies are complex, but they are key to preventing harm and ensuring end-user trust in a system.

Trust is degraded when, despite the best intentions of a manufacturer, inadequate effort is made during product development and systems design to analyze and test how end-users will interpret a product, service, or system in light of their own values. In such cases, the design will, in all likelihood, prioritize the values of its creators over those of end-users.

The Importance of Values-Based Engineering in Building Trust in Product Design

Constructing a rigorous methodology that reduces harm to users, earns their trust, and thereby promotes innovation has proven challenging. That’s because products such as voice assistants in our homes, driver assistance systems in our cars, automated production systems in our factories, and a litany of other autonomous and intelligent systems have become so pervasive that their growth is outpacing society’s ability to identify and address the ethical concerns that accompany their perception and use.

Compounding this is the fact that they are also being used by diverse populations and in geographies with differing values and levels of technological sophistication.

Therefore, there is an urgent need to ensure that the design and development of new products and systems take place using a rigorous values-oriented, applied ethics methodology that complements traditional systems engineering practices while also supporting human and social values.

Such a values-based engineering (VbE) methodology will help enable all relevant human-centric, values-driven issues to be defined, prioritized, addressed, and integrated with functional requirements during systems design. It can help ensure that these non-functional requirements are treated as key performance indicators (KPIs) that are just as relevant and important as traditional KPIs such as operating speed and energy efficiency.
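As a purely illustrative sketch (not drawn from IEEE 7000-2021 itself), the idea of tracking value-based requirements alongside traditional KPIs in a project’s requirements tooling could look roughly like the following; every name, field, and threshold here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """A measurable indicator tracked during system design (hypothetical schema)."""
    name: str
    kind: str          # "functional" or "value-based"
    target: str        # acceptance criterion, stated in measurable terms
    measured: str = "" # latest observed result, empty until evaluated

# Traditional functional KPIs sit side by side with value-based ones.
kpis = [
    KPI("response_latency", "functional", "95th-percentile response under 200 ms"),
    KPI("energy_efficiency", "functional", "average draw below 5 W"),
    KPI("transparency", "value-based", "user can see why a recommendation was made"),
    KPI("privacy", "value-based", "no personal data leaves the device without consent"),
]

def open_items(kpis):
    """Return the names of KPIs that still lack an observed result, regardless of kind."""
    return [k.name for k in kpis if not k.measured]

print(open_items(kpis))  # functional and value-based gaps are flagged the same way
```

The point of the sketch is simply that nothing in such a record distinguishes a value-based indicator from a traditional one: both carry a target, both get measured, and both can block release.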

Further, when values-based requirements are used as KPIs for success, the opportunity arises to assess and certify conformance based on these kinds of results. Thus, in much the same way that people will buy a certain toothpaste because it carries a mark from the American Dental Association (in the U.S.), end-users and customers will begin to buy things like voice-assisted audio systems because they have been certified as “trustworthy AI” by an accredited organization.

Told by a Machine That He Would Die

An example that illustrates the need for applied ethics in systems design comes from the field of healthcare. Over the past few years, telemedicine has enabled many patients to interact with their healthcare providers remotely and safely. But telemedicine went terribly wrong in March 2019 when 78-year-old Ernest Quintana was taken to a hospital emergency room in Fremont, CA, suffering from chronic lung disease and unable to breathe.

Later, after he had been transferred to the intensive care unit, a robotic machine rolled into his room while his daughter was sitting with him, and a doctor used its video capabilities to tell Quintana matter-of-factly that he would die within days. It was bad enough to be given this news via a machine, but to make matters worse, his daughter had to repeat to him what was said because he was hard of hearing in one ear and the machine couldn’t get to the other side of his bed.

The family was devastated. The man’s daughter was quoted in local news media as saying, “If you’re coming to tell us normal news, that’s fine. But if you’re going to tell us there’s no lung left, and we want to put you on a morphine drip until you die, that should be done by a human being and not by a machine.”1

In this instance, a simple geofencing technology could have been used to trigger an alert to Quintana’s family that a robotic device, and not a doctor, was about to enter their loved one’s room, giving them the chance to assert that only minor health-based reports should be given directly to their father.

That way, the family’s wishes also could be transmitted to the doctor so that the doctor could recognize them and respond appropriately.
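As a rough illustration of the kind of safeguard described above, and not a description of any actual hospital system, a geofence-triggered alert could be sketched along the following lines; the coordinates, device type, and notification hook are all hypothetical:

```python
import math

# Hypothetical geofence around a patient's room: a center point and a radius.
ROOM_CENTER = (37.5485, -121.9886)   # latitude, longitude (illustrative values)
ROOM_RADIUS_M = 10.0

def distance_m(a, b):
    """Approximate distance in meters between two lat/lon points (equirectangular)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000

def on_device_position(device_type, position, notify_family):
    """If a telepresence robot enters the room's geofence, alert the family first."""
    if device_type == "telepresence_robot" and distance_m(position, ROOM_CENTER) <= ROOM_RADIUS_M:
        notify_family("A remote-presence device is about to enter the room. "
                      "Please indicate whether it may be used for this visit.")

# Example: the alert fires before the robot reaches the bedside.
on_device_position("telepresence_robot", (37.54851, -121.98858), notify_family=print)
```

The technical part is trivial; the value-based part is deciding that the family, not the device, gets the first word.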

New IEEE Standard Addresses Ethical Concerns in Systems Design

To provide a path forward to help ensure that such ethical concerns are addressed right from the outset of systems design, the IEEE Standards Association has published a new standard, IEEE 7000™-2021, “IEEE Standard Model Process for Addressing Ethical Concerns During System Design.”

A first-of-its-kind standard, IEEE 7000-2021 describes how to pragmatically apply a VbE approach to elicit, conceptualize, prioritize, and respect end-user values in system design. It establishes a well-defined process model and a standard approach for integrating human and social values into traditional systems engineering and design; sets forth processes engineers can use to translate stakeholder values and ethical considerations into system requirements and design practices; and details a systematic, transparent, and traceable approach to address ethically oriented regulatory obligations as well.

The standard guides technology developers through an extensive feasibility analysis, consisting of questions and exercises designed to identify the values and ethical biases of the end-users who may buy or use their product or system. Once these answers are identified, they are “translated” into design characteristics and then assimilated into traditional systems engineering and design processes.
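To make that flow concrete, here is a minimal, hypothetical sketch of how elicited values might be traced into design characteristics in a requirements database. It is not the notation defined by the standard itself, and every field name and example value is an assumption:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ValueTrace:
    """Hypothetical traceability record: elicited value -> requirement -> design choices."""
    value: str                                   # value elicited from stakeholders
    requirement: str                             # value-based requirement derived from it
    design_characteristics: List[str] = field(default_factory=list)

traces = [
    ValueTrace(
        value="privacy",
        requirement="Voice recordings shall not leave the device without explicit consent",
        design_characteristics=["on-device wake-word detection", "opt-in cloud processing"],
    ),
    ValueTrace(
        value="transparency",
        requirement="Users shall be able to review why the assistant acted",
        design_characteristics=["local activity log", "plain-language explanations"],
    ),
]

# Gaps in traceability (values with no design characteristic yet) are easy to flag.
untraced = [t.value for t in traces if not t.design_characteristics]
print(untraced or "all elicited values trace to at least one design characteristic")
```

Whatever tooling is used, the essential property is the same: each elicited value remains visible, and traceable, all the way into the shipped design.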

The new standard uses the example of airport security scanners as a guide for users seeking to address core issues of identity and privacy in system design. This example was chosen because, in the past, these scanners were built to optimize speed and efficiency over privacy. As a result, scans of people’s bodies were often considered invasive by passengers and distressing for travelers with gender identity concerns or physical limitations.

Part of a Larger Ethics Toolset

IEEE 7000-2021 is one of the latest standards published in the IEEE Standards Association’s IEEE 7000-series of standards and recommended practices, which encompass various aspects of ethics in engineering such as data privacy, children’s and students’ data governance, algorithmic bias, and more. While more traditional standards focus on technology interoperability, functionality, safety, and trade facilitation, the IEEE 7000-series addresses specific issues at the intersection of technology and ethics, with the goals of empowering innovation across borders and enabling societal benefit.

For example, IEEE 7010™-2020, “IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being,” establishes well-being metrics relating to human factors that are directly affected by intelligent and autonomous systems. The standard establishes a baseline for the types of objective and subjective indicators these systems should analyze, honor, and include at the outset of their programming and functioning in order to proactively align with and increase human well-being.

The IEEE 7000-series complements the IEEE Standards Association’s Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS), which is creating specifications for certification and marking processes that advance transparency, accountability, and the reduction of algorithmic bias in autonomous and intelligent systems. The IEEE CertifAIEd certification has great value in the marketplace and for society at large, as it helps consumers and citizens understand whether a system is deemed “safe” or “trusted” by a globally recognized body of experts who have provided a publicly available and transparent series of marks.

The IEEE 7000-series also aligns with the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the mission of which is to ensure that every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.

A foundational element of that initiative is the comprehensive report series, “Ethically Aligned Design,” that combines a conceptual framework addressing universal human values, data agency, and technical dependability with a set of principles to guide creators and users of autonomous and intelligent systems through a comprehensive set of recommendations. 

Created by more than 700 global experts focused on the pragmatic application of human-centric, values-driven design, “Ethically Aligned Design” is intended for a wide range of audiences and stakeholders. The report series identifies specific verticals and areas of interest and provides granular, pragmatic papers and insights. It offers guidance for standards, regulation, and/or legislation covering the design, manufacture, and use of autonomous and intelligent systems, and can serve as a key reference for the work of policymakers, technologists, and educators.

Ethical Design as a Pathway to Innovation

A key benefit of IEEE 7000-2021 is that it offers alternative ways to address risk. Whereas traditional evaluations of technological risk may focus largely on areas of physical harm, the VbE methodology at the heart of IEEE 7000-2021 provides a broader lens that can be used to consider potential harms to the ethical values that are associated with product or systems design. This helps make the standard a catalyst for the adoption of applied ethics methodologies in emerging technologies such as AI.

This is a hugely important issue for our times, given the pervasiveness of these technologies. As the “Ethically Aligned Design” report puts it, 

“Ethics considerations are often treated as impediments to innovation, even among those who ostensibly support ethical design practices. In industries that reward rapid innovation in particular, it is necessary to develop ethical design practices that integrate effectively with existing engineering workflows.

Those who advocate for ethical design within a company should be seen as innovators seeking the best outcomes for the company, for end-users, and for society. Leaders can facilitate that mindset by promoting an organizational structure that supports the integration of dialogue about ethics throughout product life cycles.”

Sustainability as the Future of Ethical Design

Whatever the organization, the future of ethical design must focus on pragmatic ecological sustainability to ensure that people and the planet can live safe and meaningful lives. While there are numerous laudable sustainability initiatives designed by and for organizations, most plans focus on identifying potential harms to the environment after a product, service, or system has been created. While these are essential scenarios to plan for, organizations also need to adopt broader metrics of success at the outset of design, in the same way that end-user values must be considered for AI and the other ethical concerns described above.

Fortunately, there is a renewed focus on organizations using environmental, social, and governance (ESG) metrics to identify which areas of their business operations can move from a reliance on fossil fuels to green alternatives. While this may seem like a daunting task, reports like “Winning the Race to Net Zero: The CEO Guide to Climate Advantage,” written through a partnership between the Boston Consulting Group and the World Economic Forum, point out that an initial pass at creating a sustainability program is like any other cost-cutting program aimed at eliminating short- or long-term expenses. As the report further notes, examining the financial issues around sustainability also means identifying benefits ranging from immediate ones, such as green tax or loan access, to longer-term necessities like complying with growing regional and global regulations.

The IEEE has created a standard that provides a “societal impact assessment” methodology for any organization to begin to get a grasp on which ESG and other metrics or indicators can pragmatically move the needle on its sustainability programs. The previously mentioned IEEE Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being introduces multiple sustainability (planet- and people-based) metrics that should be used at the outset of all technology processes (not just AI systems) to help ensure that the products, services, or systems being created have KPIs showing how a technology, once released, is likely to increase environmental flourishing. A key distinction here is that even when a technology does not overtly harm the environment as measured by established emissions or other standards, individual products may still harm the planet from a systems-level perspective if larger ecological systems are not restored and overall carbon emissions are not reduced.

Where ethically aligned design prioritizes human values and well-being, sustainability elevates ecological systems to have the same rights as people to protect and increase planetary flourishing now and for the future. 

Endnote

  1.  “California man learned he was dying from a doctor on robot’s video screen,” published on the website of WTOL11, March 8, 2019.
