A Look at How the Past Has Shaped Today’s Approaches
In our last On Your Mark column, we explored key components of a comprehensive product safety strategy – from risk assessment to safety labels and manuals – and the ways those elements work together to improve safety and reduce risk. How did we get here – why are certain perceptions and directives in place – and how do they differ across the globe? History has strongly influenced the trajectories of product safety and liability. For the details, we turned to Doug Nix, Managing Director of Compliance inSight Consulting and lead author of the Machinery Safety 101 blog, who has more than 30 years of industrial safety experience specializing in machinery safety and risk assessment methods. Read our interview with Nix for context on how the past has shaped the ideology and approaches we employ today.
Give us a brief historical overview of product safety and liability.
In the early part of the Industrial Revolution, there really was no product safety. The basic workplace safety premise was that workers had the right to negotiate their contracts of employment with their employer to get a fair deal. That ended up giving employers all the power and workers none. From the workplace safety perspective, generally speaking, the attitude was that if a worker was injured using a piece of equipment, they were lazy or incompetent; the equipment wasn’t blamed. In historical photos of machines in the workplace at the end of the 19th century and the beginning of the 20th century, you’ll see open flywheels and other issues recognized today as obviously dangerous. But, that wasn’t the prevailing thinking at that time.
During this same time period came the rise of electrical technology. The World’s Columbian Exposition, also known as the Chicago World’s Fair, held in 1893, was an important historical event for safety. The fair’s insurers, worried about the likelihood of fire, hired William Henry Merrill to form a team and conduct an engineering evaluation of the safety of the electrical equipment being installed. That work spun off into Underwriters Laboratories, or UL, and the first technical standards written for electrical product safety.
Once standards for electrical equipment came about, additional standards followed for other types of products with prevalent dangers, like steam power. UL and the focus on electrical safety were really the beginning of that safety and certification process.
How did those early trajectories create differences in safety and liability – the perception, approaches, and directives – in the U.S. and Canada?
In the U.S. and Canada, there’s a reliance on tort law as the basis for product liability. Manufacturers have an obligation to provide safe products and to warn people about any hazards related to the product. Those requirements arose from the original product safety and liability cases, some of which happened in the same timeframe as the Chicago World’s Fair – the middle to late 19th century – with many more to follow.
The assumption in U.S. liability law, and typically also when a case is brought in Canada, is that the manufacturer of the product is liable and has to prove that they did everything necessary to provide a safe product. That includes warnings, user instructions, and other elements. Today, that continues to be the basic concept in product liability: the burden lies on the manufacturer to prove that they did everything possible to make their product safe.
In Canada, there’s a basic test for causation called the “but for” test, which requires the plaintiff to show that the injury would not have occurred but for some negligence on the part of the manufacturer; it’s up to the court to decide whether that statement is true and whether the manufacturer did everything that was necessary. In practice, there’s rarely a zero-liability outcome for the manufacturer under either Canadian or U.S. law – it’s rare for a manufacturer to get off completely scot-free. Some portion of liability is almost always assigned to them; the question is how large.
Was it a similar trajectory in Europe?
The trajectory in Europe took a very different path than in North America and was much more modern, coming about largely in the 1970s and 1980s. In 1975, the Council of the European Communities (now the European Union) adopted a resolution for a preliminary program on consumer protection and information policy. After a decade of deliberation and debate, the Product Liability Directive was adopted in 1985. Essentially, it states that the manufacturer is only permitted to sell products that are safe and that the products have to provide a level of safety that a person is entitled to expect. That level of safety is set by the individual product safety directives, hence the importance of meeting those requirements. The assumption in European courts is that the manufacturer is selling safe products and has done everything necessary to provide a safe product to the customer. The obligation, therefore, is on the consumer to prove a cause-and-effect relationship between the injury they sustained and a defect in the product – a much harder mark for a plaintiff to meet than the reverse burden applied in North America.
Europe sees North America’s approach as a sort of Wild West of product safety. From the European point of view, they have a very structured process that runs from the manufacturer through to the end-user, ensuring safe products along the way; by the time products get into the workplace, in theory, they’re already safe.
In Europe, many different types of safety directives must be met depending on the product – the Machinery Directive, Low Voltage Directive, Toy Safety Directive, and many others. Therefore, no matter what the product is, some level of conformity assessment generally has to be done. In North America, there’s no similar requirement or structure placed on a manufacturer of machinery; you can build any kind of machinery you like and put it on the market. Fundamentally, if a machine passed an electrical inspection and could get a label on it, there was really no reason why it couldn’t be placed in use. Consumer products have stricter provisions for safety in many cases. The U.S. Consumer Product Safety Commission and Health Canada’s Consumer Safety Directorate oversee consumer product safety, including risk assessment.
The real takeaway here is that, as individuals, both at home and in our workplaces, we’ve become comfortable with the idea that the products we buy and use are safe. When it comes to our workplaces, however, unless the employer has taken the time to have machinery or equipment evaluated and verified against the applicable safety requirements – whether in the OSHA standards themselves or in ANSI standards that may apply to the product – there’s nothing that says that product is safe. A lot of workers get hurt because they’re working under the assumption that the product they’re using is already safe, and they may not be aware of all the hazards that are present.
The U.S. OSHA laws and the Canadian workplace safety laws place the responsibility on the employer to ensure that equipment used in the workplace is safe and suitable for its application. It’s a reactive system, meaning that equipment typically isn’t going to be checked unless somebody has been hurt, even though the employer should, in theory, be acting proactively. But the reality is, many small and medium-sized enterprises don’t have the staff or in-house resources with product safety experience or knowledge to prioritize this. Larger organizations typically have an OH&S department, but it’s usually more focused on dealing with people who have already been hurt than on digging out the issues that might exist.
Can you tell us more about the tie-ins with risk assessment, especially in terms of its history and current requirements?
Risk assessment started to become a concept in engineering circles after World War II. That time period saw the rise of the aviation and nuclear power industries. In the nuclear sector, there was a clear understanding of the hazards related to nuclear power and reactors. In the aviation sector, the potential for mass-casualty incidents was clear. These understandings brought about the development of a more formalized approach to risk assessment in these sectors, which then spread into process industries like refineries and oil and gas production. Risk assessment and the layers-of-protection concept began to be used. The concept of the hierarchy of controls began to be formalized in the late 1980s and early 1990s, with a number of North American standards published at that time. The whole process of looking at a structured way of dealing with hazards came out of general safety thinking at that time. If those methods are applied well, you end up with a product that is as safe as you can reasonably make it. It’s that reasonableness that’s always in question at the end. There’s always a lot of discussion and challenge around defining tolerable, intolerable, and acceptable risk, and that debate continues in safety standards development committees today.
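To make the idea of structured risk scoring concrete, here is a minimal sketch of the severity-times-probability scoring that formal machinery risk assessments build on. The scales, the multiplication rule, the thresholds, and the band names used here are hypothetical illustrations, not taken from any particular standard or from Nix’s own method.

```python
# Hypothetical risk-scoring sketch: combine a severity rating and a
# probability rating into a score, then map the score onto the kinds of
# "acceptable / tolerable / intolerable" bands debated in standards
# committees. All scales and thresholds below are illustrative only.

def risk_score(severity: int, probability: int) -> int:
    """Combine severity (1-4) and probability (1-4) into a single score."""
    return severity * probability

def risk_category(score: int) -> str:
    """Map a score (1-16) onto an illustrative risk band."""
    if score >= 12:
        return "intolerable"  # risk reduction required before use
    if score >= 6:
        return "tolerable"    # acceptable only with further controls
    return "acceptable"       # broadly acceptable as-is

# Example: a severe but unlikely hazard vs. a moderate, frequent one.
print(risk_category(risk_score(severity=4, probability=2)))  # tolerable (8)
print(risk_category(risk_score(severity=3, probability=4)))  # intolerable (12)
```

Real risk assessments are considerably richer – they weigh exposure, avoidability, and the hierarchy of controls – but the structure is the same: rate the hazard, score it, compare the score against a risk band, and reduce the risk until the result is tolerable.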
At the end of the day, it comes down to what local laws require, what the social environment is, and how the product is being used – in the workplace or at home, with children or with the elderly, etc.
All of those different social factors have impacts on what is going to be deemed to be acceptable or tolerable given the circumstances. Risk is one of those topics that we’d like to make really cut-and-dried, and it just isn’t, no matter what we do. My work and my obsession with trying to understand risk, and understand how we analyze it, and how we think about it began almost 30 years ago. And I can’t say that I’ve yet satisfied my curiosity on the topic.