FDA issued a Safety Communication on January 31, 2019, (see Safety Communication Link) warning of the risk of air being introduced into a blood vessel (air-in-line) and of air embolism with infusion pumps, fluid warmers, rapid infusers, and accessory devices. This communication is directed toward users (both clinical and service personnel) and patients. However, what can system architects, systems engineers, software engineers and developers, verification and validation specialists, quality assurance staff, and other product development staff learn from these communications? We also post recalls and warning letters that involve software on this site … why? So we can all learn from them and consider safety and risk as we design product updates and next-generation products.
When this safety communication was posted, I reached out to one of our expert affiliates, Stan Hamilton, to see how he analyzes a published safety communication such as this one for its software implications:
“First, I would consider the relationship between software hazard causes and controls, and usability causes and controls. Are there designed-in risk controls in the device software to help prevent users from setting the air detection sensitivity to a level inappropriate for the patient or the procedure they are performing? We would call this a “software risk control for a usability cause.”
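A software risk control of this kind can be as simple as validating the requested sensitivity against limits tied to the selected patient mode. The sketch below is purely illustrative: the mode names, units, and limit values are hypothetical, not drawn from any actual device specification.

```python
# Illustrative sketch only: hypothetical patient modes and sensitivity
# limits, not taken from any real infusion pump specification.

# Allowed air-detection sensitivity bounds per patient mode (hypothetical),
# expressed as microliters of detectable air bolus.
SENSITIVITY_LIMITS_UL = {
    "neonatal": (10, 50),
    "adult": (50, 500),
}

def validate_sensitivity(mode: str, requested_ul: int) -> int:
    """Software risk control for a usability cause: reject sensitivity
    settings outside the range appropriate for the selected patient mode."""
    if mode not in SENSITIVITY_LIMITS_UL:
        raise ValueError(f"unknown patient mode: {mode}")
    low, high = SENSITIVITY_LIMITS_UL[mode]
    if not (low <= requested_ul <= high):
        raise ValueError(
            f"sensitivity {requested_ul} uL outside allowed range "
            f"[{low}, {high}] uL for {mode} mode"
        )
    return requested_ul
```

The point is not the specific numbers but that the software, not the user alone, enforces the boundary between appropriate and inappropriate settings.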
“It is common for a software subsystem to perform the actual air detection, separate from the user interface processor. What if the value programmed by the user is not accurately communicated to the subsystem actually doing the air detection? Can the software facilitate a user risk control, such as requiring a closed-loop confirmation of the value in use, with that value reported back to the user to verify? We would call this a “risk control involving the user for a software cause.”
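The closed-loop confirmation described above can be sketched as a write followed by a read-back from the detection subsystem, with the echoed value (rather than the value the UI thinks it sent) being what is displayed for the user to verify. The class and function names here are hypothetical stand-ins for a real inter-processor interface.

```python
# Illustrative sketch, assuming a hypothetical message interface between
# the UI processor and the air-detection subsystem.

class AirDetectorStub:
    """Stand-in for the air-detection subsystem: stores the programmed
    threshold and echoes it back on request."""
    def __init__(self):
        self._threshold_ul = None

    def set_threshold(self, value_ul: int) -> None:
        self._threshold_ul = value_ul

    def read_back_threshold(self) -> int:
        return self._threshold_ul

def program_threshold(detector, value_ul: int) -> int:
    """Risk control involving the user for a software cause: write the
    value, read it back from the subsystem, and fail loudly on any
    mismatch. The returned (echoed) value is what the UI should display
    for the user to confirm."""
    detector.set_threshold(value_ul)
    echoed = detector.read_back_threshold()
    if echoed != value_ul:
        raise RuntimeError(
            f"read-back mismatch: sent {value_ul} uL, "
            f"detector reports {echoed} uL"
        )
    return echoed
```

Displaying the echoed value closes the loop: the user confirms what the detection subsystem will actually use, not merely what they typed.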
“The risk scenarios implied in the safety notice clearly involve multiple points of user involvement, possibly by different users, as well as different software units and functions that can ultimately affect proper air detection. We can certainly see how working through software risk analysis or usability risk analysis in isolation would not be helpful for understanding these interactions, and this should encourage us to do a thorough analysis that considers all of these factors holistically. Fault trees, or maybe we should say “cause trees”, that consider more than just hardware faults or generalized software faults can be great tools for working this out, when usability causes and risk controls are integrated into the analysis.”
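One way to keep hardware, software, and usability causes in a single analysis is to tag each basic cause in the tree with its kind. The sketch below is a minimal illustration of that idea; the event names and tree shape are hypothetical, not a real device analysis.

```python
# Illustrative sketch: a minimal cause-tree node that tags each basic
# cause as hardware, software, or usability, so one analysis covers all
# three. Event names are hypothetical.

class Cause:
    def __init__(self, name, kind=None, children=(), gate="OR"):
        self.name = name
        self.kind = kind          # "hardware", "software", "usability";
                                  # None for intermediate events
        self.children = list(children)
        self.gate = gate          # how child causes combine: "OR" or "AND"

    def leaves(self):
        """Enumerate the basic (leaf) causes under this event."""
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Hypothetical top event with mixed cause types.
tree = Cause("air-in-line not detected", children=[
    Cause("air sensor fault", kind="hardware"),
    Cause("wrong threshold in detector", children=[
        Cause("user sets inappropriate sensitivity", kind="usability"),
        Cause("UI-to-detector value corrupted", kind="software"),
    ]),
])

# Group basic causes by kind to see where risk controls are needed.
by_kind = {}
for leaf in tree.leaves():
    by_kind.setdefault(leaf.kind, []).append(leaf.name)
```

Grouping the leaves by kind makes the interactions visible: the same top event is reachable through a usability cause, a software cause, or a hardware fault, which is exactly the case where analyzing any one discipline in isolation falls short.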