How Fatal Can Medical Device Biases Be for Human Life?
Some pandemics last a few years; others keep evolving into more lethal forms and cause ever-increasing damage. Medical device bias is one of those everlasting pandemics, and we can thank COVID-19 for finally giving the issue the limelight it needed.
According to a recent estimate, racial disparities in healthcare cost roughly $93 billion per year in excess medical costs and approximately $42 billion in lost productivity. Meanwhile, the FDA approved an unprecedented 191 new AI-based devices in the first half of 2024 alone, bringing the total of approved devices to 882. The rapidly rising adoption of such devices, which can carry inherent biases, underscores the need to acknowledge and address the systemic nature of bias across the AI model lifecycle.
Research from the Centers for Disease Control and Prevention (CDC) has likewise revealed stark disparities between people of color and their white counterparts in mortality rates, life expectancy, pregnancy-related deaths, the prevalence of chronic conditions, and mental and physical health outcomes.
Consider, for example, the well-documented inaccuracy of bilirubin measurement devices for newborns. Newborn babies often show high bilirubin levels due to the breakdown of red blood cells; if left unchecked, this can lead to permanent brain damage. Bilirubin levels are screened with light-based devices that assess the color of the infant's skin and eyes. This test was found to be markedly less accurate in Black newborns, making proper diagnosis and treatment difficult for a considerable part of the population. Since high bilirubin levels are quite common among Black newborns, a biased, inaccurate apparatus can cause serious harm on any given day.
In the sections below, we discuss the significance of this problem and some plausible solutions, along with several important examples that deserve attention and correction without delay.
Medical Device Biases: What’s The Accuracy Level Of Health Monitors?
Inaccuracies in hospital monitoring devices are nothing new; nearly every healthcare professional has occasionally had to verify a false alarm from even the most accurate machines. But those are rare cases, because high-stakes medical devices are vetted and extensively tested by their makers and by regulatory bodies, leaving little room for mistakes. Things are different for home-based health monitoring: most personal health monitoring devices do not undergo the same extensive testing or vetting by their makers or by regulatory bodies.
A study conducted in this context found that while phone-app heart rate monitors were roughly 95% accurate, the same could not be said for their energy expenditure calculators. It also concluded that accuracy depended on the race of the person being monitored: the precision of many instruments varied with the user's skin color. The most prominent devices on the list were:
- Pulse Oximeters: Numerous tests and examinations have verified that pulse oximeters are noticeably less accurate for people with darker skin. The device estimates blood oxygenation by shining light through the skin, where the color of blood varies with its oxygen level. Skin pigment strongly influences how that light penetrates the skin to reach the blood vessels, leading to inaccurate results; a study at the University of Michigan Hospital found that blood oxygen levels were overestimated for patients with darker skin, which delayed required treatment and placed their lives at risk (see the optics sketch after this list).
- Respirator Masks: Respirator masks act as a barrier between health workers and contagious infections like COVID-19, and the protection depends on a good facial fit of the mask's filtering facepiece (FFP). However, studies published in 2020 and 2021 found that the masks did not fit well on health professionals from Asian backgrounds: fit rates ranged from 40 to 90 percent, with Asian women affected most. The 2021 study also found that mask fit had been designed primarily around Caucasian faces, leaving people of other ethnicities aside and endangering their lives.
- Spirometers: Racial biases have also been reported in the interpretation of spirometer data. These instruments measure lung capacity, and experts have questioned why most spirometers report lower expected lung capacity for Asian and Black people than for white individuals (see the toy spirometry example after this list).
- Remote Plethysmography: This medical technology estimates pulse rates by tracking variations in skin color. Achuta Kadambi, a computer scientist and electrical engineer at the University of California, notes that the visual cues used in this process are highly susceptible to bias caused by variations in subsurface melanin.
- Artificial Intelligence Systems: AI applications in medical devices and diagnostics are expanding rapidly worldwide. However, research has raised concerns over how most image databases used for developing skin cancer detection AI lack valid ethnicity or skin tone labels. Several institutes developing AI projects for different diseases have acknowledged this default error and are now working to build “inclusive and diverse” datasets for their AI systems.
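To make the pulse oximeter problem concrete, here is a minimal Python sketch of the "ratio of ratios" calculation that conventional pulse oximetry is built around. The calibration constants and the melanin shift factor are hypothetical illustrations, not values from any real device; the point is only to show how a calibration curve fitted on one cohort can overestimate oxygen saturation when skin pigment shifts the measured light absorption.

```python
# A minimal sketch of pulse oximetry's "ratio of ratios" computation.
# The constants a and b stand in for the empirically fitted calibration
# curve a real device ships with; they are hypothetical, not taken
# from any actual oximeter.

def ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Perfusion ratio R derived from the red and infrared channels."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_estimate(r, a=110.0, b=25.0):
    """Linear empirical calibration: SpO2 ~= a - b * R.

    If a and b were fitted on a cohort that skews toward lighter skin,
    melanin-dependent attenuation shifts the measured R and the same
    curve overestimates SpO2 for darker skin, which is the failure
    mode the University of Michigan study reported.
    """
    return max(0.0, min(100.0, a - b * r))

# Same true oxygen level, but melanin attenuates the red channel a
# little more, lowering R and inflating the estimate.
r_light = ratio_of_ratios(ac_red=0.021, dc_red=1.0, ac_ir=0.030, dc_ir=1.0)
r_dark = r_light * 0.94  # hypothetical melanin-induced shift

print(f"light-skin estimate: {spo2_estimate(r_light):.1f}%")
print(f"dark-skin estimate:  {spo2_estimate(r_dark):.1f}%")  # reads higher
```

A one-point difference may look small, but near the clinical threshold for supplemental oxygen it is exactly the kind of overestimate that delays treatment.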
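The spirometer issue is just as easy to see in code. The sketch below uses a made-up reference equation and made-up race "correction" factors, purely to illustrate how scaling the predicted-normal value down for one group makes the same measurement look healthier than it is.

```python
# A toy illustration of how legacy race "correction" in spirometry
# software can mask impairment. The prediction formula and the
# adjustment factors are hypothetical placeholders, not the actual
# reference equations used in clinical practice.

HYPOTHETICAL_RACE_FACTOR = {
    "white": 1.00,
    "black": 0.88,  # legacy software often scaled predictions down
    "asian": 0.94,
}

def predicted_fev1_liters(height_cm, age_years):
    """Stand-in linear prediction of 'normal' FEV1 (hypothetical)."""
    return 0.043 * height_cm - 0.029 * age_years - 2.49

def percent_predicted(measured_fev1, height_cm, age_years, race):
    """Measured FEV1 as a percentage of the race-'corrected' norm."""
    predicted = predicted_fev1_liters(height_cm, age_years)
    adjusted = predicted * HYPOTHETICAL_RACE_FACTOR[race]
    return 100.0 * measured_fev1 / adjusted

# Identical lungs, identical measurement -- but the race factor lowers
# the reference value, so the result looks closer to "normal" and can
# delay referral or treatment.
measured = 2.6  # liters
for race in ("white", "black"):
    pct = percent_predicted(measured, height_cm=170, age_years=45, race=race)
    print(f"{race}: {pct:.0f}% of predicted")
```

With these toy numbers, the same 2.6-liter reading comes out as roughly 74% of predicted for one patient and 84% for another, purely because of the built-in factor.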
How are Regulatory and Technological Advancements Solving This Problem?
Regulatory Efforts and Standardization Initiatives
- Expanded FDA Oversight
- Stringent Pre-Approval Requirements: Rigorous testing protocols now require manufacturers to demonstrate representative demographic performance prior to approval. As of 2024, manufacturers cannot proceed without submitting extensive bias mitigation reports that clearly show how their device performs across ethnic, gender, age, and racial groups.
- Post-Market Surveillance Framework: Complementing the pre-approval requirements, a comprehensive surveillance system has been established to mandate continuous monitoring of AI-enabled devices after deployment.
- Global Regulatory Initiatives
- International Harmonization Initiatives: The FDA, WHO, Health Canada, and the European Commission are working hand-in-hand to strengthen global regulatory alignment on medical device bias standards. Their current focus is on more stringent, unified frameworks for AI in healthcare, centered on upholding equity, explainability, and fairness standards.
- Standardized Assessment Tools: Standardized tools such as PROBAST (Prediction model Risk Of Bias ASsessment Tool) and the PRISMA framework for systematic reviews are increasingly being adopted by regulatory bodies worldwide to evaluate bias risk in medical AI studies.
Technological Advancements in Bias Detection and Mitigation
Advanced Bias Detection Tools
- AI-Powered Bias Detection: Sophisticated ML algorithms have been developed to identify subtle biases in medical AI systems. They can detect complex interaction effects between demographic variables that might be invisible to the human eye.
- Real-time Monitoring Systems: Continuous bias monitoring systems are increasingly being implemented to track device performance across different patient demographics (a minimal sketch of the idea follows this list).
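At its core, such monitoring can be as simple as computing a performance metric per demographic group on logged predictions and alerting on the gap. The sketch below assumes records with pred, label, and group fields; the metric choice and the alert threshold are illustrative assumptions, not a standard.

```python
# A minimal subgroup-monitoring sketch: compute the true positive rate
# (sensitivity) per demographic group from logged predictions and flag
# a large gap. Field names and the 0.1 threshold are illustrative.

from collections import defaultdict

def true_positive_rate(records):
    """Fraction of actual positives the device flagged, or None."""
    positives = [r for r in records if r["label"] == 1]
    if not positives:
        return None
    return sum(r["pred"] == 1 for r in positives) / len(positives)

def tpr_gap_by_group(records, group_key="group"):
    """Per-group TPR plus the max-min gap across groups."""
    by_group = defaultdict(list)
    for r in records:
        by_group[r[group_key]].append(r)
    tprs = {g: true_positive_rate(rs) for g, rs in by_group.items()}
    tprs = {g: t for g, t in tprs.items() if t is not None}
    gap = max(tprs.values()) - min(tprs.values()) if tprs else 0.0
    return tprs, gap

# Toy log: the device catches every case in group A but misses half of
# group B's cases -- exactly the pattern a monitor should surface.
records = [
    {"group": "A", "label": 1, "pred": 1},
    {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 1, "pred": 1},
]
tprs, gap = tpr_gap_by_group(records)
if gap > 0.1:  # illustrative alert threshold
    print(f"bias alert: TPR by group = {tprs}, gap = {gap:.2f}")
```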
Algorithmic Fairness Techniques
- Advanced Debiasing Algorithms: More sophisticated debiasing techniques have been developed, including adversarial debiasing methods that actively remove protected attributes from feature representations, as well as reweighting schemes that automatically adjust sampling based on demographic distributions (see the reweighing sketch after this list).
- Explainability and Transparency Tools: Significant improvements in model explainability techniques are helping researchers understand the root causes of AI biases. Methods like counterfactual explanations and SHAP (SHapley Additive exPlanations) are now more accessible, enabling more accurate identification of bias sources (a short SHAP example also follows).
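To illustrate the reweighting idea, here is a minimal sketch of the classic reweighing scheme in the spirit of Kamiran and Calders: each training example gets a weight that makes the protected attribute and the outcome label look statistically independent. The record fields are assumptions about how training data might be stored.

```python
# Classic reweighing: w(g, y) = P(g) * P(y) / P(g, y), estimated from
# frequencies. Under-represented (group, label) pairs get weights above
# 1, so a downstream classifier trained with these sample weights
# cannot use the group as a shortcut for the outcome.

from collections import Counter

def reweighing_weights(examples):
    """One weight per example, from group/label frequencies."""
    n = len(examples)
    p_group = Counter(e["group"] for e in examples)
    p_label = Counter(e["label"] for e in examples)
    p_joint = Counter((e["group"], e["label"]) for e in examples)
    return [
        (p_group[e["group"]] / n) * (p_label[e["label"]] / n)
        / (p_joint[(e["group"], e["label"])] / n)
        for e in examples
    ]

examples = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
]
print(reweighing_weights(examples))  # [0.75, 0.75, 1.5, 0.5]
```

The resulting weights can be passed straight to most training APIs, for example through the sample_weight argument accepted by scikit-learn's fit methods.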
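And as a quick sketch of SHAP-based bias auditing: the toy model below is trained on synthetic data in which the outcome leaks a demographic proxy, and mean absolute SHAP values reveal how heavily the model relies on that proxy. This assumes the open-source shap package and scikit-learn are installed; the data and feature names are entirely hypothetical.

```python
# Audit a toy model with SHAP: if the "group_proxy" column carries a
# large mean |SHAP| value, the model has learned a demographic
# shortcut. Synthetic data; shap.Explainer is the library's standard
# entry point.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # columns: vital_sign, age, group_proxy
y = (X[:, 0] + 0.8 * X[:, 2] > 0).astype(int)  # outcome leaks the proxy

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.Explainer(model, X[:100])  # background sample
shap_values = explainer(X[:200])

mean_abs = np.abs(shap_values.values).mean(axis=0)
if mean_abs.ndim > 1:  # classifiers may carry a trailing class axis
    mean_abs = mean_abs.mean(axis=-1)
print(dict(zip(["vital_sign", "age", "group_proxy"], mean_abs.round(3))))
```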
Final Word
Computational biases can infiltrate medical technologies when research and testing are conducted on homogeneous groups of individuals. Deliberate efforts must be made to recruit people from different ethnic backgrounds both to design and to test medical technologies and devices.
Lastly, universities and other higher education institutions need to work on building diversity within their research teams, and there should be mandatory training on the fundamental effects of racism on health and on the healthcare system as a whole. Together, such efforts can change the picture in the coming years by addressing the root causes of bias in both human behavior and device design.
Let's Take the Conversation Forward
Reach out to Stellarix experts for tailored solutions to streamline your operations and achieve measurable business excellence.

