Updated February 2026 · 6 min read
Artificial intelligence is transforming healthcare at a pace most of us didn’t anticipate. AI-powered tools now assist with diagnosing conditions, triaging patients, predicting sepsis risk, and even recommending treatment protocols. But here’s the problem: many of these algorithms carry the same biases that have plagued healthcare for decades — and in some cases, they’re making disparities worse.
For Michigan healthcare professionals, understanding AI bias isn’t just an academic exercise. It’s directly connected to the implicit bias training that LARA (Michigan’s Department of Licensing and Regulatory Affairs) requires for license renewal — and it’s becoming one of the most important patient safety issues of our time.
What Is AI Bias in Healthcare?
AI bias occurs when algorithms produce systematically unfair results for certain patient populations. This typically happens because the data used to train these systems doesn’t represent everyone equally. When training datasets overrepresent certain demographics and underrepresent others, the resulting algorithms perform better for some patients and worse for others.
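To make this concrete, here is a minimal, hypothetical simulation of the mechanism described above. The patient groups, scores, and numbers are invented for illustration only — they don't come from any real clinical model. A simple threshold classifier is tuned on training data that over-represents group A; because group B's risk scores run systematically lower for the same condition, the threshold that looks best overall performs noticeably worse for group B.

```python
import random

random.seed(0)

# Hypothetical example: a "risk score" whose relationship to true illness
# differs between two patient groups, A and B. Group B's scores run lower
# for the same underlying condition.
def make_patients(group, n):
    patients = []
    for _ in range(n):
        sick = random.random() < 0.3
        offset = 0.0 if group == "A" else -0.25
        score = (0.7 if sick else 0.3) + offset + random.uniform(-0.2, 0.2)
        patients.append((score, sick))
    return patients

# Training data over-represents group A (900 patients vs. 100)
train = make_patients("A", 900) + make_patients("B", 100)

# Pick the cutoff that maximizes accuracy on the imbalanced training set
best_t, best_acc = 0.5, 0.0
for t in [i / 100 for i in range(100)]:
    acc = sum((s >= t) == sick for s, sick in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

def accuracy(patients):
    return sum((s >= best_t) == sick for s, sick in patients) / len(patients)

test_a = make_patients("A", 1000)
test_b = make_patients("B", 1000)
print(f"chosen threshold: {best_t:.2f}")
print(f"group A accuracy: {accuracy(test_a):.1%}")
print(f"group B accuracy: {accuracy(test_b):.1%}")
```

Because group A dominates the training data, the threshold settles where group A is classified almost perfectly — while a large share of genuinely sick group B patients fall below it and are missed. Real clinical models are far more complex, but the underlying failure mode is the same.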
The consequences are real. Research published in PLOS Digital Health (2025) found that AI technologies being deployed across cardiology, ophthalmology, and dermatology — while improving diagnostic accuracy overall — also pose significant ethical challenges related to data and algorithmic bias that can create disparities in healthcare delivery.
Real-World Examples You Should Know About
Pulse Oximeters and Skin Tone
One of the most well-documented cases involves pulse oximeters — devices nurses use every single shift. Research presented at ACC.25 (the American College of Cardiology’s 2025 Scientific Session) confirmed that pulse oximeter readings vary significantly based on skin pigmentation. The EquiOx Study — the largest prospective real-world study of its kind — found that the proportion of dangerously high readings (where the device overestimates oxygen saturation, potentially masking hypoxemia) was higher in patients with darker skin.
In January 2025, the FDA released new draft guidance proposing updated testing standards, including increasing study participants from 10 to 150 and requiring performance evaluation across skin tones. As of early 2026, a follow-up study commissioned by the FDA has added complexity to the conversation rather than resolving it — underscoring that this is an evolving area where clinical awareness matters.
Predictive Algorithms and Race
A widely cited study (Obermeyer et al., Science, 2019) found that a major healthcare algorithm used to allocate resources systematically underestimated illness severity in Black patients. The algorithm used healthcare costs as a proxy for health needs — but because Black patients historically received less expensive care due to systemic barriers, the algorithm interpreted that as meaning they were healthier. The result: Black patients had to be significantly sicker than white patients before the algorithm flagged them for additional care.
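The cost-as-proxy failure mode above can be sketched in a few lines. This is a hypothetical toy model, not the actual algorithm from the study: both groups are given the same distribution of true illness, but group B incurs roughly 30% lower costs for the same illness. When a fixed cost threshold is used to flag patients for extra care, flagged group B patients turn out to be sicker on average — exactly the pattern the study documented.

```python
import random

random.seed(1)

# Hypothetical toy model: illness burden on a 0-10 scale, identical for both
# groups. Group B's spending is reduced by systemic barriers to access.
def make_patient(group):
    illness = random.uniform(0, 10)
    access = 1.0 if group == "A" else 0.7   # B incurs ~30% lower costs
    cost = illness * 1000 * access + random.uniform(-500, 500)
    return illness, cost

patients = [("A", *make_patient("A")) for _ in range(5000)] + \
           [("B", *make_patient("B")) for _ in range(5000)]

# The "algorithm": flag anyone whose cost exceeds a fixed threshold
THRESHOLD = 6000

def mean_illness_flagged(group):
    flagged = [ill for g, ill, cost in patients
               if g == group and cost > THRESHOLD]
    return sum(flagged) / len(flagged)

print(f"mean illness of flagged group A patients: {mean_illness_flagged('A'):.1f}")
print(f"mean illness of flagged group B patients: {mean_illness_flagged('B'):.1f}")
```

The threshold itself never mentions group membership — the disparity comes entirely from using spending as a stand-in for need. That's why "race-blind" inputs don't guarantee unbiased outputs.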
Clinical Decision Support Tools
AI models trained primarily on data from the United States and China may not generalize well to diverse patient populations. A 2025 review in npj Digital Medicine found that half of the healthcare AI studies evaluated demonstrated a high risk of bias, often due to absent sociodemographic data, imbalanced datasets, or weak algorithm design. Only about 1 in 5 studies was considered low risk.
Why This Matters for Michigan’s Implicit Bias Requirement
Michigan’s implicit bias training requirement under R 338.7004 exists because the state recognizes that unconscious biases affect clinical decision-making and patient outcomes. AI bias is simply the technological extension of this same problem — except now, biased assumptions can be encoded into systems that affect thousands of patients simultaneously.
As a nurse or healthcare professional, you’re increasingly interacting with AI-powered tools in your daily practice: EHR alerts, clinical decision support systems, patient risk scores, and monitoring devices. Understanding how bias operates in these systems isn’t optional — it’s essential for safe, equitable patient care.
That’s exactly why Renew Now CE updated our Michigan Implicit Bias Training in January 2026 to include new content on AI bias and healthcare algorithms. We believe implicit bias training should reflect the reality of modern clinical practice — and in 2026, that means understanding how bias shows up not just in human decision-making, but in the technology you use at the bedside.
What Nurses Can Do Right Now
Question the algorithm. When a clinical decision support tool gives you a recommendation that doesn’t match your clinical judgment, speak up. AI tools are meant to support your expertise, not replace it. Your training and experience with individual patients provide context that algorithms simply don’t have.
Know your devices’ limitations. If you’re using pulse oximetry on a patient with darker skin, be aware that the reading may overestimate oxygen saturation. Consider arterial blood gas confirmation when clinical presentation doesn’t match the numbers.
Advocate for transparency. Ask your facility what AI tools are being used and what’s known about their performance across different patient populations. Healthcare organizations increasingly need nurses who can ask informed questions about the technology being deployed.
Stay current on your training. Implicit bias isn’t static — it evolves alongside the tools we use. Completing your required Michigan implicit bias CE with content that addresses current issues like AI bias ensures you’re not just checking a box, but actually strengthening your clinical practice.
Complete Your Michigan Implicit Bias Requirement
Michigan Implicit Bias Training — 2 Hours | Updated January 2026
Our LARA-compliant course now includes content on AI bias and healthcare algorithms — the only Michigan implicit bias course updated for 2026 with this critical topic. ANCC accredited. Instant certificate. 24-hour CE Broker reporting.
Meets all R 338.7004 requirements including mandatory pre-test and post-test.
Start Your Implicit Bias Training →
Frequently Asked Questions
Does Michigan require implicit bias training?
Yes. Under R 338.7004, Michigan requires implicit bias training for all healthcare professionals licensed under the Public Health Code. For nurses (RNs, LPNs, NPs), the requirement is 2 hours per renewal cycle (2 years).
Why does the course include AI bias content?
Because AI-powered tools are now embedded in clinical practice — from EHR alerts to monitoring devices. Understanding how bias operates in these systems is essential for providing equitable care, which is the core purpose of Michigan’s implicit bias mandate.
Is this course approved by LARA?
Yes. Our course meets all Michigan LARA R 338.7004 requirements. Renew Now CE is ANCC accredited (Provider P0557) and approved by the Michigan Board of Nursing.
How long does the course take?
2 hours. You can complete it online at your own pace, on any device, and save your progress. Your certificate is issued instantly upon completion and reported to CE Broker within 24 hours.
Sources: ACC.25 EquiOx Study (Hendrickson et al., 2025), npj Digital Medicine Vol. 8 (2025), PLOS Digital Health (Chinta et al., 2025), FDA Draft Guidance on Pulse Oximeters (January 2025), STAT News (January 2026)