Authors: Feng Chen, Liqin Wang, Julie Hong, Jiaqi Jiang, Li Zhou
Published on: October 30, 2023
Impact Score: 8.2
arXiv code: arXiv:2310.19917
Summary
- What is new: This study is a systematic review of methods to detect and mitigate bias in artificial intelligence (AI) models developed using electronic health record (EHR) data, detailing six types of bias and strategies for their mitigation.
- Why this is important: Bias in AI models developed using EHR data risks worsening healthcare disparities.
- What the research proposes: The study outlines strategies for detecting and mitigating bias in AI models, focusing on diverse forms of bias and emphasizing the importance of fairness in healthcare.
- Results: Six major types of bias were identified, and mitigation strategies, evaluated using fairness metrics, focused chiefly on data preprocessing techniques such as resampling and transformation.
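To make the two ideas in the bullets above concrete, here is a minimal sketch of a fairness metric (demographic parity difference) and a naive group-oversampling preprocessor. Both function names and the toy data are illustrative assumptions, not code or examples from the reviewed paper.

```python
import random

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    A value of 0 means both groups receive positive predictions at the
    same rate; larger values indicate more disparity.
    """
    rate = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rate[g] = sum(preds) / len(preds)
    vals = list(rate.values())
    return abs(vals[0] - vals[1])

def oversample_minority(records, group_key):
    """Naive pre-processing mitigation: duplicate records from the
    under-represented group until group sizes match."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(rows) for rows in groups.values())
    rng = random.Random(0)  # fixed seed for reproducibility
    balanced = []
    for g, rows in groups.items():
        balanced.extend(rows)
        for _ in range(target - len(rows)):
            balanced.append(rng.choice(rows))
    return balanced
```

For example, if a model flags 3 of 4 patients in group A but only 1 of 4 in group B, the demographic parity difference is |0.75 - 0.25| = 0.5; oversampling the smaller group before training is one of the resampling strategies the review surveys.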
Technical Details
Technological frameworks used: Systematic review following PRISMA guidelines
Models used: AI models for predictive tasks in healthcare settings
Data used: Articles from PubMed, Web of Science, and IEEE published between 2010 and 2023
Potential Impact
Healthcare providers, health insurance companies, health IT companies, and policymakers could use these insights to detect and mitigate AI bias in healthcare, or may need to adjust their practices accordingly.