NLM IRP Seminar Schedule
UPCOMING SEMINARS
April 25, 2024 - Ermin Hodzic: Condition-Aware Cell Type Deconvolution of Bulk Tissues
April 30, 2024 - Wenya Rowe: The conformal central charge of the spin-1/2 XX model derived from long-chain asymptotics
May 2, 2024 - OPEN: TBD
May 7, 2024 - OPEN: TBD
May 9, 2024 - Pascal Mutz: TBD
RECENT SEMINARS
April 23, 2024 - OPEN: TBD
April 16, 2024 - Jaya Srivastava: Regulatory plasticity of the human genome
April 11, 2024 - Sergey Shmakov: Comprehensive survey of the TnpB RNA-guided nucleases
April 2, 2024 - Yifan Yang: Fairness and Bias in Biomedical AI
Contact NLM_IRP_Seminar_Scheduling@mail.nih.gov with questions about this seminar.
Abstract:
Although the powerful applications of machine learning (ML) are revolutionizing medicine, current algorithms are not resilient against bias. Fairness in ML can be defined as measuring the potential bias of algorithms with respect to characteristics such as race, gender, and age. In this paper, we perform a comparative study and systematic analysis to detect bias caused by imbalanced group representation in sample medical datasets. We investigate bias in major medical tasks for three datasets: the UCI Heart Disease dataset (cardiac disease classification), the Stanford Diverse Dermatology Images (DDI) dataset (skin cancer prediction), and the ChestX-ray dataset (CXR lung segmentation). Our results show differences in the performance of state-of-the-art models across different groups. To mitigate this disparity, we explore three bias mitigation approaches and demonstrate that integrating these approaches into ML models can improve fairness without degrading overall performance.
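The abstract describes measuring bias as a performance disparity across demographic groups and mitigating it, e.g. by rebalancing under-represented groups. The sketch below is purely illustrative and is not the talk's actual method or code: it computes per-group accuracy and the max-min accuracy gap (one simple fairness metric), and builds inverse-frequency sample weights (a common reweighting-style mitigation) so each group contributes equally to a training loss. All function names and the toy data are this sketch's own assumptions.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Per-group accuracy and the max-min gap across groups
    (one simple way to quantify performance disparity)."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_true[mask] == y_pred[mask]).mean())
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

def inverse_frequency_weights(groups):
    """Reweighting-style mitigation: upweight samples from
    under-represented groups so every group carries equal total
    weight in the training loss."""
    values, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(values, counts / len(groups)))
    n_groups = len(values)
    return np.array([1.0 / (n_groups * freq[g]) for g in groups])

# Toy example: group "B" is under-represented and scored worse.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B"])

accs, gap = group_accuracy_gap(y_true, y_pred, groups)
weights = inverse_frequency_weights(groups)
```

In this toy data group "A" is classified perfectly while "B" is not, so the gap is large; the computed weights make the total weight of "A" samples equal to that of "B" samples, which is the intent of frequency-based rebalancing.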