This title appears in the Scientific Report 2022.
Please use the identifier http://hdl.handle.net/2128/32112 in citations.
Cross-ethnicity/race generalization failure of RSFC-based behavioral prediction and potential consequences
Personal Name(s): Li, Jingwei (Corresponding author); Bzdok, Danilo; Tam, Angela; Ooi, Leon Qi Rong; Holmes, Avram; Ge, Tian; Patil, Kaustubh; Jabbi, Mbemba; Eickhoff, Simon; Yeo, Thomas; Genon, Sarah
Contributing Institute: Gehirn & Verhalten; INM-7
Imprint: 2022
Conference: INM & IBI Retreat 2022 "Molecular neuroscience meets brain function", Jülich (Germany), 2022-10-18 - 2022-10-19
Document Type: Poster
Research Program: Multilevel Brain Organization and Variability
Link: OpenAccess
Machine learning (ML) plays an important role in precision medicine. However, algorithmic biases that favor majority populations pose a key challenge to ML applications (Chouldechova 2018; Martin 2019; Obermeyer 2019). In neuroimaging, there is growing interest in predicting behavioral phenotypes from resting-state functional connectivity (RSFC; Finn 2015, 2021; Greene 2018), but prediction biases and unfairness in this context have not been assessed in the literature. In particular, predictive models are typically built on large cohorts of mixed ethnicity, in which certain ethnic groups, e.g. African Americans (AA), make up only a small proportion. Whether such models perform equally well across ethnic groups was unclear. Using two large-scale neuroimaging datasets from the United States, we compared prediction accuracy between AA and white Americans (WA) when ML models were trained on different compositions of ethnic groups. We observed larger prediction errors in AA than in WA for most behavioral measures, an effect only marginally influenced by the composition of the training population. We also investigated potential downstream consequences of biased behavioral-phenotype predictions if they were used uncritically.
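The core comparison in the abstract, training one model on RSFC features and then contrasting prediction errors between demographic groups, can be sketched in a few lines. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the ridge regression, the 85/15 group imbalance, the feature dimensions, and all variable names are assumptions chosen for the sketch.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for RSFC features: n subjects x m connectivity edges.
n_subjects, n_edges = 400, 100
X = rng.standard_normal((n_subjects, n_edges))
w = rng.standard_normal(n_edges)
y = X @ w + rng.standard_normal(n_subjects)  # simulated behavioral score

# Hypothetical group labels (0 = majority, 1 = minority),
# imbalanced as in the mixed cohorts described in the abstract.
group = rng.choice([0, 1], size=n_subjects, p=[0.85, 0.15])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

# One model trained on the pooled cohort, as in the typical setup.
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te)  # absolute error per test subject

# Group-wise mean absolute error: the quantity compared between AA and WA.
mae_by_group = {g: err[g_te == g].mean() for g in (0, 1)}
print(mae_by_group)
```

On real data, the study additionally varies the ethnic composition of the training set; in this sketch that would amount to resampling `X_tr`/`y_tr` by `g_tr` before fitting and repeating the group-wise error comparison.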