Event Recap: Overcoming Bias in Healthcare

BY: NATALIE GOLD

It is essential that people have access to quality healthcare and treatment. Unfortunately, racial, ethnic, and gender biases are found throughout healthcare. Socioeconomic factors, like food availability or the affordability of housing, also play a big role in people’s health. To ensure that everyone has equal access to good treatment and care, medicine must address these social factors. Artificial intelligence (AI) and data science can help doctors mitigate these issues.

On March 12, 2021, the BU Hariri Institute for Computing hosted the workshop “AI in Health – Bias, Misinformation, and Social Determinants of Health.” The workshop was part of the Focused Research Program (FRP) Leveraging AI to Examine Disparities and Bias in HealthCare. It brought together four FRP researchers from across BU’s Charles River and Medical campuses to share their FRP research.

Kate Saenko spoke about bias in systems and how to use domain adaptation to address the biases.

Kate Saenko, Associate Professor of Computer Science, spoke about the problem of bias in systems and networks. When a network is trained on a limited dataset, what it can recognize in the real world is limited. This happens with systems trained on people-centered data: if the training data is predominantly white and male, the system has difficulty recognizing the faces of people of color and women. Because of time and money constraints, researchers cannot simply gather more data. Instead, Saenko says researchers can use domain adaptation, applying an algorithm trained in one domain to another domain, which significantly improves system accuracy. Hospitals can use this tool to reduce bias in systems they utilize, such as medical imaging scanners.
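As a toy illustration of the domain-adaptation idea (not Saenko’s actual method), the sketch below trains a simple classifier on one synthetic “domain” and shows how aligning feature statistics recovers accuracy on a shifted domain. All data, function names, and the alignment rule here are invented for illustration:

```python
# Minimal sketch of unsupervised domain adaptation via feature alignment.
# All data is synthetic; this illustrates the general idea only.
import random

random.seed(0)

def make_domain(shift, n=200):
    """Two classes of 1-D features; `shift` models a domain gap."""
    xs, ys = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(0.0 if label == 0 else 2.0, 0.5) + shift
        xs.append(x)
        ys.append(label)
    return xs, ys

def centroid_classifier(xs, ys):
    """Train: remember the mean feature of each class."""
    c0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    c1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return lambda x: 0 if abs(x - c0) < abs(x - c1) else 1

def accuracy(clf, xs, ys):
    return sum(clf(x) == y for x, y in zip(xs, ys)) / len(ys)

def align(xs_target, xs_source):
    """Shift target features so their mean matches the source mean."""
    delta = sum(xs_source) / len(xs_source) - sum(xs_target) / len(xs_target)
    return [x + delta for x in xs_target]

src_x, src_y = make_domain(shift=0.0)   # data the model was trained on
tgt_x, tgt_y = make_domain(shift=3.0)   # e.g. a different scanner's output

clf = centroid_classifier(src_x, src_y)
before = accuracy(clf, tgt_x, tgt_y)
after = accuracy(clf, align(tgt_x, src_x), tgt_y)
print(f"accuracy before alignment: {before:.2f}, after: {after:.2f}")
```

Real domain adaptation methods align much richer statistics of deep features, but the principle is the same: reuse a trained model in a new domain without collecting a new labeled dataset.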

Dr. Pablo Buitron de la Vega talked about how BMC’s THRIVE program has helped connect patients to social resources.

Dr. Pablo Buitron de la Vega, Assistant Professor of Medicine, spoke about the THRIVE program at Boston Medical Center (BMC). Under the program, patients are screened for social needs, their answers are documented, and, if requested, they are connected to social resources. The screen asks about things such as the ability to pay for medicine and access to transportation, housing, and food. BMC found that more patients get resources and receive help when using the THRIVE program. Now, using data gathered from the program, AI algorithms can explore the associations between social needs, health outcomes, and patients’ utilization of resources to improve medical practices and treatments.

Shahabeddin Sotudian shared his team’s work to predict when people would miss imaging appointments.

Shahabeddin Sotudian, PhD Candidate in Systems Engineering, shared his work on missed care opportunities: any scheduled appointment that is cancelled or rescheduled. These are costly for care organizations and can delay diagnosis, posing a great risk to patients. Sotudian’s research looks specifically at missed imaging appointments. His team found that appointment timing is the biggest indicator of missing appointments, with early appointments more likely to be missed than later ones. Another indicator was patients’ access to reliable transportation. There are ways to address these variables; one example discussed was creating a program that arranges transportation to get patients to appointments.
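The kind of analysis behind such findings can be sketched in a few lines of Python. The appointment records and feature names below are invented placeholders, not Sotudian’s data; the point is only how grouping miss rates by a feature surfaces indicators like appointment time or transportation access:

```python
# Hypothetical sketch: estimating no-show risk from appointment features.
# Records are invented; fields mirror the two indicators from the talk.
from collections import defaultdict

# (hour, has_reliable_transport, missed)
records = [
    (8, False, True), (8, True, True), (9, False, True), (9, True, False),
    (10, True, False), (11, False, True), (14, True, False), (14, False, False),
    (15, True, False), (16, True, False), (16, False, True), (17, True, False),
]

def miss_rate(records, key):
    """Share of missed appointments within each group defined by `key`."""
    groups = defaultdict(lambda: [0, 0])  # group -> [missed, total]
    for hour, transport, missed in records:
        g = key(hour, transport)
        groups[g][0] += missed
        groups[g][1] += 1
    return {g: m / t for g, (m, t) in groups.items()}

by_time = miss_rate(records, lambda h, t: "morning" if h < 12 else "afternoon")
by_transport = miss_rate(records, lambda h, t: t)
print(by_time)        # morning slots are missed more often in this toy data
print(by_transport)   # reliable transportation lowers the miss rate here
```

A production model would use a trained classifier over many features, but even this simple grouping shows how a dataset can rank which variables drive missed appointments.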

Yuping Wang explained how health misinformation is spread online, and how misinformation research must be global to be accurate.

Yuping Wang, PhD Candidate in Systems Engineering, spoke about the spread of health misinformation online, especially on social media sites. Wang and their fellow researchers download tweets and then analyze them using fact-checking websites. This allows the team to trace the origin of misinformation, such as where conspiracy theories about COVID-19 stem from. Wang explained that misinformation research must be global and multilingual, so that social media posts from across the world can be included in datasets and analyzed. This global approach is the only way to accurately trace misinformation back to its original source.
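A heavily simplified sketch of the matching step in such a pipeline might look like the following. The posts, debunked claims, token-overlap rule, and threshold are all hypothetical illustrations, not the team’s actual method:

```python
# Hedged sketch: flag posts that closely match claims a fact-checking
# site has already rated false. All inputs below are invented examples.
def tokenize(text):
    return set(text.lower().split())

# Hypothetical claims rated false by a fact-checking site
debunked = [
    "5g towers spread covid-19",
    "garlic cures covid-19",
]

posts = [
    "new study says 5g towers spread covid-19 fast",
    "i got my covid-19 booster today",
    "doctors confirm garlic cures covid-19",
]

def flag_misinformation(posts, debunked, threshold=0.6):
    """Flag posts whose tokens overlap heavily with a debunked claim."""
    flagged = []
    for post in posts:
        post_tokens = tokenize(post)
        for claim in debunked:
            claim_tokens = tokenize(claim)
            if len(post_tokens & claim_tokens) / len(claim_tokens) >= threshold:
                flagged.append(post)
                break
    return flagged

print(flag_misinformation(posts, debunked))
```

Real systems use far more robust matching (and, as Wang noted, must handle many languages), but the pipeline shape is the same: collect posts, compare them against fact-checked claims, and follow flagged content back toward its source.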

This workshop gave researchers from across BU the opportunity to share their work at the intersection of AI and healthcare. The collaboration of researchers and ideas through the FRP facilitates convergence across these different areas of research. The next FRP workshop will be held in May. For more details on this FRP, please visit here.