Data Science: Explaining the Unexplainable!

Thursday, April 20 at 10:00 AM - 11:00 AM CT
South Building, Level 1 | S105A

The health sector generates more than 19 terabytes of data yearly, and that's just the clinical data! Buried in that data is critical information that can help guide our wellness, inform our healthcare professionals, and lower the cost of care. But we must become better at digesting and interpreting this data. Enter machine learning. Machine learning models have proven to deliver higher accuracy than their traditional statistical counterparts, yet the traditional models remain the default choice for many. This is likely due to a lingering fear that we cannot understand how and why machine learning models arrive at their predictions. This presentation explores two current methods for explaining machine learning models: LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). Participants will "see" and understand, at a basic level, how these models generate their forecasts, and gain the same confidence and ease they previously had with the statistical counterparts. The session includes discussion and walk-throughs of real-world examples to demonstrate how these tools can support highly accurate predictions and explain the unexplainable!
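
As a rough, hypothetical illustration of what "explaining" a model can look like in practice (this sketch is not from the session materials), the Python snippet below applies SHAP to a toy tree-based model. The dataset, feature names, and model choice are all placeholders, not the presenter's examples.

```python
# Illustrative sketch only; not taken from the session materials.
# The dataset, feature names, and model below are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical tabular data: rows are patients, columns are clinical features.
rng = np.random.default_rng(0)
feature_names = ["age", "bmi", "systolic_bp", "a1c"]
X = rng.normal(size=(500, len(feature_names)))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=500)

# A tree-based model standing in for the "more accurate" machine learning model.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP decomposes each prediction into additive per-feature contributions
# (Shapley values) relative to the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# For one patient, show how much each feature pushed the prediction up or down.
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```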

Learning Objectives

  • Compare machine learning models with traditional statistical models and explain their advantages and disadvantages
  • Describe how LIME and SHAP are used as tools to help explain a machine learning model (a brief sketch follows this list)
  • Identify two to three use cases where machine learning tools could be used to answer critical clinical and business questions
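
In the same spirit, and again purely as a hypothetical sketch rather than the presenters' material, the snippet below shows how LIME might be asked to explain a single prediction from a placeholder model; the data, feature names, and model are illustrative.

```python
# Illustrative sketch only; not taken from the session materials.
# Same kind of hypothetical setup as above: placeholder data, features, model.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["age", "bmi", "systolic_bp", "a1c"]
X = rng.normal(size=(500, len(feature_names)))
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a simple, interpretable surrogate model around one prediction
# to estimate which features drove that particular result.
explainer = LimeTabularExplainer(
    training_data=X,
    feature_names=feature_names,
    mode="regression",
)
explanation = explainer.explain_instance(X[0], model.predict, num_features=4)

# Each entry pairs a feature condition with its estimated local contribution.
print(explanation.as_list())
```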
Credits
CAHIMS, CME, CNE, CPHIMS, ACHE
Status
Active
Audience
Chief Data Officer, Chief Quality Officer and Chief Clinical Transformation Officer, Clinical Informaticist
ID
193

Speakers

Joshua Jorgensen
Data Scientist, Health Industry
CGI