Machine Learning Can Classify Vital Sign Alerts as Real or Artifact in Online Continuous Monitoring Data

M. Hravnak, L. Chen, A. Dubrawski, D. Wang, E. Bose, G. Clermont, A. M. Kaynar, D. Wallace, A. Holder, and M. R. Pinsky
Journal Article, Intensive Care Medicine Experimental, Vol. 3, p. 550, December 2015

Abstract

Introduction
Alarm hazards remain the top patient safety concern for 2015. Machine learning (ML) can be used to classify patterns in monitoring data to differentiate real alerts from artifact.

Objectives
To determine the degree to which ML, specifically random forest (RF) classification, can classify vital sign (VS) alerts in continuous monitoring data as either real or artifact as they unfold online.

Methods
Noninvasive monitoring data (heart rate [HR], respiratory rate [RR; bioimpedance], oscillometric blood pressure [BP], peripheral oximetry [SpO2]) from 8 weeks of admissions to a 24-bed step-down unit were recorded at 1/20 Hz. Alerts were defined as VS deviations beyond stability thresholds (HR 40-140, RR 8-36, systolic BP 80-200, diastolic BP < 110, SpO2 > 85%) persisting for at least 80% of a 5-minute moving window. Of 1,582 alerts, 631 were labeled by a 4-member expert committee as real, artifact, or unable to classify: RR 132 real, 25 artifact; BP 45 real, 40 artifact; SpO2 181 real, 93 artifact (HR alerts were too few to analyze). After extracting features from the expert-annotated alerts, we constructed a series of 10 moving windows, each 3 minutes wide, ending 0, 20, 40, 60, 80, 100, 120, 140, 160, and 180 s after the time the VS first crossed the alert threshold. The experiment was performed in a leave-one-alert-out setup: in each iteration one alert was held out as the test alert and the rest served as training alerts. We trained one model per VS using only the training alerts' windows ending 180 s after the threshold crossing, and then made predictions on each of the test alert's sliding windows. We then computed area under the curve (AUC) scores by aggregating the predictions at each test-window offset.
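To make the alert definition concrete, the following is a minimal sketch of the threshold-plus-persistence rule described above, not the authors' code. It assumes pandas series of VS samples at the stated 1/20 Hz rate (so a 5-minute window holds 15 samples); the names `THRESHOLDS`, `out_of_range`, and `detect_alerts` are illustrative.

```python
import pandas as pd

# Stability thresholds from the abstract; a sample is "out of range"
# when the vital sign falls outside these bounds (None = no bound).
THRESHOLDS = {
    "HR":   (40, 140),    # beats/min
    "RR":   (8, 36),      # breaths/min
    "SBP":  (80, 200),    # mmHg
    "DBP":  (None, 110),  # mmHg, upper bound only
    "SpO2": (85, None),   # %, lower bound only
}

SAMPLE_PERIOD_S = 20                       # data recorded at 1/20 Hz
WINDOW_SAMPLES = 300 // SAMPLE_PERIOD_S    # 5-minute window -> 15 samples
PERSISTENCE = 0.80                         # fraction of window out of range

def out_of_range(series: pd.Series, vs: str) -> pd.Series:
    """Boolean mask of samples violating the stability threshold for `vs`."""
    lo, hi = THRESHOLDS[vs]
    mask = pd.Series(False, index=series.index)
    if lo is not None:
        mask |= series < lo
    if hi is not None:
        mask |= series > hi
    return mask

def detect_alerts(series: pd.Series, vs: str) -> pd.Series:
    """True where >= 80% of the trailing 5-minute window is out of range."""
    viol = out_of_range(series, vs).astype(float)
    frac = viol.rolling(WINDOW_SAMPLES, min_periods=WINDOW_SAMPLES).mean()
    return frac >= PERSISTENCE
```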
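The abstract does not specify the extracted features or the RF configuration, so the sketch below only illustrates the evaluation protocol: train on the 180 s windows of all other alerts, score the held-out alert at every window offset, and aggregate predictions into one AUC per offset. The feature array layout and the hyperparameters (e.g. `n_estimators=500`) are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

OFFSETS_S = list(range(0, 200, 20))   # windows ending 0..180 s into the alert

def leave_one_alert_out_auc(features: np.ndarray, labels: np.ndarray) -> dict:
    """
    features: shape (n_alerts, n_offsets, n_features); one feature row per
              (alert, window-end offset), with index -1 being the 180 s window.
    labels:   shape (n_alerts,); 1 = real alert, 0 = artifact.
    Returns {offset_s: AUC} aggregated over all held-out alerts.
    """
    n_alerts, n_offsets, _ = features.shape
    assert n_offsets == len(OFFSETS_S)
    scores = np.zeros((n_alerts, n_offsets))
    for i in range(n_alerts):                      # leave one alert out
        train = np.arange(n_alerts) != i
        # Hyperparameters are illustrative; the abstract does not give them.
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        # Train only on windows ending 180 s after the threshold crossing.
        clf.fit(features[train, -1, :], labels[train])
        # Score the held-out alert at every sliding-window offset.
        scores[i] = clf.predict_proba(features[i])[:, 1]
    return {off: roc_auc_score(labels, scores[:, j])
            for j, off in enumerate(OFFSETS_S)}
```

Under this setup, the AUC at offset 0 reflects only the 3 minutes preceding the threshold crossing, which corresponds to the 0 s window results reported next.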

Results
The RF classifier discriminated real BP alerts from artifact using information from the prior 3 minutes with an AUC of 0.80 at the 0 s window, improving to 0.86 for the window ending 180 s into the alert. SpO2 had an AUC of 0.88 at the 0 s window, improving to 0.96 at the 180 s window. RR discrimination had an AUC of 0.73 at the 0 s window, improving to 0.92 at the 180 s window.

Conclusions
An RF model trained on a small set of expert-annotated data was able to classify RR, BP, and SpO2 alerts in monitoring data as real or artifact, to a clinically helpful degree, as they unfolded online. BP and SpO2 discrimination improved only modestly with additional information gained after alert onset, whereas RR discrimination improved substantially as the alert continued to unfold. This approach holds promise for improving monitor alerting technology and clinical care.

Notes
Grant Acknowledgments: NIH NINR R01NR013912; NSF 1320347; NHLBI-K08-HL122478

BibTeX

@article{Hravnak-2015-121705,
author = {M. Hravnak and L. Chen and A. Dubrawski and D. Wang and E. Bose and G. Clermont and A. M. Kaynar and D. Wallace and A. Holder and M. R. Pinsky},
title = {Machine Learning Can Classify Vital Sign Alerts as Real or Artifact in Online Continuous Monitoring Data},
journal = {Intensive Care Medicine Experimental},
year = {2015},
month = {December},
volume = {3},
pages = {550},
}