Sepsis alerts – False alarms and alert fatigue – Jun 2019

As we prepare to test our new sepsis alert system (recently added to SortED, our Rapid Assessment and Treatment – RAT – app), both of these topics deserve thought.

 

The SortED tablet is intended for Emergency Department (ED) nurses and for the GP or nurse-practitioner ‘streamers’ who select appropriate pathways for patients presenting at Urgent Care Centres (UCCs). Two trials of SortED, one in an ED and another in a UCC, have taken place, run alongside the existing systems at St Mary’s and Charing Cross Hospitals. Both trials showed that serial processing of patients (triage or streaming > clinician assessment > investigations > treatment) is still prevalent despite attempts to implement facets of RAT. The SortED process assists with much earlier identification of the investigations and treatments needed. The median time to order IV antibiotics in the control arm of the ED trial was 112 minutes. In the UCC, patients needing IV fluids or antibiotics were sent to the ED, further delaying prompt administration. In contrast, in less than the time it took for either triage or streaming, nurses using SortED had placed simulated ‘orders’ for these and other treatments. There was close agreement between the test (simulated) investigation and treatment orders and the control orders made by ED staff.

 

We have now added a sepsis alert system to SortED which raises nurses’ awareness of the possibility of sepsis and encourages use of relevant point-of-care tests (POCT). In their systematic review of the literature on automated sepsis alert systems, Makam et al. (J. Hosp. Med. 2015; 10:396-402) reported relatively poor results, with positive predictive values of 20.5-53.8% and negative predictive values of 75.6-99.7%. They found modest evidence of improvement in process (antibiotic escalation) but no corresponding improvement in mortality or length of stay. They found minimal data on potential harms from false positive alerts, although their analysis suggests such false alarms would have been relatively frequent.
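To make those predictive values concrete, here is a minimal sketch of how PPV and NPV fall out of alert outcome counts. The formulas are standard; the counts themselves are entirely hypothetical and chosen only so the results land within the ranges Makam et al. report:

```python
def ppv_npv(tp, fp, tn, fn):
    """Predictive values from alert outcome counts.

    tp: alerts fired on true sepsis; fp: alerts fired on non-sepsis;
    tn: no alert, no sepsis; fn: no alert, but sepsis was present.
    """
    ppv = tp / (tp + fp)  # fraction of fired alerts that were real sepsis
    npv = tn / (tn + fn)  # fraction of non-alerted patients truly sepsis-free
    return ppv, npv

# Hypothetical counts: 100 alerts of which 25 were sepsis, and 900
# non-alerted patients of which 20 were missed cases.
ppv, npv = ppv_npv(tp=25, fp=75, tn=880, fn=20)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # → PPV = 25.0%, NPV = 97.8%
```

Note how a system can look reassuring on NPV while three-quarters of its alerts are still false alarms, which is exactly the alert-fatigue risk discussed below.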

 

We know from pilot tests that the automated sepsis alerts triggered by the Electronic Health Record (EHR) system at Charing Cross and St Mary’s hospitals throw up relatively frequent false alarms. Alert fatigue is therefore a worry. Past attempts to increase the specificity of sepsis identification (Sepsis-3) beyond the ‘old’ criteria agreed in 1991 (reviewed by Sterling et al. Crit. Care Med. 2017; 45:1436-1442) missed a large number of patients who gained a significant mortality benefit when given prompt treatment.
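The tension between the sensitive 1991 criteria and the more specific Sepsis-3 approach can be illustrated with a toy screen. The thresholds below are the published SIRS and qSOFA cut-offs (SIRS simplified: the PaCO2 and immature-band criteria are omitted for brevity), but the patient values are invented and this is a sketch, not the logic used in SortED:

```python
def sirs_flags(temp_c, hr, rr, wbc):
    """1991 SIRS criteria (sensitive, less specific); >=2 flags screen positive.

    Simplified: PaCO2 < 32 mmHg and >10% immature bands are omitted.
    """
    return sum([
        temp_c > 38.0 or temp_c < 36.0,
        hr > 90,                        # beats/min
        rr > 20,                        # breaths/min
        wbc > 12.0 or wbc < 4.0,        # x10^9/L
    ])

def qsofa_flags(rr, sbp, gcs):
    """Sepsis-3 qSOFA (specific, less sensitive); >=2 flags suggest high risk."""
    return sum([rr >= 22, sbp <= 100, gcs < 15])

# An invented borderline patient: tachycardic and tachypnoeic with a mildly
# raised white count, but normotensive and fully alert.
print("SIRS >= 2:", sirs_flags(temp_c=37.2, hr=105, rr=24, wbc=13.5) >= 2)   # → True
print("qSOFA >= 2:", qsofa_flags(rr=24, sbp=118, gcs=15) >= 2)               # → False
```

This patient fires the 1991-style screen but not qSOFA, which is the trade-off Sterling et al. quantify: the tighter definition quiets the alarm at the cost of missing patients who would benefit from prompt treatment.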

 

Finding the sweet spot between too tight a definition and too loose is the difficulty now being faced. Understanding the patient groups that trigger false positives is useful, but one needs to be cautious about overzealous exclusion of such patients. With exciting new POCT in development (such as the biosensor for IL-6 from the University of Strathclyde’s Dr Damion Corrigan), more reliable tests for sepsis will hopefully emerge. In the meantime, an early warning measure with appropriate prompting for POCT lactate and white blood cell differential counts, combined with prompt IV fluids and antibiotics, should improve sepsis recognition and treatment in our EDs and UCCs.

 

Finally, we need to consider how to run trials of tools like SortED and how to test them during development. Artificial intelligence (AI) and intelligence augmentation (IA) are at the heart of innovation in healthcare, and as yet the journals have not developed well-worn pathways (nor indeed formal checklists) for one of the more useful types of study. Fully developed tools may be tested in quality improvement studies and papers submitted with SQUIRE checklists. In early development, bench tests with vignette studies are often used, but these have fundamental drawbacks, not least the unrealistic nature of the test and the small sample size of vignette collections. Something between these extremes is essential.

 

The form of IA used in SortED needed testing alongside the current processes used in the ED and UCC. This was a ‘simulated clinical use’ trial, with the ED or UCC team processing the same patients while the SortED operator, guided by a SortED tablet, could see real patients, record observations, then choose from short-lists of investigations and treatments, without affecting their clinical management. Having test and control arms allows one to make direct comparisons of the quality of the ‘decisions’ and their relative timings. It also provides key insights into the current processes operating in the hospitals where the trials take place. Statistical analyses can use standard comparative techniques. At present there is controversy over how some AI and IA systems have been tested. Once we think of a cunning acronym and develop a checklist, matters might improve.
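For the timing comparisons, ‘standard comparative techniques’ could be as simple as comparing medians with a rank-based test. Below is a stdlib-only sketch of the Mann-Whitney U statistic; the control median of 112 minutes matches the ED trial figure above, but every individual value is invented for illustration:

```python
from itertools import product
from statistics import median

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b.

    Counts pairs where a beats b, with ties scoring 0.5. In practice one
    would use a statistics package to obtain the associated p-value.
    """
    return sum((x > y) + 0.5 * (x == y) for x, y in product(a, b))

# Invented time-to-order data in minutes (control median matches the trial's 112).
control = [95, 104, 112, 130, 151]
sorted_arm = [6, 8, 9, 12, 15]
print("medians:", median(sorted_arm), "vs", median(control))  # → medians: 9 vs 112
print("U =", mann_whitney_u(sorted_arm, control))  # → U = 0.0 (every SortED time beat every control time)
```

A U of zero in this direction means complete separation between the arms; with real trial data the p-value and a confidence interval for the median difference would be reported alongside it.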

 

Gillie Francis – Jun 2019