Our latest paper investigates the factors that influence the failure of triage to identify critically ill patients whose acuity should be set to priority 1 (Red) or 2 (Orange).

Worldwide, the main triage systems show worrying false-negative rates for acuities 1 and 2. We show that, for most patient arrival rates, identification of acuity 1 and 2 patients is sub-maximal.

We are delighted to announce that the Sort-ED Partnership has entered a collaboration agreement with Dr Arian Zaboli, Coordinator of non-medical research at IRTS (Innovation, Research and Teaching Service), a division of the South Tyrol Health Authority. IRTS handles research for 7 hospitals in Northern Italy.

Arian Zaboli joined our Academic Advisory Panel last week and has already given us an excellent challenge to work on. He is interested in making triage safer for syncope patients and has results from a recently published observational study.

Barbara’s November blog mentioned that we looked at the potential impact of SortED on breaches of the 4 hour target in our ED trial, so I thought I would summarise what the data tell us. You may not be familiar with this type of clinical trial. It was a simulated clinical-use trial in which SortED and conventional triage were tested side by side: one nurse used SortED and the other used the Imperial College Healthcare NHS Trust’s standard ‘triage only’ system.

Our trust has recently hired a management consultancy firm to help improve our ED’s performance (we’ve done this a few times over the last few years). One of the issues we’ve been focussing on is how to avoid non-admitted breaches in the ED. Put simply, breaches of the 4 hour target cost trusts money. I think it’s intuitive to most Emergency Medicine (EM) clinicians that if you get the patient started on the right journey from the get-go, things go smoothly. The initial assessment of patients is key to getting it right first time, as opposed to the DIRE (doing it right eventually) situation we often find ourselves in.

Last month’s coding of the new routes went well and regression testing (where we check that the new code does not disturb the pre-existing programming) only produced one very minor headache.

However, headaches come in all degrees, and this month I started coding up the analgesia selection part of the OUCH™ pain scoring system.

Having had releases 1.3 and 1.4 of SortED® tablet behave themselves remarkably well in the two clinical trials, I’ve been putting off the daunting task of getting back to coding. This is essential to make the next set of improvements, but difficult after such a long interval testing the system and analysing data.

One function of SortED tablet has not really been put to the test during our simulated clinical-use trials in the EDs at Charing Cross and St Mary’s, or in the UCC at Charing Cross Hospital. We’ve focused on the main functions, but there is an interesting part of the toolkit we have yet to put to the test with real patients.

I’ve been writing up a detailed report on every facet of our Urgent Care Centre trial for Health Education England who kindly provided some very welcome financial assistance. The benefit of this process is that it is a good point to step back and think about what the results are telling us. In May I wrote about the GP streaming process. In order to evaluate how successful GPs had been, an ED consultant reviewed all 904 patients giving us a ‘gold standard’ streaming outcome.

At long last after all that abstraction work, we are beginning to get results and can examine the outcome of GP streaming in the UCC. We’ve focused analysis on the five main streaming outcomes. Each outcome was appraised by an ED consultant examining the patient’s UCC notes. We were able to obtain 904 evaluable records where we had both a completed SortED record and an equivalent UCC record with confirmed identity.
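
For a flavour of how this comparison works, here is a minimal sketch in Python. The file name, column names and the agreement measure are illustrative assumptions, not the exact variables or analysis used in the trial.

```python
# Minimal sketch: cross-tabulating GP streaming outcomes against the ED
# consultant's gold-standard review. The file and column names below are
# illustrative only, not the ones used in the trial.
import pandas as pd

records = pd.read_csv("ucc_evaluable_records.csv")  # hypothetical file of the 904 matched records

# Cross-tabulate GP streaming outcome (rows) against consultant gold standard (columns)
crosstab = pd.crosstab(records["gp_stream"], records["consultant_stream"])
print(crosstab)

# Overall agreement: proportion of patients where the GP and the consultant
# chose the same one of the five streaming outcomes.
agreement = (records["gp_stream"] == records["consultant_stream"]).mean()
print(f"Overall agreement: {agreement:.1%}")
```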

Devices which use either high-tech AI based on machine learning, or simpler algorithms like SortED, to perform tasks like triage, or which make recommendations for investigations, fall under the control of the MHRA (Medicines and Healthcare products Regulatory Agency). The Medicines part of the regulatory framework has extremely well established analytical methods and tools: the classical double-blind clinical trial, and a peer-review system well versed in the relevant techniques. The same is the case for simple medical devices.

Whilst the abstraction on our UCC trial grinds on, I am taking advantage of the analytical ‘lull’ pending data to review the literature on sepsis. Sepsis is a life-threatening response to infection that can lead to tissue damage, organ failure and death. For years sepsis has been underdiagnosed, and The Sepsis Trust estimates that somebody dies of sepsis every 3.5 seconds.

While the huge task of medical record abstraction on our Urgent Care Centre trial drags on, we share the frustration of all researchers that widespread access to pseudonymised patient records does not exist. Protecting the anonymity of the patients who have kindly consented to participate in our studies is essential.

Validity is often confused with reliability, but it is important to distinguish between the two concepts. Validity relates to how well a device or procedure measures what it is intended to measure. The source of the confusion is that one cannot have validity without reliability: if repeated measurements keep giving different answers, the results cannot be a valid measure of anything. I will return to the issue of reliability in later blogs, but for now let’s focus on validity and how to test it for the new pain scoring component of SortED tablet (OUCH™).
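
To make the distinction concrete, here is a toy sketch in Python using simulated numbers (nothing to do with real OUCH data): reliability asks whether repeated measurements agree with each other, while validity asks whether a measurement agrees with the quantity it is meant to capture.

```python
# Toy illustration of reliability vs validity using simulated pain scores.
# All numbers are made up; this is not OUCH or trial data.
import numpy as np

rng = np.random.default_rng(0)
true_pain = rng.uniform(1, 10, size=100)            # the quantity we want to measure

# Two repeat measurements taken with the same (noisy) instrument
measure_1 = true_pain + rng.normal(0, 0.5, 100)
measure_2 = true_pain + rng.normal(0, 0.5, 100)

# Reliability: do repeated measurements agree with each other?
reliability = np.corrcoef(measure_1, measure_2)[0, 1]

# Validity: does the measurement agree with what it is meant to capture?
validity = np.corrcoef(measure_1, true_pain)[0, 1]

print(f"test-retest reliability ~ {reliability:.2f}")
print(f"validity vs reference   ~ {validity:.2f}")
```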

In last month’s blog, ED was shown as a pressure vessel with the caption ‘As pressure mounts negative factors impair performance.’ Software designed by folk who have never experienced this pressurised and distracting environment first-hand often makes matters worse.

Stepping back from the practicalities of the ED and Urgent Care Centre trials it is good to consider the big picture and what we are trying to achieve. As with any engine, significant improvements in performance can be achieved by multiple changes, even minor ones. It’s the same principle used to tune Formula 1 cars, sadly without the same resources!

We have many more patients to analyse in this UCC trial, so this will be the first of many sessions. The primary task is to make sure that the right SortED record is matched to the correct Electronic Patient Record (EPR) from the UCC. On SortED, we were given permission to collect the patient’s 5-digit EPR number, their age and gender. The date/time seen was automatically recorded. 962 SortED records were available for abstraction. 20 records had an invalid EPR (fewer or more than 5 digits), leaving 942 candidates for abstraction.
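
For the curious, the matching step boils down to something like the sketch below (Python). The file names, column names and the one-hour matching window are assumptions for illustration, not our exact procedure.

```python
# Sketch of the record-matching step. File names, column names and the
# one-hour matching window are illustrative assumptions only.
import pandas as pd

sorted_records = pd.read_csv("sorted_tablet_records.csv", dtype={"epr": str},
                             parse_dates=["seen_at"])
ucc_records = pd.read_csv("ucc_epr_records.csv", dtype={"epr": str},
                          parse_dates=["seen_at"])

# Keep only SortED records with a valid 5-digit EPR number
valid = sorted_records[sorted_records["epr"].fillna("").str.fullmatch(r"\d{5}")]
print(f"{len(sorted_records) - len(valid)} records dropped for invalid EPR")

# Pair each remaining SortED record with the UCC record that has the same
# EPR number and the closest date/time seen (within a one-hour window)
matched = pd.merge_asof(
    valid.sort_values("seen_at"),
    ucc_records.sort_values("seen_at"),
    on="seen_at", by="epr",
    direction="nearest",
    tolerance=pd.Timedelta("1h"),
)
# "stream_outcome" is an assumed column in the UCC file; non-null means matched
print(f"{matched['stream_outcome'].notna().sum()} candidate matches for abstraction")
```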

SortED tablet gives nurses a novel toolkit for initial patient assessment. We have, of course, examined the performance of most components individually, but now we need to summarise the first ED trial. Journals these days require that you choose and fill in a checklist which pigeonholes your study neatly into one category or another.

Rapid Assessment and Treatment (RAT) involves the early identification of treatments at initial patient assessment. SortED displays treatments and procedures in a combined Rx list. These lists contain 1-15 Rx (median 6 Rx) depending on presentation. One important procedure, recommended for any presentation where IV treatment is likely, is obtaining IV access. Other procedure options include identifying patients who should be ‘nil by mouth’ or who need cardiac monitoring or isolation.
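
As a rough sketch of the idea, the combined list can be thought of as an ordered collection of treatments and procedures keyed by presentation, with IV access added whenever any listed treatment is intravenous. The presentations and Rx items below are invented placeholders, not SortED’s actual clinical content.

```python
# Illustrative only: the presentations and Rx items are invented placeholders,
# not SortED's actual clinical content.
RX_LISTS = {
    "example_presentation_a": ["oral analgesia", "wound dressing"],
    "example_presentation_b": ["IV fluids", "IV antibiotics", "cardiac monitoring"],
}

def combined_rx_list(presentation: str) -> list[str]:
    """Return the Rx list for a presentation, adding 'obtain IV access'
    whenever any recommended treatment is given intravenously."""
    rx = list(RX_LISTS.get(presentation, []))
    if any(item.startswith("IV ") for item in rx):
        rx.insert(0, "obtain IV access")
    return rx

print(combined_rx_list("example_presentation_b"))
# ['obtain IV access', 'IV fluids', 'IV antibiotics', 'cardiac monitoring']
```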

Yesterday I was complaining about how labour-intensive our tests on the appropriateness of selected investigations had been.
In fact, we already had another avenue to explore. The Emergency Departments of ICHT use Electronic Patient Records (EPR) provided by FirstNet/Cerner. These records show which investigations and treatments were ordered and, usually, record a time-stamp when those orders were placed.
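
That opens up a much less labour-intensive analysis. Something along the lines of the sketch below (Python; the export layout and column names are assumptions about a FirstNet/Cerner extract, not a documented format) can pull out, for each attendance, how long after arrival the first investigation was ordered.

```python
# Sketch of using EPR order time-stamps. The CSV layout and column names are
# assumptions about a FirstNet/Cerner export, not a documented format.
import pandas as pd

orders = pd.read_csv("epr_orders_export.csv",
                     parse_dates=["arrival_time", "order_time"])

# Keep investigation orders only ("order_type" is an assumed column)
investigations = orders[orders["order_type"] == "investigation"]

# Time from arrival to the first investigation order, per attendance
first_order = (investigations
               .groupby("attendance_id")
               .agg(arrival=("arrival_time", "first"),
                    first_order=("order_time", "min")))
first_order["minutes_to_first_order"] = (
    (first_order["first_order"] - first_order["arrival"]).dt.total_seconds() / 60
)
print(first_order["minutes_to_first_order"].describe())
```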

One of the main objectives for SortED was to move away from triage to a process more like Rapid Assessment and Treatment (RAT) where a doctor gets investigations and early treatments (Rx) underway during the initial assessment. The control arm of our ED study showed that serial processing still prevails, usually with 4 waits before treatment is given.

The practical part of research on Emergency Department systems grinds to a standstill when the winter rush starts. Nurses and Consultants are far too busy dealing with the winter influx of patients to help. Fortunately both the clinical trial work involving volunteer nurses and the expert review of patients’ records by our three ED consultant volunteers have been completed so the Somerset team are now able to analyse the data while ED staff focus on their hectic ‘day job.’

The accurate and early evaluation of pain in Emergency Departments is the key to prompt and appropriate analgesia administration. At the time of our first ED trial (July 2016-April 2017), both SortED® and the Imperial College Healthcare NHS Trust’s (ICHT) software used a simple 1-10 ‘pain ladder’ approach to record patients’ description of their pain.

Abstraction is the tedious process of obtaining data from medical records. We need key information about presenting complaints, observations, streaming decisions and the timings of investigations and treatments to compare with SortED® records. Inspection of the UCC patients’ records showed inconsistent use of fields, so automated abstraction was unlikely to produce accurate results.

Our Urgent Care Centre (UCC) trial of SortED® tablet (part funded by Health Education England) enrolled the last patient on 23 June. Our nurse volunteers were present at GP streaming of 962 patients, simultaneously evaluating patients using the SortED app. Comparison of SortED records with the UCC’s own patient records is in progress.