Published on 27.08.21 in Vol 5, No 9 (2021): September
Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/28345, first published Mar 02, 2021.
Original Paper
Automated Size Recognition in Pediatric Emergencies Using Machine Learning and Augmented Reality: Within-Group Comparative Study
ABSTRACT
Background: Pediatric emergencies are rare events; emergency physicians therefore have little experience with them, and outcomes are correspondingly poor. Anatomical peculiarities and the need for individual adjustments make treatment during a pediatric emergency error prone. Critical mistakes occur especially in the calculation of weight-based drug doses. Accordingly, there is a strong need for a ubiquitous assistance service that can, for example, automate dose calculation. However, owing to the complexity of the problem, few approaches exist.
Objective: Technically, such an assistance service can be realized, among other approaches, as an app that uses the depth camera integrated into smartphones or head-mounted displays to gain a 3D understanding of the environment. The goal of this study was to automate this technology as far as possible in order to develop and statistically evaluate an assistance service whose measurement performance is not significantly worse than that of an emergency ruler (the state of the art).
Methods: An assistance service was developed that uses machine learning to recognize patients and then automatically determines their size. Based on the size, the weight is automatically derived, and the dosages are calculated and presented to the physician. To evaluate the app, a small within-group design study was conducted with 17 children, who were each measured with the app installed on a smartphone with a built-in depth camera and a state-of-the-art emergency ruler.
Results: According to the statistical results (one-sample t test; P=.42; α=.05), there is no significant difference between the measurement performance of the app and an emergency ruler under the test conditions (indoor, daylight). The newly developed measurement method is thus not technically inferior to the established one in terms of accuracy.
Conclusions: An assistance service with an integrated augmented reality emergency ruler is technically possible, although some groundwork is still needed. The results of this study clear the way for further research, for example, usability testing.
JMIR Form Res 2021;5(8):e28345
doi:10.2196/28345
Introduction
Background
The results in pediatric emergencies are not satisfactory. Too few children survive such emergencies with favorable neurological outcomes [ , ]. There are several reasons for this. First, this type of emergency is very rare; in Germany, for example, there are only 1000 prehospital resuscitations for 30,000 emergency physicians per year, that is, on average, one emergency physician resuscitates a child every 30 years [ ]. This calculation deliberately ignores the specially trained pediatric emergency physicians available in some urban areas, as such specialists are not common. Second, emergency physicians find it difficult to remain calm in a pediatric emergency. In a survey of 104 emergency physicians conducted by Zink et al [ ], 88% said that they had already felt anxiety or excessive pressure at work. When asked for the reason, 84% said they had experienced these feelings in a pediatric emergency, followed by polytraumatized patients (20%) and obstetric emergencies (18%); multiple answers were possible. Beyond the psychological strain of treating a child, it is mainly the anatomical differences between children and adults, and the associated peculiarities of resuscitation, that cause problems for emergency physicians. Although the resuscitation of an adult is largely standardized, every child requires individual adaptation. The choice of procedures and equipment depends on the size or weight of the child: for example, the size of the endotracheal tube and its insertion depth are specified accordingly [ ], and the dosage of medication is calculated individually based on the patient's weight [ ]. Especially in drug dosing, mistakes happen rather frequently, sometimes with life-threatening consequences [ - ]. This is because it is difficult to determine a child's weight and thus the correct dose. There are various methods to determine a child's weight. As Young and Korotzer [ ] describe in their systematic analysis, the most precise method is parental estimation.
If the parents are not present, the state of the art is to derive the weight from the height of the child using a so-called emergency ruler (eg, Broselow tape [ ]). These tools are important because medics' own estimations are not very accurate, according to the systematic analysis mentioned earlier [ ]. Despite these aids, emergency physicians repeatedly express a desire for technical support [ , ]. Therefore, one idea is to create a ubiquitous assistance service that uses modern wearables (eg, a smartwatch for measuring the compression depth [ ], head-mounted displays [HMDs] as screens or for telemedical scenarios [ , ]) to provide a service that requires as little attention as possible while still providing substantial assistance (principle of calm technology [ ]). To accomplish this, a high degree of automation must be achieved in addition to a high level of usability. The idea is to recognize the patient with computer vision algorithms and to measure the patient directly using a depth camera. All other parameters can then be automatically derived, calculated, and displayed on an HMD, for example, integrated into the process steps of the American Heart Association [ ] or the European Resuscitation Council (ERC) [ ] guidelines. Based on a literature review, expert interviews, and initial research results [ ], an app implementing this level of automation was programmed and evaluated in a comparative study.

State of the Art
There are several approaches to replacing emergency rulers with technical support, for example, a smartphone or a tablet [ - ]. Promising studies have already confirmed that the use of an app can minimize errors [ - ]. However, the problem with most of these apps (all but one of those mentioned earlier) is that size recognition is not automated, that is, manual entries are necessary. Apart from usability, there is also the problem that these values (age, weight, or height) must be known in the first place. For inhospital cases, it can be assumed that the weight of the child is known; however, this does not apply to prehospital cases. Some apps even use the now-obsolete age-based formula for calculating dosages [ ], which is inferior to the length-based method [ ]. A very interesting app is Optisizer, developed by Wetzel et al [ ]. A 20×20-cm tag placed next to the child is used as a reference value for the size. A first clinical trial looks promising [ ]. However, the tag must be at the same level as the child, and the measurement must always be taken at a 90° angle. This is because a camera without additional sensor technology has no notion of angle or depth, so an accurate calculation cannot be made automatically. A revised version of the app, which should solve this problem, is announced by the authors in their outlook [ ].

The aim of this paper is therefore to fill this gap. An app is programmed and evaluated that uses augmented reality (AR) and a depth camera to provide a simple, fast, and safe way to automatically determine a child's weight and thus the medication dosage.
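For context, tag-based scaling of the kind Optisizer uses reduces, in the ideal 90° case, to a simple proportion. The following sketch (the function name is invented, not from any of the cited apps) shows the computation and, in its comment, why deviations from plane and angle corrupt it:

```python
def length_from_reference_tag(child_px, tag_px, tag_cm=20.0):
    """Ideal-case tag scaling: with a tag of known edge length (20 cm)
    lying in the same plane as the child and photographed at a 90-degree
    angle, real length = pixel length * (tag_cm / tag_px). If the camera
    angle or the tag position violates these assumptions, the scale
    factor no longer applies to the child -- precisely the limitation
    that a depth camera removes."""
    return child_px * (tag_cm / tag_px)
```

For example, a child spanning 600 pixels next to a tag spanning 120 pixels would be estimated at 100 cm.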
Methods
Background
The evaluated app is based on a prototype in which the measuring accuracy of the Asus ZenFone AR's depth cameras was already proven [ ]; it does not deviate significantly from measurements made with the aid of an emergency ruler. However, the handling was problematic: the individual measuring points (head and foot of the child) had to be marked manually. To address this problem, the app was further developed so that it recognizes the child using machine learning and then performs an automated measurement. Furthermore, dosages for adrenaline and amiodarone are calculated and made available to the user. The calculations are based on the data of the KiGGS (Studie zur Gesundheit von Kindern und Jugendlichen in Deutschland [German Health Interview and Examination Survey for Children and Adolescents]) study of the Robert Koch Institute (RKI) [ ] and the formulas stated in the ERC guidelines [ ]. It must therefore be evaluated whether the new functionality of the app can reproduce the good results of the previous study. The decisive factor is how well the machine vision process (recognizing the child from head to foot) and the size recognition work in combination.

App Design and Technology
The size recognition of a patient and the dose calculation basically consist of three steps. In the first step, a person is detected in the camera’s field of view using an object recognition algorithm and is classified as a human being. In addition, the area in the image in which the child is located must be delimited as precisely as possible from the surroundings, and the measurement points (upper and lower limits) must be defined. In the second step, the distance between these two points and the camera is measured. This defines two points in 3D space, and the size can be calculated. In the final step, based on stored data, the respective dosages must be loaded and displayed.
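The three steps can be sketched in code. The following is a minimal illustration, not the app's actual implementation: the detector, the camera intrinsics, and the weight lookup are stand-ins, and all helper names are invented.

```python
import math

def measurement_points(box):
    """Step 1 output: upper and lower measurement points from a pixel
    bounding box (x1, y1, x2, y2). The x values are averaged so that both
    points lie on one vertical line, preventing a diagonal measurement."""
    x1, y1, x2, y2 = box
    x_mid = (x1 + x2) / 2.0
    return (x_mid, min(y1, y2)), (x_mid, max(y1, y2))

def back_project(point, depth_m, fx=1500.0, fy=1500.0, cx=960.0, cy=540.0):
    """Step 2 helper: pinhole-model back-projection of a pixel plus its
    depth reading into 3D camera coordinates (metres). The intrinsics used
    here are placeholder values, not the ZenFone AR's calibration."""
    u, v = point
    return ((u - cx) * depth_m / fx, (v - cy) * depth_m / fy, depth_m)

def estimate_doses(box, depth_at, weight_for_height_cm, dose_formulas):
    """Full pipeline: bounding box -> 3D height -> weight -> doses.
    `depth_at(point)` stands in for the depth camera and
    `weight_for_height_cm` for the stored KiGGS-derived lookup."""
    top, bottom = measurement_points(box)                       # step 1
    p1 = back_project(top, depth_at(top))
    p2 = back_project(bottom, depth_at(bottom))
    height_cm = math.dist(p1, p2) * 100.0                       # step 2
    weight_kg = weight_for_height_cm(height_cm)                 # step 3
    return {drug: f(weight_kg) for drug, f in dose_formulas.items()}
```

Called with the weight-based ERC formulas (0.01 mg/kg adrenaline, 5 mg/kg amiodarone), the function returns the doses in milligrams for the detected child; the lookup and intrinsics above are, again, placeholders only.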
Object Recognition
As soon as the app is started, it is ready to detect objects. The object detection is performed using the TensorFlow Object Detection API [ ] and the TensorFlow Detection Model Zoo [ ]. Based on different indicators, such as GitHub activity, Google searches, books, and job descriptions, TensorFlow can be considered the leading deep learning framework [ ].

Recognizing people is one of the standard tasks of machine vision, so it should not be necessary to train the entire functionality from scratch. To keep the app clear and to save resources, only the functionality for recognizing persons is activated. If a person is recognized, a bounding box is placed around this person and the confidence is displayed. At a confidence of 98% or more, the coordinates of the bounding box are stored in variables. The y-coordinates of two diagonally opposite corner points of the box indicate the vertical extent of the person in the image.
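A sketch of how such a filter over the detector output could look. The array layout follows the TensorFlow Object Detection API's normalized [ymin, xmin, ymax, xmax] boxes and the COCO label map; the 98% threshold is the one named above, while the function itself is illustrative rather than the app's code:

```python
PERSON_CLASS_ID = 1          # "person" in the COCO label map of the Model Zoo
CONFIDENCE_THRESHOLD = 0.98  # only store boxes at >= 98% confidence

def person_box_px(boxes, classes, scores, width=1920, height=1080):
    """Return the most confident 'person' detection at or above the
    confidence threshold as a pixel box (xmin, ymin, xmax, ymax), or None.

    `boxes` holds normalized [ymin, xmin, ymax, xmax] rows as produced by
    the TensorFlow Object Detection API; converting them to pixels yields
    the corner coordinates whose y values serve as the upper and lower
    measurement limits."""
    best = None
    for box, cls, score in zip(boxes, classes, scores):
        if cls == PERSON_CLASS_ID and score >= CONFIDENCE_THRESHOLD:
            if best is None or score > best[1]:
                best = (box, score)
    if best is None:
        return None
    ymin, xmin, ymax, xmax = best[0]
    return (xmin * width, ymin * height, xmax * width, ymax * height)
```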
Size Measurement
The size is measured using the Google Tango framework; switching between the activities is done using Android's Intent class. Tango is the predecessor of ARCore [ ] and was developed for mobile phones with depth cameras. By default, Tango uses touch input to set the measurement points manually. To automate this, the points determined during object detection are adopted. To prevent diagonal measurements, the x values of the two points are averaged and used as the x-coordinate of both measuring points. It is important that both the object detection and the size measurement work with the same resolution (in this case, 1920×1080 pixels).

Dose Calculation
Depending on the size, the corresponding weight is loaded from a store, and the appropriate dose is calculated and displayed using the formulas specified in the ERC guidelines [ ]. The size-to-weight mapping is, as already mentioned, derived from the data of the KiGGS study of the RKI [ ].

Study Design and Measurements
The study design was a within-group setting in which each child was measured anonymously, first with the app installed on a ZenFone AR and then with an emergency ruler (Pediatape [ ]). The measurements (S1, S2) were performed in a room of a kindergarten during daylight. The children were lying on a wooden floor. For the measurement with the app (S1), the person taking the measurement stood in front of the child; the angle and the height of the camera were not specified. The process was similar to taking a photo with a smartphone; the only requirement was that the person being measured was captured as a whole by the camera (see ). When measuring with the emergency ruler (S2), the beginning of the ruler was placed at the head and the size was then read at the foot (see ). For both measurements, it was important that the child kept their legs straight. The age, height, weight, and gender of the children were not known beforehand; the sample was therefore random with respect to these parameters. In conclusion, there was one independent variable (measuring device) that took two values (app, emergency ruler).
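The within-group comparison described above amounts to a one-sample t test on the paired differences (app minus ruler). A minimal sketch with invented example data, not the study's measurements:

```python
import math
from statistics import mean, stdev

def one_sample_t(differences):
    """t statistic for H0: the mean paired difference is zero."""
    n = len(differences)
    return mean(differences) / (stdev(differences) / math.sqrt(n))

# Hypothetical (app_cm, ruler_cm) pairs -- NOT the study's data.
pairs = [(101.2, 100.5), (96.8, 97.4), (110.3, 110.0), (88.9, 89.6),
         (120.1, 119.5), (105.0, 105.8), (93.4, 93.0), (99.9, 100.2)]
diffs = [app - ruler for app, ruler in pairs]
t = one_sample_t(diffs)

# Two-sided critical value for alpha = .05 and df = n - 1 = 7 is 2.365
# (standard t table); |t| below it means no significant difference.
no_significant_difference = abs(t) < 2.365
```

With these invented pairs, |t| stays well below the critical value, mirroring the kind of non-significant result the study reports (P=.42 with n=17).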