Burn care ranges from supportive medical treatment to aggressive surgical intervention. These diametrically different treatment strategies depend on the clinical evaluation of burn depth. However, the diagnostic accuracy of clinical assessment via visual and tactile inspection is only 50–80% and lacks standardization. Burns are traditionally divided into three depths of tissue injury: epidermal, partial-thickness, and full-thickness. Partial-thickness burns are further subdivided into superficial-partial and deep-partial thickness burns. Early diagnosis of burn depth dictates treatment planning: superficial-partial thickness burns receive medical management, whereas deep-partial thickness burns often benefit from early surgical excision and grafting. However, the difficulty of assessing burn depth in the early period delays excision by days and remains a bottleneck in burn care.

Hence, we propose data-driven approaches for real-time burn depth determination using B-mode ultrasound images. A machine learning (ML) approach is first developed to identify the burn depth. The limitation of this approach is the need to select image features explicitly. To overcome this limitation and to improve classification accuracy for the more challenging case of in situ burned skin tissue, a convolutional neural network (CNN)-based deep learning (DL) model is then developed.

For the ML approach, a grey-level co-occurrence matrix (GLCM) computed from the ultrasound images of the tissue is used to construct the textural feature set. Classification is performed using a nonlinear support vector machine (SVM) and kernel Fisher discriminant analysis, with leave-one-out cross-validation for independent assessment of the classifiers. The model is tested on pairwise binary classification of four burn conditions in ex vivo porcine skin tissue: (i) 200°F for 10 s, (ii) 200°F for 30 s, (iii) 450°F for 10 s, and (iv) 450°F for 30 s.
The average classification accuracy for pairwise separation is 99% with just over 30 samples in each burn group, and the average multiclass classification accuracy is 93%. However, the classification accuracy deteriorates for in situ burned skin samples obtained from freshly euthanized postmortem pigs.
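To make the explicit-feature step concrete, the following sketch computes a normalized GLCM and three of the classic Haralick-style texture statistics (contrast, energy, homogeneity) in pure NumPy. It is an illustrative toy, not the dissertation's pipeline: the function names, the single-offset GLCM, and the choice of statistics are assumptions for illustration; the actual feature set fed to the SVM and kernel Fisher discriminant may differ.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset.

    `img` holds integer grey levels in [0, levels); (dx, dy) is the
    spatial offset between co-occurring pixel pairs.
    """
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_features(p):
    """Contrast, energy, and homogeneity from a normalized GLCM `p`."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)        # penalizes grey-level jumps
    energy = np.sum(p ** 2)                    # high for uniform textures
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return contrast, energy, homogeneity

# Toy 4-level "image": quadrants of constant grey level.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
contrast, energy, homogeneity = haralick_features(glcm(img, levels=4))
```

In practice such features would be stacked into a vector per ultrasound image and passed to a nonlinear classifier; libraries such as scikit-image (`graycomatrix`/`graycoprops`) provide optimized versions of the same computation.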
To overcome the limitations of the ML approach for the in situ specimens, we develop a DL model using a CNN architecture in a two-step process: the network is first initialized using the ex vivo burned skin samples and is then used to classify the in situ burn samples. The performance metrics obtained from 20-fold cross-validation show that the model can identify deep-partial thickness burns, which are the most difficult to diagnose clinically, with 97% accuracy, 91% sensitivity, and 98% specificity. The diagnostic accuracy of the classifier is further illustrated by the high area-under-the-curve values of 0.95 and 0.92 for the receiver operating characteristic and precision-recall curves, respectively. A post hoc explanation indicates that the classifier activates on the discriminative textural features in the B-mode images for burn classification. The difference in activation level with respect to burn depth highlights the network's ability to discern variation in burn depth. This aspect of the model can be utilized to predict wound healing time by tracking changes in the speckle pattern during the healing process. Preliminary results show that the network can identify the various stages of the healing process with an average accuracy of 77%. The proposed model has the potential to assist in the clinical assessment of burn depth using a widely available clinical imaging device.
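The two-step scheme above (initialize on plentiful ex vivo data, then adapt to the scarcer in situ data) can be sketched with a deliberately tiny stand-in model. The sketch below uses a gradient-descent logistic regression on synthetic 2-D features in place of the actual CNN on B-mode images; `train_logreg`, `make_data`, the class separations, and all hyperparameters are invented for illustration only. What it demonstrates is solely the warm-start pattern: step 2 resumes from the weights learned in step 1 rather than from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, b=0.0, lr=0.5, epochs=300):
    """Plain gradient-descent logistic regression.

    Passing in `w`/`b` warm-starts training from existing weights,
    mimicking fine-tuning a pretrained network.
    """
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad = p - y                             # cross-entropy gradient
        w = w - lr * X.T @ grad / len(y)
        b = b - lr * grad.mean()
    return w, b

def make_data(n, shift):
    """Two well-separated Gaussian classes; `shift` mimics domain shift."""
    X = np.vstack([rng.normal(-2 + shift, 1.0, (n, 2)),
                   rng.normal(+2 + shift, 1.0, (n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# Step 1: "pretrain" on the plentiful ex vivo-like data.
X_ex, y_ex = make_data(100, shift=0.0)
w, b = train_logreg(X_ex, y_ex)

# Step 2: warm-start from the pretrained weights and fine-tune briefly
# on a small, slightly shifted in situ-like set.
X_in, y_in = make_data(20, shift=0.5)
w, b = train_logreg(X_in, y_in, w=w, b=b, epochs=50)

pred = (1.0 / (1.0 + np.exp(-(X_in @ w + b))) > 0.5).astype(float)
accuracy = (pred == y_in).mean()
```

In the dissertation's setting, step 1 would be CNN training on ex vivo ultrasound images and step 2 would reuse those learned convolutional filters for the in situ scans; evaluation there uses held-out folds rather than, as in this toy, the fine-tuning set itself.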
August 2022; School of Engineering
Dept. of Mechanical, Aerospace, and Nuclear Engineering
Rensselaer Polytechnic Institute, Troy, NY
Rensselaer Theses and Dissertations Online Collection
Restricted to current Rensselaer faculty, staff, and students in accordance with the
Rensselaer Standard license. Access inquiries may be directed to the Rensselaer Libraries.