To learn how to submit and manage a document deposit, consult the "Quick Guide – Document Deposit" on the Library's website. For any questions, write to corpus@ulaval.ca.
 

Person:
Hains, Alexandre

Last name

Hains

First name

Alexandre

Affiliation

Université Laval, Département de génie électrique et de génie informatique

Canadiana identifier

ncf13701465

Search results

Showing items 1 - 1 of 1
  • Publication (open access)
    Tracking and predicting COVID-19 radiological trajectory on chest X-rays using deep learning
    (Springer Nature, 2022-04-04) Potvin, Olivier; Le-Khac, Huy; Lemieux, Simon; Chartrand‑Lefebvre, Carl; Hains, Alexandre; Dieumegarde, Louis; Forghani, Reza; Tang, An; Lévesque, Marie-Hélène; Duchesne, Simon; Hornstein, David; Archambault, Patrick; Gagné, Christian; Gourdeau, Daniel; Duchesne, Nathalie; Martin, Diego; Vecchio, Fabrizio; Yang, Issac
    Radiological findings on chest X-ray (CXR) have been shown to be essential for the proper management of COVID-19 patients, as the maximum severity over the course of the disease is closely linked to the outcome. As such, evaluation of future severity from the current CXR would be highly desirable. We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from an open-source dataset (COVID-19 image data collection) and from a multi-institutional local ICU dataset. The data were grouped into pairs of sequential CXRs, and each pair was categorized as 'Worse', 'Stable', or 'Improved' on the basis of radiological evolution ascertained from images and reports. Classical machine-learning algorithms were trained on the deep-learning-extracted features to perform immediate severity evaluation and prediction of the future radiological trajectory. Receiver operating characteristic (ROC) analyses and Mann-Whitney tests were performed. Deep learning predictions between the 'Worse' and 'Improved' outcome categories and for severity stratification were significantly different for three radiological signs and one diagnosis ('Consolidation', 'Lung Lesion', 'Pleural effusion' and 'Pneumonia'; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between 'Worse' and 'Improved' cases with a 0.81 AUC (95% CI 0.74-0.83) in the open-access dataset and with a 0.66 AUC (95% CI 0.64-0.67) in the ICU dataset. Features extracted from the CXR could predict disease severity with 52.3% accuracy in a 4-way classification. Severity evaluation trained on the COVID-19 image data collection generalized well out of distribution when tested on the local dataset, with 81.6% of intubated ICU patients classified as critically ill, and the predicted severity correlated with the clinical outcome with a 0.639 AUC. CXR deep learning features show promise for classifying disease severity and trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
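    The abstract's second stage (classical classifiers trained on deep-learning-extracted features, evaluated by ROC AUC) can be sketched as follows. This is a minimal illustration, not the authors' code: the 64-dimensional feature vectors here are synthetic stand-ins generated with a deliberate class separation, whereas in the study they came from a CheXnet-trained network applied to CXR pairs, and the sample sizes and dimensions are hypothetical.

    ```python
    # Sketch: binary 'Worse' vs 'Improved' trajectory classification from
    # precomputed image features, scored with ROC AUC (as in the abstract).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    n, d = 400, 64                   # hypothetical: 400 CXR pairs, 64-dim features
    y = rng.integers(0, 2, size=n)   # 0 = 'Improved', 1 = 'Worse'
    # Synthetic features: shift the 'Worse' class mean so classes are separable.
    X = rng.normal(size=(n, d)) + 0.5 * y[:, None]

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # A classical classifier on the extracted features, as in the paper's pipeline.
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"ROC AUC: {auc:.2f}")
    ```

    Any feature extractor producing a fixed-length vector per image pair could feed this stage; the paper's choice of classical models on frozen deep features keeps the trainable part small, which matters when labeled COVID-19 CXR pairs are scarce.
    
    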