ABSTRACT
Lung ultrasound (LUS) has become a widely adopted diagnostic method for several lung diseases. However, the presence of air inside the lung does not allow the anatomical investigation of the organ. Therefore, LUS is mainly based on the interpretation of vertical imaging artifacts, called B-lines. These artifacts correlate with several pathologies, but their genesis is still partly unknown. Within this framework, this study focuses on the factors affecting the artifacts' formation by numerically simulating the ultrasound propagation within the lungs through the k-Wave toolbox. Since the main hypothesis behind the generation of B-lines relies on multiple scattering phenomena occurring once acoustic channels open at the lung surface, the impact of changing alveolar size and spacing is of interest. The tested domain is of size 4 cm × 1.6 cm, the investigated frequencies vary from 1 to 5 MHz, and the explored alveolar diameters and spacing range from 100 to 400 µm and from 20 to 395 µm, respectively. Results show the strong and entangled relation among the wavelength, the domain geometry, and the artifact visualization, allowing for a better understanding of propagation in such a complex medium and opening several possibilities for future studies.
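The interplay between wavelength and alveolar scale that the abstract highlights can be made concrete with a back-of-the-envelope calculation. A minimal sketch, assuming a typical soft-tissue sound speed of 1540 m/s (a common value, not stated in the abstract):

```python
# Compare acoustic wavelengths over the investigated frequency range
# (1-5 MHz) with the explored alveolar diameters (100-400 um).
C_TISSUE = 1540.0  # m/s, assumed soft-tissue speed of sound


def wavelength_um(freq_mhz: float, c: float = C_TISSUE) -> float:
    """Acoustic wavelength in micrometres for a frequency in MHz."""
    return c / (freq_mhz * 1e6) * 1e6


for f in (1.0, 3.0, 5.0):
    # At 5 MHz the wavelength (~308 um) falls within the explored
    # alveolar diameter range, so strong scattering interaction is expected.
    print(f"{f:.0f} MHz -> wavelength {wavelength_um(f):.0f} um")
```

At 1 MHz the wavelength (~1.54 mm) is several times larger than the largest alveolus considered, whereas at 5 MHz it is comparable to the alveolar diameters, which is consistent with the entangled wavelength-geometry relation the study reports.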
Subjects
Lung Diseases, Artifacts, Humans, Lung/diagnostic imaging, Ultrasonography

ABSTRACT
Deep learning (DL) has proved successful in medical imaging and, in the wake of the recent COVID-19 pandemic, some works have started to investigate DL-based solutions for the assisted diagnosis of lung diseases. While existing works focus on CT scans, this paper studies the application of DL techniques to the analysis of lung ultrasonography (LUS) images. Specifically, we present a novel fully annotated dataset of LUS images collected from several Italian hospitals, with labels indicating the degree of disease severity at the frame level, video level, and pixel level (segmentation masks). Leveraging these data, we introduce several deep models that address relevant tasks for the automatic analysis of LUS images. In particular, we present a novel deep network, derived from Spatial Transformer Networks, which simultaneously predicts the disease severity score associated with an input frame and provides localization of pathological artifacts in a weakly supervised way. Furthermore, we introduce a new method based on uninorms for effective frame score aggregation at the video level. Finally, we benchmark state-of-the-art deep models for estimating pixel-level segmentations of COVID-19 imaging biomarkers. Experiments on the proposed dataset demonstrate satisfactory results on all the considered tasks, paving the way for future research on DL for the assisted diagnosis of COVID-19 from LUS data.
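The abstract mentions uninorm-based aggregation of frame scores but does not specify the operator. A minimal sketch using a standard representable uninorm with neutral element `e` (all names and the particular construction are illustrative assumptions, not the paper's implementation):

```python
import math


def uninorm(x: float, y: float, e: float = 0.5) -> float:
    """Representable uninorm with neutral element e (0 < e < 1).

    Scores above e reinforce each other (push the aggregate up),
    scores below e pull it down, and e itself is neutral. This is a
    generic textbook construction, not necessarily the paper's choice.
    """
    # Handle boundary scores explicitly to avoid log(0); by convention
    # the conflicting corner (0, 1) is resolved conjunctively to 0.
    if {x, y} == {0.0, 1.0}:
        return 0.0
    if x in (0.0, 1.0):
        return x
    if y in (0.0, 1.0):
        return y
    g = lambda t: math.log(((1 - e) * t) / (e * (1 - t)))  # additive generator
    s = g(x) + g(y)
    return e * math.exp(s) / ((1 - e) + e * math.exp(s))


def aggregate(frame_scores, e: float = 0.5) -> float:
    """Fold per-frame severity scores into a single video-level score."""
    out = e  # start from the neutral element
    for score in frame_scores:
        out = uninorm(out, score, e)
    return out
```

The appeal of a uninorm here is that it behaves conjunctively below the neutral element and disjunctively above it, so a video with several high-severity frames is scored higher than any single frame, while uninformative frames near `e` leave the aggregate unchanged.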