Results 1 - 20 of 79
1.
Heliyon ; 10(11): e31510, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38841458

ABSTRACT

Background: Acute exacerbation of idiopathic inflammatory myopathies-associated interstitial lung disease (AE-IIM-ILD) is a significant event associated with increased morbidity and mortality. However, few studies have investigated the potential prognostic factors contributing to mortality in patients who experience AE-IIM-ILD. Objectives: The purpose of our study was to comprehensively investigate whether high-resolution computed tomography (HRCT) findings predict 1-year mortality in patients who experience AE-IIM-ILD. Methods: A cohort of 69 patients with AE-IIM-ILD was retrospectively created. The cohort was 79.7% female, with a mean age of 50.7 years. Several HRCT features, including total interstitial lung disease extent (TIDE), distribution patterns, and radiologic ILD patterns, were assessed. A directed acyclic graph (DAG) was used to evaluate the statistical relationships between variables. Cox regression was performed to identify potential prognostic factors associated with mortality. Results: The HRCT findings significantly associated with AE-IIM-ILD mortality included TIDE (HR per 10% increase, 1.64; 95% CI, 1.29-2.1; p < 0.001; model 1: C-index, 0.785), a diffuse distribution pattern (HR, 3.75; 95% CI, 1.5-9.38; p = 0.005; model 2: C-index, 0.737), and a radiologic diffuse alveolar damage (DAD) pattern (HR, 6.37; 95% CI, 0.81-50.21; p = 0.079; model 3: C-index, 0.735). TIDE greater than 58.33%, a diffuse distribution pattern, and a radiologic DAD pattern correlated with poor prognosis. The 90-day, 180-day, and 1-year survival rates of patients who experienced AE-IIM-ILD were 75.3%, 66.3%, and 63.3%, respectively. Conclusion: HRCT findings, including TIDE, distribution pattern, and radiologic pattern, are predictive of 1-year mortality in patients who experience AE-IIM-ILD.
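
As an illustration of the kind of analysis described above (a minimal sketch, not the authors' code), the following fits a Cox model with a TIDE-like predictor scaled per 10% increase and reports the hazard ratio and concordance index. The cohort, column names, and effect sizes are all hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 120

# Hypothetical cohort: ILD extent on HRCT (%), follow-up time (days), death flag.
tide = rng.uniform(10, 90, n)
hazard = np.exp(0.05 * (tide - 50))                  # higher extent -> higher hazard
time_to_death = rng.exponential(400 / hazard)
df = pd.DataFrame({
    "tide_per10": tide / 10.0,                       # HR expressed per 10% increase
    "time_days": np.minimum(time_to_death, 365),     # administrative censoring at 1 year
    "death": (time_to_death <= 365).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="death")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
print("C-index:", round(cph.concordance_index_, 3))
```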

2.
Med Phys ; 51(3): 1997-2006, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37523254

ABSTRACT

PURPOSE: To clarify the causal relationships between factors contributing to the postoperative survival of patients with esophageal cancer. METHODS: A cohort of 195 patients who underwent surgery for esophageal cancer between 2008 and 2021 was used in the study. All patients had preoperative chest computed tomography (CT) and positron emission tomography-CT (PET-CT) scans prior to receiving any treatment. From these images, high-throughput, quantitative radiomic features, tumor features, and various body composition features were automatically extracted. Causal relationships among these image features, patient demographics, and other clinicopathological variables were analyzed and visualized using a novel score-based directed graph approach called "Grouped Greedy Equivalence Search" (GGES) while taking prior knowledge into consideration. After supplementing and screening the causal variables, intervention do-calculus adjustment (IDA) scores were calculated to determine the degree of impact of each variable on survival. Based on the IDA scores, a GGES prediction formula was generated. Ten-fold cross-validation was used to assess the performance of the models. The prediction results were evaluated using the R-squared score (R2). RESULTS: The final causal graphical model was formed by two PET-based image variables, ten body composition variables, four pathological variables, four demographic variables, two tumor variables, and one radiological variable (Percentile 10). Intramuscular fat mass was found to have the greatest impact on overall survival (months). Percentile 10 and overall TNM (T: tumor, N: nodes, M: metastasis) stage were identified as direct causes of overall survival (months). The GGES causal model outperformed GES in regression prediction (R2 = 0.251) (p < 0.05) and was able to avoid unreasonable causal links that may contradict common sense. CONCLUSION: The GGES causal model can provide a reliable and straightforward representation of the intricate causal relationships among the variables that impact the postoperative survival of patients with esophageal cancer.
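
Once a causal graph is available, IDA-style effect estimates such as those mentioned above can be approximated by regressing the outcome on an exposure while adjusting for the exposure's parents in the graph (back-door adjustment). The sketch below is a simplification under that assumption, not the GGES implementation; the variables and coefficients are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical variables: age (a graph parent), intramuscular fat, survival months.
age = rng.normal(65, 8, n)
imf = 0.05 * age + rng.normal(0, 1, n)              # intramuscular fat depends on age
surv = 60 - 0.4 * age - 3.0 * imf + rng.normal(0, 5, n)

def ida_effect(exposure, outcome, parents):
    """Total effect of `exposure` on `outcome`, adjusting for the exposure's
    parents in the assumed causal graph (simple linear back-door adjustment)."""
    X = np.column_stack([exposure] + parents)
    coef = LinearRegression().fit(X, outcome).coef_
    return coef[0]                                   # coefficient of the exposure

print("Estimated effect of intramuscular fat on survival (months):",
      round(ida_effect(imf, surv, [age]), 2))
```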


Subject(s)
Esophageal Neoplasms, Positron Emission Tomography Computed Tomography, Humans, Positron Emission Tomography Computed Tomography/methods, Fluorodeoxyglucose F18, Esophageal Neoplasms/diagnostic imaging, Esophageal Neoplasms/surgery, Positron-Emission Tomography, Tomography, X-Ray Computed, Retrospective Studies
3.
Med Phys ; 51(4): 2806-2816, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37819009

ABSTRACT

BACKGROUND: Chest x-ray is widely utilized for the evaluation of pulmonary conditions due to its technical simplicity, cost-effectiveness, and portability. However, as a two-dimensional (2-D) imaging modality, chest x-ray images depict limited anatomical detail and are challenging to interpret. PURPOSE: To validate the feasibility of reconstructing three-dimensional (3-D) lungs from a single 2-D chest x-ray image via a Vision Transformer (ViT). METHODS: We created a cohort of 2525 paired chest x-ray images (scout images) and computed tomography (CT) scans acquired on different subjects and randomly partitioned them as follows: (1) 1800 - training set, (2) 200 - validation set, and (3) 525 - testing set. The 3-D lung volumes segmented from the chest CT scans were used as the ground truth for supervised learning. We developed a novel model termed XRayWizard that employed ViT blocks to encode the 2-D chest x-ray image. The aim was to capture global information and establish long-range relationships, thereby improving the performance of 3-D reconstruction. Additionally, a pooling layer at the end of each transformer block was introduced to extract feature information. To produce smoother and more realistic 3-D models, a set of patch discriminators was incorporated. We also devised a novel method to incorporate subject demographics as an auxiliary input to further improve the accuracy of 3-D lung reconstruction. The Dice coefficient and mean volume error were used as performance metrics quantifying the agreement between the computerized results and the ground truth. RESULTS: In the absence of subject demographics, the mean Dice coefficient for the generated 3-D lung volumes was 0.738 ± 0.091. When subject demographics were included as an auxiliary input, the mean Dice coefficient significantly improved to 0.769 ± 0.089 (p < 0.001), and the volume prediction error was reduced from 23.5 ± 2.7% to 15.7 ± 2.9%. CONCLUSION: Our experiment demonstrated the feasibility of reconstructing 3-D lung volumes from 2-D chest x-ray images, and the inclusion of subject demographics as an additional input can significantly improve the accuracy of 3-D lung volume reconstruction.
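
The two evaluation metrics used above are straightforward to compute. The sketch below (illustrative only, with toy spheres standing in for lungs) evaluates a predicted binary volume against a ground-truth segmentation with the Dice coefficient and a relative volume error, given an assumed voxel volume.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary 3-D masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def relative_volume_error(pred, truth, voxel_volume_ml):
    """Absolute predicted-volume error relative to the ground-truth volume."""
    v_pred = pred.sum() * voxel_volume_ml
    v_true = truth.sum() * voxel_volume_ml
    return abs(v_pred - v_true) / v_true

# Toy example: two overlapping spheres.
z, y, x = np.ogrid[:64, :64, :64]
truth = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 20 ** 2
pred = (z - 30) ** 2 + (y - 32) ** 2 + (x - 34) ** 2 < 19 ** 2

print("Dice:", round(dice_coefficient(pred, truth), 3))
print("Relative volume error:", round(relative_volume_error(pred, truth, 0.001), 3))
```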


Subject(s)
Lung, Thorax, Humans, X-Rays, Lung/diagnostic imaging, Tomography, X-Ray Computed/methods, Image Processing, Computer-Assisted/methods
4.
Med Phys ; 51(4): 2589-2597, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38159298

ABSTRACT

BACKGROUND: Most of the subjects eligible for annual low-dose computed tomography (LDCT) lung screening will not develop lung cancer for their life. It is important to identify novel biomarkers that can help identify those at risk of developing lung cancer and improve the efficiency of LDCT screening programs. OBJECTIVE: This study aims to investigate the association between the morphology of the pulmonary circulatory system (PCS) and lung cancer development using LDCT scans acquired in the screening setting. METHODS: We analyzed the PLuSS cohort of 3635 lung screening patients from 2002 to 2016. Circulatory structures were segmented and quantified from LDCT scans. The time from the baseline CT scan to lung cancer diagnosis, accounting for death, was used to evaluate the prognostic ability (i.e., hazard ratio (HR)) of these structures independently and with demographic factors. Five-fold cross-validation was used to evaluate prognostic scores. RESULTS: Intrapulmonary vein volume had the strongest association with future lung cancer (HR = 0.63, p < 0.001). The joint model of intrapulmonary vein volume, age, smoking status, and clinical emphysema provided the strongest prognostic ability (HR = 2.20, AUC = 0.74). The addition of circulatory structures improved risk stratification, identifying the top 10% with 28% risk of lung cancer within 15 years. CONCLUSION: PCS characteristics, particularly intrapulmonary vein volume, are important predictors of lung cancer development. These factors significantly improve prognostication based on demographic factors and noncirculatory patient characteristics, particularly in the long term. Approximately 10% of the population can be identified with risk several times greater than average.
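
A hedged sketch of the risk-stratification step described above (not the PLuSS analysis itself): subjects are ranked by a Cox model's partial-hazard score and the top decile is examined for observed incidence. The cohort, column names, and effect sizes are synthetic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500

# Hypothetical screening cohort: a vessel feature plus demographic factors.
df = pd.DataFrame({
    "vein_volume_ml": rng.normal(120, 25, n),
    "age": rng.normal(62, 6, n),
    "pack_years": rng.normal(40, 15, n),
})
# Synthetic outcome: time to lung cancer, censored at 15 years of follow-up.
hazard = np.exp(0.03 * (df["age"] - 62) - 0.01 * (df["vein_volume_ml"] - 120))
event_time = rng.exponential((80 / hazard).to_numpy())
df["years"] = np.minimum(event_time, 15)
df["cancer"] = (event_time <= 15).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="cancer")

# Rank subjects by predicted risk and report incidence in the top decile.
df["risk"] = cph.predict_partial_hazard(df)
top = df.nlargest(n // 10, "risk")
print("Cancer rate, top 10% by risk:", round(top["cancer"].mean(), 3))
print("Cancer rate, overall:        ", round(df["cancer"].mean(), 3))
```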


Subject(s)
Cardiovascular System, Lung Neoplasms, Pulmonary Emphysema, Humans, Lung Neoplasms/diagnostic imaging, Lung/diagnostic imaging, Smoking/epidemiology, Mass Screening, Early Detection of Cancer/methods
5.
Cancers (Basel) ; 15(13)2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37444581

ABSTRACT

The accurate identification of the preoperative factors impacting postoperative cancer recurrence is crucial for optimizing neoadjuvant and adjuvant therapies and guiding follow-up treatment plans. We modeled the causal relationship between radiographical features derived from CT scans and the clinicopathologic factors associated with postoperative lung cancer recurrence and recurrence-free survival. A retrospective cohort of 363 non-small-cell lung cancer (NSCLC) patients who underwent lung resections with a minimum 5-year follow-up was analyzed. Body composition tissues and tumor features were quantified based on preoperative whole-body CT scans (acquired as a component of PET-CT scans) and chest CT scans, respectively. A novel causal graphical model was used to visualize the causal relationship between these factors. Variables were assessed using the intervention do-calculus adjustment (IDA) score. Direct predictors for recurrence-free survival included smoking history, T-stage, height, and intramuscular fat mass. Subcutaneous fat mass, visceral fat volume, and bone mass exerted the greatest influence on the model. For recurrence, the most significant variables were visceral fat volume, subcutaneous fat volume, and bone mass. Pathologic variables contributed to the recurrence model, with bone mass, TNM stage, and weight being the most important. Body composition, particularly adipose tissue distribution, significantly and causally impacted both recurrence and recurrence-free survival through interconnected relationships with other variables.

6.
Med Image Anal ; 89: 102882, 2023 10.
Article in English | MEDLINE | ID: mdl-37482032

ABSTRACT

We present a novel computer algorithm to automatically detect and segment pulmonary embolisms (PEs) on computed tomography pulmonary angiography (CTPA). This algorithm is based on deep learning but does not require manual outlines of the PE regions. Given a CTPA scan, both intra- and extra-pulmonary arteries were first segmented. The arteries were then partitioned into several parts based on size (radius). Adaptive thresholding and constrained morphological operations were used to identify suspicious PE regions within each part. The confidence that a suspicious region was a PE was scored based on its contrast within the arteries. This approach was applied to the publicly available RSNA Pulmonary Embolism CT Dataset (RSNA-PE) to identify three-dimensional (3-D) PE-negative and PE-positive image patches, which were used to train a 3-D Recurrent Residual U-Net (R2-Unet) to automatically segment PE. The feasibility of this computer algorithm was validated on an independent test set consisting of 91 CTPA scans acquired from a different medical institute, where the PE regions were manually located and outlined by a thoracic radiologist (>18 years' experience). An R2-Unet model was also trained and validated on the manual outlines using a 5-fold cross-validation method. The CNN model trained on the high-confidence PE regions showed a Dice coefficient of 0.676±0.168 and a false positive rate of 1.86 per CT scan, while the CNN model trained on the manual outlines demonstrated a Dice coefficient of 0.647±0.192 and a false positive rate of 4.20 per CT scan. The former model performed significantly better than the latter (p<0.01). The promising performance of the developed PE detection and segmentation algorithm suggests the feasibility of training a deep learning network without dedicating significant effort to manual annotation of the PE regions on CTPA scans.
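
A minimal sketch of the candidate-generation idea described above (not the published algorithm): within a given artery mask, low-attenuation voxels are thresholded, cleaned with morphology, labeled, and scored by their contrast against the arterial blood pool. The HU cut-off, minimum size, and toy volume are assumptions.

```python
import numpy as np
from scipy import ndimage

def pe_candidates(ct_hu, artery_mask, low_hu=100.0, min_voxels=30):
    """Label low-attenuation regions inside the arteries and score each by a
    simple contrast measure (artery median HU minus candidate mean HU)."""
    inside = artery_mask.astype(bool)
    suspicious = inside & (ct_hu < low_hu)           # filling defects appear darker
    suspicious = ndimage.binary_opening(suspicious)  # remove isolated noise voxels
    labels, n = ndimage.label(suspicious)
    artery_median = np.median(ct_hu[inside])
    scores = {}
    for lab in range(1, n + 1):
        region = labels == lab
        if region.sum() >= min_voxels:
            scores[lab] = float(artery_median - ct_hu[region].mean())
    return labels, scores

# Toy volume: a contrast-filled artery block containing one dark filling defect.
ct = np.full((40, 40, 40), -900.0)
artery = np.zeros_like(ct, dtype=bool)
artery[15:25, 15:25, :] = True
ct[artery] = 300.0
ct[18:22, 18:22, 10:20] = 40.0                       # simulated embolus
labels, scores = pe_candidates(ct, artery)
print("Candidate contrast scores:", scores)
```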


Subject(s)
Deep Learning, Pulmonary Embolism, Humans, Pulmonary Embolism/diagnostic imaging, Tomography, X-Ray Computed/methods, Pulmonary Artery/diagnostic imaging, Angiography
7.
Ophthalmol Ther ; 12(5): 2479-2491, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37351837

ABSTRACT

INTRODUCTION: To evaluate the ability of artificial intelligence (AI) software to quantify proptosis for identifying patients who need surgical drainage. METHODS: We conducted a retrospective study of 56 subjects with a clinical diagnosis of subperiosteal orbital abscess (SPOA) secondary to sinusitis at a tertiary pediatric hospital from 2002 to 2016. AI computer software was developed to perform 3D visualization and quantitative assessment of proptosis from computed tomography (CT) images acquired at the time of hospital admission. The AI software automatically computed linear and volumetric metrics of proptosis to provide more practice-consistent and informative measures. Two experienced physicians independently measured proptosis using the interzygomatic line method on axial CT images. The AI and physician proptosis assessments were evaluated for association with the eventual treatment procedure as standalone markers and in combination with the standard predictors. RESULTS: To treat the SPOA, 31 of 56 (55%) children underwent surgical intervention, including 18 early surgeries (performed within 24 h of admission), and 25 (45%) were managed medically. The physician measurements of proptosis were strongly correlated (Spearman r = 0.89, 95% CI 0.82-0.93), with 95% limits of agreement of ± 1.8 mm. The AI linear measurement was on average 1.2 mm larger (p = 0.007) and only moderately correlated with the average of the physicians' measurements (r = 0.53, 95% CI 0.31-0.69). Increased proptosis on both the AI volumetric and linear measurements was moderately predictive of surgery (AUCs of 0.79, 95% CI 0.68-0.91, and 0.78, 95% CI 0.65-0.90, respectively), with the average physician measurement being poorly to fairly predictive (AUC of 0.70, 95% CI 0.56-0.84). The AI proptosis measures were also significantly greater in the early than in the late surgery group (p = 0.02 and p = 0.04, respectively). The surgical and medical groups showed a substantial difference in abscess volume (p < 0.001). CONCLUSION: The AI proptosis measures differed significantly from the physician assessments and showed a good overall ability to predict the eventual treatment. The volumetric AI proptosis measurement significantly improved the ability to predict the likelihood of surgery compared to abscess volume alone. Further studies are needed to better characterize and incorporate AI proptosis measurements to assist in clinical decision-making.
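
The interzygomatic line measurement mentioned above reduces to simple geometry once landmarks are available. The sketch below (illustrative only, with made-up coordinates in millimetres) computes the perpendicular distance from the anterior globe margin to the line joining the two zygomatic landmarks on an axial slice.

```python
import numpy as np

def proptosis_mm(zygoma_left, zygoma_right, globe_apex):
    """Perpendicular distance (mm) from the anterior globe margin to the
    interzygomatic line; all points are 2-D axial coordinates in mm."""
    a = np.asarray(zygoma_left, dtype=float)
    b = np.asarray(zygoma_right, dtype=float)
    p = np.asarray(globe_apex, dtype=float)
    line = b - a
    d = p - a
    cross_z = line[0] * d[1] - line[1] * d[0]   # 2-D cross product (z component)
    return abs(cross_z) / np.linalg.norm(line)

# Hypothetical landmark coordinates (x, y) on one axial CT slice.
print(round(proptosis_mm((-40.0, 0.0), (40.0, 0.0), (-15.0, 21.5)), 1), "mm")
```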

8.
J Med Imaging (Bellingham) ; 10(5): 051809, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37361550

ABSTRACT

Purpose: To validate the effectiveness of an approach called batch-balanced focal loss (BBFL) in enhancing convolutional neural network (CNN) classification performance on imbalanced datasets. Materials and Methods: BBFL combines two strategies to tackle class imbalance: (1) batch balancing to equalize model learning across class samples and (2) focal loss to add hard-sample importance to the learning gradient. BBFL was validated on two imbalanced fundus image datasets: a binary retinal nerve fiber layer defect (RNFLD) dataset (n=7,258) and a multiclass glaucoma dataset (n=7,873). BBFL was compared to several imbalanced learning techniques, including random oversampling (ROS), cost-sensitive learning, and thresholding, based on three state-of-the-art CNNs. Accuracy, F1-score, and the area under the receiver operating characteristic curve (AUC) were used as the performance metrics for binary classification. Mean accuracy and mean F1-score were used for multiclass classification. Confusion matrices, t-distributed stochastic neighbor embedding plots, and GradCAM were used for the visual assessment of performance. Results: In binary classification of RNFLD, BBFL with InceptionV3 (93.0% accuracy, 84.7% F1, 0.971 AUC) outperformed ROS (92.6% accuracy, 83.7% F1, 0.964 AUC), cost-sensitive learning (92.5% accuracy, 83.8% F1, 0.962 AUC), and thresholding (91.9% accuracy, 83.0% F1, 0.962 AUC), among other methods. In multiclass classification of glaucoma, BBFL with MobileNetV2 (79.7% accuracy, 69.6% average F1-score) outperformed ROS (76.8% accuracy, 64.7% F1), cost-sensitive learning (78.3% accuracy, 67.8% F1), and random undersampling (76.5% accuracy, 66.5% F1). Conclusion: The BBFL-based learning method can improve the performance of a CNN model in both binary and multiclass disease classification when the data are imbalanced.
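
A hedged PyTorch sketch of the two ingredients named above (not the authors' implementation): a binary focal loss that up-weights hard examples, and a WeightedRandomSampler that draws class-balanced batches. The gamma value, toy dataset, and linear model are placeholders.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

def focal_loss(logits, targets, gamma=2.0):
    """Binary focal loss: cross-entropy scaled by (1 - p_t)^gamma."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                        # probability assigned to the true class
    return ((1.0 - p_t) ** gamma * bce).mean()

# Toy imbalanced dataset: 90 negatives, 10 positives.
x = torch.randn(100, 16)
y = torch.cat([torch.zeros(90), torch.ones(10)])
dataset = TensorDataset(x, y)

# Batch balancing: sample each example with probability inverse to its class frequency.
class_counts = torch.bincount(y.long())
weights = 1.0 / class_counts[y.long()].float()
sampler = WeightedRandomSampler(weights, num_samples=len(y), replacement=True)
loader = DataLoader(dataset, batch_size=16, sampler=sampler)

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for xb, yb in loader:
    optimizer.zero_grad()
    loss = focal_loss(model(xb).squeeze(1), yb)
    loss.backward()
    optimizer.step()
```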

9.
Am J Respir Cell Mol Biol ; 69(2): 126-134, 2023 08.
Article in English | MEDLINE | ID: mdl-37236629

ABSTRACT

Chord length is an indirect measure of alveolar size and a critical endpoint in animal models of chronic obstructive pulmonary disease (COPD). In assessing chord length, the lumens of nonalveolar structures are eliminated from measurement by various methods, including manual masking. However, manual masking is resource intensive and can introduce variability and bias. We created a fully automated deep learning-based tool, called Deep-Masker (available at http://47.93.0.75:8110/login), to mask murine lung images and assess chord length, facilitating mechanistic and therapeutic discovery in COPD. We trained the deep learning algorithm for Deep-Masker using 1,217 images from 137 mice from 12 strains exposed to room air or cigarette smoke for 6 months. We validated this algorithm against manual masking. Deep-Masker demonstrated high accuracy, with an average difference in chord length compared with manual masking of -0.3 ± 1.4% (rs = 0.99) for room-air-exposed mice and 0.7 ± 1.9% (rs = 0.99) for cigarette-smoke-exposed mice. The difference between Deep-Masker and manually masked images for the change in chord length due to cigarette smoke exposure was 6.0 ± 9.2% (rs = 0.95). These values exceed published estimates of interobserver variability for manual masking (rs = 0.65) and the accuracy of published algorithms by a significant margin. We validated the performance of Deep-Masker using an independent set of images. Deep-Masker can serve as an accurate, precise, fully automated method to standardize chord length measurement in murine models of lung disease.
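
For readers unfamiliar with the endpoint, mean chord length can be estimated from a masked binary lung image by measuring horizontal runs of airspace pixels. The sketch below is a simplified illustration, not Deep-Masker; it assumes airspace is 1, tissue is 0, and masked nonalveolar lumens have already been set to 0.

```python
import numpy as np

def mean_chord_length(airspace_mask, pixel_size_um):
    """Mean horizontal chord length (micrometres) in a binary image where
    1 = airspace and 0 = tissue or masked lumen."""
    chords = []
    for row in airspace_mask.astype(int):
        padded = np.concatenate(([0], row, [0]))     # count runs touching the image edge
        edges = np.flatnonzero(np.diff(padded))      # run starts and ends
        starts, ends = edges[::2], edges[1::2]
        chords.extend(ends - starts)
    return float(np.mean(chords)) * pixel_size_um

# Toy image: two bands of airspace (widths 5 and 8 pixels) in every row.
img = np.zeros((4, 20), dtype=int)
img[:, 2:7] = 1
img[:, 10:18] = 1
print(round(mean_chord_length(img, pixel_size_um=2.0), 1), "um")
```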


Subject(s)
Deep Learning, Pulmonary Disease, Chronic Obstructive, Animals, Mice, Lung, Pulmonary Disease, Chronic Obstructive/diagnostic imaging
10.
Saf Sci ; 164, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37206436

ABSTRACT

Objective: To investigate the feasibility of predicting the risk of underground coal mine operations using data from the National Institute for Occupational Safety and Health (NIOSH). Methods: A total of 22,068 data entries from 3,982 unique underground coal mines from 1990 to 2020 were extracted from the NIOSH mine employment database. We defined the risk index of a mine as the ratio between the number of injuries and the size of the mine. Several machine learning models were used to predict the risk of a mine based on its employment demographics (i.e., number of underground employees, number of surface employees, and coal production). Based on these models, each mine was classified into a "low-risk" or "high-risk" category and assigned a fuzzy risk index. Risk probabilities were then computed to generate risk profiles and identify mines with potential hazards. Results: NIOSH mine demographic features yielded a prediction performance with an AUC of 0.724 (95% CI: 0.717-0.731) based on the last 31 years of mine data and an AUC of 0.738 (95% CI: 0.726-0.749) based on the last 16 years of mine data. The fuzzy risk score showed that risk is greatest in mines with an average of 621 underground employees and a production of 4,210,150 tons, and risk is maximized at a ratio of 16,342.18 tons/employee. Conclusion: It is possible to predict the risk of underground coal mines based on their employee demographics, and optimizing the allocation and distribution of employees in coal mines can help minimize the risk of accidents and injuries.
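
A minimal sketch of the modeling setup described above (not the NIOSH analysis itself): mines are labeled high or low risk by an injuries-per-employee index, a classifier is fit on the three demographic features, and its predicted probability serves as a fuzzy risk score evaluated by AUC. All numbers below are synthetic.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Synthetic mine-year records: employment demographics and injury counts.
df = pd.DataFrame({
    "underground_employees": rng.integers(5, 800, n),
    "surface_employees": rng.integers(1, 200, n),
    "coal_production_tons": rng.integers(10_000, 5_000_000, n),
})
df["injuries"] = rng.poisson(0.02 * np.sqrt(df["underground_employees"].to_numpy()))

# Risk index = injuries / mine size; label the upper half as "high risk".
risk_index = df["injuries"] / (df["underground_employees"] + df["surface_employees"])
df["high_risk"] = (risk_index > risk_index.median()).astype(int)

X = df[["underground_employees", "surface_employees", "coal_production_tons"]]
X_tr, X_te, y_tr, y_te = train_test_split(X, df["high_risk"], random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
fuzzy_risk = clf.predict_proba(X_te)[:, 1]     # class probability acts as a fuzzy score
print("AUC:", round(roc_auc_score(y_te, fuzzy_risk), 3))
```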

11.
Lung Cancer ; 179: 107189, 2023 05.
Article in English | MEDLINE | ID: mdl-37058786

ABSTRACT

OBJECTIVES: To evaluate the impact of body composition derived from computed tomography (CT) scans on postoperative lung cancer recurrence. METHODS: We created a retrospective cohort of 363 lung cancer patients who underwent lung resections and had verified recurrence, death, or at least 5 years of follow-up without either event. Five key body tissues and ten tumor features were automatically segmented and quantified based on preoperative whole-body CT scans (acquired as part of a PET-CT scan) and chest CT scans, respectively. Time-to-event analysis accounting for the competing event of death was performed to analyze the impact of body composition, tumor features, clinical information, and pathological features on lung cancer recurrence after surgery. The hazard ratio (HR) of the normalized factors was used to assess individual significance univariately and in the combined models. Five-fold cross-validated time-dependent receiver operating characteristic (ROC) analysis, with an emphasis on the area under the 3-year ROC curve (AUC), was used to characterize the ability to predict lung cancer recurrence. RESULTS: Body tissues that showed standalone potential to predict lung cancer recurrence included visceral adipose tissue (VAT) volume (HR = 0.88, p = 0.047), subcutaneous adipose tissue (SAT) density (HR = 1.14, p = 0.034), inter-muscle adipose tissue (IMAT) volume (HR = 0.83, p = 0.002), muscle density (HR = 1.27, p < 0.001), and total fat volume (HR = 0.89, p = 0.050). The CT-derived muscular and tumor features significantly contributed to a model including clinicopathological factors, resulting in an AUC of 0.78 (95% CI: 0.75-0.83) for predicting recurrence at 3 years. CONCLUSIONS: Body composition features (e.g., muscle density, or muscle and inter-muscle adipose tissue volumes) can improve the prediction of recurrence when combined with clinicopathological factors.


Subject(s)
Lung Neoplasms, Humans, Lung Neoplasms/pathology, Retrospective Studies, Positron Emission Tomography Computed Tomography, Neoplasm Recurrence, Local, Lung/pathology, Body Composition/physiology, Tomography, X-Ray Computed/methods
12.
J Clin Med ; 12(6)2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36983109

ABSTRACT

BACKGROUND: Body composition can be accurately quantified based on computed tomography (CT) and typically reflects an individual's overall health status. However, there is a dearth of research examining the relationship between body composition and survival following esophagectomy. METHODS: We created a cohort of 183 patients who underwent esophagectomy for esophageal cancer without neoadjuvant therapy. The cohort included preoperative PET-CT scans, along with pathologic and clinical data, which were collected prospectively. Radiomic, tumor, PET, and body composition features were automatically extracted from the images. Cox regression models were utilized to identify variables associated with survival. Logistic regression and machine learning models were developed to predict one-, three-, and five-year survival rates. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). To test the statistical significance of the impact of body composition on survival, body composition features were excluded from the best-performing models and the DeLong test was applied. RESULTS: The one-year survival model contained 10 variables, including three body composition variables (bone mass, bone density, and visceral adipose tissue (VAT) density), and demonstrated an AUC of 0.817 (95% CI: 0.738-0.897). The three-year survival model incorporated 14 variables, including three body composition variables (intermuscular adipose tissue (IMAT) volume, IMAT mass, and bone mass), with an AUC of 0.693 (95% CI: 0.594-0.792). The five-year survival model included 10 variables, of which two were body composition variables (IMAT volume and VAT mass), with an AUC of 0.861 (95% CI: 0.783-0.938). The one- and five-year survival models performed significantly worse when body composition features were not incorporated. CONCLUSIONS: Body composition features derived from preoperative CT scans should be considered when predicting survival following esophagectomy.
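
As a rough illustration of the comparison described above (not the authors' models, and omitting the DeLong test itself), the sketch below fits logistic regressions for a binary survival endpoint with and without hypothetical body-composition columns and compares their AUCs. All column names and coefficients are invented.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 400

# Synthetic cohort: clinical/tumor variables plus body-composition variables.
df = pd.DataFrame({
    "t_stage": rng.integers(1, 5, n),
    "age": rng.normal(65, 9, n),
    "bone_mass": rng.normal(2.5, 0.4, n),
    "vat_density": rng.normal(-90, 10, n),
})
logit = -2 + 0.6 * df["t_stage"] + 0.02 * df["age"] - 0.8 * df["bone_mass"]
df["dead_1yr"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

clinical = ["t_stage", "age"]
body = ["bone_mass", "vat_density"]
X_tr, X_te, y_tr, y_te = train_test_split(df[clinical + body], df["dead_1yr"],
                                          random_state=0)

for cols, name in [(clinical, "clinical only"), (clinical + body, "with body comp")]:
    model = LogisticRegression(max_iter=1000).fit(X_tr[cols], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[cols])[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```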

13.
Med Phys ; 50(1): 449-464, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36184848

ABSTRACT

OBJECTIVE: To develop and validate a novel deep learning architecture to classify retinal vein occlusion (RVO) on color fundus photographs (CFPs) and reveal the image features contributing to the classification. METHODS: The neural understanding network (NUN) is formed by two components: (1) convolutional neural network (CNN)-based feature extraction and (2) graph neural network (GNN)-based feature understanding. The CNN-based image features were transformed into a graph representation to encode and visualize long-range feature interactions and to identify the image regions that contributed most to the classification decision. A total of 7062 CFPs were classified into three categories: (1) no vein occlusion ("normal"), (2) central RVO, and (3) branch RVO. The area under the receiver operating characteristic (ROC) curve (AUC) was used as the metric to assess the performance of the trained classification models. RESULTS: The AUC, accuracy, sensitivity, and specificity for NUN to classify CFPs as normal, central occlusion, or branch occlusion were 0.975 (± 0.003), 0.911 (± 0.007), 0.983 (± 0.010), and 0.803 (± 0.005), respectively, outperforming the available classical CNN models. CONCLUSION: The NUN architecture can provide better classification performance and a more straightforward visualization of the results compared to CNNs.
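
The following PyTorch sketch is a toy stand-in for the CNN-to-graph idea described above, not the NUN architecture itself: each spatial position of a CNN feature map becomes a graph node, a similarity-based adjacency is built, and one hand-rolled graph convolution precedes classification. The layer sizes and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphUnderstandingHead(nn.Module):
    """Treat feature-map positions as graph nodes, mix them with a similarity
    adjacency (one graph convolution), then pool and classify."""
    def __init__(self, in_channels, n_classes):
        super().__init__()
        self.proj = nn.Linear(in_channels, 64)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, feature_map):                     # (B, C, H, W)
        b, c, h, w = feature_map.shape
        nodes = feature_map.flatten(2).transpose(1, 2)  # (B, N, C), N = H*W
        adjacency = torch.softmax(nodes @ nodes.transpose(1, 2) / c ** 0.5, dim=-1)
        nodes = F.relu(self.proj(adjacency @ nodes))    # one graph convolution step
        return self.classifier(nodes.mean(dim=1))       # pool nodes -> class logits

# Example: CNN backbone features for 3 classes (normal / central RVO / branch RVO).
features = torch.randn(2, 256, 14, 14)
logits = GraphUnderstandingHead(256, 3)(features)
print(logits.shape)                                     # torch.Size([2, 3])
```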


Subject(s)
Nuns, Retinal Vein Occlusion, Humans, Retinal Vein Occlusion/diagnostic imaging, Neural Networks, Computer, Fundus Oculi, Diagnostic Techniques, Ophthalmological
14.
Article in English | MEDLINE | ID: mdl-36127930

ABSTRACT

Accurate identification of incomplete blinking from eye videography is critical for the early detection of eye disorders or diseases (e.g., dry eye). In this study, we developed a texture-aware neural network based on the classical U-Net (termed TAU-Net) to accurately extract palpebral fissures from each frame of eye videography for assessing incomplete blinking. We introduced three different convolutional blocks based on element-wise subtraction operations to highlight subtle textures associated with the target objects and integrated these blocks with the U-Net to improve the segmentation of palpebral fissures. Quantitative experiments on 1396 frame images showed that the developed network achieved an average Dice index of 0.9587 and a Hausdorff distance (HD) of 4.9462 pixels when applied to segment palpebral fissures. It outperformed the U-Net and several of its variants, demonstrating promising performance in identifying incomplete blinking from eye videography.
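
One plausible reading of the element-wise-subtraction idea above (an assumption on our part, not the TAU-Net block definition) is a module that subtracts a locally smoothed copy of the feature map from a convolved copy, emphasizing fine texture. The PyTorch sketch below implements that reading.

```python
import torch
import torch.nn as nn

class SubtractionTextureBlock(nn.Module):
    """Highlights fine texture by subtracting an average-pooled (smoothed)
    projection of the input from a convolved version of it."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.smooth = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        self.project = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        texture = self.conv(x) - self.project(self.smooth(x))  # element-wise subtraction
        return self.act(texture)

block = SubtractionTextureBlock(32, 64)
print(block(torch.randn(1, 32, 128, 128)).shape)   # torch.Size([1, 64, 128, 128])
```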

15.
Pattern Recognit ; 128, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35528144

ABSTRACT

Objective: To develop and validate a novel convolutional neural network (CNN), termed "Super U-Net," for medical image segmentation. Methods: Super U-Net integrates a dynamic receptive field module and a fusion upsampling module into the classical U-Net architecture. The model was developed and tested to segment retinal vessels, gastrointestinal (GI) polyps, and skin lesions on several image types (i.e., fundus, endoscopic, and dermoscopic images). We also trained and tested the traditional U-Net architecture, seven U-Net variants, and two non-U-Net segmentation architectures. K-fold cross-validation was used to evaluate performance. The performance metrics included the Dice similarity coefficient (DSC), accuracy, positive predictive value (PPV), and sensitivity. Results: Super U-Net achieved average DSCs of 0.808±0.021, 0.752±0.019, 0.804±0.239, and 0.877±0.135 for segmenting retinal vessels, pediatric retinal vessels, GI polyps, and skin lesions, respectively. Super U-Net consistently outperformed the U-Net, the seven U-Net variants, and the two non-U-Net segmentation architectures (p < 0.05). Conclusion: Dynamic receptive fields and fusion upsampling can significantly improve image segmentation performance.

16.
Chronic Obstr Pulm Dis ; 9(3): 325-335, 2022 Jul 29.
Article in English | MEDLINE | ID: mdl-35550241

ABSTRACT

Introduction: Factors beyond cigarette smoke likely contribute to chronic obstructive pulmonary disease (COPD) pathogenesis. Prior studies demonstrate fungal colonization of the respiratory tract and increased epithelial barrier permeability in COPD. We sought to determine whether 1,3-beta-d-glucan (BDG), a polysaccharide component of the fungal cell wall, is detectable in the plasma of individuals with COPD and associates with clinical outcomes and matrix degradation proteins. Methods: BDG was measured in the plasma of current and former smokers with COPD. High BDG was defined as a value greater than the 95th percentile of BDG in smokers without airflow obstruction. Pulmonary function, emphysema, and symptoms were compared between COPD participants with high versus low BDG. The relationship between plasma BDG, matrix metalloproteinases (MMP) 1, 7, and 9, and tissue inhibitor of matrix metalloproteinases (TIMP) 1, 2, and 4 was assessed adjusting for age, sex, and smoking status. Results: COPD participants with high BDG plasma levels (19.8%) had lower forced expiratory volume in 1 second to forced vital capacity ratios (median 31.9 versus 39.3, p=0.025), higher St George's Respiratory Questionnaire symptom scores (median 63.6 versus 57.4, p=0.016), and greater prevalence of sputum production (69.4% versus 52.0%) and exacerbations (69.4% versus 48%) compared to COPD participants with low BDG. BDG levels directly correlated with MMP1 (r=0.27, p<0.001) and TIMP1 (r=0.16, p=0.022) in unadjusted and adjusted analyses. Conclusions: Elevated plasma BDG levels correlate with worse lung function, greater respiratory morbidity, and circulating markers of matrix degradation in COPD. These findings suggest that targeting dysbiosis or enhancing epithelial barrier integrity may have disease-modifying effects in COPD.
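
The adjusted correlation described above can be approximated with a partial-correlation approach: regress both variables on the covariates and correlate the residuals. The sketch below is illustrative only, not the authors' exact adjustment, and every column name and coefficient is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 150

# Synthetic cohort: plasma BDG, MMP1, and covariates (age, sex, current smoking).
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),
    "male": rng.integers(0, 2, n),
    "current_smoker": rng.integers(0, 2, n),
})
df["bdg"] = 30 + 0.2 * df["age"] + rng.normal(0, 10, n)
df["mmp1"] = 5 + 0.1 * df["bdg"] + rng.normal(0, 3, n)

covariates = sm.add_constant(df[["age", "male", "current_smoker"]])

def residuals(y):
    """Residuals of y after removing the linear effect of the covariates."""
    return sm.OLS(y, covariates).fit().resid

rho, p = spearmanr(residuals(df["bdg"]), residuals(df["mmp1"]))
print(f"adjusted Spearman r = {rho:.2f}, p = {p:.3f}")
```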

17.
Med Image Anal ; 77: 102367, 2022 04.
Article in English | MEDLINE | ID: mdl-35066393

ABSTRACT

We present a novel integrative computerized solution to automatically identify and differentiate pulmonary arteries and veins depicted on chest computed tomography (CT) without iodinated contrast agents. We first identified the central extrapulmonary arteries and veins using a convolutional neural network (CNN) model. Then, a computational differential geometry method was used to automatically identify the tubular-like structures in the lungs with high densities, which we believe are the intrapulmonary vessels. Beginning with the extrapulmonary arteries and veins, we progressively traced the intrapulmonary vessels by following their skeletons and differentiated them into arteries and veins. Instead of manually labeling the numerous arteries and veins in the lungs for machine learning, this integrative strategy limits the manual effort only to the large extrapulmonary vessels. We used a dataset consisting of 120 chest CT scans acquired on different subjects using various protocols to develop, train, and test the algorithms. Our experiments on an independent test set (n = 15) showed promising performance. The computer algorithm achieved a sensitivity of ∼98% in labeling the pulmonary artery and vein branches when compared with a human expert's results, demonstrating the feasibility of our computerized solution in pulmonary artery/vein labeling.
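
A simplified sketch of the label-propagation idea described above (not the published method): starting from seed voxels taken from the already-labeled extrapulmonary arteries and veins, labels are spread along a binary 3-D vessel skeleton by breadth-first search with 26-connectivity. The skeleton, seed coordinates, and label convention (1 = artery, 2 = vein) are assumptions for illustration.

```python
from collections import deque
import numpy as np

def propagate_labels(skeleton, seeds):
    """Breadth-first propagation of artery/vein labels from seed voxels along a
    binary 3-D skeleton using 26-connectivity."""
    labels = np.zeros_like(skeleton, dtype=np.uint8)
    queue = deque()
    for (z, y, x), lab in seeds.items():
        labels[z, y, x] = lab
        queue.append((z, y, x))
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < skeleton.shape[0] and 0 <= ny < skeleton.shape[1]
                    and 0 <= nx < skeleton.shape[2]
                    and skeleton[nz, ny, nx] and labels[nz, ny, nx] == 0):
                labels[nz, ny, nx] = labels[z, y, x]
                queue.append((nz, ny, nx))
    return labels

# Toy skeleton: two disconnected branches seeded as artery (1) and vein (2).
skel = np.zeros((3, 3, 12), dtype=bool)
skel[1, 1, :5] = True
skel[1, 2, 7:] = True
labels = propagate_labels(skel, {(1, 1, 0): 1, (1, 2, 11): 2})
print("artery voxels:", int((labels == 1).sum()), "| vein voxels:", int((labels == 2).sum()))
```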


Subject(s)
Pulmonary Artery, Tomography, X-Ray Computed, Algorithms, Humans, Neural Networks, Computer, Pulmonary Artery/diagnostic imaging, Thorax, Tomography, X-Ray Computed/methods
18.
J Thorac Cardiovasc Surg ; 163(4): 1496-1505.e10, 2022 Apr.
Article in English | MEDLINE | ID: mdl-33726909

ABSTRACT

OBJECTIVE: The study objective was to investigate whether machine learning algorithms can predict whether a lung nodule is benign, adenocarcinoma, or a preinvasive subtype from computed tomography images alone. METHODS: A dataset of chest computed tomography scans containing lung nodules, together with their pathologic diagnoses, was collected from several sources. The dataset was split randomly into training (70%), internal validation (15%), and independent test (15%) sets at the patient level. Two machine learning algorithms were developed, trained, and validated. The first algorithm used a support vector machine model, and the second used deep learning technology: a convolutional neural network. Receiver operating characteristic analysis was used to evaluate the performance of the classification on the test dataset. RESULTS: The support vector machine/convolutional neural network-based models classified nodules into 6 categories, resulting in areas under the curve of 0.59/0.65 for atypical adenomatous hyperplasia versus adenocarcinoma in situ, 0.87/0.86 for minimally invasive adenocarcinoma versus invasive adenocarcinoma, 0.76/0.72 for atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma, 0.89/0.87 for atypical adenomatous hyperplasia + adenocarcinoma in situ versus minimally invasive adenocarcinoma + invasive adenocarcinoma, and 0.93/0.92 for atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma. Classifying benign versus atypical adenomatous hyperplasia + adenocarcinoma in situ + minimally invasive adenocarcinoma versus invasive adenocarcinoma resulted in micro-average areas under the curve of 0.93/0.94 for the support vector machine/convolutional neural network models, respectively. The convolutional neural network-based methods had higher sensitivities than the support vector machine-based methods but lower specificities and accuracies. CONCLUSIONS: The machine learning algorithms demonstrated reasonable performance in differentiating benign versus preinvasive versus invasive adenocarcinoma from computed tomography images alone; however, the prediction accuracy varied across subtypes. This holds the potential for improved diagnostic capabilities with less-invasive means.
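
The micro-averaged area under the curve reported above can be computed by binarizing the multiclass labels and pooling all class-versus-rest decisions. The sketch below (illustrative only, with made-up scores for three nodule categories) shows that calculation with scikit-learn.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

# Hypothetical true labels (0 = benign, 1 = preinvasive, 2 = invasive) and
# per-class predicted probabilities for six nodules.
y_true = np.array([0, 2, 1, 2, 0, 1])
y_score = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.20, 0.70],
    [0.30, 0.50, 0.20],
    [0.05, 0.35, 0.60],
    [0.60, 0.30, 0.10],
    [0.20, 0.55, 0.25],
])

y_bin = label_binarize(y_true, classes=[0, 1, 2])   # one column per class
micro_auc = roc_auc_score(y_bin, y_score, average="micro")
print("micro-average AUC:", round(micro_auc, 3))
```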


Subject(s)
Adenocarcinoma/diagnostic imaging, Diagnosis, Computer-Assisted, Lung Neoplasms/diagnostic imaging, Machine Learning, Adenoma/diagnostic imaging, Algorithms, Diagnosis, Differential, Female, Humans, Male, Retrospective Studies, Tomography, X-Ray Computed
19.
Front Med (Lausanne) ; 8: 761804, 2021.
Article in English | MEDLINE | ID: mdl-34722596

ABSTRACT

Objective: To investigate the associations between intrapulmonary vascular volume (IPVV) depicted on inspiratory and expiratory CT scans and disease severity in COPD patients, and to determine which CT parameters can be used to predict IPVV. Methods: We retrospectively collected 89 CT examinations acquired on COPD patients from an available database. All subjects underwent both inspiratory and expiratory CT scans. We quantified the IPVV, airway wall thickness (WT), the percentage of the airway wall area (WA%), and the extent of emphysema (LAA%-950) using an available pulmonary image analysis tool. The underlying relationship between IPVV and COPD severity, defined as mild COPD (GOLD stages I and II) and severe COPD (GOLD stages III and IV), was analyzed using the Student's t-test (or Mann-Whitney U-test). The correlations of IPVV with pulmonary function tests (PFTs), LAA%-950, and airway parameters for the third- to sixth-generation bronchi were analyzed using Pearson or Spearman's rank correlation coefficients and multiple stepwise regression. Results: For the inspiratory examinations, the correlation coefficients between IPVV and PFT measures were -0.215 to -0.292 (p < 0.05), the correlation coefficients between IPVV and WT3-6 were 0.233 to 0.557 (p < 0.05), and the correlation coefficients between IPVV and LAA%-950 were 0.238 to 0.409 (p < 0.05). For the expiratory examinations, the correlation coefficients between IPVV and PFT measures were -0.238 to -0.360 (p < 0.05), the correlation coefficients between IPVV and WT3-6 were 0.260 to 0.566 (p < 0.05), and the correlation coefficients between IPVV and LAA%-950 were 0.241 to 0.362 (p < 0.05). The multiple stepwise regression analyses demonstrated that WT was independently associated with IPVV (p < 0.05). Conclusion: Expiratory CT scans can provide a more accurate assessment of COPD than inspiratory CT scans, and airway wall thickness may be an independent predictor of pulmonary vascular alteration in patients with COPD.

20.
Int J Med Inform ; 155: 104583, 2021 11.
Article in English | MEDLINE | ID: mdl-34560490

ABSTRACT

BACKGROUND: This study aims to investigate how infectious keratitis depicted on slit-lamp and smartphone photographs can be reliably assessed using deep learning. MATERIALS AND METHODS: We retrospectively collected a dataset consisting of 5,673 slit-lamp photographs and 400 smartphone photographs acquired on different subjects. Based on multiple clinical tests (e.g., corneal scraping), these photographs were diagnosed and classified into four categories: normal (i.e., no keratitis), bacterial keratitis (BK), fungal keratitis (FK), and herpes simplex virus stromal keratitis (HSK). We preprocessed the slit-lamp images into two separate subgroups: (1) global images and (2) regional images. The cases in each group were randomly split into training, internal validation, and independent testing sets. We then implemented a deep learning network based on InceptionV3 by fine-tuning its architecture and used the developed network to classify the slit-lamp images. Additionally, we investigated the performance of the InceptionV3 model in classifying infectious keratitis depicted on smartphone images. In particular, we examined whether the computer model trained on the global images outperformed the one trained on the regional images. The quadratic weighted kappa (QWK) and receiver operating characteristic (ROC) analysis were used to assess the performance of the developed models. RESULTS: Our experiments on the independent testing sets showed that the developed models achieved QWKs of 0.9130 (95% CI: 88.99-93.61%), 0.8872 (95% CI: 86.13-91.31%), and 0.5379 (95% CI: 48.89-58.69%) for the global images, the regional images, and the smartphone images, respectively. The areas under the ROC curves (AUCs) were 0.9588 (95% CI: 94.28-97.48%), 0.9425 (95% CI: 92.35-96.15%), and 0.8529 (95% CI: 81.79-88.79%) for the same test sets, respectively. CONCLUSION: The deep learning solution demonstrated very promising performance in assessing infectious keratitis depicted on slit-lamp photographs and on images acquired by smartphones. In particular, the model trained on the global images outperformed the one trained on the regional images.
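
Two pieces of the pipeline above are easy to illustrate with a generic sketch (this is not the authors' code, and it assumes torchvision 0.13 or newer): the quadratic weighted kappa via scikit-learn, and preparing a torchvision InceptionV3 for fine-tuning by replacing its classification heads with four-class outputs.

```python
import torch.nn as nn
from sklearn.metrics import cohen_kappa_score
from torchvision import models

# Quadratic weighted kappa on hypothetical 4-class predictions
# (0 = normal, 1 = BK, 2 = FK, 3 = HSK).
y_true = [0, 1, 2, 3, 1, 2, 0, 3]
y_pred = [0, 1, 2, 2, 1, 3, 0, 3]
print("QWK:", round(cohen_kappa_score(y_true, y_pred, weights="quadratic"), 3))

# Fine-tuning setup: load pretrained InceptionV3 and swap both classifier heads.
model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)                        # main head
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 4)    # auxiliary head
```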


Subject(s)
Deep Learning, Keratitis, Feasibility Studies, Humans, Keratitis/diagnosis, Retrospective Studies, Smartphone