ABSTRACT
Ophthalmic diseases such as central serous chorioretinopathy (CSC) significantly impair the vision of millions of people globally. Precise segmentation of the choroid and macular edema is critical for diagnosing and treating these conditions. However, existing 3D medical image segmentation methods often fall short due to the heterogeneous nature and blurry features of these conditions, compounded by medical image clarity issues and noise interference arising from equipment and environmental limitations. To address these challenges, we propose the Spectrum Analysis Synergy Axial-Spatial Network (SASAN), an approach that innovatively integrates spectrum features using the Fast Fourier Transform (FFT). SASAN incorporates two key modules: the Frequency Integrated Neural Enhancer (FINE), which mitigates noise interference, and the Axial-Spatial Elementum Multiplier (ASEM), which enhances feature extraction. Additionally, we introduce the Self-Adaptive Multi-Aspect Loss (LSM), which balances image regions, distribution, and boundaries, adaptively updating weights during training. We compiled and meticulously annotated the Choroid and Macular Edema OCT Mega Dataset (CMED-18k), currently the world's largest dataset of its kind. Comparative analysis against 13 baselines shows our method surpasses these benchmarks, achieving the highest Dice scores and lowest HD95 on the CMED and OIMHS datasets. Our code is publicly available at https://github.com/IMOP-lab/SASAN-Pytorch.
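The abstract does not detail how FINE integrates spectrum features, but the core idea it names — moving to the frequency domain via the FFT and suppressing noise there — can be sketched in a few lines. This is a minimal illustration only; the function name `fft_lowpass` and the keep-ratio parameter are assumptions, not the paper's actual module:

```python
import numpy as np

def fft_lowpass(image: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Suppress high-frequency noise by zeroing spectrum components
    outside a centred square that keeps `keep_ratio` of each axis."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    ch, cw = h // 2, w // 2
    kh, kw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask = np.zeros_like(spectrum)
    mask[ch - kh:ch + kh, cw - kw:cw + kw] = 1.0
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return filtered.real

# A flat image plus highest-frequency checkerboard noise: low-pass
# filtering removes the noise while preserving the mean intensity.
clean = np.full((64, 64), 0.5)
noise = 0.2 * (-1.0) ** (np.arange(64)[:, None] + np.arange(64)[None, :])
denoised = fft_lowpass(clean + noise)
print(round(float(denoised.mean()), 3))
```

The checkerboard lies at the Nyquist frequency, so it falls entirely outside the kept low-frequency square and is removed, while the DC component (the image mean) passes through unchanged.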
Subjects
Algorithms; Optical Coherence Tomography; Humans; Optical Coherence Tomography/methods; Macular Edema/diagnostic imaging; Choroid/diagnostic imaging; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Central Serous Chorioretinopathy/diagnostic imaging; Fourier Analysis; Databases, Factual; Image Interpretation, Computer-Assisted/methods
ABSTRACT
Deep learning methods are now frequently used to segment histopathology images with high-quality annotations. Compared with well-annotated data, coarse, scribble-like labelling is more cost-effective and easier to obtain in clinical practice. Coarse annotations provide limited supervision, so employing them directly for segmentation network training remains challenging. We present a sketch-supervised method, called DCTGN-CAM, based on a dual CNN-Transformer network and a modified globally normalised class activation map. By modelling global and local tumour features simultaneously, the dual CNN-Transformer network produces accurate patch-based tumour classification probabilities by training only on lightly annotated data. With the globally normalised class activation map, more descriptive gradient-based representations of the histopathology images can be obtained, and tumour segmentation can be inferred with high accuracy. Additionally, we collect a private skin cancer dataset named BSS, which contains fine and coarse annotations for three types of cancer. To facilitate reproducible performance comparison, experts were also invited to provide coarse annotations on the public liver cancer dataset PAIP2019. On the BSS dataset, our DCTGN-CAM segmentation outperforms state-of-the-art methods, achieving 76.68% IoU and 86.69% Dice scores on the sketch-based tumour segmentation task. On the PAIP2019 dataset, our method achieves a Dice gain of 8.37% compared with the U-Net baseline.
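The abstract does not specify the exact modification behind its normalised class activation map, but the underlying idea — normalising patch-level CAMs by one shared maximum so activations stay comparable across patches, rather than rescaling each patch independently — can be sketched as follows. Array shapes and the weighting scheme are illustrative assumptions:

```python
import numpy as np

def globally_normalised_cams(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Class activation maps for a batch of patches, normalised by a
    single global maximum shared across all patches.

    activations: (n_patches, channels, h, w) feature maps
    weights:     (channels,) class weights from the classification head
    """
    cams = np.einsum("nchw,c->nhw", activations, weights)
    cams = np.maximum(cams, 0.0)        # keep only positive evidence
    global_max = cams.max() + 1e-8      # ONE maximum for all patches
    return cams / global_max

rng = np.random.default_rng(1)
acts = rng.random((4, 8, 16, 16))      # 4 patches, 8 feature channels
w = rng.standard_normal(8)
cams = globally_normalised_cams(acts, w)
print(cams.shape, float(cams.max()))
```

With per-patch normalisation, a weakly activated background patch would be stretched to full intensity; the shared maximum avoids that, which matters when patch CAMs are stitched back into a whole-slide segmentation.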
Subjects
Liver Neoplasms; Skin Neoplasms; Humans; Electric Power Supplies; Probability; Image Processing, Computer-Assisted
ABSTRACT
Background: We aimed to propose a deep learning-based approach to automatically measure eyelid morphology in patients with thyroid-associated ophthalmopathy (TAO). Methods: This prospective study consecutively included 74 eyes of patients with TAO and 74 eyes of healthy volunteers visiting the ophthalmology department of a tertiary hospital. Patients diagnosed with TAO and age- and gender-matched healthy volunteers met the eligibility criteria for recruitment. Facial images were taken under the same lighting conditions. Comprehensive eyelid morphological parameters, such as palpebral fissure (PF) length, margin reflex distance (MRD), eyelid retraction distance, eyelid length, scleral area, and mid-pupil lid distance (MPLD), were automatically calculated using our deep learning-based analysis system. MRD1 and MRD2 were manually measured. Bland-Altman plots and intraclass correlation coefficients (ICCs) were used to assess the agreement between automatic and manual measurements of MRDs. The asymmetry of the eyelid contour was analyzed using the temporal:nasal ratio of the MPLD. All eyelid features were compared between TAO eyes and control eyes using the independent samples t-test. Results: A strong agreement between automatic and manual measurements was found. Biases of MRDs in TAO eyes and control eyes ranged from -0.01 mm [95% limits of agreement (LoA): -0.64 to 0.63 mm] to 0.09 mm (LoA: -0.46 to 0.63 mm). ICCs ranged from 0.932 to 0.980 (P<0.001). Eyelid features differed significantly between TAO eyes and control eyes, including MRD1 (4.82±1.59 vs. 2.99±0.81 mm; P<0.001), MRD2 (5.89±1.16 vs. 5.47±0.73 mm; P=0.009), upper eyelid length (UEL) (27.73±4.49 vs. 25.42±4.35 mm; P=0.002), lower eyelid length (LEL) (31.51±4.59 vs. 26.34±4.72 mm; P<0.001), and total scleral area (SATOTAL) (96.14±34.38 vs. 56.91±14.97 mm2; P<0.001). The MPLDs at all angles showed significant differences between the 2 groups of eyes (P=0.008 at temporal 180°; P<0.001 at other angles).
The greatest temporal-nasal asymmetry appeared at 75° apart from the midline in TAO eyes. Conclusions: Our proposed system allowed automatic, comprehensive, and objective measurement of eyelid morphology by only using facial images, which has potential application prospects in TAO. Future work with a large sample of patients that contains different TAO subsets is warranted.
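The Bland-Altman agreement statistics reported above — a bias (mean difference) with 95% limits of agreement — reduce to a few lines of arithmetic. The paired readings below are hypothetical values for illustration only, not data from the study:

```python
import numpy as np

def bland_altman(auto: np.ndarray, manual: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement (bias ± 1.96·SD)
    between two measurement methods, as used for the MRD comparisons."""
    diff = auto - manual
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))    # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired MRD readings (mm), automated vs. manual.
auto = np.array([4.8, 3.1, 5.0, 2.9, 4.2, 3.6])
manual = np.array([4.7, 3.2, 5.1, 2.8, 4.1, 3.7])
bias, lo, hi = bland_altman(auto, manual)
print(round(bias, 3), round(lo, 3), round(hi, 3))
```

If the methods agree well, the bias is near zero and roughly 95% of the paired differences fall inside [lo, hi].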
ABSTRACT
Background: Inferior oblique overaction (IOOA) is a common ocular motility disorder. This study aimed to propose a novel deep learning-based approach to automatically evaluate the amount of IOOA. Methods: This prospective study included 106 eyes of 72 consecutive patients attending the strabismus clinic of a tertiary referral hospital. Patients were eligible for inclusion if they were diagnosed with IOOA. IOOA was clinically graded from +1 to +4. Based on photographs in the adducted position, the height difference between the inferior corneal limbus of both eyes was manually measured using ImageJ and automatically measured by our deep learning-based image analysis system with human supervision. Correlation coefficients, Bland-Altman plots and mean absolute deviations (MADs) were used to compare the different measurements of IOOA. Results: There were significant correlations between automated photographic measurements and clinical gradings (Kendall's tau: 0.721; 95% confidence interval: 0.652 to 0.779; P<0.001), between automated and manual photographic measurements [intraclass correlation coefficient (ICC): 0.975; 95% confidence interval: 0.963 to 0.983; P<0.001], and between two repeated automated photographic measurements (ICC: 0.998; 95% confidence interval: 0.997 to 0.999; P<0.001). The biases and MADs were 0.10 [95% limits of agreement (LoA): -0.45 to 0.64] mm and 0.26±0.14 mm between automated and manual photographic measurements, and 0.01 (95% LoA: -0.14 to 0.16) mm and 0.07±0.04 mm between two repeated automated photographic measurements, respectively. Conclusions: The automated photographic measurements of IOOA using the deep learning technique were in excellent agreement with manual photographic measurements and clinical gradings. This new approach allows objective, accurate and repeatable measurement of IOOA and could easily be implemented in clinical practice using only photographs.
ABSTRACT
Purpose: To automatically predict the postoperative appearance of blepharoptosis surgeries and evaluate the generated images both objectively and subjectively in a clinical setting. Design: Cross-sectional study. Participants: This study involved 970 pairs of images of 450 eyes from 362 patients undergoing blepharoptosis surgeries at our oculoplastic clinic between June 2016 and April 2021. Methods: Preoperative and postoperative facial images were used to train and test the deep learning-based postoperative appearance prediction system (POAP) consisting of 4 modules, including the data processing module (P), ocular detection module (O), analyzing module (A), and prediction module (P). Main Outcome Measures: The overall and local performance of the system were automatically quantified by the overlap ratio of eyes and by lid contour analysis using midpupil lid distances (MPLDs). Four ophthalmologists and 6 patients were invited to complete a satisfaction scale and a similarity survey with the test set of 75 pairs of images on each scale. Results: The overall performance (mean overlap ratio) was 0.858 ± 0.082. The corresponding multiple radial MPLDs showed no significant differences between the predictive results and the real samples at any angle (P > 0.05). The absolute error between the predicted marginal reflex distance-1 (MRD1) and the actual postoperative MRD1 ranged from 0.013 mm to 1.900 mm (95% within 1 mm, 80% within 0.75 mm). The participating experts and patients were "satisfied" with 268 pairs (35.7%) and "highly satisfied" with most of the outcomes (420 pairs, 56.0%). The similarity score was 9.43 ± 0.79. Conclusions: The fully automatic deep learning-based method can predict postoperative appearance for blepharoptosis surgery with high accuracy and satisfaction, thus offering the patients with blepharoptosis an opportunity to understand the expected change more clearly and to relieve anxiety. 
In addition, this system could help patients select surgeons and plan daily living during the recovery phase, and may also offer guidance for inexperienced surgeons.
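The study quantifies overall performance as a mean overlap ratio of 0.858 between predicted and real postoperative eyes. Assuming the standard intersection-over-union definition of overlap (the abstract does not spell out the exact formula), the metric is:

```python
import numpy as np

def overlap_ratio(pred: np.ndarray, real: np.ndarray) -> float:
    """Intersection-over-union between a predicted and an actual
    postoperative eye-region mask (boolean arrays of equal shape)."""
    inter = np.logical_and(pred, real).sum()
    union = np.logical_or(pred, real).sum()
    return float(inter) / float(union) if union else 1.0

# Two 6x12-pixel rectangular "eye" masks, offset by one row.
pred = np.zeros((10, 20), dtype=bool); pred[2:8, 4:16] = True
real = np.zeros((10, 20), dtype=bool); real[3:9, 4:16] = True
print(round(overlap_ratio(pred, real), 3))  # 60/84 ≈ 0.714
```

A ratio of 1.0 means the predicted eye exactly coincides with the real postoperative eye; the paper's 0.858 mean indicates close but not pixel-perfect agreement.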
ABSTRACT
The first consensus guidelines for scoring the histopathological growth patterns (HGPs) of liver metastases were established in 2017. Since then, numerous studies have applied these guidelines, have further substantiated the potential clinical value of the HGPs in patients with liver metastases from various tumour types and are starting to shed light on the biology of the distinct HGPs. In the present guidelines, we give an overview of these studies, discuss novel strategies for predicting the HGPs of liver metastases, such as deep-learning algorithms for whole-slide histopathology images and medical imaging, and highlight liver metastasis animal models that exhibit features of the different HGPs. Based on a pooled analysis of large cohorts of patients with liver-metastatic colorectal cancer, we propose a new cut-off to categorise patients according to the HGPs. An up-to-date standard method for HGP assessment within liver metastases is also presented with the aim of incorporating HGPs into the decision-making processes surrounding the treatment of patients with liver-metastatic cancer. Finally, we propose hypotheses on the cellular and molecular mechanisms that drive the biology of the different HGPs, opening some exciting preclinical and clinical research perspectives.
Subjects
Colorectal Neoplasms; Liver Neoplasms; Animals; Colorectal Neoplasms/pathology; Liver Neoplasms/pathology
ABSTRACT
BACKGROUND AND AIM: Eyelid position and contour abnormalities can lead to various diseases, such as blepharoptosis, a common eyelid disease. Accurate assessment of eyelid morphology is important in the management of blepharoptosis. We aimed to propose a novel deep learning-based image analysis to automatically measure eyelid morphological properties before and after blepharoptosis surgery. METHODS: This study included 135 ptotic eyes of 103 patients who underwent blepharoptosis surgery. Facial photographs were taken preoperatively and postoperatively. Margin reflex distance (MRD) 1 and 2 of the operated eyes were manually measured by a senior surgeon. Multiple eyelid morphological parameters, such as MRD1, MRD2, upper eyelid length and corneal area, were automatically measured by our deep learning-based image analysis. Agreement between manual and automated measurements, as well as between two repeated automated measurements of MRDs, was analysed. Preoperative and postoperative eyelid morphological parameters were compared. Postoperative eyelid contour symmetry was evaluated using multiple mid-pupil lid distances (MPLDs). RESULTS: The intraclass correlation coefficients (ICCs) between manual and automated measurements of MRDs ranged from 0.934 to 0.971 (p < .001), and the bias ranged from 0.09 mm to 0.15 mm. The ICCs between two repeated automated measurements were up to 0.999 (p < .001), and the bias was no more than 0.002 mm. After surgery, MRD1 increased significantly from 0.31 ± 1.17 mm to 2.89 ± 1.06 mm, upper eyelid length from 19.94 ± 3.61 mm to 21.40 ± 2.40 mm, and corneal area from 52.72 ± 15.97 mm2 to 76.31 ± 11.31 mm2 (all p < .001). Postoperative binocular MPLDs at different angles (from 0° to 180°) showed no significant differences in the patients. CONCLUSION: This technique had high accuracy and repeatability for automatically measuring eyelid morphology, which allows objective assessment of blepharoptosis surgical outcomes.
Using only patients' photographs, this technique has great potential in diagnosis and management of other eyelid-related diseases.
Subjects
Blepharoptosis; Deep Learning; Blepharoptosis/diagnosis; Blepharoptosis/surgery; Eyelids/anatomy & histology; Eyelids/surgery; Humans; Retrospective Studies
ABSTRACT
Automated segmentation of brain glioma plays an active role in diagnostic decisions, progression monitoring and surgery planning. Based on deep neural networks, previous studies have shown promising techniques for brain glioma segmentation. However, these approaches lack powerful strategies to incorporate contextual information of tumor cells and their surroundings, which has been proven to be a fundamental cue for dealing with local ambiguity. In this work, we propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation. CANet captures high-dimensional and discriminative features with contexts from both the convolutional space and feature interaction graphs. We further propose context-guided attentive conditional random fields which can selectively aggregate features. We evaluate our method using the publicly accessible brain glioma segmentation datasets BRATS2017, BRATS2018 and BRATS2019. The experimental results show that the proposed algorithm has better or competitive performance against several state-of-the-art approaches under different segmentation metrics on the training and validation sets.
Subjects
Glioma; Magnetic Resonance Imaging; Algorithms; Brain/diagnostic imaging; Glioma/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Neural Networks, Computer
ABSTRACT
Accurate segmentation of lung cancer in pathology slides is a critical step in improving patient care. We proposed the ACDC@LungHP (Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology) challenge for evaluating different computer-aided diagnosis (CAD) methods on the automatic diagnosis of lung cancer. The ACDC@LungHP 2019 focused on segmentation (pixel-wise detection) of cancer tissue in whole slide imaging (WSI), using an annotated dataset of 150 training images and 50 test images from 200 patients. This paper reviews this challenge and summarizes the top 10 submitted methods for lung cancer segmentation. All methods were evaluated using precision, accuracy, sensitivity, specificity, and the Dice coefficient (DC). The DC ranged from 0.7354 ±0.1149 to 0.8372 ±0.0858. The DC of the best method was close to the inter-observer agreement (0.8398 ±0.0890). All methods were based on deep learning and categorized into two groups: multi-model methods and single-model methods. In general, multi-model methods were significantly better (p < 0.01) than single-model methods, with mean DCs of 0.7966 and 0.7544, respectively. Deep learning based methods could potentially help pathologists find suspicious regions for further analysis of lung cancer in WSI.
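The Dice coefficient (DC) used to rank the challenge submissions is twice the intersection of the two masks divided by the sum of their sizes. A minimal reference implementation for binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(pred, target).sum() / denom)

# Two 32-pixel masks overlapping in 16 pixels.
a = np.zeros((8, 8), dtype=bool); a[:4, :] = True
b = np.zeros((8, 8), dtype=bool); b[2:6, :] = True
print(dice_coefficient(a, b))  # 2*16 / 64 = 0.5
```

Dice weights the intersection twice, so it is more forgiving of small masks than plain intersection-over-union; the two are monotonically related (DC = 2·IoU / (1 + IoU)).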
Subjects
Deep Learning; Lung Neoplasms; Diagnosis, Computer-Assisted; Humans; Lung Neoplasms/diagnostic imaging
ABSTRACT
In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast-limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into their four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all mammogram patches is created and used as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.
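The four-subband decomposition used in CNN-DW can be illustrated with a one-level Haar 2D-DWT. This sketch uses a simple averaging convention for clarity; the paper does not name its wavelet, and library implementations with orthonormal filters scale the subbands differently:

```python
import numpy as np

def haar_dwt2(img: np.ndarray):
    """One-level 2-D Haar DWT: returns the LL, LH, HL, HH subbands,
    each half the size of the (even-sided) input image."""
    a = img.astype(float)
    lo_r = (a[0::2, :] + a[1::2, :]) / 2.0   # vertical low-pass (row pairs)
    hi_r = (a[0::2, :] - a[1::2, :]) / 2.0   # vertical high-pass
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0   # approximation
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0   # horizontal detail
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0   # vertical detail
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

patch = np.arange(16.0).reshape(4, 4)   # stand-in for a mammogram patch
ll, lh, hl, hh = haar_dwt2(patch)
print(ll.shape, float(ll[0, 0]))
```

The LL subband is a downsampled approximation of the patch, while LH, HL and HH carry horizontal, vertical and diagonal detail; in CNN-DW, features are then extracted from each of the four subbands.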
Subjects
Mammography/methods; Neural Networks, Computer; Computer Simulation; Databases as Topic; Humans; Image Processing, Computer-Assisted; Reproducibility of Results; Support Vector Machine; Wavelet Analysis
ABSTRACT
[This corrects the article DOI: 10.1371/journal.pone.0166067.].
ABSTRACT
BACKGROUND & AIMS: Adenocarcinomas of the pancreatobiliary system are currently classified by their primary anatomical location. In particular, the pathological diagnosis of intrahepatic cholangiocarcinoma is still considered a diagnosis of exclusion of metastatic adenocarcinoma. Periampullary cancers have previously been classified according to the histological type of differentiation (pancreatobiliary, intestinal), but overlapping morphological features hinder their differential diagnosis. We performed an integrative immunohistochemical analysis of pancreatobiliary tumors to improve their diagnosis and prediction of outcome. METHODS: This was a retrospective observational cohort study of patients with adenocarcinoma of the pancreatobiliary system who underwent diagnostic core needle biopsy or surgical resection at a tertiary referral center. 409 tumor samples were analyzed with up to 27 conventional antibodies used in diagnostic pathology. The immunohistochemical score was the percentage of stained tumor cells. Bioinformatic analysis, internal validation, and survival analysis were performed. RESULTS: Hierarchical clustering and differential expression analysis identified three immunohistochemical tumor types (extrahepatic pancreatobiliary, intestinal, and intrahepatic cholangiocarcinoma) and the discriminant markers between them. Among patients who underwent surgical resection of their primary tumor with curative intent, the intestinal type showed an adjusted hazard ratio of 0.19 for overall survival (95% confidence interval 0.05-0.72; p value = 0.014) compared to the extrahepatic pancreatobiliary type. CONCLUSIONS: Integrative immunohistochemical classification of adenocarcinomas of the pancreatobiliary system results in a characteristic immunohistochemical profile for intrahepatic cholangiocarcinoma and intestinal-type adenocarcinoma, which helps in distinguishing them from metastatic and pancreatobiliary-type adenocarcinoma, respectively.
A diagnostic immunohistochemical panel and additional extended panels of discriminant markers are proposed as guidance for their pathological diagnosis.