Results 1 - 20 of 92
1.
Comput Med Imaging Graph ; 115: 102383, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38643551

ABSTRACT

Semi-supervised learning has made significant progress in medical image segmentation. However, existing methods primarily utilize information from a single dimensionality, resulting in sub-optimal performance on challenging magnetic resonance imaging (MRI) data with multiple segmentation objects and anisotropic resolution. To address this issue, we present a Hybrid Dual Mean-Teacher (HD-Teacher) model with hybrid, semi-supervised, and multi-task learning to achieve effective semi-supervised segmentation. HD-Teacher employs a 2D and a 3D mean-teacher network to produce segmentation labels and signed distance fields from the hybrid information captured in both dimensionalities. This hybrid mechanism allows HD-Teacher to utilize features from 2D, 3D, or both dimensions as needed. Outputs from the 2D and 3D teacher models are dynamically combined based on confidence scores, forming a single hybrid prediction with estimated uncertainty. We propose a hybrid regularization module that encourages both student models to produce results close to the uncertainty-weighted hybrid prediction, further improving their feature extraction capability. Extensive experiments on binary and multi-class segmentation across three MRI datasets demonstrated that the proposed framework can (1) significantly outperform state-of-the-art semi-supervised methods, (2) surpass a fully supervised VNet trained on substantially more annotated data, and (3) perform on par with human raters on a muscle and bone segmentation task. Code will be available at https://github.com/ThisGame42/Hybrid-Teacher.
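The confidence-based fusion described above can be sketched as follows. This is a minimal NumPy illustration; the function and variable names are illustrative rather than taken from the paper's released code, and the paper's exact confidence and uncertainty definitions may differ.

```python
import numpy as np

def hybrid_prediction(probs_2d, probs_3d):
    """Combine 2D and 3D teacher probability maps by per-voxel confidence.

    A sketch of confidence-weighted fusion: each teacher's maximum class
    probability is used as its confidence, and the fused output is a
    convex combination weighted by those confidences.
    """
    conf_2d = probs_2d.max(axis=-1, keepdims=True)
    conf_3d = probs_3d.max(axis=-1, keepdims=True)
    w_2d = conf_2d / (conf_2d + conf_3d)
    fused = w_2d * probs_2d + (1.0 - w_2d) * probs_3d
    # Disagreement between the two teachers serves as a simple
    # uncertainty estimate for each voxel.
    uncertainty = np.abs(probs_2d - probs_3d).mean(axis=-1)
    return fused, uncertainty
```

Because the fusion is a convex combination of probability vectors, each fused voxel still sums to one over classes.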

2.
Nat Methods ; 21(2): 182-194, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38347140

ABSTRACT

Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
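A toy example of the kind of pitfall this work catalogues: for a small foreground structure, pixel accuracy can look excellent even when the prediction misses the structure entirely, while an overlap metric such as Dice exposes the failure (illustrative values only):

```python
import numpy as np

# Ground truth: a tiny foreground object, 9 pixels out of 10,000.
gt = np.zeros((100, 100), dtype=bool)
gt[:3, :3] = True
# A degenerate prediction that finds no foreground at all.
pred = np.zeros_like(gt)

# Pixel accuracy is dominated by the background and looks excellent.
accuracy = (pred == gt).mean()          # 0.9991, misleadingly high
# Dice overlap reveals the complete failure on the structure of interest.
dice = 2 * (pred & gt).sum() / (pred.sum() + gt.sum() + 1e-8)   # 0.0
```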


Subject(s)
Artificial Intelligence
3.
IEEE Trans Med Imaging ; PP, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38373129

ABSTRACT

Accurate morphological reconstruction of neurons in whole brain images is critical for brain science research. However, due to the wide imaging range of the whole brain, uneven staining, and optical system fluctuations, image properties differ significantly between regions of an ultrascale brain image, with dramatically varying voxel intensities and an inhomogeneous distribution of background noise, posing an enormous challenge to neuron reconstruction from whole brain images. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. Specifically, this framework includes an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share the same Multi-Scale Feature Encoder (MSFE). The MSFE introduces an attention module named the Channel Space Fusion Module (CSFM) to extract the structure and intensity distribution features of neurons at different scales, addressing the problem of anisotropy in 3D space. The EFC then classifies these feature maps based on external features, such as foreground intensity distribution and image smoothness, and selects the PASD parameters specific to each class to decode the maps into accurate segmentation results. The PASD contains multiple sets of parameters, each trained on representative image blocks with different complex signal-to-noise distributions, to handle varied images more robustly. Experimental results show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results in the task of neuron reconstruction from ultrascale brain images, with an improvement of about 49% in speed and 12% in F1 score.

4.
Clin Exp Optom ; 107(2): 130-146, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37674264

ABSTRACT

Artificial intelligence is a rapidly expanding field within computer science that encompasses the emulation of human intelligence by machines. Machine learning and deep learning, the two primary data-driven pattern analysis approaches under the umbrella of artificial intelligence, have attracted considerable interest in the last few decades. The evolution of technology has resulted in a substantial amount of artificial intelligence research on ophthalmic and neurodegenerative disease diagnosis using retinal images. Various artificial intelligence-based techniques have been used for diagnostic purposes, including traditional machine learning, deep learning, and their combinations. Presented here is a review of the literature covering the last 10 years on this topic, discussing the use of artificial intelligence in analysing data from different modalities, and their combinations, for the diagnosis of glaucoma and neurodegenerative diseases. The performance of published artificial intelligence methods varies due to several factors, yet the results suggest that such methods can potentially facilitate clinical diagnosis. Generally, the accuracy of artificial intelligence-assisted diagnosis ranges from 67% to 98%, and the area under the sensitivity-specificity curve (AUC) ranges from 0.71 to 0.98, which outperforms typical human performance of 71.5% accuracy and 0.86 area under the curve. This indicates that artificial intelligence-based tools can provide clinicians with useful information that can assist in improving diagnosis. The review suggests that there is room for improvement of existing artificial intelligence-based models using retinal imaging modalities before they are incorporated into clinical practice.
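AUC figures like those quoted above can be computed from raw scores with the rank-based (Mann-Whitney) formulation of the area under the ROC curve; a dependency-free sketch:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank formulation:
    the probability that a randomly chosen positive example receives a
    higher score than a randomly chosen negative one (ties count half).
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75.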


Subject(s)
Glaucoma , Neurodegenerative Diseases , Humans , Artificial Intelligence , Neurodegenerative Diseases/diagnostic imaging , Glaucoma/diagnosis , Machine Learning , Sensitivity and Specificity
5.
IEEE Trans Med Imaging ; 43(4): 1308-1322, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38015689

ABSTRACT

Surgical scene segmentation is a critical task in robotic-assisted surgery. However, the complexity of the surgical scene, which mainly includes local feature similarity (e.g., between different anatomical tissues), intraoperative complex artifacts, and indistinguishable boundaries, poses significant challenges to accurate segmentation. To tackle these problems, we propose the Long Strip Kernel Attention network (LSKANet), including two well-designed modules named the Dual-block Large Kernel Attention module (DLKA) and the Multiscale Affinity Feature Fusion module (MAFF), which enable precise segmentation of surgical images. Specifically, by introducing strip convolutions with different topologies (cascaded and parallel) in the two blocks and a large kernel design, DLKA can make full use of region- and strip-like surgical features and extract both visual and structural information to reduce false segmentation caused by local feature similarity. In MAFF, affinity matrices calculated from multiscale feature maps are applied as feature fusion weights, which helps to address the interference of artifacts by suppressing the activations of irrelevant regions. In addition, a hybrid loss with a Boundary Guided Head (BGH) is proposed to help the network segment indistinguishable boundaries effectively. We evaluate the proposed LSKANet on three datasets with different surgical scenes. The experimental results show that our method achieves new state-of-the-art results on all three datasets, with improvements of 2.6%, 1.4%, and 3.4% mIoU, respectively. Furthermore, our method is compatible with different backbones and can significantly increase their segmentation accuracy. Code is available at https://github.com/YubinHan73/LSKANet.
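The mIoU improvements quoted above follow the standard definition of mean intersection-over-union; a minimal sketch of per-class IoU averaging (not the authors' evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union, averaged over classes that appear
    in either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```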


Subject(s)
Robotic Surgical Procedures , Artifacts , Spine , Image Processing, Computer-Assisted
6.
Comput Biol Med ; 167: 107617, 2023 12.
Article in English | MEDLINE | ID: mdl-37918261

ABSTRACT

Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.


Subject(s)
Deep Learning , Algorithms , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Microscopy
7.
Cell Rep Methods ; 3(8): 100547, 2023 08 28.
Article in English | MEDLINE | ID: mdl-37671013

ABSTRACT

Single-cell-resolved systems biology methods, including omics- and imaging-based measurement modalities, generate a wealth of high-dimensional data characterizing the heterogeneity of cell populations. Representation learning methods are routinely used to analyze these complex, high-dimensional data by projecting them into lower-dimensional embeddings. This facilitates the interpretation and interrogation of the structures, dynamics, and regulation of cell heterogeneity. Reflecting their central role in analyzing diverse single-cell data types, a myriad of representation learning methods exist, with new approaches continually emerging. Here, we contrast general features of representation learning methods spanning statistical, manifold learning, and neural network approaches. We consider key steps involved in representation learning with single-cell data, including data pre-processing, hyperparameter optimization, downstream analysis, and biological validation. Interdependencies and contingencies linking these steps are also highlighted. This overview is intended to guide researchers in the selection, application, and optimization of representation learning strategies for current and future single-cell research applications.


Subject(s)
Law Enforcement , Learning , Humans , Neural Networks, Computer , Research Personnel , Data Analysis
8.
Sci Rep ; 13(1): 13604, 2023 08 21.
Article in English | MEDLINE | ID: mdl-37604916

ABSTRACT

Tumour heterogeneity in breast cancer poses challenges in predicting outcome and response to therapy. Spatial transcriptomics technologies may address these challenges, as they provide a wealth of information about gene expression at the cell level, but they are expensive, hindering their use in large-scale clinical oncology studies. Predicting gene expression from hematoxylin and eosin stained histology images provides a more affordable alternative for such studies. Here we present BrST-Net, a deep learning framework for predicting gene expression from histopathology images using spatial transcriptomics data. Using this framework, we trained and evaluated four distinct state-of-the-art deep learning architectures, which include ResNet101, Inception-v3, EfficientNet (with six different variants), and vision transformer (with two different variants), all without utilizing pretrained weights for the prediction of 250 genes. To enhance the generalisation performance of the main network, we introduce an auxiliary network into the framework. Our methodology outperforms previous studies, with 237 genes identified with positive correlation, including 24 genes with a median correlation coefficient greater than 0.50. This is a notable improvement over previous studies, which could predict only 102 genes with positive correlation, with the highest correlation values ranging from 0.29 to 0.34.
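The per-gene correlations reported above can be computed column-wise between predicted and measured expression; a vectorized sketch (illustrative, not the BrST-Net evaluation code):

```python
import numpy as np

def per_gene_correlations(pred, true):
    """Pearson correlation between predicted and measured expression,
    computed independently for each gene (column) across samples (rows)."""
    pred = pred - pred.mean(axis=0)
    true = true - true.mean(axis=0)
    num = (pred * true).sum(axis=0)
    den = np.sqrt((pred ** 2).sum(axis=0) * (true ** 2).sum(axis=0))
    return num / den
```

A gene is then counted as "positively correlated" when its entry in the returned vector is greater than zero.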


Subject(s)
Deep Learning , Mammary Neoplasms, Animal , Animals , Transcriptome , Gene Expression Profiling , Electric Power Supplies
9.
IEEE Trans Med Imaging ; 42(11): 3408-3419, 2023 11.
Article in English | MEDLINE | ID: mdl-37342952

ABSTRACT

Surgical instrument segmentation is of great significance to robot-assisted surgery, but noise caused by reflection, water mist, and motion blur during surgery, as well as the varied forms of surgical instruments, greatly increases the difficulty of precise segmentation. A novel method called the Branch Aggregation Attention network (BAANet) is proposed to address these challenges; it adopts a lightweight encoder and two designed modules, named the Branch Balance Aggregation module (BBA) and the Block Attention Fusion module (BAF), for efficient feature localization and denoising. By introducing the unique BBA module, features from multiple branches are balanced and optimized through a combination of addition and multiplication to complement strengths and effectively suppress noise. Furthermore, to fully integrate contextual information and capture the region of interest, the BAF module is proposed in the decoder; it receives adjacent feature maps from the BBA module and localizes the surgical instruments from both global and local perspectives by utilizing a dual-branch attention mechanism. According to the experimental results, the proposed method has the advantage of being lightweight while outperforming the second-best method by 4.03%, 1.53%, and 1.34% in mIoU scores on three challenging surgical instrument datasets, respectively. Code is available at https://github.com/SWT-1014/BAANet.


Subject(s)
Robotic Surgical Procedures , Motion , Water , Surgical Instruments , Image Processing, Computer-Assisted
10.
Cancers (Basel) ; 15(9)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37174035

ABSTRACT

Gene expression can be used to subtype breast cancer with improved prediction of the risk of recurrence and treatment responsiveness over that obtained using routine immunohistochemistry (IHC). However, in the clinic, molecular profiling is primarily used for ER+ breast cancer; it is costly, tissue destructive, requires specialised platforms, and takes several weeks to yield a result. Deep learning algorithms can effectively extract morphological patterns from digital histopathology images to predict molecular phenotypes quickly and cost-effectively. We propose a new, computationally efficient approach called hist2RNA, inspired by bulk RNA sequencing techniques, to predict the expression of 138 genes (incorporated from 6 commercially available molecular profiling tests), including the luminal PAM50 subtype, from hematoxylin and eosin (H&E)-stained whole slide images (WSIs). The training phase involves the aggregation of extracted features for each patient from a pretrained model to predict gene expression at the patient level, using annotated H&E images from The Cancer Genome Atlas (TCGA, n = 335). We demonstrate successful gene prediction on a held-out test set (n = 160, corr = 0.82 across patients, corr = 0.29 across genes) and perform exploratory analysis on an external tissue microarray (TMA) dataset (n = 498) with known IHC and survival information. Our model is able to predict gene expression and luminal PAM50 subtype (Luminal A versus Luminal B) on the TMA dataset with prognostic significance for overall survival in univariate analysis (c-index = 0.56, hazard ratio = 2.16 (95% CI 1.12-3.06), p < 5 × 10⁻³), and independent significance in multivariate analysis incorporating standard clinicopathological variables (c-index = 0.65, hazard ratio = 1.87 (95% CI 1.30-2.68), p < 5 × 10⁻³). The proposed strategy achieves superior performance while requiring less training time, resulting in lower energy consumption and computational cost compared to patch-based models. Additionally, hist2RNA predicts gene expression with the potential to determine luminal molecular subtypes that correlate with overall survival, without the need for expensive molecular testing.

11.
Nat Methods ; 20(7): 1010-1020, 2023 07.
Article in English | MEDLINE | ID: mdl-37202537

ABSTRACT

The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.


Subject(s)
Benchmarking , Cell Tracking , Cell Tracking/methods , Machine Learning , Algorithms
12.
IEEE Trans Med Imaging ; 42(5): 1401-1412, 2023 05.
Article in English | MEDLINE | ID: mdl-37015696

ABSTRACT

Histopathological Whole Slide Images (WSIs) at giga-pixel resolution are the gold standard for cancer analysis and prognosis. Due to the scarcity of pixel- or patch-level annotations of WSIs, many existing methods attempt to predict survival outcomes based on a three-stage strategy that includes patch selection, patch-level feature extraction and aggregation. However, the patch features are usually extracted by using truncated models (e.g. ResNet) pretrained on ImageNet without fine-tuning on WSI tasks, and the aggregation stage does not consider the many-to-one relationship between multiple WSIs and the patient. In this paper, we propose a novel survival prediction framework that consists of patch sampling, feature extraction and patient-level survival prediction. Specifically, we employ two kinds of self-supervised learning methods, i.e. colorization and cross-channel, as pretext tasks to train convnet-based models that are tailored for extracting features from WSIs. Then, at the patient-level survival prediction we explicitly aggregate features from multiple WSIs, using consistency and contrastive losses to normalize slide-level features at the patient level. We conduct extensive experiments on three large-scale datasets: TCGA-GBM, TCGA-LUSC and NLST. Experimental results demonstrate the effectiveness of our proposed framework, as it achieves state-of-the-art performance in comparison with previous studies, with concordance index of 0.670, 0.679 and 0.711 on TCGA-GBM, TCGA-LUSC and NLST, respectively.
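The concordance index used for evaluation compares predicted risks over comparable pairs of patients; a minimal sketch of Harrell's c-index for right-censored survival data (ties in observed time are ignored for brevity):

```python
def concordance_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when subject i's observed time is shorter
    and subject i actually experienced the event (events[i] == 1); the pair
    is concordant when subject i was also assigned the higher predicted
    risk. Tied risks count half.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, which is why the reported indices of 0.670-0.711 indicate useful but imperfect ranking.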


Subject(s)
Neoplasms , Supervised Machine Learning , Humans , Neoplasms/diagnostic imaging
13.
Sci Rep ; 13(1): 1617, 2023 01 28.
Article in English | MEDLINE | ID: mdl-36709392

ABSTRACT

Segmentation of white matter tracts in diffusion magnetic resonance images is an important first step in many imaging studies of the brain in health and disease. Similar to medical image segmentation in general, a popular approach to white matter tract segmentation is to use U-Net based artificial neural network architectures. Despite many suggested improvements to the U-Net architecture in recent years, there is a lack of systematic comparison of architectural variants for white matter tract segmentation. In this paper, we evaluate multiple U-Net based architectures specifically for this purpose. We compare the results of these networks to those achieved by our own various architecture changes, as well as to new U-Net architectures designed automatically via neural architecture search (NAS). To the best of our knowledge, this is the first study to systematically compare multiple U-Net based architectures for white matter tract segmentation, and the first to use NAS. We find that the recently proposed medical imaging segmentation network UNet3+ slightly outperforms the current state of the art for white matter tract segmentation, and achieves a notably better mean Dice score for segmentation of the fornix (+ 0.01 and + 0.006 mean Dice increase for left and right fornix respectively), a tract that the current state of the art model struggles to segment. UNet3+ also outperforms the current state of the art when little training data is available. Additionally, manual architecture search found that a minor segmentation improvement is observed when an additional, deeper layer is added to the U-shape of UNet3+. However, all networks, including those designed via NAS, achieve similar results, suggesting that there may be benefit in exploring networks that deviate from the general U-Net paradigm.


Subject(s)
Biological Phenomena , White Matter , White Matter/diagnostic imaging , Neural Networks, Computer , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
14.
Bioinformatics ; 39(1)2023 01 01.
Article in English | MEDLINE | ID: mdl-36579866

ABSTRACT

MOTIVATION: Subcellular localization of human proteins is essential to comprehend their functions and roles in physiological processes, which in turn helps in diagnostic and prognostic studies of pathological conditions and impacts clinical decision-making. Since proteins reside at multiple locations at the same time and a few subcellular locations host far more proteins than others, the computational task for their subcellular localization is to train a multilabel classifier while handling data imbalance. In imbalanced data, minority classes are underrepresented, thus leading to a heavy bias towards the majority classes and the degradation of predictive capability for the minority classes. Furthermore, data imbalance in multilabel settings is an even more complex problem due to the coexistence of majority and minority classes. RESULTS: Our studies reveal that, based on the extent of concurrence of majority and minority classes, oversampling of minority samples through appropriate data augmentation techniques holds promising scope for boosting the classification performance for the minority classes. We measured the magnitude of data imbalance per class and the concurrence of majority and minority classes in the dataset. Based on the obtained values, we identified minority and medium classes and propose a new oversampling method that includes non-linear mixup, geometric and colour transformations for data augmentation, and a sampling approach to prepare minibatches. Performance evaluation on the Human Protein Atlas Kaggle challenge dataset shows that the proposed method is capable of achieving better predictions for minority classes than existing methods. AVAILABILITY AND IMPLEMENTATION: Data used in this study are available at https://www.kaggle.com/competitions/human-protein-atlas-image-classification/data. Source code is available at https://github.com/priyarana/Protein-subcellular-localisation-method.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
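Mixup-style oversampling, including a nonlinear variant in which the mixing ratio varies per input element, can be sketched as follows. The label-mixing rule shown for the nonlinear case is one simple choice for illustration, not necessarily the rule used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, x2, y1, y2, alpha=0.2):
    """Standard mixup: one Beta-distributed ratio mixes the whole sample."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def nonlinear_mixup(x1, x2, y1, y2, alpha=0.2):
    """Nonlinear mixup: an independent mixing ratio per input element,
    so the synthetic sample is no longer a single convex combination
    of the two inputs. The label here is mixed with the mean ratio."""
    lam = rng.beta(alpha, alpha, size=x1.shape)
    lam_y = lam.mean()
    return lam * x1 + (1 - lam) * x2, lam_y * y1 + (1 - lam_y) * y2
```

Synthetic minority samples generated this way can then be fed into a minority-focused minibatch sampler during training.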


Subject(s)
Algorithms , Proteins , Humans , Proteins/metabolism , Software , Clinical Decision-Making , Protein Transport
15.
IEEE Trans Med Imaging ; 42(5): 1278-1288, 2023 05.
Article in English | MEDLINE | ID: mdl-36455082

ABSTRACT

Microscopy cell segmentation is a crucial step in biological image analysis and a challenging task. In recent years, deep learning has been widely used to tackle this task, with promising results. A critical aspect of training complex neural networks for this purpose is the selection of the loss function, as it affects the learning process. In the field of cell segmentation, most of the recent research in improving the loss function focuses on addressing the problem of inter-class imbalance. Despite promising achievements, more work is needed, as the challenge of cell segmentation is not only the inter-class imbalance but also the intra-class imbalance (the cost imbalance between the false positives and false negatives of the inference model), the segmentation of cell minutiae, and the missing annotations. To deal with these challenges, in this paper, we propose a new compound loss function employing a shape aware weight map. The proposed loss function is inspired by Youden's J index to handle the problem of inter-class imbalance and uses a focal cross-entropy term to penalize the intra-class imbalance and weight easy/hard samples. The proposed shape aware weight map can handle the problem of missing annotations and facilitate valid segmentation of cell minutiae. Results of evaluations on all ten 2D+time datasets from the public cell tracking challenge demonstrate 1) the superiority of the proposed loss function with the shape aware weight map, and 2) that the performance of recent deep learning-based cell segmentation methods can be improved by using the proposed compound loss function.
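The focal cross-entropy term referred to above down-weights easy examples so training focuses on hard pixels; a binary sketch of the focal term alone (the paper's full compound loss additionally includes a Youden's J-inspired term and a shape-aware weight map, which are not reproduced here):

```python
import numpy as np

def focal_bce(p, y, gamma=2.0, eps=1e-7):
    """Binary focal cross-entropy.

    The modulating factors (1 - p)**gamma and p**gamma shrink the loss
    contribution of confidently correct (easy) pixels, leaving hard
    pixels to dominate the gradient.
    """
    p = np.clip(p, eps, 1 - eps)
    loss = -(y * (1 - p) ** gamma * np.log(p)
             + (1 - y) * p ** gamma * np.log(1 - p))
    return loss.mean()
```

With gamma = 0 this reduces to plain binary cross-entropy; larger gamma penalizes easy samples less.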


Subject(s)
Cells , Deep Learning , Image Processing, Computer-Assisted , Microscopy , Image Processing, Computer-Assisted/methods , Cells/ultrastructure
16.
IEEE Trans Med Imaging ; 42(1): 148-157, 2023 01.
Article in English | MEDLINE | ID: mdl-36103445

ABSTRACT

3D soma detection in whole brain images is a critical step for neuron reconstruction. However, existing soma detection methods are not suitable for whole mouse brain images, which involve large amounts of data and complex structure. In this paper, we propose a two-stage deep neural network to achieve fast and accurate soma detection in large-scale, high-resolution whole mouse brain images (more than 1 TB). In the first stage, a lightweight Multi-level Cross Classification Network (MCC-Net) is proposed to filter out images without somas and generate coarse candidate images by combining the feature extraction capabilities of multiple convolution layers. This speeds up soma detection and reduces computational complexity. In the second stage, to further obtain the accurate locations of somas in the whole mouse brain images, the Scale Fusion Segmentation Network (SFS-Net) is developed to segment soma regions from the candidate images. Specifically, the SFS-Net captures multi-scale context information and establishes a complementary relationship between encoder and decoder by combining the encoder-decoder structure with a 3D Scale-Aware Pyramid Fusion (SAPF) module for better segmentation performance. Experimental results on three whole mouse brain images verify that the proposed method achieves excellent performance and provides beneficial information for neuron reconstruction. Additionally, we have established a public dataset named WBMSD, including 798 high-resolution and representative images (256 × 256 × 256 voxels) from three whole mouse brain images, dedicated to research on soma detection, which will be released along with this paper.
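The two-stage design can be summarised as filter-then-segment; a schematic sketch in which `classify` and `segment` are placeholders standing in for MCC-Net and SFS-Net (the real networks operate on 3D image blocks, not scalars):

```python
def two_stage_detect(blocks, classify, segment):
    """Run a cheap classifier over all blocks first, then apply the
    expensive segmentation only to the blocks flagged as containing
    somas. This is what makes whole-brain-scale detection tractable."""
    candidates = [b for b in blocks if classify(b)]
    return [segment(b) for b in candidates]
```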


Subject(s)
Brain , Neural Networks, Computer , Mice , Animals , Brain/diagnostic imaging , Neurons , Image Processing, Computer-Assisted/methods
17.
Sci Rep ; 12(1): 22286, 2022 12 24.
Article in English | MEDLINE | ID: mdl-36566313

ABSTRACT

Recent progress in encoder-decoder neural network architecture design has led to significant performance improvements in a wide range of medical image segmentation tasks. However, state-of-the-art networks for a given task may be too computationally demanding to run on affordable hardware, and thus users often resort to practical workarounds by modifying various macro-level design aspects. Two common examples are downsampling of the input images and reducing the network depth or size to meet computer memory constraints. In this paper, we investigate the effects of these changes on segmentation performance and show that image complexity can be used as a guideline in choosing what is best for a given dataset. We consider four statistical measures to quantify image complexity and evaluate their suitability on ten different public datasets. For the purpose of our illustrative experiments, we use DeepLabV3+ (deep large-size), M2U-Net (deep lightweight), U-Net (shallow large-size), and U-Net Lite (shallow lightweight). Our results suggest that median frequency is the best complexity measure when deciding on an acceptable input downsampling factor and using a deep versus shallow, large-size versus lightweight network. For high-complexity datasets, a lightweight network running on the original images may yield better segmentation results than a large-size network running on downsampled images, whereas the opposite may be the case for low-complexity images.
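One plausible reading of a median-frequency complexity measure is the radial frequency below which half of the non-DC spectral power of the image lies; a NumPy sketch under that assumption (the paper's exact definition may differ):

```python
import numpy as np

def median_frequency(img):
    """Radial frequency below which half of the (non-DC) spectral
    power of a 2D image lies; higher values indicate more fine detail."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)   # radial frequency per bin
    power[r == 0] = 0.0                      # drop the DC component
    order = np.argsort(r.ravel())
    cum = np.cumsum(power.ravel()[order])
    idx = np.searchsorted(cum, cum[-1] / 2.0)
    return r.ravel()[order][idx]
```

Under this reading, a high-frequency texture (e.g. a checkerboard) scores much higher than a smooth gradient, matching the intuition that high-complexity datasets tolerate downsampling poorly.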


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Costs and Cost Analysis
18.
Sci Rep ; 12(1): 18101, 2022 10 27.
Article in English | MEDLINE | ID: mdl-36302948

ABSTRACT

Because human blood cells undergo morphological changes with cell-cycle progression and storage duration, classification of these changes is important for correct and effective clinical decisions. Automated classification systems help avoid subjective outcomes and are more efficient. Deep learning, and more specifically convolutional neural networks, have achieved state-of-the-art performance on various biomedical image classification problems. However, real-world data often suffer from the data imbalance problem, owing to which the trained classifier is biased towards the majority classes and does not perform well on the minority classes. This study presents an imbalanced blood cell classification method that utilises Wasserstein divergence GAN, mixup, and a novel nonlinear mixup for data augmentation to achieve oversampling of the minority classes. We also present a minority-class-focussed sampling strategy, which allows effective representation of minority class samples produced by all three data augmentation techniques and contributes to the classification performance. The method was evaluated on two publicly available datasets of immortalised human T-lymphocyte cells and red blood cells. Classification performance evaluated using the F1-score shows that our proposed approach outperforms existing methods on the same datasets.


Subject(s)
Blood Cells , Neural Networks, Computer , Humans
19.
Article in English | MEDLINE | ID: mdl-36121961

ABSTRACT

The precise segmentation of medical images is one of the key challenges in pathology research and clinical practice. However, many medical image segmentation tasks suffer from large variation between lesion types and from similarity in shape and colour between lesions and the surrounding tissue, which seriously limits segmentation accuracy. In this article, a novel method called the Swin Pyramid Aggregation network (SwinPA-Net) is proposed, combining two designed modules with the Swin Transformer to learn more powerful and robust features. The two modules, named the dense multiplicative connection (DMC) module and the local pyramid attention (LPA) module, are proposed to aggregate the multiscale context information of medical images. The DMC module cascades multiscale semantic feature information through dense multiplicative feature fusion, which minimizes the interference of shallow background noise to improve the feature expression and addresses the problem of excessive variation in lesion size and type. Moreover, the LPA module guides the network to focus on the region of interest by merging global and local attention, which helps to address the similarity between lesions and surrounding tissue. The proposed network is evaluated on two public benchmark datasets, for the polyp segmentation task and the skin lesion segmentation task, as well as a private clinical dataset for the laparoscopic image segmentation task. Compared with existing state-of-the-art (SOTA) methods, SwinPA-Net achieves the most advanced performance and outperforms the second-best method on the mean Dice score by 1.68%, 0.8%, and 1.2% on the three tasks, respectively.

20.
Sci Rep ; 12(1): 14527, 2022 08 25.
Article in English | MEDLINE | ID: mdl-36008541

ABSTRACT

Computational pathology is a rapidly expanding area for research due to the current global transformation of histopathology through the adoption of digital workflows. Survival prediction of breast cancer patients is an important task that currently depends on histopathology assessment of cancer morphological features, immunohistochemical biomarker expression and patient clinical findings. To facilitate the manual process of survival risk prediction, we developed a computational pathology framework for survival prediction using digitally scanned haematoxylin and eosin-stained tissue microarray images of clinically aggressive triple negative breast cancer. Our results show that the model can produce an average concordance index of 0.616. Our model predictions are analysed for independent prognostic significance in univariate analysis (hazard ratio = 3.12, 95% confidence interval [1.69,5.75], p < 0.005) and multivariate analysis using clinicopathological data (hazard ratio = 2.68, 95% confidence interval [1.44,4.99], p < 0.005). Through qualitative analysis of heatmaps generated from our model, an expert pathologist is able to associate tissue features highlighted in the attention heatmaps of high-risk predictions with morphological features associated with more aggressive behaviour such as low levels of tumour infiltrating lymphocytes, stroma rich tissues and high-grade invasive carcinoma, providing explainability of our method for triple negative breast cancer.


Subject(s)
Breast Neoplasms , Carcinoma , Triple Negative Breast Neoplasms , Breast Neoplasms/pathology , Carcinoma/pathology , Female , Humans , Lymphocytes, Tumor-Infiltrating/pathology , Prognosis , Proportional Hazards Models , Triple Negative Breast Neoplasms/pathology