1.
Front Plant Sci ; 14: 1250844, 2023.
Article in English | MEDLINE | ID: mdl-37860254

ABSTRACT

Introduction: Accurate and timely detection of plant stress is essential for yield protection, allowing better-targeted intervention strategies. Recent advances in remote sensing and deep learning have shown great potential for rapid, non-invasive detection of plant stress in a fully automated and reproducible manner. However, existing models often face several challenges: 1) computational inefficiency and misclassification between different stresses with similar symptoms; and 2) poor interpretability of the host-stress interaction. Methods: In this work, we propose a novel fast Fourier Convolutional Neural Network (FFDNN) for accurate and explainable detection of two plant stresses with similar symptoms (i.e. wheat yellow rust and nitrogen deficiency). Specifically, unlike existing CNN models, the main components of the proposed model are: 1) a fast Fourier convolutional block, with a new fast Fourier transform kernel as the basic perception unit, substituting the traditional convolutional kernel to capture both local and global responses to plant stress at various time scales and to improve computing efficiency with fewer learning parameters in the Fourier domain; and 2) a Capsule Feature Encoder that encapsulates the extracted features into a series of vector features representing the part-to-whole relationships in the hierarchical structure of the host-stress interactions of the specific stress. In addition, to alleviate over-fitting, a photochemical vegetation index-based filter is applied as a pre-processing operator to remove non-photochemical noise from the input Sentinel-2 time series. Results and discussion: The proposed model has been evaluated with ground truth data under both controlled and natural conditions.
The results demonstrate that the high-level vector features capture the influence of the host-stress interaction/response, and that the proposed model achieves competitive advantages in detecting and discriminating yellow rust and nitrogen deficiency from Sentinel-2 time series in terms of classification accuracy, robustness, and generalization.
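The core idea of a fast Fourier convolutional block, replacing sliding-window convolution with pointwise multiplication of spectra, can be sketched in a few lines of NumPy. This is an illustrative sketch only: the actual block also learns spectral weights and feeds the capsule encoder, none of which appears here, and the function name `fft_conv2d` is ours.

```python
import numpy as np

def fft_conv2d(x, kernel):
    """Circular 2-D convolution computed in the Fourier domain.

    Pointwise multiplication of spectra replaces the sliding-window
    product, giving a global receptive field at O(N log N) cost.
    The kernel is zero-padded to the input size before transforming.
    """
    kf = np.fft.fft2(kernel, s=x.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(x) * kf))
```

By the convolution theorem the result equals a circular convolution of the input with the kernel, computed via two FFTs instead of an explicit spatial sweep.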

2.
Front Plant Sci ; 14: 1230886, 2023.
Article in English | MEDLINE | ID: mdl-37621882

ABSTRACT

Pepper leaf disease identification based on convolutional neural networks (CNNs) is an active research area. However, most existing CNN-based pepper leaf disease detection models are suboptimal in terms of accuracy and computing performance. In particular, it is challenging to apply CNNs on embedded portable devices, due to the large amount of computation and memory required for leaf disease recognition in large fields. Therefore, this paper introduces an enhanced lightweight model based on the GoogLeNet architecture. The initial step involves compressing the Inception structure to reduce model parameters, leading to a remarkable enhancement in recognition speed. Furthermore, the network incorporates a spatial pyramid pooling structure to seamlessly integrate local and global features. The proposed improved model has been trained on a real dataset of 9183 images covering 6 types of pepper diseases. The cross-validation results show that the model accuracy is 97.87%, which is 6% higher than that of GoogLeNet based on Inception-V1 and Inception-V3. The memory requirement of the model is only 10.3 MB, a reduction of 52.31%-86.69% compared to GoogLeNet. We have also compared the model with existing CNN-based models, including AlexNet, ResNet-50 and MobileNet-V2. The results show that the average inference time of the proposed model decreases by 61.49%, 41.78% and 23.81%, respectively. These results show that the proposed enhanced model significantly improves performance in terms of accuracy and computing efficiency, and has the potential to improve productivity in the pepper farming industry.
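The spatial pyramid pooling structure mentioned above can be illustrated with a small NumPy sketch: the feature map is max-pooled over progressively finer grids and the bin maxima are concatenated, so inputs of different spatial size always yield the same output length. The pyramid levels chosen here are illustrative, not those of the paper.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool one feature map over 1x1, 2x2 and 4x4 grids, concatenated.

    The output length depends only on `levels` (sum of n*n bins), which is
    what lets a network fuse local and global features while accepting
    variable-size inputs.
    """
    h, w = fmap.shape
    out = []
    for n in levels:
        ys = np.linspace(0, h, n + 1, dtype=int)  # bin edges along rows
        xs = np.linspace(0, w, n + 1, dtype=int)  # bin edges along cols
        for i in range(n):
            for j in range(n):
                out.append(fmap[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max())
    return np.array(out)
```

With levels (1, 2, 4) every input maps to a 1 + 4 + 16 = 21-element vector, the first element being the global maximum.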

3.
IEEE J Biomed Health Inform ; 27(2): 980-991, 2023 02.
Article in English | MEDLINE | ID: mdl-36350854

ABSTRACT

Accurate and rapid detection of COVID-19 pneumonia is crucial for optimal patient treatment. Chest X-Ray (CXR) is the first-line imaging technique for COVID-19 pneumonia diagnosis as it is fast, cheap and easily accessible. Currently, many deep learning (DL) models have been proposed to detect COVID-19 pneumonia from CXR images. Unfortunately, these deep classifiers lack transparency in interpreting their findings, which may limit their application in clinical practice. Existing explanation methods produce results that are either too noisy or too imprecise, and hence are unsuitable for diagnostic purposes. In this work, we propose a novel explainable CXR deep neural Network (CXR-Net) for accurate COVID-19 pneumonia detection with enhanced pixel-level visual explanation from CXR images. An Encoder-Decoder-Encoder architecture is proposed, in which an extra encoder is added after the encoder-decoder structure to ensure the model can be trained on category samples. The method has been evaluated on real-world CXR datasets from both public and private sources, including healthy, bacterial pneumonia, viral pneumonia and COVID-19 pneumonia cases. The results demonstrate that the proposed method achieves satisfactory accuracy and provides fine-resolution activation maps for visual explanation in lung disease detection. Compared to current state-of-the-art visual explanation methods, the proposed method provides more detailed, high-resolution visual explanations for the classification results. It can be deployed in various computing environments, including cloud, CPU and GPU environments, and has great potential to be used in clinical practice for COVID-19 pneumonia diagnosis.


Subject(s)
COVID-19 , Deep Learning , Pneumonia, Viral , Humans , COVID-19/diagnostic imaging , X-Rays , Thorax/diagnostic imaging , Pneumonia, Viral/diagnostic imaging , COVID-19 Testing
4.
IEEE J Biomed Health Inform ; 26(11): 5289-5297, 2022 11.
Article in English | MEDLINE | ID: mdl-33735087

ABSTRACT

Computer-aided early diagnosis of Alzheimer's disease (AD) and its prodromal form, mild cognitive impairment (MCI), based on structural Magnetic Resonance Imaging (sMRI) provides a cost-effective and objective way to support early prevention and treatment of disease progression, leading to improved patient care. In this work, we propose a novel computer-aided approach for early diagnosis of AD by introducing an explainable 3D Residual Attention Deep Neural Network (3D ResAttNet) for end-to-end learning from sMRI scans. Different from existing approaches, the novelty of our approach is three-fold: 1) a Residual Self-Attention Deep Neural Network is proposed to capture local, global and spatial information from MR images to improve diagnostic performance; 2) an explainable method using Gradient-based Localization Class Activation mapping (Grad-CAM) is introduced to improve the interpretability of the proposed method; 3) this work provides a full end-to-end learning solution for automated disease diagnosis. Our proposed 3D ResAttNet method has been evaluated on a large cohort of subjects from real datasets for two challenging classification tasks (i.e. Alzheimer's disease (AD) vs. Normal cohort (NC), and progressive MCI (pMCI) vs. stable MCI (sMCI)). The experimental results show that the proposed approach has a competitive advantage over state-of-the-art models in terms of accuracy and generalizability. The explainable mechanism in our approach is able to identify and highlight the contribution of important brain regions (e.g. hippocampus, lateral ventricle and most parts of the cortex) to transparent decisions.
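Grad-CAM, the explanation method adopted in point 2, computes a class activation map from a convolutional layer's activations and the gradients of the class score with respect to them. A minimal 2-D NumPy sketch follows; the paper applies the same idea to 3-D sMRI feature volumes, and the inputs here would come from a trained network rather than being supplied directly.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one conv layer.

    activations: (C, H, W) feature maps; gradients: (C, H, W) gradients of
    the class score w.r.t. those maps. Channel weights are the spatially
    averaged gradients; the map is the ReLU of the weighted activation sum.
    """
    weights = gradients.mean(axis=(1, 2))             # (C,) per-channel weight
    cam = np.tensordot(weights, activations, axes=1)  # (H, W) weighted sum
    return np.maximum(cam, 0.0)                       # keep positive evidence
```

The ReLU keeps only regions that positively support the predicted class, which is why Grad-CAM maps highlight disease-relevant structures rather than everything the network looked at.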


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Humans , Alzheimer Disease/diagnostic imaging , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Cognitive Dysfunction/diagnostic imaging , Disease Progression , Atrophy , Attention
5.
PLoS One ; 16(7): e0254763, 2021.
Article in English | MEDLINE | ID: mdl-34320001

ABSTRACT

Understanding the processes by which the mammalian embryo implants in the maternal uterus is a long-standing challenge in embryology. New insights into this morphogenetic event could be of great importance in helping, for example, to reduce human infertility. During implantation the blastocyst, composed of epiblast, trophectoderm and primitive endoderm, undergoes significant remodelling from an oval ball to an egg cylinder. A main feature of this transformation is symmetry breaking and reshaping of the epiblast into a "cup". Based on previous studies, we hypothesise that this event is the result of mechanical constraints originating from the trophectoderm, which is also significantly transformed during this process. To investigate this hypothesis, we propose MG# (MechanoGenetic Sharp), an original computational model of biomechanics able to reproduce key cell-shape changes and tissue-level behaviours in silico. With this model, we simulate epiblast and trophectoderm morphogenesis during implantation. First, our results uphold experimental findings that repulsion at the apical surface of the epiblast is essential to drive lumenogenesis. Then, we provide new theoretical evidence that trophectoderm morphogenesis can indeed dictate the cup shape of the epiblast and foster its movement towards the uterine tissue. Our results offer novel mechanical insights into mouse peri-implantation and highlight the usefulness of agent-based modelling methods in the study of embryogenesis.
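The apical repulsion that drives lumenogenesis can be caricatured with a simple agent-based rule: vertices closer than a threshold push each other apart. The toy step below only illustrates the style of local mechanical rule used in agent-based models such as MG#; the function name, parameters and values are ours, not the paper's.

```python
import numpy as np

def apical_repulsion_step(points, r_min=1.0, k=0.1):
    """One explicit step of pairwise repulsion between apical vertices.

    points: (N, d) vertex positions. Any pair closer than r_min receives
    equal and opposite displacements along their separation vector,
    proportional to the overlap (r_min - dist). Iterating such steps
    opens a gap (a lumen) between apposed surfaces.
    """
    disp = np.zeros_like(points)
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = points[i] - points[j]
            dist = np.linalg.norm(d)
            if 0 < dist < r_min:
                push = k * (r_min - dist) * d / dist
                disp[i] += push
                disp[j] -= push
    return points + disp
```

Because the pushes are equal and opposite, total momentum is conserved; only the pair separation grows.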


Subject(s)
Endoderm/cytology , Germ Layers/cytology , Models, Biological , Animals , Cell Proliferation , Embryo Implantation , Embryo, Mammalian/cytology , Embryo, Mammalian/metabolism , Embryonic Development , Endoderm/metabolism , Germ Layers/metabolism , Mice
6.
IEEE Trans Med Imaging ; 40(9): 2354-2366, 2021 09.
Article in English | MEDLINE | ID: mdl-33939609

ABSTRACT

Structural magnetic resonance imaging (sMRI) is widely used for brain neurological disease diagnosis, as it can reflect structural variations of the brain. However, due to local brain atrophy, only a few regions in sMRI scans show obvious structural changes, which are highly correlated with pathological features. Hence, the key challenge of sMRI-based brain disease diagnosis is to enhance the identification of discriminative features. To address this issue, we propose a dual attention multi-instance deep learning network (DA-MIDL) for the early diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Specifically, DA-MIDL consists of three primary components: 1) Patch-Nets with spatial attention blocks for extracting discriminative features within each sMRI patch whilst enhancing the features of abnormally changed micro-structures in the cerebrum; 2) an attention multi-instance learning (MIL) pooling operation for balancing the relative contribution of each patch and yielding a differently weighted global representation of the whole brain structure; and 3) an attention-aware global classifier for further learning the integrated features and making AD-related classification decisions. Our proposed DA-MIDL model is evaluated on the baseline sMRI scans of 1689 subjects from two independent datasets (i.e., ADNI and AIBL). The experimental results show that our DA-MIDL model can identify discriminative pathological locations and achieve better classification performance in terms of accuracy and generalizability, compared with several state-of-the-art methods.
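Attention-based MIL pooling (component 2) can be written compactly: each patch gets a score, the scores pass through a softmax, and the resulting weights combine the patch features into one bag-level vector, so discriminative patches dominate. In this sketch the scoring vector `w` is a stand-in for DA-MIDL's learned attention network.

```python
import numpy as np

def attention_mil_pool(patch_feats, w):
    """Attention-based multi-instance pooling over patch features.

    patch_feats: (P, D), one feature vector per sMRI patch; w: (D,)
    scoring vector. Softmax-normalised scores weight each patch's
    contribution to the bag-level (whole-brain) representation.
    """
    scores = patch_feats @ w
    a = np.exp(scores - scores.max())  # stable softmax
    a /= a.sum()
    return a @ patch_feats             # (D,) weighted bag representation
```

Unlike plain max- or mean-pooling, the weights are data-dependent, which is what lets the model both balance patch contributions and expose which patches drove the decision.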


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Deep Learning , Alzheimer Disease/diagnostic imaging , Attention , Brain/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Humans , Magnetic Resonance Imaging
7.
J Med Syst ; 42(1): 20, 2017 Dec 07.
Article in English | MEDLINE | ID: mdl-29218460

ABSTRACT

This paper proposes a novel Adaptive Region-based Edge Smoothing Model (ARESM) for automatic boundary detection of the optic disc and cup to aid automatic glaucoma diagnosis. The novelty of our approach consists of two aspects: 1) automatic detection of the initial optimum object boundary based on a Region Classification Model (RCM) in a pixel-level multidimensional feature space; 2) an Adaptive Edge Smoothing Update model (AESU) of contour points (e.g. misclassified or irregular points) based on iterative force-field calculations with contours obtained from the RCM by minimising an energy function (an approach that does not require predefined geometric templates to guide auto-segmentation). Such an approach provides robustness in capturing a range of variations and shapes. We have conducted a comprehensive comparison between our approach and state-of-the-art deformable models, and validated it on publicly available datasets. The experimental evaluation shows that the proposed approach significantly outperforms existing methods. The generality of the proposed approach will enable the segmentation and detection of other object boundaries and provide added value in the field of medical image processing and analysis.


Subject(s)
Glaucoma/diagnosis , Image Processing, Computer-Assisted/methods , Machine Learning , Optic Disk/diagnostic imaging , Pattern Recognition, Automated/methods , Algorithms , Humans
8.
Med Image Anal ; 39: 87-100, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28458088

ABSTRACT

This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated on 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance between landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling alone (TRE = 3.81 mm) or with intensity-based image registration without specific consideration of sliding motion (TRE = 4.57 mm). The proposed method achieved accuracy comparable to several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison of the TRE distributions with three non-rigid intensity-based algorithms showed that the proposed method performed especially well in estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson's ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm for optimising biomechanical models of the lungs, by analysing the pattern of displacement compensation from the image registration process, has also been demonstrated.
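The evaluation metric reported above, target registration error, is simply the Euclidean distance between corresponding landmark pairs after registration, averaged over the landmark set:

```python
import numpy as np

def target_registration_error(landmarks_ref, landmarks_warped):
    """Mean TRE between corresponding landmark pairs.

    Both arrays are (N, 3) point sets (e.g. in mm): reference landmark
    positions and the same landmarks after the estimated deformation.
    Returns the mean Euclidean distance over all pairs.
    """
    return np.linalg.norm(landmarks_ref - landmarks_warped, axis=1).mean()
```

Per-pair distances (rather than the mean) give the TRE distributions compared across algorithms in the paper.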


Subject(s)
Algorithms , Four-Dimensional Computed Tomography/methods , Lung/diagnostic imaging , Motion , Humans
9.
J Med Syst ; 40(6): 132, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27086033

ABSTRACT

Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma, but detection at its earliest stage and subsequent treatment can help patients avoid blindness. Currently, optic disc and retinal imaging facilitate glaucoma detection, but this method requires manual post-imaging modifications that are time-consuming and subjective, depending on image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we propose a novel computer-aided approach for automatic glaucoma detection based on a Regional Image Features Model (RIFM), which can automatically classify normal and glaucoma images on the basis of regional information. Different from existing methods, our approach can extract both geometric properties (e.g. morphometric properties) and non-geometric properties (e.g. pixel appearance/intensity values, texture) from images, significantly increasing classification performance. Our proposed approach makes three major contributions: automatic localisation of the optic disc, automatic segmentation of the disc, and classification between normal and glaucoma images based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning Laser Ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4%, and the accuracy of detecting suspected glaucoma in SLO images is 93.9%.
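The mix of geometric and non-geometric regional properties can be illustrated with a toy feature extractor for a single region: area and a simple perimeter estimate (geometric) alongside intensity statistics (non-geometric), concatenated into one vector. The specific descriptors and the function name are ours for illustration, not the RIFM feature set.

```python
import numpy as np

def regional_features(region_pixels, region_mask):
    """Concatenate geometric and non-geometric descriptors for one region.

    region_mask: boolean (H, W) support of the region; region_pixels:
    intensity values inside the mask. Geometric: area and a perimeter
    estimate (mask pixels with at least one off-mask 4-neighbour);
    non-geometric: intensity mean and standard deviation.
    """
    area = float(region_mask.sum())
    padded = np.pad(region_mask, 1)  # pad with False so edges count as border
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = float((region_mask & ~interior).sum())
    return np.array([area, perimeter,
                     region_pixels.mean(), region_pixels.std()])
```

A classifier trained on such mixed vectors sees both the shape of a region and its appearance, which is the combination the abstract credits for the accuracy gain.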


Subject(s)
Diagnosis, Computer-Assisted , Glaucoma/classification , Image Interpretation, Computer-Assisted/methods , Ophthalmoscopy/methods , Algorithms , Fundus Oculi , Glaucoma/diagnosis , Humans , Machine Learning
10.
Comput Math Methods Med ; 2015: 454076, 2015.
Article in English | MEDLINE | ID: mdl-26692046

ABSTRACT

Identification and detection of dendritic spines in neuron images are of high interest in the diagnosis and treatment of neurological and psychiatric disorders (e.g. Alzheimer's disease, Parkinson's disease, and autism). In this paper, we propose a novel automatic approach for dendritic spine identification using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN), involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on the wavelet transform and conditional symmetric analysis is developed to extract the backbone and locate the dendrite boundary. Then, the RMSNN is used to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against existing methods. The experimental results demonstrate that the proposed approach can accurately locate the dendrite and classify the spines into the three categories with an accuracy of 99.1% for "mushroom" spines, 97.6% for "stubby" spines, and 98.6% for "thin" spines.


Subject(s)
Dendritic Spines/ultrastructure , Neural Networks, Computer , Wavelet Analysis , Algorithms , Animals , Cells, Cultured , Computational Biology , Dendritic Spines/classification , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Imaging, Three-Dimensional/methods , Imaging, Three-Dimensional/statistics & numerical data , Microscopy, Confocal , Pattern Recognition, Automated/methods , Pattern Recognition, Automated/statistics & numerical data , Rats
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 4318-21, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26737250

ABSTRACT

Glaucoma is one of the leading causes of blindness, but detection at its earliest stage and subsequent treatment can help patients preserve their vision. Existing work has focused on global features, such as texture, grayscale and wavelet energy of the Optic Nerve Head (ONH) and its surroundings, to differentiate between normal and glaucoma images. In contrast to previous approaches, which focus on global information only, this work proposes a new approach to automatically classify normal and glaucoma images based on Regional Wavelet Features of the ONH and different regions around it. These regions are usually used for diagnosis of glaucoma by clinicians through visual observation only. Our method automatically determines different clinically observed regions around the ONH and performs classification on the basis of wavelet energy in different frequency subbands. We have conducted experiments based on global features and regional features respectively, applied to the RIM-ONE (An Open Retinal Image Database for Optic Nerve Evaluation) database of 158 images. The experimental evaluation demonstrates that classification of normal and glaucoma images using Regional Wavelet Features of different regions achieves 93% accuracy, outperforming all other feature sets.
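Wavelet energy per frequency subband, the descriptor behind the Regional Wavelet Features, can be sketched with the simplest case: an unnormalised one-level 2-D Haar transform of a region, followed by mean squared energy per subband. The paper's choice of wavelet filter and decomposition depth may differ; this only shows the kind of feature being computed.

```python
import numpy as np

def haar_subband_energies(img):
    """One-level unnormalised 2-D Haar transform with per-subband energy.

    img: (H, W) region with even dimensions. Row-wise sums/differences
    followed by column-wise sums/differences give the LL (approximation),
    LH, HL and HH (detail) subbands; the feature is each subband's mean
    squared coefficient.
    """
    a = img[0::2, :] + img[1::2, :]   # row-pair sums (low-pass)
    d = img[0::2, :] - img[1::2, :]   # row-pair differences (high-pass)
    ll, lh = a[:, 0::2] + a[:, 1::2], a[:, 0::2] - a[:, 1::2]
    hl, hh = d[:, 0::2] + d[:, 1::2], d[:, 0::2] - d[:, 1::2]
    return {name: float((sub ** 2).mean())
            for name, sub in [("LL", ll), ("LH", lh), ("HL", hl), ("HH", hh)]}
```

Computing these energies separately for each clinically observed region around the ONH, rather than once for the whole image, is exactly what turns a global wavelet feature into a regional one.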


Subject(s)
Glaucoma , Humans , Optic Disk , Optic Nerve
12.
IEEE J Biomed Health Inform ; 19(4): 1472-82, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25167560

ABSTRACT

Scanning laser ophthalmoscopes (SLOs) can be used for early detection of retinal diseases. With the advent of the latest screening technology, the advantage of the SLO is its wide field of view, which can image a large part of the retina for better diagnosis of retinal diseases. On the other hand, during the imaging process, artefacts such as eyelashes and eyelids are imaged along with the retinal area. This poses the challenge of how to exclude these artefacts. In this paper, we propose a novel approach to automatically extract the true retinal area from an SLO image using image processing and machine learning approaches. To reduce the complexity of the image processing tasks and provide a convenient primitive image pattern, we group pixels into regions, called superpixels, based on regional size and compactness. The framework then calculates image-based features reflecting textural and structural information and classifies retinal area versus artefacts. The experimental evaluation shows good performance, with an overall accuracy of 92%.
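Superpixel grouping of the kind described can be sketched as a small SLIC-style clustering of pixels on position and intensity around grid-seeded centres, with a weight trading spatial compactness against intensity similarity. This is a toy version under our own parameter names; real pipelines use the full SLIC algorithm with a local search window.

```python
import numpy as np

def simple_superpixels(img, grid=2, n_iter=5, m=1.0):
    """Tiny SLIC-style superpixel clustering on a grayscale image.

    Each pixel is described by (row, col, intensity); cluster centres are
    seeded on a grid x grid lattice and refined by alternating nearest-
    centre assignment and centre recomputation. m scales the spatial
    coordinates, i.e. larger m favours more compact regions.
    """
    h, w = img.shape
    ys = np.linspace(h / (2 * grid), h - h / (2 * grid), grid)
    xs = np.linspace(w / (2 * grid), w - w / (2 * grid), grid)
    centres = np.array([[y, x, float(img[int(y), int(x)])]
                        for y in ys for x in xs])
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    px = np.stack([yy.ravel(), xx.ravel(), img.ravel().astype(float)], axis=1)
    scale = np.array([m, m, 1.0])  # spatial vs intensity weighting
    for _ in range(n_iter):
        d = np.linalg.norm((px[:, None, :] - centres[None]) * scale, axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centres)):       # recompute cluster centres
            if np.any(labels == k):
                centres[k] = px[labels == k].mean(axis=0)
    return labels.reshape(h, w)
```

Each superpixel then becomes one unit for feature extraction and retina-versus-artefact classification, which is what reduces the complexity of the downstream pipeline.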


Subject(s)
Image Interpretation, Computer-Assisted/methods , Ophthalmoscopy/methods , Retinal Diseases/diagnosis , Algorithms , Artifacts , Humans , Retina/pathology
13.
Comput Med Imaging Graph ; 37(7-8): 581-96, 2013.
Article in English | MEDLINE | ID: mdl-24139134

ABSTRACT

Glaucoma is a group of eye diseases that share common traits such as high eye pressure, damage to the Optic Nerve Head (ONH) and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of pre-diagnosis of Glaucoma include measurement of Intra-Ocular Pressure (IOP) using tonometry, pachymetry and gonioscopy, which are performed manually by clinicians. These tests are usually followed by examination of the ONH appearance for a confirmed diagnosis of Glaucoma. The diagnosis requires regular monitoring, which is costly and time-consuming, and its accuracy and reliability are limited by the domain knowledge of different ophthalmologists. Therefore, automatic diagnosis of Glaucoma has attracted much attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of Glaucoma. We have conducted a critical evaluation of existing automatic extraction methods based on features including Optic Cup to Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), Neuroretinal Rim Notching and Vasculature Shift, adding value to efficient feature extraction related to Glaucoma diagnosis.


Subject(s)
Algorithms , Artificial Intelligence , Colorimetry/methods , Glaucoma/pathology , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Retinoscopy/methods , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity , Subtraction Technique
14.
Philos Trans A Math Phys Eng Sci ; 369(1949): 3268-84, 2011 Aug 28.
Article in English | MEDLINE | ID: mdl-21768139

ABSTRACT

The performance database (PDB) stores performance-related data gathered during workflow enactment. We argue that, by carefully understanding and manipulating these data, we can improve efficiency when enacting workflows. This paper describes the rationale behind the PDB and proposes a systematic way to implement it. The prototype is built as part of the Advanced Data Mining and Integration Research for Europe project. We use workflows from real-world experiments to demonstrate the usage of the PDB.

15.
Bioinformatics ; 27(8): 1101-7, 2011 Apr 15.
Article in English | MEDLINE | ID: mdl-21357576

ABSTRACT

MOTIVATION: Deciphering the regulatory and developmental mechanisms of multicellular organisms requires detailed knowledge of gene interactions and gene expression. The availability of large datasets with both spatial and ontological annotation of the spatio-temporal patterns of gene expression in the mouse embryo provides a powerful resource for discovering the biological function of embryo organization. Ontological annotation of gene expression consists of labelling images with terms from the anatomy ontology for mouse development: if the genes of an anatomical component are expressed in an image, the image is tagged with the term for that anatomical component. The current annotation is done manually by domain experts, which is both time-consuming and costly. In addition, the level of detail is variable, and errors inevitably arise from the tedious nature of the task. In this article, we present a new method to automatically identify and annotate gene expression patterns in the mouse embryo with anatomical terms. RESULTS: The method takes images from in situ hybridization studies and the ontology for the developing mouse embryo, and then combines machine learning and image processing techniques to produce classifiers that automatically identify and annotate gene expression patterns in these images. We evaluate our method on image data from the EURExpress study, using it to automatically classify nine anatomical terms: humerus, handplate, fibula, tibia, femur, ribs, petrous part, scapula and head mesenchyme. The accuracy of our method lies between 70% and 80%, with few exceptions. We show that other known methods achieve lower classification performance than ours. We have investigated the images misclassified by our method and found several cases where the original annotation was incorrect, which shows our method is robust against this kind of noise.
AVAILABILITY: The annotation result and the experimental dataset in the article can be freely accessed at http://www2.docm.mmu.ac.uk/STAFF/L.Han/geneannotation/. CONTACT: l.han@mmu.ac.uk SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Embryo, Mammalian/metabolism , Gene Expression Regulation, Developmental , Image Processing, Computer-Assisted/methods , In Situ Hybridization , RNA/analysis , Animals , Artificial Intelligence , Embryo, Mammalian/anatomy & histology , Gene Expression , Mice , RNA/metabolism