1.
Sensors (Basel) ; 23(13)2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37447843

ABSTRACT

Western corn rootworm (WCR) is one of the most devastating corn rootworm species in North America because of its ability to cause severe production loss and grain quality damage. To control the loss, it is important to identify WCR infestation at an early stage. Because the root system is the earliest feeding source of WCR larvae, assessing the direct damage to the root system is crucial for early detection. Most current methods still necessitate uprooting the entire plant, which causes permanent destruction and a loss of the root's original structural information. To measure the root damage caused by WCR non-destructively, this study used MISIRoot, a minimally invasive, in situ automatic plant root phenotyping robot, to collect not only high-resolution images but also the 3D positions of roots without uprooting. To identify roots in the images and to study how the damage was distributed across different root types, a deep convolutional neural network model was trained to differentiate relatively thick and thin roots. In addition, a color camera was used to capture above-ground morphological features such as leaf color, plant height, and side-view leaf area. To check whether the plant shoot showed any visible symptoms in the inoculated group compared to the control group, several vegetation indices were calculated from the RGB color, and the shoot morphological features were fed into a PLS-DA model to differentiate the two groups. Results showed that none of the above-ground features or models yielded a statistically significant difference between the two groups at the 95% confidence level. In contrast, many of the root structural features measured with MISIRoot successfully differentiated the two groups, with the smallest t-test p-value being 1.5791 × 10⁻⁶. These outcomes are solid evidence of the effectiveness of MISIRoot as a potential solution for identifying WCR infestations before the plant shoot shows significant symptoms.
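A minimal sketch of the per-feature significance test the study reports (Welch's t-test on a root structural feature, control vs. inoculated). The feature values below are placeholders, not the paper's measurements:

```python
# Hypothetical example: compare one root structural feature (e.g., thin-root
# count per plant) between control and inoculated groups with Welch's t-test.
import numpy as np
from scipy import stats

control = np.array([42, 39, 45, 41, 44, 40])      # placeholder measurements
inoculated = np.array([28, 31, 26, 30, 27, 29])   # placeholder measurements

t_stat, p_value = stats.ttest_ind(control, inoculated, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4g}")     # significant if p < 0.05
```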


Subject(s)
Coleoptera , Robotics , Animals , Zea mays , Plant Roots/chemistry , Larva
2.
J Digit Imaging ; 36(5): 2227-2248, 2023 10.
Article in English | MEDLINE | ID: mdl-37407845

ABSTRACT

Cancerous skin lesions are among the deadliest diseases because of their ability to spread to other body parts and organs. Conventionally, visual inspection and biopsy are widely used to detect skin cancers; however, these methods have drawbacks and their predictions are not highly accurate. This is where a dependable automatic recognition system for skin cancers comes into play. With the extensive use of deep learning in many aspects of medical health, a novel computer-aided dermatology tool is suggested for the accurate identification and classification of skin lesions, deploying a novel deep convolutional neural network (DCNN) model that incorporates global average pooling along with preprocessing to discern the lesions. The proposed model is trained and tested on the HAM10000 dataset, which contains seven classes of skin lesions as target classes. Black-hat filtering is applied in the preprocessing stage to remove artifacts, along with resampling techniques to balance the data. The performance of the proposed model is evaluated by comparing it with transfer learning models such as ResNet50, VGG-16, MobileNetV2, and DenseNet121. The proposed model achieves an accuracy of 97.20%, the highest among previous state-of-the-art models for multi-class skin lesion classification. The efficacy of the proposed model is also validated by visualizing the results in a graphical user interface (GUI).
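A minimal sketch of the architectural idea named in the abstract: a CNN that ends in global average pooling rather than large dense layers, assuming the seven HAM10000 classes. Layer sizes are illustrative, not the paper's exact model:

```python
# Sketch of a classifier head using global average pooling (GAP).
import torch
import torch.nn as nn

class GapCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.fc(x)

logits = GapCNN()(torch.randn(4, 3, 224, 224))  # -> shape (4, 7)
```

GAP keeps the head small and makes the network less sensitive to input resolution, which is one common reason to prefer it over flattening.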


Subject(s)
Deep Learning , Skin Diseases , Skin Neoplasms , Humans , Skin Diseases/diagnostic imaging , Skin/diagnostic imaging , Skin/pathology , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Neural Networks, Computer
3.
Entropy (Basel) ; 25(10)2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37895539

ABSTRACT

Deep convolutional neural networks have proven their power in many computer vision tasks thanks to their strong capacity to learn from data. In this paper, we propose a novel end-to-end denoising network, termed the Fourier-embedded U-shaped network (FEUSNet). By analyzing the amplitude and phase spectra of the Fourier coefficients, we find that an image's low-frequency features reside in the former while noise features reside in the latter. To make full use of this characteristic, Fourier features are learned and concatenated as a prior module embedded into a U-shaped network to reduce noise while preserving multi-scale fine details. In the experiments, we first present ablation studies on the Fourier-coefficient learning networks and the loss function. Then, we compare the proposed FEUSNet with state-of-the-art denoising methods both quantitatively and qualitatively. The experimental results show that FEUSNet suppresses noise well and preserves pleasing multi-scale structures, outperforming advanced denoising approaches.
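The amplitude/phase decomposition the abstract builds on can be written in a few lines of NumPy; this is only the Fourier prior, not the network itself:

```python
# Split an image into amplitude and phase spectra and verify the image is
# exactly recoverable from the pair.
import numpy as np

image = np.random.rand(64, 64)        # placeholder grayscale image
coeffs = np.fft.fft2(image)

amplitude = np.abs(coeffs)            # amplitude spectrum
phase = np.angle(coeffs)              # phase spectrum

recon = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
assert np.allclose(recon, image)      # lossless decomposition
```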

4.
J Digit Imaging ; 35(2): 258-280, 2022 04.
Article in English | MEDLINE | ID: mdl-35018536

ABSTRACT

Skin cancer is the most common type of cancer affecting humans and is usually diagnosed by initial clinical screening followed by dermoscopic analysis. Automated classification of skin lesions is still a challenging task because of the high visual similarity between melanoma and benign lesions. This paper proposes a new residual deep convolutional neural network (RDCNN) for skin lesion diagnosis. The proposed network is trained and tested on six well-known skin cancer datasets: PH2, DermIS and Quest, MED-NODE, ISIC2016, ISIC2017, and ISIC2018. Three experiments are carried out to measure the performance of the proposed RDCNN. In the first experiment, the RDCNN is trained and tested on the original dataset images without any pre-processing or segmentation. In the second experiment, it is tested on segmented images. Finally, the model trained in the second experiment is saved and reused in the third experiment as a pre-trained model, then trained again on a different dataset. The proposed RDCNN shows significantly high performance and outperforms existing deep convolutional networks.
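A hedged sketch of the third experiment's reuse step in PyTorch, saving trained weights and reloading them as initialization before training on a new dataset; `TinyRDCNN` is a stand-in, not the paper's architecture:

```python
# Save weights after one training run, reload them as a pre-trained start.
import torch
import torch.nn as nn

class TinyRDCNN(nn.Module):                       # stand-in model
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 2))
    def forward(self, x):
        return self.net(x)

model = TinyRDCNN()
torch.save(model.state_dict(), "rdcnn_exp2.pt")   # after experiment 2

pretrained = TinyRDCNN()                          # experiment 3: reload
pretrained.load_state_dict(torch.load("rdcnn_exp2.pt"))
# ...then continue training `pretrained` on the new dataset
```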


Subject(s)
Melanoma , Skin Diseases , Skin Neoplasms , Dermoscopy , Disease Progression , Humans , Melanoma/diagnostic imaging , Neural Networks, Computer , Skin Diseases/diagnostic imaging , Skin Neoplasms/diagnostic imaging
5.
Zhongguo Yi Liao Qi Xie Za Zhi ; 46(3): 242-247, 2022 May 30.
Article in Chinese | MEDLINE | ID: mdl-35678429

ABSTRACT

Premature delivery is one of the direct factors affecting the early development and safety of infants. Its direct clinical manifestation is a change in uterine contraction intensity and frequency. The uterine electrohysterography (EHG) signal collected from the abdomen of pregnant women can accurately and effectively reflect uterine contractions and has higher clinical application value than invasive monitoring technologies such as the intrauterine pressure catheter. Therefore, research on EHG-based preterm birth recognition algorithms is particularly important for perinatal fetal monitoring. We proposed a convolutional neural network (CNN)-based preterm birth recognition algorithm for EHG signals, constructing a deep CNN model that combines the Gramian angular difference field (GADF) with transfer learning. The structure of the model was optimized on a clinically measured term-preterm EHG database, achieving a classification accuracy of 94.38% and an F1 score of 97.11%. The experimental results showed that the proposed model has auxiliary diagnostic value for the clinical prediction of premature delivery.
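The GADF transform that turns a 1-D EHG segment into a 2-D image for the CNN can be sketched directly; the signal below is synthetic:

```python
# Gramian angular difference field: GADF[i, j] = sin(phi_i - phi_j), where
# phi = arccos of the series rescaled to [-1, 1].
import numpy as np

def gadf(x: np.ndarray) -> np.ndarray:
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    sin_phi = np.sqrt(np.clip(1 - x**2, 0, 1))
    # sin(phi_i - phi_j) = sin_phi[i] * x[j] - x[i] * sin_phi[j]
    return np.outer(sin_phi, x) - np.outer(x, sin_phi)

signal = np.sin(np.linspace(0, 8 * np.pi, 128))   # placeholder EHG segment
image = gadf(signal)                               # shape (128, 128)
```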


Subject(s)
Premature Birth , Algorithms , Electromyography , Female , Humans , Infant, Newborn , Neural Networks, Computer , Pregnancy , Premature Birth/diagnosis , Uterine Contraction
6.
Sensors (Basel) ; 21(16)2021 Aug 23.
Article in English | MEDLINE | ID: mdl-34451108

ABSTRACT

Defective PV panels reduce the efficiency of the whole PV string, causing loss of investment by decreasing its efficiency and lifetime. In this study, an isolated convolutional neural model (ICNM) was first trained from scratch to classify infrared images of PV panels by health state, i.e., healthy, hotspot, and faulty. The ICNM occupies the least memory and has the simplest architecture, the lowest execution time, and an accuracy of 96% compared with transfer-learned pre-trained ShuffleNet, GoogleNet, and SqueezeNet models. Afterward, given these advantages, the ICNM is reused through transfer learning to classify PV panel defects into five classes, i.e., bird drop, single, patchwork, horizontally aligned string, and block, with 97.62% testing accuracy. The proposed approach can identify and classify PV panels by health state and defect type quickly and with high accuracy while occupying the least system memory, resulting in savings on PV investment.


Subject(s)
Diagnostic Imaging
7.
Sensors (Basel) ; 21(23)2021 Dec 06.
Article in English | MEDLINE | ID: mdl-34884166

ABSTRACT

(1) Background: Contact Endoscopy (CE) and Narrow Band Imaging (NBI) are optical imaging modalities that can provide enhanced and magnified visualization of the superficial vascular networks in the laryngeal mucosa. The similarity of vascular structures between benign and malignant lesions makes the visual assessment of CE-NBI images challenging. The main objective of this study is to use Deep Convolutional Neural Networks (DCNNs) for the automatic classification of CE-NBI images into benign and malignant groups with minimal human intervention. (2) Methods: A pretrained ResNet50 model combined with the cut-off-layer technique was selected as the DCNN architecture. A dataset of 8181 CE-NBI images was used during the fine-tuning process in three experiments in which several models were generated and validated. Accuracy, sensitivity, and specificity were calculated as the performance metrics in each validation and testing scenario. (3) Results: Of 72 trained and tested models across all experiments, Model 5 showed high performance. This model is considerably smaller than the full ResNet50 architecture and achieved a testing accuracy of 0.835 on unseen data in the last experiment. (4) Conclusion: The proposed fine-tuned ResNet50 model classified CE-NBI images into benign and malignant groups with high performance and has the potential to be part of an assisted system for automatic laryngeal cancer detection.
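A hedged sketch of a "cut-off-layer" ResNet50: keep only the early stages of the pretrained backbone and attach a small binary head. Where to cut and the head design are assumptions, not the paper's exact configuration:

```python
# Truncate a pretrained ResNet50 after layer2 and add a 2-class head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
trunk = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                      backbone.maxpool, backbone.layer1, backbone.layer2)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(512, 2))          # layer2 outputs 512 channels

model = nn.Sequential(trunk, head)
logits = model(torch.randn(1, 3, 224, 224))      # -> shape (1, 2)
```

Cutting the network this way yields a model far smaller than the full ResNet50, consistent with the abstract's description of Model 5.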


Subject(s)
Laryngeal Neoplasms , Larynx , Endoscopy , Humans , Laryngeal Neoplasms/diagnostic imaging , Narrow Band Imaging , Neural Networks, Computer
8.
Sensors (Basel) ; 21(7)2021 Apr 02.
Article in English | MEDLINE | ID: mdl-33918521

ABSTRACT

This paper is concerned with the auto-focus of microscopes for the surface structure of transparent materials under transmission illumination, where two distinct focus states appear during focusing and the focus position lies between the two states at a local minimum of sharpness. Because most existing results are derived for a single focus state with the global maximum of sharpness, they cannot provide a feasible solution to this particular problem. In this paper, an auto-focus method is developed for this specific two-focus-state situation. First, a focus state recognition model, which is essentially an image classification model based on a deep convolutional neural network, is established to identify the focus states of the microscopy system. Then, an endpoint search algorithm, an evolutionary algorithm based on differential evolution, is designed to obtain the positions of the two endpoints of the region containing the true focus position by updating its parameters according to the recognized focus states. Finally, a region search algorithm is devised to locate the focus position. Experimental results show that our method achieves auto-focus rapidly and accurately in this specific situation with two focus states.
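A sketch of the key observation: with two focus states, the true focus sits at a local minimum of sharpness between the two peaks. The sharpness metric here (variance of the Laplacian) and the synthetic curve are illustrative assumptions:

```python
# Find the sharpness local minimum between two focus-state peaks.
import numpy as np
from scipy import ndimage
from scipy.signal import find_peaks

def sharpness(img: np.ndarray) -> float:
    # per-frame sharpness score: variance of the Laplacian
    return float(ndimage.laplace(img.astype(float)).var())

score = sharpness(np.random.rand(64, 64))   # score one captured frame

# synthetic sharpness-vs-position curve with two peaks (the two focus states)
z = np.linspace(-1, 1, 201)
curve = np.exp(-((z + 0.4) / 0.1) ** 2) + np.exp(-((z - 0.4) / 0.1) ** 2)

peaks, _ = find_peaks(curve)                # the two focus-state endpoints
lo, hi = peaks[0], peaks[-1]
focus_idx = lo + np.argmin(curve[lo:hi + 1])
print(f"estimated focus position z = {z[focus_idx]:.3f}")   # ~0.0
```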

9.
Sensors (Basel) ; 20(6)2020 Mar 12.
Article in English | MEDLINE | ID: mdl-32178463

ABSTRACT

Image classification is a fundamental task in remote sensing image processing. In recent years, deep convolutional neural networks (DCNNs) have achieved significant breakthroughs in natural image recognition. The remote sensing field, however, still lacks a large-scale benchmark similar to ImageNet. In this paper, we propose a remote sensing image classification benchmark (RSI-CB) based on massive, scalable, and diverse crowdsourced data. Using crowdsourced data such as Open Street Map (OSM), ground objects in remote sensing images can be annotated effectively using points of interest, vector data from OSM, or other crowdsourced data, and the annotated images can then be used in remote sensing image classification tasks. Based on this method, we construct a worldwide large-scale benchmark for remote sensing image classification with broad geographical coverage and a large number of images: it contains six categories with 35 sub-classes and more than 24,000 images of 256 × 256 pixels. The classification system of ground objects is defined according to China's national standard of land-use classification and is inspired by the hierarchy mechanism of ImageNet. Finally, we conduct numerous experiments comparing RSI-CB with the SAT-4, SAT-6, and UC-Merced datasets. The experiments show that RSI-CB is more suitable as a benchmark for remote sensing image classification than the other benchmarks in the big data era and has many potential applications.

10.
Sensors (Basel) ; 19(5)2019 Mar 07.
Article in English | MEDLINE | ID: mdl-30866539

ABSTRACT

This paper presents a novel approach for semantic segmentation of building roofs in dense urban environments with a deep convolutional neural network (DCNN) using Chinese very high resolution (VHR) satellite (i.e., GF2) imagery. To provide an operational end-to-end approach for accurately mapping building roofs with feature extraction and image segmentation, a fully convolutional DCNN with both convolutional and deconvolutional layers is designed to perform building roof segmentation. We selected typical cities with dense and diverse urban environments in different metropolitan regions of China as study areas and collected sample images over those cities. High-performance GPU-mounted workstations were employed for model training and optimization. With the building roof samples collected over different cities, a predictive model with convolutional layers was developed for building roof segmentation. Validation shows that the overall accuracy (OA) and mean intersection over union (mIoU) of the DCNN-based semantic segmentation results are 94.67% and 0.85, respectively, and the CRF-refined segmentation results achieve an OA of 94.69% and an mIoU of 0.83. The results suggest that the proposed approach is a promising solution for building roof mapping with VHR images over large areas in dense urban environments with different building patterns. With the operational acquisition of GF2 VHR imagery, an automated pipeline for operational built-up area monitoring can be developed, and timely updates of building roof maps could be applied to urban management and to assessing human settlement-related sustainable development goals over large areas.
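The two reported metrics, OA and mIoU, follow directly from a confusion matrix; a minimal sketch with placeholder labels (0 = background, 1 = roof):

```python
# Compute overall accuracy and mean IoU from predicted and true label maps.
import numpy as np

def oa_miou(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int = 2):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)  # confusion matrix
    oa = np.trace(cm) / cm.sum()
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    return oa, (inter / union).mean()

rng = np.random.default_rng(0)
gt, pred = rng.integers(0, 2, (2, 64, 64))              # placeholder maps
print(oa_miou(gt, pred))
```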

11.
Sensors (Basel) ; 18(3)2018 Mar 09.
Article in English | MEDLINE | ID: mdl-29522424

ABSTRACT

Landslides in mountain cities tend to cause huge casualties and economic losses, and a precise survey of landslide areas is a critical task in disaster emergencies. However, because of the complicated appearance of natural terrain, it is difficult to find a spatial regularity that relates only to landslides, so landslide detection based solely on spatial information or handcrafted features usually performs poorly. In this paper, an automated landslide detection approach aimed at mountain cities is proposed based on pre- and post-event remote sensing images; it mainly exploits knowledge of landslide-related changes in surface cover and makes full use of both temporal and spatial information. A change detection method using a deep convolutional neural network (DCNN) is introduced to extract areas where drastic alterations have taken place; then, focusing on the changed areas, spatial temporal context learning (STCL) is conducted to identify landslide areas; finally, the slope degree derived from a digital elevation model (DEM) is used to make the result more reliable, and DEM change is used to make the detected areas more complete. The approach was applied to detecting landslides in Shenzhen, Zhouqu County, and Beichuan County in China, and a quantitative accuracy assessment was performed. The assessment indicates that the approach keeps the commission error of landslide areal extent below 17.6% and achieves a quality percentage above 61.1%; the detection percentage for landslide areas is also competitive. The experimental results prove the feasibility and accuracy of the proposed approach for detecting landslides in mountain cities.
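A sketch of the DEM-derived slope degree used to filter candidate landslide areas; the DEM, cell size, and slope threshold below are placeholders:

```python
# Slope in degrees from a DEM via finite differences.
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size: float = 30.0) -> np.ndarray:
    dz_dy, dz_dx = np.gradient(dem, cell_size)        # elevation gradients
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

dem = np.random.rand(100, 100) * 500.0    # placeholder elevations (meters)
slope = slope_degrees(dem)
candidate_mask = slope > 15.0             # assumed threshold, illustrative
```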

12.
Curr Genomics ; 18(4): 322-331, 2017 Aug.
Article in English | MEDLINE | ID: mdl-29081688

ABSTRACT

BACKGROUND: With the advent of the post-genomic era, research on the genetic mechanisms of disease has come to depend increasingly on studies of genes, gene networks, and gene-protein interaction networks. To explore gene expression and regulation, researchers have carried out many studies on transcription factors and their binding sites (TFBSs). Based on the large number of TFBS predictions produced by deep learning models, further computation and analysis have been done to reveal relationships between gene mutations and the occurrence of disease. It has been demonstrated that deep learning methods outperform conventional methods in predicting the functions of noncoding variants. Research on predicting the functions of single nucleotide polymorphisms (SNPs) is expected to uncover the mechanisms by which gene mutations affect human traits and diseases. RESULTS: We reviewed conventional TFBS identification methods from different perspectives. For deep learning methods that predict TFBSs, we discussed related problems such as raw data preprocessing, the structural design of deep convolutional neural networks (CNNs), and model performance measures. We then summarized techniques commonly used to find functional noncoding variants from de novo sequence. CONCLUSION: Along with the rapid development of high-throughput assays, more sample data and chromatin features will help improve the prediction accuracy of deep CNNs for TFBS identification. Meanwhile, deeper insight into the CNN framework itself has proven useful both for improving model performance and for designing architectures better suited to the sample data. Based on the feature values predicted by deep CNN models, prioritization models for functional noncoding variants will help reveal the effects of gene mutations on disease.

13.
Math Biosci Eng ; 21(4): 5521-5535, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38872546

ABSTRACT

Early diagnosis of abnormal electrocardiogram (ECG) signals can provide useful information for the prevention and detection of arrhythmia. Because of the similarity between the normal beat (N) and supraventricular premature beat (S) categories and the imbalance among ECG categories, arrhythmia classification cannot achieve satisfactory results under the inter-patient assessment paradigm. In this paper, a multi-path parallel deep convolutional neural network is proposed for arrhythmia classification. Furthermore, a global average RR interval is introduced to address the similarity between the N and S categories, and a weighted loss function is developed to solve the imbalance problem using weights dynamically adjusted according to the proportion of each class in the input batch. The MIT-BIH arrhythmia dataset was used to validate the classification performance of the proposed method. Experimental results under the intra-patient and inter-patient evaluation paradigms showed that the proposed method achieves better classification results than other methods. The accuracy, average sensitivity, average precision, and average specificity under the intra-patient paradigm were 98.73%, 94.89%, 89.38%, and 98.24%, respectively; under the inter-patient paradigm they were 91.22%, 89.91%, 68.23%, and 95.23%, respectively.
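A hedged sketch of a dynamically weighted loss in the spirit the abstract describes: class weights recomputed per batch from that batch's class proportions. The inverse-frequency weighting below is an assumption, not the paper's exact formula:

```python
# Per-batch class-weighted cross-entropy.
import torch
import torch.nn.functional as F

def weighted_ce(logits: torch.Tensor, targets: torch.Tensor,
                n_classes: int = 5) -> torch.Tensor:
    counts = torch.bincount(targets, minlength=n_classes).float()
    weights = counts.sum() / (counts + 1.0)        # inverse-frequency weights
    weights = weights / weights.sum() * n_classes  # normalize around 1
    return F.cross_entropy(logits, targets, weight=weights)

loss = weighted_ce(torch.randn(8, 5), torch.randint(0, 5, (8,)))
```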


Subject(s)
Algorithms , Arrhythmias, Cardiac , Electrocardiography , Neural Networks, Computer , Signal Processing, Computer-Assisted , Humans , Arrhythmias, Cardiac/classification , Arrhythmias, Cardiac/diagnosis , Arrhythmias, Cardiac/physiopathology , Electrocardiography/methods , Sensitivity and Specificity , Deep Learning , Reproducibility of Results , Databases, Factual
14.
Int J Neural Syst ; 32(11): 2250046, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35997585

ABSTRACT

Autism spectrum disorder (ASD) is a neurodevelopmental disorder typically characterized by abnormalities in social interaction and by stereotyped and repetitive behaviors. Diagnosis of autism is mainly based on behavioral tests and interviews, but in recent years, studies diagnosing autism from EEG signal analysis have increased. In this paper, signals recorded from people with autism and from healthy individuals are divided into non-overlapping windows treated as images, and these images are classified using a two-dimensional deep convolutional neural network (2D-DCNN). Deep learning models require a lot of data to extract appropriate features and automate classification, but in most neurological studies, collecting a large number of measurements is difficult (a few thousand, compared with millions of natural images) because of the cost, time, and difficulty of recording these signals. Therefore, to obtain a sufficient amount of data, our proposed method applies several data augmentation techniques. These augmentation methods were mainly introduced for image databases and must be generalized to an EEG-as-an-image database. We use a nonlinear image mixing method that mixes the rows of two images; because each row of our images is one channel of the EEG signal, this method is named channel combination. In the best case, i.e., augmentation by channel combination, the 2D-DCNN achieves an average accuracy of 88.29% in classifying short signals of healthy people versus ASD ones, and 100% for ASD versus epilepsy. After a decision over the joined windows of each subject, we achieve 100% accuracy in detecting ASD subjects from long EEG signals.
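A minimal sketch of the channel combination idea: mix the rows (EEG channels) of two same-class windows to synthesize a new training sample. Channel count and window length are placeholders:

```python
# Row-mixing augmentation for EEG-as-image data.
import numpy as np

def channel_combination(a: np.ndarray, b: np.ndarray,
                        rng: np.random.Generator) -> np.ndarray:
    # each row is one EEG channel; take each row from a or b at random
    take_from_a = rng.random(a.shape[0]) < 0.5
    return np.where(take_from_a[:, None], a, b)

rng = np.random.default_rng(42)
x1, x2 = rng.standard_normal((2, 19, 256))   # two 19-channel EEG windows
augmented = channel_combination(x1, x2, rng)
```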


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Epilepsy , Humans , Autistic Disorder/diagnosis , Autism Spectrum Disorder/diagnostic imaging , Electroencephalography/methods , Neural Networks, Computer
15.
Healthcare (Basel) ; 10(12)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36554021

ABSTRACT

Glaucoma is prevalent in many countries, the United States and much of Europe among them. Glaucoma now affects around 78 million people worldwide (as of 2020), and by 2040, 111.8 million cases are expected. In countries still building the healthcare infrastructure needed to cope with glaucoma, the condition is misdiagnosed nine times out of ten, so a detection system is needed to aid early diagnosis. In this work, the researchers propose using deep learning to identify and predict glaucoma before symptoms appear. The proposed deep learning algorithm analyzes images from a glaucoma dataset. For the task of segmenting the optic cup, pretrained transfer learning models are integrated with the U-Net architecture, and the DenseNet-201 deep convolutional neural network (DCNN) is used for feature extraction. The DCNN approach then determines whether a person has glaucoma. The fundamental goal of this line of research is to recognize glaucoma in retinal fundus images to help assess whether a patient has the condition; the model's output is accordingly either positive or negative. Accuracy, precision, recall, specificity, the F-measure, and the F-score are among the metrics used in model evaluation. An additional comparison study against deep learning-based convolutional neural network classification methods is performed to establish the accuracy of the suggested model. The suggested model attains an accuracy of 98.82 percent in training and 96.90 percent in testing. All assessments show that the proposed paradigm is more successful than the one currently in use.

16.
Bioengineering (Basel) ; 9(4)2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35447721

ABSTRACT

Cancer is the second leading cause of death globally, and breast cancer (BC) is the second most commonly reported cancer. Although the incidence rate is falling in developed countries, the reverse is the case in low- and middle-income countries. Early detection has been found to contain cancer growth, prevent metastasis, ease treatment, and reduce mortality by 25%. The digital mammogram is one of the most common, cheapest, and most effective BC screening techniques, capable of early detection of up to 90% of BC cases; however, the mammogram is also one of the most difficult medical images to analyze. In this paper, we present a method of training a deep learning model for BC diagnosis. We developed a discriminative fine-tuning method that dynamically assigns a different learning rate to each layer of the deep CNN. In addition, the model was trained with mixed-precision training to ease the computational demands of training deep learning models. Lastly, we present data augmentation methods for mammograms. The discriminative fine-tuning algorithm enables rapid convergence of the model loss; hence, the models were trained to their best performance within 50 epochs. Comparing the results, DenseNet achieved the highest accuracy of 0.998, while AlexNet obtained 0.988.
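A hedged sketch of discriminative fine-tuning: earlier layers get smaller learning rates than later layers via optimizer parameter groups. The base rate and the 2.6 decay factor follow common practice (e.g., ULMFiT) and are assumptions, not the paper's values:

```python
# Layer-wise learning rates for fine-tuning a pretrained DenseNet121.
import torch
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
layers = list(model.features.children()) + [model.classifier]

base_lr, decay = 1e-3, 2.6
param_groups = [
    {"params": layer.parameters(),
     "lr": base_lr / decay ** (len(layers) - 1 - i)}   # deeper = larger lr
    for i, layer in enumerate(layers)
]
optimizer = torch.optim.Adam(param_groups)
```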

17.
Animals (Basel) ; 12(21)2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36359124

ABSTRACT

Enabling the public to easily recognize water birds has a positive effect on wetland bird conservation. However, classifying water birds requires advanced ornithological knowledge, which makes it very difficult for the public to recognize water bird species in daily life. To break this knowledge barrier, we construct a water bird recognition system (Eyebirds) using deep learning, implemented as a smartphone app. Eyebirds consists of three main modules: (1) a water bird image dataset; (2) an attention mechanism-based deep convolutional neural network for water bird recognition (AM-CNN); and (3) an app for smartphone users. The water bird image dataset currently covers 48 families, 203 genera, and 548 species of water birds worldwide and is used to train our recognition model. The AM-CNN model employs an attention mechanism to enhance the shallow features of bird images and boost classification performance. Experimental results on the North American bird dataset (CUB200-2011) show that the AM-CNN model achieves an average classification accuracy of 85%. On our self-built water bird image dataset, the AM-CNN model also works well, with classification accuracies of 94.0%, 93.6%, and 86.4% at the family, genus, and species levels, respectively. The user-side app is a WeChat applet deployed on smartphones. With the app, users can easily recognize water birds on expeditions, while camping or sightseeing, or even in daily life. In summary, our system brings both fun and water bird knowledge to the public, inspiring interest and promoting participation in bird conservation.
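A sketch of one common way to re-weight shallow features with channel attention (a squeeze-and-excitation style block); the paper's exact attention design may differ:

```python
# Channel attention: learn per-channel gates and re-weight the feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1) gates
        return x * w                                   # re-weight channels

features = torch.randn(2, 64, 56, 56)    # placeholder shallow features
attended = ChannelAttention(64)(features)
```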

18.
Signal Image Video Process ; 15(5): 959-966, 2021.
Article in English | MEDLINE | ID: mdl-33432267

ABSTRACT

COVID-19, caused by the novel coronavirus SARS-CoV-2, has claimed hundreds of thousands of lives and affected millions of people around the world, with the numbers of deaths and infections growing exponentially. Deep convolutional neural networks (DCNNs) have been a huge milestone for image classification tasks, including medical imaging, and transfer learning from state-of-the-art models has proven to be an efficient way to overcome the problem of scarce data. In this paper, a thorough evaluation of eight pre-trained models is presented. Training, validation, and testing of these models were performed on chest X-ray (CXR) images belonging to five distinct classes, a total of 760 images. Fine-tuned models pre-trained on the ImageNet dataset were computationally efficient and accurate. Fine-tuned DenseNet121 achieved a test accuracy of 98.69% and a macro F1-score of 0.99 for four-class classification of healthy, bacterial pneumonia, COVID-19, and viral pneumonia images, and the fine-tuned models achieved higher test accuracy for three-class classification of healthy, COVID-19, and SARS images. The experimental results show that only 62% of the total parameters were retrained to achieve this accuracy.
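A sketch of partial fine-tuning: freeze the early blocks of a pretrained DenseNet121, replace the head for four CXR classes, and report the retrained-parameter fraction. The abstract reports 62%; the exact freeze point below is an assumption:

```python
# Freeze early DenseNet121 blocks and count the trainable fraction.
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 4)  # 4 CXR classes

for name, p in model.named_parameters():       # freeze the earliest blocks
    p.requires_grad = not name.startswith(("features.conv0",
                                           "features.denseblock1",
                                           "features.denseblock2"))

total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"retrained fraction: {trainable / total:.0%}")
```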

19.
Med Biol Eng Comput ; 59(7-8): 1495-1527, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34184181

ABSTRACT

Accurate segmentation and delineation of sub-tumor regions are very challenging tasks due to the nature of the tumor. Convolutional neural networks (CNNs) have achieved the most promising performance for brain tumor segmentation; however, handcrafted features remain very important for accurately identifying the tumor's boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures along with pre-defined handcrafted features for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhancing tumor regions. An automatic CNN architecture does not generally use pre-defined handcrafted features because it extracts features automatically. In this work, several pre-defined handcrafted features are computed from four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks, according to user interest, and fed to the convolutional (automatic) features to improve the overall performance of the proposed CNN model for tumor segmentation. A multi-pathway CNN is explored alongside a single-pathway CNN; it simultaneously extracts both local and global features to identify accurate sub-regions of the tumor with the help of handcrafted features. The work uses a cascaded CNN architecture, where the output of one CNN is fed as additional input to the subsequent CNNs. To extract the handcrafted features, convolutional operations were applied to the four MRI modalities with several pre-defined masks to produce a pre-defined set of handcrafted features. The work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage to handle difficulties related to the imbalance of tumor labels. The proposed method was evaluated on the BraTS 2018 datasets and achieved more promising results than existing published methods with respect to metrics such as specificity, sensitivity, and the dice similarity coefficient (DSC) for the complete, core, and enhancing tumor regions. Quantitatively, a notable gain is achieved around the boundaries of sub-tumor regions using the proposed two-pathway CNN with handcrafted features.
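A hedged sketch of one plausible reading of the fusion step: concatenating fixed handcrafted feature maps with learned convolutional features along the channel axis. The layer sizes and the handcrafted inputs are placeholders, not the paper's pipeline:

```python
# Fuse handcrafted feature maps with learned features by concatenation.
import torch
import torch.nn as nn

learned = nn.Conv2d(4, 16, 3, padding=1)      # 4 MRI modalities in
head = nn.Conv2d(16 + 4, 2, 1)                # fused channels -> 2 classes

mri = torch.randn(1, 4, 64, 64)               # T2, FLAIR, T1c, T1 patches
handcrafted = torch.randn(1, 4, 64, 64)       # e.g., fixed-mask responses

fused = torch.cat([learned(mri), handcrafted], dim=1)
logits = head(fused)                          # per-pixel class scores
```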


Subject(s)
Brain Neoplasms , Image Processing, Computer-Assisted , Brain Neoplasms/diagnostic imaging , Humans , Magnetic Resonance Imaging , Neural Networks, Computer
20.
Quant Imaging Med Surg ; 9(7): 1242-1254, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31448210

ABSTRACT

BACKGROUND: Shading artifact may lead to CT number inaccuracy, image contrast loss, and spatial non-uniformity (SNU), which is considered one of the fundamental limitations for volumetric CT (VCT) applications. To correct the shading artifact, a novel approach is proposed using deep learning and an adaptive filter (AF). METHODS: First, we apply a deep convolutional neural network (DCNN) to train a human tissue segmentation model and use the trained model to segment the tissue. Following the general knowledge that the CT number of a given human tissue is approximately constant, a template image without shading artifact is generated from the segmentation by filling each tissue with its corresponding CT number. Subtracting the template image from the uncorrected image yields a residual image containing both image detail and the shading artifact. The shading artifact consists mainly of low-frequency signals while the image details are mainly high-frequency signals, so we propose an adaptive filter to separate the shading artifact from the image details accurately. Finally, the estimated shading artifact is subtracted from the raw image to generate the corrected image. RESULTS: In the Catphan©504 study, the CT number error in the corrected image's regions of interest (ROIs) is reduced from 109 to 11 HU, and image contrast is increased by a factor of 1.46 on average. In the patient pelvis study, the CT number error in the selected ROI is reduced from 198 to 10 HU, and the SNU calculated from the ROIs decreases from 24% to 9% after correction. CONCLUSIONS: The proposed shading correction method using DCNN and AF may find useful application in future clinical practice.
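A minimal sketch of the correction pipeline: subtract a segmentation-filled template, low-pass the residual to isolate the smooth shading, then remove that shading from the raw image. A Gaussian low-pass stands in here for the paper's adaptive filter, and all image values are placeholders:

```python
# Template-subtraction shading estimation with a Gaussian low-pass filter.
import numpy as np
from scipy import ndimage

raw = np.random.rand(128, 128) * 200 + 800        # placeholder CT slice (HU)
template = np.full_like(raw, 900.0)               # segmentation-filled template

residual = raw - template                         # detail + shading
shading = ndimage.gaussian_filter(residual, 15)   # keep low frequencies only
corrected = raw - shading                         # shading-corrected image
```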
