Results 1 - 20 of 27
1.
Network ; 35(1): 27-54, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37947040

ABSTRACT

Brain tumour (BT) is a dangerous neurological disorder caused by abnormal cell growth within the skull or brain, and the death rate of people with BT is growing steadily. Detecting tumours at an early stage is crucial for treating patients and improves their survival rate. Hence, BT classification (BTC) is performed in this research using magnetic resonance imaging (MRI) images. The input MRI image is first denoised using a non-local means (NLM) filter. To obtain an effective classification result, the tumour area is then segmented from the MRI image by the SegNet model. Finally, BTC is accomplished by a LeNet model whose weights are optimized by the Golden Teacher Learning Optimization Algorithm (GTLO), so that the LeNet model classifies tumours as gliomas, meningiomas, or pituitary tumours. Experimental results show that GTLO-LeNet achieved an accuracy of 0.896, a negative predictive value (NPV) of 0.907, a positive predictive value (PPV) of 0.821, a true negative rate (TNR) of 0.880, and a true positive rate (TPR) of 0.888.
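The five scores reported above all derive from the binary confusion matrix. A minimal sketch of how they relate, using hypothetical confusion counts rather than the paper's data:

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute accuracy, PPV, NPV, TPR and TNR from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),  # positive predictive value (precision)
        "npv": tn / (tn + fn),  # negative predictive value
        "tpr": tp / (tp + fn),  # true positive rate (sensitivity)
        "tnr": tn / (tn + fp),  # true negative rate (specificity)
    }

# Hypothetical counts for illustration only
m = classification_metrics(tp=80, tn=90, fp=10, fn=20)
```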


Subject(s)
Brain Neoplasms, Glioma, Humans, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/pathology, Brain, Magnetic Resonance Imaging/methods, Algorithms
2.
Sensors (Basel) ; 24(3)2024 Feb 04.
Article in English | MEDLINE | ID: mdl-38339722

ABSTRACT

Cracks inside urban underground comprehensive pipe galleries are small and their characteristics are not obvious. Because of low lighting and large shadow areas, the contrast between cracks and background in an image is low. Most current semantic segmentation methods focus on overall segmentation and have a large perceptual range, so for pipe gallery crack segmentation tasks they struggle to attend to the detailed features of local edges and obtain accurate segmentation results. A Global Attention Segmentation Network (GA-SegNet) is therefore proposed in this paper, designed to perform semantic segmentation by incorporating global attention mechanisms. To classify pixels precisely, a residual separable convolution attention model is employed in the encoder to extract features at multiple scales, and a global attention upsample model (GAM) is utilized in the decoder to strengthen the connection between shallow-level features and deep abstract features, increasing the network's attention to small cracks. By employing a balanced loss function, the contribution of crack pixels is increased while the focus on background pixels in the overall loss is reduced, improving crack segmentation accuracy. Comparative experiments with other classic models show that the proposed GA-SegNet achieves better segmentation performance across multiple evaluation indicators, with advantages in both segmentation accuracy and efficiency.
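The balanced-loss idea above can be sketched as a pixel-weighted binary cross-entropy in numpy; the weighting scheme (a fixed multiplier on crack pixels) is an assumption for illustration, not necessarily the paper's exact loss:

```python
import numpy as np

def weighted_bce(pred, target, crack_weight=10.0):
    """Binary cross-entropy where crack pixels (target == 1) get a larger
    weight than background pixels, boosting their contribution to the loss."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    weights = np.where(target == 1, crack_weight, 1.0)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float(np.mean(weights * bce))

# Toy 2x3 "image": two crack pixels, four background pixels
target = np.array([[0, 0, 1], [0, 1, 0]])
pred = np.array([[0.1, 0.2, 0.9], [0.05, 0.8, 0.1]])
loss = weighted_bce(pred, target)
```

With `crack_weight=1.0` the function reduces to plain mean binary cross-entropy, which makes the effect of the weighting easy to verify.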

3.
J Xray Sci Technol ; 32(3): 651-675, 2024.
Article in English | MEDLINE | ID: mdl-38393884

ABSTRACT

BACKGROUND: Thyroid tumor is considered a very rare form of cancer, but recent research and surveys highlight that it is becoming prevalent because of various factors. OBJECTIVES: This paper proposes a novel hybrid classification system that identifies and classifies four different types of thyroid tumors using high-end artificial intelligence techniques. The input data set is obtained from the Digital Database of Thyroid Ultrasound Images through the Kaggle repository and augmented to achieve better classification performance using data-warping mechanisms such as flipping, rotation, cropping, scaling, and shifting. METHODS: After augmentation, the input data are preprocessed with a bilateral filter and contrast-enhanced using dynamic histogram equalization. The ultrasound images are then segmented using the SegNet convolutional neural network. The features needed for thyroid tumor classification are obtained from two different networks, CapsuleNet and EfficientNetB2, and fused together; this feature fusion is carried out to heighten classification accuracy. RESULTS: A multilayer perceptron classifier is used for classification, and the Bonobo optimizer is employed to optimize the results. Classification performance is evaluated using metrics such as accuracy, sensitivity, specificity, F1-score, and Matthews correlation coefficient. CONCLUSION: The results show that the proposed multilayer-perceptron-based thyroid tumor classification system works more efficiently than existing classifiers such as CANFES, spatial fuzzy C-means, deep belief networks, Thynet, generative adversarial networks, and long short-term memory.
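The feature-fusion step concatenates the per-image feature vectors from the two backbones before the MLP classifier sees them. A minimal sketch; the feature dimensions below are made-up assumptions, not values stated in the abstract:

```python
import numpy as np

def fuse_features(caps_feats, effnet_feats):
    """Concatenate per-image feature vectors from two backbones so the
    downstream MLP classifier sees both representations at once."""
    return np.concatenate([caps_feats, effnet_feats], axis=1)

# Hypothetical batch of 4 images: 128-dim CapsuleNet and
# 1408-dim EfficientNetB2 features (dimensions assumed for illustration)
caps = np.random.rand(4, 128)
eff = np.random.rand(4, 1408)
fused = fuse_features(caps, eff)
```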


Subject(s)
Algorithms, Neural Networks (Computer), Thyroid Neoplasms, Ultrasonography, Humans, Thyroid Neoplasms/diagnostic imaging, Thyroid Neoplasms/classification, Thyroid Neoplasms/pathology, Ultrasonography/methods, Thyroid Gland/diagnostic imaging, Sensitivity and Specificity, Artificial Intelligence, Computer-Assisted Image Interpretation/methods
4.
BMC Med Imaging ; 21(1): 19, 2021 02 09.
Article in English | MEDLINE | ID: mdl-33557772

ABSTRACT

BACKGROUND: Currently, there is an urgent need for efficient tools to assess the diagnosis of COVID-19 patients. In this paper, we present feasible solutions for detecting and labeling infected tissues on CT lung images of such patients. Two structurally different deep learning techniques, SegNet and U-NET, are investigated for semantically segmenting infected tissue regions in CT lung images. METHODS: We propose to use two known deep learning networks, SegNet and U-NET, for image tissue classification. SegNet is characterized as a scene segmentation network and U-NET as a medical segmentation tool. Both networks were exploited as binary segmentors to discriminate between infected and healthy lung tissue, as well as multi-class segmentors to learn the infection type on the lung. Each network was trained using seventy-two images, validated on ten images, and tested against the remaining eighteen images. Several statistical scores are calculated for the results and tabulated accordingly. RESULTS: The results show the superior ability of SegNet in classifying infected/non-infected tissues compared to the other methods (with 0.95 mean accuracy), while U-NET shows better results as a multi-class segmentor (with 0.91 mean accuracy). CONCLUSION: Semantically segmenting CT scan images of COVID-19 patients is a crucial goal because it would not only assist in disease diagnosis but also help in quantifying the severity of the illness and, hence, prioritize treatment accordingly. We propose computer-based techniques that prove to be reliable detectors of infected tissue in lung CT scans. The availability of such methods in today's pandemic would help automate, prioritize, speed up, and broaden the treatment of COVID-19 patients globally.


Subject(s)
COVID-19/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, Deep Learning, Humans, X-Ray Computed Tomography
5.
Sensors (Basel) ; 21(8)2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33921451

ABSTRACT

The accuracy of diagnosing prostate cancer (PCa) has increased with the development of multiparametric magnetic resonance imaging (mpMRI). Biparametric magnetic resonance imaging (bpMRI) was found to have a diagnostic accuracy comparable to mpMRI in detecting PCa. However, prostate MRI assessment relies on human experts and specialized training, with considerable inter-reader variability. Deep learning may be a more robust approach for prostate MRI assessment. Here we present a method for autosegmenting the prostate zone and cancer region using SegNet, a deep convolutional neural network (DCNN) model. We used the PROSTATEx dataset to train the model and combined different sequences into the three channels of a single image. For each subject, all slices that contained the transition zone (TZ), peripheral zone (PZ), and PCa region were selected. The datasets were produced using different combinations of images, including T2-weighted (T2W) images, diffusion-weighted images (DWI), and apparent diffusion coefficient (ADC) images. Among these groups, the T2W + DWI + ADC images exhibited the best performance, with a dice similarity coefficient of 90.45% for the TZ, 70.04% for the PZ, and 52.73% for the PCa region. Image sequence analysis with a DCNN model has the potential to assist PCa diagnosis.
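Combining three co-registered sequences into the channels of one image, and scoring overlap with the Dice similarity coefficient, can be sketched as follows (synthetic arrays and masks, not the PROSTATEx data):

```python
import numpy as np

def stack_sequences(t2w, dwi, adc):
    """Stack co-registered T2W, DWI and ADC slices into one 3-channel image."""
    return np.stack([t2w, dwi, adc], axis=-1)

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

# Three synthetic 64x64 "slices" stacked into a single 3-channel input
img = stack_sequences(np.zeros((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))

# Two overlapping 4x4 squares: 16 pixels each, 9 pixels of overlap
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True
score = dice(pred, truth)  # 2*9 / (16+16) = 0.5625
```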


Subject(s)
Deep Learning, Multiparametric Magnetic Resonance Imaging, Prostatic Neoplasms, Diffusion Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging, Male, Neural Networks (Computer), Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging
6.
Sensors (Basel) ; 21(16)2021 Aug 20.
Article in English | MEDLINE | ID: mdl-34451072

ABSTRACT

Colorectal cancer has become the third most commonly diagnosed form of cancer and has the second highest fatality rate of cancers worldwide. Currently, optical colonoscopy is the tool of choice for the diagnosis of polyps and to avert colorectal cancer, but colon screening is time-consuming and highly operator dependent. In view of this, a computer-aided diagnosis (CAD) method needs to be developed for the automatic segmentation of polyps in colonoscopy images. This paper proposes a modified SegNet Visual Geometry Group-19 (VGG-19), a form of convolutional neural network, as a CAD method for polyp segmentation. The modifications include skip connections, 5 × 5 convolutional filters, and the concatenation of four dilated convolutions applied in parallel. The CVC-ClinicDB, CVC-ColonDB, and ETIS-LaribPolypDB databases were used to evaluate the model, and the proposed polyp segmentation model achieved an accuracy, sensitivity, specificity, precision, mean intersection over union, and dice coefficient of 96.06%, 94.55%, 97.56%, 97.48%, 92.3%, and 95.99%, respectively. These results indicate that our model performs as well as or better than previous schemes in the literature. We believe that this study will benefit the future development of CAD tools for polyp segmentation in colorectal cancer diagnosis and management. In the future, we intend to embed the proposed network in a medical capsule robot for practical usage and trial it in a hospital setting with clinicians.


Subject(s)
Colonoscopy, Neural Networks (Computer), Factual Databases, Computer-Assisted Diagnosis, Computer-Assisted Image Processing, Research Design
7.
J Digit Imaging ; 34(2): 404-417, 2021 04.
Article in English | MEDLINE | ID: mdl-33728563

ABSTRACT

PURPOSE: The objective of this paper was to develop computer-aided diagnostic (CAD) tools for automated analysis of capsule endoscopy (CE) images, more precisely, to detect small intestinal abnormalities such as bleeding. METHODS: In particular, we explore a convolutional neural network (CNN)-based deep learning framework to identify bleeding and non-bleeding CE images, where a pre-trained AlexNet neural network is used to train a transfer-learning CNN that carries out the identification. Moreover, bleeding zones in a bleeding-identified image are delineated using deep learning-based semantic segmentation that leverages a SegNet deep neural network. RESULTS: To evaluate the performance of the proposed framework, we carried out experiments on two publicly available clinical datasets and achieved F1 scores of 98.49% and 88.39% on the capsule endoscopy.org and KID datasets, respectively. For bleeding zone identification, 94.42% global accuracy and 90.69% weighted intersection over union (IoU) are achieved. CONCLUSION: Our performance results are compared to other recently developed state-of-the-art methods, and consistent advances are demonstrated in terms of performance measures for bleeding image and bleeding zone detection. Relative to the present practice of manual inspection and annotation of CE images by a physician, our framework enables considerable savings in annotation time and human labor for bleeding detection in CE images, while providing the additional benefits of bleeding zone delineation and increased detection accuracy. Moreover, the overall cost of CE enabled by our framework will be much lower due to the reduction of manual labor, which can make CE affordable for a larger population.
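The global accuracy and weighted IoU reported above are standard pixel-level segmentation metrics; a minimal numpy sketch, with the per-class weighting by ground-truth pixel frequency assumed to follow common practice:

```python
import numpy as np

def global_accuracy(pred, truth):
    """Fraction of pixels labelled correctly, over all classes."""
    return float(np.mean(pred == truth))

def weighted_iou(pred, truth, num_classes):
    """Per-class IoU averaged with weights proportional to each class's
    ground-truth pixel frequency."""
    total = truth.size
    score = 0.0
    for c in range(num_classes):
        p, t = pred == c, truth == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks: skip
        iou = np.logical_and(p, t).sum() / union
        score += (t.sum() / total) * iou
    return score

# Toy 2x3 label maps: class 1 = bleeding, class 0 = background
truth = np.array([[0, 0, 1], [0, 1, 1]])
pred = np.array([[0, 0, 1], [0, 0, 1]])
acc = global_accuracy(pred, truth)
wiou = weighted_iou(pred, truth, num_classes=2)
```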


Subject(s)
Capsule Endoscopy, Deep Learning, Gastrointestinal Hemorrhage/diagnostic imaging, Humans, Computer-Assisted Image Processing, Small Intestine, Neural Networks (Computer)
8.
J Digit Imaging ; 34(2): 263-272, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33674979

ABSTRACT

Coronavirus disease (COVID-19) is a pandemic that suddenly caused unexplained pneumonia cases and had a devastating effect on global public health. Computerized tomography (CT) is one of the most effective tools for COVID-19 screening, since specific patterns such as bilateral, peripheral, and basal predominant ground-glass opacity, multifocal patchy consolidation, and a crazy-paving pattern with peripheral distribution can be observed in CT images and have been declared findings of COVID-19 infection. For patient monitoring and diagnosis, segmenting COVID-19 involvement of the lung expeditiously and accurately from CT will provide vital information about the stage of the disease. In this work, we propose a SegNet-based network using the attention gate (AG) mechanism for the automatic segmentation of COVID-19 regions in CT images. AGs can be easily integrated into standard convolutional neural network (CNN) architectures with a minimal computing load while increasing model precision and predictive accuracy. Besides, the success of the proposed network has been evaluated with dice, Tversky, and focal Tversky loss functions to deal with the low sensitivity arising from small lesions. The experiments were carried out using fivefold cross-validation on a COVID-19 CT segmentation database containing 473 CT images. The obtained sensitivity, specificity, and dice scores were 92.73%, 99.51%, and 89.61%, respectively. The superiority of the proposed method is highlighted by comparison with results reported in previous studies, and we believe it will be an auxiliary tool that accurately detects COVID-19 regions from CT images.
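The Tversky and focal Tversky losses mentioned above address low sensitivity by weighting false negatives more heavily than false positives. A minimal numpy sketch; the α, β, γ values are commonly used defaults assumed for illustration, not the paper's reported settings:

```python
import numpy as np

def tversky_index(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky index: a Dice-like overlap score that weights false positives
    (alpha) and false negatives (beta) separately."""
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1 - target))
    fn = np.sum((1 - pred) * target)
    return (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def focal_tversky_loss(pred, target, gamma=0.75):
    """Focal variant: raises (1 - TI) to gamma to focus training on hard,
    small lesions."""
    return (1.0 - tversky_index(pred, target)) ** gamma

# Soft predictions vs. binary ground truth for four pixels
target = np.array([0.0, 1.0, 1.0, 0.0])
pred = np.array([0.1, 0.9, 0.6, 0.2])
loss = focal_tversky_loss(pred, target)
```

Because gamma < 1, values of (1 − TI) below one are amplified, which is what shifts the gradient toward poorly segmented (small) lesions.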


Subject(s)
COVID-19, Humans, Neural Networks (Computer), SARS-CoV-2, Semantics, X-Ray Computed Tomography
9.
BMC Bioinformatics ; 21(Suppl 1): 192, 2020 Dec 09.
Article in English | MEDLINE | ID: mdl-33297952

ABSTRACT

BACKGROUND: Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even with advanced methods such as deep learning (DL). We developed a new model based on the architecture of the semantic segmentation U-Net model to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN)-based model extracts contextual information by combining low-level and high-level features. We trained the proposed model using large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC). RESULTS: We compared the performance of the proposed model with state-of-the-art DL models including the fully convolutional network (FCN), SegNet, Dilated-Net, original U-Net, and Faster R-CNN models, and the conventional region growing (RG) method. The proposed Vanilla U-Net model significantly outperforms the Faster R-CNN model in terms of runtime and the Intersection over Union metric (IOU). Trained with digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground truth maps. Data augmentation was very effective in our experiments, increasing the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909. CONCLUSIONS: The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images, because the segmentation process incorporates more multi-scale spatial context and captures more local and global context to predict a precise pixel-wise segmentation map of an input full MG image. The detected maps can help radiologists differentiate benign and malignant lesions depending on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model result in better performance in terms of mean accuracy, mean DI, and mean IOU in detecting mass lesions compared to the other DL and conventional models.
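For a single segmentation, the Dice index and IoU are deterministically related by IoU = DI / (2 − DI). This conversion roughly reconciles the reported mean DI of 0.951 with the reported mean IOU of 0.909 (exact equality is not expected, since averaging does not commute with the conversion). A quick check:

```python
def dice_to_iou(di):
    """Convert a Dice index to IoU; exact per sample, approximate for means."""
    return di / (2.0 - di)

def iou_to_dice(iou):
    """Inverse conversion: IoU back to the Dice index."""
    return 2.0 * iou / (1.0 + iou)

approx_iou = dice_to_iou(0.951)  # about 0.907, close to the reported 0.909
```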


Subject(s)
Computer-Assisted Image Processing/methods, Mammography, Neural Networks (Computer), Automation, Factual Databases, Humans
10.
Sensors (Basel) ; 20(11)2020 Jun 03.
Article in English | MEDLINE | ID: mdl-32503330

ABSTRACT

In this paper, we present an evaluation of four encoder-decoder CNNs for the segmentation of the prostate gland in T2W magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road scene, biomedical, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging, and many research efforts have therefore been conducted to improve it. The main challenges of prostate gland segmentation are the blurry prostate boundary and variability in prostate anatomical structure. In this work, we investigated the performance of encoder-decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques, including image resizing, center-cropping, and intensity normalization, are applied to address inter-patient and inter-scanner variability as well as the dominance of background pixels over prostate pixels. In addition, to enrich the network with more data, increase data variation, and improve accuracy, patch extraction and data augmentation are applied prior to training the networks. Furthermore, class weight balancing is used to avoid biased networks, since the number of background pixels is much higher than the number of prostate pixels; this class imbalance problem is solved by utilizing a weighted cross-entropy loss function during training of the CNN model. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance, with a DSC of 92.8%. This is the highest DSC score compared to FCN, SegNet, and U-Net, which were also compared against a recently published state-of-the-art method of prostate segmentation.
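The pre-processing steps named above (center-cropping and intensity normalization) can be sketched in a few lines of numpy; the crop size and the choice of z-score normalization are illustrative assumptions, not the paper's stated parameters:

```python
import numpy as np

def center_crop(img, size):
    """Crop a square region of side `size` from the centre of a 2-D image."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def normalize_intensity(img):
    """Z-score normalization to reduce inter-scanner intensity variability."""
    return (img - img.mean()) / (img.std() + 1e-8)

# Hypothetical 256x320 slice, cropped to 224x224 and normalized
img = np.random.rand(256, 320)
out = normalize_intensity(center_crop(img, 224))
```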


Subject(s)
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Neural Networks (Computer), Prostate/diagnostic imaging, Humans, Male, Semantics
11.
J Imaging ; 10(1)2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38248998

ABSTRACT

Ultrasound imaging has been used to investigate compression of the median nerve in carpal tunnel syndrome patients. Ultrasound imaging and the extraction of median nerve parameters from ultrasound images are crucial and are usually performed manually by experts. The manual annotation of ultrasound images relies on experience, and intra- and interrater reliability may vary among studies. In this study, two types of convolutional neural networks (CNNs), U-Net and SegNet, were used to extract the median nerve morphology. To the best of our knowledge, the application of these methods to ultrasound imaging of the median nerve has not yet been investigated. Spearman's correlation and Bland-Altman analyses were performed to investigate the correlation and agreement between manual annotation and CNN estimation, namely, the cross-sectional area, circumference, and diameter of the median nerve. The results showed that the intersection over union (IoU) of U-Net (0.717) was greater than that of SegNet (0.625). A few images in SegNet had an IoU below 0.6, decreasing the average IoU. In both models, the IoU decreased when the median nerve was elongated longitudinally with a blurred outline. The Bland-Altman analysis revealed that, in general, both the U-Net- and SegNet-estimated measurements showed 95% limits of agreement with manual annotation. These results show that these CNN models are promising tools for median nerve ultrasound imaging analysis.
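The Bland-Altman agreement described above reduces to the mean difference between the two raters plus or minus 1.96 standard deviations. A minimal sketch with made-up cross-sectional-area values (not the study's measurements):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Return (bias, lower, upper): the mean difference between two raters
    and the 95% limits of agreement (bias +/- 1.96 SD of the differences)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical cross-sectional areas (mm^2): manual annotation vs CNN estimate
manual = [9.1, 10.4, 8.7, 11.2, 9.8]
cnn = [9.0, 10.1, 9.0, 11.5, 9.6]
bias, lo, hi = bland_altman_limits(manual, cnn)
```

A measurement pair "agrees" in the Bland-Altman sense when its difference falls inside [lo, hi]; roughly 95% of pairs are expected to, under normally distributed differences.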

12.
Heliyon ; 10(4): e25989, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38390142

ABSTRACT

Image-based gauging stations offer the potential for substantial enhancement in the monitoring networks of river water levels. Nonetheless, the majority of camera gauges fall short in delivering reliable and precise measurements because of the fluctuating appearance of water in the rivers over the course of the year. In this study, we introduce a method for measuring water levels in rivers using both the traditional continuous image subtraction (CIS) approach and a SegNet neural network based on deep learning computer vision. The historical images collected from on-site investigations were employed to train three neural networks (SegNet, U-Net, and FCN) in order to evaluate their effectiveness, overall performance, and reliability. The research findings demonstrated that the SegNet neural network outperformed the CIS method in accurately measuring water levels. The root mean square error (RMSE) between the water level measurements obtained by the SegNet neural network and the gauge station's readings ranged from 0.013 m to 0.066 m, with a high correlation coefficient of 0.998. Furthermore, the study revealed that the performance of the SegNet neural network in analyzing water levels in rivers improved with the inclusion of a larger number of images, diverse image categories, and higher image resolutions in the training dataset. These promising results emphasize the potential of deep learning computer vision technology, particularly the SegNet neural network, to enhance water level measurement in rivers. Notably, the quality and diversity of the training dataset play a crucial role in optimizing the network's performance. Overall, the application of this advanced technology holds great promise for advancing water level monitoring and management in river systems.
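The RMSE and correlation coefficient used above to compare network readings against the gauge station can be computed as follows (synthetic water-level series, not the study's data):

```python
import numpy as np

def rmse(pred, truth):
    """Root mean square error between predicted and reference water levels."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Hypothetical water levels in metres: gauge station vs SegNet estimate
gauge = [1.20, 1.35, 1.50, 1.42, 1.60]
segnet = [1.22, 1.33, 1.52, 1.40, 1.63]
err = rmse(segnet, gauge)
corr = float(np.corrcoef(segnet, gauge)[0, 1])
```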

13.
Spectrochim Acta A Mol Biomol Spectrosc ; 323: 124897, 2024 Dec 15.
Article in English | MEDLINE | ID: mdl-39094271

ABSTRACT

Assessing crop seed phenotypic traits is essential for breeding innovations and germplasm enhancement. However, the tough outer layers of thin-shelled seeds present significant challenges for traditional methods aimed at the rapid assessment of their internal structures and quality attributes. This study explores the potential of combining terahertz (THz) time-domain spectroscopy and imaging with semantic segmentation models for the rapid and non-destructive examination of these traits. A total of 120 watermelon seed samples from three distinct varieties were curated in this study, facilitating a comprehensive analysis of both their outer layers and inner kernels. Utilizing a transmission imaging modality, THz spectral images were acquired and subsequently reconstructed employing a correlation coefficient method. Deep learning-based SegNet and DeepLab V3+ models were employed for automatic tissue segmentation. Our research revealed that DeepLab V3+ significantly surpassed SegNet in both speed and accuracy. Specifically, DeepLab V3+ achieved a pixel accuracy of 96.69% and an intersection over union of 91.3% for the outer layer, with the inner kernel results closely following. These results underscore the proficiency of DeepLab V3+ in distinguishing between the seed coat and kernel, thereby furnishing precise phenotypic trait analyses for thin-shelled seeds. Moreover, this study accentuates the instrumental role of deep learning technologies in advancing agricultural research and practices.


Subject(s)
Citrullus, Seeds, Seeds/chemistry, Citrullus/chemistry, Terahertz Imaging/methods, Deep Learning, Computer-Assisted Image Processing/methods, Terahertz Spectroscopy/methods, Semantics
14.
J Ultrasound ; 26(3): 673-685, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36195781

ABSTRACT

Ultrasound features related to thyroid lesion structure, shape, volume, and margins are considered to determine cancer risk. Automatic segmentation of the thyroid lesion would allow the sonographic features to be estimated. On the basis of clinical ultrasonography B-mode scans, a multi-output CNN-based semantic segmentation is used to separate the cystic and solid components of thyroid nodules. Semantic segmentation is an automatic technique that labels the ultrasound (US) pixels with an appropriate class or pixel category, i.e., lesion or background. In the present study, encoder-decoder-based semantic segmentation models, i.e., SegNet (using VGG16), U-Net, and Hybrid-UNet, were implemented for the segmentation of thyroid US images. For this work, 820 thyroid US images were collected from the DDTI and ultrasoundcases.info (USC) datasets. The segmentation models were trained using a transfer learning approach with original and despeckled thyroid US images. The performance of the segmentation models is evaluated by analyzing the overlap region between the true lesion contour marked by the radiologist and the lesion retrieved by the segmentation model, using the mean intersection over union (mIoU), mean dice coefficient (mDC), TPR, TNR, FPR, and FNR metrics. Based on exhaustive experiments and the performance evaluation parameters, it is observed that the proposed Hybrid-UNet segmentation model segments thyroid nodules and cystic components effectively.


Subject(s)
Neural Networks (Computer), Thyroid Nodule, Humans, Thyroid Nodule/diagnostic imaging, Computer-Assisted Image Processing/methods, Ultrasonography/methods
15.
Med Biol Eng Comput ; 61(8): 2159-2195, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37353695

ABSTRACT

Encoder-decoder-based semantic segmentation models classify image pixels into the corresponding class, such as the ROI (region of interest) or background. In the present study, simple, dilated convolution, series, and directed acyclic graph (DAG)-based encoder-decoder semantic segmentation models have been implemented, i.e., SegNet (VGG16), SegNet (VGG19), U-Net, MobileNetV2, ResNet18, ResNet50, Xception, and Inception networks, to segment TTUS (thyroid tumor ultrasound) images. Transfer learning has been used to train these segmentation networks with original and despeckled TTUS images, and their performance has been calculated using the mIoU and mDC metrics. Based on exhaustive experiments, it has been observed that the ResNet50-based segmentation model obtained the best results objectively, with values of 0.87 for mIoU and 0.94 for mDC, as well as according to radiologist opinion on the shape, margin, and echogenicity characteristics of segmented lesions. Since the ResNet50 model provides better segmentation by both objective and subjective assessment, it may be used in the healthcare system to identify thyroid nodules accurately in real time.


Subject(s)
Benchmarking, Thyroid Nodule, Humans, Learning, Semantics, Thyroid Nodule/diagnostic imaging, Computer-Assisted Image Processing
16.
Plant Methods ; 18(1): 55, 2022 Apr 27.
Article in English | MEDLINE | ID: mdl-35477580

ABSTRACT

BACKGROUND: China has a unique cotton planting pattern. In Xinjiang, China, cotton is densely planted in alternating wide and narrow rows to increase yield, which makes accurate estimation of cotton yield using remote sensing difficult in such fields, where branches are occluded and overlapped. RESULTS: In this study, unmanned aerial vehicle (UAV) imaging and deep convolutional neural networks (DCNN) were used to estimate densely planted cotton yield. Images of cotton fields were acquired by the UAV at an altitude of 5 m, and cotton bolls were manually harvested and weighed afterwards. A modified DCNN model (CD-SegNet) was then constructed for pixel-level segmentation of cotton boll images by reorganizing the encoder-decoder and adding dilated convolutions. In addition, linear regression analysis was employed to model the relationship between the cotton boll pixel ratio and cotton yield. Finally, the estimated yields for four cotton fields were verified by weighing the harvested cotton. The results showed that CD-SegNet outperformed the other tested models, including SegNet, support vector machine (SVM), and random forest (RF), and the average error in the yield estimates for the cotton fields was as low as 6.2%. CONCLUSIONS: Overall, estimating densely planted cotton yield from low-altitude UAV imaging is feasible. This study provides a methodological reference for cotton yield estimation in China.
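The regression step relating the cotton-boll pixel ratio to yield is ordinary least squares; a sketch with fabricated example points (not the study's measurements):

```python
import numpy as np

# Hypothetical (pixel ratio, yield in kg/ha) pairs, for illustration only
ratio = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
yield_kg = np.array([2000.0, 3000.0, 4000.0, 5000.0, 6000.0])

# Fit yield = slope * ratio + intercept by least squares
slope, intercept = np.polyfit(ratio, yield_kg, 1)

def predict_yield(r):
    """Estimate yield from the cotton-boll pixel ratio via the fitted line."""
    return slope * r + intercept
```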

17.
Multimed Tools Appl ; 81(10): 13537-13562, 2022.
Article in English | MEDLINE | ID: mdl-35194385

ABSTRACT

Deep-learning techniques have led to technological progress in medical image segmentation, especially in the ultrasound domain. The main goal of this study is to optimize a deep-learning-based neural network architecture for fast automatic segmentation of Ultrasonic Computed Tomography (USCT) bone images. The proposed method is based on an end-to-end neural network architecture. First, an improved Variable Structure Model of Neuron (VSMN) is trained for both USCT noise removal and dataset augmentation. Second, a VGG-SegNet neural network architecture is trained and tested on new USCT images not seen before for automatic bone segmentation; in addition, we offer a free USCT dataset. The proposed model is implemented on both CPU and GPU, surpassing previous works with values of 97.38% and 96% for training and validation, respectively, and achieving high segmentation accuracy in testing with a small error of 0.006, within a short processing time. The suggested method demonstrates its ability to augment USCT data and then automatically segment USCT bone structures, achieving excellent accuracy and outperforming the state of the art.

18.
Comput Methods Programs Biomed ; 227: 107197, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36351349

ABSTRACT

OBJECTIVE: A cardiac MRI short-axis image dataset is constructed, and an automatic segmentation method based on an improved SegNet model is developed and evaluated using deep learning techniques. METHODS: The Affiliated Hospital of Qingdao University collected 1354 cardiac MRI scans between 2019 and 2022, which were manually annotated and divided into categories covering cardiac hypertrophy, myocardial infarction, and a normal control group to establish a cardiac MRI library; on this basis, training, validation, and test sets were separated. SegNet is a classical deep learning segmentation network that borrows part of a classical convolutional neural network to produce pixel-level labels for object regions in an image. To address the low accuracy and poor generalization ability of current deep learning frameworks in medical image segmentation, this paper proposes a semantic segmentation method based on depthwise separable convolutions to improve the SegNet model and trains it on the dataset. The TensorFlow framework was used to train the model, and the experiments achieved good results. RESULTS: In the validation experiment, the sensitivity and specificity of the improved SegNet model in the segmentation of left ventricular MRI were 0.889 and 0.965, the Dice coefficient was 0.878, the Jaccard coefficient was 0.955, and the Hausdorff distance was 10.163 mm, showing good segmentation performance. CONCLUSION: The segmentation accuracy of the deep learning model developed in this paper can meet the requirements of most clinical medicine applications and provides technical support for left ventricular identification in cardiac MRI.
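The Hausdorff distance reported above measures the worst-case mismatch between two contours. A minimal numpy sketch over two small 2-D point sets (contour coordinates), not tied to the paper's implementation:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets of shape (n, 2), (m, 2):
    the largest distance from any point in one set to its nearest point in
    the other set."""
    # Pairwise Euclidean distances, shape (n, m)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy contours differing only in one far-away vertex
contour_a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
contour_b = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 3.0]])
hd = hausdorff(contour_a, contour_b)  # dominated by (1, 3) vs (1, 1)
```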


Subject(s)
Image Processing, Computer-Assisted , Myocardial Infarction , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Magnetic Resonance Imaging , Myocardial Infarction/diagnostic imaging , Cardiomegaly/diagnostic imaging
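The Dice and Jaccard coefficients reported in the validation experiment above are both overlap measures on binary masks, linked by the identity J = D / (2 - D). A small NumPy sketch with hypothetical masks:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard_index(pred, gt):
    """Jaccard (IoU) = |A∩B| / |A∪B|; relates to Dice by J = D/(2-D)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# hypothetical 3x3 prediction and ground-truth masks
pred = np.array([[1, 1, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
gt   = np.array([[1, 1, 0],
                 [0, 0, 1],
                 [0, 0, 0]])
```

Here two of four foreground pixels overlap, giving Dice 2/3 and Jaccard 1/2.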
19.
Comput Biol Med ; 147: 105797, 2022 08.
Article in English | MEDLINE | ID: mdl-35780603

ABSTRACT

Accurate segmentation of lesions in medical images is of great significance for clinical diagnosis and evaluation. The low contrast between lesions and surrounding tissues increases the difficulty of automatic segmentation, while manual segmentation is inefficient. To increase the generalization performance of the segmentation model, we propose a deep-learning-based automatic segmentation model called CM-SegNet for segmenting medical images of different modalities. It follows a multiscale-input, encoder-decoder design and is composed of multilayer perceptron and convolution modules. The model enables communication across the different channels and spatial locations of each patch and considers edge-related feature information between adjacent patches, so it can fully extract global and local image information for the segmentation task. It also achieves effective segmentation of structurally different lesion regions in different slices of three-dimensional medical images. In this experiment, the proposed CM-SegNet was trained, validated, and tested on six medical image datasets of different modalities using 5-fold cross-validation. The results showed that CM-SegNet achieved better segmentation performance and shorter training time on different medical images than previous methods, suggesting it is faster and more accurate in automatic segmentation and has great potential for clinical application.
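The 5-fold cross-validation protocol described above amounts to a simple index split: each fold serves once as the validation set while the rest train the model. An illustrative stand-in, not the authors' pipeline:

```python
import numpy as np

def k_fold_indices(n_samples, k=5, seed=0):
    """Yield (train, validation) index arrays for k-fold
    cross-validation (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)      # shuffle once, then split
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]                    # fold i validates this round
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# e.g. a hypothetical dataset of 100 images split 5 ways
splits = list(k_fold_indices(100, k=5))
```

Averaging the metric over the k rounds uses every sample for validation exactly once, which is why the scheme is favored for modest-sized medical datasets.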


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Neural Networks, Computer
20.
Front Public Health ; 10: 855994, 2022.
Article in English | MEDLINE | ID: mdl-35734764

ABSTRACT

Artificial intelligence researchers have conducted various studies to reduce the spread of COVID-19. Unlike other studies, this paper is not about early infection diagnosis but about preventing the transmission of COVID-19 in social environments. One line of such work concerns social distancing, a measure proven to prevent COVID-19 from being transmitted from one person to another. In this study, the Robot Operating System (ROS) simulates a shopping mall using Gazebo, and customers are monitored by a Turtlebot and an Unmanned Aerial Vehicle (UAV, DJI Tello). By analyzing frames captured by the Turtlebot, a particular person is identified and followed through the shopping mall. The Turtlebot is a wheeled robot that follows people without contact and is used as a shopping cart; a customer therefore never touches a shopping cart that someone else has come into contact with, which also makes shopping easier. The UAV detects people from above and determines the distance between them, so a warning system can be created by detecting places where social distancing is neglected. A Histogram of Oriented Gradients (HOG)-Support Vector Machine (SVM) pipeline is applied by the Turtlebot to detect humans, and a Kalman filter is used for human tracking. SegNet is used for semantic detection of people and for distance measurement via the UAV. This paper proposes a new robotic study to prevent infection and shows that the system is feasible.
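The Kalman-filter tracking step mentioned above can be sketched as a constant-velocity filter on a person's (here one-dimensional) position. All matrices and noise values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Minimal constant-velocity Kalman filter for 1D position
    tracking (illustrative sketch). State = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)                           # state covariance
    estimates = []
    for z in measurements:
        # predict: propagate state and covariance forward one step
        x = F @ x
        P = F @ P @ F.T + Q
        # update: blend prediction with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates
```

Between detections the predict step alone can carry the track, which is what makes the filter useful when the HOG-SVM detector momentarily misses a person.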


Subject(s)
COVID-19 , Robotics , Artificial Intelligence , COVID-19/prevention & control , Female , Humans , Male