Results 1 - 20 of 97
1.
Plant Phenomics ; 5: 0031, 2023.
Article in English | MEDLINE | ID: mdl-37287583

ABSTRACT

Accurately and automatically segmenting crops and weeds in camera images is essential in various agricultural technology fields, such as herbicide spraying by farming robots based on crop and weed segmentation information. However, crop and weed images captured by a camera exhibit motion blur from various causes (e.g., vibration or shaking of a camera mounted on a farming robot, or shaking of the crops and weeds themselves), which reduces the accuracy of crop and weed segmentation. Segmentation that is robust to motion-blurred images is therefore essential, yet previous crop and weed segmentation studies did not consider such images. To solve this problem, this study proposed a new motion-blur image restoration method based on a wide receptive field attention network (WRA-Net) and investigated how it improves crop and weed segmentation accuracy in motion-blurred images. WRA-Net is built around a main block, the lite wide receptive field attention residual block, which comprises modified depthwise separable convolutional blocks, an attention gate, and a learnable skip connection. We conducted experiments with three open databases: the BoniRob, crop/weed field image, and rice seedling and weed datasets. The crop and weed segmentation accuracy, measured by mean intersection over union, was 0.7444, 0.7741, and 0.7149, respectively, demonstrating that the method outperformed state-of-the-art methods.
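To make the block design above concrete, here is a minimal PyTorch sketch of a residual block combining depthwise separable convolutions, an attention gate, and a learnable skip connection. The channel count, kernel size, and sigmoid gating are illustrative assumptions, not the published WRA-Net specification.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, ch, kernel_size=7):  # large kernel = wide receptive field
        super().__init__()
        self.depthwise = nn.Conv2d(ch, ch, kernel_size,
                                   padding=kernel_size // 2, groups=ch)
        self.pointwise = nn.Conv2d(ch, ch, 1)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class LiteWideAttentionResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.convs = nn.Sequential(DepthwiseSeparableConv(ch),
                                   DepthwiseSeparableConv(ch))
        # Attention gate: per-pixel sigmoid mask computed from the features.
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        # Learnable skip connection: trainable scalar weight on the identity path.
        self.skip_scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        y = self.convs(x)
        y = y * self.gate(y)            # gated (attended) features
        return self.skip_scale * x + y  # learnable skip connection

x = torch.randn(1, 32, 64, 64)
print(LiteWideAttentionResidualBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```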

2.
Biomedicines ; 10(7)2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35885022

ABSTRACT

Infertility is one of the most important health concerns worldwide. It is characterized by the failure to achieve pregnancy after a period of regular unprotected sexual intercourse. In vitro fertilization (IVF) is an assisted reproduction technique that efficiently addresses infertility. IVF replaces the natural mode of reproduction with a manual procedure wherein embryos are cultivated in a controlled laboratory environment until they reach the blastocyst stage. The standard IVF procedure includes the transfer of one or two blastocysts selected from several grown in a controlled environment. The morphometric properties of blastocysts and their compartments, such as the trophectoderm (TE), zona pellucida (ZP), inner cell mass (ICM), and blastocoel (BL), are analyzed through manual microscopic analysis to predict viability. Deep learning has been used extensively for medical diagnosis and analysis and can be a powerful tool to automate the morphological analysis of human blastocysts. However, existing approaches are inaccurate and require extensive preprocessing and expensive architectures. Thus, to cope with the automatic detection of blastocyst components, this study proposed a novel multiscale aggregation semantic segmentation network (MASS-Net) that combines four different scales via depth-wise concatenation. The extensive use of depthwise separable convolutions reduces the number of trainable parameters, and the multiscale design provides rich spatial information at different resolutions, achieving good segmentation performance without a very deep architecture. MASS-Net uses 2.06 million trainable parameters and accurately detects TE, ZP, ICM, and BL without preprocessing stages. Moreover, it provides a separate binary mask for each blastocyst component simultaneously, and these masks capture the structure of each component for embryonic analysis. MASS-Net was evaluated using publicly available human blastocyst (microscopic) imaging data. The experimental results revealed that it can effectively detect TE, ZP, ICM, and BL with mean Jaccard indices of 79.08%, 84.69%, 85.88%, and 89.28%, respectively, for embryological analysis, which is higher than those of state-of-the-art methods.
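As a rough illustration of the four-scale, depth-wise-concatenation idea, below is a minimal PyTorch sketch that processes an image at four resolutions with depthwise separable convolutions, concatenates the features along the channel axis, and emits one binary mask per blastocyst component. The branch design and channel widths are assumptions, not the published MASS-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleAggregator(nn.Module):
    def __init__(self, ch=16, n_components=4):  # TE, ZP, ICM, BL
        super().__init__()
        # One lightweight branch per scale: full, 1/2, 1/4, and 1/8 resolution.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1, groups=ch),  # depthwise
                nn.Conv2d(ch, ch, 1),                        # pointwise
            )
            for _ in range(4)
        )
        self.head = nn.Conv2d(4 * ch, n_components, 1)  # one logit map per component

    def forward(self, x):
        feats = []
        for i, branch in enumerate(self.branches):
            xi = x if i == 0 else F.interpolate(
                x, scale_factor=1 / 2 ** i, mode="bilinear", align_corners=False)
            fi = branch(xi)
            if i > 0:  # restore full resolution before concatenation
                fi = F.interpolate(fi, size=x.shape[-2:], mode="bilinear",
                                   align_corners=False)
            feats.append(fi)
        fused = torch.cat(feats, dim=1)         # depth-wise (channel) concatenation
        return torch.sigmoid(self.head(fused))  # separate binary mask per component

masks = MultiscaleAggregator()(torch.randn(1, 3, 128, 128))
print(masks.shape)  # torch.Size([1, 4, 128, 128])
```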

3.
IEEE J Biomed Health Inform ; 26(8): 3685-3696, 2022 08.
Article in English | MEDLINE | ID: mdl-35635825

ABSTRACT

White blood cells (WBCs), also known as leukocytes, are a valuable part of the blood and immune system. Typically, pathologists use a microscope for the manual inspection of blood smears, which is a time-consuming, error-prone, and labor-intensive procedure. To address these issues, we present two novel shallow networks: a leukocyte deep segmentation network (LDS-Net) and a leukocyte deep aggregation segmentation network (LDAS-Net) for the joint segmentation of cytoplasm and nuclei in WBC images. LDS-Net is a shallow architecture with three downsampling stages and seven convolution layers. LDAS-Net is an extended version of LDS-Net that uses a novel pool-less low-level information transfer bridge to carry low-level information to the deep layers of the network, where it is aggregated with deep features in a dense feature concatenation block to achieve accurate joint segmentation of cytoplasm and nuclei. We evaluated the developed architectures on four publicly available WBC datasets. For cytoplasm segmentation, the proposed method achieved Dice coefficients of 98.97%, 99.0%, 96.05%, and 98.79% on Datasets 1, 2, 3, and 4, respectively. For nuclei segmentation, Dice coefficients of 96.35% and 98.09% were achieved on Datasets 1 and 2, respectively. The proposed method outperforms state-of-the-art methods with superior computational efficiency, requiring only 6.5 million trainable parameters.
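The pool-less bridge and dense concatenation can be pictured with a small PyTorch sketch: low-level features are downsampled by a strided depthwise convolution instead of pooling and concatenated with deep features before the segmentation head. All layer sizes here are assumptions; this is not the published LDAS-Net.

```python
import torch
import torch.nn as nn

class PoollessBridge(nn.Module):
    """Downsamples low-level features without pooling (strided depthwise conv)."""
    def __init__(self, ch, stride):
        super().__init__()
        self.down = nn.Conv2d(ch, ch, 3, stride=stride, padding=1, groups=ch)

    def forward(self, x):
        return self.down(x)

class TinySegNet(nn.Module):
    def __init__(self, ch=16, n_classes=2):  # cytoplasm and nucleus masks
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.enc = nn.Sequential(nn.Conv2d(ch, ch, 3, stride=4, padding=1),
                                 nn.ReLU(inplace=True))
        self.bridge = PoollessBridge(ch, stride=4)   # low-level -> deep transfer
        self.fuse = nn.Conv2d(2 * ch, ch, 1)         # dense feature concatenation
        self.up = nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False)
        self.head = nn.Conv2d(ch, n_classes, 1)

    def forward(self, x):
        low = self.stem(x)                           # low-level features
        deep = self.enc(low)                         # deep features (1/4 resolution)
        fused = self.fuse(torch.cat([deep, self.bridge(low)], dim=1))
        return self.head(self.up(fused))

print(TinySegNet()(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])
```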


Subjects
Leukocytes; Neural Networks, Computer; Cytoplasm; Humans; Image Processing, Computer-Assisted/methods
4.
Expert Syst Appl ; 202: 117360, 2022 Sep 15.
Article in English | MEDLINE | ID: mdl-35529253

ABSTRACT

The recent disaster of COVID-19 has brought the whole world to the verge of devastation because of its highly transmissible nature. In this pandemic, radiographic imaging modalities, particularly computed tomography (CT), have shown remarkable performance for the effective diagnosis of this virus. However, the diagnostic assessment of CT data is a human-dependent process that requires considerable time from expert radiologists. Recent developments in artificial intelligence have substituted several personal diagnostic procedures with computer-aided diagnosis (CAD) methods that can make an effective diagnosis, even in real time. In response to COVID-19, various CAD methods have been developed in the literature, which can detect and localize infectious regions in chest CT images. However, most existing methods do not provide cross-data analysis, which is an essential measure for assessing the generality of a CAD method. A few studies have performed cross-data analysis; nevertheless, these methods show limited results in real-world scenarios without addressing generality issues. Therefore, in this study, we attempt to address generality issues and propose a deep learning-based CAD solution for the diagnosis of COVID-19 lesions from chest CT images. We propose a dual multiscale dilated fusion network (DMDF-Net) for the robust segmentation of small lesions in a given CT image. The proposed network mainly utilizes the strength of multiscale deep feature fusion inside the encoder and decoder modules in a mutually beneficial manner to achieve superior segmentation performance. Additional pre- and post-processing steps are introduced to address the generality issues and further improve diagnostic performance. In particular, the concept of post-region-of-interest (ROI) fusion is introduced in the post-processing step, which reduces the number of false positives and provides a way to accurately quantify the infected area of the lung. Consequently, the proposed framework outperforms various state-of-the-art methods, accomplishing superior infection segmentation results with an average Dice similarity coefficient of 75.7%, an intersection over union of 67.22%, an average precision of 69.92%, a sensitivity of 72.78%, a specificity of 99.79%, an enhanced-alignment measure of 91.11%, and a mean absolute error of 0.026.
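A minimal sketch of the multiscale dilated fusion idea, assuming parallel 3x3 convolutions with increasing dilation rates whose outputs are concatenated and fused by a 1x1 convolution; the rates and widths are illustrative, not the DMDF-Net specification.

```python
import torch
import torch.nn as nn

class DilatedFusionBlock(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        # Each path sees a different receptive field via its dilation rate.
        self.paths = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(len(rates) * out_ch, out_ch, 1)  # mixes the scales

    def forward(self, x):
        return self.fuse(torch.cat([p(x) for p in self.paths], dim=1))

block = DilatedFusionBlock(32, 32)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```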

5.
J Pers Med ; 12(2)2022 Jan 18.
Article in English | MEDLINE | ID: mdl-35207617

ABSTRACT

Morphological attributes of human blastocyst components and their characteristics are highly correlated with the success rate of in vitro fertilization (IVF). Blastocyst component analysis aims to choose the most viable embryos to improve the success rate of IVF. The embryologist evaluates blastocyst viability by manual microscopic assessment of its components, such as the zona pellucida (ZP), trophectoderm (TE), blastocoel (BL), and inner cell mass (ICM). With the success of deep learning in the medical diagnosis domain, semantic segmentation has the potential to detect crucial components of human blastocysts for computerized analysis. In this study, a sprint semantic segmentation network (SSS-Net) is proposed to accurately detect blastocyst components for embryological analysis. The proposed method is based on a fully convolutional semantic segmentation scheme that provides pixel-wise classification of the important blastocyst components, which helps to automatically check the morphology of these elements. SSS-Net uses the sprint convolutional block (SCB), which combines asymmetric kernel convolutions with depth-wise separable convolutions to reduce the overall cost of the network. SSS-Net is a shallow architecture with dense feature aggregation, which helps achieve better segmentation. It requires fewer trainable parameters (4.04 million) than state-of-the-art methods. SSS-Net was evaluated using a publicly available human blastocyst image dataset for component segmentation. The experimental results confirm that our proposal provides promising segmentation performance, with Jaccard indices of 82.88%, 77.40%, 88.39%, 84.94%, and 96.03% for ZP, TE, BL, ICM, and background, respectively, with residual connectivity, and Jaccard indices of 84.51%, 78.15%, 88.68%, 84.50%, and 95.82%, respectively, with dense connectivity. The proposed SSS-Net achieves a mean Jaccard index of 85.93% with residual connectivity and 86.34% with dense connectivity, demonstrating effective segmentation of blastocyst components for embryological analysis.
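A minimal PyTorch sketch in the spirit of the SCB: a k x k kernel factorized into asymmetric 1 x k and k x 1 convolutions, followed by a depthwise separable convolution, with a residual connection. The exact layout inside the published SCB is an assumption here.

```python
import torch
import torch.nn as nn

class SprintBlock(nn.Module):
    def __init__(self, ch, k=3):
        super().__init__()
        self.asym = nn.Sequential(  # k*k kernel factorized as 1xk followed by kx1
            nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2)),
            nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0)),
        )
        self.dw = nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch)  # depthwise
        self.pw = nn.Conv2d(ch, ch, 1)                             # pointwise
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pw(self.dw(self.asym(x))) + x)  # residual connectivity

print(SprintBlock(24)(torch.randn(1, 24, 64, 64)).shape)  # torch.Size([1, 24, 64, 64])
```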

6.
J Pers Med ; 12(1)2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35055427

ABSTRACT

BACKGROUND: Early recognition of prostheses before reoperation can reduce perioperative morbidity and mortality. Because of the intricacy of shoulder biomechanics, accurate classification of implant models before surgery is fundamental for planning the correct medical procedure and setting the apparatus for personalized medicine. Expert surgeons usually use X-ray images of prostheses to set the patient-specific apparatus. However, this subjective method is time-consuming and prone to errors. METHOD: As an alternative, artificial intelligence has played a vital role in orthopedic surgery and clinical decision-making for accurate prosthesis placement. In this study, three different deep learning-based frameworks are proposed to identify different types of shoulder implants in X-ray scans. We mainly propose an efficient ensemble network called the Inception Mobile Fully-Connected Convolutional Network (IMFC-Net), which comprises our two designed convolutional neural networks and a classifier. To evaluate the performance of IMFC-Net and state-of-the-art models, experiments were performed with a public data set of 597 de-identified patients (597 shoulder implants). Moreover, to demonstrate the generalizability of IMFC-Net, experiments were performed with two augmentation techniques and without augmentation, in which our model ranked first, with a considerable margin over the comparison models. A gradient-weighted class activation map technique was also used to identify the distinct implant characteristics that drive IMFC-Net classification decisions. RESULTS: The results confirmed that the proposed IMFC-Net model yielded an average accuracy of 89.09%, a precision rate of 89.54%, a recall rate of 86.57%, and an F1-score of 87.94%, which were higher than those of the comparison models. CONCLUSION: The proposed model is efficient and can minimize the revision complexities of implants.
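The two-backbones-plus-classifier ensemble pattern can be sketched as follows; the torchvision MobileNetV2 and ShuffleNetV2 backbones and the four-class output are stand-ins for the paper's two custom CNNs, not the actual IMFC-Net.

```python
import torch
import torch.nn as nn
from torchvision import models

class TwoBackboneEnsemble(nn.Module):
    def __init__(self, n_classes=4):  # assumed four manufacturer classes
        super().__init__()
        self.a = models.mobilenet_v2(weights=None).features  # backbone 1 (stand-in)
        self.b = models.shufflenet_v2_x1_0(weights=None)     # backbone 2 (stand-in)
        self.b.fc = nn.Identity()                            # expose 1024-d features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1280 + 1024, n_classes)  # fused feature vector

    def forward(self, x):
        fa = self.pool(self.a(x)).flatten(1)  # MobileNetV2 features: 1280-d
        fb = self.b(x)                        # ShuffleNetV2 features: 1024-d
        return self.classifier(torch.cat([fa, fb], dim=1))

print(TwoBackboneEnsemble()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 4])
```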

7.
J Pers Med ; 11(10)2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34683149

ABSTRACT

BACKGROUND: Early and accurate detection of COVID-19-related findings in lung computed tomography (CT) scans (such as well-aerated regions, ground-glass opacities, crazy paving and linear opacities, and consolidation) is crucial for preventive measures and treatment. However, the visual assessment of lung CT scans is a time-consuming process, particularly in the case of trivial lesions, and requires medical specialists. METHOD: A recent breakthrough in deep learning methods has boosted the diagnostic capability of computer-aided diagnosis (CAD) systems and further aided health professionals in making effective diagnostic decisions. In this study, we propose a domain-adaptive CAD framework, namely the dilated aggregation-based lightweight network (DAL-Net), for effective recognition of trivial COVID-19 lesions in CT scans. Our network design achieves a fast execution speed (inference time of 43 ms on a single image) with optimal memory consumption (almost 9 MB). To evaluate the performance of the proposed and state-of-the-art models, we considered two publicly accessible datasets, namely COVID-19-CT-Seg (comprising a total of 3520 images of 20 different patients) and MosMed (including a total of 2049 images of 50 different patients). RESULTS: Our method exhibits an average area under the curve (AUC) of up to 98.84%, 98.47%, and 95.51% for COVID-19-CT-Seg, MosMed, and cross-dataset evaluation, respectively, and outperforms various state-of-the-art methods. CONCLUSIONS: These results demonstrate that deep learning-based models are an effective tool for building a robust CAD solution based on CT data in response to the present COVID-19 disaster.

8.
Sensors (Basel) ; 21(14)2021 Jul 06.
Article in English | MEDLINE | ID: mdl-34300373

ABSTRACT

Among the many available biometric identification methods, finger-vein recognition has the advantages that it is difficult to counterfeit, as finger veins are located under the skin, and that user convenience is high, as a non-invasive image capture device is used for recognition. However, blur can occur when acquiring finger-vein images, and it can be categorized into three main types: skin scattering blur, caused by light scattering in the skin layer; optical blur, caused by lens focus mismatch; and motion blur, caused by finger movement. Images degraded by these kinds of blur can significantly reduce finger-vein recognition performance, so restoration of blurred finger-vein images is necessary. Most previous studies have addressed the restoration of skin-scattering-blurred images, and some have addressed the restoration of optically blurred images. However, there has been no research on restoring motion-blurred finger-vein images, which can occur in real environments. To address this problem, this study proposes a new method for improving finger-vein recognition performance by restoring motion-blurred finger-vein images using a modified deblur generative adversarial network (modified DeblurGAN). In experiments conducted on two open databases, the Shandong University homologous multi-modal traits (SDUMLA-HMT) finger-vein database and the Hong Kong Polytechnic University finger-image database version 1, the proposed method demonstrated performance better than that of state-of-the-art methods.


Subjects
Biometry; Veins; Fingers/diagnostic imaging; Hong Kong; Humans; Motion
9.
J Pers Med ; 11(6)2021 Jun 04.
Article in English | MEDLINE | ID: mdl-34199932

ABSTRACT

Accurate nuclear segmentation in histopathology images plays a key role in digital pathology. It is considered a prerequisite for the determination of cell phenotype, nuclear morphometrics, cell classification, and the grading and prognosis of cancer. However, it is a very challenging task because of the different types of nuclei, large intraclass variations, and diverse cell morphologies. Consequently, the manual inspection of such images under high-resolution microscopes is tedious and time-consuming. Alternatively, artificial intelligence (AI)-based automated techniques, which are fast and robust and require less human effort, can be used. Recently, several AI-based nuclear segmentation techniques have been proposed. They have shown significant performance improvement on this task, but there is room for further improvement. Thus, we propose an AI-based nuclear segmentation technique that adopts a new nuclear segmentation network empowered by residual skip connections. Experiments were performed on two publicly available datasets: (1) The Cancer Genome Atlas (TCGA) and (2) Triple-Negative Breast Cancer (TNBC). The results show that our proposed technique achieves an aggregated Jaccard index (AJI) of 0.6794, a Dice coefficient of 0.8084, and an F1-measure of 0.8547 on the TCGA dataset, and an AJI of 0.7332, a Dice coefficient of 0.8441, a precision of 0.8352, a recall of 0.8306, and an F1-measure of 0.8329 on the TNBC dataset. These values are higher than those of the state-of-the-art methods.

10.
J Pers Med ; 11(6)2021 May 27.
Article in English | MEDLINE | ID: mdl-34072079

ABSTRACT

Re-operations and revisions are often performed in patients who have undergone total shoulder arthroplasty (TSA) and reverse total shoulder arthroplasty (RTSA). This necessitates accurate recognition of the implant model and manufacturer to select the correct apparatus and procedure according to the patient's anatomy, as in personalized medicine. Owing to unavailability and ambiguity in a patient's medical data, expert surgeons identify implants through visual comparison of X-ray images. Misidentification causes complications, morbidity, extra financial burden, and wasted time. Despite significant advancements in pattern recognition and deep learning in the medical field, extremely limited research has been conducted on classifying shoulder implants. To overcome these problems, we propose a robust deep learning-based framework comprising an ensemble of convolutional neural networks (CNNs) to classify shoulder implants in X-ray images of different patients. Through our rotational invariant augmentation, the size of the training dataset is increased 36-fold. A modified ResNet and DenseNet are then deeply combined to form a dense residual ensemble-network (DRE-Net). To evaluate DRE-Net, experiments were executed with 10-fold cross-validation on the openly available shoulder implant X-ray dataset. The experimental results showed that DRE-Net achieved an accuracy, F1-score, precision, and recall of 85.92%, 84.69%, 85.33%, and 84.11%, respectively, which were higher than those of state-of-the-art methods. Moreover, we confirmed the generalization capability of our network by testing it in an open-world configuration, and verified the effectiveness of the rotational invariant augmentation.
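A minimal sketch of the 36-fold rotational augmentation, assuming rotated copies in 10-degree steps (36 angles covering 360 degrees); the step size is inferred from the 36-fold increase and is an assumption.

```python
import torch
import torchvision.transforms.functional as TF

def rotational_augment(image: torch.Tensor, n_rotations: int = 36):
    """Return a list of rotated copies of a CxHxW image tensor."""
    step = 360.0 / n_rotations  # 10-degree increments for n_rotations=36
    return [TF.rotate(image, angle=i * step) for i in range(n_rotations)]

augmented = rotational_augment(torch.rand(3, 224, 224))
print(len(augmented), augmented[0].shape)  # 36 torch.Size([3, 224, 224])
```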

11.
Appl Soft Comput ; 108: 107490, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33994894

ABSTRACT

Currently, the coronavirus disease 2019 (COVID-19) pandemic has killed more than one million people worldwide. In the present outbreak, radiological imaging modalities such as computed tomography (CT) and X-ray are being used to diagnose this disease, particularly in the early stage. However, the assessment of radiographic images involves subjective evaluation that is time-consuming and requires substantial clinical skill. Nevertheless, the recent evolution of artificial intelligence (AI) has further strengthened computer-aided diagnosis tools and supported medical professionals in making effective diagnostic decisions. Therefore, in this study, the strength of various AI algorithms was analyzed to diagnose COVID-19 infection from large-scale radiographic datasets. Based on this analysis, a lightweight deep network is proposed; it is the first ensemble design (based on MobileNet, ShuffleNet, and FCNet) in the medical domain (particularly for COVID-19 diagnosis) that uses a reduced number of trainable parameters (3.16 million in total) and outperforms various existing models. Moreover, a multilevel activation visualization layer in the proposed network visualizes lesion patterns as multilevel class activation maps (ML-CAMs) along with the diagnostic result (COVID-19 positive or negative). Such ML-CAM output provides visual insight into the computer's decision and may assist radiologists in validating it, particularly in uncertain situations. Additionally, a novel hierarchical training procedure was adopted to train the proposed network: it trains for an adaptive number of epochs determined by the validation dataset rather than for a fixed number of epochs. The quantitative results show the better performance of the proposed training method over the conventional end-to-end training procedure. A large collection of CT-scan and X-ray datasets (based on six publicly available datasets) was used to evaluate the performance of the proposed model and other baseline methods. The proposed network exhibits promising diagnostic performance: an average F1 score (F1) of 94.60% and 95.94% and an area under the curve (AUC) of 97.50% and 97.99% are achieved for the CT-scan and X-ray datasets, respectively. Finally, a detailed comparative analysis reveals that the proposed model outperforms various state-of-the-art methods in both quantitative and computational performance.
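A minimal sketch of the adaptive-epoch idea: training continues until the validation loss stops improving for a patience window, instead of running a fixed epoch count. The model, loop hooks, and patience value are placeholders, not the paper's exact hierarchical procedure.

```python
import copy
import torch

def train_adaptive(model, train_step, evaluate, max_epochs=200, patience=5):
    """train_step() runs one training epoch; evaluate() returns validation loss."""
    best_loss, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        train_step()
        val_loss = evaluate()
        if val_loss < best_loss:
            best_loss, stale = val_loss, 0
            best_state = copy.deepcopy(model.state_dict())  # keep the best weights
        else:
            stale += 1
            if stale >= patience:  # validation stopped improving: end training
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model, best_loss

# Tiny demo with a dummy model and a scripted validation-loss sequence.
dummy = torch.nn.Linear(2, 2)
losses = iter([1.0, 0.8, 0.9, 0.85, 0.83, 0.9, 1.0, 1.1])
model, best = train_adaptive(dummy, train_step=lambda: None,
                             evaluate=lambda: next(losses), patience=3)
print(best)  # 0.8
```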

12.
IEEE J Biomed Health Inform ; 25(6): 1881-1891, 2021 06.
Article in English | MEDLINE | ID: mdl-33835928

ABSTRACT

In the present epidemic of the coronavirus disease 2019 (COVID-19), radiological imaging modalities, such as X-ray and computed tomography (CT), have been identified as effective diagnostic tools. However, the subjective assessment of radiographic examinations is a time-consuming task and demands expert radiologists. Recent advancements in artificial intelligence have enhanced the diagnostic power of computer-aided diagnosis (CAD) tools and assisted medical specialists in making efficient diagnostic decisions. In this work, we propose an optimal multilevel deep-aggregated boosted network to recognize COVID-19 infection from heterogeneous radiographic data, including X-ray and CT images. Our method leverages multilevel deep-aggregated features and multistage training via a mutually beneficial approach to maximize the overall CAD performance. To improve the interpretation of CAD predictions, these multilevel deep features are visualized as additional outputs that can assist radiologists in validating the CAD results. A total of six publicly available datasets were fused to build a single large-scale heterogeneous radiographic collection that was used to analyze the performance of the proposed technique and other baseline methods. To preserve the generality of our method, we selected different patients' data for training, validation, and testing; consequently, the data of the same patient were not included in more than one of the training, validation, and testing subsets. In addition, fivefold cross-validation was performed in all experiments for a fair evaluation. Our method exhibits promising performance values of 95.38%, 95.57%, 92.53%, 98.14%, 93.16%, and 98.55% in terms of average accuracy, F-measure, specificity, sensitivity, precision, and area under the curve, respectively, and outperforms various state-of-the-art methods.
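The patient-disjoint protocol described above can be sketched with scikit-learn's GroupShuffleSplit, grouping images by patient ID so that no patient appears in more than one subset; the toy file names and IDs are placeholders.

```python
from sklearn.model_selection import GroupShuffleSplit

images = [f"img_{i}.png" for i in range(10)]
patients = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]  # patient ID for each image

splitter = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, test_idx = next(splitter.split(images, groups=patients))

train_patients = {patients[i] for i in train_idx}
test_patients = {patients[i] for i in test_idx}
assert train_patients.isdisjoint(test_patients)  # no patient-level leakage
print(sorted(train_patients), sorted(test_patients))
```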


Subjects
COVID-19/diagnostic imaging; Deep Learning; COVID-19/virology; Diagnosis, Computer-Assisted/methods; Humans; Neural Networks, Computer; SARS-CoV-2/isolation & purification; Tomography, X-Ray Computed/methods
13.
Sensors (Basel) ; 21(2)2021 Jan 13.
Article in English | MEDLINE | ID: mdl-33451009

ABSTRACT

A conventional finger-vein recognition system is trained using one type of database and suffers serious performance degradation when tested with a different type of database. This degradation is caused by changes in image characteristics due to variable factors such as the position of the camera, the finger, and the lighting; therefore, each database has different characteristics despite sharing the finger-vein modality. However, previous research on improving the recognition accuracy for unobserved or heterogeneous databases is lacking. To overcome this problem, we propose a method to improve finger-vein recognition accuracy using domain adaptation between heterogeneous databases with cycle-consistent adversarial networks (CycleGAN), which enhances the recognition accuracy on unobserved data. The experiments were performed with two open databases: the Shandong University homologous multi-modal traits finger-vein database (SDUMLA-HMT-DB) and the Hong Kong Polytechnic University finger-image database (HKPolyU-DB). They showed that the equal error rate (EER) of finger-vein recognition was 0.85% when training with SDUMLA-HMT-DB and testing with HKPolyU-DB, an improvement of 33.1% over the second-best method, and 3.4% when training with HKPolyU-DB and testing with SDUMLA-HMT-DB, an improvement of 4.8% over the second-best method.
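For reference, a minimal NumPy sketch of how an equal error rate (EER) is computed from genuine and impostor matching distances, the metric reported above; the synthetic score distributions are placeholders.

```python
import numpy as np

def compute_eer(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: the operating point where false accept and false reject rates cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # Lower distance = better match: accept when distance <= threshold.
    frr = np.array([(genuine > t).mean() for t in thresholds])    # false rejects
    far = np.array([(impostor <= t).mean() for t in thresholds])  # false accepts
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(0.3, 0.1, 1000)   # genuine-pair distances (assumed lower)
impostor = rng.normal(0.7, 0.1, 1000)  # impostor-pair distances
print(f"EER: {compute_eer(genuine, impostor):.4f}")
```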


Subjects
Fingers; Veins; Databases, Factual; Hong Kong; Humans
14.
J Pers Med ; 12(1)2021 Dec 23.
Article in English | MEDLINE | ID: mdl-35055322

ABSTRACT

Retinal blood vessels are valuable biomarkers for the detection of diabetic retinopathy, hypertensive retinopathy, and other retinal disorders. Ophthalmologists analyze retinal vasculature by manual segmentation, which is a tedious task. Numerous studies have addressed automatic retinal vasculature segmentation using different methods for ophthalmic disease analysis; however, most of these methods are computationally expensive and lack robustness. This paper proposes two new shallow deep learning architectures, a dual-stream fusion network (DSF-Net) and a dual-stream aggregation network (DSA-Net), to accurately detect retinal vasculature. The proposed method uses semantic segmentation of raw color fundus images for the screening of diabetic and hypertensive retinopathies. Its performance is assessed using three publicly available fundus image datasets: Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE), and the Child Heart and Health Study in England database (CHASE-DB1). The experimental results revealed that the proposed method provided superior segmentation performance, with accuracy (Acc), sensitivity (SE), specificity (SP), and area under the curve (AUC) of 96.93%, 82.68%, 98.30%, and 98.42% for DRIVE; 97.25%, 82.22%, 98.38%, and 98.15% for CHASE-DB1; and 97.00%, 86.07%, 98.00%, and 98.65% for STARE. The results also show that the proposed DSA-Net provides higher SE than existing approaches, meaning that it detected the minor vessels and produced the fewest false negatives, which is extremely important for diagnosis. The method provides an automatic and accurate segmentation mask that can be used to highlight vessel pixels. The detected vasculature can be used to compute the ratio between vessel and non-vessel pixels to distinguish between diabetic and hypertensive retinopathies, and its morphology can be analyzed for related retinal disorders.
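The vessel-to-non-vessel pixel ratio mentioned at the end can be computed directly from a binary segmentation mask; the random mask below is a stand-in for a real DSA-Net output (584 x 565 is the DRIVE image size).

```python
import numpy as np

mask = np.random.default_rng(0).random((584, 565)) > 0.9  # fake binary vessel mask
vessel = int(mask.sum())
non_vessel = mask.size - vessel
print(f"vessel/non-vessel ratio: {vessel / non_vessel:.4f}")
```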

15.
JMIR Med Inform ; 8(12): e21790, 2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33284119

ABSTRACT

BACKGROUND: Tuberculosis (TB) is one of the most infectious diseases and can be fatal. Its early diagnosis and treatment can significantly reduce the mortality rate. In the literature, several computer-aided diagnosis (CAD) tools have been proposed for the efficient diagnosis of TB from chest radiograph (CXR) images. However, the majority of previous studies adopted conventional handcrafted feature-based algorithms. Some recent CAD tools have utilized the strength of deep learning methods to further enhance diagnostic performance. Nevertheless, all these existing methods can only classify a given CXR image into a binary class (either TB positive or TB negative) without providing further descriptive information. OBJECTIVE: The main objective of this study is to propose a comprehensive CAD framework for the effective diagnosis of TB by providing visual as well as descriptive information from a database of previous patients. METHODS: To accomplish this, we first propose a fusion-based deep classification network for the CAD decision that exhibits promising performance compared with various state-of-the-art methods. Furthermore, a multilevel similarity measure algorithm is devised, based on multiscale information fusion, to retrieve the best-matched cases from the previous database. RESULTS: The performance of the framework was evaluated on 2 well-known CXR data sets made available by the US National Library of Medicine and the National Institutes of Health. Our classification model exhibited the best diagnostic performance (0.929, 0.937, 0.921, 0.928, and 0.965 for F1 score, average precision, average recall, accuracy, and area under the curve, respectively) and outperformed various state-of-the-art methods. CONCLUSIONS: This paper presents a comprehensive CAD framework to diagnose TB from CXR images by retrieving relevant cases and their clinical observations from a database of previous patients. These retrieval results assist the radiologist in making an effective diagnostic decision about the current medical condition of a patient, and they can also help the radiologist subjectively validate the CAD decision.
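A minimal sketch of similarity-based case retrieval: cosine similarity between a query feature vector and stored case features, returning the top-k matches. Collapsing the multilevel features into a single vector is a simplifying assumption relative to the paper's multilevel measure.

```python
import numpy as np

def retrieve_top_k(query: np.ndarray, database: np.ndarray, k: int = 3):
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                   # cosine similarity to every stored case
    top = np.argsort(-sims)[:k]     # indices of the k most similar cases
    return top, sims[top]

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(500, 256))  # 500 previous cases, 256-d features
query = rng.normal(size=256)            # feature vector of the current patient
idx, scores = retrieve_top_k(query, db_feats)
print(idx, np.round(scores, 3))
```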

16.
J Med Internet Res ; 22(11): e18563, 2020 11 26.
Article in English | MEDLINE | ID: mdl-33242010

ABSTRACT

BACKGROUND: The early diagnosis of various gastrointestinal diseases can lead to effective treatment and reduce the risk of many life-threatening conditions. Unfortunately, various small gastrointestinal lesions are undetectable during early-stage examination by medical experts. In previous studies, various deep learning-based computer-aided diagnosis tools have made a significant contribution to the effective diagnosis and treatment of gastrointestinal diseases. However, most of these methods were designed to detect a limited number of gastrointestinal diseases, such as polyps, tumors, or cancers, in a specific part of the human gastrointestinal tract. OBJECTIVE: This study aimed to develop a comprehensive computer-aided diagnosis tool to assist medical experts in diagnosing various types of gastrointestinal diseases. METHODS: Our proposed framework comprises a deep learning-based classification network followed by a retrieval method. In the first step, the classification network predicts the disease type for the current medical condition. Then, the retrieval part of the framework shows the relevant cases (endoscopic images) from the previous database. These past cases help the medical expert validate the current computer prediction subjectively, which ultimately results in better diagnosis and treatment. RESULTS: All the experiments were performed using 2 endoscopic data sets with a total of 52,471 frames and 37 different classes. The optimal performances obtained by our proposed method in accuracy, F1 score, mean average precision, and mean average recall were 96.19%, 96.99%, 98.18%, and 95.86%, respectively. The overall performance of our proposed diagnostic framework substantially outperformed state-of-the-art methods. CONCLUSIONS: This study provides a comprehensive computer-aided diagnosis framework for identifying various types of gastrointestinal diseases. The results show the superiority of our proposed method over various other recent methods and illustrate its potential for clinical diagnosis and treatment. Our proposed network is also applicable to other classification domains in medical imaging, such as computed tomography scans, magnetic resonance imaging, and ultrasound sequences.


Subjects
Deep Learning/standards; Diagnosis, Computer-Assisted/methods; Endoscopy, Gastrointestinal/methods; Gastrointestinal Tract/pathology; Databases, Factual; Humans
17.
Sensors (Basel) ; 20(21)2020 Oct 22.
Article in English | MEDLINE | ID: mdl-33105736

ABSTRACT

In vivo diseases such as colorectal cancer and gastric cancer are occurring increasingly in humans; these are two of the most common types of cancer causing death worldwide. Therefore, the early detection and treatment of these types of cancer are crucial for saving lives. With the advances in technology and image processing techniques, computer-aided diagnosis (CAD) systems have been developed and applied in several medical systems to assist doctors in diagnosing diseases using imaging technology. In this study, we propose a CAD method to preclassify in vivo endoscopic images into negative cases (images without evidence of a disease) and positive cases (images that possibly include pathological sites, such as a polyp, or suspected regions with complex vascular information). The goal of our study is to help doctors focus on the positive frames of an endoscopic sequence rather than the negative frames, thereby enhancing performance and reducing the effort required of doctors during the diagnosis procedure. Although previous studies have addressed this problem, they were mostly based on a single classification model, which limits classification performance. We therefore propose the use of multiple classification models based on ensemble learning techniques to enhance the performance of pathological site classification. Through experiments with an open database, we confirmed that an ensemble of multiple deep learning-based models with different network architectures is more efficient for enhancing the performance of pathological site classification in a CAD system than the state-of-the-art methods.
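A minimal sketch of score-level ensembling, where the softmax outputs of architecturally different classifiers are averaged per frame; the three torchvision backbones are stand-ins for the models actually ensembled.

```python
import torch
from torchvision import models

# Three architecturally different classifiers (untrained stand-ins).
nets = [models.resnet18(weights=None),
        models.densenet121(weights=None),
        models.mobilenet_v2(weights=None)]
for n in nets:
    n.eval()  # inference mode for batch-norm layers

x = torch.randn(1, 3, 224, 224)  # one endoscopic frame (placeholder)
with torch.no_grad():
    # Average per-model class probabilities: a soft-voting ensemble.
    probs = torch.stack([net(x).softmax(dim=1) for net in nets]).mean(dim=0)
print(probs.shape, probs.argmax(dim=1))  # ensemble prediction for the frame
```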


Subjects
Colorectal Neoplasms/diagnostic imaging; Deep Learning; Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted; Stomach Neoplasms/diagnostic imaging; Databases, Factual; Endoscopy; Humans
18.
Sensors (Basel) ; 20(18)2020 Sep 14.
Article in English | MEDLINE | ID: mdl-32937774

ABSTRACT

Long-distance recognition methods in indoor environments are commonly divided into two categories: face recognition, and combined face and body recognition. Cameras are typically installed on ceilings for face recognition; hence, it is difficult to obtain a frontal image of an individual. Therefore, many studies combine the face and body information of an individual. However, the distance between the camera and an individual is shorter in indoor environments than in outdoor environments, so face information is distorted by motion blur. Several studies have examined the deblurring of face images, but there is a paucity of studies on the deblurring of body images. To tackle the blur problem, a recognition method is proposed wherein the blur in body and face images is restored using a generative adversarial network (GAN), and face and body features obtained using a deep convolutional neural network (CNN) are combined through matching-score fusion. Our self-constructed Dongguk face and body database version 2 (DFB-DB2) and the open ChokePoint dataset were used in this study. The equal error rate (EER) of human recognition was 7.694% on DFB-DB2 and 5.069% on the ChokePoint dataset, and the proposed method exhibited better results than state-of-the-art methods.
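A minimal sketch of matching-score fusion, assuming a weighted sum of face and body distances with a validation-tuned weight; the weight and scores below are illustrative.

```python
def fuse_scores(face_score: float, body_score: float, w: float = 0.6) -> float:
    """Weighted-sum fusion; lower distance = better match (assumed convention)."""
    return w * face_score + (1.0 - w) * body_score

genuine = fuse_scores(face_score=0.21, body_score=0.35)
impostor = fuse_scores(face_score=0.74, body_score=0.66)
print(genuine, impostor)  # the fused genuine distance should stay below impostor
```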


Subjects
Automated Facial Recognition; Biometric Identification/instrumentation; Face; Image Processing, Computer-Assisted; Neural Networks, Computer; Databases, Factual; Humans; Motion
19.
Sensors (Basel) ; 20(14)2020 Jul 14.
Article in English | MEDLINE | ID: mdl-32674485

ABSTRACT

Deep learning-based marker detection for autonomous drone landing is widely studied owing to its superior detection performance. However, no study has addressed non-uniform motion-blurred input images, and most previous handcrafted and deep learning-based methods fail on these challenging inputs. To solve this problem, we propose a deep learning-based marker detection method for autonomous drone landing that (1) introduces a two-phase framework of deblurring and object detection, adopting a slimmed version of the deblur generative adversarial network (DeblurGAN) model and a You Only Look Once version 2 (YOLOv2) detector, respectively, and (2) considers the balance between the processing time and accuracy of the system. To this end, we propose a channel-pruning framework for slimming the DeblurGAN model, called SlimDeblurGAN, without significant accuracy degradation. The experimental results on two datasets showed that our proposed method exhibited higher performance and greater robustness than previous methods in both deblurring and marker detection.
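One common channel-pruning criterion, sketched below, ranks channels by the magnitude of their batch-norm scale factors and keeps only those above a threshold; whether SlimDeblurGAN uses this exact criterion is an assumption.

```python
import torch
import torch.nn as nn

layer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
bn = layer[1]
with torch.no_grad():
    bn.weight.copy_(torch.rand(16))  # stand-in for trained gamma (scale) values

threshold = 0.5
keep = (bn.weight.abs() > threshold).nonzero(as_tuple=True)[0]
print(f"keeping {len(keep)}/16 channels:", keep.tolist())
# A slimmed conv layer would then be rebuilt with out_channels=len(keep),
# copying over only the kept filters' weights.
```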

20.
Sensors (Basel) ; 20(12)2020 Jun 18.
Article in English | MEDLINE | ID: mdl-32570943

ABSTRACT

Ophthalmological analysis plays a vital role in the diagnosis of various eye diseases, such as glaucoma, retinitis pigmentosa (RP), and diabetic and hypertensive retinopathy. RP is a genetic retinal disorder that leads to progressive vision degeneration and initially causes night blindness. Currently, the most commonly applied method for diagnosing retinal diseases is optical coherence tomography (OCT)-based disease analysis; in contrast, fundus imaging-based diagnosis is considered a low-cost solution for retinal diseases. This study focuses on the detection of RP from fundus images, a challenging task because of the low quality of fundus images and non-cooperative image acquisition conditions. Automatic detection of pigment signs in fundus images can help ophthalmologists and medical practitioners diagnose and analyze RP disorders. To accurately segment pigment signs for diagnostic purposes, we present an automatic RP segmentation network (RPS-Net), a deep learning-based semantic segmentation network specifically designed to detect and segment pigment signs with fewer trainable parameters. Compared with conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes and to accurately segment the diseased area from the background. Because pigment spots can be very small and consist of very few pixels, RPS-Net provides fine segmentation, even for degraded images, by importing high-frequency information from the preceding layers through concatenation inside and outside the encoder-decoder. To evaluate RPS-Net, experiments were performed based on 4-fold cross-validation using the publicly available Retinal Images for Pigment Signs (RIPS) dataset for the detection and segmentation of retinal pigments. The experimental results show that RPS-Net achieved segmentation performance superior to that of state-of-the-art methods for RP diagnosis.


Subjects
Deep Learning; Retinitis Pigmentosa; Tomography, Optical Coherence; Fundus Oculi; Humans; Retina/diagnostic imaging; Retinitis Pigmentosa/diagnosis