1.
Magn Reson Med ; 91(6): 2483-2497, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38342983

ABSTRACT

PURPOSE: We introduced a novel reconstruction network, the jointly unrolled cross-domain optimization-based spatio-temporal reconstruction network (JUST-Net), aimed at accelerating 3D multi-echo gradient-echo (mGRE) data acquisition and improving the quality of the resulting myelin water imaging (MWI) maps. METHODS: An unrolled cross-domain spatio-temporal reconstruction network was designed. The main idea is to combine frequency and spatio-temporal image feature representations and to sequentially apply convolution layers in both domains. The k-space subnetwork utilizes shared information from adjacent frames, whereas the image subnetwork applies separate convolutions in the spatial and temporal dimensions. The proposed reconstruction network was evaluated for both retrospectively and prospectively accelerated acquisitions. Furthermore, it was assessed in simulation studies and real-world cases with k-space corruptions to evaluate its potential for motion artifact reduction. RESULTS: The proposed JUST-Net enabled highly reproducible and accelerated 3D mGRE acquisition for whole-brain MWI, reducing the acquisition time from the fully sampled 15:23 to 2:22 min, with a 3-min reconstruction time. The normalized root mean squared error of the reconstructed mGRE images increased by less than 4.0%, and the correlation coefficients for MWI were above 0.68 when compared to the fully sampled reference. Additionally, the proposed method demonstrated a mitigating effect in both simulated and clinical motion-corrupted cases. CONCLUSION: The proposed JUST-Net demonstrated the capability to achieve high acceleration factors for 3D mGRE-based MWI, which is expected to facilitate widespread clinical application of MWI.
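An unrolled reconstruction of this kind alternates learned image-domain and k-space updates with a data-consistency step that re-imposes the acquired samples. A minimal sketch of that data-consistency projection, assuming flat lists of complex k-space samples and a boolean sampling mask (all names are hypothetical, not the JUST-Net implementation):

```python
def data_consistency(k_estimate, k_measured, sampled_mask):
    """Replace estimated k-space samples with measured ones wherever
    data were actually acquired; keep the network's estimate elsewhere."""
    return [m if s else e
            for e, m, s in zip(k_estimate, k_measured, sampled_mask)]

# toy example: only every other sample was acquired
est = [1 + 0j, 2 + 0j, 3 + 0j, 4 + 0j]
meas = [9 + 0j, 9 + 0j, 9 + 0j, 9 + 0j]
mask = [True, False, True, False]
print(data_consistency(est, meas, mask))  # [(9+0j), (2+0j), (9+0j), (4+0j)]
```

In a full unrolled network this projection would be interleaved with the k-space and image subnetworks at every iteration.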


Subjects
Myelin Sheath; Water; Magnetic Resonance Imaging/methods; Retrospective Studies; Imaging, Three-Dimensional/methods; Image Processing, Computer-Assisted/methods
2.
J Xray Sci Technol ; 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38701131

ABSTRACT

BACKGROUND: The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE: This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS: Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS: Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS: The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION: Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.

3.
Hum Brain Mapp ; 44(15): 4986-5001, 2023 10 15.
Article in English | MEDLINE | ID: mdl-37466309

ABSTRACT

Magnetic resonance electrical properties tomography (MR-EPT) is a non-invasive measurement technique that derives the electrical properties (EPs, e.g., conductivity or permittivity) of tissues in the radiofrequency range (64 MHz for 1.5 T and 128 MHz for 3 T MR systems). Clinical studies have shown the potential of tissue conductivity as a biomarker. To date, model-based conductivity reconstructions rely on numerical assumptions and approximations, leading to inaccuracies in the reconstructed maps. To address such limitations, we propose an artificial neural network (ANN)-based non-linear conductivity estimator trained on simulated data for conductivity brain imaging. Network training was performed on 201 synthesized T2-weighted spin-echo (SE) datasets obtained from finite-difference time-domain (FDTD) electromagnetic (EM) simulation. The dataset was composed of an approximated T2-w SE magnitude and transceive phase information. The proposed method was tested on three in-silico datasets and in-vivo on data from two volunteers and three patients. For comparison purposes, various conventional phase-based EPT reconstruction methods were used that ignore B1+ magnitude information, such as the Savitzky-Golay kernel combined with a Gaussian filter (S-G Kernel), phase-based convection-reaction EPT (cr-EPT), magnitude-weighted polynomial-fitting phase-based EPT (Poly-Fit), and integral-based phase-based EPT (Integral-based). In the in-silico experiments, quantitative analysis showed that the proposed method provides more accurate and higher-quality (e.g., high structural preservation) conductivity maps than the conventional reconstruction methods.
Representatively, in the healthy-brain in-silico phantom experiment, the proposed method yielded mean conductivity values of 1.97 ± 0.20 S/m for CSF, 0.33 ± 0.04 S/m for WM, and 0.52 ± 0.08 S/m for GM, closer to the ground-truth conductivities (2.00, 0.30, 0.50 S/m) than the integral-based method (2.56 ± 2.31, 0.39 ± 0.12, 0.68 ± 0.33 S/m). In-vivo ANN-based conductivity reconstructions were also of improved quality compared to conventional reconstructions and demonstrated the network's generalizability and robustness to in-vivo data and pathologies. The reported in-vivo brain conductivity values were in agreement with the literature. In addition, the proposed method was evaluated at various SNR levels (10, 20, 40, and 58) and under repeatability conditions (eight acquisitions with the number of signal averages = 1). The preliminary investigations on brain tumor patient datasets suggest that a network trained on a simulated dataset can generalize to unforeseen in-vivo pathologies, demonstrating its potential for clinical applications.
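The phase-only baselines listed above all rest on the standard approximation that conductivity is proportional to the Laplacian of the transceive phase, sigma ≈ Laplacian(phi) / (2 * mu0 * omega). A 1D finite-difference sketch of that approximation (toy values; this is the textbook formula, not any of the papers' implementations):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def phase_based_conductivity(phase, dx, larmor_hz):
    """Phase-only EPT: central-difference Laplacian of the transceive
    phase divided by 2*mu0*omega (1D toy version, interior points only)."""
    omega = 2 * math.pi * larmor_hz
    out = []
    for i in range(1, len(phase) - 1):
        laplacian = (phase[i - 1] - 2 * phase[i] + phase[i + 1]) / dx ** 2
        out.append(laplacian / (2 * MU0 * omega))
    return out

# a quadratic phase a*x^2 has Laplacian 2a, so sigma = a / (mu0 * omega)
a, dx = 500.0, 1e-3
phase = [a * (i * dx) ** 2 for i in range(5)]
sigma = phase_based_conductivity(phase, dx, larmor_hz=128e6)  # 3 T
```

The real methods differ mainly in how they compute the Laplacian robustly in the presence of noise (Savitzky-Golay kernels, polynomial fitting, integral forms).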


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Electric Conductivity; Phantoms, Imaging; Neuroimaging; Algorithms
4.
J Magn Reson Imaging ; 58(1): 272-283, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36285604

ABSTRACT

BACKGROUND: Cerebral microbleeds (CMBs) are microscopic brain hemorrhages with implications for various diseases. Automated detection of CMBs is a challenging task due to their wide distribution throughout the brain, small size, and visual similarity to their mimics. For this reason, most previously proposed methods operate in two distinct stages, which may make them difficult to integrate into clinical workflows. PURPOSE: To develop a clinically feasible, end-to-end CMB detection network with a single-stage structure utilizing 3D information. This study proposes the triplanar ensemble detection network (TPE-Det), which ensembles 2D convolutional neural network (CNN)-based detection networks on the axial, sagittal, and coronal planes. STUDY TYPE: Retrospective. SUBJECTS: Two datasets (DS1 and DS2) were used: 1) 116 patients with 367 CMBs and 12 patients without CMBs for training, validation, and testing (70.39 ± 9.30 years, 68 women, 60 men, DS1); 2) 58 subjects with 148 microbleeds and 21 subjects without CMBs for testing only (76.13 ± 7.89 years, 47 women, 32 men, DS2). FIELD STRENGTH/SEQUENCE: 3 T field strength; 3D GRE sequence scans for SWI reconstruction. ASSESSMENT: Sensitivity, FPavg (false positives per subject), and precision were computed and analyzed statistically. STATISTICAL TESTS: A paired t-test was performed to investigate the improvement in detection performance from the suggested ensembling technique. A P value < 0.05 was considered significant. RESULTS: The proposed TPE-Det detected CMBs on the DS1 testing set with a sensitivity of 96.05% and an FPavg of 0.88, a statistically significant improvement. Even when testing on DS2 was performed without retraining, the proposed model provided a sensitivity of 85.03% and an FPavg of 0.55. The precision was significantly higher than that of the other models.
DATA CONCLUSION: The ensembling of multidimensional networks significantly improves precision, suggesting that this new approach could increase the benefits of lesion detection in the clinic. LEVEL OF EVIDENCE: 1 TECHNICAL EFFICACY: Stage 2.
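The triplanar ensembling can be pictured as a per-candidate vote across the axial, sagittal, and coronal detectors. The merge rule below (keep a candidate found in at least two planes) is an illustrative assumption, not necessarily the paper's exact fusion rule:

```python
def triplanar_ensemble(axial, sagittal, coronal, min_views=2):
    """Keep 3D candidate locations detected in at least `min_views`
    of the three planes. Candidates are (x, y, z) tuples."""
    votes = {}
    for plane in (axial, sagittal, coronal):
        for loc in set(plane):  # at most one vote per plane
            votes[loc] = votes.get(loc, 0) + 1
    return sorted(loc for loc, n in votes.items() if n >= min_views)

ax = [(1, 2, 3), (4, 4, 4)]
sg = [(1, 2, 3)]
co = [(1, 2, 3), (7, 7, 7)]
print(triplanar_ensemble(ax, sg, co))  # [(1, 2, 3)]
```

Requiring agreement between views is what suppresses plane-specific false positives and lifts precision, which is the effect the paper reports.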


Subjects
Cerebral Hemorrhage; Magnetic Resonance Imaging; Male; Humans; Female; Magnetic Resonance Imaging/methods; Cerebral Hemorrhage/diagnostic imaging; Cerebral Hemorrhage/pathology; Retrospective Studies; Brain/diagnostic imaging; Brain/pathology; Neural Networks, Computer
5.
Neuroimage ; 259: 119411, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35753594

ABSTRACT

Magnetic Resonance Imaging (MRI) is sensitive to motion caused by patient movement due to the relatively long data acquisition time. This can severely degrade image quality and therefore affect the overall diagnosis. In this paper, we develop an efficient retrospective 2D deep learning method, called stacked U-Nets with self-assisted priors, to address the problem of rigid motion artifacts in 3D brain MRI. The proposed work exploits additional knowledge priors from the corrupted images themselves, without the need for additional contrast data. The proposed network learns missing structural details through sharing auxiliary information from the contiguous slices of the same distorted subject. We further design a refinement stage of stacked U-Nets that facilitates preserving spatial image details and improves the pixel-to-pixel dependency. To perform network training, simulation of MRI motion artifacts is required. The proposed network is optimized by minimizing a structural similarity (SSIM) loss using synthesized motion-corrupted images from 83 real motion-free subjects. We present an intensive analysis using various types of image priors: the proposed self-assisted priors and priors from other image contrasts of the same subject. The experimental analysis demonstrates the effectiveness and feasibility of our self-assisted priors, since they do not require any further data scans. The proposed motion correction network significantly improves the overall quality of the motion-corrected images, raising SSIM from 71.66% to 95.03% and reducing the mean squared error from 99.25 to 29.76. These results indicate the high similarity of the brain's anatomical structure in the corrected images to that of the motion-free data. The motion-corrected results for both simulated and real motion data showed the potential of the proposed motion correction network to be feasible and applicable in clinical practice.
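The SSIM objective minimized here compares the means, variances, and covariance of two images. A global (single-window) sketch over flat pixel lists, assuming the standard constants for the given dynamic range (the actual loss is typically computed over local windows):

```python
def ssim_global(x, y, data_range=255.0):
    """Global structural similarity between two equal-length pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n          # variance of x
    vy = sum((b - my) ** 2 for b in y) / n          # variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1 = (0.01 * data_range) ** 2                   # usual stabilizers
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Training would then minimize `1 - ssim`, pushing the corrected image toward structural agreement with the motion-free reference.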


Subjects
Artifacts; Magnetic Resonance Imaging; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Motion; Retrospective Studies
6.
Adv Exp Med Biol ; 1213: 59-72, 2020.
Article in English | MEDLINE | ID: mdl-32030663

ABSTRACT

For computer-aided diagnosis (CAD), detection, segmentation, and classification from medical imagery are three key components needed to efficiently assist physicians in reaching an accurate diagnosis. In this chapter, a completely integrated CAD system based on deep learning is presented to diagnose breast lesions from digital X-ray mammograms, involving detection, segmentation, and classification. To automatically detect breast lesions from mammograms, a regional deep learning approach called You-Only-Look-Once (YOLO) is used. To segment breast lesions, a full resolution convolutional network (FrCN), a novel deep segmentation model, is implemented and used. Finally, three conventional deep learning models, a regular feedforward CNN, ResNet-50, and InceptionResNet-V2, are separately adopted to classify the detected and segmented breast lesions as either benign or malignant. To evaluate the integrated CAD system for detection, segmentation, and classification, the publicly available and annotated INbreast database is used over fivefold cross-validation tests. The YOLO-based detection achieved an accuracy of 97.27%, a Matthews correlation coefficient (MCC) of 93.93%, and an F1-score of 98.02%. Moreover, the breast lesion segmentation via FrCN achieved an overall accuracy of 92.97%, an MCC of 85.93%, a Dice (F1-score) of 92.69%, and a Jaccard similarity coefficient of 86.37%. The detected and segmented breast lesions were classified via CNN, ResNet-50, and InceptionResNet-V2, achieving average overall accuracies of 88.74%, 92.56%, and 95.32%, respectively. The performance evaluation results through all stages of detection, segmentation, and classification show that the integrated CAD system outperforms the latest conventional deep learning methodologies.
We conclude that our CAD system could be used to assist radiologists over all stages of detection, segmentation, and classification for diagnosis of breast lesions.
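The MCC and F1 figures quoted above follow the usual confusion-matrix definitions. A small helper for computing both from raw counts (a sketch, not the authors' evaluation code):

```python
import math

def f1_and_mcc(tp, fp, tn, fn):
    """F1-score and Matthews correlation coefficient from a 2x2
    confusion matrix (true/false positives and negatives)."""
    f1 = 2 * tp / (2 * tp + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return f1, mcc

f1, mcc = f1_and_mcc(tp=95, fp=5, tn=90, fn=10)
```

Unlike plain accuracy, MCC accounts for all four cells of the matrix, which is why it is reported alongside F1 for the imbalanced lesion-detection setting.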


Subjects
Breast Neoplasms/diagnostic imaging; Deep Learning; Diagnosis, Computer-Assisted; Image Interpretation, Computer-Assisted; Mammography/methods; Humans
7.
J Xray Sci Technol ; 26(5): 727-746, 2018.
Article in English | MEDLINE | ID: mdl-30056442

ABSTRACT

BACKGROUND: Accurate measurement of bone mineral density (BMD) in dual-energy X-ray absorptiometry (DXA) is essential for proper diagnosis of osteoporosis. Calculation of BMD requires precise bone segmentation and subtraction of soft tissue absorption. Femur segmentation remains a challenge, as many existing methods fail to correctly distinguish femur from soft tissue. Reasons for this failure include low contrast and noise in DXA images, bone shape variability, and inconsistent X-ray beam penetration and attenuation, which cause shadowing effects and person-to-person variation. OBJECTIVE: To present a new method, the Pixel Label Decision Tree (PLDT), and test whether it can achieve more accurate femur segmentation in DXA imaging. METHODS: PLDT mainly involves feature extraction and selection. Unlike photographic images, X-ray images include features both on the surface of and inside an object. In order to reveal hidden patterns in DXA images, PLDT generates seven new feature maps from the existing high energy (HE) and low energy (LE) X-ray features and determines the best feature set for the model. The performance of PLDT in femur segmentation is compared with that of three widely used medical image segmentation algorithms: the Global Threshold (GT), the Region Growing Threshold (RGT), and artificial neural networks (ANN). RESULTS: PLDT achieved a higher femur segmentation accuracy in DXA imaging (91.4%) than GT (68.4%), RGT (76%), or ANN (84.4%). CONCLUSIONS: The study demonstrated that PLDT outperformed the other conventional segmentation techniques in segmenting DXA images. Improved segmentation should support accurate computation of BMD, which in turn improves the clinical diagnosis of osteoporosis.
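The abstract does not enumerate the seven PLDT feature maps. Purely as an illustration of per-pixel features derivable from an HE/LE pair in dual-energy work, a sketch of a few plausible candidates; the weighting `k` is a hypothetical calibration constant and these are not PLDT's actual maps:

```python
import math

def derived_features(he, le, k=0.6):
    """Illustrative per-pixel features from high/low-energy attenuation
    values. Hypothetical examples only; PLDT's seven maps differ."""
    return {
        "difference": le - he,
        "ratio": le / he,
        "log_ratio": math.log(le / he),
        # dual-energy style soft-tissue-cancelled combination
        "bone_like": math.log(le) - k * math.log(he),
    }

feats = derived_features(he=2.0, le=4.0)
```

A decision tree would then be trained on such per-pixel feature vectors to label each pixel as femur or soft tissue.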


Subjects
Absorptiometry, Photon/methods; Decision Trees; Femur/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Machine Learning; Humans; Osteoporosis/diagnostic imaging
8.
J Xray Sci Technol ; 26(3): 395-412, 2018.
Article in English | MEDLINE | ID: mdl-29562584

ABSTRACT

BACKGROUND: In general, the image quality of the high and low energy images of dual-energy X-ray absorptiometry (DXA) suffers from noise due to the use of a small X-ray dose. Denoising of DXA images could be a key process in improving the bone mineral density map, which is derived from a pair of high and low energy images. This could further improve the accuracy of diagnosis of bone fractures and osteoporosis. OBJECTIVE: This study aims to develop and test a new technique to improve the quality, remove the noise, and preserve the edges and fine details of real DXA images. METHODS: A denoising technique for high and low energy DXA images using a non-local means (NLM) filter is presented. The source and detector noises of a DXA system were modeled for both high and low DXA images. Then, the optimized parameters of the NLM filter were derived utilizing experimental data from CIRS-BFP phantoms. After that, the optimized NLM was tested and verified using the DXA images of the phantoms and of real human spine and femur. RESULTS: Quantitative evaluation showed average signal-to-noise ratio improvements of 24.22% and 34.43% for the real high and low spine images, respectively, and of about 15.26% and 13.55% for the high and low femur images. Qualitative visual observations of both phantom and real structures also showed significantly improved quality and reduced noise while preserving the edges in both high and low energy images. Our results demonstrate that the proposed NLM outperforms the conventional anisotropic diffusion filter (ADF) and median filtering techniques for all phantom and real human DXA images. CONCLUSIONS: Our work suggests that denoising via NLM could be a key preprocessing method for clinical DXA imaging.
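An NLM filter replaces each sample with a weighted average of samples whose surrounding patches look similar, rather than averaging only spatial neighbors. A compact 1D sketch with hypothetical parameter values (the paper derives its optimized parameters from the CIRS-BFP phantom data):

```python
import math

def nlm_1d(signal, patch=1, search=3, h=0.5):
    """Non-local means on a 1D signal: each sample becomes a weighted
    average of nearby samples, weighted by patch similarity."""
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared distance between the patches centred at i and j
            d = 0.0
            for t in range(-patch, patch + 1):
                a = signal[min(max(i + t, 0), n - 1)]  # clamp at borders
                b = signal[min(max(j + t, 0), n - 1)]
                d += (a - b) ** 2
            w = math.exp(-d / (h * h))  # similar patches get weight ~1
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

The filtering strength `h` is the parameter most directly tied to the noise model; in 2D the same idea runs over square patches and search windows.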


Subjects
Absorptiometry, Photon/methods; Algorithms; Image Processing, Computer-Assisted/methods; Absorptiometry, Photon/instrumentation; Femur/diagnostic imaging; Humans; Image Processing, Computer-Assisted/instrumentation; Phantoms, Imaging; Signal-To-Noise Ratio; Spine/diagnostic imaging
9.
Bioengineering (Basel) ; 11(5)2024 May 10.
Article in English | MEDLINE | ID: mdl-38790344

ABSTRACT

The analysis of body motion is a valuable tool in the assessment and diagnosis of gait impairments, particularly those related to neurological disorders. In this study, we propose a novel automated system leveraging artificial intelligence for efficiently analyzing gait impairment from video-recorded images. The proposed methodology encompasses three key aspects. First, we generate a novel one-dimensional representation of each silhouette image, termed a silhouette sinogram, by computing the distance and angle between the centroid and each detected boundary point. This process enables us to effectively utilize relative variations in motion at different angles to detect gait patterns. Second, a one-dimensional convolutional neural network (1D CNN) model is developed and trained on consecutive silhouette sinogram signals of silhouette frames to capture spatiotemporal information via assisted knowledge learning. This allows the network to capture a broader context and the temporal dependencies within the gait cycle, enabling a more accurate diagnosis of gait abnormalities. Training and evaluation are conducted on the publicly accessible INIT GAIT database. Finally, two evaluation schemes are employed: one leveraging individual silhouette frames and the other operating at the subject level using a majority voting technique. The proposed method showed superior gait impairment recognition, with overall F1-scores of 100%, 90.62%, and 77.32% when evaluated on sinogram signals, and 100%, 100%, and 83.33% when evaluated at the subject level, for cases involving two, four, and six gait abnormalities, respectively. In conclusion, by comparing the observed locomotor function to the conventional gait pattern typically seen in healthy individuals, the proposed approach allows for a quantitative and non-invasive evaluation of locomotion.
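The silhouette-sinogram step reduces each silhouette to a one-dimensional signal of centroid-to-boundary distances indexed by angle. A sketch over plain (x, y) boundary points; the angular binning and ordering details are assumptions, not the paper's exact procedure:

```python
import math

def silhouette_sinogram(boundary):
    """Return (angle, distance) pairs from the silhouette centroid to
    each boundary point, sorted by angle (a 1D shape signature)."""
    n = len(boundary)
    cx = sum(x for x, _ in boundary) / n
    cy = sum(y for _, y in boundary) / n
    pairs = [(math.atan2(y - cy, x - cx), math.hypot(x - cx, y - cy))
             for x, y in boundary]
    return sorted(pairs)

# unit-square corners around centroid (0.5, 0.5): all distances sqrt(2)/2
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sino = silhouette_sinogram(square)
```

Stacking these signals over consecutive frames gives the 2D input from which the 1D CNN learns spatiotemporal gait patterns.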

10.
Math Biosci Eng ; 21(4): 5712-5734, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38872555

ABSTRACT

This research introduces a novel dual-pathway convolutional neural network (DP-CNN) architecture tailored for robust performance on Log-Mel spectrogram images derived from raw multichannel electromyography signals. The primary objective is to assess the effectiveness of the proposed DP-CNN architecture across three datasets (NinaPro DB1, DB2, and DB3), encompassing both able-bodied and amputee subjects. Performance metrics, including accuracy, precision, recall, and F1-score, are employed for comprehensive evaluation. The DP-CNN demonstrates notable mean accuracies of 94.93 ± 1.71% and 94.00 ± 3.65% on NinaPro DB1 and DB2 for healthy subjects, respectively. Additionally, it achieves a robust mean classification accuracy of 85.36 ± 0.82% on amputee subjects in DB3, affirming its efficacy. Comparative analysis with previous methodologies on the same datasets reveals substantial improvements of 28.33%, 26.92%, and 39.09% over the baseline for DB1, DB2, and DB3, respectively. The DP-CNN's superior performance also extends to comparisons with transfer learning models for image classification. Across diverse datasets involving both able-bodied and amputee subjects, the DP-CNN exhibits enhanced capabilities, holding promise for advancing myoelectric control.


Subjects
Algorithms; Amputees; Electromyography; Gestures; Neural Networks, Computer; Signal Processing, Computer-Assisted; Upper Extremity; Humans; Electromyography/methods; Upper Extremity/physiology; Male; Adult; Female; Young Adult; Middle Aged; Reproducibility of Results
11.
Comput Biol Med ; 153: 106553, 2023 02.
Article in English | MEDLINE | ID: mdl-36641933

ABSTRACT

Patient movement during a Magnetic Resonance Imaging (MRI) scan can cause severe degradation of image quality. In Susceptibility Weighted Imaging (SWI), several echoes are typically measured during a single repetition period, where the earliest echoes show less contrast between various tissues, while the later echoes are more susceptible to artifacts and signal dropout. In this paper, we propose a knowledge interaction paradigm that jointly learns feature details from multiple distorted echoes by sharing their knowledge through unified training parameters, thereby simultaneously reducing the motion artifacts of all echoes. This is accomplished by developing a new scheme that boosts a Single Encoder with Multiple Decoders (SEMD), which ensures that the generated features are not only fused but also learned together. We call the proposed method Knowledge Interaction Learning between Multi-Echo data (KIL-ME-based SEMD). The proposed KIL-ME-based SEMD allows information to be shared and the correlations between the multiple echoes to be learned. The main purpose of this work is to correct the motion artifacts and maintain the image quality and structural details of all motion-corrupted echoes towards generating high-resolution susceptibility-enhanced contrast images, i.e., SWI, using a weighted average of multi-echo motion-corrected acquisitions. We also compare various potential strategies that might be used to address the problem of reducing artifacts in multi-echo data. The experimental results demonstrate the feasibility and effectiveness of the proposed method, reducing the severity of motion artifacts and improving the overall clinical image quality of all echoes and their associated SWI maps. Significant improvement in image quality was observed using both motion-simulated test data and actual volunteer data with various motion severities.
Ultimately, by enhancing overall image quality, the proposed network can improve physicians' ability to evaluate and correctly diagnose brain MR images.
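The final SWI contrast is formed from a weighted average of the motion-corrected echoes; per voxel, that combination is just a normalized weighted sum. A sketch with hypothetical weights (the paper's weighting scheme is not specified in the abstract):

```python
def combine_echoes(echoes, weights):
    """Weighted per-voxel average across echoes.
    echoes: list of equal-length magnitude lists (one per echo)."""
    total = sum(weights)
    return [sum(w * echo[i] for echo, w in zip(echoes, weights)) / total
            for i in range(len(echoes[0]))]

# later echoes typically carry more susceptibility contrast, so one
# plausible choice is to weight them more heavily (an assumption here)
swi_input = combine_echoes([[1.0, 2.0], [3.0, 4.0]], weights=[1.0, 3.0])
```

Correcting every echo jointly, as KIL-ME-based SEMD does, matters precisely because this average propagates residual artifacts from any single echo into the final SWI map.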


Subjects
Artifacts; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Image Enhancement/methods; Motion; Image Processing, Computer-Assisted/methods
12.
PLoS One ; 18(11): e0293742, 2023.
Article in English | MEDLINE | ID: mdl-37917752

ABSTRACT

Refactoring, a widely adopted technique, has proven effective in facilitating and reducing maintenance activities and costs. Nonetheless, the effects of applying refactoring techniques on software quality exhibit inconsistencies and contradictions, leading to conflicting evidence on their overall benefit. Consequently, software developers face challenges in leveraging these techniques to improve software quality. Moreover, the absence of a categorization model hampers developers' ability to select the most suitable refactoring techniques for improving software quality, considering specific design goals. Thus, this study aims to propose a novel refactoring categorization model that categorizes techniques based on their measurable impacts on internal quality attributes. Initially, the most common refactoring techniques used by software practitioners were identified. Subsequently, an experimental study was conducted using five case studies to measure the impacts of refactoring techniques on internal quality attributes. A subsequent multi-case analysis was conducted to analyze these effects across the case studies. The proposed model was developed based on the experimental study results and the subsequent multi-case analysis. The model categorizes refactoring techniques into green, yellow, and red categories. The proposed model, by acting as a guideline, assists developers in understanding the effects of each refactoring technique on quality attributes, allowing them to select appropriate techniques to improve specific quality attributes. Compared to existing studies, the proposed model emerges superior by offering a more granular categorization (green, yellow, and red categories), and its scope is wide (including ten refactoring techniques and eleven internal quality attributes). Such granularity not only equips developers with an in-depth understanding of each technique's impact but also fosters informed decision-making.
In addition, the proposed model improves on prior work by offering a more nuanced understanding, explicitly highlighting areas of strength and concern for each refactoring technique. This enhancement aids developers in better grasping the implications of each refactoring technique on quality attributes. As a result, the model simplifies the decision-making process for developers, saving time and effort that would otherwise be spent weighing the benefits and drawbacks of various refactoring techniques. Furthermore, it has the potential to help reduce maintenance activities and associated costs.
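A categorization rule of the kind the model describes can be sketched as counting measured improvements versus degradations across the internal quality attributes. The thresholds below are illustrative assumptions, not the paper's calibration:

```python
def categorize(impacts):
    """Classify a refactoring technique from its measured impacts.
    impacts: dict mapping quality attribute -> measured change
    (positive improves, negative degrades). Hypothetical rule:
    green = no degradations, yellow = net neutral or better,
    red = more degradations than improvements."""
    improved = sum(1 for v in impacts.values() if v > 0)
    degraded = sum(1 for v in impacts.values() if v < 0)
    if degraded == 0 and improved > 0:
        return "green"
    if improved >= degraded:
        return "yellow"
    return "red"

print(categorize({"coupling": +1, "cohesion": +1, "size": 0}))  # green
```

Real attribute impacts would come from metric deltas (e.g., coupling or cohesion measurements) taken before and after applying each technique to the case studies.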


Subjects
Quality Improvement; Software
13.
Comput Methods Programs Biomed ; 240: 107644, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37307766

ABSTRACT

BACKGROUND AND OBJECTIVE: Precisely segmenting brain tumors using multimodal Magnetic Resonance Imaging (MRI) is an essential task for early diagnosis, disease monitoring, and surgical planning. Unfortunately, the complete set of four image modalities used in the well-known BraTS benchmark dataset, T1, T2, Fluid-Attenuated Inversion Recovery (FLAIR), and T1 Contrast-Enhanced (T1CE), is not regularly acquired in clinical practice due to the high cost and long acquisition time. Rather, it is common to utilize limited image modalities for brain tumor segmentation. METHODS: In this paper, we propose a single-stage knowledge distillation algorithm that derives information from the missing modalities for better segmentation of brain tumors. Unlike previous works that adopted a two-stage framework to distill the knowledge from a pre-trained network into a student network trained on a limited image modality, we train both models simultaneously using a single-stage knowledge distillation algorithm. We transfer the information by reducing the redundancy from a teacher network trained on the full image modalities to the student network using a Barlow Twins loss at the latent-space level. To distill the knowledge at the pixel level, we further employ a deep supervision idea that trains the backbone networks of both the teacher and student paths using a Cross-Entropy loss. RESULTS: We demonstrate that the proposed single-stage knowledge distillation approach improves the performance of the student network in each tumor category, with overall dice scores of 91.11% for Tumor Core, 89.70% for Enhancing Tumor, and 92.20% for Whole Tumor when using only the FLAIR and T1CE images, outperforming state-of-the-art segmentation methods. CONCLUSIONS: The outcomes of this work prove the feasibility of exploiting knowledge distillation for segmenting brain tumors with limited image modalities, bringing it closer to clinical practice.
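The Barlow Twins redundancy-reduction loss used at the latent-space level drives the cross-correlation matrix between the teacher's and student's batch-normalized features toward the identity. A compact sketch on plain lists; the off-diagonal weight `lam` is the commonly used default, an assumption here:

```python
import math

def barlow_twins_loss(z1, z2, lam=5e-3):
    """z1, z2: batches of embeddings (lists of equal-length lists).
    Standardize each dimension over the batch, form the cross-
    correlation matrix, then penalize deviation from the identity:
    diagonal terms -> 1 (invariance), off-diagonal -> 0 (redundancy)."""
    n, d = len(z1), len(z1[0])

    def standardize(z):
        cols = []
        for j in range(d):
            col = [row[j] for row in z]
            mu = sum(col) / n
            sd = math.sqrt(sum((v - mu) ** 2 for v in col) / n) or 1.0
            cols.append([(v - mu) / sd for v in col])
        return cols  # d lists of n standardized values

    a, b = standardize(z1), standardize(z2)
    loss = 0.0
    for i in range(d):
        for j in range(d):
            c_ij = sum(a[i][k] * b[j][k] for k in range(n)) / n
            loss += (1 - c_ij) ** 2 if i == j else lam * c_ij ** 2
    return loss

# identical views with perfectly correlated dimensions: the diagonal
# term vanishes and only the redundancy penalty (2 * lam) remains
z = [[1.0, 2.0], [-1.0, 0.0]]
loss = barlow_twins_loss(z, z)
```

In the paper's setting, `z1` and `z2` would be the teacher and student latent features for the same batch, so minimizing this loss transfers the full-modality representation without a separate pre-training stage.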


Subjects
Brain Neoplasms; Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Magnetic Resonance Imaging/methods; Multimodal Imaging
14.
Bioengineering (Basel) ; 10(5)2023 May 20.
Article in English | MEDLINE | ID: mdl-37237687

ABSTRACT

Most current surgical navigation methods rely on optical navigators with images displayed on an external screen. However, minimizing distractions during surgery is critical, and the spatial information displayed in this arrangement is non-intuitive. Previous studies have proposed combining optical navigation systems with augmented reality (AR) to provide surgeons with intuitive imaging during surgery through the use of planar and three-dimensional imagery. However, these studies have mainly focused on visual aids and have paid relatively little attention to real surgical guidance aids. Moreover, the use of augmented reality reduces system stability and accuracy, and optical navigation systems are costly. Therefore, this paper proposes an augmented reality surgical navigation system based on image positioning that achieves the desired system advantages with low cost, high stability, and high accuracy. The system also provides intuitive guidance for the surgical target point, entry point, and trajectory. Once the surgeon uses the navigation stick to indicate the position of the surgical entry point, the connection between the surgical target and the entry point is immediately displayed on the AR device (tablet or HoloLens glasses), and a dynamic auxiliary line is shown to assist with incision angle and depth. Clinical trials were conducted for EVD (extra-ventricular drainage) surgery, and surgeons confirmed the system's overall benefit. A "virtual object automatic scanning" method is proposed to achieve a high accuracy of 1 ± 0.1 mm for the AR-based system. Furthermore, a deep learning-based U-Net segmentation network is incorporated to enable automatic identification of the hydrocephalus location by the system. The system achieves improved recognition accuracy, sensitivity, and specificity of 99.93%, 93.85%, and 95.73%, respectively, a significant improvement over previous studies.
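The entry-to-target guidance reduces to vector geometry: insertion depth is the Euclidean distance between the two points, and the auxiliary line's direction is the normalized difference vector. A sketch with assumed millimeter coordinates (not the system's actual code):

```python
import math

def trajectory(entry, target):
    """Return the insertion depth (same units as the inputs) and the
    unit direction vector from the surgical entry point to the target."""
    delta = [t - e for e, t in zip(entry, target)]
    depth = math.sqrt(sum(c * c for c in delta))
    direction = [c / depth for c in delta]
    return depth, direction

depth, direction = trajectory((0.0, 0.0, 0.0), (30.0, 0.0, 40.0))
print(depth)  # 50.0
```

The AR overlay would render this line anchored at the entry point and report the remaining depth as the instrument advances along it.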

15.
Biomedicines ; 10(11)2022 Nov 18.
Article in English | MEDLINE | ID: mdl-36428538

ABSTRACT

Breast cancer, which attacks the glandular epithelium of the breast, is the second most common kind of cancer in women after lung cancer, and it affects a significant number of people worldwide. Building on the advantages of a residual convolutional network and a Transformer encoder with a multilayer perceptron (MLP), this study proposes a novel hybrid deep learning computer-aided diagnosis (CAD) system for breast lesions. While the backbone residual deep learning network creates the deep features, the Transformer classifies breast cancer via the self-attention mechanism. The proposed CAD system recognizes breast cancer in two scenarios: Scenario A (binary classification) and Scenario B (multi-classification). Data collection and preprocessing, patch image creation and splitting, and artificial intelligence-based breast lesion identification are all components of the execution framework, applied consistently across both scenarios. The effectiveness of the proposed AI model is compared against three separate deep learning models: a custom CNN, VGG16, and ResNet50. Two datasets, CBIS-DDSM and DDSM, are utilized to construct and test the proposed CAD system. Five-fold cross-validation of the test data is used to evaluate the accuracy of the performance results. The suggested hybrid CAD system achieves encouraging results, with overall accuracies of 100% and 95.80% for the binary and multiclass prediction challenges, respectively. The experimental results reveal that the proposed hybrid AI model reliably distinguishes benign from malignant breast tissue, which is important for radiologists when recommending further investigation of abnormal mammograms and planning optimal treatment.
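
The classifier described above pairs CNN deep features with a Transformer's self-attention. As a rough NumPy sketch of scaled dot-product self-attention (the token count, dimensions, and random weights are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a set of feature tokens."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ v                              # attention-weighted mixture

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 64))                  # 16 "deep feature" tokens, dim 64
wq, wk, wv = (rng.normal(size=(64, 64)) for _ in range(3))
out = self_attention(tokens, wq, wk, wv)
print(out.shape)  # (16, 64)
```

Stacking such layers with an MLP head over CNN feature tokens is the general pattern hybrid CNN-Transformer classifiers follow.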

16.
Sci Rep ; 11(1): 10191, 2021 05 13.
Article in English | MEDLINE | ID: mdl-33986375

ABSTRACT

Medical image segmentation of tissue abnormalities, key organs, or the blood vascular system is of great significance for any computerized diagnostic system. However, automatic segmentation in medical image analysis is a challenging task, since it requires sophisticated knowledge of the target organ anatomy. This paper develops an end-to-end deep learning segmentation method called the Contextual Multi-Scale Multi-Level Network (CMM-Net). The main idea is to fuse the global contextual features of multiple spatial scales at every contracting convolutional network level in the U-Net. We also re-exploit the dilated convolution module, which expands the receptive field at different rates depending on the size of the feature maps throughout the network. In addition, an augmented testing scheme referred to as Inversion Recovery (IR), which uses logical "OR" and "AND" operators, is developed. The proposed segmentation network is evaluated on three medical imaging datasets: ISIC 2017 for skin lesion segmentation from dermoscopy images, DRIVE for retinal blood vessel segmentation from fundus images, and BraTS 2018 for brain glioma segmentation from MR scans. The experimental results showed state-of-the-art performance, with overall Dice similarity coefficients of 85.78%, 80.27%, and 88.96% on the segmentation of skin lesions, retinal blood vessels, and brain tumors, respectively. The proposed CMM-Net is inherently general and could be efficiently applied as a robust tool for various medical image segmentation tasks.
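
The Inversion Recovery (IR) augmented-testing idea above fuses binary prediction masks with logical operators; a minimal sketch, assuming two augmented-pass masks with made-up values, could be:

```python
import numpy as np

def inversion_recovery(masks, mode="or"):
    """Fuse binary segmentation masks from augmented test passes.

    mode="or" keeps a pixel if any pass predicts it (favours sensitivity);
    mode="and" keeps it only if all passes agree (favours specificity).
    """
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stacked.any(axis=0) if mode == "or" else stacked.all(axis=0)

m1 = np.array([[1, 0], [1, 1]])  # hypothetical mask from pass 1
m2 = np.array([[1, 1], [0, 1]])  # hypothetical mask from pass 2
print(inversion_recovery([m1, m2], "or").astype(int))   # [[1 1] [1 1]]
print(inversion_recovery([m1, m2], "and").astype(int))  # [[1 0] [0 1]]
```

Choosing OR trades specificity for sensitivity; AND does the reverse.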


Subjects
Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Brain Neoplasms/diagnostic imaging , Databases, Factual , Deep Learning , Glioma/diagnostic imaging , Humans , Neural Networks, Computer , Retina/diagnostic imaging , Retinal Vessels/diagnostic imaging , Skin/diagnostic imaging , Specimen Handling/methods
17.
Comput Methods Programs Biomed ; 190: 105351, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32028084

ABSTRACT

BACKGROUND AND OBJECTIVE: Computer-automated diagnosis of various skin lesions through medical dermoscopy images remains a challenging task. METHODS: In this work, we propose an integrated diagnostic framework that combines a skin lesion boundary segmentation stage and a multiple skin lesion classification stage. First, we segment the skin lesion boundaries from the entire dermoscopy images using a deep learning full-resolution convolutional network (FrCN). Then, a convolutional neural network classifier (i.e., Inception-v3, ResNet-50, Inception-ResNet-v2, or DenseNet-201) is applied to the segmented skin lesions for classification. The former stage is a critical prerequisite for skin lesion diagnosis, since it extracts prominent features of various types of skin lesions. A promising classifier is selected by testing well-established classification convolutional neural networks. The proposed integrated deep learning model has been evaluated using three independent datasets (i.e., International Skin Imaging Collaboration (ISIC) 2016, 2017, and 2018, which contain two, three, and seven types of skin lesions, respectively) with proper balancing, segmentation, and augmentation. RESULTS: In the integrated diagnostic system, segmented lesions improve the classification performance of Inception-ResNet-v2 by 2.72% and 4.71% in terms of the F1-score for benign and malignant cases of the ISIC 2016 test dataset, respectively. The Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 classifiers achieve overall weighted prediction accuracies of 77.04%, 79.95%, 81.79%, and 81.27% for the two classes of ISIC 2016; 81.29%, 81.57%, 81.34%, and 73.44% for the three classes of ISIC 2017; and 88.05%, 89.28%, 87.74%, and 88.70% for the seven classes of ISIC 2018, respectively, demonstrating the superior performance of ResNet-50.
CONCLUSIONS: The proposed integrated diagnostic networks could be used to support dermatologists and further improve skin cancer diagnosis.
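
One common way to pass a segmented lesion to a downstream classifier, in the spirit of the prerequisite segmentation stage described above, is to zero out the background and crop to the lesion's bounding box. This is an illustrative NumPy sketch, not the paper's exact preprocessing:

```python
import numpy as np

def masked_crop(image, mask):
    """Zero the background and crop to the lesion's bounding box."""
    ys, xs = np.nonzero(mask)                       # lesion pixel coordinates
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return (image * mask)[y0:y1, x0:x1]             # background-suppressed crop

img = np.arange(16.0).reshape(4, 4)                 # toy dermoscopy patch
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1                                  # toy lesion mask
print(masked_crop(img, mask).shape)  # (2, 2)
```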


Subjects
Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Skin Neoplasms/diagnosis , Dermoscopy , Humans , Machine Learning , Neural Networks, Computer , Skin Neoplasms/classification
18.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1055-1058, 2020 07.
Article in English | MEDLINE | ID: mdl-33018167

ABSTRACT

Cerebral microbleeds (CMBs) are small chronic brain hemorrhages that have been considered diagnostic indicators for different cerebrovascular diseases, including stroke, dysfunction, dementia, and cognitive impairment. In this paper, we propose a fully automated two-stage integrated deep learning approach for efficient CMB detection, which combines a regional-based You Only Look Once (YOLO) stage for potential CMB candidate detection with a three-dimensional convolutional neural network (3D-CNN) stage for false-positive reduction. Both stages use the 3D contextual information of microbleeds from MR susceptibility-weighted imaging (SWI) and phase images. Specifically, we average the adjacent slices of the SWI and complement the phase images independently, and utilize them as a two-channel input for the regional-based YOLO method. The results of the first stage show that the proposed regional-based YOLO efficiently detected CMBs with an overall sensitivity of 93.62% and an average number of false positives per subject (FPavg) of 52.18 across five-fold cross-validation. The 3D-CNN-based second stage further improved detection performance by reducing the FPavg to 1.42. The outcomes of this work might provide useful guidelines for applying deep learning algorithms to automatic CMB detection.


Subjects
Magnetic Resonance Imaging , Neural Networks, Computer , Algorithms , Brain , Cerebral Hemorrhage/diagnosis , Humans
19.
Neuroimage Clin ; 28: 102464, 2020.
Article in English | MEDLINE | ID: mdl-33395960

ABSTRACT

Cerebral microbleeds (CMBs) are small chronic brain hemorrhages that have been considered diagnostic indicators for different cerebrovascular diseases, including stroke, dysfunction, dementia, and cognitive impairment. However, automated detection and identification of CMBs in magnetic resonance (MR) images is a very challenging task due to their wide distribution throughout the brain, their small sizes, and the high degree of visual similarity between CMBs and CMB mimics such as calcifications, iron deposits, and veins. In this paper, we propose a fully automated two-stage integrated deep learning approach for efficient CMB detection, which combines a regional-based You Only Look Once (YOLO) stage for potential CMB candidate detection with a three-dimensional convolutional neural network (3D-CNN) stage for false-positive reduction. Both stages use the 3D contextual information of microbleeds from MR susceptibility-weighted imaging (SWI) and phase images. Specifically, we average the adjacent slices of the SWI and complement the phase images independently, and utilize them as a two-channel input for the regional-based YOLO method. This enables YOLO to learn more reliable and representative hierarchical features and hence achieve better detection performance. The proposed work was independently trained and evaluated using high and low in-plane resolution data, which contained 72 subjects with 188 CMBs and 107 subjects with 572 CMBs, respectively. The results of the first stage show that the proposed regional-based YOLO efficiently detected CMBs with overall sensitivities of 93.62% and 78.85% and average numbers of false positives per subject (FPavg) of 52.18 and 155.50 across five-fold cross-validation for the high and low in-plane resolution data, respectively. These findings outperform results from previously utilized techniques such as the 3D fast radial symmetry transform, producing a lower FPavg at lower computational cost.
The 3D-CNN-based second stage further improved detection performance by reducing the FPavg to 1.42 and 1.89 for the high and low in-plane resolution data, respectively. The outcomes of this work might provide useful guidelines for applying deep learning algorithms to automatic CMB detection.
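
The two-channel input construction described in the abstract (averaged adjacent SWI slices paired with the corresponding phase slice) can be sketched as follows; the one-slice neighbourhood on each side and the toy volume shapes are assumptions for illustration:

```python
import numpy as np

def two_channel_input(swi_vol, phase_vol, i):
    """Build a 2-channel slice: channel 0 averages slice i with its
    adjacent SWI slices (3D context), channel 1 is the phase slice."""
    lo, hi = max(i - 1, 0), min(i + 2, swi_vol.shape[0])
    swi_ctx = swi_vol[lo:hi].mean(axis=0)           # average over neighbourhood
    return np.stack([swi_ctx, phase_vol[i]])        # (2, H, W) network input

swi = np.arange(16.0).reshape(4, 2, 2)              # toy SWI volume, 4 slices
phase = np.ones((4, 2, 2))                          # toy phase volume
x = two_channel_input(swi, phase, 1)
print(x.shape)  # (2, 2, 2)
```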


Subjects
Deep Learning , Image Interpretation, Computer-Assisted , Algorithms , Cerebral Hemorrhage/diagnostic imaging , Humans , Magnetic Resonance Imaging
20.
Int J Med Inform ; 117: 44-54, 2018 09.
Article in English | MEDLINE | ID: mdl-30032964

ABSTRACT

A computer-aided diagnosis (CAD) system requires detection, segmentation, and classification in one framework to assist radiologists efficiently in an accurate diagnosis. In this paper, a completely integrated CAD system is proposed to screen digital X-ray mammograms, involving detection, segmentation, and classification of breast masses via deep learning methodologies. To detect breast masses in entire mammograms, You-Only-Look-Once (YOLO), a regional deep learning approach, is used. To segment the mass, a full-resolution convolutional network (FrCN), a new deep network model, is proposed and utilized. Finally, a deep convolutional neural network (CNN) is used to recognize the mass and classify it as either benign or malignant. To evaluate the proposed integrated CAD system in terms of detection, segmentation, and classification accuracy, the publicly available and annotated INbreast database was utilized. Four-fold cross-validation tests show that the proposed CAD system achieves a mass detection accuracy of 98.96%, a Matthews correlation coefficient (MCC) of 97.62%, and an F1-score of 99.24% on the INbreast dataset. Moreover, mass segmentation via FrCN produced an overall accuracy of 92.97%, an MCC of 85.93%, a Dice (F1-score) of 92.69%, and a Jaccard similarity coefficient of 86.37%. The detected and segmented masses were classified via the CNN with an overall accuracy of 95.64%, an AUC of 94.78%, an MCC of 89.91%, and an F1-score of 96.84%. Our results demonstrate that the proposed CAD system, through all stages of detection, segmentation, and classification, outperforms the latest conventional deep learning methodologies. The proposed CAD system could be used to assist radiologists in all stages of detection, segmentation, and classification of breast masses.
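
The MCC, F1, Dice, and Jaccard scores reported above all follow directly from confusion-matrix counts; a small sketch with made-up counts (note that for binary masks, F1 and Dice coincide):

```python
import math

def metrics(tp, fp, fn, tn):
    """F1 (== Dice for binary masks), Jaccard, and MCC from confusion counts."""
    f1 = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return f1, jaccard, mcc

# Hypothetical counts, chosen only to exercise the formulas
f1, jac, mcc = metrics(tp=90, fp=10, fn=10, tn=90)
print(round(f1, 2), round(jac, 2), round(mcc, 2))  # 0.9 0.82 0.8
```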


Subjects
Deep Learning , Mammography/methods , Breast Neoplasms , Diagnosis, Computer-Assisted , Female , Humans , Machine Learning , Neural Networks, Computer , Radiographic Image Enhancement