1.
World J Surg Oncol; 22(1): 12, 2024 Jan 06.
Article in English | MEDLINE | ID: mdl-38183069

ABSTRACT

BACKGROUND: Sentinel lymph node biopsy (SLNB) is the standard of care for axillary staging in early breast cancer patients with low-burden axillary metastasis (≤ 2 positive nodes). This study aimed to determine the diagnostic performance of 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) and breast magnetic resonance imaging (MRI) in detecting axillary lymph node (ALN) metastases, and their reliability in predicting ALN burden. METHODS: A total of 275 patients with primary operable breast cancer who received preoperative PET/CT and upfront surgery from January 2001 to December 2022 at a single institution were enrolled; 244 (88.7%) of them also received breast MRI. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of PET/CT and breast MRI were assessed. Predictive values for ALN burden were evaluated using radio-histopathological concordance. RESULTS: PET/CT demonstrated a sensitivity of 53.4%, specificity of 82.1%, PPV of 65.5%, NPV of 73.5%, and accuracy of 70.9% for detecting ALN metastasis; the corresponding values for MRI were 71.8%, 67.8%, 56%, 80.8%, and 69.2%. Combining PET/CT and MRI yielded a significantly higher PPV than MRI alone (72.7% vs 56%, p = 0.037) and a significantly higher NPV than PET/CT alone (84% vs 73.5%, p = 0.041). For predicting low-burden axillary metastasis (1-2 positive nodes), the PPVs were 35.9% for PET/CT, 36.7% for MRI, and 55% for combined PET/CT and MRI. Among patients with 0-2 positive ALNs on imaging, who were candidates for SLNB, predictive correctness was 96.1% for combined PET/CT and MRI, 95.7% for MRI alone, and 88.6% for PET/CT alone. CONCLUSIONS: PET/CT and breast MRI exhibit high predictive value for identifying low-burden axillary metastasis in patients with operable breast cancer and ≤ 2 positive ALNs on imaging.
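The sensitivity, specificity, PPV, NPV, and accuracy figures quoted above follow the standard confusion-matrix definitions; a minimal sketch in Python (the counts below are hypothetical, for illustration only, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic performance measures from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    ppv = tp / (tp + fp)                        # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, ppv, npv, accuracy

# Hypothetical counts for illustration only:
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=80, fp=20, tn=90, fn=10)
```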


Subjects
Breast Neoplasms, Sentinel Lymph Node Biopsy, Humans, Female, Positron Emission Tomography Computed Tomography, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/surgery, Reproducibility of Results, Retrospective Studies, Magnetic Resonance Imaging, Lymphatic Metastasis, Lymph Nodes/diagnostic imaging, Lymph Nodes/surgery
2.
Sci Rep; 13(1): 17087, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37816815

ABSTRACT

We aimed to develop an accurate and efficient skin cancer classification system using deep-learning technology with a relatively small dataset of clinical images. We propose a novel skin cancer classification method, SkinFLNet, which utilizes model fusion and lifelong learning technologies. SkinFLNet's deep convolutional neural networks were trained on a dataset of 1215 clinical images of skin tumors diagnosed at Taichung and Taipei Veterans General Hospital between 2015 and 2020, comprising five categories: benign nevus, seborrheic keratosis, basal cell carcinoma, squamous cell carcinoma, and malignant melanoma. SkinFLNet's performance was evaluated on 463 clinical images collected between January and December 2021. SkinFLNet achieved an overall classification accuracy of 85%, precision of 85%, recall of 82%, F-score of 82%, sensitivity of 82%, and specificity of 93%, outperforming other deep convolutional neural network models. We also compared SkinFLNet's performance with that of three board-certified dermatologists, and its average overall performance was comparable to, or even better than, theirs. Our study presents an efficient skin cancer classification system utilizing model fusion and lifelong learning technologies that can be trained on a relatively small dataset, potentially improving the accuracy of skin cancer screening in clinical practice.
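The abstract does not spell out SkinFLNet's exact fusion rule; one common model-fusion scheme, averaging the per-class probabilities of several CNNs and taking the argmax, can be sketched as follows (the probability vectors and class order below are toy assumptions, not the paper's models):

```python
import numpy as np

def fuse_predictions(prob_list):
    """Fuse per-class probability vectors from several models by averaging,
    then pick the class with the highest fused probability."""
    fused = np.mean(prob_list, axis=0)
    return fused, int(np.argmax(fused))

# Toy probabilities over five classes (nevus, SK, BCC, SCC, melanoma):
model_a = np.array([0.10, 0.05, 0.60, 0.15, 0.10])
model_b = np.array([0.05, 0.10, 0.70, 0.10, 0.05])
fused, label = fuse_predictions([model_a, model_b])
```

Averaging probabilities tends to smooth out individual-model mistakes, which is one reason fused ensembles often beat any single member.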


Subjects
Seborrheic Keratosis, Melanoma, Skin Neoplasms, Humans, Skin Neoplasms/pathology, Melanoma/pathology, Neural Networks, Computer, Skin/pathology, Seborrheic Keratosis/diagnosis, Seborrheic Keratosis/pathology
3.
Breast Cancer; 30(6): 976-985, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37500823

ABSTRACT

BACKGROUND: The value of axillary lymph node (ALN) evaluation with MRI in breast cancer has not been clearly established across intrinsic subtypes. The aim of the current study was to test whether combining breast MRI and clinicopathologic factors could identify low-risk groups for ALN metastasis and improve diagnostic performance. MATERIALS AND METHODS: Patients with primary operable invasive breast cancer with pre-operative breast MRI and post-operative pathologic reports were retrospectively collected from January 2009 to December 2021 at a single institute. The concordance between MRI and pathologic ALN status was determined overall and within each intrinsic subtype. A stepwise strategy was designed to improve the negative predictive value (NPV) of MRI for ALN metastasis. RESULTS: 2473 patients were enrolled. The diagnostic performance of MRI in detecting metastatic ALNs differed significantly between intrinsic subtypes (p = 0.007). Multivariate analysis identified tumor size and histologic type as independent predictors of ALN metastasis. Patients with HER2-positive tumors (MRI tumor size ≤ 2 cm) or TNBC (MRI tumor size ≤ 2 cm) had an MRI-ALN NPV higher than 90%, and false-negative cases were limited to low axillary tumor burden. CONCLUSION: The diagnostic performance of MRI in predicting ALN metastasis varied by intrinsic subtype. Combining pre-operative clinicopathologic factors with intrinsic subtype may increase the NPV of ALN MRI and identify groups of patients with low risk of ALN metastasis, high NPV, and low axillary disease burden even among false-negative cases.


Subjects
Breast Neoplasms, Humans, Female, Lymphatic Metastasis/diagnostic imaging, Lymphatic Metastasis/pathology, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/surgery, Breast Neoplasms/pathology, Predictive Value of Tests, Retrospective Studies, Lymph Nodes/diagnostic imaging, Lymph Nodes/surgery, Lymph Nodes/pathology, Magnetic Resonance Imaging, Axilla/pathology, Sentinel Lymph Node Biopsy/methods
4.
Methods; 214: 28-34, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37116670

ABSTRACT

BACKGROUND AND OBJECTIVE: The gold standard for diagnosing epiretinal membranes is observation of the surface of the internal limiting membrane on optical coherence tomography (OCT) images. The stage of the epiretinal membrane is used to assess the condition of the membrane, but some stages are difficult to distinguish because they appear similar. To classify the stages accurately, deep-learning technology can be used to improve classification accuracy. METHODS: Combinatorial fusion with multiple convolutional neural network (CNN) algorithms is proposed to enhance the accuracy of a single image-classification model. The proposed method was trained on a dataset of 1947 OCT images diagnosed with epiretinal membrane at the Taichung Veterans General Hospital in Taiwan, spanning four stages (1-4). RESULTS: The overall classification accuracy was 84%. Combinations of five and six CNN models achieved the highest testing accuracy (85%) among all combinations, and any combination of CNN models outperformed any single CNN algorithm working alone. The accuracy of the proposed method was also better than that of ophthalmologists with years of clinical experience. CONCLUSIONS: We have developed an efficient epiretinal membrane classification method using combinatorial fusion of CNN models on OCT images. The proposed method can be used for screening to help ophthalmologists make correct diagnoses in general medical practice.
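Combinatorial fusion combines the outputs of several classifiers; the paper's exact fusion function is not given in the abstract, but one simple combination rule, plurality voting over the models' predicted stages, can be sketched as (the votes below are hypothetical model outputs):

```python
from collections import Counter

def plurality_vote(predictions):
    """Return the stage predicted by the most models (ties -> lowest stage)."""
    counts = Counter(predictions)
    top = max(counts.values())
    return min(stage for stage, c in counts.items() if c == top)

# Five hypothetical model outputs for one OCT image (stages 1-4):
votes = [2, 2, 3, 2, 4]
stage = plurality_vote(votes)
```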


Subjects
Epiretinal Membrane, Humans, Epiretinal Membrane/diagnostic imaging, Tomography, Optical Coherence/methods, Neural Networks, Computer, Algorithms, Retina
5.
Biomed J; 45(3): 465-471, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34628059

ABSTRACT

Time-lapse microscopy images generated by biological experiments have been widely used to observe target activities such as motion trajectories and survival states. Based on these observations, biologists can draw experimental conclusions or present new hypotheses for several biological applications, e.g., virus research or drug design. Many methods and tools, based on both classical algorithms and deep learning technologies, have been proposed to observe cell and particle activities, framed as the single cell tracking and single particle tracking problems. In this article, we review these works: we first summarize past methods and research topics, then point out the problems they raise, and finally propose future research directions. The contributions of this article will help researchers understand past development trends and propose innovative technologies.


Subjects
Deep Learning, Microscopy, Algorithms, Humans, Microscopy/methods
7.
Diagnostics (Basel); 11(10), 2021 Sep 26.
Article in English | MEDLINE | ID: mdl-34679465

ABSTRACT

Benign prostatic hyperplasia (BPH) is the main cause of lower urinary tract symptoms (LUTS) in aging males. Transurethral resection of the prostate (TURP) is performed by passing a cystoscope through the urethra and resecting the prostate piece by piece with a cutting loop. Although TURP is a minimally invasive procedure, bleeding is still the most common complication, so the evaluation, monitoring, and prevention of intraoperative bleeding during TURP are very important issues. The main idea of this study is to rank bleeding levels during TURP surgery from videos. Judging the bleeding level by eye from surgery videos is generally a difficult task that requires experienced urologists. In this study, machine learning-based ranking algorithms are proposed to evaluate the bleeding level efficiently. Based on the visual clarity of the surgical field, four bleeding levels (score 0: excellent; score 1: acceptable; score 2: slightly bad; score 3: bad) were defined by urologists with extensive experience in TURP surgery. The results of extensive experiments show that the revised accuracy reaches 90, 89, 90, and 91%, respectively. In particular, the results reveal that the proposed methods can classify the bleeding level accurately and efficiently, reducing the burden on urologists.

8.
Sci Rep; 11(1): 19938, 2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34620900

ABSTRACT

Flank wear is the most common type of wear in the end milling process, but detecting it is cumbersome. To fully automate detection of the flank wear area of a spiral end milling cutter, this study proposed a novel flank wear detection method combining template matching and deep learning techniques: curved tool-surface images are expanded into panoramic images, which makes it possible to detect flank wear areas without choosing a specific position on the cutting-tool image. A You Only Look Once v4 (YOLOv4) model was employed to automatically detect the range of the cutting tips. Then, popular segmentation models, namely U-Net, SegNet, and an autoencoder, were used to extract the areas of tool flank wear. Among these models, U-Net obtained the best maximum Dice coefficient, 0.93. Moreover, the wear areas predicted by the U-Net model are presented as a trend curve, from which the timing of tool changes can be determined. Overall, the experiments show that the proposed methods can effectively extract the tool wear regions of a spiral cutting tool. With the developed system, users can obtain detailed information about the cutting tool before it is severely worn, and change cutting tools in advance.
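The Dice coefficient used to compare the segmentation models measures overlap between a predicted mask and the ground-truth mask, 2|A∩B| / (|A|+|B|); a minimal sketch with toy masks (not real segmentation output):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 2x3 wear masks for illustration:
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 1]])
score = dice_coefficient(pred, truth)   # 2*2 / (3 + 3) = 0.666...
```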

10.
Sci Rep; 10(1): 10403, 2020 Jun 23.
Article in English | MEDLINE | ID: mdl-32576902

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

11.
Sci Rep; 10(1): 8424, 2020 May 21.
Article in English | MEDLINE | ID: mdl-32439844

ABSTRACT

PURPOSE: Previous deep learning studies on optical coherence tomography (OCT) mainly focused on diabetic retinopathy and age-related macular degeneration. We propose a deep learning (DL) model that can identify epiretinal membrane (ERM) in OCT with ophthalmologist-level performance. DESIGN: Cross-sectional study. PARTICIPANTS: A total of 3,618 central fovea cross-section OCT images from 1,475 eyes of 964 patients. METHODS: We retrospectively collected 7,652 OCT images from 1,197 patients. Of these, 2,171 were normal and 1,447 were ERM OCT images. A total of 3,141 OCT images were used as the training dataset and 477 images as the testing dataset. A DL algorithm was used to train the interpretation model. Diagnoses on the testing dataset by four board-certified, non-retinal-specialized ophthalmologists were compared with those generated by the DL model. MAIN OUTCOME MEASURES: For the derived DL model we calculated sensitivity, specificity, F1 score, and area under the curve (AUC) of the receiver operating characteristic (ROC) curve, all against the gold standard of parallel diagnoses by a retinal specialist. The performance of the DL model was then compared with that of the non-retinal-specialized ophthalmologists. RESULTS: For diagnosing ERM in OCT images, the trained DL model had a sensitivity of 98.7%, specificity of 98.0%, and F1 score of 0.945. Accuracy was 99.7% (95% CI: 99.4-99.9%) on the training dataset and 98.1% (95% CI: 96.5-99.1%) on the testing dataset. The AUC of the ROC curve was 0.999. The DL model slightly outperformed the average non-retinal-specialized ophthalmologist. CONCLUSIONS: An ophthalmologist-level DL model was built to accurately identify ERM in OCT images, performing slightly better than the average non-retinal-specialized ophthalmologist. The derived model may help clinicians improve the efficiency and safety of healthcare in the future.
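The AUC reported above is the area under the ROC curve, which equals the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one; that rank-statistic view gives a compact way to compute it (the scores and labels below are toy values, not the study's data):

```python
def roc_auc(scores, labels):
    """AUC via the rank statistic: probability a random positive scores
    higher than a random negative (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy model scores and ground-truth ERM labels:
auc = roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 1])
```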


Subjects
Diabetic Retinopathy/diagnosis, Diagnosis, Computer-Assisted/methods, Epiretinal Membrane/diagnostic imaging, Macular Degeneration/diagnosis, Retina/pathology, Tomography, Optical Coherence/methods, Algorithms, Cross-Sectional Studies, Deep Learning, Diabetic Retinopathy/diagnostic imaging, Humans, Macular Degeneration/diagnostic imaging, Ophthalmologists
12.
Front Genet; 10: 923, 2019.
Article in English | MEDLINE | ID: mdl-31543905

ABSTRACT

[This corrects the article DOI: 10.3389/fgene.2019.00432.].

13.
Sci Rep; 9(1): 10883, 2019 Jul 26.
Article in English | MEDLINE | ID: mdl-31350428

ABSTRACT

The gray level run length matrix (GLRLM), whose entries are statistics recording the distribution and relationships of image pixels, is a widely used method for extracting statistical features from medical images, e.g., magnetic resonance (MR) images. These features are often fed into artificial neural networks to identify and distinguish texture patterns. However, GLRLM construction and feature extraction are tedious and computationally intensive when images are large and of high resolution, or when a single image contains many small or intermediate regions of interest (ROIs) to process, which makes preprocessing a time-consuming stage. It is therefore important to accelerate the procedure, which is now possible thanks to the rapid development of massively parallel Graphics Processing Unit (GPU) computing. In this article, we propose a new paradigm based on mature parallel primitives for generating GLRLMs and extracting multiple features for many ROIs simultaneously in a single image. Experiments show that the paradigm is easy to implement and offers more than a 5-fold speedup over an optimized serial counterpart.
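A GLRLM entry [g, r] counts the runs of consecutive pixels with gray level g and run length r. As a point of reference for what the GPU paradigm parallelizes, a serial sketch for horizontal runs only (real GLRLM tools also scan the vertical and two diagonal directions) might look like:

```python
import numpy as np

def glrlm_horizontal(image, levels):
    """Gray level run length matrix for horizontal runs.
    Entry [g, r-1] counts runs of gray level g with length r."""
    rows, cols = image.shape
    glrlm = np.zeros((levels, cols), dtype=int)
    for row in image:
        run_val, run_len = row[0], 1
        for px in row[1:]:
            if px == run_val:
                run_len += 1
            else:
                glrlm[run_val, run_len - 1] += 1   # close the finished run
                run_val, run_len = px, 1
        glrlm[run_val, run_len - 1] += 1           # close the final run
    return glrlm

# Toy 2x3-level image: one run each of 0 (len 2), 1 (len 2), 2 (len 4)
img = np.array([[0, 0, 1, 1], [2, 2, 2, 2]])
m = glrlm_horizontal(img, levels=3)
```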

14.
Front Genet; 10: 432, 2019.
Article in English | MEDLINE | ID: mdl-31191597

ABSTRACT

The human genome consists of 98.5% non-coding DNA sequences, most of which have no known function; however, a majority of disease-associated variants lie in these regions, so it is critical to predict the function of non-coding DNA. We therefore propose NCNet, which integrates deep residual learning and sequence-to-sequence learning networks to predict transcription factor (TF) binding sites, which can then be used to predict non-coding functions. In NCNet, deep residual learning networks are used to enhance the identification rate of regulatory motif patterns, so that the sequence-to-sequence learning network can make the most of the sequential dependency between the patterns. With the identity-shortcut technique and deep network architectures, NCNet achieves a significant improvement over the original hybrid model in identifying regulatory markers.
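The identity shortcut mentioned above adds a block's input directly to its output, so gradients can flow through the skip path unchanged in very deep networks. A framework-free sketch of the idea (NumPy, with a toy transform standing in for NCNet's actual convolutional layers):

```python
import numpy as np

def residual_block(x, transform):
    """Identity-shortcut residual block: output = F(x) + x.
    The '+ x' path lets gradients bypass F, easing training of deep stacks."""
    return transform(x) + x

# Toy 'learned' transform standing in for conv/BN/activation layers:
f = lambda x: 0.1 * x
y = residual_block(np.array([1.0, 2.0]), f)   # 0.1*x + x -> [1.1, 2.2]
```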

16.
Article in English | MEDLINE | ID: mdl-29295690

ABSTRACT

AIM AND OBJECTIVE: In the past decade, drug design technologies have improved enormously. Computer-aided drug design (CADD) has played an important role in analysis and prediction in drug development, making the procedure more economical and efficient. However, computation with big data, such as ZINC, containing more than 60 million compounds, and GDB-13, with more than 930 million small molecules, poses a notable time-consumption problem. Therefore, we propose a novel heterogeneous high performance computing method, named Hadoop-MCC, integrating Hadoop and GPU, to cope with big chemical structure data efficiently. MATERIALS AND METHODS: Hadoop-MCC gains high availability and fault tolerance from Hadoop, which is used to scatter input data to GPU devices and gather the results from them. The Hadoop framework adopts a mapper/reducer computation model: mappers are responsible for fetching SMILES data segments and performing the LINGO method on GPU, and reducers collect all comparison results produced by the mappers. Due to the high availability of Hadoop, all LINGO computational jobs on the mappers can be completed even if some mappers encounter problems. RESULTS: LINGO comparisons are performed on each GPU device in parallel. According to the experimental results, the proposed method on multiple GPU devices achieves better computational performance than CUDA-MCC on a single GPU device. CONCLUSION: Hadoop-MCC achieves the scalability, high availability, and fault tolerance granted by Hadoop, as well as high performance, by integrating the computational power of both Hadoop and GPU. The results show that a heterogeneous architecture such as Hadoop-MCC can deliver better computational performance than a single GPU device.
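The mapper/reducer flow described above can be sketched as a plain-Python, in-process simulation; this is only an illustration of the data flow, not the Hadoop/GPU implementation, and the character-set similarity below is a deliberately crude placeholder for the real LINGO comparison run on the GPU:

```python
from itertools import chain

def mapper(smiles_segment, query):
    """Each mapper compares one segment of the SMILES database to the query.
    The similarity here is a placeholder for LINGO executed on GPU."""
    placeholder_sim = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
    return [(s, placeholder_sim(s, query)) for s in smiles_segment]

def reducer(mapped_results):
    """The reducer collects all (compound, score) pairs from the mappers."""
    return sorted(chain.from_iterable(mapped_results), key=lambda kv: -kv[1])

# Database scattered into segments, as Hadoop would distribute it:
segments = [["CCO", "CCN"], ["c1ccccc1"]]
results = reducer(mapper(seg, "CCO") for seg in segments)
```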


Subjects
Algorithms, Computer-Aided Design, Computing Methodologies, Drug Design, Big Data, Molecular Structure, Quantitative Structure-Activity Relationship
17.
Evol Bioinform Online; 13: 1176934317734220, 2017.
Article in English | MEDLINE | ID: mdl-29051701

ABSTRACT

A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. Distance-matrix methods, such as the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa, but suffer from computational performance issues. Although several new methods built on high-performance hardware and frameworks have been proposed, the issue persists. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedups over UPGMA implementations on a modern CPU and a single GPU, respectively.
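UPGMA repeatedly merges the closest pair of clusters and recomputes distances as size-weighted averages; the GPU work parallelizes exactly this distance-update step. A minimal serial sketch (toy three-taxon distance matrix, Newick-like output string):

```python
def upgma(dist, names):
    """Minimal UPGMA: repeatedly merge the closest pair of clusters,
    averaging distances weighted by cluster sizes."""
    clusters = {i: (name, 1) for i, name in enumerate(names)}
    d = {(i, j): dist[i][j] for i in clusters for j in clusters if i < j}
    nxt = len(names)
    while len(clusters) > 1:
        i, j = min(d, key=d.get)                     # closest pair
        (ni, si), (nj, sj) = clusters.pop(i), clusters.pop(j)
        for k in clusters:                           # weighted-average update
            dik = d.pop((min(i, k), max(i, k)))
            djk = d.pop((min(j, k), max(j, k)))
            d[(k, nxt)] = (si * dik + sj * djk) / (si + sj)
        del d[(i, j)]
        clusters[nxt] = (f"({ni},{nj})", si + sj)
        nxt += 1
    return next(iter(clusters.values()))[0]

# Toy symmetric distance matrix for three taxa:
tree = upgma([[0, 2, 6], [2, 0, 6], [6, 6, 0]], ["A", "B", "C"])
```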

19.
Int J Genomics; 2015: 761063, 2015.
Article in English | MEDLINE | ID: mdl-26568953

ABSTRACT

The Smith-Waterman (SW) algorithm has been widely used for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on protein database search using the inter-task parallelization technique, with the GPU performing the SW computations one at a time. Hence, in this paper we propose an efficient SW alignment method, called CUDA-SWfr, for protein database search using the intra-task parallelization technique on a CPU-GPU collaborative system. Before the SW computations are run on the GPU, a frequency distance filtration scheme (FDFS) is applied on the CPU to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
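The SW recurrence that the GPU parallelizes is local-alignment dynamic programming: each cell takes the best of a diagonal match/mismatch step, two gap steps, or zero. A serial score-only sketch (linear gap penalty; toy scoring values, not CUDA-SWfr's parameters):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score via dynamic programming:
    H[i][j] = max(0, diag + match/mismatch, up + gap, left + gap)."""
    h = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

score = smith_waterman("ACGT", "TACGA")   # best local match is "ACG"
```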

20.
Int J Genomics; 2015: 950905, 2015.
Article in English | MEDLINE | ID: mdl-26491652

ABSTRACT

Compound comparison is an important task in computational chemistry: from comparison results, potential inhibitors can be found and used in pharmaceutical experiments. The time complexity of a pairwise compound comparison is O(n^2), where n is the maximal length of the compounds. In general, compound length is tens to hundreds of characters, so the computation time is small. However, more and more compounds have been synthesized and extracted, now numbering more than tens of millions, so comparison against a large set of compounds (the multiple compound comparison problem, abbreviated MCC) is still time-consuming. The intrinsic time complexity of the MCC problem is O(k^2 n^2) for k compounds of maximal length n. In this paper, we propose a GPU-based algorithm for the MCC problem, called CUDA-MCC, on single and multiple GPUs. Four LINGO-based load-balancing strategies are considered in CUDA-MCC to accelerate computation among thread blocks on GPUs. CUDA-MCC was implemented in C+OpenMP+CUDA. In the experiments, CUDA-MCC ran 45 times and 391 times faster than its CPU version on a single NVIDIA Tesla K20m GPU card and a dual NVIDIA Tesla K20m GPU card, respectively.
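LINGO methods compare two compounds via the overlapping length-q substrings ("LINGOs", q = 4 in the original scheme) of their SMILES strings. The sketch below uses a Tanimoto coefficient over the two LINGO multisets, a simplified variant rather than the exact integral-similarity formula of the original LINGO paper (toy SMILES strings):

```python
from collections import Counter

def lingo_tanimoto(smiles_a, smiles_b, q=4):
    """LINGO-style similarity: Tanimoto coefficient over the multisets
    of length-q substrings of two SMILES strings."""
    la = Counter(smiles_a[i:i + q] for i in range(len(smiles_a) - q + 1))
    lb = Counter(smiles_b[i:i + q] for i in range(len(smiles_b) - q + 1))
    inter = sum((la & lb).values())   # multiset intersection size
    union = sum((la | lb).values())   # multiset union size
    return inter / union if union else 0.0

sim_same = lingo_tanimoto("CCOCCOCC", "CCOCCOCC")   # identical -> 1.0
sim_diff = lingo_tanimoto("CCOCCOCC", "NNNNNNNN")   # no shared LINGOs -> 0.0
```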
