Results 1 - 6 of 6
1.
Front Oncol ; 12: 971871, 2022.
Article in English | MEDLINE | ID: mdl-36387085

ABSTRACT

Objectives: To propose a deep learning-based classification framework that performs patient-level classification of benign and malignant tumors from a patient's multi-plane images and clinical information. Methods: A total of 430 cases of spinal tumor with axial and sagittal MRI plane images were included, of which 297 cases (14,072 images) were used for training and 133 cases (6,161 images) for testing. Based on bipartite graphs and attention learning, this study proposes a multi-plane attention learning framework, BgNet, for benign and malignant tumor diagnosis. In the bipartite graph structure, the tumor area in each plane serves as a vertex of the graph, and the matching between different planes forms the edges. Tumor areas from different plane images are spliced at the input layer. Based on the convolutional neural network ResNet and the visual attention model Swin Transformer, the study also proposes a feature fusion model, ResNetST, which combines global and local information to extract correlation features across planes. BgNet consists of five modules: a bipartite-graph-based multi-plane fusion module, an input-layer fusion module, a feature-layer fusion module, a decision-layer fusion module, and an output module. Together these perform multi-level fusion of a patient's multi-plane image data to realize a comprehensive patient-level diagnosis of benign and malignant tumors. Results: The accuracy (ACC: 79.7%) of BgNet with multiple planes was higher than with a single plane, and higher than or equal to the ACC of four doctors (D1: 70.7%, p=0.219; D2: 54.1%, p<0.005; D3: 79.7%, p=0.006; D4: 72.9%, p=0.178). Moreover, the doctors' diagnostic accuracy and speed improved with the aid of BgNet: the ACC of D1, D2, D3, and D4 improved by 4.5%, 21.8%, 0.8%, and 3.8%, respectively.
Conclusions: The proposed deep learning framework BgNet can classify benign and malignant tumors effectively, and can help doctors improve their diagnostic efficiency and accuracy. The code is available at https://github.com/research-med/BgNet.
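The decision-layer fusion step described above can be illustrated with a minimal sketch: per-plane benign/malignant scores for one patient are averaged into a single patient-level prediction. Function names, score format, and averaging rule are illustrative assumptions, not the released BgNet code at the URL above.

```python
# Hypothetical sketch of patient-level decision-layer fusion:
# average the benign/malignant scores over all of a patient's
# plane-level predictions, then pick the larger class.

def fuse_patient_scores(plane_scores):
    """plane_scores: list of (p_benign, p_malignant) tuples,
    one per plane image of the same patient."""
    if not plane_scores:
        raise ValueError("no plane-level predictions to fuse")
    n = len(plane_scores)
    p_benign = sum(s[0] for s in plane_scores) / n
    p_malignant = sum(s[1] for s in plane_scores) / n
    label = "malignant" if p_malignant > p_benign else "benign"
    return label, p_malignant

# Usage: three plane-level predictions for one patient
label, score = fuse_patient_scores([(0.9, 0.1), (0.3, 0.7), (0.2, 0.8)])
```

In the actual framework this averaging would sit behind the input-, feature-, and decision-layer fusion modules; the sketch only shows why multiple planes can outvote a single misleading one.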

2.
Insights Imaging ; 13(1): 87, 2022 May 10.
Article in English | MEDLINE | ID: mdl-35536493

ABSTRACT

BACKGROUND: The application of deep learning has enabled significant progress in medical imaging. However, few studies have focused on patient-level diagnosis of benign and malignant spinal tumors using medical imaging together with age information. This study proposes a multi-model weighted fusion framework (WFF) for benign and malignant diagnosis of spinal tumors based on magnetic resonance imaging (MRI) and age information. METHODS: The proposed WFF comprises a tumor detection model, a sequence classification model, and an age-information statistics module, based on sagittal MRI sequences from 585 patients with spinal tumors (270 benign, 315 malignant) collected between January 2006 and December 2019 at the cooperating hospital. The results of the WFF were compared with those of one radiologist (D1) and two spine surgeons (D2 and D3). RESULTS: With age information referenced, the accuracy (ACC) of the WFF (0.821) was higher than that of the three doctors (D1: 0.686; D2: 0.736; D3: 0.636). Without age information, the ACC of the WFF (0.800) was also higher than that of the three doctors (D1: 0.750; D2: 0.664; D3: 0.614). CONCLUSIONS: The proposed WFF is effective in diagnosing benign and malignant spinal tumors with complex histological types on MRI.
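The weighted-fusion idea can be sketched as a convex combination of the malignancy probabilities produced by the detection model, the sequence-classification model, and an optional age-based prior. The weights and function signature below are illustrative assumptions; the paper does not publish its fusion coefficients here.

```python
def weighted_fusion(p_detect, p_sequence, p_age=None, w=(0.5, 0.3, 0.2)):
    """Combine malignancy probabilities from the tumor-detection model,
    the sequence-classification model, and (optionally) an age-based
    prior. The weights `w` are illustrative, not the paper's values."""
    if p_age is None:
        # No age information: renormalise over the two image-based models,
        # mirroring the framework's with/without-age evaluation settings.
        total = w[0] + w[1]
        return (w[0] * p_detect + w[1] * p_sequence) / total
    return w[0] * p_detect + w[1] * p_sequence + w[2] * p_age

# Usage: fused probability with and without the age prior
p_with_age = weighted_fusion(0.8, 0.6, p_age=0.9)
p_no_age = weighted_fusion(0.8, 0.6)
```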

3.
Front Oncol ; 12: 858453, 2022.
Article in English | MEDLINE | ID: mdl-35494021

ABSTRACT

Molecular subtypes of breast cancer are important references for personalized clinical treatment. To save cost and labor, usually only one of a patient's paraffin blocks is selected for subsequent immunohistochemistry (IHC) to determine the molecular subtype. The inevitable block-sampling error is risky because of tumor heterogeneity and can delay treatment. Predicting the molecular subtype from conventional H&E-stained whole-slide images (WSIs) with AI is therefore useful and can help pathologists pre-screen the proper paraffin block for IHC. The task is challenging because only WSI-level labels of molecular subtypes from IHC are available, without detailed local region information. Gigapixel WSIs must be divided into a huge number of patches to be computationally feasible for deep learning, and with such coarse slide-level labels, patch-based methods may suffer from abundant noise patches, such as folds, overstained regions, or non-tumor tissue. This study proposes a weakly supervised learning framework based on discriminative patch selection and multi-instance learning (MIL) for predicting breast cancer molecular subtypes from H&E WSIs. First, a co-teaching strategy with two networks is adopted to learn molecular subtype representations and filter out some noise patches. Then, a balanced sampling strategy handles the subtype imbalance in the dataset. In addition, a noise-patch filtering algorithm using a local outlier factor based on cluster centers further selects discriminative patches. Finally, a loss function integrating local patch information with a global slide constraint is used to fine-tune the MIL framework on the selected discriminative patches and further improve subtype prediction performance.
The experimental results confirm the effectiveness of the proposed AI method; the models even outperformed senior pathologists, showing potential to assist pathologists in pre-screening paraffin blocks for IHC in the clinic.
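The noise-patch filtering step can be illustrated with a simplified stand-in: rank patch embeddings by distance to their centroid and discard the farthest ones as likely folds, overstained regions, or non-tumor tissue. This is a hypothetical simplification of the paper's local-outlier-factor criterion, with illustrative names and parameters.

```python
import math

def filter_noise_patches(embeddings, keep_ratio=0.8):
    """Keep the `keep_ratio` fraction of patches whose embeddings lie
    closest to the centroid; treat the rest as noise. A simplified
    stand-in for the paper's cluster-center local-outlier-factor step.
    Returns the kept patch indices in ascending order."""
    d = len(embeddings[0])
    center = [sum(e[i] for e in embeddings) / len(embeddings) for i in range(d)]

    def dist(e):
        return math.sqrt(sum((e[i] - center[i]) ** 2 for i in range(d)))

    ranked = sorted(range(len(embeddings)), key=lambda i: dist(embeddings[i]))
    k = max(1, int(len(embeddings) * keep_ratio))
    return sorted(ranked[:k])

# Usage: patch 3 sits far from the cluster of the other embeddings
kept = filter_noise_patches(
    [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [0.05, 0.05]]
)
```

A true local outlier factor compares local densities rather than raw centroid distances, so it tolerates multiple clusters; the sketch only conveys the selection idea.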

4.
Sci Rep ; 10(1): 8591, 2020 May 20.
Article in English | MEDLINE | ID: mdl-32433560

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

5.
Sci Rep ; 9(1): 882, 2019 01 29.
Article in English | MEDLINE | ID: mdl-30696894

ABSTRACT

Supervised learning methods are commonly applied in medical image analysis, but their success depends heavily on the availability of large, carefully annotated datasets. Automatic, refined segmentation of whole-slide images (WSIs) is therefore important for alleviating the annotation workload of pathologists. However, most current methods output only a rough prediction of lesion areas and are time-consuming on each slide. This paper proposes a fast, refined cancer-region segmentation framework, v3_DCNN, which first preselects tumor regions using an Inception-v3 classification model and then employs a semantic segmentation model, DCNN, for refined segmentation. The framework generates a dense likelihood heatmap at 1/8 the side length of the original WSI in 11.5 minutes on the Camelyon16 dataset, saving more than one hour per WSI compared with the initial DCNN model. Experimental results show that the approach achieves an FROC score of 83.5%, higher than the 80.7% of the Camelyon16 challenge winner. Based on the v3_DCNN model, heatmaps of WSIs are produced automatically and polygons of lesion regions are extracted for doctors, which is very helpful for pathological diagnosis and detailed annotation, and thus contributes to developing a more powerful deep learning model.
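The two-stage speed-up described above can be sketched as a tile loop in which a cheap classifier gates an expensive segmenter: only tiles the classifier flags as likely tumor are passed to refined segmentation, and the rest receive a zero heatmap. `classify` and `segment` below are stand-ins for the trained Inception-v3 and DCNN models; the tile format and threshold are assumptions.

```python
def segment_wsi(tiles, classify, segment, threshold=0.5):
    """Two-stage sketch: `classify(tile)` returns a cheap tumor
    probability (stage 1, Inception-v3 in the paper); `segment(tile)`
    is the slower refined model (stage 2, DCNN) and runs only on
    preselected tiles. Tiles below `threshold` get a zero heatmap."""
    heatmap = {}
    for tile_id, tile in tiles.items():
        if classify(tile) >= threshold:       # stage 1: fast preselection
            heatmap[tile_id] = segment(tile)  # stage 2: refined segmentation
        else:
            heatmap[tile_id] = 0.0            # skipped: assumed background
    return heatmap

# Usage with toy stand-in models
tiles = {0: "background", 1: "tumor"}
classify = lambda t: 0.9 if t == "tumor" else 0.1
segment = lambda t: 0.8
heatmap = segment_wsi(tiles, classify, segment)
```

The time saving comes from the fact that most WSI area is background, so stage 2 runs on only a small fraction of tiles.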


Subjects
Breast Neoplasms/pathology , Image Processing, Computer-Assisted/methods , Algorithms , Breast/pathology , Deep Learning , Female , Humans , Neural Networks, Computer , Supervised Machine Learning
6.
Sensors (Basel) ; 16(12)2016 Dec 17.
Article in English | MEDLINE | ID: mdl-27999321

ABSTRACT

Most existing wearable gait analysis methods focus on data obtained from inertial sensors. This paper proposes a novel, low-cost, wireless, wearable gait analysis system that uses microphone sensors to collect footstep sound signals during walking. To the best of our knowledge, this is the first time a microphone sensor has been used as a wearable gait analysis device. Based on this system, a gait analysis algorithm for estimating temporal gait parameters is presented. The algorithm fully exploits the fusion of footstep sound signals from both feet and comprises three stages: footstep detection, heel-strike and toe-on event detection, and calculation of temporal gait parameters. Experimental results on 240 data sequences and 1,732 steps, collected from 15 healthy subjects using three different gait data collection strategies, show that the proposed system achieves an average F1-measure of 0.955 for footstep detection, an average accuracy of 94.52% for heel-strike detection, and 94.25% for toe-on detection. From these detections, nine temporal gait parameters are calculated, and they are consistent with the corresponding normal gait temporal parameters and with calculations from labeled data. The results verify the effectiveness of the proposed system and algorithm for temporal gait parameter estimation.
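The footstep-detection stage and one temporal parameter (the interval between consecutive footsteps) can be sketched with short-time energy thresholding on the audio signal. Window size, threshold, and function names are illustrative assumptions, not the paper's algorithm details.

```python
def detect_footsteps(signal, fs, win=0.05, threshold=0.1):
    """Return onset times (seconds) of energy bursts in a footstep
    sound signal sampled at `fs` Hz: a window whose mean energy
    exceeds `threshold` starts a footstep; a quiet window re-arms the
    detector. A minimal sketch of the footstep-detection stage."""
    n = max(1, int(win * fs))
    onsets, active = [], False
    for start in range(0, len(signal) - n + 1, n):
        energy = sum(x * x for x in signal[start:start + n]) / n
        if energy > threshold and not active:
            onsets.append(start / fs)
            active = True
        elif energy <= threshold:
            active = False
    return onsets

def step_times(onsets):
    """Temporal gait parameter: interval between consecutive footsteps."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

# Usage: a silent 1 s signal at 100 Hz with two synthetic footstep bursts
signal = [0.0] * 100
for i in list(range(10, 15)) + list(range(60, 65)):
    signal[i] = 1.0
onsets = detect_footsteps(signal, fs=100)
```

The paper's system goes further by fusing signals from both feet and separating heel-strike from toe-on events within each burst; the sketch covers only the first detection stage.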


Subjects
Gait/physiology , Wearable Electronic Devices , Acoustics , Adult , Algorithms , Female , Humans , Male , Middle Aged , Probability , Software , Time Factors , Young Adult