Results 1 - 7 of 7
1.
Genes (Basel); 14(9), 2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37761908

ABSTRACT

Up to 30% of breast cancer (BC) patients will develop distant metastases (DM), for which there is no cure. Here, statistical and machine learning (ML) models were developed to estimate the risk of site-specific DM following local-regional therapy. This retrospective study cohort included 175 patients diagnosed with invasive BC who later developed DM. Clinicopathological information was collected for analysis. Outcome variables were the first site of metastasis (brain, bone, or visceral) and the time interval (months) to developing DM. Multivariate statistical analysis and ML-based multivariable gradient boosting machines identified factors associated with these outcomes. Machine learning models predicted the site of DM, demonstrating an area under the curve of 0.74, 0.75, and 0.73 for brain, bone, and visceral sites, respectively. Overall, most patients (57%) developed bone metastases, with increased odds associated with estrogen receptor (ER) positivity. Human epidermal growth factor receptor-2 (HER2) positivity and non-anthracycline chemotherapy regimens were associated with a decreased risk of bone DM, while brain metastasis was associated with ER negativity. Furthermore, non-anthracycline chemotherapy alone was a significant predictor of visceral metastasis. In summary, ML models built on clinicopathologic and treatment variables predicted the first site of metastasis in BC; with further validation, they may guide focused, patient-specific surveillance practices.
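As a rough illustration of the modeling approach described above, the sketch below fits a gradient boosting classifier to predict the first site of distant metastasis from a handful of clinicopathological variables and reports a one-vs-rest AUC. This is not the authors' code: the feature set, class proportions, and data are synthetic assumptions for demonstration only.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 175  # cohort size matching the abstract; all values below are synthetic
X = np.column_stack([
    rng.integers(0, 2, n),   # ER status (1 = positive)
    rng.integers(0, 2, n),   # HER2 status (1 = positive)
    rng.integers(0, 2, n),   # non-anthracycline chemotherapy (1 = yes)
    rng.normal(55, 12, n),   # age at diagnosis (years)
])
y = rng.choice(["bone", "brain", "visceral"], size=n, p=[0.57, 0.13, 0.30])

model = GradientBoostingClassifier(n_estimators=200, max_depth=2, learning_rate=0.05)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc_ovr").mean()
print(f"mean one-vs-rest AUC on synthetic data: {auc:.2f}")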


Subjects
Breast Neoplasms; Humans; Female; Breast Neoplasms/drug therapy; Retrospective Studies; Breast; Brain; Machine Learning
2.
Curr Oncol; 28(6): 4298-4316, 2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34898544

ABSTRACT

BACKGROUND: Evaluating histologic grade for breast cancer diagnosis is standard practice and is associated with prognostic outcomes. Current challenges include the time required for manual microscopic evaluation and interobserver variability. This study proposes a computer-aided diagnostic (CAD) pipeline for grading tumors using artificial intelligence. METHODS: This retrospective study included 138 patients. Breast core biopsy slides were prepared using standard laboratory techniques, digitized, and pre-processed for analysis. Deep convolutional neural networks (CNNs) were developed to identify the regions of interest containing malignant cells and to segment tumor nuclei. Imaging-based features associated with spatial parameters were extracted from the segmented regions of interest (ROIs). Clinical datasets and pathologic biomarkers (estrogen receptor, progesterone receptor, and human epidermal growth factor receptor 2) were collected from all study subjects. Pathologic, clinical, and imaging-based features were input into machine learning (ML) models to classify histologic grade, and model performance was tested against ground-truth labels at the patient level. Classification performance was evaluated using receiver operating characteristic (ROC) analysis. RESULTS: Multiparametric feature sets, containing both clinical and imaging-based features, demonstrated high classification performance. Using imaging-derived markers alone, classification demonstrated an area under the curve (AUC) of 0.745, while modeling these features with other pathologic biomarkers yielded an AUC of 0.836. CONCLUSION: These results demonstrate an association between tumor nuclear spatial features and tumor grade. If further validated, such systems could be implemented into pathology CAD tools to assist pathologists in grading tumors expeditiously at the time of diagnosis and to help guide clinical decisions.
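To make the final modeling step concrete, the sketch below combines imaging-derived nuclear features with pathologic biomarkers in a logistic regression classifier and evaluates it with ROC analysis. It is a minimal sketch on synthetic data; the actual feature definitions and models used in the study are not reproduced here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 138
imaging = rng.normal(size=(n, 4))        # e.g. nuclear density, spacing, area, eccentricity (synthetic)
biomarkers = rng.integers(0, 2, (n, 3))  # ER, PR, HER2 status (synthetic)
X = np.hstack([imaging, biomarkers])
y = rng.integers(0, 2, n)                # 1 = high histologic grade (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))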


Subjects
Breast Neoplasms; Artificial Intelligence; Biomarkers; Breast Neoplasms/diagnostic imaging; Female; Humans; Neural Networks, Computer; Retrospective Studies
3.
Sci Rep; 11(1): 8025, 2021 Apr 13.
Article in English | MEDLINE | ID: mdl-33850222

ABSTRACT

Breast cancer is currently the second most common cause of cancer-related death in women. Presently, the clinical benchmark in cancer diagnosis is tissue biopsy examination. However, the manual process of histopathological analysis is laborious, time-consuming, and limited by the quality of the specimen and the experience of the pathologist. This study's objective was to determine whether deep convolutional neural networks can be trained, with transfer learning, on a set of histopathological images independent of breast tissue to segment tumor nuclei of the breast. Various deep convolutional neural networks were evaluated for the study, including U-Net, Mask R-CNN, and a novel network (GB U-Net). The networks were trained on a set of hematoxylin and eosin (H&E)-stained images of eight diverse types of tissue. GB U-Net demonstrated superior performance in segmenting sites of invasive disease (AJI = 0.53, mAP = 0.39 and AJI = 0.54, mAP = 0.38), validated on two hold-out datasets exclusively containing breast tissue images with approximately 7,582 annotated cells. These results show that networks trained on images independent of breast tissue can accurately segment tumor nuclei of the breast.
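For readers unfamiliar with the segmentation architecture, the sketch below shows a toy U-Net-style encoder-decoder in PyTorch that maps an H&E image tile to per-pixel nucleus logits. It is an illustrative assumption only; the published GB U-Net, its training procedure, and the transfer-learning setup are not reproduced here.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)   # per-pixel nucleus logits

    def forward(self, x):
        e1 = self.enc1(x)                                 # full-resolution features
        e2 = self.enc2(self.pool(e1))                     # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], 1))   # upsample + skip connection
        return self.head(d1)

model = TinyUNet()
mask_logits = model(torch.randn(1, 3, 128, 128))  # random stand-in for an H&E tile
print(mask_logits.shape)                          # torch.Size([1, 1, 128, 128])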


Subjects
Breast Neoplasms; Image Processing, Computer-Assisted; Neural Networks, Computer; Humans
4.
Sensors (Basel); 20(18), 2020 Sep 08.
Article in English | MEDLINE | ID: mdl-32911771

ABSTRACT

Real-time acquisition of large amounts of machine operating data is now increasingly common due to recent advances in Industry 4.0 technologies. A key benefit to factory operators of this large-scale data acquisition is the ability to perform real-time condition monitoring and early-stage fault detection and diagnosis on industrial machinery, with the potential to reduce machine downtime and thus operating costs. The main contribution of this work is the development of an intelligent fault diagnosis method capable of operating on these real-time data streams to provide early detection of developing problems under variable operating conditions. We propose a novel dual-path recurrent neural network with a wide first kernel and deep convolutional neural network pathway (RNN-WDCNN) capable of operating on raw temporal signals, such as vibration data, to diagnose rolling element bearing faults in data acquired from electromechanical drive systems. RNN-WDCNN combines elements of recurrent neural networks (RNNs) and convolutional neural networks (CNNs) to capture distant dependencies in time-series data and suppress high-frequency noise in the input signals. Experimental results on the benchmark Case Western Reserve University (CWRU) bearing fault dataset show that RNN-WDCNN outperforms current state-of-the-art methods in both domain adaptation and noise rejection tasks.
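The dual-path idea can be pictured as follows: one branch is a 1-D CNN whose wide first kernel acts as a coarse low-pass filter on the raw vibration window, and the other is a recurrent branch that captures longer-range temporal dependencies; their outputs are concatenated for classification. This is an assumption-based simplification in PyTorch, not the published RNN-WDCNN architecture or its hyperparameters.

import torch
import torch.nn as nn

class DualPathFaultNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # CNN path: wide first kernel to suppress high-frequency noise
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # RNN path: LSTM over the raw samples for distant temporal dependencies
        self.rnn = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32 + 32, n_classes)

    def forward(self, x):                             # x: (batch, length) raw signal
        c = self.cnn(x.unsqueeze(1)).squeeze(-1)      # (batch, 32)
        _, (h, _) = self.rnn(x.unsqueeze(-1))         # h: (1, batch, 32)
        return self.fc(torch.cat([c, h.squeeze(0)], dim=1))

model = DualPathFaultNet()
logits = model(torch.randn(4, 2048))  # four windows of 2,048 raw vibration samples
print(logits.shape)                   # torch.Size([4, 10])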

5.
Crit Care Explor; 2(5): e0115, 2020 May.
Article in English | MEDLINE | ID: mdl-32671346

ABSTRACT

OBJECTIVES: To determine whether time-series analysis and Shannon information entropy of facial expressions predict acute clinical deterioration in patients on general hospital wards. DESIGN: Post hoc analysis of a prospective observational feasibility study (Visual Early Warning Score study). SETTING: General ward patients in a community hospital. PATIENTS: Thirty-four patients at risk of clinical deterioration. INTERVENTIONS: A 3-minute video (153,000 frames) for each of the patients enrolled into the Visual Early Warning Score study database was analyzed by a trained psychologist for facial expressions measured as action units using the Facial Action Coding System. MEASUREMENTS AND MAIN RESULTS: Three thousand six hundred eighty-eight action units were analyzed over the 34 3-minute study periods. The action unit time variables considered were onset, apex, offset, and total duration. A generalized linear regression model and time-series analyses were performed. Shannon information entropy (Hn) and diversity (Dn) were calculated from the frequency and repertoire of facial expressions. Patients subsequently admitted to critical care displayed a reduced frequency rate (95% CI of the moving-average mean: 9.5-10.9 vs 26.1-28.9 in those not admitted), a higher Shannon information entropy (0.30 ± 0.06 vs 0.26 ± 0.05; p = 0.019) and diversity index (1.36 ± 0.08 vs 1.30 ± 0.07; p = 0.020), and a prolonged action unit reaction time (23.5 vs 9.4 s) compared with patients not admitted to the ICU. The number of action units identified per window within the time-series analysis predicted admission to critical care with an area under the curve of 0.88. The areas under the curve for National Early Warning Score alone, Hn alone, National Early Warning Score plus Hn, and National Early Warning Score plus Hn plus Dn were 0.53, 0.75, 0.76, and 0.81, respectively. CONCLUSIONS: Patients who will be admitted to intensive care show a decrease in the number of facial expressions per unit of time and an increase in their diversity.
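As a concrete reading of the entropy and diversity measures, the sketch below computes a normalized Shannon entropy and an effective-number diversity from the observed frequencies of action units. The normalization by the log of the repertoire size and the exp(H) diversity are assumptions for illustration; the study's exact definitions of Hn and Dn are not reproduced here.

import math
from collections import Counter

def entropy_and_diversity(action_units):
    counts = Counter(action_units)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)                    # Shannon entropy (nats)
    hn = h / math.log(len(counts)) if len(counts) > 1 else 0.0  # normalized entropy (assumed form)
    dn = math.exp(h)                                            # effective number of distinct expressions
    return hn, dn

# Example: a short, hypothetical sequence of action units from one observation window
sequence = ["AU43", "AU15", "AU43", "AU25", "AU43", "AU51", "AU15"]
print(entropy_and_diversity(sequence))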

6.
Comput Biol Med; 102: 327-335, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30031535

ABSTRACT

Atrial Fibrillation (AF), either permanent or intermittent (paroxysmal AF), increases the risk of cardioembolic stroke. Accurate diagnosis of AF is obligatory for initiation of effective treatment to prevent stroke. Long-term cardiac monitoring improves the likelihood of diagnosing paroxysmal AF. We used a deep learning system to detect AF beats in Heart Rate (HR) signals. The data were partitioned with a sliding window of 100 beats. The resulting signal blocks were fed directly into a deep Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The system was validated and tested with data from the MIT-BIH Atrial Fibrillation Database. It achieved 98.51% accuracy with 10-fold cross-validation (20 subjects) and 99.77% with blindfold validation (3 subjects). The proposed system structure is straightforward, because there is no need for information reduction through feature extraction. All the complexity resides in the deep learning system, which receives the entire information content of a signal block. This setup leads to robust performance on unknown data, as measured with the blindfold validation. The proposed Computer-Aided Diagnosis (CAD) system can be used for long-term monitoring of the human heart. To the best of our knowledge, the proposed system is the first to incorporate deep learning for AF beat detection.
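The windowing and classification steps can be pictured with the minimal PyTorch sketch below: a heart-rate series is partitioned into 100-beat blocks, and each block is passed through an LSTM that emits an AF/non-AF decision. Layer sizes and the synthetic input are assumptions; this is not the authors' implementation.

import torch
import torch.nn as nn

def sliding_windows(signal, size=100, step=100):
    # Partition a 1-D heart-rate signal into fixed-length blocks of `size` beats.
    return torch.stack([signal[i:i + size] for i in range(0, len(signal) - size + 1, step)])

class AFBlockClassifier(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 2)            # AF vs. non-AF for each block

    def forward(self, x):                         # x: (batch, 100) beats
        _, (h, _) = self.lstm(x.unsqueeze(-1))    # feed raw beat values, one per time step
        return self.fc(h.squeeze(0))

hr = torch.rand(1000)                     # synthetic heart-rate series, 1,000 beats
blocks = sliding_windows(hr)              # shape (10, 100)
print(AFBlockClassifier()(blocks).shape)  # torch.Size([10, 2])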


Subjects
Atrial Fibrillation/diagnosis; Diagnosis, Computer-Assisted/methods; Electrocardiography; Electronic Data Processing; Signal Processing, Computer-Assisted; Algorithms; Data Collection; Databases, Factual; Deep Learning; Heart Rate; Humans; Monitoring, Physiologic; Neural Networks, Computer; Reproducibility of Results; Risk; Sensitivity and Specificity; Software; Support Vector Machine
7.
Crit Care Med; 46(7): 1057-1062, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29578879

ABSTRACT

OBJECTIVES: To identify facial expressions occurring in patients at risk of deterioration in hospital wards. DESIGN: Prospective observational feasibility study. SETTING: General ward patients in a London community hospital, United Kingdom. PATIENTS: Thirty-four patients at risk of clinical deterioration. INTERVENTIONS: A 5-minute video (25 frames/s; 7,500 images) was recorded, encrypted, and subsequently analyzed for action units by a trained Facial Action Coding System psychologist blinded to outcome. MEASUREMENTS AND MAIN RESULTS: Action units of the upper face, head position, eye position, lips and jaw position, and lower face were analyzed in conjunction with clinical measures collected within the National Early Warning Score. The most frequently detected action units were action unit 43 (73%) for the upper face, action unit 51 (11.7%) for head position, action unit 62 (5.8%) for eye position, action unit 25 (44.1%) for lips and jaw, and action unit 15 (67.6%) for the lower face. The presence of certain combined face displays was increased in patients requiring admission to intensive care, namely, action units 43 + 15 + 25 (face display 1, p < 0.013), action units 43 + 15 + 51/52 (face display 2, p < 0.003), and action units 43 + 15 + 51 + 25 (face display 3, p < 0.002). Having face display 1, face display 2, and face display 3 increased the risk of being admitted to intensive care 8-fold, 18-fold, and as a certain event, respectively. A logistic regression model with face display 1, face display 2, face display 3, and National Early Warning Score as independent covariates described admission to intensive care with an average concordance statistic (C-index) of 0.71 (p = 0.009). CONCLUSIONS: Patterned facial expressions can be identified in deteriorating general ward patients. This tool may potentially augment the risk prediction of current scoring systems.
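The final regression step can be illustrated with the sketch below: a logistic regression with the three face-display indicators and the National Early Warning Score as covariates, with the concordance statistic estimated as the ROC AUC of the fitted probabilities. The data are synthetic assumptions; only the covariate structure follows the abstract.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 34
X = np.column_stack([
    rng.integers(0, 2, n),   # face display 1 (AU43 + AU15 + AU25)
    rng.integers(0, 2, n),   # face display 2 (AU43 + AU15 + AU51/52)
    rng.integers(0, 2, n),   # face display 3 (AU43 + AU15 + AU51 + AU25)
    rng.integers(0, 15, n),  # National Early Warning Score
])
y = rng.integers(0, 2, n)    # 1 = admitted to intensive care (synthetic outcome)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("C-index:", roc_auc_score(y, model.predict_proba(X)[:, 1]))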


Subjects
Clinical Deterioration; Facial Expression; Adult; Aged; Aged, 80 and over; Feasibility Studies; Female; Hospitalization; Humans; Male; Middle Aged; Prospective Studies; Risk Assessment/methods; Video Recording