Results 1 - 7 of 7
1.
Cancers (Basel) ; 16(2)2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38254870

ABSTRACT

This review explores the interconnection between precursor lesions of breast cancer (typical ductal hyperplasia, atypical ductal/lobular hyperplasia) and the subclinical stage of multiple organ failure syndrome, both representing early stages marked by alterations that precede clinical symptoms and are undetectable through conventional diagnostic methods. Addressing the question "Why do patients with breast cancer exhibit a tendency to deteriorate?", this study investigates the biological progression from subclinical multiple organ failure syndrome, characterized by insidious but indisputable lesions, to an acute (clinical) state that unfolds in a cascade akin to a waterfall or domino effect, often culminating in the patient's demise. A comprehensive literature search was conducted using the PubMed, Google Scholar, and Scopus databases in October 2023, employing keywords such as "MODS", "SIRS", "sepsis", "pathophysiology of MODS", "MODS in cancer patients", "multiple organ failure", "risk factors", "cancer", "ICU", "quality of life", and "breast cancer". Supplementary references were extracted from the retrieved articles. This study emphasizes the importance of early identification and prevention of the multiple organ failure cascade at the inception of the malignant state, aiming to enhance quality of life and extend survival. This pursuit contributes to a deeper understanding of risk factors and viable therapeutic options. Despite the existence of subclinical multiple organ failure syndrome, current diagnostic methodologies remain inadequate, prompting consideration of AI as an increasingly crucial tool for early identification in the diagnostic process.

2.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 10850-10869, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37030794

ABSTRACT

Denoising diffusion models represent a recently emerging topic in computer vision, demonstrating remarkable results in the area of generative modeling. A diffusion model is a deep generative model that is based on two stages, a forward diffusion stage and a reverse diffusion stage. In the forward diffusion stage, the input data is gradually perturbed over several steps by adding Gaussian noise. In the reverse stage, a model is tasked with recovering the original input data by learning to gradually reverse the diffusion process, step by step. Diffusion models are widely appreciated for the quality and diversity of the generated samples, despite their known computational burden, i.e., low speeds due to the high number of steps involved during sampling. In this survey, we provide a comprehensive review of articles on denoising diffusion models applied in vision, comprising both theoretical and practical contributions in the field. First, we identify and present three generic diffusion modeling frameworks, which are based on denoising diffusion probabilistic models, noise-conditioned score networks, and stochastic differential equations. We further discuss the relations between diffusion models and other deep generative models, including variational auto-encoders, generative adversarial networks, energy-based models, autoregressive models, and normalizing flows. Then, we introduce a multi-perspective categorization of diffusion models applied in computer vision. Finally, we illustrate the current limitations of diffusion models and envision some interesting directions for future research.
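The forward diffusion stage described above has a well-known closed form: after t noising steps with variance schedule beta, the perturbed sample is sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise, where alpha_bar_t is the product of (1 - beta_s). A minimal sketch in plain Python, with an illustrative linear schedule (the schedule values are assumptions, not taken from the survey):

```python
import math
import random

def forward_diffuse(x0, t, betas, rng=None):
    """Sample x_t directly from x_0: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps."""
    rng = rng or random.Random(0)
    alpha_bar = 1.0
    for s in range(t):
        alpha_bar *= 1.0 - betas[s]
    # Each coordinate keeps a sqrt(alpha_bar) fraction of signal plus Gaussian noise.
    return [math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
            for x in x0]

# Linear variance schedule (illustrative DDPM-style values).
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * s / (T - 1) for s in range(T)]
x0 = [0.5, -0.25, 1.0]
xT = forward_diffuse(x0, T, betas)  # after T steps, x_T is close to pure Gaussian noise
```

The reverse stage would train a network to undo these steps one at a time; that part requires a learned model and is not sketched here.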

3.
J Digit Imaging ; 35(5): 1326-1349, 2022 10.
Article in English | MEDLINE | ID: mdl-35445341

ABSTRACT

The class distribution of a training dataset is an important factor which influences the performance of a deep learning-based system. Understanding the optimal class distribution is therefore crucial when building a new training set which may be costly to annotate. This is the case for histological images used in cancer diagnosis, where image annotation requires domain experts. In this paper, we tackle the problem of finding the optimal class distribution of a training set in order to train an optimal model that detects cancer in histological images. We formulate several hypotheses which are then tested in scores of experiments with hundreds of trials. The experiments have been designed to account for both segmentation and classification frameworks with various class distributions in the training set, such as natural, balanced, over-represented cancer, and over-represented non-cancer. In the case of cancer detection, the experiments show several important results: (a) the natural class distribution produces more accurate results than the artificially generated balanced distribution; (b) the over-representation of non-cancer/negative classes (healthy tissue and/or background classes) compared to cancer/positive classes reduces the number of samples which are falsely predicted as cancer (false positives); (c) the least-expensive-to-annotate non-ROI (non-region-of-interest) data can be useful in compensating for the performance loss in the system due to a shortage of expensive-to-annotate ROI data; (d) the multi-label examples are more useful than the single-label ones to train a segmentation model; and (e) when the classification model is tuned with a balanced validation set, it is less affected than the segmentation model by the class distribution of the training set.
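The distributions compared above (natural, balanced, over-represented negative) all amount to resampling a labeled pool to a target label proportion. A hypothetical helper sketching that setup, assuming a simple list of (sample, label) pairs; the names and proportions are illustrative, not the paper's:

```python
import random
from collections import defaultdict

def resample(samples, target_dist, n, rng=None):
    """Draw n samples (with replacement) matching target_dist: {label: proportion}."""
    rng = rng or random.Random(0)
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append((x, y))
    out = []
    for label, prop in target_dist.items():
        # Sample the requested share of n from this label's pool.
        out.extend(rng.choices(by_label[label], k=round(n * prop)))
    rng.shuffle(out)
    return out

# Toy pool with a natural 1:10 imbalance, as is common in cancer detection.
data = [(i, "cancer") for i in range(30)] + [(i, "healthy") for i in range(300)]
balanced = resample(data, {"cancer": 0.5, "healthy": 0.5}, 200)
neg_heavy = resample(data, {"cancer": 0.2, "healthy": 0.8}, 200)  # over-represented negatives
```

Training the same model on `balanced` versus `neg_heavy` versus the raw pool is the kind of controlled comparison the experiments describe.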


Subjects
Deep Learning ; Neoplasms ; Humans ; Image Processing, Computer-Assisted/methods ; Neoplasms/diagnostic imaging
4.
Mach Vis Appl ; 33(1): 12, 2022.
Article in English | MEDLINE | ID: mdl-34955610

ABSTRACT

We study a series of recognition tasks in two realistic scenarios requiring the analysis of faces under strong occlusion. On the one hand, we aim to recognize facial expressions of people wearing virtual reality headsets. On the other hand, we aim to estimate the age and identify the gender of people wearing surgical masks. For all these tasks, the common ground is that half of the face is occluded. In this challenging setting, we show that convolutional neural networks trained on fully visible faces exhibit very low performance levels. While fine-tuning the deep learning models on occluded faces is extremely useful, we show that additional performance gains can be obtained by distilling knowledge from models trained on fully visible faces. To this end, we study two knowledge distillation methods, one based on teacher-student training and one based on triplet loss. Our main contribution consists of a novel approach for knowledge distillation based on triplet loss, which generalizes across models and tasks. Furthermore, we consider combining distilled models learned through conventional teacher-student training or through our novel teacher-student training based on triplet loss. We provide empirical evidence showing that, in most cases, both individual and combined knowledge distillation methods bring statistically significant performance improvements. We conduct experiments with three different neural models (VGG-f, VGG-face and ResNet-50) on various tasks (facial expression recognition, gender recognition, age estimation), showing consistent improvements regardless of the model or task.
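The triplet loss at the core of the distillation approach named above is standard: pull an anchor embedding toward a positive and push it from a negative, with a margin. A minimal sketch in plain Python; how anchors, positives, and negatives map onto student/teacher embeddings is our reading of the abstract, not the authors' exact construction:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: penalize unless the anchor is closer to the positive than
    to the negative by at least the margin."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# In distillation terms (illustrative assumption): the anchor could be the
# student's embedding of an occluded face, the positive the teacher's embedding
# of the same face fully visible, and the negative the teacher's embedding of
# a different face.
student = [0.1, 0.9]
teacher_same = [0.15, 0.85]
teacher_other = [0.9, 0.1]
loss = triplet_loss(student, teacher_same, teacher_other)
```

In a real training loop this scalar would be backpropagated through the student network; the teacher embeddings stay fixed.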

5.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 4505-4523, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33881990

ABSTRACT

Abnormal event detection in video is a complex computer vision problem that has attracted significant attention in recent years. The complexity of the task arises from the commonly adopted definition of an abnormal event, that is, a rarely occurring event that typically depends on the surrounding context. Following the standard formulation of abnormal event detection as outlier detection, we propose a background-agnostic framework that learns from training videos containing only normal events. Our framework is composed of an object detector, a set of appearance and motion auto-encoders, and a set of classifiers. Since our framework only looks at object detections, it can be applied to different scenes, provided that normal events are defined identically across scenes and that the single main factor of variation is the background. This makes our method background agnostic, as we rely strictly on objects that can cause anomalies, and not on the background. To overcome the lack of abnormal data during training, we propose an adversarial learning strategy for the auto-encoders. We create a scene-agnostic set of out-of-domain pseudo-abnormal examples, which are correctly reconstructed by the auto-encoders before gradient ascent is applied on them. We further utilize the pseudo-abnormal examples to serve as abnormal examples when training appearance-based and motion-based binary classifiers to discriminate between normal and abnormal latent features and reconstructions. Furthermore, to ensure that the auto-encoders focus only on the main object inside each bounding box image, we introduce a branch that learns to segment the main object. We compare our framework with the state-of-the-art methods on four benchmark data sets, using various evaluation metrics. Compared to existing methods, the empirical results indicate that our approach achieves favorable performance on all data sets.
In addition, we provide region-based and track-based annotations for two large-scale abnormal event detection data sets from the literature, namely ShanghaiTech and Subway.
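The outlier-detection formulation underlying this framework can be reduced to a simple rule: an auto-encoder trained only on normal events reconstructs normal inputs well, so anything with an unusually high reconstruction error is flagged. A toy sketch of that decision rule in plain Python; the error values and the mean-plus-3-sigma threshold are illustrative assumptions, not the paper's scoring function:

```python
import statistics

def fit_threshold(normal_errors, k=3.0):
    """Set the anomaly threshold at mean + k*std of errors seen on normal data."""
    mu = statistics.mean(normal_errors)
    sigma = statistics.pstdev(normal_errors)
    return mu + k * sigma

def is_abnormal(error, threshold):
    """Flag a test sample whose reconstruction error exceeds the threshold."""
    return error > threshold

# Hypothetical per-object reconstruction errors on normal training detections.
normal_errors = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08]
threshold = fit_threshold(normal_errors)
```

In the described framework the score would come from the appearance and motion auto-encoders (plus the binary classifiers) rather than a single scalar, but the thresholding logic is the same.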

6.
Sensors (Basel) ; 20(19)2020 Oct 01.
Article in English | MEDLINE | ID: mdl-33019508

ABSTRACT

In this paper, we present our system for the RSNA Intracranial Hemorrhage Detection challenge, which is based on the RSNA 2019 Brain CT Hemorrhage dataset. The proposed system is based on a lightweight deep neural network architecture composed of a convolutional neural network (CNN) that takes as input individual CT slices, and a Long Short-Term Memory (LSTM) network that takes as input multiple feature embeddings provided by the CNN. For efficient processing, we consider various feature selection methods to produce a subset of useful CNN features for the LSTM. Furthermore, we downscale the CT slices by a factor of 2×, which enables us to train the model faster. Although our model is designed to balance speed and accuracy, we report a weighted mean log loss of 0.04989 on the final test set, which places us in the top 30 (top 2%) out of 1345 participants. Although our computing infrastructure does not allow it, processing CT slices at their original scale would likely improve performance further. In order to enable others to reproduce our results, we provide our code as open source. After the challenge, we conducted a subjective intracranial hemorrhage detection assessment by radiologists, indicating that the performance of our deep model is on par with that of doctors specialized in reading CT scans. Another contribution of our work is the integration of Grad-CAM visualizations in our system, providing useful explanations for its predictions. We therefore consider our system a viable option when a fast diagnosis or a second opinion on intracranial hemorrhage detection is needed.
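The feature selection step mentioned above reduces each CNN slice embedding to a subset of dimensions before the LSTM sees the sequence. One generic way to do this is variance ranking across samples; this is a plausible sketch of such a step, not necessarily the method the system actually uses:

```python
import statistics

def top_variance_features(feature_matrix, k):
    """Return the (sorted) indices of the k features with highest variance
    across samples; constant features carry no information for the LSTM."""
    n_features = len(feature_matrix[0])
    variances = [statistics.pvariance([row[j] for row in feature_matrix])
                 for j in range(n_features)]
    ranked = sorted(range(n_features), key=lambda j: variances[j], reverse=True)
    return sorted(ranked[:k])

# Rows: hypothetical CNN embeddings of individual CT slices; columns: feature dims.
embeddings = [
    [0.1, 5.0, 0.0, 2.0],
    [0.1, 1.0, 0.0, 2.5],
    [0.1, 9.0, 0.0, 1.5],
]
selected = top_variance_features(embeddings, 2)
reduced = [[row[j] for j in selected] for row in embeddings]  # LSTM input per slice
```

Shrinking the per-slice embedding this way cuts the LSTM's input size, which matches the efficiency goal stated in the abstract.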


Subjects
Intracranial Hemorrhages/diagnostic imaging ; Neural Networks, Computer ; Tomography, X-Ray Computed ; Humans
7.
PLoS One ; 9(8): e104006, 2014.
Article in English | MEDLINE | ID: mdl-25133391

ABSTRACT

Recent tools for aligning short DNA reads have been designed to optimize the trade-off between correctness and speed. This paper introduces a method for assigning a set of short DNA reads to a reference genome under Local Rank Distance (LRD). The rank-based aligner proposed in this work aims to improve correctness over speed. However, some indexing strategies to speed up the aligner are also investigated. The LRD aligner is improved in terms of speed by storing k-mer positions in a hash table for each read. Another improvement, which produces an approximate LRD aligner, is to consider only the positions in the reference that are likely to represent a good positional match of the read. The proposed aligner is evaluated and compared to other state-of-the-art alignment tools in several experiments. One set of experiments is conducted to determine the precision and the recall of the proposed aligner in the presence of contaminated reads. In another set of experiments, the proposed aligner is used to find the order, the family, or the species of a new (or unknown) organism, given only a set of short Next-Generation Sequencing DNA reads. The empirical results show that the aligner proposed in this work is highly accurate from a biological point of view. Compared to the other evaluated tools, the LRD aligner has the important advantage of being very accurate even at very low base coverage. Thus, the LRD aligner can be considered a good alternative to standard alignment tools, especially when the accuracy of the aligner is of high importance. Source code and UNIX binaries of the aligner are freely available for future development and use at http://lrd.herokuapp.com/aligners. The software is implemented in C++ and Java, and is supported on UNIX and MS Windows.
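The core idea of Local Rank Distance over k-mers can be sketched in a few lines: for each k-mer occurrence in one string, charge the offset to the nearest occurrence of the same k-mer in the other string (with a cap when the k-mer is absent), then symmetrize. This is a simplified reading of LRD for illustration, not the authors' exact definition, and it omits the hash-table indexing and candidate-position pruning the paper describes:

```python
from collections import defaultdict

def lrd(x, y, k=3, max_offset=None):
    """Simplified Local Rank Distance between strings x and y over k-mers."""
    if max_offset is None:
        max_offset = max(len(x), len(y))

    def index(s):
        # Hash table mapping each k-mer to its list of positions.
        pos = defaultdict(list)
        for i in range(len(s) - k + 1):
            pos[s[i:i + k]].append(i)
        return pos

    def one_way(s, pos_other):
        total = 0
        for i in range(len(s) - k + 1):
            hits = pos_other.get(s[i:i + k])
            if hits:
                total += min(abs(i - j) for j in hits)  # nearest matching occurrence
            else:
                total += max_offset  # k-mer absent from the other string
        return total

    px, py = index(x), index(y)
    return one_way(x, py) + one_way(y, px)

d_same = lrd("ACGTACGT", "ACGTACGT")  # identical reads -> distance 0
d_diff = lrd("ACGT", "TACG", k=2)     # shifted k-mers -> small positive distance
```

A read would then be assigned to the reference position minimizing this distance, which is why the paper's positional pruning of candidate positions matters for speed.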


Subjects
Sequence Alignment/methods ; Software ; Animals ; Cluster Analysis ; DNA, Mitochondrial/genetics ; Genes, Bacterial ; Humans ; Phylogeny ; Sequence Analysis, DNA ; Vibrio/genetics