Results 1 - 7 of 7
1.
Radiology; 302(1): 175-184, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34581626

ABSTRACT

Background Many studies emphasize the role of structured reports (SRs) because they are readily accessible for further automated analyses. However, the use of SR data obtained in clinical routine for research purposes is not yet well represented in the literature.

Purpose To compare the performance of the Qanadli scoring system with a clot burden score mined from structured pulmonary embolism (PE) reports from CT angiography.

Materials and Methods In this retrospective study, a rule-based text mining pipeline was developed to extract descriptors of PE and right heart strain from SRs of patients with suspected PE between March 2017 and February 2020. From standardized PE reporting, a pulmonary artery obstruction index (PAOI) clot burden score (PAOICBS) was derived and compared with the Qanadli score (PAOIQ). Scoring time and confidence from two independent readings were compared. Interobserver and interscore agreement was tested by using the intraclass correlation coefficient (ICC) and Bland-Altman analysis. To assess conformity and diagnostic performance of both scores, areas under the receiver operating characteristic curve (AUCs) were calculated to predict right heart strain incidence, as were optimal cutoff values for maximum sensitivity and specificity.

Results SR content authored by 67 residents and signed off by 32 consultants from 1248 patients (mean age, 63 years ± 17 [standard deviation]; 639 men) was extracted accurately and allowed for PAOICBS calculation in 304 of 357 (85.2%) PE-positive reports. The PAOICBS strongly correlated with the PAOIQ (r = 0.94; P < .001). Use of PAOICBS yielded overall time savings (1.3 minutes ± 0.5 vs 3.0 minutes ± 1.7), higher confidence levels (4.2 ± 0.6 vs 3.6 ± 1.0), and a higher ICC (0.99 vs 0.95) compared with PAOIQ (each, P < .001). AUCs were similar for PAOICBS (AUC, 0.75; 95% CI: 0.70, 0.81) and PAOIQ (AUC, 0.77; 95% CI: 0.72, 0.83; P = .68), with cutoff values of 27.5% for both scores.

Conclusion Data mining of structured reports enabled the development of a CT angiography scoring system that simplified the Qanadli score as a semiquantitative estimate of thrombus burden in patients with pulmonary embolism. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Hunsaker in this issue.
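The clot burden scoring compared above follows the Qanadli convention: 20 segmental arteries (10 per lung), each weighted by an obstruction factor of 0, 1 (partial), or 2 (complete), for a maximum of 40. The sketch below is an illustrative implementation of that convention, not the authors' text mining pipeline.

```python
def qanadli_paoi(segment_scores):
    """Qanadli-style pulmonary artery obstruction index (PAOI), in percent.

    segment_scores: list of 20 obstruction factors (0, 1, or 2), one per
    segmental artery (10 per lung). Maximum raw score is 40.
    """
    if len(segment_scores) != 20:
        raise ValueError("expected 20 segmental artery scores")
    if any(s not in (0, 1, 2) for s in segment_scores):
        raise ValueError("each score must be 0, 1, or 2")
    return 100.0 * sum(segment_scores) / 40.0

# A hypothetical patient with complete occlusion of 5 segmental arteries
# and partial occlusion of 4 others:
scores = [2] * 5 + [1] * 4 + [0] * 11
print(qanadli_paoi(scores))  # 35.0
```

A PAOI above the 27.5% cutoff reported in the Results would, under this sketch, correspond to a raw score above 11 of 40.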


Subjects
Computed Tomography Angiography/methods, Pulmonary Embolism/diagnostic imaging, Pulmonary Embolism/pathology, Thrombosis/diagnostic imaging, Thrombosis/pathology, Data Mining, Female, Humans, Male, Middle Aged, Pulmonary Artery/diagnostic imaging, Pulmonary Artery/pathology, Retrospective Studies, Sensitivity and Specificity
2.
Sensors (Basel); 20(18), 2020 Sep 12.
Article in English | MEDLINE | ID: mdl-32932585

ABSTRACT

The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures implemented to slow the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people maintain physical distance from other people using a combination of RGB and depth cameras. We run a real-time semantic segmentation algorithm on the RGB camera to detect where people are and use the depth camera to assess the distance to them; we then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if people are nearby but does not react to non-person objects such as walls, trees, or doors; thus, it is not intrusive, and it can be used in combination with other assistive devices. We tested our prototype system on one blind and four blindfolded persons and found that the system is precise, easy to use, and imposes a low cognitive load.
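The warning logic described above can be sketched as follows. The function name, the binary person mask, and the depth-map layout are illustrative assumptions, not the paper's code; a real system would take the mask from the segmentation network and the distances from the aligned depth camera.

```python
WARN_DISTANCE_M = 1.5  # warn when a person is closer than this

def should_warn(person_mask, depth_m):
    """Return True if any person pixel lies closer than the threshold.

    person_mask: 2D list of 0/1 flags from the RGB segmenter.
    depth_m: equally sized 2D list of distances in metres (0 = no reading).
    """
    for mask_row, depth_row in zip(person_mask, depth_m):
        for is_person, d in zip(mask_row, depth_row):
            if is_person and 0 < d < WARN_DISTANCE_M:
                return True  # trigger audio feedback
    return False

person_mask = [[0, 1], [0, 0]]
depth_m     = [[1.2, 1.2], [3.0, 3.0]]
print(should_warn(person_mask, depth_m))  # True: person pixel at 1.2 m
```

Masking the depth map with the person segmentation is what keeps walls, trees, and doors from triggering warnings.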


Subjects
Artificial Intelligence, Betacoronavirus, Blindness/rehabilitation, COVID-19/prevention & control, Coronavirus Infections/prevention & control, Pandemics/prevention & control, Pneumonia, Viral/prevention & control, Sensory Aids, Wearable Electronic Devices, Acoustics, Adult, Algorithms, Artificial Intelligence/statistics & numerical data, Blindness/psychology, Color Vision, Computer Systems/statistics & numerical data, Coronavirus Infections/epidemiology, Equipment Design, Female, Germany/epidemiology, Humans, Image Processing, Computer-Assisted/statistics & numerical data, Male, Physical Distancing, Pneumonia, Viral/epidemiology, Robotics, SARS-CoV-2, Semantics, Smart Glasses/statistics & numerical data, Visually Impaired Persons/rehabilitation, Wearable Electronic Devices/statistics & numerical data
3.
Diagnostics (Basel); 12(5), 2022 May 13.
Article in English | MEDLINE | ID: mdl-35626379

ABSTRACT

Detector-based spectral CT offers the possibility of obtaining spectral information from which discrete acquisitions at different energy levels can be derived, yielding so-called virtual monoenergetic images (VMI). In this study, we aimed to develop a jointly optimized deep-learning framework based on dual-energy CT pulmonary angiography (DE-CTPA) data to generate synthetic monoenergetic images (SMI) for improving automatic pulmonary embolism (PE) detection in single-energy CTPA scans. For this purpose, we used two datasets: our institutional DE-CTPA dataset D1, comprising polyenergetic arterial series and the corresponding VMI at low energy (40 keV) with 7892 image pairs, and a 10% subset of the 2020 RSNA Pulmonary Embolism CT Dataset D2, which consisted of 161,253 polyenergetic images with dichotomous slice-wise annotations (PE/no PE). We trained a fully convolutional encoder-decoder on D1 to generate SMI from single-energy CTPA scans of D2, which were then fed into a ResNet50 network for training of the downstream PE classification task. Quantitative evaluation of the reconstruction ability of our framework revealed high-quality SMI predictions, with a structural similarity of 0.984 ± 0.002 and a peak signal-to-noise ratio of 41.706 ± 0.547 dB. PE classification resulted in an AUC of 0.84 for our model, outperforming naïve approaches with AUCs of up to 0.81. Our study stresses the role of joint optimization strategies for deep-learning algorithms in improving automatic PE detection. The proposed pipeline may prove beneficial for computer-aided detection systems and could help rescue CTPA studies with suboptimal opacification of the pulmonary arteries from single-energy CT scanners.
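The peak signal-to-noise ratio used above to assess SMI reconstruction quality can be illustrated with a minimal PSNR computation; the small image pair below is hypothetical and stands in for a predicted SMI and its reference VMI.

```python
import math

def psnr(reference, predicted, max_value=255.0):
    """PSNR in dB between two equally sized 2D lists of pixel values."""
    n = 0
    mse = 0.0
    for ref_row, pred_row in zip(reference, predicted):
        for r, p in zip(ref_row, pred_row):
            mse += (r - p) ** 2
            n += 1
    mse /= n  # mean squared error over all pixels
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

reference = [[100.0, 120.0], [130.0, 140.0]]
predicted = [[101.0, 119.0], [131.0, 139.0]]
print(round(psnr(reference, predicted), 2))  # 48.13
```

Higher is better: a mean pixel error of one grey level on an 8-bit scale already yields roughly 48 dB, so the paper's 41.7 dB indicates close agreement between SMI and VMI.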

4.
IEEE Trans Image Process; 30: 1866-1881, 2021.
Article in English | MEDLINE | ID: mdl-33434128

ABSTRACT

Semantic segmentation, unifying most navigational perception tasks at the pixel level, has catalyzed striking progress in the field of autonomous transportation. Modern Convolutional Neural Networks (CNNs) are able to perform semantic segmentation both efficiently and accurately, particularly owing to their exploitation of wide context information. However, most segmentation CNNs are benchmarked on pinhole images with a limited Field of View (FoV). Despite the growing popularity of panoramic cameras for sensing the surroundings, semantic segmenters have not been comprehensively evaluated on omnidirectional wide-FoV data, which feature rich and distinct contextual information. In this paper, we propose a concurrent horizontal and vertical attention module to leverage the width-wise and height-wise contextual priors markedly available in panoramas. To yield semantic segmenters suitable for wide-FoV images, we present a multi-source omni-supervised learning scheme in which the panoramic domain is covered in training via data distillation. To facilitate the evaluation of contemporary CNNs on panoramic imagery, we put forward the Wild PAnoramic Semantic Segmentation (WildPASS) dataset, comprising images from around the globe as well as adverse and unconstrained scenes, which further reflects the perception challenges of real-world navigation applications. A comprehensive variety of experiments demonstrates that the proposed methods enable our high-efficiency architecture to attain significant accuracy gains, outperforming the state of the art in panoramic imagery domains.
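The directional-context idea behind concurrent horizontal and vertical attention can be sketched, in heavily simplified form, as row-wise and column-wise pooling of a feature map whose context is added back to every position. The real module uses learned attention weights over multi-channel features; this unweighted scalar toy only illustrates why width-wise and height-wise priors are separable in a panorama.

```python
def directional_context(feature_map):
    """Add horizontal (row-wise) and vertical (column-wise) mean context
    back to each position of a 2D scalar feature map (H x W list of lists)."""
    h, w = len(feature_map), len(feature_map[0])
    row_ctx = [sum(row) / w for row in feature_map]            # horizontal pool
    col_ctx = [sum(feature_map[i][j] for i in range(h)) / h    # vertical pool
               for j in range(w)]
    return [[feature_map[i][j] + row_ctx[i] + col_ctx[j]
             for j in range(w)] for i in range(h)]

fm = [[1.0, 3.0],
      [5.0, 7.0]]
print(directional_context(fm))  # [[6.0, 10.0], [14.0, 18.0]]
```

In a 360° panorama a full row spans the entire horizon, which is why width-wise pooling carries an especially strong prior compared with pinhole images.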


Subjects
Image Processing, Computer-Assisted/methods, Neural Networks, Computer, Semantics
5.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 6529-6532, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947337

ABSTRACT

Recent breakthroughs in computer vision offer an exciting avenue for developing new remote, non-intrusive patient monitoring techniques. A very challenging topic to address is the automated recognition of breathing disorders during sleep. Due to its complexity, this task has rarely been explored in the literature on real patients using such marker-free approaches. Here, we propose an approach based on deep learning architectures capable of classifying breathing disorders. The classification is performed on depth maps recorded with 3D cameras from 76 patients referred to a sleep laboratory who present a range of breathing disorders. Our system is capable of classifying individual breathing events as normal or abnormal with an accuracy of 61.8%; hence, our results show that computer vision and deep learning are viable tools for assessing breathing quality during sleep, locally or remotely.
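One simple way to see how depth maps carry breathing information is to average depth over a chest region of interest in each frame, yielding a respiration-like 1-D signal; the ROI coordinates and data layout below are hypothetical and not the authors' preprocessing, which feeds depth maps to deep learning architectures directly.

```python
def chest_signal(depth_frames, roi):
    """Mean depth (metres) over a chest ROI, one value per frame.

    depth_frames: list of 2D lists (one depth map per frame).
    roi: (top, bottom, left, right) pixel bounds, hypothetical here.
    """
    top, bottom, left, right = roi
    signal = []
    for frame in depth_frames:
        values = [frame[i][j] for i in range(top, bottom)
                              for j in range(left, right)]
        signal.append(sum(values) / len(values))
    return signal

# Two toy frames: the chest surface moves 10 cm between frames.
frames = [[[1.0, 1.0], [1.0, 1.0]],
          [[1.1, 1.1], [1.1, 1.1]]]
print(chest_signal(frames, (0, 2, 0, 2)))
```

Periodicity in such a signal corresponds to normal breathing; flattening or irregularity marks candidate abnormal events for the classifier.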


Subjects
Deep Learning, Respiration, Humans, Sleep
6.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 2099-2105, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946315

ABSTRACT

In epilepsy, semiology refers to the study of patient behavior and movement, and their temporal evolution during epileptic seizures. Understanding semiology provides clues to the cerebral networks underpinning the epileptic episode and is a vital resource in pre-surgical evaluation. Recent advances in video analytics have been helpful in capturing and quantifying epileptic seizures. Nevertheless, the automated representation of the evolution of semiology, as examined by neurologists, has not been appropriately investigated. From initial seizure symptoms until seizure termination, motion patterns of isolated clinical manifestations vary over time. Furthermore, epileptic seizures frequently evolve from one clinical manifestation to another, and this evolution cannot be overlooked during a pre-surgical evaluation. Here, we propose a system capable of computing motion signatures from videos of face and hand semiology to provide quantitative information on motion and the correlation between motions. Each signature is derived from a sparse saliency representation established by the magnitude of the optical flow field. The developed computer-aided tool provides a novel approach for physicians to analyze semiology as a flow of signals without interfering with the healthcare environment. We detect and quantify semiology using detectors based on deep learning and via a novel signature scheme, which is independent of the amount of data and seizure differences. The system reinforces the benefits of computer vision for non-obstructive clinical applications to quantify epileptic seizures recorded in real-life healthcare conditions.
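A minimal sketch of a motion signature built from optical-flow magnitudes follows. The paper's sparse saliency representation over detected face and hand regions is more elaborate; the toy flow fields and the keep-fraction parameter here are illustrative assumptions.

```python
import math

def motion_signature(flow_fields, keep_fraction=0.25):
    """One scalar per frame: mean magnitude of the most salient flow vectors.

    flow_fields: list of frames; each frame is a list of (u, v) flow vectors.
    keep_fraction: fraction of largest-magnitude vectors kept (sparsification).
    """
    signature = []
    for frame in flow_fields:
        magnitudes = sorted((math.hypot(u, v) for u, v in frame), reverse=True)
        k = max(1, int(len(magnitudes) * keep_fraction))
        signature.append(sum(magnitudes[:k]) / k)  # mean over the sparse set
    return signature

# Two toy frames: a strong localized motion, then a weaker one.
frames = [[(0.0, 0.0), (3.0, 4.0), (0.1, 0.1), (0.2, 0.0)],
          [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0), (0.0, 0.0)]]
print(motion_signature(frames))  # [5.0, 1.0]
```

Tracking such per-frame scalars over a seizure yields the "flow of signals" the abstract describes, and correlating signatures from face and hand regions quantifies how one manifestation evolves into another.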


Subjects
Diagnosis, Computer-Assisted, Epilepsy/diagnosis, Movement, Seizures/diagnosis, Electroencephalography, Face, Hand, Humans, Video Recording
7.
PLoS One; 10(7): e0130316, 2015.
Article in English | MEDLINE | ID: mdl-26201078

ABSTRACT

In recent years it has become apparent that a Gaussian center bias can serve as an important prior for visual saliency detection, which has been demonstrated for predicting human eye fixations and salient object detection. Tseng et al. have shown that the photographer's tendency to place interesting objects in the center is a likely cause for the center bias of eye fixations. We investigate the influence of the photographer's center bias on salient object detection, extending our previous work. We show that the centroid locations of salient objects in photographs of Achanta and Liu's data set in fact correlate strongly with a Gaussian model. This is an important insight, because it provides an empirical motivation and justification for the integration of such a center bias in salient object detection algorithms and helps to understand why Gaussian models are so effective. To assess the influence of the center bias on salient object detection, we integrate an explicit Gaussian center bias model into two state-of-the-art salient object detection algorithms. This way, first, we quantify the influence of the Gaussian center bias on pixel- and segment-based salient object detection. Second, we improve the performance in terms of F1 score, Fβ score, area under the recall-precision curve, area under the receiver operating characteristic curve, and hit-rate on the well-known data set by Achanta and Liu. Third, by debiasing Cheng et al.'s region contrast model, we exemplarily demonstrate that implicit center biases are partially responsible for the outstanding performance of state-of-the-art algorithms. Last but not least, we introduce a non-biased salient object detection method, which is of interest for applications in which the image data is not likely to have a photographer's center bias (e.g., image data of surveillance cameras or autonomous robots).
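The Gaussian center-bias prior discussed above can be sketched as a multiplicative weight map over a saliency map: positions near the image center are weighted more heavily. The sigma value, given as a fraction of image size, is an illustrative choice, not a parameter from the paper.

```python
import math

def center_bias_map(h, w, sigma=0.3):
    """Gaussian weight map peaking at the image center.

    sigma is expressed as a fraction of each image dimension (assumption).
    """
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sy, sx = sigma * h, sigma * w
    return [[math.exp(-((i - cy) ** 2 / (2 * sy ** 2)
                        + (j - cx) ** 2 / (2 * sx ** 2)))
             for j in range(w)] for i in range(h)]

def apply_center_bias(saliency, bias):
    """Element-wise product of a saliency map and the bias map."""
    return [[s * b for s, b in zip(srow, brow)]
            for srow, brow in zip(saliency, bias)]

bias = center_bias_map(3, 3)
saliency = [[1.0] * 3 for _ in range(3)]
biased = apply_center_bias(saliency, bias)
print(biased[1][1], round(biased[0][0], 3))  # center = 1.0, corners attenuated
```

Debiasing, as done for Cheng et al.'s region contrast model, amounts to dividing rather than multiplying by such a map, which is why the same model explains both the performance gain and the implicit bias.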


Subjects
Attention/physiology, Fixation, Ocular/physiology, Algorithms, Humans, Models, Statistical, Normal Distribution, Photic Stimulation, Visual Perception/physiology