Results 1 - 20 of 24
1.
IEEE Rev Biomed Eng ; PP, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829752

ABSTRACT

Increasing demands on medical imaging departments are taking a toll on the radiologist's ability to deliver timely and accurate reports. Recent technological advances in artificial intelligence have demonstrated great potential for automatic radiology report generation (ARRG), sparking an explosion of research. This survey paper conducts a methodological review of contemporary ARRG approaches by way of (i) assessing datasets based on characteristics, such as availability, size, and adoption rate, (ii) examining deep learning training methods, such as contrastive learning and reinforcement learning, (iii) exploring state-of-the-art model architectures, including variations of CNN and transformer models, (iv) outlining techniques integrating clinical knowledge through multimodal inputs and knowledge graphs, and (v) scrutinising current model evaluation techniques, including commonly applied NLP metrics and qualitative clinical reviews. Furthermore, the quantitative results of the reviewed models are analysed, where the top performing models are examined to seek further insights. Finally, potential new directions are highlighted, with the adoption of additional datasets from other radiological modalities and improved evaluation methods predicted as important areas of future development.

2.
Sci Data ; 10(1): 918, 2023 Dec 20.
Article in English | MEDLINE | ID: mdl-38123584

ABSTRACT

Parkinson's disease (PD) is a neurodegenerative disorder characterised by motor symptoms such as gait dysfunction and postural instability. Technological tools to continuously monitor outcomes could capture the hour-by-hour symptom fluctuations of PD. Development of such tools is hampered by the lack of labelled datasets from home settings. To this end, we propose REMAP (REal-world Mobility Activities in Parkinson's disease), a human rater-labelled dataset collected in a home-like setting. It includes people with and without PD doing sit-to-stand transitions and turns in gait. These discrete activities are captured from periods of free-living (unobserved, unstructured) and during clinical assessments. The PD participants withheld their dopaminergic medications for a time (causing increased symptoms), so their activities are labelled as being "on" or "off" medications. Accelerometry from wrist-worn wearables and skeleton pose video data are included. We present an open dataset, where the data is coarsened to reduce re-identifiability, and a controlled dataset, available on application, which contains more refined data. A use-case for the data to estimate sit-to-stand speed and duration is illustrated.


Subjects
Parkinson Disease ; Humans ; Accelerometry ; Gait ; Time
3.
Digit Biomark ; 7(1): 92-103, 2023.
Article in English | MEDLINE | ID: mdl-37588481

ABSTRACT

Introduction: Technology holds the potential to track disease progression and response to neuroprotective therapies in Parkinson's disease (PD). The sit-to-stand (STS) transition is a frequently occurring event which is important to people with PD. The aim of this study was to demonstrate an automatic approach to quantify STS duration and speed using a real-world free-living dataset and to examine clinical correlations of the outcomes, including whether STS parameters change when someone withholds PD medications. Methods: Eighty-five hours of video data were collected from 24 participants staying in pairs for 5-day periods in a naturalistic setting. Skeleton joints were extracted from the video data; the head trajectory was estimated and used to estimate the STS parameters of duration and speed. Results: An average of 3.14 STS transitions per person per hour was observed. Significant correlations were seen between automatic and manual STS duration (Pearson rho - 0.419, p = 0.042) and between automatic STS speed and manual STS duration (Pearson rho - 0.780, p < 0.001). Significant and strong correlations were seen between the gold-standard clinical rating scale scores and both STS duration and STS speed; these correlations were not seen in the STS transitions when the participants were carrying something in their hand(s). At the cohort level, significant differences were seen between control participants and PD participants ON medications for both STS duration (U = 6,263, p = 0.018) and speed (U = 9,965, p < 0.001). At an individual level, only two participants with PD became significantly slower to STS when they were OFF medications; withholding medications did not significantly change STS duration at an individual level in any participant. Conclusion: We demonstrate a novel approach to automatically quantify and ecologically validate two STS parameters which correlate with gold-standard clinical tools measuring disease severity in PD.
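The two parameters described above can be illustrated with a minimal sketch: given a head-height trajectory extracted from skeleton data, a sit-to-stand event can be timed from the start to the end of the upward displacement, and its speed taken as the vertical distance covered over that duration. This is a simplified illustration rather than the authors' pipeline; the threshold fractions, frame rate, and the `head_y` input are assumptions.

```python
import numpy as np

def sts_duration_and_speed(head_y, fps=30.0, frac=0.1):
    """Estimate sit-to-stand duration (s) and mean vertical speed (m/s)
    from a head-height trajectory (metres, one value per video frame).

    Simplified heuristic: the transition is taken to start/end when the head
    rises above `frac` / (1 - `frac`) of its total vertical displacement.
    """
    head_y = np.asarray(head_y, dtype=float)
    lo, hi = head_y.min(), head_y.max()
    rise = hi - lo
    if rise <= 0:
        return None  # no upward movement detected

    start = np.argmax(head_y > lo + frac * rise)      # first frame above 10% of rise
    end = np.argmax(head_y > lo + (1 - frac) * rise)  # first frame above 90% of rise
    duration = (end - start) / fps
    speed = (head_y[end] - head_y[start]) / duration if duration > 0 else None
    return duration, speed

# Example: a synthetic transition raising the head by 0.45 m, sampled at 30 fps.
t = np.linspace(0, 3, 90)
head_y = 1.0 + 0.45 / (1 + np.exp(-6 * (t - 1.5)))    # smooth sigmoid rise
print(sts_duration_and_speed(head_y))
```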

4.
JMIR Form Res ; 6(9): e33606, 2022 Sep 14.
Article in English | MEDLINE | ID: mdl-36103223

ABSTRACT

BACKGROUND: Calorimetry is both expensive and obtrusive but provides the only way to accurately measure energy expenditure in daily living activities of any specific person, as different people can use different amounts of energy despite performing the same actions in the same manner. Deep learning video analysis techniques have traditionally required a lot of data to train; however, recent advances in few-shot learning, where only a few training examples are necessary, have made developing personalized models without a calorimeter a possibility. OBJECTIVE: The primary aim of this study is to determine which activities are best suited to calibrate a vision-based personalized deep learning calorie estimation system for daily living activities. METHODS: The SPHERE (Sensor Platform for Healthcare in a Residential Environment) Calorie data set is used, which features 10 participants performing 11 daily living activities totaling 4.5 hours of footage. Calorimeter and video data are available for all recordings. A deep learning method is used to regress calorie predictions from video. RESULTS: Models are personalized with 32 seconds of footage from all 11 actions in the data set, and mean square error (MSE) is computed against a calorimeter ground truth. The best single action for calibration is wipe (1.40 MSE). The best pair of actions is sweep and sit (1.09 MSE). This compares favorably to using a whole 30-minute sequence containing 11 actions to calibrate (1.06 MSE). CONCLUSIONS: A vision-based deep learning energy expenditure estimation system for a wide range of daily living activities can be calibrated to a specific person with footage and calorimeter data from 32 seconds of sweeping and 32 seconds of sitting.
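As a rough illustration of the calibration idea, the sketch below fits a simple per-person correction to generic calorie predictions using a short calibration window and then reports MSE against the calorimeter trace. The original study fine-tunes a deep network rather than fitting a linear correction; the array names, the 1 Hz sampling, and the synthetic data are assumptions.

```python
import numpy as np

def calibrate_and_score(pred, truth, calib_seconds=32, hz=1.0):
    """Fit a per-person linear calibration (scale + offset) on the first
    `calib_seconds` of data, then report MSE on the remainder.

    `pred`  : generic (uncalibrated) per-second calorie predictions
    `truth` : calorimeter ground truth, same length and sampling rate
    """
    n_cal = int(calib_seconds * hz)
    a, b = np.polyfit(pred[:n_cal], truth[:n_cal], deg=1)   # least-squares fit
    calibrated = a * pred[n_cal:] + b
    return float(np.mean((calibrated - truth[n_cal:]) ** 2))

# Synthetic example: the "true" expenditure is a scaled, offset version of the
# generic prediction plus noise, so calibration should reduce the MSE.
rng = np.random.default_rng(0)
pred = rng.uniform(1.0, 6.0, size=600)                      # 10 minutes at 1 Hz
truth = 1.3 * pred + 0.5 + rng.normal(0, 0.2, size=600)
print(calibrate_and_score(pred, truth))
```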

5.
Comput Med Imaging Graph ; 99: 102089, 2022 07.
Article in English | MEDLINE | ID: mdl-35738186

ABSTRACT

Although predicting ischaemic stroke evolution and treatment outcome provides important information and is a step towards individual treatment planning, classifying functional outcome and modelling brain tissue evolution remain challenging due to data complexity and visually subtle changes in the brain. We propose a novel deep learning approach, the Feature Matching Auto-encoder (FeMA), that consists of two stages: predicting ischaemic stroke evolution at one week without voxel-wise annotation, and predicting ischaemic stroke treatment outcome at 90 days from a baseline scan. In the first stage, we introduce a feature similarity and consistency objective, and in the second stage, we show that adding stroke evolution information increases the performance of functional outcome prediction. Comparative experiments demonstrate that our proposed method is more effective at extracting representative follow-up features and achieves the best results for predicting the functional outcome of stroke treatment.


Subjects
Brain Ischemia ; Ischemic Stroke ; Stroke ; Brain ; Brain Ischemia/diagnostic imaging ; Brain Ischemia/drug therapy ; Humans ; Ischemic Stroke/diagnostic imaging ; Ischemic Stroke/drug therapy ; Stroke/diagnostic imaging ; Treatment Outcome
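One loose reading of the "feature similarity" objective is a loss that encourages the encoder features of a predicted follow-up scan to match those of the real follow-up. The sketch below is one possible interpretation, not the published FeMA architecture; the tiny 3D encoder/decoder, the loss weighting, and the random inputs are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAutoencoder(nn.Module):
    """A deliberately small 3D encoder/decoder used only for illustration."""
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv3d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose3d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def feature_matching_loss(model, baseline, followup, w_feat=0.5):
    """Reconstruction loss on the predicted follow-up plus a feature-matching
    term between encoder features of the prediction and the real follow-up."""
    pred, _ = model(baseline)
    recon = F.mse_loss(pred, followup)
    with torch.no_grad():
        _, z_real = model(followup)        # target features, not back-propagated
    _, z_pred = model(pred)
    feat = F.mse_loss(z_pred, z_real)
    return recon + w_feat * feat

model = TinyAutoencoder()
baseline = torch.randn(2, 1, 32, 32, 32)   # two synthetic baseline scans
followup = torch.randn(2, 1, 32, 32, 32)
print(feature_matching_loss(model, baseline, followup).item())
```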
6.
Sensors (Basel) ; 21(12), 2021 Jun 16.
Article in English | MEDLINE | ID: mdl-34208690

ABSTRACT

Parkinson's disease (PD) is a chronic neurodegenerative condition that affects a patient's everyday life. It has been proposed that a machine learning and sensor-based approach that continuously monitors patients in naturalistic settings can provide constant evaluation of PD and objectively analyse its progression. In this paper, we make progress toward such PD evaluation by presenting a multimodal deep learning approach for discriminating between people with PD and without PD. Specifically, our proposed architecture, named MCPD-Net, uses two data modalities, acquired from vision and accelerometer sensors in a home environment, to train variational autoencoder (VAE) models. These are modality-specific VAEs that predict effective representations of human movements to be fused and given to a classification module. During our end-to-end training, we minimise the difference between the latent spaces corresponding to the two data modalities. This makes our method capable of dealing with missing modalities during inference. We show that our proposed multimodal method outperforms unimodal and other multimodal approaches by an average increase in F1-score of 0.25 and 0.09, respectively, on a data set with real patients. We also show that our method still outperforms other approaches by an average increase in F1-score of 0.17 when a modality is missing during inference, demonstrating the benefit of training on multiple modalities.


Subjects
Parkinson Disease ; Humans ; Machine Learning ; Monitoring, Physiologic
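The core idea above — two modality-specific encoders whose latent spaces are pulled together so that either modality can stand in for the other at inference time — can be sketched as follows. This is a simplified illustration, not the published MCPD-Net: deterministic encoders are used in place of full VAEs, and the layer sizes, feature dimensions, and loss weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps one modality (e.g. silhouette features or accelerometry windows)
    into a shared latent space."""
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))

    def forward(self, x):
        return self.net(x)

enc_video = ModalityEncoder(in_dim=128)     # assumed video-feature dimension
enc_accel = ModalityEncoder(in_dim=60)      # assumed accelerometer-window dimension
classifier = nn.Linear(32, 2)               # PD vs non-PD

def training_loss(x_video, x_accel, labels, w_align=1.0):
    z_v, z_a = enc_video(x_video), enc_accel(x_accel)
    align = F.mse_loss(z_v, z_a)                          # pull the two latent spaces together
    logits = classifier((z_v + z_a) / 2)                  # fuse by averaging
    return F.cross_entropy(logits, labels) + w_align * align

def predict(x_video=None, x_accel=None):
    """At inference, either modality alone can be used because the latent
    spaces were aligned during training."""
    zs = [enc(x) for enc, x in ((enc_video, x_video), (enc_accel, x_accel)) if x is not None]
    return classifier(torch.stack(zs).mean(dim=0)).argmax(dim=1)

x_v, x_a = torch.randn(4, 128), torch.randn(4, 60)
y = torch.tensor([0, 1, 0, 1])
print(training_loss(x_v, x_a, y).item(), predict(x_accel=x_a))
```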
7.
BMJ Open ; 10(11): e041303, 2020 11 30.
Article in English | MEDLINE | ID: mdl-33257491

ABSTRACT

INTRODUCTION: The impact of disease-modifying agents on disease progression in Parkinson's disease is largely assessed in clinical trials using clinical rating scales. These scales have drawbacks in terms of their ability to capture the fluctuating nature of symptoms while living in a naturalistic environment. The SPHERE (Sensor Platform for HEalthcare in a Residential Environment) project has designed a multi-sensor platform with multimodal devices designed to allow continuous, relatively inexpensive, unobtrusive sensing of motor, non-motor and activities of daily living metrics in a home or a home-like environment. The aim of this study is to evaluate how the SPHERE technology can measure aspects of Parkinson's disease. METHODS AND ANALYSIS: This is a small-scale feasibility and acceptability study during which 12 pairs of participants (comprising a person with Parkinson's and a healthy control participant) will stay and live freely for 5 days in a home-like environment embedded with SPHERE technology including environmental, appliance monitoring, wrist-worn accelerometry and camera sensors. These data will be collected alongside clinical rating scales, participant diary entries and expert clinician annotations of colour video images. Machine learning will be used to look for a signal to discriminate between Parkinson's disease and control, and between Parkinson's disease symptoms 'on' and 'off' medications. Additional outcome measures including bradykinesia, activity level, sleep parameters and some activities of daily living will be explored. Acceptability of the technology will be evaluated qualitatively using semi-structured interviews. ETHICS AND DISSEMINATION: Ethical approval has been given to commence this study; the results will be disseminated as widely as appropriate.


Subjects
Parkinson Disease ; Activities of Daily Living ; Feasibility Studies ; Humans ; Outcome Assessment, Health Care ; Parkinson Disease/diagnosis ; Symptom Assessment ; Technology
8.
Sensors (Basel) ; 20(18), 2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32942561

ABSTRACT

We propose a view-invariant method for assessing the quality of human movement which does not rely on skeleton data. Our end-to-end convolutional neural network, VI-Net, consists of two stages: first, a view-invariant trajectory descriptor for each body joint is generated from RGB images, and then the collection of trajectories for all joints is processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationships amongst the different body parts and deliver a score for the movement quality. We release the only publicly available, multi-view, non-skeleton, non-mocap rehabilitation movement dataset (QMAR), and provide results for both cross-subject and cross-view scenarios on this dataset. We show that VI-Net achieves an average rank correlation of 0.66 on cross-subject and 0.65 on unseen views when trained on only two views. We also evaluate the proposed method on the single-view rehabilitation dataset KIMORE and obtain a rank correlation of 0.66 against a baseline of 0.62.


Subjects
Movement ; Neural Networks, Computer ; Humans
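Rank correlation, the evaluation metric quoted above, can be reproduced with scipy; the sketch below compares predicted movement-quality scores against clinician ratings. The score values are invented purely for illustration.

```python
from scipy.stats import spearmanr

# Hypothetical movement-quality scores: model predictions vs. clinician ratings.
predicted = [0.12, 0.55, 0.40, 0.90, 0.31, 0.77, 0.20, 0.66]
clinician = [1,    4,    3,    8,    2,    7,    1,    6]

rho, p_value = spearmanr(predicted, clinician)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```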
9.
Sensors (Basel) ; 20(9), 2020 May 01.
Article in English | MEDLINE | ID: mdl-32369960

ABSTRACT

The use of visual sensors for monitoring people in their living environments is critical for obtaining more accurate health measurements, but it is undermined by the issue of privacy. Silhouettes, generated from RGB video, can considerably alleviate the privacy issue. However, the use of silhouettes makes it rather complex to discriminate between different subjects, preventing a subject-tailored analysis of the data within a free-living, multi-occupancy home. This limitation can be overcome with a strategic fusion of sensors that involves wearable accelerometer devices, which can be used in conjunction with the silhouette video data to match video clips to a specific patient being monitored. The proposed method simultaneously solves the problem of person re-identification (ReID) using silhouettes and enables home monitoring systems to employ sensor fusion techniques for data analysis. We develop a multimodal deep-learning detection framework that maps short video clips and accelerations into a latent space where the Euclidean distance can be measured to match video and acceleration streams. We train our method on the SPHERE Calorie Dataset, for which we show an average area under the ROC curve of 76.3% and an assignment accuracy of 77.4%. In addition, we propose a novel triplet loss for which we demonstrate improved performance and convergence speed.


Subjects
Monitoring, Physiologic ; Wearable Electronic Devices ; Acceleration ; Computers ; Humans
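The matching step — embedding video clips and acceleration windows into a shared space where Euclidean distance identifies the wearer — can be illustrated with a standard triplet loss, as sketched below. This uses PyTorch's built-in TripletMarginLoss rather than the paper's novel variant, and the embedding networks, feature dimensions, and random data are assumptions.

```python
import torch
import torch.nn as nn

embed_video = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 32))
embed_accel = nn.Sequential(nn.Linear(90, 64), nn.ReLU(), nn.Linear(64, 32))
triplet = nn.TripletMarginLoss(margin=1.0)

# Anchor: a silhouette-video clip embedding; positive: the acceleration window
# recorded by the same person at the same time; negative: another person's window.
video_clip = torch.randn(8, 256)
accel_same = torch.randn(8, 90)
accel_other = torch.randn(8, 90)

loss = triplet(embed_video(video_clip),
               embed_accel(accel_same),
               embed_accel(accel_other))
loss.backward()                      # gradients flow into both embedding networks
print(loss.item())

# At test time, a clip is assigned to whichever acceleration stream is closest:
dists = torch.cdist(embed_video(video_clip), embed_accel(accel_same))
print(dists.argmin(dim=1))           # index of the best-matching acceleration stream
```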
10.
Sensors (Basel) ; 19(3), 2019 Feb 04.
Article in English | MEDLINE | ID: mdl-30720749

ABSTRACT

Wellbeing is often affected by health-related conditions. Among them are nutrition-related health conditions, which can significantly decrease the quality of life. We envision a system that monitors the kitchen activities of patients and that, based on the detected eating behaviour, could provide clinicians with indicators for improving a patient's health. To be successful, such a system has to reason about the person's actions and goals. To address this problem, we introduce a symbolic behaviour recognition approach called Computational Causal Behaviour Models (CCBM). CCBM combines a symbolic representation of a person's behaviour with probabilistic inference to reason about one's actions, the type of meal being prepared, and its potential health impact. To evaluate the approach, we use a cooking dataset of unscripted kitchen activities, which contains data from various sensors in a real kitchen. The results show that the approach is able to reason about the person's cooking actions. It is also able to recognise the goal in terms of the type of meal prepared and whether it is healthy. Furthermore, we compare CCBM to state-of-the-art approaches such as Hidden Markov Models (HMM) and decision trees (DT). The results show that our approach performs comparably to the HMM and DT when used for activity recognition. It outperformed the HMM for goal recognition of the type of meal, with a median accuracy of 1 compared to 0.12 for the HMM. Our approach also outperformed the HMM for recognising whether a meal is healthy, with a median accuracy of 1 compared to 0.5 for the HMM.


Subjects
Health Behavior/physiology ; Algorithms ; Cooking/methods ; Humans ; Models, Theoretical
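CCBM itself is a symbolic model and is not reproduced here, but the decision-tree baseline it is compared against can be sketched with scikit-learn. The feature matrix of kitchen-sensor readings and the activity labels below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Placeholder kitchen-sensor features (e.g. appliance power, motion counts)
# and activity labels (0 = chopping, 1 = stirring, 2 = washing up).
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("activity-recognition accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```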
11.
IEEE Trans Biomed Eng ; 65(6): 1421-1431, 2018 06.
Article in English | MEDLINE | ID: mdl-29787997

ABSTRACT

OBJECTIVE: We propose a novel depth-based photoplethysmography (dPPG) approach to reduce motion artifacts in respiratory volume-time data and improve the accuracy of remote pulmonary function testing (PFT) measures. METHOD: Following spatial and temporal calibration of two opposing RGB-D sensors, a dynamic three-dimensional model of the subject performing PFT is reconstructed and used to decouple trunk movements from respiratory motions. Depth-based volume-time data is then retrieved, calibrated, and used to compute 11 clinical PFT measures for forced vital capacity and slow vital capacity spirometry tests. RESULTS: A dataset of 35 subjects (298 sequences) was collected and used to evaluate the proposed dPPG method by comparing depth-based PFT measures to the measures provided by a spirometer. Other comparative experiments between the dPPG and the single Kinect approach, such as Bland-Altman analysis, similarity measure performance, intra-subject error analysis, and statistical analysis of tidal volume and main effort scaling factors, all show the superior accuracy of the dPPG approach. CONCLUSION: We introduce a depth-based whole body photoplethysmography approach, which reduces motion artifacts in depth-based volume-time data and greatly improves the accuracy of depth-based computed measures. SIGNIFICANCE: The proposed dPPG method roughly halves the error mean and standard deviation of several FEF measures, IC, and ERV compared to the single Kinect approach. These significant improvements establish the potential for unconstrained remote respiratory monitoring and diagnosis.


Subjects
Photoplethysmography/methods ; Remote Sensing Technology/methods ; Respiratory Function Tests/methods ; Signal Processing, Computer-Assisted ; Whole Body Imaging/methods ; Adult ; Artifacts ; Female ; Humans ; Imaging, Three-Dimensional/methods ; Male ; Motion
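Bland-Altman analysis, one of the comparative analyses mentioned above, reduces to computing the mean difference (bias) and the 95% limits of agreement between paired measurements; a minimal version is sketched below with made-up FVC values.

```python
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement between two paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# Hypothetical FVC values (litres): depth-based estimate vs. spirometer.
depth_fvc = [3.1, 4.2, 2.8, 3.9, 5.0, 3.4]
spiro_fvc = [3.0, 4.4, 2.9, 3.8, 5.2, 3.5]
bias, limits = bland_altman(depth_fvc, spiro_fvc)
print(f"bias = {bias:.3f} L, limits of agreement = ({limits[0]:.3f}, {limits[1]:.3f}) L")
```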
12.
Front Physiol ; 8: 65, 2017.
Article in English | MEDLINE | ID: mdl-28223945

ABSTRACT

Introduction: There is increasing interest in technologies that may enable remote monitoring of respiratory disease. Traditional methods for assessing respiratory function, such as spirometry, can be expensive and require specialist training to perform and interpret. Remote, non-contact tracking of chest wall movement has been explored in the past using structured light, accelerometers and impedance pneumography, but these have often been costly and clinical utility remains to be defined. We present data from a 3-dimensional time-of-flight camera (found in gaming consoles) used to estimate chest volume during routine spirometry manoeuvres. Methods: Patients were recruited from a general respiratory physiology laboratory. Spirometry was performed according to international standards using an unmodified spirometer. A Microsoft Kinect V2 time-of-flight depth sensor was used to reconstruct 3-dimensional models of the subject's thorax to estimate volume-time and flow-time curves, following the introduction of a scaling factor to transform measurements into volume estimates. The Bland-Altman method was used to assess agreement of the model estimates with simultaneous recordings from the spirometer. Patient characteristics were used to assess predictors of error using regression analysis and to further explore the scaling factors. Results: The chest volume change estimated by the Kinect camera during spirometry tracked respiratory rate accurately and estimated forced vital capacity (FVC) and vital capacity to within less than ±1%. Forced expiratory volume estimation did not demonstrate acceptable limits of agreement, with 61.9% of readings showing >150 ml difference. Linear regression including age, gender, height, weight, and pack-years of smoking explained 37.0% of the variance in the scaling factor for volume estimation. The technique had a positive predictive value of 0.833 for detecting obstructive spirometry. Conclusion: These data illustrate the potential of 3D time-of-flight cameras to remotely monitor respiratory rate. This is not a replacement for conventional spirometry and needs further refinement. Further algorithms are being developed to allow its independence from spirometry. Benefits include simplicity of set-up, no need for specialist training, and low cost. This technique warrants further refinement and validation in larger cohorts.
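The reported regression of patient characteristics onto the volume scaling factor can be illustrated with ordinary least squares in scikit-learn, as below; the predictor names match the abstract, but the data values are invented and the synthetic relationship is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: age (years), gender (0/1), height (cm), weight (kg), smoking pack-years.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(20, 80, 50),      # age
    rng.integers(0, 2, 50),       # gender
    rng.uniform(150, 195, 50),    # height
    rng.uniform(50, 110, 50),     # weight
    rng.uniform(0, 40, 50),       # pack-years
])
scaling_factor = 0.002 * X[:, 2] + 0.001 * X[:, 3] + rng.normal(0, 0.02, 50)

model = LinearRegression().fit(X, scaling_factor)
print("R^2 (variance explained):", round(model.score(X, scaling_factor), 3))
```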

13.
IEEE Trans Biomed Eng ; 64(8): 1943-1958, 2017 08.
Article in English | MEDLINE | ID: mdl-27925582

ABSTRACT

OBJECTIVE: We propose a remote, noninvasive approach to develop pulmonary function testing (PFT) using a depth sensor. METHOD: After generating a point cloud from scene depth values, we construct a three-dimensional model of the subject's chest. Then, by estimating the chest volume variation throughout a sequence, we generate volume-time and flow-time data for two prevalent spirometry tests: forced vital capacity (FVC) and slow vital capacity (SVC). Tidal volume and main effort sections of the volume-time data are analyzed and calibrated separately to remove the effects of the subject's torso motion. After automatic extraction of keypoints from the volume-time and flow-time curves, seven FVC measures (FVC, FEV1, PEF, FEF 25%, FEF 50%, FEF 75%, and FEF 25-75%) and four SVC measures (VC, IC, TV, and ERV) are computed and then validated against measures from a spirometer. A dataset of 85 patients (529 sequences in total), attending a respiratory outpatient service for spirometry, was collected and used to evaluate the proposed method. RESULTS: High correlations between the proposed method and the spirometer were obtained for FVC and SVC measures, on both intra-test and intra-subject comparisons. CONCLUSION: Our proposed depth-based approach is able to remotely compute eleven clinical PFT measures, giving highly accurate results when evaluated against a spirometer on a dataset comprising 85 patients. SIGNIFICANCE: Experimental results computed over an unprecedented number of clinical patients confirm that chest surface motion is linearly related to changes in lung volume, which establishes the potential toward an accurate, low-cost, and remote alternative to traditional cumbersome methods, such as spirometry.


Subjects
Diagnosis, Computer-Assisted/methods ; Imaging, Three-Dimensional/methods ; Monitoring, Ambulatory/methods ; Respiratory Mechanics/physiology ; Thorax/physiology ; Tidal Volume/physiology ; Diagnosis, Computer-Assisted/instrumentation ; Humans ; Imaging, Three-Dimensional/instrumentation ; Monitoring, Ambulatory/instrumentation ; Reproducibility of Results ; Respiratory Function Tests/instrumentation ; Respiratory Function Tests/methods ; Sensitivity and Specificity
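Once a calibrated volume-time curve is available, several of the FVC measures listed above follow directly from it; the sketch below computes FVC, FEV1 and PEF from a sampled expiratory curve. This is a textbook-style calculation under the assumption that the curve starts at the onset of forced expiration, not the authors' keypoint-extraction pipeline, and the synthetic curve is invented.

```python
import numpy as np

def fvc_measures(volume, fps=30.0):
    """Compute FVC, FEV1 and PEF from an expired-volume curve (litres)
    sampled at `fps` Hz, starting at the onset of forced expiration."""
    volume = np.asarray(volume, float)
    t = np.arange(len(volume)) / fps
    flow = np.gradient(volume, t)                 # litres per second
    fvc = volume[-1] - volume[0]
    fev1 = np.interp(1.0, t, volume) - volume[0]  # volume expired in the first second
    pef = flow.max()
    return fvc, fev1, pef

# Synthetic exponential blow-out: ~4.2 L in total, most of it in the first second.
t = np.arange(0, 6, 1 / 30)
volume = 4.2 * (1 - np.exp(-2.2 * t))
print(fvc_measures(volume))
```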
14.
IEEE Trans Image Process ; 25(9): 4379-4393, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27390176

ABSTRACT

We address the problem of object modeling from 3D and 3D+T data made up of images, which contain different parts of an object of interest, are separated by large spaces, and are misaligned with respect to each other. These images have only a limited number of intersections, hence making their registration particularly challenging. Furthermore, such data may result from various medical imaging modalities and can, therefore, present very diverse spatial configurations. Previous methods perform registration and object modeling (segmentation and interpolation) sequentially. However, sequential registration is ill-suited for the case of images with few intersections. We propose a new methodology, which, regardless of the spatial configuration of the data, performs the three stages of registration, segmentation, and shape interpolation from spaced and misaligned images simultaneously. We integrate these three processes in a level set framework, in order to benefit from their synergistic interactions. We also propose a new registration method that exploits segmentation information rather than pixel intensities, and that accounts for the global shape of the object of interest, for increased robustness and accuracy. The accuracy of registration is compared against traditional mutual information based methods, and the total modeling framework is assessed against traditional sequential processing and validated on artificial, CT, and MRI data.

15.
Article in English | MEDLINE | ID: mdl-25333178

ABSTRACT

Accurate automated segmentation of the right ventricle is difficult, due in part to the large shape variation found between patients. We explore the ability of manifold learning based shape models to represent the complexity of shape variation found within a right ventricle (RV) dataset, as compared to a typical PCA based model. This is evaluated empirically, with the manifold model displaying a greater ability to represent complex shapes. Furthermore, we present a combined manifold shape model and Markov Random Field segmentation framework. The novelty of this method is the iterative generation of targeted shape priors from the manifold, using image information and a current estimate of the segmentation; a process that can be seen as a traversal across the manifold. We apply our method to the independently evaluated MICCAI 2012 RV Segmentation Challenge data set. Our method performs similarly to or better than the state-of-the-art methods.


Subjects
Diffusion Magnetic Resonance Imaging/methods ; Heart Ventricles/pathology ; Image Interpretation, Computer-Assisted/methods ; Magnetic Resonance Imaging, Cine/methods ; Pattern Recognition, Automated/methods ; Subtraction Technique ; Ventricular Dysfunction, Right/pathology ; Algorithms ; Computer Simulation ; Humans ; Image Enhancement/methods ; Markov Chains ; Models, Cardiovascular ; Models, Statistical ; Reproducibility of Results ; Sensitivity and Specificity
16.
IEEE Trans Image Process ; 23(1): 110-25, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24158475

ABSTRACT

We address the two inherently related problems of segmentation and interpolation of 3D and 4D sparse data and propose a new method to integrate these stages in a level set framework. The interpolation process uses segmentation information rather than pixel intensities for increased robustness and accuracy. The method supports any spatial configurations of sets of 2D slices having arbitrary positions and orientations. We achieve this by introducing a new level set scheme based on the interpolation of the level set function by radial basis functions. The proposed method is validated quantitatively and/or subjectively on artificial data and MRI and CT scans and is compared against the traditional sequential approach, which interpolates the images first, using a state-of-the-art image interpolation method, and then segments the interpolated volume in 3D or 4D. In our experiments, the proposed framework yielded similar segmentation results to the sequential approach but provided a more robust and accurate interpolation. In particular, the interpolation was more satisfactory in cases of large gaps, due to the method taking into account the global shape of the object, and it recovered better topologies at the extremities of the shapes where the objects disappear from the image slices. As a result, the complete integrated framework provided more satisfactory shape reconstructions than the sequential approach.


Subjects
Algorithms ; Image Interpretation, Computer-Assisted/methods ; Imaging, Three-Dimensional/methods ; Magnetic Resonance Imaging/methods ; Tomography, X-Ray Computed/methods ; Image Enhancement/methods ; Reproducibility of Results ; Sample Size ; Sensitivity and Specificity ; Systems Integration
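The radial-basis-function interpolation at the heart of the scheme above can be illustrated with scipy: signed values sampled on a few sparse slice locations are interpolated into a dense volume whose zero level set gives the reconstructed surface. The sphere used as the object, the slice positions, and the kernel choice are assumptions made for this toy example, not the paper's level set formulation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Signed "inside/outside" values of a radius-0.6 sphere, sampled only on three
# sparse, arbitrarily placed z-slices (standing in for 2D image slices).
def signed_value(p):
    return 0.6 - np.linalg.norm(p, axis=-1)      # >0 inside, <0 outside

slices_z = [-0.4, 0.1, 0.5]
xs = np.linspace(-1, 1, 15)
pts = np.array([[x, y, z] for z in slices_z for x in xs for y in xs])
vals = signed_value(pts)

# Interpolate the level set function everywhere using radial basis functions.
rbf = RBFInterpolator(pts, vals, kernel="thin_plate_spline")

# Evaluate on a dense grid; the zero crossing approximates the sphere surface.
grid = np.array([[x, y, z] for x in np.linspace(-1, 1, 20)
                 for y in np.linspace(-1, 1, 20)
                 for z in np.linspace(-1, 1, 20)])
phi = rbf(grid)
inside = grid[phi > 0]
print("recovered inside-point radii:",
      np.linalg.norm(inside, axis=1).min(), np.linalg.norm(inside, axis=1).max())
```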
17.
IEEE Trans Image Process ; 21(8): 3757-69, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22510949

ABSTRACT

In this paper, we present an automatic restoration system targeting dirt and blotches in digitized archive films. The system is composed of two main modules: defect detection and defect removal. In defect detection, we locate the defects by combining temporal and spatial information across a number of frames. An HMM is trained on normal observation sequences and then applied within a framework to detect defective pixels. The resulting defect maps are refined in a two-stage false alarm elimination process and then passed over to the defect removal procedure. Each labelled (degraded) pixel is restored in a multiscale framework by first searching for the optimal replacement in its dynamically generated, random walk based region of candidate pixel-exemplars, and then updating all its features (intensity, motion and texture). Finally, the proposed system is compared against state-of-the-art methods to demonstrate improved accuracy in both detection and restoration using synthetic and real degraded image sequences.


Subjects
Algorithms ; Artifacts ; Image Enhancement/methods ; Image Interpretation, Computer-Assisted/methods ; Pattern Recognition, Automated/methods ; Photography/methods ; Archives ; Reproducibility of Results ; Sensitivity and Specificity
18.
IEEE Trans Image Process ; 21(3): 1231-45, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21908256

ABSTRACT

A new fully automatic object tracking and segmentation framework is proposed. The framework consists of a motion-based bootstrapping algorithm concurrent to a shape-based active contour. The shape-based active contour uses finite shape memory that is automatically and continuously built from both the bootstrap process and the active-contour object tracker. A scheme is proposed to ensure that the finite shape memory is continuously updated but forgets unnecessary information. Two new ways of automatically extracting shape information from image data given a region of interest are also proposed. Results demonstrate that the bootstrapping stage provides important motion and shape information to the object tracker. This information is found to be essential for good (fully automatic) initialization of the active contour. Further results also demonstrate convergence properties of the content of the finite shape memory and similar object tracking performance in comparison with an object tracker with unlimited shape memory. Tests with an active contour using a fixed-shape prior also demonstrate superior performance for the proposed bootstrapped finite-shape-memory framework and similar performance when compared with a recently proposed active contour that uses an alternative online learning model.

19.
Med Image Comput Comput Assist Interv ; 15(Pt 3): 329-36, 2012.
Article in English | MEDLINE | ID: mdl-23286147

ABSTRACT

Fluorescently-tagged proteins located on vesicles can fuse with the surface membrane (visualised as a 'puff') or undock and return back into the bulk of the cell. Detection and quantitative measurement of these events from time-lapse videos has proven difficult. We propose a novel approach to detect fusion and undocking events by first searching for docked vesicles that 'disappear' from the field of view, and then using a diffusion model to classify them as either fusion or undocking events. We can also use the same searching method to identify docking events. We present comparative results against existing algorithms.


Subjects
Cell Membrane/ultrastructure ; Cell Tracking/methods ; Image Interpretation, Computer-Assisted/methods ; Membrane Fusion ; Microscopy, Fluorescence/methods ; Models, Biological ; Transport Vesicles/ultrastructure ; Cell Membrane/physiology ; Computer Simulation ; Diffusion ; Image Enhancement/methods ; Reproducibility of Results ; Sensitivity and Specificity
20.
IEEE Trans Pattern Anal Mach Intell ; 30(4): 632-46, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18276969

ABSTRACT

We propose an active contour model using an external force field that is based on magnetostatics and hypothesized magnetic interactions between the active contour and object boundaries. The major contribution of the method is that the interaction of its forces greatly improves the ability of the active contour to capture complex geometries and to deal with difficult initializations, weak edges and broken boundaries. The proposed method is shown to achieve significant improvements when compared against six well-known and state-of-the-art shape recovery methods, including the geodesic snake, the generalized version of the GVF snake, the combined geodesic and GVF snake, and the charged particle model.


Subjects
Algorithms ; Artificial Intelligence ; Image Interpretation, Computer-Assisted/methods ; Pattern Recognition, Automated/methods ; Static Electricity ; Computer Simulation ; Electromagnetic Fields ; Image Enhancement/methods ; Models, Theoretical ; Reproducibility of Results ; Sensitivity and Specificity