Results 1 - 5 of 5
1.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 4071-4089, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35976841

ABSTRACT

Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of the image. While few-shot object detection trains a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. Self-supervised methods, on the other hand, aim to learn representations from unlabeled data that transfer well to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches to few-shot and self-supervised object detection, then present our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/.

2.
Acad Radiol ; 29(7): 994-1003, 2022 Jul.

Article in English | MEDLINE | ID: mdl-35490114

ABSTRACT

RATIONALE AND OBJECTIVES: Hard data labels for automated algorithm training are binary and cannot incorporate uncertainty between labels. We proposed and evaluated a soft labeling methodology to quantify opacification and percent well-aerated lung (%WAL) on chest CT that accounts for uncertainty in segmenting pulmonary opacifications and reduces labeling burden.

MATERIALS AND METHODS: We retrospectively sourced 760 COVID-19 chest CT scans from five international centers between January and June 2020. We created pixel-wise labels for >27,000 axial slices covering three pulmonary opacification patterns: pure ground-glass, crazy-paving, and consolidation. We also quantified %WAL as the total area of lung without opacifications. Inter-user hard-label variability was quantified using Shannon entropy (range 0-1.39, low to high entropy/variability). Following an initial model trained with hard labels, we incorporated a soft labeling and modeling cycle, and compared performance using point-wise accuracy and intersection-over-union of opacity labels against ground truth, as well as correlation with ground-truth %WAL.

RESULTS: Hard labels annotated by 12 radiologists demonstrated large inter-user variability (only 3.37% of pixels achieved complete agreement). Our soft labeling approach increased point-wise accuracy from 60.0% to 84.3% (p=0.01) over hard labeling at predicting opacification type and area involvement. The soft-label model predicted %WAL more accurately (R=0.900) than the hard-label model (R=0.856), but the improvement was not statistically significant (p=0.349).

CONCLUSION: Our soft labeling approach increased accuracy for automated quantification and classification of pulmonary opacification on chest CT. Although we developed the model on COVID-19, our intent is broad application to other pulmonary opacification contexts and to provide a foundation for future development using soft labeling methods.


Subjects
COVID-19 ; Algorithms ; COVID-19/diagnostic imaging ; Humans ; Lung/diagnostic imaging ; Retrospective Studies ; Tomography, X-Ray Computed/methods ; Uncertainty
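The entropy range 0-1.39 quoted in the abstract corresponds to ln(4) ≈ 1.386, i.e., four possible per-pixel categories (three opacification patterns plus well-aerated lung). As a minimal sketch of how inter-annotator soft labels and their Shannon entropy can be computed (an illustrative NumPy implementation, not the authors' code; the function name and vote layout are assumptions):

```python
import numpy as np

def soft_labels_and_entropy(votes, num_classes):
    """votes: (num_annotators, H, W) integer class labels per pixel.

    Returns per-pixel soft labels (class frequencies across annotators)
    and the per-pixel Shannon entropy of those frequencies.
    """
    # Count how many annotators assigned each class to each pixel: (H, W, C)
    counts = np.stack([(votes == c).sum(axis=0) for c in range(num_classes)], axis=-1)
    p = counts / votes.shape[0]  # soft label: empirical class distribution per pixel
    # Shannon entropy; 0 = full agreement, ln(num_classes) = maximal disagreement
    with np.errstate(divide="ignore", invalid="ignore"):
        entropy = -np.nansum(np.where(p > 0, p * np.log(p), 0.0), axis=-1)
    return p, entropy
```

A pixel where all annotators agree has entropy 0; a pixel split evenly across all four categories reaches the maximum ln(4) ≈ 1.39, matching the range stated above.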
3.
Sci Rep ; 11(1): 17379, 2021 Aug 30.

Article in English | MEDLINE | ID: mdl-34462458

ABSTRACT

Estimating fish body measurements such as length, width, and mass has received considerable research attention due to its potential to boost productivity in marine and aquaculture applications. Some methods rely on manual collection of these measurements using tools such as a ruler, which is time-consuming and labour-intensive. Others rely on fully-supervised segmentation models to acquire these measurements automatically, but require collecting per-pixel labels, which is also time-consuming: it can take up to 2 minutes per fish to acquire accurate segmentation labels. To address this problem, we propose a segmentation model that can efficiently train on images labeled with point-level supervision, where each fish is annotated with a single click. This labeling scheme takes an average of only 1 second per fish. Our model uses a fully convolutional neural network with one branch that outputs per-pixel scores and another that outputs an affinity matrix. These two outputs are aggregated using a random walk to obtain the final, refined per-pixel output. The whole model is trained end-to-end using the localization-based counting fully convolutional neural network (LCFCN) loss, and thus we call our method Affinity-LCFCN (A-LCFCN). We conduct experiments on the DeepFish dataset, which contains several fish habitats from north-eastern Australia. The results show that A-LCFCN outperforms a fully-supervised segmentation model when the annotation budget is fixed, and that A-LCFCN achieves better segmentation results than LCFCN and a standard baseline.


Subjects
Fishes/physiology ; Neural Networks, Computer ; Animals ; Databases, Factual ; Ecosystem ; Image Processing, Computer-Assisted/methods ; User-Computer Interface
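The aggregation step described above — combining per-pixel scores with an affinity matrix via a random walk — can be sketched as repeated multiplication by a row-normalized (row-stochastic) transition matrix. This is an illustrative simplification, not the A-LCFCN implementation; in practice affinities are typically sparse and restricted to local pixel neighborhoods:

```python
import numpy as np

def random_walk_refine(scores, affinity, steps=10):
    """scores: (N, C) per-pixel class scores (pixels flattened to N).
    affinity: (N, N) non-negative pairwise affinities between pixels.

    Propagates scores along the graph defined by the affinities, so
    strongly connected pixels converge toward similar class scores.
    """
    # Row-normalize the affinities into a transition matrix (rows sum to 1)
    T = affinity / affinity.sum(axis=1, keepdims=True)
    out = scores
    for _ in range(steps):
        out = T @ out  # one random-walk step: average over neighbors
    return out
```

With an identity affinity the scores are unchanged; with uniform affinities every pixel converges to the global mean score, illustrating the smoothing effect.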
4.
IEEE J Biomed Health Inform ; 25(10): 3865-3873, 2021 Oct.

Article in English | MEDLINE | ID: mdl-34057902

ABSTRACT

Health professionals extensively use Two-Dimensional (2D) Ultrasound (US) videos and images to visualize and measure internal organs for various purposes, including evaluation of muscle architectural changes. US images can be used to measure abdominal muscle dimensions for the diagnosis and creation of customized treatment plans for patients with Low Back Pain (LBP); however, they are difficult to interpret. Due to high variability, skilled professionals with specialized training are required to take measurements to avoid low intra-observer reliability. This variability stems from the challenge of accurately locating measurement endpoints in abdominal US images. In this paper, we use a Deep Learning (DL) approach to automate the measurement of abdominal muscle thickness in 2D US images. By treating the problem as a localization task, we develop a modified Fully Convolutional Network (FCN) architecture to generate blobs at the coordinate locations of measurement endpoints, similar to what a human operator does. We demonstrate that on the TrA400 US image dataset, our network achieves a Mean Absolute Error (MAE) of 0.3125 on the test set, nearly matching the performance of skilled ultrasound technicians. Our approach can facilitate automating the measurement process in 2D US images while reducing both inter-observer and intra-observer variability for more effective clinical outcomes.


Subjects
Deep Learning ; Abdominal Muscles/diagnostic imaging ; Humans ; Observer Variation ; Reproducibility of Results ; Ultrasonography
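Blob-based localization of the kind described above is typically decoded by taking each predicted heatmap's peak as an endpoint coordinate; a thickness measurement is then the distance between paired endpoints. A hypothetical sketch (function names and the millimeter-per-pixel scaling are assumptions, not from the paper):

```python
import numpy as np

def decode_endpoints(heatmaps):
    """heatmaps: (K, H, W), one predicted blob per measurement endpoint.

    Returns a (K, 2) array of (row, col) coordinates at each heatmap's peak.
    """
    K, H, W = heatmaps.shape
    flat = heatmaps.reshape(K, -1).argmax(axis=1)  # peak index per heatmap
    return np.stack([flat // W, flat % W], axis=1)  # back to 2D coordinates

def thickness(p1, p2, mm_per_pixel=1.0):
    """Euclidean distance between two endpoints, scaled to physical units."""
    return float(np.linalg.norm(p1 - p2)) * mm_per_pixel
```

The pixel-to-millimeter scale would come from the ultrasound device calibration; subpixel refinement (e.g., a weighted centroid around the peak) is a common extension.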
5.
Sci Rep ; 10(1): 14671, 2020 Sep 04.

Article in English | MEDLINE | ID: mdl-32887922

ABSTRACT

Visual analysis of complex fish habitats is an important step towards sustainable fisheries for human consumption and environmental protection. Deep Learning methods have shown great promise for scene analysis when trained on large-scale datasets. However, current datasets for fish analysis tend to focus on the classification task within constrained, plain environments that do not capture the complexity of underwater fish habitats. To address this limitation, we present DeepFish, a benchmark suite with a large-scale dataset for training and testing methods on several computer vision tasks. The dataset consists of approximately 40,000 images collected underwater from 20 habitats in the marine environments of tropical Australia. The dataset originally contained only classification labels, so we collected point-level and segmentation labels to form a more comprehensive fish analysis benchmark. These labels enable models to learn to automatically monitor fish counts, identify their locations, and estimate their sizes. Our experiments provide an in-depth analysis of the dataset characteristics and a performance evaluation of several state-of-the-art approaches on our benchmark. Although models pre-trained on ImageNet perform well on this benchmark, there is still room for improvement. This benchmark therefore serves as a testbed to motivate further development in the challenging domain of underwater computer vision.


Subjects
Behavior, Animal/physiology ; Deep Learning ; Ecosystem ; Fishes/physiology ; Animals ; Australia ; Ecological Parameter Monitoring/methods ; Fisheries
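For the counting task enabled by the point-level labels, a standard evaluation is the mean absolute error between predicted and ground-truth per-image counts; the ground-truth counts follow directly from the point annotations. A minimal sketch under assumed function names and data layout (not the benchmark's actual evaluation code):

```python
import numpy as np

def counts_from_points(point_maps):
    """point_maps: (N, H, W) binary maps with one marked pixel per annotated fish.

    Returns the ground-truth object count for each of the N images.
    """
    return point_maps.reshape(point_maps.shape[0], -1).sum(axis=1)

def count_mae(pred_counts, true_counts):
    """Mean absolute error between predicted and ground-truth counts."""
    return float(np.mean(np.abs(np.asarray(pred_counts) - np.asarray(true_counts))))
```

Localization and size estimation would be scored separately (e.g., point-matching accuracy and segmentation IoU); count MAE captures only the monitoring aspect.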