Versatile multiple object tracking in sparse 2D/3D videos via deformable image registration.
Ryu, James; Nejatbakhsh, Amin; Torkashvand, Mahdi; Gangadharan, Sahana; Seyedolmohadesin, Maedeh; Kim, Jinmahn; Paninski, Liam; Venkatachalam, Vivek.
Affiliations
  • Ryu J; Department of Physics, Northeastern University, Boston, Massachusetts, United States of America.
  • Nejatbakhsh A; Department of Neuroscience, Columbia University, New York, New York, United States of America.
  • Torkashvand M; Department of Physics, Northeastern University, Boston, Massachusetts, United States of America.
  • Gangadharan S; Department of Physics, Northeastern University, Boston, Massachusetts, United States of America.
  • Seyedolmohadesin M; Department of Physics, Northeastern University, Boston, Massachusetts, United States of America.
  • Kim J; Department of Physics, Northeastern University, Boston, Massachusetts, United States of America.
  • Paninski L; Department of Neuroscience, Columbia University, New York, New York, United States of America.
  • Venkatachalam V; Department of Physics, Northeastern University, Boston, Massachusetts, United States of America.
PLoS Comput Biol; 20(5): e1012075, 2024 May.
Article in English | MEDLINE | ID: mdl-38768230
ABSTRACT
Tracking body parts in behaving animals, extracting fluorescence signals from cells embedded in deforming tissue, and analyzing cell migration patterns during development all require tracking objects with partially correlated motion. As dataset sizes increase, manual tracking of objects becomes prohibitively inefficient and slow, necessitating automated and semi-automated computational tools. Unfortunately, existing methods for multiple object tracking (MOT) are either developed for specific datasets and hence do not generalize well to other datasets, or require large amounts of training data that are not readily available. This is further exacerbated when tracking fluorescent sources in moving and deforming tissues, where the lack of unique features and sparsely populated images create a challenging environment, especially for modern deep learning techniques. By leveraging technology recently developed for spatial transformer networks, we propose ZephIR, an image registration framework for semi-supervised MOT in 2D and 3D videos. ZephIR can generalize to a wide range of biological systems by incorporating adjustable parameters that encode spatial (sparsity, texture, rigidity) and temporal priors of a given data class. We demonstrate the accuracy and versatility of our approach in a variety of applications, including tracking the body parts of a behaving mouse and neurons in the brain of a freely moving C. elegans. We provide an open-source package along with a web-based graphical user interface that allows users to provide small numbers of annotations to interactively improve tracking results.
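The abstract describes ZephIR only at a high level: keypoints annotated in reference frames are registered into other frames by optimizing an image-matching objective regularized by spatial (rigidity) and temporal priors. As a rough illustration of that idea, the sketch below tracks 2D keypoints by gradient descent on a patch-similarity loss plus a spring term and a temporal term. This is not ZephIR's actual API or loss; PyTorch is assumed only because spatial transformer networks are typically built on grid_sample, and the names (sample_patches, track_frame), patch scaling, and loss weights are all hypothetical.

# Minimal sketch of registration-based keypoint tracking in the spirit of ZephIR.
# Illustrative only; does not reproduce ZephIR's implementation or parameters.
import torch
import torch.nn.functional as F

def sample_patches(frame, coords, patch_size=9):
    """Bilinearly sample square patches centered on keypoints.

    frame:  (1, C, H, W) image tensor
    coords: (N, 2) keypoint centers in normalized [-1, 1] (x, y) grid coordinates
    """
    n = coords.shape[0]
    # Local offsets in normalized coordinates (rough pixel-to-grid scaling).
    lin = torch.linspace(-1.0, 1.0, patch_size) * (patch_size / frame.shape[-1])
    dy, dx = torch.meshgrid(lin, lin, indexing="ij")
    offsets = torch.stack([dx, dy], dim=-1)                    # (p, p, 2)
    grid = coords.view(n, 1, 1, 2) + offsets.unsqueeze(0)      # (N, p, p, 2)
    frames = frame.expand(n, -1, -1, -1)                       # (N, C, H, W)
    return F.grid_sample(frames, grid, align_corners=True)     # (N, C, p, p)

def track_frame(ref_frame, cur_frame, ref_coords, prev_coords, edges,
                lambda_spring=1.0, lambda_temporal=0.1, steps=200, lr=1e-2):
    """Register keypoints from an annotated reference frame into the current frame.

    edges: (E, 2) long tensor of neighboring keypoint index pairs (rigidity prior).
    Returns an (N, 2) tensor of updated keypoint coordinates.
    """
    # Initialize from the previous frame and optimize the coordinates directly.
    coords = prev_coords.detach().clone().requires_grad_(True)
    ref_patches = sample_patches(ref_frame, ref_coords).detach()
    ref_dists = (ref_coords[edges[:, 0]] - ref_coords[edges[:, 1]]).norm(dim=-1)
    opt = torch.optim.Adam([coords], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cur_patches = sample_patches(cur_frame, coords)
        # Image term: patches around tracked points should match the reference patches.
        loss_img = F.mse_loss(cur_patches, ref_patches)
        # Spring (rigidity) term: neighbor distances should stay near their reference values.
        cur_dists = (coords[edges[:, 0]] - coords[edges[:, 1]]).norm(dim=-1)
        loss_spring = F.mse_loss(cur_dists, ref_dists)
        # Temporal term: keypoints should move smoothly from the previous frame.
        loss_temporal = F.mse_loss(coords, prev_coords)
        loss = loss_img + lambda_spring * loss_spring + lambda_temporal * loss_temporal
        loss.backward()
        opt.step()
    return coords.detach()

In this sketch, the relative weights of the spring and temporal terms play the role of the adjustable spatial and temporal priors mentioned in the abstract: a stiffer spring term enforces more rigid collective motion, while a larger temporal weight favors small frame-to-frame displacements.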
Full text: 1 | Collections: 01-international | Database: MEDLINE | Main subject: Computational Biology | Limits: Animals | Language: English | Journal: PLoS Comput Biol | Journal subject: BIOLOGY / MEDICAL INFORMATICS | Year: 2024 | Document type: Article | Country of affiliation: United States
