Results 1 - 19 of 19
1.
Nature ; 615(7953): 712-719, 2023 03.
Article in English | MEDLINE | ID: mdl-36922590

ABSTRACT

Mitochondria are critical to the governance of metabolism and bioenergetics in cancer cells [1]. Mitochondria form highly organized networks, in which their outer and inner membrane structures define their bioenergetic capacity [2,3]. However, in vivo studies delineating the relationship between the structural organization of mitochondrial networks and their bioenergetic activity have been limited. Here we present an in vivo structural and functional analysis of mitochondrial networks and bioenergetic phenotypes in non-small cell lung cancer (NSCLC) using an integrated platform consisting of positron emission tomography imaging, respirometry and three-dimensional scanning block-face electron microscopy. The diverse bioenergetic phenotypes and metabolic dependencies we identified in NSCLC tumours align with the distinct structural organization of the mitochondrial networks present. Further, we discovered that mitochondrial networks are organized into distinct compartments within tumour cells. In tumours with high rates of oxidative phosphorylation (OXPHOS-high) and fatty acid oxidation, we identified peri-droplet mitochondrial networks, wherein mitochondria contact and surround lipid droplets. By contrast, we discovered that in tumours with low rates of OXPHOS (OXPHOS-low), high glucose flux regulated perinuclear localization of mitochondria, structural remodelling of cristae and mitochondrial respiratory capacity. Our findings suggest that in NSCLC, mitochondrial networks are compartmentalized into distinct subpopulations that govern the bioenergetic capacity of tumours.


Subjects
Carcinoma, Non-Small-Cell Lung , Energy Metabolism , Lung Neoplasms , Mitochondria , Humans , Carcinoma, Non-Small-Cell Lung/metabolism , Carcinoma, Non-Small-Cell Lung/pathology , Carcinoma, Non-Small-Cell Lung/ultrastructure , Fatty Acids/metabolism , Glucose/metabolism , Lipid Droplets/metabolism , Lung Neoplasms/metabolism , Lung Neoplasms/pathology , Lung Neoplasms/ultrastructure , Microscopy, Electron , Mitochondria/metabolism , Mitochondria/ultrastructure , Oxidative Phosphorylation , Phenotype , Positron-Emission Tomography
2.
Proc Natl Acad Sci U S A ; 121(18): e2307304121, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38640257

ABSTRACT

Over the past few years, machine learning models have significantly increased in size and complexity, especially in the area of generative AI such as large language models. These models require massive amounts of data and compute capacity to train, to the extent that concerns over the training data (such as protected or private content) cannot be practically addressed by retraining the model "from scratch" with the questionable data removed or altered. Furthermore, despite significant efforts and controls dedicated to ensuring that training corpora are properly curated and composed, the sheer volume required makes it infeasible to manually inspect each datum comprising a training corpus. One potential approach to training corpus data defects is model disgorgement, by which we broadly mean the elimination or reduction of not only any improperly used data, but also the effects of improperly used data on any component of an ML model. Model disgorgement techniques can be used to address a wide range of issues, such as reducing bias or toxicity, increasing fidelity, and ensuring responsible use of intellectual property. In this paper, we survey the landscape of model disgorgement methods and introduce a taxonomy of disgorgement techniques that are applicable to modern ML systems. In particular, we investigate the various meanings of "removing the effects" of data on the trained model in a way that does not require retraining from scratch.


Subjects
Language , Machine Learning
3.
Entropy (Basel) ; 23(7)2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34356463

ABSTRACT

We introduce the Redundant Information Neural Estimator (RINE), a method that allows efficient estimation of the component of information about a target variable that is common to a set of sources, known as the "redundant information". We show that existing definitions of the redundant information can be recast in terms of an optimization over a family of functions. In contrast to previous information decompositions, which can only be evaluated for discrete variables over small alphabets, we show that optimizing over functions enables the approximation of the redundant information for high-dimensional and continuous predictors. We demonstrate this on high-dimensional image classification and motor-neuroscience tasks.
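The function-space view above can be grounded against one classical discrete definition that such estimators aim to scale. Below is a minimal sketch of the Williams-Beer redundancy I_min for small discrete alphabets — an illustrative baseline only, not the RINE estimator; the toy distributions are hypothetical:

```python
import numpy as np

def specific_info(p_y, p_x_given_y, y):
    # Specific information I(Y=y; X) = sum_x p(x|y) log2( p(y|x) / p(y) ),
    # with p(y|x) obtained via Bayes' rule.
    p_x = (p_x_given_y * p_y[:, None]).sum(axis=0)     # marginal p(x)
    mask = p_x_given_y[y] > 0
    p_y_given_x = p_x_given_y[y][mask] * p_y[y] / p_x[mask]
    return float(np.sum(p_x_given_y[y][mask] * np.log2(p_y_given_x / p_y[y])))

def i_min(p_y, sources):
    # Williams-Beer redundancy: expected minimum specific information over
    # sources, I_min = sum_y p(y) * min_i I(Y=y; X_i).
    return float(sum(p_y[y] * min(specific_info(p_y, s, y) for s in sources)
                     for y in range(len(p_y))))

# Toy check: if both sources are perfect copies of a uniform binary Y,
# the redundant information equals H(Y) = 1 bit.
p_y = np.array([0.5, 0.5])
copy = np.eye(2)                     # p(x|y): X = Y deterministically
print(i_min(p_y, [copy, copy]))      # 1.0
```

This is exactly the regime the abstract calls out as limited — small discrete alphabets — which motivates the optimization-over-functions reformulation.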

4.
Entropy (Basel) ; 22(1)2020 Jan 15.
Article in English | MEDLINE | ID: mdl-33285876

ABSTRACT

This paper is a step towards developing a geometric understanding of a popular algorithm for training deep neural networks, stochastic gradient descent (SGD). We build upon a recent result which observed that the noise in SGD while training typical networks is highly non-isotropic. That result motivates a deterministic model in which the trajectories of our dynamical systems are described via geodesics of a family of metrics arising from a certain diffusion matrix, namely, the covariance of the stochastic gradients in SGD. Our model is analogous to models in general relativity: the role of the electromagnetic field in the latter is played in the former by the gradient of the loss function of a deep network.
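The non-isotropy of the SGD noise can be checked directly on a toy model. The sketch below — a hypothetical least-squares example, not the paper's setup — estimates the diffusion matrix as the covariance of per-sample gradients and measures its eigenvalue spread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares model: loss_i(w) = 0.5 * (x_i . w - y_i)^2,
# so the per-sample gradient is g_i = (x_i . w - y_i) * x_i.
n, d = 500, 5
scales = np.array([3.0, 1.0, 1.0, 0.3, 0.1])          # anisotropic input statistics
X = rng.normal(size=(n, d)) * scales
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
residuals = X @ w - y
grads = residuals[:, None] * X                        # per-sample gradients, shape (n, d)

# Diffusion matrix of SGD: the covariance of the stochastic gradients.
D = np.cov(grads, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(D))[::-1]
print("eigenvalue spread:", eigvals[0] / eigvals[-1])  # >> 1 => highly non-isotropic
```

A large ratio between the extreme eigenvalues of D is what "highly non-isotropic" means here; in deep networks the effect is far more pronounced than in this toy.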

5.
Article in English | MEDLINE | ID: mdl-38652615

ABSTRACT

Negative flips are errors introduced in a classification system when a legacy model is updated. Existing methods to reduce the negative flip rate (NFR) either do so at the expense of overall accuracy, by forcing the new model to imitate the old one, or use ensembles, which multiply the inference cost prohibitively. We analyze the role of ensembles in reducing NFR and observe that they remove negative flips that are typically not close to the decision boundary, but often exhibit large deviations in the distance among their logits. Based on this observation, we present a method, called Ensemble Logit Difference Inhibition (ELODI), to train a classification system that achieves paragon performance in both error rate and NFR, at the inference cost of a single model. The method distills a homogeneous ensemble into a single student model, which is used to update the classification system. ELODI also introduces a generalized distillation objective, Logit Difference Inhibition (LDI), which penalizes the logit difference only on a subset of classes with the highest logit values. On multiple image classification benchmarks, model updates with ELODI demonstrate superior accuracy retention and NFR reduction.
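A minimal sketch of the LDI idea follows, with hypothetical logits and a plain top-k squared penalty standing in for the paper's exact objective:

```python
import numpy as np

def ldi_loss(student_logits, ref_logits, k=3):
    """Logit Difference Inhibition (sketch): penalize the squared difference
    between student and reference logits only on the top-k reference classes."""
    top_k = np.argsort(ref_logits)[-k:]        # classes with the highest reference logits
    diff = student_logits[top_k] - ref_logits[top_k]
    return float(np.mean(diff ** 2))

# Reference logits from a homogeneous ensemble: average of the member logits.
members = [np.array([2.0, 0.5, -1.0, 3.0, 0.0]),
           np.array([1.5, 1.0, -0.5, 2.5, 0.2])]
ensemble = np.mean(members, axis=0)

student = np.array([1.9, 0.7, -3.0, 2.9, 0.1])
print(ldi_loss(student, ensemble, k=3))
```

Note that the student's large deviation on a low-logit class (index 2) contributes nothing to the loss — only the classes that dominate the ensemble's prediction are constrained.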

6.
Neural Netw ; 139: 348-357, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33887584

ABSTRACT

We present a stochastic first-order optimization algorithm, named block-cyclic stochastic coordinate descent (BCSC), that adds a cyclic constraint to stochastic block-coordinate descent in the selection of both data and parameters. It uses different subsets of the data to update different subsets of the parameters, thus limiting the detrimental effect of outliers in the training set. Empirical tests in image classification benchmark datasets show that BCSC outperforms state-of-the-art optimization methods in generalization leading to higher accuracy within the same number of update iterations. The improvements are consistent across different architectures and datasets, and can be combined with other training techniques and regularizations.
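The block-cyclic selection can be sketched on a toy least-squares problem. The block counts, learning rate, and plain gradient step below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy least-squares problem: y = X w_true (noiseless, so the optimum is known).
n, d = 120, 6
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

data_blocks = np.array_split(rng.permutation(n), 3)   # cycle over 3 data blocks
param_blocks = np.array_split(np.arange(d), 2)        # cycle over 2 parameter blocks

w = np.zeros(d)
lr = 0.01
for step in range(3000):
    db = data_blocks[step % len(data_blocks)]         # next data subset
    pb = param_blocks[step % len(param_blocks)]       # next parameter subset
    residual = X[db] @ w - y[db]
    grad = X[db][:, pb].T @ residual / len(db)        # gradient w.r.t. chosen params only
    w[pb] -= lr * grad                                # update only that parameter block

print(np.max(np.abs(w - w_true)))                     # small: converges to the optimum
```

Because the block counts are coprime, every data block eventually updates every parameter block, so no parameter subset is tied to (and potentially corrupted by) a single data subset — the intuition behind limiting the effect of outliers.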


Subjects
Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Benchmarking , Classification/methods , Datasets as Topic , Image Processing, Computer-Assisted/standards , Pattern Recognition, Automated/standards , Stochastic Processes
7.
Article in English | MEDLINE | ID: mdl-31880553

ABSTRACT

We present an adaptive regularization scheme for optimizing composite energy functionals arising in image analysis problems. The scheme automatically trades off data fidelity and regularization depending on the current data fit during the iterative optimization, so that regularization is strongest initially and wanes as data fidelity improves, with the weight of the regularizer minimized at convergence. We also introduce a Huber loss function in both the data fidelity and regularization terms, and present an efficient convex optimization algorithm based on the alternating direction method of multipliers (ADMM), using the equivalence between the Huber function and the proximal operator of the one-norm. We illustrate and validate our adaptive Huber-Huber model on synthetic and real images in segmentation, motion estimation, and denoising problems.
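The equivalence used above can be verified numerically: the Huber function is the Moreau envelope of the one-norm, and the minimum is attained at the proximal operator of |·| (soft-thresholding). A short sketch:

```python
import numpy as np

def huber(x, mu):
    # Huber function: quadratic near zero, linear in the tails.
    ax = np.abs(x)
    return np.where(ax <= mu, x**2 / (2 * mu), ax - mu / 2)

def prox_l1(x, mu):
    # Proximal operator of mu*|.| : soft-thresholding.
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

# huber_mu(x) = min_z |z| + (x - z)^2 / (2 mu), minimized at z = prox_l1(x, mu).
mu = 0.5
x = np.linspace(-3, 3, 13)
z = prox_l1(x, mu)
envelope = np.abs(z) + (x - z) ** 2 / (2 * mu)
print(np.max(np.abs(envelope - huber(x, mu))))   # ~0: the two expressions agree
```

This identity is what lets an ADMM solver handle Huber terms with the same cheap soft-thresholding updates used for one-norm terms.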

8.
IEEE Trans Pattern Anal Mach Intell ; 30(3): 518-31, 2008 Mar.
Article in English | MEDLINE | ID: mdl-18195444

ABSTRACT

Defocus can be modeled as a diffusion process and represented mathematically using the heat equation, where image blur corresponds to the diffusion of heat. This analogy can be extended to non-planar scenes by allowing a space-varying diffusion coefficient. The inverse problem of reconstructing 3-D structure from blurred images corresponds to an "inverse diffusion" that is notoriously ill-posed. We show how to bypass this problem by using the notion of relative blur. Given two images, within each neighborhood, the amount of diffusion necessary to transform the sharper image into the blurrier one depends on the depth of the scene. This can be used to devise a global algorithm to estimate the depth profile of the scene without recovering the deblurred image, using only forward diffusion.
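The relative-blur idea can be sketched in one dimension: Gaussian blur variances add under composition, so the extra forward diffusion mapping the sharper observation onto the blurrier one can be found by search, with no ill-posed deblurring. Signal size, blur levels, and the grid search below are illustrative choices:

```python
import numpy as np

def gaussian_blur(signal, sigma):
    # 1-D Gaussian convolution (wrap-around boundary), sigma in samples.
    if sigma == 0:
        return signal.copy()
    radius = int(4 * sigma) + 1
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.concatenate([signal[-radius:], signal, signal[:radius]])
    return np.convolve(padded, kernel, mode="same")[radius:-radius]

rng = np.random.default_rng(2)
sharp = rng.normal(size=256)               # stand-in for the (unknown) sharp scene

sigma1, sigma2 = 1.0, 2.0                  # two defocus levels of the same scene
img1 = gaussian_blur(sharp, sigma1)        # sharper observation
img2 = gaussian_blur(sharp, sigma2)        # blurrier observation

# Relative blur: search for the extra forward diffusion mapping img1 to img2.
candidates = np.linspace(0.5, 3.0, 26)
errors = [np.mean((gaussian_blur(img1, s) - img2) ** 2) for s in candidates]
best = candidates[int(np.argmin(errors))]

# The Gaussian semigroup property predicts sigma_rel = sqrt(sigma2^2 - sigma1^2).
print(best, np.sqrt(sigma2**2 - sigma1**2))
```

In the paper this per-neighborhood relative diffusion is what encodes depth; here a single global blur stands in for the space-varying case.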


Subjects
Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Diffusion , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity
9.
IEEE Trans Pattern Anal Mach Intell ; 40(12): 2897-2905, 2018 12.
Article in English | MEDLINE | ID: mdl-29994167

ABSTRACT

The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.
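A minimal sketch of the multiplicative-noise view: Information Dropout injects log-normal multiplicative noise on the activations (with input-dependent variance in the full method), of which binary dropout is the Bernoulli special case. The parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def information_dropout(activations, alpha, rng):
    """Multiplicative log-normal noise (sketch): scale each activation by
    eps ~ logNormal(0, alpha^2). In the full method alpha depends on the input."""
    eps = np.exp(alpha * rng.normal(size=activations.shape))
    return activations * eps

def binary_dropout(activations, p, rng):
    # Standard (inverted) dropout: Bernoulli multiplicative noise.
    mask = rng.random(activations.shape) > p
    return activations * mask / (1 - p)

a = np.ones(200_000)
noisy = information_dropout(a, alpha=0.3, rng=rng)
# E[logNormal(0, alpha^2)] = exp(alpha^2 / 2), so the mean is scaled by ~1.046.
print(noisy.mean(), np.exp(0.3**2 / 2))
```

Making alpha a learned function of the input is what lets the noise adapt to the data and to the structure of the network, as described above.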

10.
Int J Med Robot ; 14(6): e1949, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30152081

ABSTRACT

BACKGROUND: With the development of laser-assisted platforms, the outcomes of cataract surgery have been improved by automating several procedures. The cataract-extraction step continues to be manually performed, but due to deficiencies in sensing capabilities, surgical complications such as posterior capsule rupture and incomplete cataract removal remain. METHODS: An optical coherence tomography (OCT) system is integrated into our intraocular robotic interventional surgical system (IRISS) robot. The OCT images are used for preoperative planning and intraoperative intervention in a series of automated procedures. Real-time intervention allows surgeons to evaluate the progress and override the operation. RESULTS: The developed system was validated by performing lens extraction on 30 postmortem pig eyes. Complete lens extraction was achieved on 25 eyes, and "almost complete" extraction was achieved on the remainder due to an inability to image small lens particles behind the iris. No capsule rupture was found. CONCLUSION: The IRISS successfully demonstrated semiautomated OCT-guided lens removal with real-time supervision and intervention.


Subjects
Cataract Extraction/instrumentation , Cataract , Tomography, Optical Coherence/instrumentation , Animals , Automation , Cataract Extraction/methods , Equipment Design , Humans , Robotic Surgical Procedures , Software , Swine , Tomography, Optical Coherence/methods
11.
IEEE Trans Pattern Anal Mach Intell ; 29(11): 1958-72, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17848777

ABSTRACT

We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.


Subjects
Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Models, Biological , Models, Statistical , Pattern Recognition, Automated/methods , Subtraction Technique , Biometry/methods , Computer Simulation , Gait , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
12.
IEEE Trans Pattern Anal Mach Intell ; 29(8): 1322-38, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17568138

ABSTRACT

This paper addresses the problem of calibrating camera parameters using variational methods. One problem addressed is the severe lens distortion in low-cost cameras. For many computer vision algorithms aiming at reconstructing reliable representations of 3D scenes, the camera distortion effects will lead to inaccurate 3D reconstructions and geometrical measurements if not accounted for. A second problem is color calibration, caused by variations in camera responses that result in different color measurements and affect the algorithms that depend on these measurements. We also address the extrinsic camera calibration, which estimates the relative poses and orientations of multiple cameras in the system, and the intrinsic camera calibration, which estimates focal lengths and the skew parameters of the cameras. To address these calibration problems, we present multiview stereo techniques based on variational methods that utilize partial and ordinary differential equations. Our approach can also be considered as a coordinated refinement of camera calibration parameters. To reduce the computational complexity of such algorithms, we utilize prior knowledge on the calibration object, making a piecewise-smooth surface assumption, and evolve the pose, orientation, and scale parameters of such a 3D model object without requiring 2D feature extraction from camera views. We derive the evolution equations for the distortion coefficients, the color calibration parameters, and the extrinsic and intrinsic parameters of the cameras, and present experimental results.
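As one concrete piece of the distortion problem, the sketch below uses a standard polynomial radial-distortion model — an assumption for illustration; the paper evolves such coefficients variationally — and inverts it by fixed-point iteration:

```python
import numpy as np

def distort(points, k1, k2):
    # Radial distortion in normalized coordinates: x_d = x * (1 + k1 r^2 + k2 r^4).
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points * (1 + k1 * r2 + k2 * r2**2)

def undistort(points_d, k1, k2, iters=20):
    # Fixed-point iteration: x <- x_d / (1 + k1 r^2 + k2 r^4), with r evaluated
    # at the current undistorted estimate.
    x = points_d.copy()
    for _ in range(iters):
        r2 = np.sum(x**2, axis=1, keepdims=True)
        x = points_d / (1 + k1 * r2 + k2 * r2**2)
    return x

pts = np.array([[0.1, 0.2], [-0.3, 0.4], [0.5, -0.1]])
k1, k2 = -0.2, 0.05                        # mild barrel distortion (illustrative)
recovered = undistort(distort(pts, k1, k2), k1, k2)
print(np.max(np.abs(recovered - pts)))     # ~0: the round trip recovers the points
```

For mild distortion the iteration is a contraction, so a handful of iterations suffices; estimating k1 and k2 themselves is the calibration task the abstract addresses.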

13.
IEEE Trans Pattern Anal Mach Intell ; 28(12): 2006-19, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17108373

ABSTRACT

We propose a model of the joint variation of shape and appearance of portions of an image sequence. The model is conditionally linear, and can be thought of as an extension of active appearance models to exploit the temporal correlation of adjacent image frames. Inference of the model parameters can be performed efficiently using established numerical optimization techniques borrowed from finite-element analysis and system identification techniques.


Subjects
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Models, Statistical , Pattern Recognition, Automated/methods , Computer Simulation , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity
14.
IEEE Trans Pattern Anal Mach Intell ; 28(10): 1602-18, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16986542

ABSTRACT

For shapes represented as closed planar contours, we introduce a class of functionals which are invariant with respect to the Euclidean group and which are obtained by performing integral operations. While such integral invariants enjoy some of the desirable properties of their differential counterparts, such as locality of computation (which allows matching under occlusions) and uniqueness of representation (asymptotically), they do not exhibit the noise sensitivity associated with differential quantities and, therefore, do not require presmoothing of the input shape. Our formulation allows the analysis of shapes at multiple scales. Based on integral invariants, we define a notion of distance between shapes. The proposed distance measure can be computed efficiently and allows warping the shape boundaries onto each other; its computation results in optimal point correspondence as an intermediate step. Numerical results on shape matching demonstrate that this framework can match shapes despite the deformation of subparts, missing parts and noise. As a quantitative analysis, we report matching scores for shape retrieval from a database.
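The flavor of an integral invariant can be sketched with a toy global example: the mean distance from each contour point to all others is obtained by integration, is invariant under rotation and translation, and needs no presmoothing. (The paper's invariants are local, computed over kernels of varying radius; this global version is purely illustrative.)

```python
import numpy as np

def integral_distance_invariant(contour):
    """For each point on a closed contour, the mean Euclidean distance to all
    other contour points -- an integral quantity, invariant to rotation and
    translation, and far less noise-sensitive than differential curvature."""
    diff = contour[:, None, :] - contour[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    n = len(contour)
    return dists.sum(axis=1) / (n - 1)

# Sample an ellipse and a rotated + translated copy of it.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ellipse = np.stack([2 * np.cos(t), np.sin(t)], axis=1)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
moved = ellipse @ R.T + np.array([5.0, -3.0])

sig1 = integral_distance_invariant(ellipse)
sig2 = integral_distance_invariant(moved)
print(np.max(np.abs(sig1 - sig2)))    # ~0: the signature is Euclidean-invariant
```

Local integral invariants trade some of this global stability for locality, which is what permits matching under occlusions as described above.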


Subjects
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity , Subtraction Technique
15.
IEEE Trans Pattern Anal Mach Intell ; 37(1): 151-60, 2015 Jan.
Article in English | MEDLINE | ID: mdl-26353215

ABSTRACT

We present a shape descriptor based on integral kernels. Shape is represented in an implicit form and is characterized by a series of isotropic kernels that provide desirable invariance properties. The shape features are characterized at multiple scales, which together form a signature that is a compact description of shape over a range of scales. The shape signature is designed to be invariant with respect to group transformations which include translation, rotation, scaling, and reflection. In addition, the integral kernels that characterize local shape geometry enable the shape signature to be robust with respect to undesirable perturbations while retaining discriminative power. Use of our shape signature is demonstrated for shape matching on a number of synthetic and real examples.

16.
IEEE Trans Image Process ; 24(6): 1777-90, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25769155

ABSTRACT

Natural images exhibit geometric structures that are informative of the properties of the underlying scene. Modern image processing algorithms respect such characteristics by employing regularizers that capture the statistics of natural images. For instance, total variation (TV) respects the highly kurtotic distribution of the pointwise gradient by allowing for large-magnitude outliers. However, the gradient magnitude alone does not capture the directionality and scale of local structures in natural images. The structure tensor provides a more meaningful description of gradient information, as it describes both the size and orientation of the image gradients in a neighborhood of each point. Based on this observation, we propose a variational model for image reconstruction that employs a regularization functional adapted to the local geometry of the image by means of its structure tensor. Our method alternates two minimization steps: 1) robust estimation of the structure tensor as a semidefinite program and 2) reconstruction of the image with an adaptive regularizer defined from this tensor. This two-step procedure allows us to extend anisotropic diffusion into the convex setting and develop robust, efficient, and easy-to-code algorithms for image denoising, deblurring, and compressed sensing. Our method extends naturally to nonlocal regularization, where it exploits the local self-similarity of natural images to improve nonlocal TV and diffusion operators. Our experiments show a consistent accuracy improvement over classic regularization.
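A minimal sketch of the structure tensor itself: the smoothed outer product of the image gradients, whose eigen-decomposition gives the size and orientation of local structures. The box window below is a stand-in for the usual Gaussian window, and the test image is illustrative:

```python
import numpy as np

def box_smooth(a, k=3):
    # Simple separable box filter (placeholder for a Gaussian window).
    kernel = np.ones(k) / k
    a = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, a)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, a)

def structure_tensor(img, k=3):
    gy, gx = np.gradient(img.astype(float))
    # Smoothed outer product of the gradient at each pixel:
    # J = smooth([[gx*gx, gx*gy], [gx*gy, gy*gy]]).
    return box_smooth(gx * gx, k), box_smooth(gx * gy, k), box_smooth(gy * gy, k)

# Vertical stripes: all gradients are horizontal, so Jxx dominates Jyy.
x = np.arange(32)
img = np.tile(np.sin(0.5 * x), (32, 1))
Jxx, Jxy, Jyy = structure_tensor(img)
print(Jxx.mean(), Jyy.mean())   # gradient energy concentrated in Jxx
```

The eigenvalues of the 2x2 tensor at each pixel measure gradient strength along and across the dominant orientation; the adaptive regularizer described above is built from exactly this per-pixel information.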

17.
IEEE Trans Pattern Anal Mach Intell ; 34(10): 1942-51, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22201065

ABSTRACT

We describe an approach for segmenting a moving image into regions that correspond to surfaces in the scene that are partially surrounded by the medium. It integrates both appearance and motion statistics into a cost functional that is seeded with occluded regions and minimized efficiently by solving a linear programming problem. Where a short observation time is insufficient to determine whether the object is detachable, the results of the minimization can be used to seed a more costly optimization based on a longer sequence of video data. The result is an entirely unsupervised scheme to detect and segment an arbitrary and unknown number of objects. We test our scheme to highlight the potential, as well as limitations, of our approach.

18.
Inf Process Med Imaging ; 22: 424-35, 2011.
Article in English | MEDLINE | ID: mdl-21761675

ABSTRACT

We propose a method to efficiently compute mutual information between high-dimensional distributions of image patches. This in turn is used to perform accurate registration of images captured under different modalities, while exploiting local structure otherwise missed by the traditional definition of mutual information. We achieve this by organizing the space of image patches into orbits under the action of Euclidean transformations of the image plane, and estimating the modes of a distribution in such an orbit space using affinity propagation. This way, large collections of patches that are equivalent up to translations and rotations are mapped to the same representative, or "dictionary element". We then show analytically that computing mutual information for a joint distribution in this space reduces to computing mutual information between the (scalar) label maps, and between the transformations mapping each patch into its closest dictionary element. We show that our approach improves registration performance compared with the state of the art in multimodal registration, using both synthetic and real images with quantitative ground truth.
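The reduction to label maps is what makes the estimation tractable. The sketch below computes discrete mutual information between two label maps from their joint histogram — the scalar computation the reduction arrives at, in a simplified form with hypothetical labels:

```python
import numpy as np

def mutual_information(labels_a, labels_b):
    """Discrete mutual information (in bits) between two label maps,
    computed from their joint histogram."""
    joint = np.zeros((labels_a.max() + 1, labels_b.max() + 1))
    np.add.at(joint, (labels_a.ravel(), labels_b.ravel()), 1)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(4)
a = rng.integers(0, 4, size=(64, 64))      # a hypothetical 4-label map
# MI of a map with itself equals its entropy (~2 bits for near-uniform labels);
# an independent map gives a value near zero.
print(mutual_information(a, a), mutual_information(a, rng.integers(0, 4, size=(64, 64))))
```

In the full method this scalar MI over dictionary labels is combined with the MI of the per-patch alignment transformations.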


Subjects
Algorithms , Brain/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
19.
Comput Med Imaging Graph ; 35(7-8): 653-9, 2011.
Article in English | MEDLINE | ID: mdl-21531538

ABSTRACT

Using the fusion of pre-operative MRI and real-time intra-procedural transrectal ultrasound (TRUS) to guide prostate biopsy has been shown to be a very promising approach, yielding better clinical outcomes than the routinely performed TRUS-only guided biopsy. In several situations during MRI/TRUS fusion-guided biopsy, it is important to know the exact location of the deployed biopsy needle, which is imaged in the TRUS video. In this paper, we present a method to automatically detect and segment the biopsy needle in TRUS. To achieve this goal, we propose to combine information from multiple resources, including ultrasound probe stability, a TRUS video background model, and prior knowledge of needle orientation and position. The proposed algorithm was tested on TRUS video sequences totalling more than 25,000 frames. The needle deployments were successfully detected and segmented with high accuracy and a low false-positive detection rate.
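One ingredient, the TRUS video background model, can be sketched as a running-average model: with a stable probe, the anatomy changes slowly, so a deployed needle shows up as a large residual against the background. The frame sizes, noise levels, and threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def update_background(bg, frame, alpha=0.05):
    # Exponential running average: a slowly adapting background model.
    return (1 - alpha) * bg + alpha * frame

h, w = 48, 64
frames = [rng.normal(100, 2, size=(h, w)) for _ in range(30)]  # speckle-like static scene

bg = frames[0].astype(float)
for f in frames[1:]:
    bg = update_background(bg, f)

# A bright line appears (needle deployment): a strong residual against the model.
needle_frame = frames[-1].copy()
needle_frame[20, 10:50] += 60.0
residual = np.abs(needle_frame - bg)
detected = residual > 30.0
print(detected.sum())        # 40 pixels: the simulated needle track
```

The full method additionally gates such detections with probe-stability cues and the expected needle orientation and position, which is what keeps the false-positive rate low.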


Subjects
Biopsy, Needle , Prostatic Neoplasms/diagnostic imaging , Ultrasonography, Interventional/standards , Algorithms , Humans , Image Enhancement , Male , Rectum/diagnostic imaging