Results 1 - 18 of 18
1.
Philos Trans A Math Phys Eng Sci ; 374(2065): 20150193, 2016 Apr 13.
Article in English | MEDLINE | ID: mdl-26953175

ABSTRACT

A new method is proposed to determine the time-frequency content of time-dependent signals consisting of multiple oscillatory components, with time-varying amplitudes and instantaneous frequencies. Numerical experiments as well as a theoretical analysis are presented to assess its effectiveness.
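
As a rough illustration of the kind of decomposition at stake (not the paper's method), the sketch below builds a two-component signal with time-varying amplitude and instantaneous frequency and extracts a dominant-frequency ridge from a plain short-time Fourier transform; all parameters are invented.

```python
import numpy as np
from scipy.signal import stft

# Two oscillatory components with time-varying amplitude and
# instantaneous frequency: an AM chirp plus a steady tone.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = ((1 + 0.3 * np.sin(0.5 * t)) * np.cos(2 * np.pi * (50 * t + 2 * t ** 2))
     + 0.8 * np.cos(2 * np.pi * 120 * t))

# Plain STFT as the underlying time-frequency representation.
f, tau, Z = stft(x, fs=fs, nperseg=512, noverlap=448)

# Crude ridge estimate: the frequency bin of maximal energy per frame
# tracks the stronger component's instantaneous frequency (50 + 4t Hz).
ridge = f[np.abs(Z).argmax(axis=0)]
print(ridge[:5])
```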

2.
J Electrocardiol ; 48(1): 21-8, 2015.
Article in English | MEDLINE | ID: mdl-25464986

ABSTRACT

In this report, we provide a method for automated detection of the J wave, defined as a notch or slur in the descending slope of the terminal positive wave of the QRS complex, using signal processing and functional data analysis techniques. Two sets of ECG tracings were selected from the EPICARE ECG core laboratory, Wake Forest School of Medicine, Winston Salem, NC. The first was a training set of 100 ECGs, of which 50 had a J wave and 50 did not. The second was a test set (n=116 ECGs) in which the J-wave status (present/absent) was known only by the ECG Center staff. All ECGs were recorded using a GE MAC 1200 (GE Marquette, Milwaukee, Wisconsin) at 10 mm/mV calibration, a paper speed of 25 mm/s, and a 500 Hz sampling rate. All ECGs were initially inspected visually for technical errors and inadequate quality, then automatically processed with the GE Marquette 12-SL program, 2001 version (GE Marquette, Milwaukee, WI). We excluded ECG tracings with major abnormalities or rhythm disorders. The presence or absence of a J wave was confirmed visually by the ECG Center staff and verified once again by three of the coauthors; there was no disagreement in the identification of J-wave status. The signal processing and functional data analysis of the ECGs was conducted at Duke University and the University of Toronto. In the training set, the automated detection had a sensitivity of 100% and a specificity of 94%. For the test set, sensitivity was 89% and specificity was 86%. In conclusion, the test results show good J-wave detection accuracy, suggesting the possible utility of this approach for defining and detecting other complex ECG waveforms.
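
The abstract does not spell out the detection algorithm; as a toy illustration of how functional-data tools (spline smoothing plus derivatives) can flag a notch in a descending slope, consider this sketch. The segment, smoothing value, and threshold are hypothetical, not the authors' settings.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def has_notch(segment, fs=500, smooth=1e-4):
    """Flag a notch on a descending ECG segment: the smoothed derivative
    of a clean descent stays negative; a notch makes it cross zero."""
    t = np.arange(len(segment)) / fs
    spline = UnivariateSpline(t, segment, s=smooth)
    deriv = spline.derivative()(t)
    return np.any(np.diff(np.sign(deriv)) != 0)

# Toy descending slope with a small superimposed bump (a "notch").
fs = 500
t = np.arange(0, 0.08, 1 / fs)
descent = 1.0 - 10 * t + 0.05 * np.exp(-((t - 0.04) ** 2) / 1e-5)
print(has_notch(descent, fs))  # True: the bump interrupts the descent
```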


Subject(s)
Algorithms , Computer-Assisted Diagnosis/methods , Electrocardiography/methods , Heart Rate/physiology , Automated Pattern Recognition/methods , Computer-Assisted Signal Processing , Humans , Reproducibility of Results , Sensitivity and Specificity
3.
Proc Natl Acad Sci U S A ; 108(45): 18221-6, 2011 Nov 08.
Article in English | MEDLINE | ID: mdl-22025685

ABSTRACT

We describe approaches for distances between pairs of two-dimensional surfaces (embedded in three-dimensional space) that use local structures and global information contained in interstructure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This approach is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This necessity renders these studies inaccessible to nonmorphologists and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences, our approach does not require any preliminary marking of special features or landmarks by the user. It also differs from other seminal work in computational geometry in that our algorithms are polynomial in nature and thus faster, making pairwise comparisons feasible for significantly larger numbers of digitized surfaces. We illustrate our approach using three datasets representing teeth and different bones of primates and humans, and show that it leads to highly accurate results.
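
For context, the landmark-based analyses this work automates rest on Procrustes-type distances. Below is a minimal numpy sketch of the classical landmark Procrustes distance, not the paper's continuous generalization; the landmark count and data are arbitrary.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Classical Procrustes distance between two landmark sets (n x 3):
    residual after removing translation, scale, and orthogonal alignment."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    U, s, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                       # optimal orthogonal alignment of Y to X
    return np.linalg.norm(Xc - Yc @ R)

rng = np.random.default_rng(0)
A = rng.normal(size=(27, 3))                 # e.g., 27 landmarks per bone
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0] # random orthogonal map
print(procrustes_distance(A, A @ Q + 0.5))   # ~0: same shape up to similarity
```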


Asunto(s)
Algoritmos , Modelos Anatómicos
4.
IEEE Trans Image Process ; 32: 2931-2946, 2023.
Article in English | MEDLINE | ID: mdl-37200124

ABSTRACT

X-radiography (X-ray imaging) is a widely used imaging technique in art investigation. It can provide information about the condition of a painting as well as insights into an artist's techniques and working methods, often revealing hidden information invisible to the naked eye. X-radiography of double-sided paintings results in a mixed X-ray image, and this paper deals with the problem of separating this mixed image. Using the visible color images (RGB images) from each side of the painting, we propose a new neural network architecture, based on 'connected' auto-encoders, designed to separate the mixed X-ray image into two simulated X-ray images corresponding to each side. In this architecture, the encoders are based on convolutional learned iterative shrinkage thresholding algorithms (CLISTA) designed using algorithm unrolling techniques, whereas the decoders consist of simple linear convolutional layers; the encoders extract sparse codes from the visible images of the front and rear paintings and the mixed X-ray image, whereas the decoders reproduce both the original RGB images and the mixed X-ray image. The learning algorithm operates in a totally self-supervised fashion, without requiring a sample set that contains both the mixed X-ray images and the separated ones. The methodology was tested on images from the double-sided wing panels of the Ghent Altarpiece, painted in 1432 by the brothers Hubert and Jan van Eyck. These tests show that the proposed approach outperforms other state-of-the-art X-ray image separation methods for art investigation applications.
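
A minimal sketch of the unrolled-ISTA idea behind CLISTA-style encoders: each network layer is one iterative shrinkage step. Here the matrices are fixed and dense rather than learned and convolutional as in the paper; dictionary, sparsity levels, and thresholds are made up.

```python
import numpy as np

def soft_threshold(v, theta):
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unrolled_ista(x, D, n_layers=200, lam=0.01):
    """Sparse encoder from unrolling ISTA: each 'layer' is one gradient
    step on ||x - D z||^2 followed by soft-thresholding. In (C)LISTA the
    step matrices and thresholds become learned (convolutional) weights."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of D^T D
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
    return z

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128)) / np.sqrt(64)  # random dictionary
z_true = np.zeros(128)
z_true[[3, 40, 99]] = [1.0, -0.7, 0.5]
z_hat = unrolled_ista(D @ z_true, D)
print(np.where(np.abs(z_hat) > 0.05)[0])      # approximately {3, 40, 99}
```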

5.
Proc Natl Acad Sci U S A ; 106(30): 12267-72, 2009 Jul 28.
Article in English | MEDLINE | ID: mdl-19617537

ABSTRACT

We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only a few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolios as a special case but also allows a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by the Sharpe ratio, is consistently and significantly better than that of the naïve, evenly weighted portfolio.
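
A compact sketch of the penalized least-squares formulation, written with cvxpy (my choice of solver, not the authors'); the return matrix, target return, and penalty weight are synthetic stand-ins for the Fama-French data.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
R = rng.normal(0.0005, 0.01, size=(500, 20))  # synthetic daily returns
rho = 0.0005                                  # target portfolio return

w = cp.Variable(20)
# Mean-variance as constrained least squares, plus an L1 penalty on the
# weights that stabilizes the problem and drives most positions to zero.
objective = cp.Minimize(cp.sum_squares(R @ w - rho) + 0.5 * cp.norm1(w))
constraints = [cp.sum(w) == 1, R.mean(axis=0) @ w >= rho]
cp.Problem(objective, constraints).solve()
print(np.round(w.value, 3))                   # sparse weight vector
```

Raising the penalty weight trades a few more zeroed positions against tracking error, which is the stabilization/transaction-cost trade-off the abstract describes.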


Asunto(s)
Algoritmos , Simulación por Computador , Industrias/normas , Modelos Teóricos , Reproducibilidad de los Resultados
6.
IEEE Trans Image Process ; 31: 4458-4473, 2022.
Article in English | MEDLINE | ID: mdl-35763481

ABSTRACT

In this paper, we focus on X-ray images (X-radiographs) of paintings with concealed sub-surface designs (e.g., deriving from reuse of the painting support or revision of a composition by the artist), which therefore include contributions from both the surface painting and the concealed features. In particular, we propose a self-supervised deep learning-based image separation approach that can be applied to the X-ray images from such paintings to separate them into two hypothetical X-ray images. One of these reconstructed images is related to the X-ray image of the concealed painting, while the second contains only information related to the X-ray image of the visible painting. The proposed separation network consists of two components: the analysis and the synthesis sub-networks. The analysis sub-network is based on learned coupled iterative shrinkage thresholding algorithms (LCISTA) designed using algorithm unrolling techniques, and the synthesis sub-network consists of several linear mappings. The learning algorithm operates in a totally self-supervised fashion, without requiring a sample set that contains both the mixed X-ray images and the separated ones. The proposed method is demonstrated on a real painting with concealed content, Doña Isabel de Porcel by Francisco de Goya, to show its effectiveness.
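
A much-simplified PyTorch sketch of the self-supervised training signal: reconstruct each side's visible image and require the two hypothetical X-rays to sum to the observed mixture. Plain convolutions stand in for the unrolled LCISTA analysis blocks and linear synthesis mappings, and all sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder encoders/decoders; the paper uses unrolled LCISTA analysis
# blocks and linear synthesis mappings instead of these single convs.
enc1 = nn.Conv2d(3, 16, 5, padding=2)
enc2 = nn.Conv2d(3, 16, 5, padding=2)
dec_rgb1, dec_rgb2 = nn.Conv2d(16, 3, 5, padding=2), nn.Conv2d(16, 3, 5, padding=2)
dec_x1, dec_x2 = nn.Conv2d(16, 1, 5, padding=2), nn.Conv2d(16, 1, 5, padding=2)

rgb1, rgb2 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
x_mix = torch.rand(1, 1, 64, 64)          # observed mixed X-ray

z1, z2 = enc1(rgb1), enc2(rgb2)
x1_hat, x2_hat = dec_x1(z1), dec_x2(z2)   # hypothetical per-side X-rays
loss = (F.mse_loss(dec_rgb1(z1), rgb1)
        + F.mse_loss(dec_rgb2(z2), rgb2)
        + F.mse_loss(x1_hat + x2_hat, x_mix))  # mixture consistency
loss.backward()  # no separated ground-truth X-rays are ever needed
```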

7.
Philos Trans A Math Phys Eng Sci ; 374(2065): 20150207, 2016 Apr 13.
Article in English | MEDLINE | ID: mdl-26953179
8.
Am J Phys Anthropol ; 145(2): 247-61, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21469070

ABSTRACT

Inferred dietary preference is a major component of the paleoecologies of extinct primates. Molar occlusal shape correlates with diet in living mammals, so teeth are a potentially useful structure from which to reconstruct diet in extinct taxa. We assess the efficacy of Dirichlet normal energy (DNE), calculated for molar tooth surfaces, in reflecting diet. We evaluate DNE, which uses changes in normal vectors to characterize curvature, by directly comparing this metric to metrics previously used in dietary inference. We also test whether combining methods improves diet reconstructions. The study sample consisted of 146 lower (mandibular) second molars belonging to 24 euarchontan taxa. Five shape quantification metrics were calculated on each molar: DNE, shearing quotient, shearing ratio, relief index, and orientation patch count rotated (OPCR). Statistical analyses were completed for each variable to assess the effects of taxon and diet. Discriminant function analysis was used to assess the ability of combinations of variables to predict diet. Values differ significantly by diet for all variables, although shearing ratios and OPCR do not distinguish statistically between insectivores and folivores or between omnivores and frugivores. Combined analyses were much more effective at predicting diet than any metric alone. Alone, relief index and DNE were the most effective at predicting diet. OPCR was the least effective alone but is still valuable as the only quantitative measure of surface complexity. Of all methods considered, DNE was the least methodologically sensitive, and its effectiveness suggests it will be a valuable tool for dietary reconstruction.
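
A rough numpy sketch of a discrete stand-in for Dirichlet normal energy: the variation of unit normals within each face, area-weighted. This is a crude proxy, not the published DNE implementation; the flat-patch example only checks that zero curvature gives zero energy.

```python
import numpy as np

def dne_proxy(vertices, faces):
    """Crude proxy for Dirichlet normal energy: squared variation of
    unit vertex normals within each triangle, summed with area weights."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    fn = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(fn, axis=1)
    fn = fn / np.linalg.norm(fn, axis=1, keepdims=True)
    vn = np.zeros_like(vertices)            # area-weighted vertex normals
    for i in range(3):
        np.add.at(vn, faces[:, i], fn * area[:, None])
    vn = vn / np.linalg.norm(vn, axis=1, keepdims=True)
    n0, n1, n2 = (vn[faces[:, i]] for i in range(3))
    var = ((n0 - n1) ** 2 + (n1 - n2) ** 2 + (n2 - n0) ** 2).sum(axis=1)
    return float((var * area).sum())

V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
F = np.array([[0, 1, 2], [0, 2, 3]])
print(dne_proxy(V, F))   # 0.0: a flat patch has no normal variation
```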


Subject(s)
Diet , Molar/anatomy & histology , Molar/pathology , Tooth Crown/anatomy & histology , Tooth Crown/pathology , Tooth Wear/pathology , Analysis of Variance , Animals , Computer-Assisted Image Processing , Molar/diagnostic imaging , Strepsirhini/anatomy & histology , Tooth Crown/diagnostic imaging , Tooth Wear/diagnostic imaging , Tupaia/anatomy & histology , X-Ray Microtomography
9.
BMC Ecol Evol ; 21(1): 60, 2021 04 21.
Article in English | MEDLINE | ID: mdl-33882818

ABSTRACT

BACKGROUND: Lemurs once rivalled the diversity of the rest of the primate order despite their confinement to the island of Madagascar. We test the adaptive radiation model of Malagasy lemur diversity using a novel combination of phylogenetic comparative methods and geometric methods for quantifying tooth shape. RESULTS: We apply macroevolutionary model-fitting approaches and disparity-through-time analysis to dental topography metrics associated with dietary adaptation, an aspect of mammalian ecology that appears to be closely related to diversification in many clades. The metrics were also reconstructed at internal nodes of the lemur tree, and these reconstructions were combined to generate dietary classification probabilities at internal nodes using discriminant function analysis. We used these reconstructions to calculate rates of transition toward folivory per million-year interval. Finally, lower second molar shape was reconstructed at internal nodes by modelling the change in shape of 3D meshes using squared-change parsimony along the branches of the lemur tree. Our analyses of dental topography metrics do not recover an early burst in rates of change or a pattern of early partitioning of subclade disparity. However, rates of change in adaptations for folivory were highest during the Oligocene, an interval of possible forest expansion on the island. CONCLUSIONS: There was no clear phylogenetic signal of bursts of morphological evolution early in lemur history. Reconstructions of the molar morphologies at the ancestral nodes of the lemur tree suggest, however, that the Oligocene peak may have been driven by a shift toward defended plant resources. This suggests a response to the ecological opportunity offered by expanding forests, but not necessarily a classic adaptive radiation initiated by dispersal to Madagascar.
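
One concrete piece of the pipeline, sketched on a toy tree: squared-change parsimony picks internal-node values minimizing the total squared change along branches. The topology, unit branch lengths, and tip values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy tree ((A,B),(C,D)) with unit branch lengths; tip values are
# hypothetical dental-topography scores. Squared-change parsimony picks
# the internal-node values minimizing total squared change on branches.
tips = {"A": 0.12, "B": 0.18, "C": 0.40, "D": 0.55}

def cost(v):
    left, right, root = v
    return ((tips["A"] - left) ** 2 + (tips["B"] - left) ** 2
            + (tips["C"] - right) ** 2 + (tips["D"] - right) ** 2
            + (left - root) ** 2 + (right - root) ** 2)

res = minimize(cost, x0=np.zeros(3))
print(res.x)   # ancestral values for (A,B), (C,D), and the root
```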


Subject(s)
Lemur , Strepsirhini , Animals , Diet , Madagascar , Phylogeny
10.
Anat Rec (Hoboken) ; 301(4): 636-658, 2018 04.
Article in English | MEDLINE | ID: mdl-29024541

ABSTRACT

Automated geometric morphometric methods are promising tools for shape analysis in comparative biology, improving researchers' ability to quantify variation extensively (by permitting more specimens to be analyzed) and intensively (by characterizing shapes with greater fidelity). Although use of these methods has increased, published automated methods have some notable limitations: pairwise correspondences are frequently inaccurate, and pairwise mappings are not globally consistent (i.e., they lack transitivity across the full sample). Here, we reassess the accuracy of published automated methods-cPDist (Boyer et al.: Proc Natl Acad Sci 108 (2011): 18221-18226) and auto3Dgm (Boyer et al.: Anat Rec 298 (2015): 249-276)-and evaluate several modifications to these methods. We show that a substantial percentage of alignments and pairwise maps between specimens of dissimilar geometries were inaccurate in the study of Boyer et al. (2011), despite a taxonomically partitioned variance structure of continuous Procrustes distances. We show these inaccuracies are remedied using a globally informed methodology within a collection of shapes, rather than relying on pairwise comparisons (cf. Boyer et al. 2015). Unfortunately, while global information generally enhances maps between dissimilar objects, it can degrade the quality of correspondences between similar objects due to the accumulation of numerical error. We explore a number of approaches to mitigate this degradation, quantify their performance, and compare the generated pairwise maps (and the shape space characterized by these maps) to a "ground truth" obtained from landmarks manually collected by geometric morphometricians. The novel methods both improve the quality of the pairwise correspondences relative to cPDist and achieve a taxonomic distinctiveness comparable to auto3Dgm. Anat Rec, 301:636-658, 2018. © 2017 Wiley Periodicals, Inc.
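
The transitivity idea can be made concrete with permutations: composing correspondence maps through an intermediate specimen yields a globally consistent map, which is the kernel of the globally informed approach. A toy sketch with exact correspondences (real meshes only approximate this):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))           # toy specimen
pXY = rng.permutation(50); Y = X[pXY]  # Y: same points, shuffled order
pYZ = rng.permutation(50); Z = Y[pYZ]

# Correspondence maps m[i] = index in the target matching point i.
mXY = np.empty(50, int); mXY[pXY] = np.arange(50)
mYZ = np.empty(50, int); mYZ[pYZ] = np.arange(50)

# Globally consistent map X->Z by composing through Y (transitivity),
# rather than re-matching X to Z directly.
mXZ = mYZ[mXY]
assert np.allclose(X, Z[mXZ])          # composed map is exact here
```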


Subject(s)
Three-Dimensional Imaging/methods , Animals , Datasets as Topic
11.
IEEE Trans Image Process ; 26(1): 160-171, 2017 Jan.
Article in English | MEDLINE | ID: mdl-28113181

ABSTRACT

We address the removal of canvas artifacts from high-resolution digital photographs and X-ray images of paintings on canvas. Both imaging modalities are common investigative tools in art history and art conservation. Canvas artifacts manifest themselves very differently according to the acquisition modality; they can hamper the visual reading of the painting by art experts, for instance in preparing a restoration campaign. Computer-aided canvas removal is desirable for restorers when the painting on canvas they are preparing to restore has acquired, over the years, a much more salient canvas texture. We propose a new algorithm that combines a cartoon-texture decomposition method with adaptive multiscale thresholding in the frequency domain to isolate and suppress the canvas components. To illustrate the strength of the proposed method, we provide various examples, for acquisitions in both imaging modalities, of paintings with different types of canvas and from different periods. The proposed algorithm outperforms previous methods proposed for visual photographs, such as morphological component analysis and Wiener filtering, and it also works for the digital removal of canvas artifacts in X-ray images.
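
A bare-bones sketch of frequency-domain canvas suppression, not the authors' cartoon-texture algorithm: attenuate isolated high-frequency spectral peaks from the quasi-periodic weave while leaving the low-frequency painting content alone. All thresholds are arbitrary.

```python
import numpy as np

def suppress_canvas(img, q=99.5, guard=16):
    """Damp isolated high-frequency spectral peaks (the quasi-periodic
    weave) while leaving low frequencies (the painting) untouched."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0]) * img.shape[0]
    fx = np.fft.fftfreq(img.shape[1]) * img.shape[1]
    dist = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    peaks = (np.abs(F) > np.percentile(np.abs(F), q)) & (dist > guard)
    F[peaks] *= 0.1                       # attenuate weave harmonics
    return np.real(np.fft.ifft2(F))

# Toy image: smooth "painting" plus a periodic weave texture.
y, x = np.mgrid[0:256, 0:256]
painting = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 5000.0)
weave = 0.2 * (np.sin(2 * np.pi * x / 8) + np.sin(2 * np.pi * y / 8))
cleaned = suppress_canvas(painting + weave)
print(np.abs(cleaned - painting).mean())  # far below the 0.2 weave amplitude
```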

12.
IEEE Trans Image Process ; 26(2): 751-764, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27831873

ABSTRACT

In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from a double-sided painting. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken of the front and back sides of the panel to drive the separation process. The crux of our approach relies on coupling the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations; the common component captures features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored both to a single-scale and a multi-scale framework, with the latter leading to a significant performance improvement. Moreover, to further improve the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data - taken from digital acquisitions of the Ghent Altarpiece (1432) - confirms the superiority of our method over the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.
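
The common/innovation decomposition can be written down in a few lines. This generative sketch (random dictionaries, hand-picked sparse codes, all hypothetical) shows the structure the coupled dictionary learning is designed to recover:

```python
import numpy as np

rng = np.random.default_rng(3)
# Common + innovation model behind the coupled dictionaries: the two
# modalities share a sparse code zc (features visible in both), and each
# keeps a modality-specific sparse innovation code.
Dc, D1, D2 = (rng.normal(size=(64, 32)) for _ in range(3))
zc = np.zeros(32); zc[[1, 5]] = 1.0       # shared content
z1 = np.zeros(32); z1[10] = 0.8           # photograph-only detail
z2 = np.zeros(32); z2[20] = -0.6          # X-ray-only detail
photo = Dc @ zc + D1 @ z1
xray = Dc @ zc + D2 @ z2
# Separation then amounts to estimating (zc, z1, z2) from the observed
# pair by sparse coding, with the mixed X-ray constraining both sides.
```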

13.
Methods Inf Med ; 55(5): 463-472, 2016 Oct 17.
Article in English | MEDLINE | ID: mdl-27626806

ABSTRACT

BACKGROUND: With recent advances in sensor and computer technologies, the ability to monitor peripheral pulse activity is no longer limited to the laboratory and clinic. Inexpensive sensors, which interface with smartphones or other computer-based devices, are now expanding into the consumer market. When appropriate algorithms are applied, these new technologies enable ambulatory monitoring of dynamic physiological responses outside the clinic in a variety of applications, including monitoring fatigue, health, workload, fitness, and rehabilitation. Several of these applications rely upon measures derived from peripheral pulse waves measured via contact or non-contact photoplethysmography (PPG). As technologies move from contact to non-contact PPG, there are new challenges. The technology necessary to estimate average heart rate over a few seconds from non-contact PPG is available. However, a technology to precisely measure instantaneous heart rate (IHR) from non-contact sensors, on a beat-to-beat basis, is more challenging. OBJECTIVES: The objective of this paper is to develop an algorithm with the ability to accurately monitor IHR from peripheral pulse waves, which provides an opportunity to measure the neural regulation of the heart from the beat-to-beat heart rate pattern (i.e., heart rate variability). METHODS: The adaptive harmonic model is applied to model the contact or non-contact PPG signals, and a new methodology, the synchrosqueezing transform (SST), is applied to extract IHR. The body sway rhythm inherent in the non-contact PPG signal is modeled and handled by the notion of a wave-shape function. RESULTS: The SST optimizes the extraction of IHR from the PPG signals, and the technique functions well even during periods of poor signal-to-noise ratio. We contrast the contact and non-contact indices of PPG-derived heart rate with a criterion electrocardiogram (ECG). ECG and PPG signals were monitored in 21 healthy subjects performing tasks with different physical demands. The root mean square error of IHR estimated by SST is significantly better than that of commonly applied methods such as the autoregressive (AR) method. In the walking situation, where the AR method fails, SST still provides a reasonably good result. CONCLUSIONS: The SST-processed PPG data provided an accurate estimate of the ECG-derived IHR and consistently performed better than commonly applied methods such as the autoregressive method.
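
For contrast with SST, here is the naive beat-to-beat IHR that peak picking gives on a clean synthetic PPG; the abstract's point is that SST keeps working where this baseline breaks down under noise and motion. Sampling rate and waveform are invented.

```python
import numpy as np
from scipy.signal import find_peaks

def instantaneous_heart_rate(ppg, fs):
    """Baseline beat-to-beat IHR from pulse-peak intervals. This naive
    peak picker is the comparison point, not the SST method."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # >=0.4 s between beats
    rr = np.diff(peaks) / fs                            # beat-to-beat intervals (s)
    return 60.0 / rr                                    # IHR in beats per minute

fs = 100
t = np.arange(0, 30, 1 / fs)
hr = 1.1 + 0.1 * np.sin(2 * np.pi * 0.1 * t)   # slowly varying rate (Hz)
ppg = np.cos(2 * np.pi * np.cumsum(hr) / fs)   # synthetic pulse wave
print(instantaneous_heart_rate(ppg, fs)[:5])   # ~60-72 bpm
```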


Subject(s)
Algorithms , Heart Rate/physiology , Pulse , Computer-Assisted Signal Processing , Wavelet Analysis , Humans , Theoretical Models , Computer-Assisted Numerical Analysis , Photoplethysmography
14.
Am J Cardiol ; 118(6): 811-815, 2016 09 15.
Article in English | MEDLINE | ID: mdl-27596326

ABSTRACT

The association between the J wave, a key component of the early repolarization pattern, and adverse cardiovascular outcomes remains unclear. Inconsistencies have stemmed from the different methods used to measure the J wave. We examined the association between the J wave, detected by an automated method, and adverse cardiovascular outcomes in 14,592 participants (mean age 54 ± 5.8 years; 56% women; 26% black) from the Atherosclerosis Risk In Communities (ARIC) study. The J wave was detected at baseline (1987 to 1989) and during follow-up study visits (1990 to 1992, 1993 to 1995, and 1996 to 1998) using a fully automated method. Sudden cardiac death, coronary heart disease death, and cardiovascular mortality were ascertained from hospital discharge records, death certificates, and autopsy data through December 31, 2010. A total of 278 participants (1.9%) had evidence of a J wave. Over a median follow-up of 22 years, 4,376 of the participants (30%) died. In a multivariable Cox regression analysis adjusted for demographics, cardiovascular risk factors, and potential confounders, the J wave was not associated with an increased risk of sudden cardiac death (hazard ratio [HR] 0.74, 95% CI 0.36 to 1.50), coronary heart disease death (HR 0.72, 95% CI 0.40 to 1.32), or cardiovascular mortality (HR 1.16, 95% CI 0.87 to 1.56). An interaction with gender was detected for cardiovascular mortality, with men (HR 1.54, 95% CI 1.09 to 2.19) showing a stronger association than women (HR 0.74, 95% CI 0.43 to 1.25; P-interaction = 0.030). In conclusion, our findings suggest that the J wave is a benign entity that is not associated with an increased risk of sudden cardiac arrest in middle-aged adults in the United States.
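
The reported HRs come from Cox proportional hazards models. A sketch with the lifelines package on simulated data (the ARIC data are not public here; the prevalence and age distribution mimic the abstract, everything else is invented):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({
    "j_wave": rng.binomial(1, 0.019, n),   # ~1.9% prevalence, as in ARIC
    "age": rng.normal(54, 5.8, n),
    "male": rng.binomial(1, 0.44, n),
})
# Simulated hazard depends on age and sex only, so the true j_wave HR is 1.
hazard = 0.01 * np.exp(0.03 * (df.age - 54) + 0.3 * df.male)
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df.time < 22).astype(int)   # administrative censoring at 22 y
df.loc[df.time > 22, "time"] = 22

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)   # j_wave HR ~1 with a wide CI, echoing the paper
```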


Subject(s)
Brugada Syndrome/epidemiology , Cardiovascular Diseases/mortality , Coronary Disease/mortality , Sudden Cardiac Death/epidemiology , Electrocardiography , Black or African American , Cardiac Conduction System Disease , Cardiovascular Diseases/epidemiology , Cohort Studies , Coronary Disease/epidemiology , Female , Follow-Up Studies , Humans , Male , Middle Aged , Multivariate Analysis , Proportional Hazards Models , Prospective Studies , Risk Factors , Sex Factors , United States/epidemiology , White Population
15.
IEEE Trans Pattern Anal Mach Intell ; 37(2): 346-58, 2015 Feb.
Article in English | MEDLINE | ID: mdl-26353246

ABSTRACT

Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.
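
A tiny simulation of the beta-Bernoulli mechanism that lets the data determine how many dictionary elements are used (finite approximation to the beta process; hyperparameters and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
# Each of K candidate dictionary elements gets an inclusion probability
# pi_k ~ Beta(a/K, b); elements whose pi_k stays near zero are never
# selected, so the data effectively choose the dictionary size.
K, N, a, b = 256, 1000, 5.0, 1.0
pi = rng.beta(a / K, b, size=K)          # sparse inclusion probabilities
Z = rng.binomial(1, pi, size=(N, K))     # which patches use which element
print((Z.sum(axis=0) > 0).sum(), "of", K, "elements actually used")
```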


Subject(s)
Computer-Assisted Image Processing/methods , Algorithms , Bayes Theorem , Humans , Nonparametric Statistics
16.
Anat Rec (Hoboken) ; 298(1): 249-76, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25529243

ABSTRACT

Three-dimensional geometric morphometric (3DGM) methods for placing landmarks on digitized bones have become increasingly sophisticated in the last 20 years, including greater degrees of automation. One aspect shared by all 3DGM methods is that the researcher must designate initial landmarks. Thus, researcher interpretations of homology and correspondence are required for, and influence, representations of shape. We present an algorithm allowing fully automatic placement of correspondence points on samples of 3D digital models representing bones of different individuals/species, which can then be input into standard 3DGM software and analyzed with dimension reduction techniques. We test this algorithm against several samples, primarily a dataset of 106 primate calcanei represented by 1,024 correspondence points per bone. Results of our automated analysis of these samples are compared to a published study using a traditional 3DGM approach with 27 landmarks on each bone. Data were analyzed with morphologika 2.5 and PAST. Our analyses returned strong correlations between principal component scores, similar variance partitioning among components, and similarities between the shape spaces generated by the automatic and traditional methods. While cluster analyses of both automatically generated and traditional datasets produced broadly similar patterns, there were also differences. Overall, these results suggest that automatic quantifications can lead to shape spaces that are as meaningful as those based on observer landmarks, thereby presenting the potential to save time in data collection, increase the completeness of morphological quantification, eliminate observer error, and allow comparisons of shape diversity between different types of bones. We provide an R package for implementing this analysis.
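
Downstream of automatic correspondence, the shape-space step is ordinary PCA of the stacked correspondence coordinates. A sketch matching the paper's 106-specimen, 1,024-point dimensions, on toy data rather than real calcanei:

```python
import numpy as np

rng = np.random.default_rng(7)
base = rng.normal(size=(1024, 3))                        # template points
shapes = base + 0.05 * rng.normal(size=(106, 1024, 3))   # toy aligned sample

# Flatten each specimen's points, center, and take PCA via the SVD.
Xf = shapes.reshape(106, -1)
Xf = Xf - Xf.mean(axis=0)
U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
scores = U * s                        # principal component scores per bone
explained = s ** 2 / (s ** 2).sum()
print(scores.shape, explained[:3])    # PC scores and variance fractions
```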


Subject(s)
Algorithms , Comparative Anatomy/methods , Automation/methods , Calcaneus/anatomy & histology , Mathematics/methods , Animals , Humans , Three-Dimensional Imaging , Biological Models , Phylogeny , Principal Component Analysis , Software
17.
IEEE Trans Biomed Eng ; 61(3): 736-44, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24235294

ABSTRACT

Oscillatory phenomena abound in many types of signals. Identifying the individual oscillatory components that constitute an observed biological signal leads to profound understanding of the biological system. The instantaneous frequency (IF), the amplitude modulation (AM), and their temporal variability are widely used to describe these oscillatory phenomena. In addition, the shape of the oscillatory pattern, repeated in time for an oscillatory component, is also an important characteristic that can be parametrized appropriately. These parameters can be viewed as phenomenological surrogates for the hidden dynamics of the biological system. To jointly estimate the IF, AM, and shape, this paper applies a novel and robust time-frequency analysis tool, referred to as the synchrosqueezing transform (SST). The usefulness of the model and SST is shown directly in predicting the clinical outcome of ventilator weaning. Compared with traditional respiration parameters, the breath-to-breath variability has been reported to be a better predictor of the outcome of the weaning procedure. So far, however, all these indices normally require at least 20 minutes of data acquisition to ensure predictive power. Moreover, the robustness of these indices to the inevitable noise is rarely discussed. We find that, based on the proposed model, SST, and only 3 minutes of respiration data, the area under the ROC curve for prediction accuracy is 0.76. The high predictive power achieved in the weaning problem despite a shorter evaluation period, together with the stability to noise, suggests that other similar kinds of signals may likewise benefit from the proposed model and SST.
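
The figure of merit is the area under the ROC curve. A sketch of that evaluation step with scikit-learn on simulated index values (the effect size below is chosen to land near the reported 0.76; the data are not from the study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
outcome = rng.binomial(1, 0.5, 200)        # 1 = successful wean (simulated)
# A variability index that is, on average, higher for successful weans.
index = rng.normal(loc=outcome.astype(float), scale=1.0)
print(roc_auc_score(outcome, index))       # ~0.76 for this effect size
```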


Subject(s)
Biological Models , Computer-Assisted Signal Processing , Ventilator Weaning/methods , Electrocardiography , Heart Rate/physiology , Humans , ROC Curve , Reproducibility of Results , Respiration
18.
Nat Commun ; 1: 146, 2010.
Article in English | MEDLINE | ID: mdl-21266996

ABSTRACT

Whether all the infectious herpesvirus particles entering a cell are able to replicate and/or express their genomes is not known. Here, we developed a general method to determine the number of viral genomes expressed in an infected cell. We constructed and analysed fluorophore expression from a recombinant pseudorabies virus (PRV263) carrying a Brainbow cassette (Cre-conditional expression of different fluorophores). Using three isogenic strains derived from PRV263, each expressing a single fluorophore, we analysed the colour composition of cells infected with these three viruses at different multiplicities. We estimate that fewer than seven incoming genomes are expressed per cell. In addition, those templates that are expressed are the genomes selected for replication and packaging into virions. This finite limit on the number of viral genomes that can be expressed is an intrinsic property of the infected cell and may be influenced by viral and cellular factors.
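
The inference rests on color composition: if each of n expressed genomes is independently Cre-switched to one of three fluorophores, the fraction of cells showing all three colors is an increasing function of n. A simulation sketch of that logic (cell counts and the n values are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(9)

def frac_three_colors(n, cells=100_000):
    """Fraction of cells expressing all three fluorophores when each of
    n expressed genomes independently adopts one of three colors."""
    colors = rng.integers(0, 3, size=(cells, n))
    present = np.stack([(colors == k).any(axis=1) for k in range(3)])
    return present.all(axis=0).mean()

for n in (2, 4, 7):
    print(n, frac_three_colors(n))   # 0.0, ~0.44, ~0.83: rises with n
```

Comparing such curves with the observed color composition of infected cells is what bounds the number of expressed genomes per cell.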


Subject(s)
DNA Replication/genetics , Viral Genome/genetics , Herpesviridae/genetics , Animals , Cell Line , Fluorescence Microscopy , Theoretical Models , Swine