Results 1 - 20 of 32

1.
AJNR Am J Neuroradiol; 43(5): 721-726, 2022 May.
Article in English | MEDLINE | ID: mdl-35483905

ABSTRACT

BACKGROUND AND PURPOSE: Prioritizing reading of noncontrast head CT examinations through an automated triage system may improve time to care for patients with acute neuroradiologic findings. We present a natural language-processing approach for labeling findings in noncontrast head CT reports, which permits creation of a large, labeled dataset of head CT images for development of emergent-finding detection and reading-prioritization algorithms. MATERIALS AND METHODS: In this retrospective study, 1002 clinical radiology reports from noncontrast head CTs collected between 2008 and 2013 were manually labeled across 12 common neuroradiologic finding categories. Each report was then encoded using an n-gram model of unigrams, bigrams, and trigrams. A logistic regression model was then trained to label each report for every common finding. Models were trained and assessed using a combination of L2 regularization and 5-fold cross-validation. RESULTS: Model performance was strongest for the fracture, hemorrhage, herniation, mass effect, pneumocephalus, postoperative status, and volume loss models in which the area under the receiver operating characteristic curve exceeded 0.95. Performance was relatively weaker for the edema, hydrocephalus, infarct, tumor, and white-matter disease models (area under the receiver operating characteristic curve > 0.85). Analysis of coefficients revealed finding-specific words among the top coefficients in each model. Class output probabilities were found to be a useful indicator of predictive error on individual report examples in higher-performing models. CONCLUSIONS: Combining logistic regression with n-gram encoding is a robust approach to labeling common findings in noncontrast head CT reports.
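
Illustrative sketch (not from the paper): the described pipeline, n-gram counts of unigrams through trigrams fed to an L2-regularized logistic regression and scored with 5-fold cross-validated ROC AUC, can be reproduced in outline with scikit-learn. The toy reports, the single "hemorrhage" label, and all hyperparameters below are placeholders, not the authors' data or settings.

```python
# Sketch: n-gram (unigram/bigram/trigram) encoding + L2 logistic regression,
# evaluated with 5-fold cross-validated ROC AUC. Data below is a toy stand-in.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

reports = [
    "acute subdural hemorrhage along the left convexity with mass effect",
    "no acute intracranial hemorrhage or fracture",
    "chronic white matter disease, no acute infarct",
    "depressed skull fracture with small epidural hematoma",
] * 25                                   # replicate so 5-fold CV has enough samples
labels = np.array([1, 0, 0, 1] * 25)     # 1 = hemorrhage present (toy labels)

pipeline = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),          # unigrams, bigrams, trigrams
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(pipeline, reports, labels, cv=cv, scoring="roc_auc")
print("per-fold ROC AUC:", np.round(aucs, 3))
```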


Subjects
Head; Natural Language Processing; Algorithms; Humans; Retrospective Studies; Tomography, X-Ray Computed/methods
2.
Class Quantum Gravity; 34(6), 2017.
Article in English | MEDLINE | ID: mdl-29722360

ABSTRACT

With the first direct detection of gravitational waves, the advanced laser interferometer gravitational-wave observatory (LIGO) has initiated a new field of astronomy by providing an alternative means of sensing the universe. The extreme sensitivity required to make such detections is achieved through exquisite isolation of all sensitive components of LIGO from non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to a variety of instrumental and environmental sources of noise that contaminate the data. Of particular concern are noise features known as glitches, which are transient and non-Gaussian in their nature, and occur at a high enough rate so that accidental coincidence between the two LIGO detectors is non-negligible. Glitches come in a wide range of time-frequency-amplitude morphologies, with new morphologies appearing as the detector evolves. Since they can obscure or mimic true gravitational-wave signals, a robust characterization of glitches is paramount in the effort to achieve the gravitational-wave detection rates that are predicted by the design sensitivity of LIGO. This proves a daunting task for members of the LIGO Scientific Collaboration alone due to the sheer amount of data. In this paper we describe an innovative project that combines crowdsourcing with machine learning to aid in the challenging task of categorizing all of the glitches recorded by the LIGO detectors. Through the Zooniverse platform, we engage and recruit volunteers from the public to categorize images of time-frequency representations of glitches into pre-identified morphological classes and to discover new classes that appear as the detectors evolve. In addition, machine learning algorithms are used to categorize images after being trained on human-classified examples of the morphological classes. Leveraging the strengths of both classification methods, we create a combined method with the aim of improving the efficiency and accuracy of each individual classifier. The resulting classification and characterization should help LIGO scientists to identify causes of glitches and subsequently eliminate them from the data or the detector entirely, thereby improving the rate and accuracy of gravitational-wave observations. We demonstrate these methods using a small subset of data from LIGO's first observing run.

3.
Int Conf Signal Process Proc; 670-674, 2013 Sep 18.
Article in English | MEDLINE | ID: mdl-25089515

ABSTRACT

The rapid advance in three-dimensional (3D) confocal imaging technologies is greatly increasing the availability of 3D cellular images. However, the lack of robust automated methods for extracting cell or organelle shapes from the images is hindering researchers' ability to take full advantage of the increase in experimental output. The lack of appropriate methods is particularly significant when the density of the features of interest is high, such as in the developing eye of the fruit fly. Here, we present a novel and efficient nuclei segmentation algorithm based on the combination of a graph cut and a convex shape prior. The main characteristic of the algorithm is that it segments the nuclei foreground using a graph cut algorithm and splits overlapping or touching cell nuclei by simple convexity and concavity analysis, using a convex shape assumption for the nuclei contours. We evaluate the performance of our method by applying it to a library of publicly available two-dimensional (2D) images that were hand-labeled by experts. Our algorithm yields a substantial quantitative improvement over other methods on this benchmark. For example, it achieves a decrease of 3.2 in the Hausdorff distance and a decrease of 1.8 per slice in the merged-nuclei error.
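
A rough illustration (not the authors' implementation): the foreground graph-cut step can be mimicked with the PyMaxflow library, using squared distances to two assumed intensity models as terminal capacities and a constant smoothness weight. The synthetic image, intensity models, and weights are placeholders, and the convexity-based splitting of touching nuclei is not shown.

```python
# Sketch: binary foreground/background graph cut on a grayscale image with
# PyMaxflow (pip install PyMaxflow). Unary terms come from two assumed intensity
# models; the pairwise weight enforces smoothness. All parameters are toy values.
import numpy as np
import maxflow

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:40, 20:40] += 1.0                      # bright "nucleus" on dark background
img = img / img.max()

mu_fg, mu_bg, smooth = 0.9, 0.3, 2.0          # assumed intensity models / weight

g = maxflow.Graph[float]()
node_ids = g.add_grid_nodes(img.shape)
g.add_grid_edges(node_ids, smooth)            # pairwise smoothness between neighbors
# t-links: data costs of the two intensity models for each pixel
g.add_grid_tedges(node_ids, (img - mu_fg) ** 2, (img - mu_bg) ** 2)

g.maxflow()
foreground = g.get_grid_segments(node_ids)    # boolean map: side of the min cut
print("foreground pixels:", int(foreground.sum()))
```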

4.
IEEE Trans Neural Netw; 13(4): 900-915, 2002.
Article in English | MEDLINE | ID: mdl-18244486

ABSTRACT

Emerging broadband communication systems promise a future of multimedia telephony, e.g., the addition of visual information to telephone conversations. It is useful to consider the problem of generating the critical information useful for speechreading, based on existing narrowband communications systems used for speech. This paper focuses on the problem of synthesizing visual articulatory movements given the acoustic speech signal. In this application, the acoustic speech signal is analyzed and the corresponding articulatory movements are synthesized for speechreading. This paper describes a hidden Markov model (HMM)-based visual speech synthesizer. The key elements in the application of HMMs to this problem are the decomposition of the overall modeling task into key stages and the judicious determination of the observation vector's components for each stage. The main contribution of this paper is a novel correlation HMM that is able to integrate independently trained acoustic and visual HMMs for speech-to-visual synthesis. This model allows increased flexibility in choosing model topologies for the acoustic and visual HMMs. Moreover, the proposed model reduces the amount of training data required compared to early-integration modeling techniques. Results from objective experimental analysis show that the proposed approach can reduce time-alignment errors by 37.4% compared to the conventional temporal scaling method. Furthermore, subjective results indicate that the proposed model can increase speech understanding.
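
As a heavily simplified toy (not the paper's correlation HMM): separate acoustic and visual Gaussian HMMs can be trained with hmmlearn, and the visual state means read out along the Viterbi state sequence decoded from the acoustic stream. The features, state count, and the assumption of shared state indexing are all placeholders.

```python
# Simplified sketch (not the paper's correlation HMM): two independently trained
# GaussianHMMs, with the decoded acoustic state sequence driving visual output.
# Uses hmmlearn (pip install hmmlearn); all features below are synthetic.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
acoustic = rng.normal(size=(500, 13))        # stand-in for MFCC frames
visual = rng.normal(size=(500, 6))           # stand-in for lip/jaw parameters

n_states = 5
acoustic_hmm = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                               n_iter=20, random_state=0).fit(acoustic)
visual_hmm = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                             n_iter=20, random_state=0).fit(visual)

# Decode the acoustic stream, then synthesize a visual trajectory by reading out
# the visual state means along the (assumed shared) state sequence.
_, state_seq = acoustic_hmm.decode(acoustic, algorithm="viterbi")
synth_visual = visual_hmm.means_[state_seq]
print(synth_visual.shape)                    # (500, 6)
```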

5.
IEEE Trans Biomed Eng; 48(1): 28-40, 2001 Jan.
Article in English | MEDLINE | ID: mdl-11235588

ABSTRACT

Signal compression is an important problem encountered in many applications. Various techniques have been proposed over the years for addressing the problem. In this paper, we present a time domain algorithm based on the coding of line segments which are used to approximate the signal. These segments are fit in a way that is optimal in the rate distortion sense. Although the approach is applicable to any type of signal, we focus, in this paper, on the compression of electrocardiogram (ECG) signals. ECG signal compression has traditionally been tackled by heuristic approaches. However, it has been demonstrated [1] that exact optimization algorithms outperform these heuristic approaches by a wide margin with respect to reconstruction error. By formulating the compression problem as a graph theory problem, known optimization theory can be applied in order to yield optimal compression. In this paper, we present an algorithm that will guarantee the smallest possible distortion among all methods applying linear interpolation given an upper bound on the available number of bits. Using a varied signal test set, extensive coding experiments are presented. We compare the results from our coding method to traditional time domain ECG compression methods, as well as to more recently developed frequency domain methods. Evaluation is based both on the percentage root-mean-square difference (PRD) performance measure and on visual inspection of the reconstructed signals. The results demonstrate that the exact optimization methods have superior performance compared to both traditional ECG compression methods and the frequency domain methods.
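
Illustrative sketch (not from the paper): the graph formulation can be pictured as a shortest-path problem over sample indices, where each edge is one linear segment. The toy below minimizes the number of segments subject to a per-segment squared-error bound, a simplified dual of the paper's bit-budget formulation; the signal and tolerance are placeholders.

```python
# Sketch: optimal piecewise-linear approximation of a 1-D signal as a shortest
# path over a DAG of sample indices, minimizing the number of segments subject
# to a maximum per-segment squared error (toy stand-in for the rate-distortion
# formulation in the paper).
import numpy as np

def segment_error(x, i, j):
    """Sum of squared errors of linear interpolation between samples i and j."""
    t = np.arange(i, j + 1)
    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i:j + 1] - line) ** 2))

def optimal_segments(x, max_err):
    n = len(x)
    cost = np.full(n, np.inf)     # cost[j] = min number of segments covering x[0..j]
    prev = np.full(n, -1, dtype=int)
    cost[0] = 0
    for j in range(1, n):
        for i in range(j):
            if cost[i] + 1 < cost[j] and segment_error(x, i, j) <= max_err:
                cost[j], prev[j] = cost[i] + 1, i
    # Backtrack the knot indices of the optimal path.
    knots, j = [n - 1], n - 1
    while prev[j] != -1:
        j = prev[j]
        knots.append(j)
    return knots[::-1]

t = np.linspace(0, 2 * np.pi, 200)
ecg_like = np.sin(t) + 0.4 * np.sin(7 * t)      # toy signal standing in for ECG
print("knots:", optimal_segments(ecg_like, max_err=0.05))
```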


Subjects
Algorithms; Electrocardiography; Signal Processing, Computer-Assisted; Data Display
6.
IEEE Trans Image Process; 10(2): 278-287, 2001.
Article in English | MEDLINE | ID: mdl-18249618

ABSTRACT

We propose an iterative algorithm for enhancing the resolution of monochrome and color image sequences. Various approaches toward motion estimation are investigated and compared. Improving the spatial resolution of an image sequence critically depends upon the accuracy of the motion estimator. The problem is complicated by the fact that the motion field is prone to significant errors since the original high-resolution images are not available. Improved motion estimates may be obtained by using a more robust and accurate motion estimator, such as a pel-recursive scheme instead of block matching. In processing color image sequences, there is the added advantage of having more flexibility in how the final motion estimates are obtained, and further improvement in the accuracy of the motion field is therefore possible. This is because there are three different intensity fields (channels) conveying the same motion information. In this paper, the choice of which motion estimator to use versus how the final estimates are obtained is weighed to see which issue is more critical in improving the estimated high-resolution sequences. Toward this end, an iterative algorithm is proposed, and two sets of experiments are presented. First, several experiments using the same motion estimator but three different data fusion approaches to merge the individual motion fields were performed. Second, estimated high-resolution images using the block matching estimator were compared to those obtained by employing a pel-recursive scheme. Experiments were performed on a real color image sequence, and performance was measured by the peak signal-to-noise ratio (PSNR).

7.
IEEE Trans Image Process; 10(11): 1613-1620, 2001.
Article in English | MEDLINE | ID: mdl-18255503

ABSTRACT

In this paper, we introduce a new methodology for signal-to-noise ratio (SNR) video scalability based on the partitioning of the DCT coefficients. The DCT coefficients of the displaced frame difference (DFD) for inter-blocks or the intensity for intra-blocks are partitioned into a base layer and one or more enhancement layers, thus producing an embedded bitstream. Subsets of this bitstream can be transmitted with increasing video quality as measured by the SNR. Given a bit budget for the base and enhancement layers, the partitioning of the DCT coefficients is done in a way that is optimal in the operational rate-distortion sense. The optimization is performed using Lagrangian relaxation and dynamic programming (DP). Experimental results are presented and conclusions are drawn.
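
Illustrative sketch (not from the paper): the Lagrangian view of the partitioning can be shown on a single 8x8 block by sweeping the base/enhancement split point and minimizing J = D + λR. The rate proxy, λ, and the block are placeholders, and the paper's dynamic programming across blocks is omitted.

```python
# Sketch: choose, for one 8x8 block, how many zigzag-ordered DCT coefficients go
# into the base layer by minimizing J = D + lambda * R, where D is the energy of
# the coefficients left to the enhancement layer and R is a crude rate proxy.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8)) + np.outer(np.ones(8), np.linspace(0, 4, 8))
coeffs = dctn(block, norm="ortho")

# Approximate zigzag ordering by (row + col) so low frequencies come first.
order = np.argsort([(r + c) for r in range(8) for c in range(8)], kind="stable")
flat = coeffs.ravel()[order]

lam = 0.5                                       # Lagrange multiplier (placeholder)
rate = lambda k: 8.0 * k                        # toy rate proxy: ~8 bits/coefficient
costs = [np.sum(flat[k:] ** 2) + lam * rate(k) for k in range(65)]
k_star = int(np.argmin(costs))
print(f"base layer keeps {k_star} coefficients, J = {costs[k_star]:.2f}")
```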

8.
IEEE Trans Image Process; 9(10): 1784-1797, 2000.
Article in English | MEDLINE | ID: mdl-18262916

ABSTRACT

In this paper, we examine the restoration problem when the point-spread function (PSF) of the degradation system is partially known. For this problem, the PSF is assumed to be the sum of a known deterministic and an unknown random component. This problem has been examined before; however, in most previous works the problem of estimating the parameters that define the restoration filters was not addressed. In this paper, two iterative algorithms that simultaneously restore the image and estimate the parameters of the restoration filter are proposed using evidence analysis (EA) within the hierarchical Bayesian framework. We show that the restoration step of the first of these algorithms is in effect almost identical to the regularized constrained total least-squares (RCTLS) filter, while the restoration step of the second is identical to the linear minimum mean square-error (LMMSE) filter for this problem. Therefore, in this paper we provide a solution to the parameter estimation problem of the RCTLS filter. We further provide an alternative approach to the expectation-maximization (EM) framework to derive a parameter estimation algorithm for the LMMSE filter. These iterative algorithms are derived in the discrete Fourier transform (DFT) domain; therefore, they are computationally efficient even for large images. Numerical experiments are presented that test and compare the proposed algorithms.
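
Illustrative sketch (not from the paper): for fixed parameters, the LMMSE-style restoration step reduces to a Wiener-type filter in the DFT domain. The blur kernel and the noise-to-signal power ratio below are fixed placeholders; estimating such parameters is precisely what the paper's algorithms do instead.

```python
# Sketch: Wiener/LMMSE-style deblurring in the DFT domain. The blur kernel and
# the noise-to-signal power ratio are fixed placeholders; the paper's algorithms
# estimate such parameters iteratively rather than assuming them.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((128, 128))                       # stand-in for the original image

# Simple separable blur PSF, zero-padded to the image size.
psf = np.outer(np.hanning(9), np.hanning(9))
psf /= psf.sum()
psf_pad = np.zeros_like(x)
psf_pad[:9, :9] = psf
psf_pad = np.roll(psf_pad, (-4, -4), axis=(0, 1))   # center the PSF at the origin

H = np.fft.fft2(psf_pad)
y = np.real(np.fft.ifft2(H * np.fft.fft2(x)))
y += 0.01 * rng.standard_normal(y.shape)         # observation = blur + noise

nsr = 1e-2                                       # assumed noise-to-signal power ratio
X_hat = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + nsr)
x_hat = np.real(np.fft.ifft2(X_hat))
print("restoration MSE:", float(np.mean((x_hat - x) ** 2)))
```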

9.
IEEE Trans Image Process; 9(7): 1200-1215, 2000.
Article in English | MEDLINE | ID: mdl-18262958

ABSTRACT

With block-based compression approaches for both still images and sequences of images, annoying blocking artifacts are exhibited, primarily at high compression ratios. They are due to the independent processing (quantization) of the block-transformed values of the intensity or the displaced frame difference. We propose the application of the hierarchical Bayesian paradigm to the reconstruction of block discrete cosine transform (BDCT) compressed images and the estimation of the required parameters. We derive expressions for the iterative evaluation of these parameters by applying the evidence analysis within the hierarchical Bayesian paradigm. The proposed method allows for the combination of parameters estimated at the coder and decoder. The performance of the proposed algorithms is demonstrated experimentally.

10.
IEEE Trans Image Process; 8(2): 231-246, 1999.
Article in English | MEDLINE | ID: mdl-18267470

ABSTRACT

In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters by applying the evidence and maximum a posteriori (MAP) analysis within the hierarchical Bayesian paradigm. We show analytically that the analysis provided by the evidence approach is more realistic and appropriate than the MAP approach for the image restoration problem. We furthermore study the relationship between the evidence and an iterative approach resulting from the set theoretic regularization approach for estimating the two hyperparameters, or their ratio, defined as the regularization parameter. Finally, the proposed algorithms are tested experimentally.

11.
IEEE Trans Image Process; 7(1): 13-26, 1998.
Article in English | MEDLINE | ID: mdl-18267376

ABSTRACT

In this paper, we present fast and efficient methods for the lossy encoding of object boundaries that are given as eight-connect chain codes. We approximate the boundary by a polygon, and consider the problem of finding the polygon which leads to the smallest distortion for a given number of bits. We also address the dual problem of finding the polygon which leads to the smallest bit rate for a given distortion. We consider two different classes of distortion measures. The first class is based on the maximum operator and the second class is based on the summation operator. For the first class, we derive a fast and optimal scheme that is based on a shortest path algorithm for a weighted directed acyclic graph. For the second class, we propose a solution approach that is based on the Lagrange multiplier method, which uses the above-mentioned shortest path algorithm. Since the Lagrange multiplier method can only find solutions on the convex hull of the operational rate distortion function, we also propose a tree-pruning-based algorithm that can find all the optimal solutions. Finally, we present results of the proposed schemes using objects from the Miss America sequence.

12.
IEEE Trans Image Process; 7(11): 1505-1523, 1998.
Article in English | MEDLINE | ID: mdl-18276217

ABSTRACT

We propose an optimal quadtree (QT)-based motion estimator for video compression. It is optimal in the sense that for a given bit budget for encoding the displacement vector field (DVF) and the QT segmentation, the scheme finds a DVF and a QT segmentation which minimizes the energy of the resulting displaced frame difference (DFD). We find the optimal QT decomposition and the optimal DVF jointly using the Lagrangian multiplier method and a multilevel dynamic program. We introduce a new, very fast convex search for the optimal Lagrangian multiplier λ*, which results in a very fast convergence of the Lagrangian multiplier method. The resulting DVF is spatially inhomogeneous, since large blocks are used in areas with simple motion and small blocks in areas with complex motion. We also propose a novel motion-compensated interpolation scheme which uses the same mathematical tools developed for the QT-based motion estimator. One of the advantages of this scheme is the globally optimal control of the tradeoff between the interpolation error energy and the DVF smoothness. Another advantage is that no interpolation of the DVF is required since we directly estimate the DVF and the QT-segmentation for the frame which needs to be interpolated. We present results with the proposed QT-based motion estimator which show that for the same DFD energy the proposed estimator uses about 25% fewer bits than the commonly used block matching algorithm. We also experimentally compare the interpolated frames using the proposed motion compensated interpolation scheme with the reconstructed original frames.
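
Illustrative sketch (not from the paper): the core quadtree decision can be expressed as a recursive comparison of Lagrangian costs J = D + λR between coding a block with a single motion vector and splitting it into four children. The exhaustive block matching, rate model, λ, and synthetic frames below are placeholders for the paper's jointly optimal procedure.

```python
# Sketch: recursive quadtree split decision driven by J = D + lambda * R.
# Motion search is exhaustive block matching over a small window; rate is a
# crude per-vector bit count. Frames and parameters are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
prev = rng.random((64, 64))
curr = np.roll(prev, (2, 1), axis=(0, 1))          # globally shifted "next frame"

LAMBDA, VECTOR_BITS, SEARCH = 50.0, 12.0, 4

def best_match(y, x, size):
    """Exhaustive block matching; returns (SSD, motion vector)."""
    block = curr[y:y + size, x:x + size]
    best = (np.inf, (0, 0))
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + size <= 64 and xx + size <= 64:
                ref = prev[yy:yy + size, xx:xx + size]
                ssd = float(np.sum((block - ref) ** 2))
                if ssd < best[0]:
                    best = (ssd, (dy, dx))
    return best

def encode(y, x, size):
    """Return (Lagrangian cost, leaf count) for the best decision at this block."""
    ssd, _ = best_match(y, x, size)
    j_keep = ssd + LAMBDA * VECTOR_BITS            # cost of one vector for the block
    if size <= 8:
        return j_keep, 1
    half = size // 2
    children = [encode(y + dy, x + dx, half)
                for dy in (0, half) for dx in (0, half)]
    j_split = sum(c[0] for c in children)
    leaves = sum(c[1] for c in children)
    return (j_keep, 1) if j_keep <= j_split else (j_split, leaves)

cost, leaves = encode(0, 0, 64)
print(f"Lagrangian cost {cost:.1f} with {leaves} quadtree leaves")
```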

13.
IEEE Trans Image Process; 7(12): 1684-1699, 1998.
Article in English | MEDLINE | ID: mdl-18276235

ABSTRACT

A hybrid multidimensional image segmentation algorithm is proposed, which combines edge and region-based techniques through the morphological algorithm of watersheds. An edge-preserving statistical noise reduction approach is used as a preprocessing stage in order to compute an accurate estimate of the image gradient. Then, an initial partitioning of the image into primitive regions is produced by applying the watershed transform on the image gradient magnitude. This initial segmentation is the input to a computationally efficient hierarchical (bottom-up) region merging process that produces the final segmentation. The latter process uses the region adjacency graph (RAG) representation of the image regions. At each step, the most similar pair of regions is determined (minimum cost RAG edge), the regions are merged and the RAG is updated. Traditionally, the above is implemented by storing all RAG edges in a priority queue. We propose a significantly faster algorithm, which additionally maintains the so-called nearest neighbor graph, due to which the priority queue size and processing time are drastically reduced. The final segmentation provides, due to the RAG, one-pixel wide, closed, and accurately localized contours/surfaces. Experimental results obtained with two-dimensional/three-dimensional (2-D/3-D) magnetic resonance images are presented.
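
A rough illustration (not the authors' implementation): the watershed-plus-RAG structure can be approximated with scikit-image, using a gradient-magnitude watershed for the initial partition and a simple threshold cut on a mean-intensity RAG in place of the paper's hierarchical nearest-neighbor-graph merging. The test image and threshold are placeholders, and on older scikit-image versions the RAG utilities live in skimage.future.graph rather than skimage.graph.

```python
# Sketch: watershed of the gradient magnitude, then region merging on the region
# adjacency graph via a threshold cut (a simplified stand-in for hierarchical
# merging). Image and threshold are placeholders.
from skimage import data, filters, segmentation, graph
from skimage.color import gray2rgb

image = data.coins()
gradient = filters.sobel(image.astype(float))

# Over-segmented initial partition from the watershed of the gradient magnitude.
labels = segmentation.watershed(gradient, markers=400, compactness=0.0)

# Region adjacency graph weighted by mean-intensity differences, then merged by
# cutting edges whose weight exceeds a (placeholder) threshold.
rag = graph.rag_mean_color(gray2rgb(image), labels)
merged = graph.cut_threshold(labels, rag, thresh=20)

print("regions before/after merging:", labels.max(), merged.max())
```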

14.
IEEE Trans Image Process; 6(11): 1487-1502, 1997.
Article in English | MEDLINE | ID: mdl-18282908

ABSTRACT

We present a theory for the optimal bit allocation among quadtree (QT) segmentation, displacement vector field (DVF), and displaced frame difference (DFD). The theory is applicable to variable block size motion-compensated video coders (VBSMCVC), where the variable block sizes are encoded using the QT structure, the DVF is encoded by first-order differential pulse code modulation (DPCM), the DFD is encoded by a block-based scheme, and an additive distortion measure is employed. We derive an optimal scanning path for a QT that is based on a Hilbert curve. We consider the case of a lossless VBSMCVC first, for which we develop the optimal bit allocation algorithm using dynamic programming (DP). We then consider a lossy VBSMCVC, for which we use Lagrangian relaxation, and show how an iterative scheme, which employs the DP-based solution, can be used to find the optimal solution. We finally present a VBSMCVC based on the proposed theory, which employs a DCT-based DFD encoding scheme. We compare the proposed coder with H.263. The results show that it outperforms H.263 significantly in the rate distortion sense, as well as in the subjective sense.

15.
IEEE Trans Image Process; 6(5): 774-778, 1997.
Article in English | MEDLINE | ID: mdl-18282972

ABSTRACT

In this correspondence, a constrained least-squares multichannel image restoration approach is proposed, in which no prior knowledge of the noise variance at each channel or the degree of smoothness of the original image is required. The regularization functional for each channel is determined by incorporating both within-channel and cross-channel information. It is shown that the proposed smoothing functional has a global minimizer.

16.
IEEE Trans Image Process; 5(4): 619-634, 1996.
Article in English | MEDLINE | ID: mdl-18285150

ABSTRACT

In this paper, we present a new spatially adaptive approach to the restoration of noisy blurred images, which is particularly effective at producing sharp deconvolution while suppressing the noise in the flat regions of an image. This is accomplished through a multiscale Kalman smoothing filter applied to a prefiltered observed image in the discrete, separable, 2-D wavelet domain. The prefiltering step involves constrained least-squares filtering based on optimal choices for the regularization parameter. This leads to a reduction in the support of the required state vectors of the multiscale restoration filter in the wavelet domain and improvement in the computational efficiency of the multiscale filter. The proposed method has the benefit that the majority of the regularization, or noise suppression, of the restoration is accomplished by the efficient multiscale filtering of wavelet detail coefficients ordered on quadtrees. Not only does this lead to potential parallel implementation schemes, but it permits adaptivity to the local edge information in the image. In particular, this method changes filter parameters depending on scale, local signal-to-noise ratio (SNR), and orientation. Because the wavelet detail coefficients are a manifestation of the multiscale edge information in an image, this algorithm may be viewed as an "edge-adaptive" multiscale restoration approach.
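
Illustrative sketch (not from the paper): the two-stage structure, a constrained least-squares (Tikhonov-style) prefilter in the DFT domain followed by processing of wavelet detail coefficients, can be outlined with NumPy and PyWavelets. Simple soft-thresholding stands in here for the multiscale Kalman smoother, and the blur, noise level, regularization weight, and threshold are placeholders.

```python
# Sketch: CLS-style frequency-domain prefilter, then wavelet-domain shrinkage of
# detail coefficients (a crude stand-in for the paper's multiscale Kalman filter).
# Uses PyWavelets (pip install PyWavelets); all parameters are placeholders.
import numpy as np
import pywt

rng = np.random.default_rng(0)
x = rng.random((128, 128))

# Degradation: small blur plus additive noise.
psf = np.zeros_like(x)
psf[:5, :5] = 1.0 / 25.0
psf = np.roll(psf, (-2, -2), axis=(0, 1))
H = np.fft.fft2(psf)
y = np.real(np.fft.ifft2(H * np.fft.fft2(x))) + 0.02 * rng.standard_normal(x.shape)

# Stage 1: constrained least-squares prefilter with a Laplacian regularizer.
lap = np.zeros_like(x)
lap[0, 0], lap[0, 1], lap[1, 0], lap[0, -1], lap[-1, 0] = 4.0, -1.0, -1.0, -1.0, -1.0
C = np.fft.fft2(lap)
alpha = 0.01                                    # regularization parameter (placeholder)
X_pre = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + alpha * np.abs(C) ** 2)
x_pre = np.real(np.fft.ifft2(X_pre))

# Stage 2: shrink wavelet detail coefficients, scale by scale.
coeffs = pywt.wavedec2(x_pre, "db2", level=3)
shrunk = [coeffs[0]] + [tuple(pywt.threshold(d, value=0.02, mode="soft") for d in lvl)
                        for lvl in coeffs[1:]]
x_hat = pywt.waverec2(shrunk, "db2")[:128, :128]
print("MSE:", float(np.mean((x_hat - x) ** 2)))
```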

17.
IEEE Trans Image Process; 4(4): 416-429, 1995.
Article in English | MEDLINE | ID: mdl-18289991

ABSTRACT

A recursive model-based algorithm for obtaining the maximum a posteriori (MAP) estimate of the displacement vector field (DVF) from successive image frames of an image sequence is presented. To model the DVF, we develop a nonstationary vector field model called the vector coupled Gauss-Markov (VCGM) model. The VCGM model consists of two levels: an upper level, which is made up of several submodels with various characteristics, and a lower level or line process, which governs the transitions between the submodels. A detailed line process is proposed. The VCGM model is well suited for estimating the DVF since the resulting estimates preserve the boundaries between the differently moving areas in an image sequence. A Kalman type estimator results, followed by a decision criterion for choosing the appropriate line process. Several experiments demonstrate the superior performance of the proposed algorithm with respect to prediction error, interpolation error, and robustness to noise.

18.
IEEE Trans Image Process; 4(5): 594-602, 1995.
Article in English | MEDLINE | ID: mdl-18290009

ABSTRACT

The determination of the regularization parameter is an important issue in regularized image restoration, since it controls the trade-off between fidelity to the data and smoothness of the solution. A number of approaches have been developed in determining this parameter. In this paper, a new paradigm is adopted, according to which the required prior information is extracted from the available data at the previous iteration step, i.e., the partially restored image at each step. We propose the use of a regularization functional instead of a constant regularization parameter. The properties such a regularization functional should satisfy are investigated, and two specific forms of it are proposed. An iterative algorithm is proposed for obtaining a restored image. The regularization functional is defined in terms of the restored image at each iteration step, therefore allowing for the simultaneous determination of its value and the restoration of the degraded image. Both proposed iteration adaptive regularization functionals are shown to result in a smoothing functional with a global minimum, so that its iterative optimization does not depend on the initial conditions. The convergence of the algorithm is established and experimental results are shown.
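
Illustrative sketch (not from the paper): the iteration-adaptive idea, recomputing the amount of regularization from the partially restored image at each step, can be caricatured by alternating a Tikhonov-filtered restoration with an update of the regularization parameter. The update rule below (residual energy over Laplacian roughness) is a common heuristic used as a placeholder, not the paper's regularization functionals.

```python
# Sketch: iterative restoration in which the regularization parameter is
# recomputed from the partially restored image at each step. The update rule
# (residual energy over Laplacian roughness) is a placeholder heuristic.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((64, 64))
psf = np.zeros_like(x)
psf[:3, :3] = 1.0 / 9.0
psf = np.roll(psf, (-1, -1), axis=(0, 1))
H = np.fft.fft2(psf)
y = np.real(np.fft.ifft2(H * np.fft.fft2(x))) + 0.02 * rng.standard_normal(x.shape)

lap = np.zeros_like(x)
lap[0, 0], lap[0, 1], lap[1, 0], lap[0, -1], lap[-1, 0] = 4, -1, -1, -1, -1
C = np.fft.fft2(lap)

Y = np.fft.fft2(y)
alpha = 1.0                                        # initial regularization parameter
for it in range(10):
    X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + alpha * np.abs(C) ** 2)
    residual = np.sum(np.abs(Y - H * X_hat) ** 2)  # data-fidelity term
    roughness = np.sum(np.abs(C * X_hat) ** 2)     # smoothness term
    alpha = residual / (roughness + 1e-12)         # adapt alpha from current estimate
    print(f"iter {it}: alpha = {alpha:.3e}")
```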

19.
IEEE Trans Image Process; 4(6): 743-751, 1995.
Article in English | MEDLINE | ID: mdl-18290025

ABSTRACT

We develop an algorithm for obtaining the maximum likelihood (ML) estimate of the displacement vector field (DVF) from two consecutive image frames of an image sequence acquired under quantum-limited conditions. The estimation of the DVF has applications in temporal filtering, object tracking, stereo matching, and frame registration in low-light-level image sequences as well as low-dose clinical X-ray image sequences. In the latter case, a controlled X-ray dosage reduction may be utilized to lower the radiation exposure to the patient and the medical staff. The quantum-limited effect is modeled as an undesirable, Poisson-distributed, signal-dependent noise artifact. A Fisher-Bayesian formulation is used to estimate the DVF and a block component search algorithm is employed in obtaining the solution. Several experiments involving a phantom sequence and a teleconferencing image sequence with realistic motion demonstrate the effectiveness of this estimator in obtaining the DVF under severe quantum noise conditions (20-25 events/pixel).
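
Illustrative sketch (not from the paper): a block-based maximum-likelihood search under a Poisson observation model can score each candidate displacement by the Poisson log-likelihood of the observed block given the shifted previous frame as its mean, and keep the maximizer. The synthetic frames, event rates, block location, and search window are placeholders.

```python
# Sketch: maximum-likelihood displacement search under Poisson (quantum-limited)
# noise. The candidate displacement maximizing the Poisson log-likelihood of the
# observed block is selected. Frames and rates below are synthetic placeholders.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
rate = 22.0 * np.ones((64, 64))
rate[20:40, 20:40] = 45.0                          # a brighter moving patch
prev_clean = rate
curr_clean = np.roll(prev_clean, (3, -2), axis=(0, 1))  # true displacement (3, -2)

prev = rng.poisson(prev_clean).astype(float)       # quantum-limited observations
curr = rng.poisson(curr_clean).astype(float)

def poisson_loglik(obs, mean):
    mean = np.clip(mean, 1e-3, None)
    return float(np.sum(obs * np.log(mean) - mean - gammaln(obs + 1.0)))

y0, x0, size, search = 16, 16, 24, 5
obs_block = curr[y0:y0 + size, x0:x0 + size]
best = (-np.inf, (0, 0))
for dy in range(-search, search + 1):
    for dx in range(-search, search + 1):
        ref = np.roll(prev, (dy, dx), axis=(0, 1))[y0:y0 + size, x0:x0 + size]
        ll = poisson_loglik(obs_block, ref)
        if ll > best[0]:
            best = (ll, (dy, dx))
print("estimated displacement:", best[1])          # expected to be near (3, -2)
```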

20.
IEEE Trans Image Process; 4(6): 752-773, 1995.
Article in English | MEDLINE | ID: mdl-18290026

ABSTRACT

This paper considers the concept of robust estimation in regularized image restoration. Robust functionals are employed for the representation of both the noise and the signal statistics. Such functionals allow the efficient suppression of a wide variety of noise processes and permit the reconstruction of sharper edges than their quadratic counterparts. A new class of robust entropic functionals is introduced, which operates only on the high-frequency content of the signal and reflects sharp deviations in the signal distribution. This class of functionals can also incorporate prior structural information regarding the original image, in a way similar to the maximum information principle. The convergence properties of robust iterative algorithms are studied for continuously and noncontinuously differentiable functionals. The definition of the robust approach is completed by introducing a method for the optimal selection of the regularization parameter. This method utilizes the structure of robust estimators that lack analytic specification. The properties of robust algorithms are demonstrated through restoration examples in different noise environments.
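
As a toy illustration (not the authors' implementation): the robust-estimation idea can be caricatured by replacing quadratic penalties with a Huber-type function on both the data residual and the image gradients, minimized by plain gradient descent. The blur, Huber thresholds, regularization weight, and step size are placeholders; the paper's entropic functionals and parameter-selection method are not shown.

```python
# Sketch: regularized restoration with Huber-type (robust) penalties on both the
# data residual and the image gradients, minimized by plain gradient descent.
# All parameters (blur, thresholds, step size) are illustrative placeholders.
import numpy as np
from scipy.ndimage import convolve

def huber_grad(r, delta):
    """Derivative of the Huber function: linear inside, constant slope outside."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

rng = np.random.default_rng(0)
x = rng.random((64, 64))
psf = np.ones((3, 3)) / 9.0
y = convolve(x, psf, mode="wrap") + 0.05 * rng.standard_normal(x.shape)

alpha, delta, step = 0.1, 0.05, 0.2
x_hat = y.copy()
for _ in range(50):
    # Data term: Huber penalty on the residual of the (assumed known) blur model.
    resid = convolve(x_hat, psf, mode="wrap") - y
    grad_data = convolve(huber_grad(resid, delta), psf[::-1, ::-1], mode="wrap")
    # Prior term: Huber penalty on circular horizontal and vertical differences.
    dh = np.diff(x_hat, axis=1, append=x_hat[:, :1])
    dv = np.diff(x_hat, axis=0, append=x_hat[:1, :])
    grad_prior = (np.roll(huber_grad(dh, delta), 1, axis=1) - huber_grad(dh, delta)
                  + np.roll(huber_grad(dv, delta), 1, axis=0) - huber_grad(dv, delta))
    x_hat -= step * (grad_data + alpha * grad_prior)
print("MSE:", float(np.mean((x_hat - x) ** 2)))
```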
