Results 1 - 13 of 13
1.
J Digit Imaging ; 36(4): 1826-1850, 2023 08.
Article in English | MEDLINE | ID: mdl-37038039

ABSTRACT

The growing use of multimodal high-resolution volumetric data in pre-clinical studies poses challenges for managing and handling these large datasets. Unlike in the clinical context, there are currently no standard guidelines regulating the use of image compression in pre-clinical settings as a potential remedy for this problem. In this work, the authors study the application of lossy image coding to compress high-resolution volumetric biomedical data. The impact of compression on the metrics and interpretation of volumetric data was quantified for a correlated multimodal imaging study characterizing murine tumor vasculature, using volumetric high-resolution episcopic microscopy (HREM), micro-computed tomography (µCT), and micro-magnetic resonance imaging (µMRI). The effects of compression were assessed by measuring the task-specific performance of several biomedical experts who interpreted and labeled multiple data volumes compressed to different degrees. We defined trade-offs between data volume reduction and preservation of visual information, ensuring the preservation of relevant vasculature morphology at maximum compression efficiency across scales. Using the Jaccard Index (JI) and the average Hausdorff Distance (HD) after vasculature segmentation, we demonstrated that, in this study, compression yielding a 256-fold reduction in data size kept the compression-induced error below the inter-observer variability, with minimal impact on the assessment of the tumor vasculature across scales.
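The two evaluation metrics cited above have standard definitions that can be sketched for binary segmentation masks. The following is a minimal NumPy illustration; the toy masks and function names are illustrative, not the study's actual pipeline:

```python
import numpy as np

def jaccard_index(a, b):
    """Jaccard Index between two binary masks: |A ∩ B| / |A ∪ B|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def average_hausdorff(a, b):
    """Average Hausdorff Distance between the foreground point sets of two masks."""
    pa = np.argwhere(a)  # coordinates of foreground pixels/voxels
    pb = np.argwhere(b)
    # pairwise Euclidean distances (fine for small masks; use KD-trees for large ones)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

ref = np.zeros((8, 8), bool); ref[2:6, 2:6] = True    # reference segmentation
test = np.zeros((8, 8), bool); test[3:7, 3:7] = True  # segmentation after compression
print(round(jaccard_index(ref, test), 3))   # 9-pixel overlap over a 23-pixel union
print(round(average_hausdorff(ref, test), 3))
```

Lower JI and higher average HD both indicate that compression has degraded the segmentation relative to the reference.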


Subject(s)
Data Compression; Neoplasms; Humans; Animals; Mice; Data Compression/methods; X-Ray Microtomography; Magnetic Resonance Imaging; Multimodal Imaging; Image Processing, Computer-Assisted/methods
3.
IEEE Trans Image Process ; 31: 1708-1722, 2022.
Article in English | MEDLINE | ID: mdl-35100115

ABSTRACT

Common representations of light fields use four-dimensional data structures, in which a given pixel is closely related not only to its spatial neighbours within the same view, but also to its angular neighbours, co-located in adjacent views. Such a structure presents increased redundancy between pixels when compared with regular single-view images. These redundancies are exploited to obtain compressed representations of the light field, using prediction algorithms specifically tailored to estimate pixel values from both spatial and angular references. This paper proposes new encoding schemes that take advantage of the four-dimensional light field data structure to improve the coding performance of Minimum Rate Predictors. The proposed methods expand previous research on lossless coding beyond the current state of the art. Experimental results, obtained using both traditional and more challenging datasets, show bit-rate savings of at least 10% when compared with existing methods for lossless light field compression.
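The spatial-versus-angular redundancy described above can be illustrated with a toy 4-D light field; the selector below is a hypothetical stand-in for intuition only, not the paper's Minimum Rate Predictors:

```python
import numpy as np

# Toy 4-D light field L[u, v, y, x]: 2×2 views of the same 4×4 ramp image,
# so adjacent views are perfectly correlated (idealised angular redundancy).
ramp = np.add.outer(np.arange(4), np.arange(4))
lf = np.broadcast_to(ramp, (2, 2, 4, 4))

def best_reference(lf, u, v, y, x):
    """Compare a spatial reference (west neighbour in the same view) with an
    angular reference (co-located sample in the adjacent view) and keep the
    closer one. Hypothetical selector illustrating 4-D redundancy."""
    target = int(lf[u, v, y, x])
    spatial = int(lf[u, v, y, x - 1])   # same view, previous column
    angular = int(lf[u, v - 1, y, x])   # adjacent view, same position
    if abs(target - angular) <= abs(target - spatial):
        return "angular", angular
    return "spatial", spatial

print(best_reference(lf, 1, 1, 2, 2))  # identical views make the angular reference exact
```

A real codec would weigh both reference types inside its rate-optimized predictors rather than picking one per pixel.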

4.
Med Image Anal ; 75: 102254, 2022 01.
Article in English | MEDLINE | ID: mdl-34649195

ABSTRACT

Medical image classification through learning-based approaches has been increasingly used, namely in the discrimination of melanoma. However, for skin lesion classification in general, such methods commonly rely on dermoscopic or other 2D macro RGB images. This work proposes to go beyond conventional 2D image characteristics by considering a third dimension (depth) that characterises the skin-surface rugosity and can be obtained from light-field images, such as those available in the SKINL2 dataset. To achieve this goal, a processing pipeline was deployed using a Morlet scattering transform and a CNN model, allowing a comparison between using only 2D information, only 3D information, or both. Results show that discrimination between Melanoma and Nevus reaches an accuracy of 84.00%, 74.00% or 94.00% when using only 2D, only 3D, or both, respectively. An increase of 14.29 pp in sensitivity and 8.33 pp in specificity is achieved when expanding beyond conventional 2D information by also using depth. When discriminating between Melanoma and all other types of lesions (a more imbalanced setting), an increase of 28.57 pp in sensitivity and a decrease of 1.19 pp in specificity are achieved under the same test conditions. Overall, the results of this work demonstrate significant improvements over conventional approaches.


Subject(s)
Melanoma; Nevus; Skin Neoplasms; Dermoscopy; Humans; Melanoma/diagnostic imaging; Skin Neoplasms/diagnostic imaging
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2726-2731, 2021 11.
Article in English | MEDLINE | ID: mdl-34891814

ABSTRACT

Machine learning algorithms are progressively assuming important roles as computational tools to support clinical diagnosis, namely in the classification of pigmented skin lesions using RGB images. Most current classification methods rely on common 2D image features derived from shape, colour or texture, which do not always guarantee the best results. This work presents a contribution to this field by exploiting the characteristics of the lesions' border line using a new dimension, depth, which has not been thoroughly investigated so far. A selected group of features is extracted from the depth information of 3D images and then used for classification with a quadratic Support Vector Machine. Despite the class imbalance often present in medical image datasets, the proposed algorithm achieves a top geometric mean of 94.87%, comprising 100.00% sensitivity and 90.00% specificity, using only depth information for the detection of Melanomas. These results show that potential gains can be achieved by extracting information from this often-overlooked dimension, which provides more balanced results in terms of sensitivity and specificity than the other settings.
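The geometric mean reported above is consistent with the stated sensitivity and specificity, since √(1.00 × 0.90) ≈ 0.9487. A minimal sketch of the metric itself (the toy labels are illustrative, not the paper's data):

```python
import math

def geometric_mean_score(y_true, y_pred, positive=1):
    """G-mean = sqrt(sensitivity × specificity); robust to class imbalance,
    since a trivial majority-class predictor scores 0."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(sensitivity * specificity)

# toy imbalanced set: 4 melanomas (label 1) among 10 lesions
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]  # one benign lesion misclassified
print(round(geometric_mean_score(y_true, y_pred), 4))  # sqrt(1.0 × 5/6) ≈ 0.9129
```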


Subject(s)
Melanoma; Skin Diseases; Skin Neoplasms; Dermoscopy; Humans; Image Interpretation, Computer-Assisted; Melanoma/diagnostic imaging; Skin Neoplasms/diagnosis
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3905-3908, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946726

ABSTRACT

Light field imaging technology has been attracting increasing interest because it captures enriched visual information and expands the processing capabilities of traditional 2D imaging systems. Dense multiview imagery, accurate depth maps and multiple focus planes are examples of the types of visual information enabled by light fields. This technology is also emerging in medical imaging research, such as dermatology, enabling the discovery of new features and the improvement of classification algorithms, namely those based on machine learning approaches. This paper presents a contribution to the research community in the form of a publicly available light field image dataset of skin lesions (named SKINL2 v1.0). The dataset contains 250 light fields, captured with a focused plenoptic camera and classified into eight clinical categories according to the type of lesion. Each light field comprises 81 different views of the same lesion. The database also includes the dermatoscopic image of each lesion. A representative subset of 17 central-view images of the light fields is further characterised in terms of spatial information (SI), colourfulness (CF) and compressibility. This dataset has high potential for advancing medical imaging research and the development of new classification algorithms based on light fields, as well as for clinically oriented dermatology studies.
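The spatial information (SI) measure mentioned above is commonly computed, following ITU-T P.910, as the standard deviation of the Sobel gradient magnitude of the luminance plane. A dependency-free sketch (helper names are illustrative):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def _filter3(img, k):
    """Valid 3×3 filtering by shifted sums (avoids a SciPy dependency)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def spatial_information(gray):
    """SI per ITU-T P.910: std-dev of the Sobel gradient magnitude."""
    g = gray.astype(float)
    gx, gy = _filter3(g, SOBEL_X), _filter3(g, SOBEL_Y)
    return float(np.sqrt(gx ** 2 + gy ** 2).std())

flat = np.full((32, 32), 128.0)
noisy = np.random.default_rng(1).integers(0, 256, (32, 32)).astype(float)
print(spatial_information(flat))  # a uniform patch carries zero spatial information
print(spatial_information(noisy) > spatial_information(flat))
```

Higher SI indicates more spatial detail, which generally makes an image harder to compress.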


Subject(s)
Dermoscopy/methods; Machine Learning; Skin Diseases/diagnostic imaging; Algorithms; Humans
7.
IEEE Trans Image Process ; 17(9): 1640-53, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18701400

ABSTRACT

In this paper, we exploit a recently introduced coding algorithm, the multidimensional multiscale parser (MMP), as an alternative to traditional transform-quantization-based methods. MMP uses approximate pattern matching with adaptive multiscale dictionaries that contain concatenations of scaled versions of previously encoded image blocks. We propose the use of predictive coding schemes that modify the source's probability distribution in order to favour the efficiency of MMP's dictionary adaptation. Statistical conditioning is also used, allowing for increased coding efficiency of the dictionaries' symbols. New dictionary design methods that strike an effective compromise between the introduction of new dictionary elements and the reduction of codebook redundancy are also proposed. Experimental results validate the proposed techniques by showing consistent improvements in PSNR performance over the original MMP algorithm. When compared with state-of-the-art methods such as JPEG2000 and H.264/AVC, the proposed algorithm achieves relevant gains (up to 6 dB) for non-smooth images and very competitive results for smooth images. These results strongly suggest that the paradigm posed by MMP can be regarded as an alternative to the one traditionally used in image coding for a wide range of image types.
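The adaptive-dictionary pattern matching at the heart of MMP can be sketched in highly simplified form; the real codec also concatenates and rescales patterns across block sizes and applies the predictive and statistical conditioning described above (all names below are illustrative):

```python
import numpy as np

def best_match(block, dictionary):
    """Approximate pattern matching: index of the dictionary entry with the
    smallest squared error against the block."""
    return int(np.argmin([((block - d) ** 2).sum() for d in dictionary]))

def encode_blocks(blocks, seed_entry):
    """Greedy toy encoder: match each block against the current dictionary,
    then grow the dictionary with the block itself so that later, similar
    blocks are matched exactly (MMP's scale-crossing updates are omitted)."""
    dictionary = [seed_entry]
    indices = []
    for b in blocks:
        indices.append(best_match(b, dictionary))
        dictionary.append(b.copy())  # adaptive dictionary growth
    return indices, dictionary

b1 = np.arange(16.0).reshape(4, 4)
indices, dictionary = encode_blocks([b1, b1], seed_entry=np.zeros((4, 4)))
print(indices)  # the repeated block matches its own earlier copy
```

The dictionary growth is what lets the codec adapt to the input image's characteristics instead of relying on a fixed transform basis.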


Subject(s)
Algorithms; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Video Recording/methods; Reproducibility of Results; Sensitivity and Specificity
8.
IEEE Trans Med Imaging ; 36(11): 2250-2260, 2017 11.
Article in English | MEDLINE | ID: mdl-28613165

ABSTRACT

This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3-D-MRP, is based on the principle of minimum rate predictors (MRPs), one of the state-of-the-art lossless compression technologies in the data compression literature. The main features of the proposed method include the use of 3-D predictors, 3-D block octree partitioning and classification, volume-based optimization, and support for 16-bit-depth images. Experimental results demonstrate the efficiency of the 3-D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8- and 16-bit-depth contents, respectively, when compared with JPEG-LS, JPEG2000, CALIC, and HEVC, as well as with other proposals based on the MRP algorithm.
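A fixed causal 3-D predictor combining intra-slice and inter-slice references can be sketched as follows; this is a hypothetical illustration of the idea, not the optimized adaptive predictors of 3-D-MRP:

```python
import numpy as np

def predict_3d(volume, z, y, x):
    """Causal 3-D prediction: average of the west and north neighbours in the
    current slice and the co-located sample in the previous slice. A lossless
    codec would entropy-code the residual actual − prediction."""
    refs = []
    if x > 0: refs.append(int(volume[z, y, x - 1]))  # west (same slice)
    if y > 0: refs.append(int(volume[z, y - 1, x]))  # north (same slice)
    if z > 0: refs.append(int(volume[z - 1, y, x]))  # co-located, previous slice
    return round(sum(refs) / len(refs)) if refs else 0

vol = np.arange(2 * 3 * 3).reshape(2, 3, 3)  # toy 2-slice volume
print(predict_3d(vol, 1, 1, 1))  # average of west=12, north=10, previous-slice=4
```

Exploiting the third (inter-slice) reference is what distinguishes volumetric prediction from running a 2-D predictor slice by slice.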


Subject(s)
Algorithms; Data Compression/methods; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Humans
9.
IEEE Trans Image Process ; 25(9): 4046-60, 2016 09.
Article in English | MEDLINE | ID: mdl-27333603

ABSTRACT

Directional intra prediction plays an important role in current state-of-the-art video coding standards. In directional prediction, neighbouring samples are projected along a specific direction to predict a block of samples. Ultimately, each prediction mode can be regarded as a set of very simple linear predictors, a different one for each pixel of a block. A natural question, therefore, is whether the theory of linear prediction could be used to generate intra prediction modes that provide increased coding efficiency. However, interpreting each directional mode as a mere set of linear predictors is too limited to provide useful insights for their design. In this paper, we introduce an interpretation of directional prediction as a particular case of linear prediction, which uses first-order linear filters and a set of geometric transformations. This interpretation motivated the proposal of a generalized intra prediction framework, whereby the first-order linear filters are replaced by adaptive linear filters with sparsity constraints. In this context, we investigate the use of efficient sparse linear models, adaptively estimated for each block through different algorithms, such as matching pursuit, least angle regression, the least absolute shrinkage and selection operator (LASSO), or the elastic net. The proposed intra prediction framework was implemented and evaluated within the state-of-the-art High Efficiency Video Coding standard. Experiments demonstrated the advantage of this predictive solution, mainly for images with complex features and textured areas, achieving higher average bitrate savings than other related sparse representation methods proposed in the literature.
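One of the sparse estimation algorithms named above, matching pursuit (here in its orthogonal variant), can be sketched as follows; the causal-neighbour training setup is illustrative, not the codec's actual context modelling:

```python
import numpy as np

def orthogonal_matching_pursuit(A, y, k):
    """Greedy sparse fit: select up to k columns of A (causal neighbour
    samples), least-squares refitting on the selected support at each step.
    A toy stand-in for the sparsity-constrained predictors in the paper."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # neighbour most correlated with residual
        if j in support:
            break
        support.append(j)
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
        coef[:] = 0.0
        coef[support] = sol
        residual = y - A @ coef
    return coef

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8))    # 64 training samples × 8 causal neighbours
y = 0.7 * A[:, 0] + 0.3 * A[:, 2]   # the target truly depends on two neighbours only
w = orthogonal_matching_pursuit(A, y, k=2)
print(np.flatnonzero(np.round(w, 6)))  # indices of the recovered active taps
```

The sparsity constraint keeps the per-block filter cheap to signal, which is why such models can beat fixed directional predictors on textured content.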

10.
IEEE Trans Image Process ; 24(11): 4055-68, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26353355

ABSTRACT

A complete encoding solution for efficient intra-based depth map compression is proposed in this paper. The algorithm, named predictive depth coding (PDC), was specifically developed to efficiently represent the characteristics of depth maps, which are mostly composed of smooth areas delimited by sharp edges. At its core, PDC combines a directional intra prediction framework and a straightforward residue coding method with an optimized flexible block partitioning scheme. To improve the algorithm in the presence of depth edges that cannot be efficiently predicted by the directional modes, a constrained depth modeling mode based on explicit edge representation was developed. For residue coding, a simple, low-complexity approach was investigated, using constant or linear residue modeling depending on the prediction mode. The performance of the proposed intra depth map coding approach was evaluated based on the quality of views synthesized from the encoded depth maps and the original texture views. Experimental tests using the all-intra configuration demonstrated the superior rate-distortion performance of PDC, with average bitrate savings of 6% compared with the current state-of-the-art intra depth map coding solution in the 3D extension of the High Efficiency Video Coding standard (3D-HEVC). When view synthesis optimization is used in both the PDC and 3D-HEVC encoders, the average bitrate savings increase to 14.3%. This suggests that the proposed method, without using transform-based residue coding, is an efficient alternative to the current 3D-HEVC algorithm for intra depth map coding.
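The constant-versus-linear residue modeling described above can be sketched as a least-squares mode decision; this is a hypothetical stand-in for PDC's transform-free residue coding, not its actual syntax:

```python
import numpy as np

def model_residue(residue):
    """Choose between a constant (DC) and a linear (planar) approximation of a
    residue block, keeping whichever gives the lower squared error — cheap to
    signal and well matched to smooth depth-map regions."""
    h, w = residue.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # constant model: a single DC value
    const = np.full_like(residue, residue.mean(), dtype=float)
    # linear model: least-squares plane a + b·x + c·y
    A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    coef, *_ = np.linalg.lstsq(A, residue.ravel().astype(float), rcond=None)
    plane = (A @ coef).reshape(h, w)
    errs = {"constant": ((residue - const) ** 2).sum(),
            "linear": ((residue - plane) ** 2).sum()}
    mode = min(errs, key=errs.get)
    return mode, (const if mode == "constant" else plane)

ramp = np.add.outer(np.arange(4), np.arange(4)).astype(float)  # smooth gradient block
mode, approx = model_residue(ramp)
print(mode)  # a ramp is captured exactly by the planar model
```

Only the chosen mode and two or three coefficients would need to be transmitted, instead of a full block of transform coefficients.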

11.
IEEE Trans Image Process ; 19(10): 2712-24, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20423803

ABSTRACT

In this paper, we propose a new encoder for scanned compound documents, based upon a recently introduced coding paradigm called the multidimensional multiscale parser (MMP). MMP uses approximate pattern matching with adaptive multiscale dictionaries that contain concatenations of scaled versions of previously encoded image blocks. These features give MMP the ability to adjust to the input image's characteristics, resulting in high coding efficiency for a wide range of image types. This versatility makes MMP a good candidate for encoding compound digital documents. The proposed algorithm first classifies the image blocks as smooth (texture) or nonsmooth (text and graphics). Smooth and nonsmooth blocks are then compressed using different MMP-based encoders, each adapted to its block type. The adaptive use of these two encoders yields performance gains over the original MMP algorithm, further increasing its advantage over current state-of-the-art image encoders for scanned compound images without compromising performance on other image types.
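The smooth/nonsmooth block classification described above can be sketched with a simple activity test; the threshold and criterion are illustrative assumptions, not the paper's actual classifier:

```python
import numpy as np

def classify_block(block, thresh=32):
    """Label a block 'nonsmooth' (text/graphics) when any horizontal or
    vertical sample-to-sample jump exceeds a threshold, else 'smooth'
    (texture). Hypothetical criterion for illustration only."""
    b = block.astype(int)
    dh = np.abs(np.diff(b, axis=1)).max()  # largest horizontal jump
    dv = np.abs(np.diff(b, axis=0)).max()  # largest vertical jump
    return "nonsmooth" if max(dh, dv) > thresh else "smooth"

texture = np.add.outer(np.arange(8), np.arange(8)) * 2  # gentle gradient
text = np.zeros((8, 8), int)
text[:, ::2] = 255                                      # binary text-like stripes
print(classify_block(texture), classify_block(text))
```

Routing each class to an encoder tuned for it is what yields the compound-document gains the abstract reports.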

12.
IEEE Trans Biomed Eng ; 56(3): 896-900, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19389688

ABSTRACT

This paper presents the results of a multiscale pattern-matching-based ECG encoder that employs simple preprocessing techniques to adapt the input signal. Experiments carried out with records from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) database show that the proposed scheme is effective, outperforming several state-of-the-art schemes described in the literature.


Subject(s)
Data Compression/methods; Electrocardiography; Pattern Recognition, Automated/methods; Algorithms
13.
IEEE Trans Biomed Eng ; 55(7): 1923-6, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18595813

ABSTRACT

In this brief, we present new preprocessing techniques for electrocardiogram signals, namely DC equalization and complexity sorting, which, when applied, can improve current 2-D compression algorithms. Experimental results with signals from the Massachusetts Institute of Technology - Beth Israel Hospital (MIT-BIH) database outperform those of many state-of-the-art schemes described in the literature.
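The two preprocessing steps named above, applied to a 2-D beat-per-row ECG array, can be sketched as follows; the energy-based complexity proxy is an illustrative assumption, not the paper's exact criterion:

```python
import numpy as np

def preprocess_2d(beats):
    """DC equalization removes each beat's mean level; complexity sorting
    reorders the beats so similar ones become adjacent (here sorted by
    residual energy). Both steps are invertible given the stored side info."""
    means = beats.mean(axis=1, keepdims=True)
    equalized = beats - means                         # DC equalization
    order = np.argsort((equalized ** 2).sum(axis=1))  # complexity sorting
    return equalized[order], means.ravel(), order

def undo(processed, means, order):
    """Inverse of preprocess_2d, as a decoder would apply it."""
    restored = np.empty_like(processed)
    restored[order] = processed          # undo the permutation
    return restored + means[:, None]     # restore the DC levels

beats = np.random.default_rng(2).normal(0, 1, (6, 40)) + np.arange(6)[:, None]
proc, means, order = preprocess_2d(beats)
print(np.allclose(undo(proc, means, order), beats))  # the round trip is lossless
```

Both steps increase inter-row similarity in the 2-D array, which is precisely what 2-D image compressors exploit.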


Subject(s)
Algorithms; Data Compression/methods; Electrocardiography/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Humans; Reproducibility of Results; Sensitivity and Specificity