Results 1 - 8 of 8

1.
Expert Syst Appl ; 192: 116366, 2022 Apr 15.
Article in English | MEDLINE | ID: mdl-34937995

ABSTRACT

Chest imaging is a powerful tool for detecting Coronavirus disease 2019 (COVID-19). Among the available technologies, the chest Computed Tomography (CT) scan is an effective approach for reliable and early detection of the disease. However, rapidly identifying anomalous COVID-19-related areas in CT images by human inspection can be difficult. Suitable automatic algorithms are therefore needed that can identify the disease quickly and precisely, possibly using few labeled input data, because large amounts of CT scans are not usually available for COVID-19. The method proposed in this paper exploits the compact and meaningful hidden representation provided by a Deep Denoising Convolutional Autoencoder (DDCAE). Specifically, the proposed DDCAE, trained on a set of target CT scans in an unsupervised way, is used to build a robust statistical representation in the form of a target histogram. A suitable statistical distance measures how far this target histogram is from a companion histogram evaluated on an unknown test scan: if the distance exceeds a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by COVID-19. Experimental results and comparisons with other state-of-the-art methods show the effectiveness of the proposed approach, which reaches a top accuracy of 100% and similarly high values for other metrics. In conclusion, by using a statistical representation of the hidden features provided by DDCAEs, the developed architecture is able to differentiate COVID-19 from normal and pneumonia scans with high reliability and at low computational cost.
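As a rough illustration of the decision rule described above, the sketch below builds a histogram from (hypothetical) latent feature vectors, compares it to a test histogram with a chi-square distance, and thresholds the result. The encoder itself, the bin settings, the choice of distance, and the threshold value are all assumptions made for illustration, not the paper's actual DDCAE configuration.

    import numpy as np

    def latent_histogram(features, bins=64, value_range=(-3.0, 3.0)):
        # Normalized histogram of the flattened latent features; the bin
        # count and value range are illustrative choices, not the paper's.
        hist, _ = np.histogram(features.ravel(), bins=bins, range=value_range)
        return hist / (hist.sum() + 1e-12)

    def chi_square_distance(p, q, eps=1e-12):
        # One possible statistical distance between two histograms.
        return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

    def is_anomalous(target_hist, test_features, threshold=0.25):
        # Label the test scan as anomalous (COVID-19) if its histogram is
        # too far from the target histogram; the threshold is hypothetical.
        return chi_square_distance(target_hist, latent_histogram(test_features)) > threshold

    # Hypothetical usage with random vectors standing in for DDCAE encodings.
    rng = np.random.default_rng(0)
    target_hist = latent_histogram(rng.normal(size=(100, 128)))
    print(is_anomalous(target_hist, rng.normal(loc=1.0, size=(1, 128))))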

2.
Diagnostics (Basel) ; 14(4)2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38396427

ABSTRACT

Digital pathology (DP) has begun to play a key role in the evaluation of liver specimens. Recent studies have shown that a workflow combining DP and artificial intelligence (AI) applied to histopathology has potential value in supporting the diagnosis, treatment evaluation, and prognosis prediction of liver diseases. Here, we provide a systematic review of the use of this workflow in the field of hepatology. Based on the PRISMA 2020 criteria, a search of the PubMed, SCOPUS, and Embase electronic databases was conducted, applying inclusion/exclusion filters. The articles were evaluated by two independent reviewers, who extracted the specifications and objectives of each study, the AI tools used, and the results obtained. From the 266 initial records identified, 25 eligible studies were selected, mainly conducted on human liver tissues. Most of the studies used whole-slide imaging systems for image acquisition and applied different machine learning and deep learning methods for image pre-processing, segmentation, feature extraction, and classification. Of note, most of the selected studies demonstrated good performance as classifiers of liver histological images compared with pathologist annotations. Promising results to date bode well for the not-too-distant inclusion of these techniques in clinical practice.

3.
J Supercomput ; 79(3): 2850-2881, 2023.
Article in English | MEDLINE | ID: mdl-36042937

ABSTRACT

Bidirectional generative adversarial networks (BiGANs) and cycle generative adversarial networks (CycleGANs) are two emerging machine learning models that, up to now, have been used as generative models, i.e., to generate output data sampled from a target probability distribution. However, these models are also equipped with encoding modules, which, after weakly supervised training, could in principle be exploited for the extraction of hidden features from the input data. At present, how these extracted features could be effectively exploited for classification tasks remains an unexplored field. Hence, motivated by this consideration, in this paper we develop and numerically test the performance of a novel inference engine that relies on BiGAN- and CycleGAN-learned hidden features to distinguish COVID-19 disease from other lung diseases in computed tomography (CT) scans. In this respect, the main contributions of the paper are twofold. First, we develop a kernel density estimation (KDE)-based inference method, which, in the training phase, leverages the hidden features extracted by BiGANs and CycleGANs to estimate the (a priori unknown) probability density function (PDF) of the CT scans of COVID-19 patients and then, in the inference phase, uses it as a target COVID-PDF for the detection of COVID-19. As a second major contribution, we numerically evaluate and compare the classification accuracies of the implemented BiGAN and CycleGAN models against those of some state-of-the-art methods that rely on the unsupervised training of convolutional autoencoders (CAEs) for feature extraction. The performance comparisons are carried out over a spectrum of training loss functions and distance metrics. The classification accuracies of the proposed CycleGAN-based (resp., BiGAN-based) models outperform those of the considered benchmark CAE-based models by about 16% (resp., 14%).
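The sketch below shows, in broad strokes, how a trained encoding module of the kind described above could be reused as a fixed feature extractor. The small convolutional network, the layer widths, the 64-dimensional latent size, and the 128x128 single-channel input are purely illustrative stand-ins for the papers' BiGAN/CycleGAN encoders; this is hypothetical PyTorch code, not the authors' architecture.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Stand-in for the encoding module of a trained BiGAN/CycleGAN;
        # layer widths and the latent size are illustrative only.
        def __init__(self, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),   # 128 -> 64
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 64 -> 32
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(32, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    # Hypothetical usage: extract hidden features from a batch of CT slices
    # (random tensors here in place of real, pre-processed scans).
    encoder = Encoder().eval()
    with torch.no_grad():
        features = encoder(torch.randn(8, 1, 128, 128))  # -> shape (8, 64)
    print(features.shape)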

4.
J Supercomput ; 78(9): 12024-12045, 2022.
Article in English | MEDLINE | ID: mdl-35228777

ABSTRACT

We present a probabilistic method for classifying chest computed tomography (CT) scans into COVID-19 and non-COVID-19. To this end, we design and train, in an unsupervised manner, a deep convolutional autoencoder (DCAE) on a selected training data set composed only of COVID-19 CT scans. Once the model is trained, the encoder can generate the compact hidden representation (the hidden feature vectors) of the training data set. We then exploit this hidden representation to build the target probability density function (PDF) of the training data set by means of kernel density estimation (KDE). Subsequently, in the test phase, we feed a test CT scan into the trained encoder to produce the corresponding hidden feature vector and use the target PDF to compute the corresponding PDF value of the test image. Finally, this value is compared to a threshold to assign the COVID-19 or non-COVID-19 label to the test image. We numerically assess our approach's performance (i.e., test accuracy and training time) by comparing it with that of some state-of-the-art methods.
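A minimal sketch of the training- and test-phase steps described above, using scipy's gaussian_kde as one possible KDE implementation. The random feature vectors, the latent dimension, and the threshold value are placeholders, not the paper's actual encoder outputs or settings.

    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical hidden feature vectors: rows are encoded COVID-19 training
    # scans produced by the trained encoder (random stand-ins here).
    rng = np.random.default_rng(1)
    train_features = rng.normal(size=(200, 8))      # (n_train, latent_dim)
    test_feature = rng.normal(loc=2.0, size=(8,))   # one encoded test scan

    # Training phase: estimate the target PDF of the COVID-19 features.
    # gaussian_kde expects data with shape (dim, n_samples).
    target_pdf = gaussian_kde(train_features.T)

    # Test phase: evaluate the target PDF at the test feature vector and
    # compare to a threshold (the value below is purely illustrative).
    log_density = target_pdf.logpdf(test_feature)[0]
    threshold = -20.0
    label = "COVID-19" if log_density > threshold else "non-COVID-19"
    print(log_density, label)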

5.
IEEE Trans Image Process ; 28(2): 713-722, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30222571

ABSTRACT

We consider an acquisition system where a continuous image is reconstructed from a set of irregularly distributed noisy samples. Moreover, the system is affected by a random pointing jitter which makes the actual sampling positions different from the nominal ones. We develop a model for the system and derive the optimal, minimum variance unbiased (MVU) estimate. Unfortunately, the latter estimate is not practical to compute when the data size is large. Therefore, we develop a simplified, low-resolution model and derive the corresponding MVU estimate, which has a drastically lower complexity. Moreover, we assess the estimators' performance through both theoretical analysis and simulations. Finally, we discuss the application to the data of the Photodetector Array Camera and Spectrometer (PACS) instrument, which is an infrared photometer on board the Herschel satellite.
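For context, the sketch below computes the textbook MVU (equivalently, BLUE) estimate x_hat = (H^T C^-1 H)^-1 H^T C^-1 y for a generic linear Gaussian observation model y = H x + n with known noise covariance C. It is not the paper's jitter-aware acquisition model, and the toy dimensions are arbitrary.

    import numpy as np

    def mvu_linear_gaussian(H, y, noise_cov):
        # Textbook MVU estimate for the linear Gaussian model y = H x + n;
        # a generic illustration, not the paper's acquisition model.
        Cinv = np.linalg.inv(noise_cov)
        A = H.T @ Cinv @ H
        b = H.T @ Cinv @ y
        return np.linalg.solve(A, b)

    # Hypothetical toy problem: recover 5 image coefficients from 40 noisy
    # observations with a random (stand-in) observation matrix.
    rng = np.random.default_rng(2)
    H = rng.normal(size=(40, 5))
    x_true = rng.normal(size=5)
    C = 0.1 * np.eye(40)
    y = H @ x_true + rng.multivariate_normal(np.zeros(40), C)
    print(np.round(mvu_linear_gaussian(H, y, C) - x_true, 2))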

6.
IEEE Trans Image Process ; 26(11): 5232-5243, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28792898

ABSTRACT

We consider an acquisition system where a continuous, band-limited image is reconstructed from a set of irregularly distributed, noisy samples. An optimal estimator can be obtained by least squares, but it is not practical to compute when the data size is large. A simpler, widely used estimate can be obtained by properly rounding off the pointing information, but it is suboptimal and affected by a bias, which may be large and thus limits its applicability. To solve this problem, we develop a mathematical model for the acquisition system that accounts for the pointing-information round-off. Based on the model, we derive a novel optimal estimate, which has a manageable computational complexity and is largely immune to the bias, making it a better option than the suboptimal one. Moreover, the model opens a new, fruitful point of view on the estimation performance analysis. Finally, we consider the application of the novel estimate to the data of the Photodetector Array Camera and Spectrometer instrument. We discuss several implementation aspects and investigate the performance using both real and simulated data.
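The sketch below illustrates the simple "round-off" baseline mentioned above: each sample position is snapped to the nearest pixel of a regular grid and the samples landing in a pixel are averaged. The unit-square field of view, the grid size, and the test signal are assumptions for illustration only, not the instrument's geometry.

    import numpy as np

    def rounded_pointing_estimate(positions, values, grid_shape):
        # Snap each (x, y) sample position in [0, 1]^2 to the nearest pixel
        # and average the samples landing there; this is the suboptimal,
        # bias-prone baseline discussed above, not the proposed estimate.
        ny, nx = grid_shape
        rows = np.clip(np.rint(positions[:, 1] * (ny - 1)).astype(int), 0, ny - 1)
        cols = np.clip(np.rint(positions[:, 0] * (nx - 1)).astype(int), 0, nx - 1)
        acc = np.zeros(grid_shape)
        cnt = np.zeros(grid_shape)
        np.add.at(acc, (rows, cols), values)
        np.add.at(cnt, (rows, cols), 1.0)
        with np.errstate(invalid="ignore"):
            return np.where(cnt > 0, acc / cnt, np.nan)

    # Hypothetical usage with random sample positions and a smooth test signal.
    rng = np.random.default_rng(3)
    pos = rng.uniform(size=(500, 2))
    vals = np.sin(4 * np.pi * pos[:, 0]) + 0.05 * rng.normal(size=500)
    image = rounded_pointing_estimate(pos, vals, (16, 16))
    print(np.nanmean(image))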

7.
IEEE Trans Image Process ; 25(9): 4458-4468, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27448349

ABSTRACT

We consider an acquisition system consisting of an array of sensors scanning an image. Each sensor produces a sequence of readouts, called a time series. In this framework, we discuss the image estimation problem when the time series are affected by noise and by a time shift. In particular, we introduce an appropriate data model and consider the least squares (LS) estimate, showing that it has no closed form. However, the LS problem has a structure that can be exploited to simplify the solution. In particular, based on two known techniques, namely separable nonlinear LS and alternating LS, we propose and analyze several practical estimation methods. As an additional contribution, we discuss the application of these methods to the data of the Photodetector Array Camera and Spectrometer, an infrared photometer onboard the Herschel satellite. In this context, we investigate the accuracy and computational complexity of the methods, using both real and simulated data.
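As a loose illustration of the alternating LS idea, the toy sketch below alternates between a grid search over integer shifts (given the current signal) and a least-squares update of a shared 1-D signal (given the current shifts, the LS update reduces to averaging the realigned readouts). Integer shifts, circular alignment, and the toy sine signal are simplifying assumptions, not the paper's data model.

    import numpy as np

    def alternating_ls(readouts, max_shift=5, n_iter=10):
        shifts = np.zeros(len(readouts), dtype=int)
        signal = readouts.mean(axis=0)  # initial guess: plain average
        for _ in range(n_iter):
            # (i) grid-search update of each integer shift for the current signal
            candidates = range(-max_shift, max_shift + 1)
            shifts = np.array([
                min(candidates, key=lambda s: np.sum((np.roll(r, -s) - signal) ** 2))
                for r in readouts
            ])
            # (ii) LS update of the shared signal for the current shifts
            signal = np.mean([np.roll(r, -s) for r, s in zip(readouts, shifts)], axis=0)
        return signal, shifts

    # Hypothetical usage: three sensors observing the same signal with
    # unknown integer shifts and additive noise.
    rng = np.random.default_rng(4)
    base = np.sin(np.linspace(0, 2 * np.pi, 64))
    true_shifts = [0, 2, -3]
    readouts = np.array([np.roll(base, s) + 0.05 * rng.normal(size=64) for s in true_shifts])
    signal, shifts = alternating_ls(readouts)
    print(shifts)  # expected to be close to [0, 2, -3]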

8.
IEEE Trans Image Process ; 21(8): 3687-96, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22562757

ABSTRACT

The quality of astrophysical images produced by means of the Generalised Least Squares (GLS) approach may be degraded by the presence of artificial structures that are obviously not present in the sky. This problem affects, to different degrees, all images produced by the instruments onboard the European Space Agency (ESA) Herschel satellite. In this paper, we analyse these artifacts and introduce a method to remove them. The method is based on post-processing the GLS image: the artifacts are estimated and then subtracted from the original image. The only drawback of this method is a slight increase in the background noise, which can, however, be mitigated by detecting the artifacts and performing the subtraction only where they are detected. The efficiency of the approach is demonstrated and quantified using simulated and real data.
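As a loose illustration of the "estimate, detect, and subtract only where detected" idea, the sketch below flags pixels where a crude artifact estimate (the image minus a median-filtered version) exceeds a k-sigma level and subtracts the estimate only there. Both the artifact estimator and the detection rule are placeholders, not the paper's GLS post-processing.

    import numpy as np
    from scipy.ndimage import median_filter

    def remove_detected_artifacts(gls_image, size=5, k_sigma=4.0):
        # Placeholder artifact estimate: the residual with respect to a
        # median-filtered version of the image; detection uses a simple
        # k-sigma rule, and the subtraction is applied only where detected.
        artifact_estimate = gls_image - median_filter(gls_image, size=size)
        detected = np.abs(artifact_estimate) > k_sigma * np.std(artifact_estimate)
        cleaned = gls_image.copy()
        cleaned[detected] -= artifact_estimate[detected]
        return cleaned, detected

    # Hypothetical usage: a smooth noise map plus one spurious bright stripe
    # standing in for a GLS artifact.
    rng = np.random.default_rng(5)
    image = rng.normal(scale=0.01, size=(64, 64))
    image[20, :] += 1.0
    cleaned, detected = remove_detected_artifacts(image)
    print(detected.sum(), float(np.abs(cleaned[20, :]).mean()))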


Subject(s)
Algorithms , Artifacts , Geographic Information Systems , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Satellite Imagery/methods , Reproducibility of Results , Sensitivity and Specificity