Results 1 - 20 of 163
1.
ArXiv ; 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38562444

ABSTRACT

The latest X-ray photon-counting computed tomography (PCCT) scanners for extremity imaging allow multi-energy high-resolution (HR) imaging for tissue characterization and material decomposition. However, both radiation dose and imaging speed need improvement for contrast-enhanced and other studies. Despite the success of deep learning methods for 2D few-view reconstruction, applying them to HR volumetric reconstruction of extremity scans for clinical diagnosis has been limited by GPU memory constraints, training data scarcity, and domain gap issues. In this paper, we propose a deep learning-based approach for PCCT image reconstruction at halved dose and doubled speed in a New Zealand clinical trial. In particular, we present a patch-based volumetric refinement network to alleviate the GPU memory limitation, train the network with synthetic data, and use model-based iterative refinement to bridge the gap between synthetic and real-world data. The simulation and phantom experiments demonstrate consistently improved results under different acquisition conditions on both in- and off-domain structures using a fixed network. The image quality for 8 patients from the clinical trial is evaluated by three radiologists against the standard reconstruction from a full-view dataset. The results show that our proposed approach is essentially identical to or better than the clinical benchmark in terms of diagnostic image quality scores. Our approach has great potential to improve the safety and efficiency of PCCT without compromising image quality.
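The patch-based volumetric refinement idea above can be sketched in a few lines: split the volume into overlapping sub-volumes along depth, run each through the (here hypothetical) refinement network, and blend the overlaps by averaging. A minimal NumPy sketch, with an identity function standing in for the trained network:

```python
import numpy as np

def refine_volume(vol, patch=32, stride=16, refine=lambda p: p):
    """Patch-based volumetric inference: refine overlapping sub-volumes
    and average the overlaps, so the full HR volume never has to fit in
    GPU memory at once.  Assumes depth is compatible with patch/stride."""
    out = np.zeros_like(vol, dtype=np.float64)
    weight = np.zeros_like(vol, dtype=np.float64)
    for z in range(0, vol.shape[0] - patch + 1, stride):
        out[z:z + patch] += refine(vol[z:z + patch])
        weight[z:z + patch] += 1.0
    return out / np.maximum(weight, 1e-8)

vol = np.random.rand(64, 8, 8)
rec = refine_volume(vol)          # identity "network" => stitching is exact
err = np.abs(rec - vol).max()
```

With a real refinement network substituted for the identity lambda, the overlap averaging also suppresses seams between adjacent patches.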

2.
IEEE Trans Med Imaging ; PP2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38687653

ABSTRACT

Metal artifact reduction (MAR) is important for clinical diagnosis with CT images. Existing state-of-the-art deep learning methods usually suppress metal artifacts in the sinogram domain, the image domain, or both. However, their performance is limited by the inherent characteristics of the two domains: errors introduced by local manipulations in the sinogram domain propagate throughout the whole image during backprojection and lead to serious secondary artifacts, while in the image domain it is difficult to distinguish artifacts from actual image features. To alleviate these limitations, this study analyzes the desirable properties of the wavelet transform in depth and proposes to perform MAR in the wavelet domain. First, the wavelet transform yields components that retain spatial correspondence with the image, thereby preventing the spread of local errors and avoiding secondary artifacts. Second, the wavelet transform facilitates separating artifacts from image content, since metal artifacts are mainly high-frequency signals. Exploiting these advantages, this paper decomposes an image into multiple wavelet components and introduces multi-perspective regularizations into the proposed MAR model. To improve the transparency and validity of the model, all modules in the proposed MAR model are designed to reflect their mathematical meanings. In addition, an adaptive wavelet module is utilized to enhance the flexibility of the model. To optimize the model, an iterative algorithm is developed. Evaluations on both synthetic and real clinical datasets consistently confirm the superior performance of the proposed method over competing methods.
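The two wavelet-domain properties the abstract relies on, spatial correspondence and concentration of artifacts in high-frequency bands, are easy to see with a single level of the 2-D Haar transform. A NumPy sketch (the paper's adaptive wavelet module would replace these fixed Haar filters):

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: a low-pass approximation (LL)
    plus three high-frequency detail bands (LH, HL, HH), each spatially
    aligned with the image.  Assumes even height and width."""
    a = (img[0::2] + img[1::2]) / 2.0   # vertical average
    d = (img[0::2] - img[1::2]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

smooth = np.ones((8, 8))              # constant image: no high frequencies
LL, LH, HL, HH = haar2d(smooth)
```

On a smooth region all detail bands vanish, which is why high-frequency metal streaks stand out against anatomy in the LH/HL/HH components.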

3.
Phys Med Biol ; 69(10)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38604178

ABSTRACT

Objective. Cardiac computed tomography (CT) is widely used for diagnosis of cardiovascular disease, the leading cause of morbidity and mortality in the world. Diagnostic performance depends strongly on the temporal resolution of the CT images. To image the beating heart, one can reduce the scanning time by acquiring limited-angle projections. However, this leads to increased image noise and limited-angle-related artifacts. The goal of this paper is to reconstruct high-quality cardiac CT images from limited-angle projections. Approach. The ability to reconstruct high-quality images from limited-angle projections is highly desirable and remains a major challenge. With the development of deep learning networks, such as U-Net and transformer networks, progress has been made in image reconstruction and processing. Here we propose a hybrid model based on the U-Net and Swin-transformer (U-Swin) networks. The U-Net restores structural information lost to missing projection data and related artifacts, while the Swin-transformer captures a detailed global feature distribution. Main results. Using the synthetic XCAT and clinical cardiac COCA datasets, we demonstrate that our proposed method outperforms state-of-the-art deep learning-based methods. Significance. The method has great potential to 'freeze' the beating heart at a higher effective temporal resolution.


Subjects
Heart; Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Image Processing, Computer-Assisted/methods; Heart/diagnostic imaging; Humans; Deep Learning
4.
Comput Biol Med ; 173: 108313, 2024 May.
Article in English | MEDLINE | ID: mdl-38531247

ABSTRACT

The majority of existing deep learning-based image denoising algorithms focus on overall image features, ignoring fine differences between semantic and pixel features. Hence, we propose Dual-TranSpeckle (DTS), a medical ultrasound image despeckling network built on a dual-path Transformer. The DTS introduces two paths, named the "semantic path" and the "pixel path," to facilitate the parallel transfer of feature information within the image. The semantic path carries a global view of the input: image features pass through a Semantic Block that extracts global semantic information from pixel-level features. The pixel path transmits finer-grained pixel features. Within the dual-path network framework, two essential modules, Dual Block and Merge Block, are designed; both leverage the Transformer architecture during the encoding and decoding stages. The Dual Block facilitates information interaction between the semantic and pixel features by modeling the interdependencies across both paths. The Merge Block enables parallel transfer of feature information by merging the dual-path features, thereby supporting the self-attention calculations for the overall feature representation. DTS is extensively evaluated on two public datasets and one private dataset. It demonstrates significant gains in quantitative evaluation in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), feature similarity (FSIM), and the naturalness image quality evaluator (NIQE). Furthermore, our qualitative analysis confirms that DTS significantly improves despeckling performance, effectively suppressing speckle noise while preserving essential image structures.
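As background for the despeckling task, ultrasound speckle is commonly modeled as multiplicative noise, and PSNR is one of the metrics the paper reports. A small illustrative sketch (the noise model and parameters here are assumptions, not the paper's simulation setup):

```python
import numpy as np

def add_speckle(img, sigma=0.2, seed=0):
    """Multiplicative speckle model often assumed for ultrasound:
    y = x * (1 + n), with n ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    return img * (1.0 + sigma * rng.standard_normal(img.shape))

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((64, 64), 0.5)
noisy = add_speckle(clean)
p = psnr(clean, noisy)          # a despeckler should raise this value
```

Note that the noise level scales with local intensity, which is why despeckling differs from removing additive Gaussian noise.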


Subjects
Algorithms; Semantics; Ultrasonography; Signal-To-Noise Ratio; Image Processing, Computer-Assisted
5.
Phys Med Biol ; 69(8)2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38373346

ABSTRACT

Objective. Computed tomography (CT) has been widely used in industrial high-resolution non-destructive testing. However, it is difficult to obtain high-resolution images for large-scale objects due to physical limitations. The objective is to develop an improved super-resolution technique that preserves small structures and details while efficiently capturing high-frequency information. Approach. The study proposes a new deep learning-based method called the spectrum learning (SPEAR) network for CT image super-resolution. This approach leverages both global information in the image domain and high-frequency information in the frequency domain. The SPEAR network is designed to reconstruct high-resolution images from low-resolution inputs by considering not only the main body of the images but also small structures and other details. The symmetry of the spectrum is exploited to reduce the number of weight parameters in the frequency domain. Additionally, a spectrum loss is introduced to enforce the preservation of both high-frequency components and global information. Main results. The network is trained using pairs of low-resolution and high-resolution CT images, and it is fine-tuned using additional low-dose and normal-dose CT image pairs. The experimental results demonstrate that the proposed SPEAR network outperforms state-of-the-art networks in terms of image reconstruction quality. The approach successfully preserves high-frequency information and small structures, leading to better results than existing methods. The network's ability to generate high-resolution images from low-resolution inputs, even for low-dose CT images, showcases its effectiveness in maintaining image quality. Significance. The proposed SPEAR network's ability to simultaneously capture global information and high-frequency details addresses the limitations of existing methods, resulting in more accurate and informative image reconstructions. This advancement can have substantial implications for the many industrial and medical applications that rely on accurate imaging.
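The "symmetric property of the spectrum" exploited by SPEAR is the Hermitian symmetry of the Fourier transform of a real-valued image: half the coefficients are redundant. The check below verifies that symmetry, together with a toy magnitude-spectrum loss in the spirit of (but not identical to) the paper's spectrum loss:

```python
import numpy as np

img = np.random.rand(16, 16)          # real-valued CT slice stand-in
F = np.fft.fft2(img)

# For a real image, F(-u, -v) = conj(F(u, v)) (indices mod N), so nearly
# half the spectrum is redundant -- the basis for halving frequency-domain
# weight parameters.
G = np.flip(np.roll(F, (-1, -1), axis=(0, 1)), (0, 1))   # G[u,v] = F[-u,-v]
sym_err = np.abs(F - np.conj(G)).max()

def spectrum_loss(x, y):
    """Hypothetical frequency-domain penalty: L2 distance between
    magnitude spectra, encouraging high-frequency preservation."""
    return np.mean((np.abs(np.fft.fft2(x)) - np.abs(np.fft.fft2(y))) ** 2)
```

In practice such a loss would be combined with an image-domain term so that global structure and fine detail are penalized jointly.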


Subjects
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Algorithms
6.
J Xray Sci Technol ; 32(2): 285-301, 2024.
Article in English | MEDLINE | ID: mdl-38217630

ABSTRACT

Diabetic retinopathy (DR) is one of the leading causes of blindness. However, because class distributions in the data are often imbalanced, automated early DR detection using deep learning techniques is challenging. In this paper, we propose an adaptive weighted ensemble learning method for DR detection based on optical coherence tomography (OCT) images. Specifically, we develop an ensemble learning model built on three advanced deep learning models for higher performance. To better utilize the cues implied in these base models, a novel decision fusion scheme is proposed based on Bayesian theory in terms of the key evaluation indicators, dynamically adjusting the weighting distribution of the base models to alleviate the negative effects potentially caused by imbalanced data sizes. Extensive experiments are performed on two public datasets to verify the effectiveness of the proposed method. A quadratic weighted kappa of 0.8487 and an accuracy of 0.9343 are obtained on the DRAC2022 dataset, and a quadratic weighted kappa of 0.9007 and an accuracy of 0.8956 on the APTOS2019 dataset. The results demonstrate that our method enhances the overall performance of DR detection on OCT images.
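A minimal stand-in for the adaptive weighting idea: fuse base-model class probabilities with weights derived from each model's validation metric, so stronger models dominate the decision. This softmax weighting is an illustrative simplification, not the paper's Bayesian scheme:

```python
import numpy as np

def fuse(probs, val_metric):
    """Weighted decision fusion sketch: probs has shape (models, N, C);
    each model's probabilities are weighted by a softmax over its
    validation metric (e.g. quadratic weighted kappa)."""
    w = np.exp(val_metric) / np.sum(np.exp(val_metric))
    return np.tensordot(w, probs, axes=1)      # -> (N, C)

probs = np.array([
    [[0.9, 0.1]],      # strong model votes class 0
    [[0.4, 0.6]],      # weaker model disagrees
])
kappa = np.array([0.90, 0.50])                 # hypothetical validation kappas
fused = fuse(probs, kappa)                     # strong model outweighs the weak one
```

With equal weights the two models would nearly cancel; metric-derived weights let the better-validated model carry the decision.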


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnostic imaging; Bayes Theorem; Tomography, Optical Coherence/methods; Machine Learning
7.
IEEE Trans Image Process ; 33: 910-925, 2024.
Article in English | MEDLINE | ID: mdl-38224516

ABSTRACT

Limited-angle tomographic reconstruction is a typical ill-posed inverse problem, leading to edge divergence and degraded image quality. Recently, deep learning has been introduced into image reconstruction with great success. However, existing deep reconstruction methods have not fully explored data consistency, resulting in poor performance. In addition, deep reconstruction methods remain difficult to interpret mathematically and can be unstable. In this work, we propose an iterative residual optimization network (IRON) for limited-angle tomographic reconstruction. First, a new optimization objective function is established to overcome false-negative and false-positive artifacts induced by limited-angle measurements. We integrate neural network priors as a regularizer to explore deep features within the residual data. Furthermore, block-coordinate descent is employed to achieve a novel iterative framework. Second, a convolution-assisted transformer is carefully designed to capture both local and long-range pixel interactions simultaneously. Within the vision transformer, multi-head attention is redesigned to reduce computational costs and protect reconstructed image features. Third, based on the relative error convergence property of the convolution-assisted transformer, a mathematical convergence analysis is provided for IRON. Both numerically simulated and clinically collected real cardiac datasets are employed to validate the effectiveness and advantages of the proposed IRON. The results show that IRON outperforms other state-of-the-art methods.

8.
Biosens Bioelectron ; 248: 115999, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38183791

ABSTRACT

Global food systems can benefit significantly from continuous monitoring of microbial food safety, a task for which tedious operations, destructive sampling, and the inability to monitor multiple pathogens remain challenging. This study reports significant improvements to a paper chromogenic array sensor - machine learning (PCA-ML) methodology, which senses concentrations of volatile organic compounds (VOCs) emitted by pathogens on a species-specific basis, by streamlining dye selection, sensor fabrication, database construction, and machine learning and validation. The approach enables noncontact, time-dependent, simultaneous monitoring of multiple pathogens (Listeria monocytogenes, Salmonella, and E. coli O157:H7) at levels as low as 1 log CFU/g with over 90% accuracy. The report provides theoretical and practical frameworks demonstrating that the chromogenic response, including limits of detection, depends on time integrals of VOC concentrations. The paper also discusses the potential for implementing PCA-ML in the food supply chain for different food matrices and pathogens, with species- and strain-specific identification.
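The stated dependence of chromogenic response on time integrals of VOC concentration can be illustrated with the trapezoid rule; the first-order emission curve and its parameters below are hypothetical, not taken from the paper:

```python
import numpy as np

# Dye response is modeled as driven by cumulative VOC exposure (the time
# integral of concentration) rather than the instantaneous value.
# Assumed emission curve: c(t) = c_max * (1 - exp(-t / tau)).
t = np.linspace(0.0, 24.0, 241)                    # sampling times, hours
c = 5.0 * (1.0 - np.exp(-t / 6.0))                 # assumed VOC curve, ppm
exposure = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))   # trapezoid rule, ppm*h
```

Under this view, two samples with different concentration histories but equal integrated exposure would produce the same color change, which is what ties the limit of detection to monitoring time.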


Subjects
Biosensing Techniques; Listeria monocytogenes; Colony Count, Microbial; Food Microbiology; Escherichia coli; Listeria monocytogenes/physiology; Meat
9.
Comput Biol Med ; 168: 107819, 2024 01.
Article in English | MEDLINE | ID: mdl-38064853

ABSTRACT

Computed tomography (CT) and magnetic resonance imaging (MRI) are crucial technologies in the field of medical imaging. Score-based models have demonstrated effectiveness in addressing inverse problems encountered in CT and MRI, such as sparse-view CT and fast MRI reconstruction. However, these models face challenges in achieving accurate three-dimensional (3D) volumetric reconstruction. Existing score-based models predominantly concentrate on reconstructing two-dimensional (2D) data distributions, resulting in inconsistencies between adjacent slices of the reconstructed 3D volumes. To overcome this limitation, we propose a novel two-and-a-half order score-based model (TOSM). During the training phase, TOSM learns data distributions in 2D space, which simplifies training compared with working directly on 3D volumes. During the reconstruction phase, however, TOSM utilizes complementary scores along three directions (sagittal, coronal, and transaxial) to achieve a more precise reconstruction. The development of TOSM is built on robust theoretical principles, ensuring its reliability and efficacy. Through extensive experimentation on large-scale sparse-view CT and fast MRI datasets, our method achieved state-of-the-art (SOTA) results in solving 3D ill-posed inverse problems: an average 1.56 dB peak signal-to-noise ratio (PSNR) improvement over existing sparse-view CT reconstruction methods across 29 views, and a 0.87 dB PSNR improvement over existing fast MRI reconstruction methods with 2x acceleration. In summary, TOSM addresses the inconsistency issue in 3D ill-posed problems by modeling the distribution of 3D data rather than only 2D slices, achieving remarkable results in both CT and MRI reconstruction tasks.
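The reconstruction-phase trick, complementary scores along the sagittal, coronal, and transaxial directions, amounts to applying a 2-D model slice-wise along each axis and combining the results so every pair of adjacent voxels is coupled in some direction. A toy sketch with a stand-in "score" function:

```python
import numpy as np

def three_direction_average(score_fn, vol):
    """Apply a 2-D slice model along each of the three axes of a volume
    and average the three results -- an illustrative reduction of TOSM's
    complementary-score combination (score_fn is a hypothetical stand-in
    for a trained 2-D score network)."""
    outs = []
    for axis in range(3):
        moved = np.moveaxis(vol, axis, 0)            # slice along this axis
        scored = np.stack([score_fn(s) for s in moved])
        outs.append(np.moveaxis(scored, 0, axis))    # restore orientation
    return np.mean(outs, axis=0)

vol = np.random.rand(4, 5, 6)
out = three_direction_average(lambda s: 2.0 * s, vol)   # toy 2-D "score"
```

Because every voxel receives contributions from slices in all three orientations, slice-to-slice inconsistency in any single direction is averaged away.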


Subjects
Imaging, Three-Dimensional; Tomography, X-Ray Computed; Reproducibility of Results; Tomography, X-Ray Computed/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods; Algorithms
10.
Sensors (Basel) ; 23(10)2023 May 22.
Article in English | MEDLINE | ID: mdl-37430884

ABSTRACT

Blind image quality assessment (BIQA) aims to evaluate image quality in a way that closely matches human perception. To achieve this goal, the strengths of deep learning and the characteristics of the human visual system (HVS) can be combined. In this paper, inspired by the ventral pathway and the dorsal pathway of the HVS, a dual-pathway convolutional neural network is proposed for BIQA tasks. The proposed method consists of two pathways: the "what" pathway, which mimics the ventral pathway of the HVS to extract the content features of distorted images, and the "where" pathway, which mimics the dorsal pathway of the HVS to extract the global shape features of distorted images. Then, the features from the two pathways are fused and mapped to an image quality score. Additionally, gradient images weighted by contrast sensitivity are used as the input to the "where" pathway, allowing it to extract global shape features that are more sensitive to human perception. Moreover, a dual-pathway multi-scale feature fusion module is designed to fuse the multi-scale features of the two pathways, enabling the model to capture both global features and local details, thus improving the overall performance of the model. Experiments conducted on six databases show that the proposed method achieves state-of-the-art performance.
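The input to the "where" pathway is a gradient image; the sketch below computes a plain gradient magnitude (the paper additionally weights it by contrast sensitivity, which is omitted here):

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude -- a simple stand-in for the
    contrast-sensitivity-weighted gradient images fed to the 'where'
    pathway to expose global shape information."""
    gy, gx = np.gradient(img.astype(np.float64))   # per-axis derivatives
    return np.hypot(gx, gy)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # a single vertical edge
g = gradient_magnitude(img)            # responds only near the edge
```

Flat regions produce zero response, so the map isolates contours and shape, which is exactly the information the dorsal-pathway analogue is meant to extract.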


Subjects
Contrast Sensitivity; Household Articles; Humans; Databases, Factual; Neural Networks, Computer
11.
J Digit Imaging ; 36(5): 2290-2305, 2023 10.
Article in English | MEDLINE | ID: mdl-37386333

ABSTRACT

Low-dose computed tomography (LDCT) is an effective way to reduce radiation exposure for patients. However, it increases the noise of reconstructed CT images and affects the precision of clinical diagnosis. The majority of current deep learning-based denoising methods are built on convolutional neural networks (CNNs), which concentrate on local information and have limited capacity for modeling multiple structures. Transformer structures can compute each pixel's response on a global scale, but their extensive computational requirements prevent them from being widely used in medical image processing. To reduce the impact of LDCT scans on patients, this paper develops an image post-processing method that combines CNN and Transformer structures and obtains high-quality images from LDCT. A hybrid CNN-Transformer (HCformer) codec network model is proposed for LDCT image denoising. A neighborhood feature enhancement (NEF) module is designed to introduce local information into the Transformer's operation and increase the representation of adjacent pixel information in the LDCT image denoising task. The shifted-window method is utilized to lower the computational complexity of the network model and overcome the problems of computing multi-head self-attention (MSA) in a fixed window. Meanwhile, W/SW-MSA (window / shifted-window multi-head self-attention) is used alternately in the Transformer layers to enable information interaction between them. This approach successfully decreases the Transformer's overall computational cost. The AAPM 2016 LDCT grand challenge dataset is employed for ablation and comparison experiments to demonstrate the viability of the proposed LDCT denoising method. Per the experimental findings, HCformer improves SSIM from 0.8017 to 0.8507, reduces HuRMSE from 34.1898 to 17.7213, and improves FSIM from 0.6885 to 0.7247. Additionally, the proposed HCformer preserves image details while reducing noise. In this paper, an HCformer structure is proposed based on deep learning and evaluated using the AAPM LDCT dataset. Both the qualitative and quantitative comparison results confirm that the proposed HCformer outperforms other methods. The contribution of each component of the HCformer is also confirmed by ablation experiments. HCformer combines the advantages of CNN and Transformer, and it has great potential for LDCT image denoising and other tasks.
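The W/SW-MSA alternation rests on window partitioning: regular windows in one layer, then windows over a cyclically shifted feature map in the next, so information crosses window boundaries without global attention. A NumPy sketch of the partitioning itself (the attention computation is omitted):

```python
import numpy as np

def window_partition(x, win):
    """Split an (H, W) feature map into non-overlapping win x win windows.
    Assumes H and W are divisible by win."""
    H, W = x.shape
    return (x.reshape(H // win, win, W // win, win)
             .transpose(0, 2, 1, 3)
             .reshape(-1, win, win))

x = np.arange(64).reshape(8, 8)                    # toy feature map
regular = window_partition(x, 4)                   # W-MSA windows
# SW-MSA: cyclically shift by half a window before partitioning, so the
# new windows straddle the old window boundaries.
shifted = window_partition(np.roll(x, (-2, -2), axis=(0, 1)), 4)
```

Attention inside each window costs O(win^4) instead of O(H^2 W^2), which is the source of the computational savings the abstract describes.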


Subjects
Neural Networks, Computer; Tomography, X-Ray Computed; Humans; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods; Algorithms; Image Processing, Computer-Assisted/methods
12.
Phys Med Biol ; 68(10)2023 05 02.
Article in English | MEDLINE | ID: mdl-36958057

ABSTRACT

Objective. Cardiovascular disease (CVD) is a group of diseases affecting the heart and blood vessels, and short-axis cardiac magnetic resonance (CMR) images are considered the gold standard for the diagnosis and assessment of CVD. In CMR images, accurate segmentation of cardiac structures (e.g. the left ventricle) assists in the parametric quantification of cardiac function. However, the dynamic beating of the heart makes the location of the heart with respect to other tissues difficult to resolve, and the myocardium and its surrounding tissues are similar in grayscale, making it challenging to accurately segment cardiac images. Our goal is to develop a more accurate CMR image segmentation approach. Approach. In this study, we propose a regional perception and multi-scale feature fusion network (RMFNet) for CMR image segmentation. We design two regional perception modules, a window selection transformer (WST) module and a grid extraction transformer (GET) module. The WST module introduces a window selection block to adaptively select the window of interest to perceive information, and a windowed transformer block to enhance global information extraction within each feature window. The WST module enhances network performance by improving the window of interest. The GET module grids the feature maps to decrease redundant information and enhances the extraction of latent feature information. The RMFNet further introduces a novel multi-scale feature extraction module to improve the ability to retain detailed information. Main results. The RMFNet is validated with experiments on three cardiac data sets. The results show that the RMFNet outperforms other advanced methods in overall performance. The RMFNet is further validated for generalizability on a multi-organ data set. The results also show that the RMFNet surpasses other comparison methods. Significance. Accurate medical image segmentation can reduce the workload of radiologists and plays an important role in image-guided clinical procedures.


Subjects
Cardiovascular Diseases; Heart; Humans; Heart Ventricles; Myocardium; Perception; Image Processing, Computer-Assisted
13.
Quant Imaging Med Surg ; 13(2): 610-630, 2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36819292

ABSTRACT

Background: Multi-energy computed tomography (CT) provides multiple channel-wise reconstructed images, which can be used for material identification and K-edge imaging. Nonetheless, the projection datasets are frequently corrupted by various noises (e.g., electronic, Poisson) in the acquisition process, resulting in lower signal-to-noise ratio (SNR) measurements. Multi-energy CT images exhibit local sparsity and nonlocal self-similarity in the spatial dimension, and correlation in the spectral dimension. Methods: In this paper, we propose an image-spectral decomposition extended-learning assisted by sparsity (IDEAS) method to fully exploit these intrinsic priors for multi-energy CT image reconstruction. In particular, a nonlocal low-rank Tucker decomposition (TD) is employed to utilize the correlation and nonlocal self-similarity priors. Moreover, considering the advantages of multi-task tensor dictionary learning (TDL) in sparse representation, an adaptive spatial dictionary and an adaptive spectral dictionary are trained during the iterative reconstruction process. Furthermore, a weighted total variation (TV) regularization term is employed to encourage local sparsity. Results: Numerical simulation, physical phantom, and preclinical mouse experiments are performed to validate the proposed IDEAS algorithm. In the simulation experiments, IDEAS reconstructed high-quality images very close to the references. For example, the root mean square error (RMSE) of the IDEAS image in energy bin 1 is as low as 0.0672, while the RMSEs of the other methods are higher than 0.0843. Moreover, the structural similarity (SSIM) of the IDEAS image in energy bin 1 is greater than 0.98. For material decomposition, the RMSE of the IDEAS bone component is as low as 0.0152, while the other methods are higher than 0.0199. In addition, the computational cost of IDEAS is as low as 98.8 s per iteration, whereas the competing tensor decomposition method requires more than 327 s. Conclusions: To further improve the quality of reconstructed multi-energy CT images, multiple prior regularizations are introduced into the multi-energy CT reconstruction model, leading to the IDEAS method. Both qualitative and quantitative evaluations confirm the outstanding performance of the proposed algorithm compared with the state of the art.
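The weighted total-variation regularizer that encourages local sparsity can be written as a sum of weighted absolute finite differences. A minimal anisotropic version (the paper's weighting scheme is richer than the single scalar weight used here):

```python
import numpy as np

def weighted_tv(img, w=1.0):
    """Anisotropic weighted total variation: the weighted sum of absolute
    forward differences in both directions.  Small on piecewise-constant
    images, large on noisy ones -- hence its use as a local-sparsity prior."""
    dx = np.abs(np.diff(img, axis=1))
    dy = np.abs(np.diff(img, axis=0))
    return w * (dx.sum() + dy.sum())

flat = np.ones((8, 8))                 # piecewise-constant: zero TV
edge = np.ones((8, 8))
edge[:, 4:] = 2.0                      # one clean edge: small TV
```

In the full objective, this term is minimized jointly with the Tucker-decomposition and dictionary terms, so noise is penalized while genuine edges (which contribute little TV) survive.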

14.
Phys Med Biol ; 68(6)2023 03 15.
Article in English | MEDLINE | ID: mdl-36854190

ABSTRACT

Objective. Low-dose computed tomography (LDCT) denoising is an important problem in CT research. Compared to normal-dose CT, LDCT images are subject to severe noise and artifacts. Recently, in many studies, vision transformers have shown superior feature representation ability over convolutional neural networks (CNNs). However, unlike CNNs, the potential of vision transformers for LDCT denoising has been little explored so far. This paper aims to further explore the power of transformers for the LDCT denoising problem. Approach. In this paper, we propose a Convolution-free Token2Token Dilated Vision Transformer (CTformer) for LDCT denoising. The CTformer uses a more powerful token rearrangement to encompass local contextual information and thus avoids convolution. It also dilates and shifts feature maps to capture longer-range interactions. We interpret the CTformer by statically inspecting patterns of its internal attention maps and dynamically tracing the hierarchical attention flow with an explanatory graph. Furthermore, an overlapped inference mechanism is employed to effectively eliminate the boundary artifacts that are common in encoder-decoder-based denoising models. Main results. Experimental results on the Mayo dataset suggest that the CTformer outperforms state-of-the-art denoising methods with low computational overhead. Significance. The proposed model delivers excellent denoising performance on LDCT. Moreover, its low computational cost and interpretability make the CTformer promising for clinical applications.


Subjects
Neural Networks, Computer; Tomography, X-Ray Computed; Artifacts; Image Processing, Computer-Assisted/methods; Signal-To-Noise Ratio; Tomography, X-Ray Computed/methods; Tomography, X-Ray Computed/standards; Humans
15.
J Xray Sci Technol ; 31(2): 301-317, 2023.
Article in English | MEDLINE | ID: mdl-36617767

ABSTRACT

BACKGROUND: Lung cancer has the second highest cancer mortality rate in the world today. Although lung cancer screening using CT images is a common way to detect lung cancer early, accurately detecting lung nodules remains a challenging issue in clinical practice. OBJECTIVE: This study aims to develop a new weighted bidirectional recursive pyramid algorithm to address the problems of the small size of lung nodules, the large proportion of background region, and the complex lung structures in lung nodule detection on CT images. METHODS: First, the weighted bidirectional recursive feature pyramid network (BiRPN) is proposed, which increases the ability of the network to extract feature information and achieves multi-scale information fusion. Second, a CBAM_CSPDarknet53 structure is developed to incorporate an attention mechanism as a feature extraction module, which aggregates both spatial and channel information of the feature map. Third, the weighted BiRPN and CBAM_CSPDarknet53 are applied to the YOLOvX model for lung nodule detection experiments, named BiRPN-YOLOvX, where YOLOvX represents different versions of YOLO. To verify the effectiveness of the weighted BiRPN and CBAM_CSPDarknet53 modules, they are fused with the different models YOLOv3, YOLOv4 and YOLOv5, and extensive experiments are carried out using the publicly available lung nodule datasets LUNA16 and LIDC-IDRI. The training set of LUNA16 contains 949 images, and the validation and testing sets each contain 118 images. There are 1987, 248 and 248 images in LIDC-IDRI's training, validation and testing sets, respectively. RESULTS: The sensitivity of lung nodule detection using BiRPN-YOLOv5 reaches 98.7% on LUNA16 and 96.2% on LIDC-IDRI. CONCLUSION: This study demonstrates that the proposed new method has the potential to help improve the sensitivity of lung nodule detection in future clinical practice.


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Lung Neoplasms/diagnostic imaging; Solitary Pulmonary Nodule/diagnostic imaging; Early Detection of Cancer; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Databases, Factual; Lung/diagnostic imaging; Algorithms
16.
Comput Biol Med ; 153: 106532, 2023 02.
Article in English | MEDLINE | ID: mdl-36623436

ABSTRACT

In view of the low diagnostic accuracy of current methods for classifying benign and malignant pulmonary nodules, this paper proposes a classification model based on a 3D segmentation attention network integrating asymmetric convolution (SAACNet), combined with a gradient boosting machine (GBM). This makes full use of the spatial information of pulmonary nodules. First, the asymmetric convolution (AC) designed into SAACNet not only strengthens feature extraction but also improves the network's robustness to object flips and rotations, improving overall performance. Second, the segmentation attention network integrating AC (SAAC) block effectively extracts more fine-grained multiscale spatial information while adaptively recalibrating multidimensional channel attention weights. The SAACNet also uses a dual-path connection for feature reuse, so the model makes full use of features. In addition, adjustment factors are added to the loss function so that it pays more attention to difficult and misclassified samples. Third, the GBM is used to splice together the nodule size, the originally cropped nodule pixels, and the deep features learned by SAACNet to improve the prediction accuracy of the overall model. A comprehensive ablation experiment is carried out on the public dataset LUNA16 and compared with other lung nodule classification models. The classification accuracy (ACC) is 95.18%, and the area under the curve (AUC) is 0.977. The results show that this method effectively improves the classification performance of pulmonary nodules. The proposed method has advantages in the classification of benign and malignant pulmonary nodules, and it can effectively assist radiologists in pulmonary nodule classification.


Subjects
Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Lung Neoplasms/diagnosis , Tomography, X-Ray Computed/methods , Area Under Curve , Lung , Solitary Pulmonary Nodule/diagnostic imaging
17.
Food Res Int ; 162(Pt B): 112052, 2022 12.
Article in English | MEDLINE | ID: mdl-36461386

ABSTRACT

Non-destructive detection of human foodborne pathogens is critical to ensuring food safety and public health. Here, we report a new method that uses a paper chromogenic array coupled with a machine learning neural network (PCA-NN) to detect viable pathogens in the presence of background microflora and spoilage microbes in seafood by sensing volatile organic compounds. Morganella morganii and Shewanella putrefaciens served as the model pathogen and spoilage bacterium, respectively. The study evaluated microbial detection in monoculture as well as multiplexed detection in cocktails. The accuracy of PCA-NN detection was first assessed on standard media and later validated on cod and salmon as real seafood models with pathogenic and spoilage bacteria, as well as background microflora. The PCA-NN method successfully identified pathogenic microorganisms amid microflora, with or without the prevalent spoilage microbe Shewanella putrefaciens, at accuracies ranging from 90% to 99%. This approach has the potential to advance smart packaging by achieving non-destructive pathogen surveillance on food without enrichment, incubation, or other sample preparation.
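As a rough illustration of the PCA-NN idea, the sketch below trains a minimal softmax classifier on simulated chromogenic-array color changes. The array size, feature encoding, class profiles, and one-layer "network" are all assumptions for illustration, not the paper's data or architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in data: each sample is the color change (dR, dG, dB)
# of a 25-spot paper chromogenic array, flattened to 75 features; the
# three classes stand in for M. morganii, S. putrefaciens, and
# background microflora. All values here are synthetic.
centers = rng.normal(0.0, 1.0, (3, 75))
X = np.vstack([c + 0.3 * rng.normal(size=(50, 75)) for c in centers])
y = np.repeat(np.arange(3), 50)

# Minimal softmax classifier (a one-layer stand-in for the neural network),
# trained by batch gradient descent on the cross-entropy loss.
W = np.zeros((75, 3))
onehot = np.eye(3)[y]
for _ in range(200):
    z = X @ W
    p = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable softmax
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - onehot) / len(X)

acc = float((np.argmax(X @ W, axis=1) == y).mean())
print(acc)
```

With well-separated simulated profiles, even this linear model classifies the training set almost perfectly; the appeal of the paper's approach lies in doing so non-destructively from real volatile-compound signatures.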


Subjects
Neural Networks, Computer , Shewanella putrefaciens , Humans , Machine Learning , Food Safety , Product Packaging , Seafood
18.
Comput Biol Med ; 151(Pt A): 106080, 2022 12.
Article in English | MEDLINE | ID: mdl-36327881

ABSTRACT

It is challenging to obtain good image quality in spectral computed tomography (CT) because the photon count available to the photon-counting detectors is limited in each narrow energy bin, which lowers the signal-to-noise ratio (SNR) of the projections. To handle this issue, we first formulate the weighted bidirectional image gradient with an L0-norm constraint on the spectral CT image. Then, this bidirectional image gradient with the L0-norm constraint is introduced as a new regularizer into the tensor decomposition model, yielding the Spectral-Image Tensor and Bidirectional Image-gradient Minimization (SITBIM) algorithm. Finally, the split-Bregman method is employed to optimize the proposed SITBIM model. Experiments on a numerical mouse phantom and on real mouse data are designed to validate and evaluate the SITBIM method. The results demonstrate that SITBIM outperforms other state-of-the-art methods (including TVM, TV + LR, SSCMF and NLCTF). INDEX TERMS: spectral CT, image reconstruction, tensor decomposition, bidirectional image gradient, image similarity.
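L0-norm gradient constraints like the one above are commonly handled with a splitting scheme in which an auxiliary gradient variable is updated by hard thresholding. A toy 1D sketch of that subproblem only (not the full SITBIM/split-Bregman solver; `lam` and `beta` are illustrative penalty parameters):

```python
import numpy as np

def l0_grad_threshold(img, lam, beta):
    """One auxiliary-variable update from a splitting of an L0 gradient
    penalty: keep the gradient where grad^2 > lam/beta, zero it elsewhere.
    This hard threshold is what produces L0 (counting-norm) sparsity."""
    g = np.diff(img, axis=-1)                    # forward image gradient
    return np.where(g ** 2 > lam / beta, g, 0.0)

# A noisy piecewise-constant signal: the tiny noise gradients are zeroed,
# while the true edge at index 9 survives the threshold.
x = np.r_[np.zeros(10), np.ones(10)]
x += 0.01 * np.random.default_rng(2).normal(size=20)
h = l0_grad_threshold(x, lam=0.1, beta=1.0)
print(int((h != 0).sum()))  # → 1
```

This edge-preserving, noise-suppressing behavior is exactly why an L0 gradient term is attractive for the low-SNR narrow-energy-bin projections described above.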


Subjects
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Mice , Animals , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Tomography, X-Ray Computed/methods , Algorithms , Signal-To-Noise Ratio
19.
Phys Med Biol ; 67(19)2022 09 28.
Article in English | MEDLINE | ID: mdl-36096127

ABSTRACT

Objective. X-ray-based imaging modalities including mammography and computed tomography (CT) are widely used in cancer screening, diagnosis, staging, treatment planning, and therapy response monitoring. Over the past few decades, improvements to these modalities have resulted in substantially improved efficacy and efficiency, and substantially reduced radiation dose and cost. However, such improvements have evolved more slowly than would be ideal because lengthy preclinical and clinical evaluation is required. In many cases, new ideas cannot be evaluated due to the high cost of fabricating and testing prototypes. Wider availability of computer simulation tools could accelerate the development of new imaging technologies. This paper introduces a new open-access simulation environment for x-ray-based imaging. The main motivation of this work is to publicly distribute a fast but accurate ray-tracing x-ray and CT simulation tool along with realistic phantoms and 3D reconstruction capability, building on decades of developments in industry and academia. Approach. The x-ray-based Cancer Imaging Simulation Toolkit (XCIST) is developed in the context of cancer imaging but can be applied more broadly. XCIST is physics-based, written in Python and C/C++, and currently consists of three major subsets: digital phantoms, the simulator itself (CatSim), and image reconstruction algorithms; planned future features include a fast dose-estimation tool and rigorous validation. To enable broad usage and to model and evaluate new technologies, XCIST is easily extendable by other researchers. To demonstrate XCIST's ability to produce realistic images and to show the benefits of using XCIST for insight into the impact of separate physics effects on image quality, we present exemplary simulations varying contributing factors such as noise and sampling. Main results. The capabilities and flexibility of XCIST are demonstrated, showing easy applicability to specific simulation problems. Geometric and x-ray attenuation accuracy are shown, as well as XCIST's ability to model multiple scanner and protocol parameters and to attribute fundamental image quality characteristics to specific parameters. Significance. This work represents an important first step toward the goal of creating an open-access platform for simulating existing and emerging x-ray-based imaging systems. While numerous simulation tools exist, we believe the combined XCIST toolset provides a unique advantage in terms of modeling capabilities versus ease of use and compute time. We publicly share this toolset to provide an environment for scientists to accelerate and improve the relevance of their research in x-ray and CT.
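At the core of any ray-tracing x-ray simulator is a line integral through an attenuation map, followed by the Beer-Lambert law and photon-counting (Poisson) noise. The toy parallel-beam sketch below illustrates only this principle; the geometry, units, and phantom are illustrative assumptions, far simpler than XCIST's CatSim engine:

```python
import numpy as np

# Attenuation map (per-pixel units) with a square insert as a toy phantom.
mu = np.zeros((64, 64))
mu[24:40, 24:40] = 0.05

# One parallel-beam view: vertical rays, so the line integral along each
# ray is just a column sum of the attenuation map.
line_integrals = mu.sum(axis=0)

# Beer-Lambert law gives the expected photon count per detector cell,
# then Poisson sampling models photon-counting (quantum) noise.
I0 = 1e5                                   # unattenuated photons per cell
expected = I0 * np.exp(-line_integrals)
measured = np.random.default_rng(3).poisson(expected)

print(measured.shape, round(float(line_integrals.max()), 3))
```

Varying `I0` here is the simulator-style knob for studying noise, analogous to the noise/sampling factors varied in the exemplary simulations above.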


Subjects
Access to Information , Tomography, X-Ray Computed , Algorithms , Computer Simulation , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Tomography, X-Ray Computed/methods , X-Rays
20.
Patterns (N Y) ; 3(5): 100475, 2022 May 13.
Article in English | MEDLINE | ID: mdl-35607615

ABSTRACT

Due to a lack of kernel awareness, some popular deep image reconstruction networks are unstable. To address this problem, here we introduce the bounded relative error norm (BREN) property, which is a special case of Lipschitz continuity. We then perform a convergence study consisting of two parts: (1) a heuristic analysis of the convergence of the analytic compressed iterative deep (ACID) scheme (under the simplification that the CS module achieves a perfect sparsification), and (2) a mathematically denser analysis (with two approximations: [1] the transpose A^T is viewed as an inverse A^(-1) from the perspective of an iterative reconstruction procedure, and [2] a pseudo-inverse is used for the total variation operator H). We also present adversarial attack algorithms to perturb the selected reconstruction networks and, more importantly, to attack the ACID workflow as a whole. Finally, we show the numerical convergence of the ACID iteration in terms of the Lipschitz constant and the local stability against noise.
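For a linear operator, the Lipschitz constant is its largest singular value, which power iteration estimates cheaply; this is the kind of quantity the convergence statement above is expressed in. A toy sketch of that estimate only (it is not the paper's ACID analysis, and a real reconstruction network would need an empirical local estimate instead):

```python
import numpy as np

def lipschitz_power_iter(A, n_iter=100, seed=0):
    """Estimate the Lipschitz constant of the linear map x -> A @ x,
    i.e. its largest singular value, by power iteration on A^T A."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[1])
    for _ in range(n_iter):
        x = A.T @ (A @ x)        # one step of power iteration on A^T A
        x /= np.linalg.norm(x)   # renormalize to avoid overflow/underflow
    return float(np.linalg.norm(A @ x))

# For a diagonal map scaling coordinates by 3 and 1, the Lipschitz
# constant (largest singular value) is 3.
A = np.array([[3.0, 0.0], [0.0, 1.0]])
L = lipschitz_power_iter(A)
print(round(L, 4))  # → 3.0
```

A Lipschitz constant below 1 for the error-propagation operator is the classical sufficient condition for an iteration of this kind to contract toward a fixed point.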
