1.
Igaku Butsuri ; 44(2): 29-35, 2024.
Article in Japanese | MEDLINE | ID: mdl-38945880

ABSTRACT

This is an explanatory paper on Sun Il Kwon et al., Nat. Photon. 15: 914-918, 2021, and parts of this manuscript are translated from that paper. Medical imaging modalities such as X-ray computed tomography, magnetic resonance imaging, positron emission tomography (PET), and single photon emission computed tomography require image reconstruction, which constrains the scanners to cylindrical geometries. Among them, only PET can use additional event-by-event information, the so-called time of flight. If the coincidence time resolution (CTR) of PET detectors were improved to 30 ps, corresponding to a spatial resolution of 4.5 mm, the electron-positron annihilation point could be localized directly, circumventing image reconstruction and freeing the system from this geometric constraint. We call this concept direct positron emission imaging (dPEI). We developed ultrafast radiation detectors based on Cherenkov photon detection. By combining deep learning-based signal processing with these detectors, a CTR of 32 ps, equivalent to a spatial resolution of 4.8 mm, was achieved. In this article, we explain how we developed the detectors and demonstrated the first dPEI using different types of phantoms, how we will tackle the limitations that must be addressed to make dPEI more practical, and how dPEI may emerge as an imaging modality in nuclear medicine.
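
To make the quoted numbers concrete, the spatial precision along a line of response follows from the coincidence time resolution as Δx = cΔt/2. A minimal sketch (not code from the paper):

```python
# Relation between coincidence time resolution (CTR) and localization
# precision along a line of response: delta_x = c * delta_t / 2.
C_MM_PER_PS = 0.2998  # speed of light in mm per picosecond

def localization_precision_mm(ctr_ps: float) -> float:
    """Spatial precision (mm) implied by a CTR given in picoseconds."""
    return C_MM_PER_PS * ctr_ps / 2.0

print(localization_precision_mm(30.0))  # ~4.5 mm, as quoted in the abstract
print(localization_precision_mm(32.0))  # ~4.8 mm, the achieved CTR
```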


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Photons; Positron-Emission Tomography/instrumentation; Positron-Emission Tomography/methods; Time Factors
3.
Ann Nucl Med ; 38(7): 544-552, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38717535

ABSTRACT

OBJECTIVE: In preclinical studies, high-throughput positron emission tomography (PET) imaging, that is, the simultaneous scanning of multiple animals, can reduce the time spent on animal experiments, the cost of PET tracers, and the risks associated with tracer synthesis. It is well known that the image quality achieved with high-throughput imaging depends on the PET system. Herein, we investigated the influence of a large field-of-view (FOV) PET scanner on high-throughput imaging. METHODS: We investigated the effect of scanning four objects simultaneously using a small-animal PET scanner with a large FOV, comparing the image quality obtained with four objects in the FOV against that obtained with a single object, for both phantoms and animals. Image quality was assessed using uniformity, recovery coefficient (RC), and spillover ratio (SOR), which are indicators of image noise, spatial resolution, and quantitative precision, respectively. For the phantom study, we used the NEMA NU 4-2008 image quality phantom and evaluated uniformity, RC, and SOR; for the animal study, we used Wistar rats and evaluated spillover in the heart and kidney. RESULTS: In the phantom study, scanning four phantoms had little effect on image quality, especially SOR, compared with scanning one phantom. Likewise, in the animal study, scanning four rats had little effect on spillover from the heart muscle and kidney cortex compared with scanning one rat. CONCLUSIONS: This study demonstrated that an animal PET scanner with a large FOV is suitable for high-throughput imaging. Thus, large FOV PET scanners can support drug discovery and bridging research through rapid pharmacological and pathological evaluation.
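
For readers unfamiliar with the three NEMA NU 4-2008 metrics named above, the sketch below shows how they are typically computed from region-of-interest (ROI) voxel values; the ROI definitions are phantom-specific and assumed here, and this is not the authors' analysis code.

```python
import numpy as np

def uniformity_percent_std(uniform_roi: np.ndarray) -> float:
    """Image noise in the uniform region: %STD = 100 * std / mean."""
    return 100.0 * float(uniform_roi.std() / uniform_roi.mean())

def recovery_coefficient(rod_roi: np.ndarray, uniform_roi: np.ndarray) -> float:
    """RC: mean of a hot-rod ROI relative to the uniform-region mean."""
    return float(rod_roi.mean() / uniform_roi.mean())

def spillover_ratio(cold_roi: np.ndarray, uniform_roi: np.ndarray) -> float:
    """SOR: mean of a cold insert ROI relative to the uniform-region mean."""
    return float(cold_roi.mean() / uniform_roi.mean())
```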


Subjects
Phantoms, Imaging; Positron-Emission Tomography; Rats, Wistar; Animals; Positron-Emission Tomography/instrumentation; Positron-Emission Tomography/methods; Rats; Image Processing, Computer-Assisted/methods; Male; Kidney/diagnostic imaging; Heart/diagnostic imaging
4.
Phys Med Biol ; 69(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38640921

ABSTRACT

Objective. This study aims to introduce a novel back projection-induced U-Net-shaped architecture, called ReconU-Net, based on the original U-Net architecture for deep learning-based direct positron emission tomography (PET) image reconstruction. Additionally, our objective is to visualize the behavior of direct PET image reconstruction by comparing the proposed ReconU-Net architecture with the original U-Net architecture and the existing DeepPET encoder-decoder architecture without skip connections. Approach. The proposed ReconU-Net architecture uniquely integrates the physical model of the back projection operation into the skip connection. This distinctive feature facilitates the effective transfer of intrinsic spatial information from the input sinogram to the reconstructed image via an embedded physical model. The proposed ReconU-Net was trained using Monte Carlo simulation data from the BrainWeb phantom and tested on both simulated and real Hoffman brain phantom data. Main results. The proposed ReconU-Net method provided better reconstructed images in terms of peak signal-to-noise ratio and contrast recovery coefficient than the original U-Net and DeepPET methods. Further analysis shows that the proposed ReconU-Net architecture can transfer features at multiple resolutions, especially non-abstract high-resolution information, through its skip connections. Unlike the U-Net and DeepPET methods, the proposed ReconU-Net successfully reconstructed the real Hoffman brain phantom despite being trained only on limited simulated data. Significance. The proposed ReconU-Net can improve the fidelity of direct PET image reconstruction, even with small training datasets, by leveraging the synergistic relationship between data-driven modeling and the physics model of the imaging process.
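
The core architectural idea, a skip connection that carries sinogram-domain features into image space through a fixed back-projection operator, can be sketched as follows; `backproject` is an assumed differentiable, channel-preserving operator (e.g., a system-matrix transpose), and this is an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class BackProjectionSkip(nn.Module):
    """Skip connection that back-projects a sinogram-domain feature before fusion."""
    def __init__(self, backproject, skip_ch: int, dec_ch: int):
        super().__init__()
        self.backproject = backproject  # fixed physics operator, no learned weights
        self.fuse = nn.Conv2d(skip_ch + dec_ch, dec_ch, kernel_size=3, padding=1)

    def forward(self, sino_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        img_feat = self.backproject(sino_feat)  # sinogram domain -> image domain
        return self.fuse(torch.cat([img_feat, dec_feat], dim=1))
```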


Subjects
Image Processing, Computer-Assisted; Phantoms, Imaging; Positron-Emission Tomography; Image Processing, Computer-Assisted/methods; Brain/diagnostic imaging; Deep Learning; Monte Carlo Method; Humans
6.
PLoS One ; 19(2): e0298132, 2024.
Article in English | MEDLINE | ID: mdl-38349916

ABSTRACT

PURPOSE: Measurements of macular pigment optical density (MPOD) using autofluorescence spectroscopy underestimate actual values in eyes with cataracts. Previously, we proposed a correction method for this error using deep learning (DL); however, its performance had only been validated through internal cross-validation. This cross-sectional study aimed to validate the approach using an external validation dataset. METHODS: MPODs at 0.25°, 0.5°, 1°, and 2° eccentricities and the macular pigment optical volume (MPOV) within 9° eccentricity were measured using SPECTRALIS (Heidelberg Engineering, Heidelberg, Germany) in 197 eyes (training dataset, inherited from our previous study) and 157 eyes (validation dataset) before and after cataract surgery. A DL model was trained on the training dataset to predict the corrected value from the preoperative value, and we measured the discrepancy between the corrected value and the actual postoperative value. The prediction performance was then validated using the validation dataset. RESULTS: In the validation dataset, the mean absolute errors for MPOD and MPOV corrected using DL ranged from 8.2 to 12.4%, lower than the values without correction (P < 0.001, linear mixed model with Tukey's test). The error depended on the quality of the autofluorescence image used to calculate MPOD. The mean errors for high- and moderate-quality images ranged from 6.0 to 11.4%, lower than those for poor-quality images. CONCLUSION: The usefulness of the DL correction method was validated. Deep learning reduced the error for autofluorescence images of relatively good quality, whereas poor-quality images were not adequately corrected.
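
The reported errors appear to be mean absolute differences between the corrected preoperative values and the actual postoperative values, expressed as percentages; a minimal sketch under that assumption (variable names are illustrative, not the study's code):

```python
import numpy as np

def mean_abs_percent_error(corrected: np.ndarray, postoperative: np.ndarray) -> float:
    """Mean absolute error of corrected values, as a percentage of the true values."""
    return float(np.mean(np.abs(corrected - postoperative) / postoperative) * 100.0)
```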


Subjects
Cataract; Deep Learning; Macular Pigment; Humans; Lutein; Cross-Sectional Studies; Zeaxanthins; Cataract/therapy; Spectrum Analysis
7.
Radiol Phys Technol ; 17(1): 24-46, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38319563

ABSTRACT

This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
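
As a reference point for the conventional iterative methods surveyed here, a compact MLEM update (a standard algorithm, not code from the review) looks like this, with `A` a system matrix and `y` the measured sinogram:

```python
import numpy as np

def mlem(A: np.ndarray, y: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """Maximum-likelihood expectation maximization for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])                    # uniform initial image
    sensitivity = A.T @ np.ones(A.shape[0])    # A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / expected counts
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x
```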


Subjects
Deep Learning; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Neural Networks, Computer; Algorithms; Phantoms, Imaging
8.
IEEE Trans Med Imaging ; 43(5): 1654-1663, 2024 May.
Article in English | MEDLINE | ID: mdl-38109238

ABSTRACT

Direct positron emission imaging (dPEI), which does not require a mathematical reconstruction step, is a next-generation molecular imaging modality. To maximize the practical applicability of the dPEI system in clinical practice, we introduce a novel reconstruction-free image-formation method called direct µCompton imaging, which directly localizes the interaction position of Compton scattering of the annihilation photons in three-dimensional space by utilizing the same compact geometry as dPEI, based on ultrafast time-of-flight radiation detectors. This unique imaging method not only provides anatomical information about an object but can also be applied to the attenuation correction of dPEI images. Evaluations through Monte Carlo simulation showed that hybrid functional and anatomical images can be acquired with this multimodal imaging system. By fusing the images, it is possible to access various object data simultaneously, realizing a synergistic effect between the two imaging methodologies. In addition, attenuation correction improves the quantification of dPEI images. The realization of a fully reconstruction-free imaging chain, from image generation to quantitative correction, provides a new perspective in molecular imaging.


Subjects
Image Processing, Computer-Assisted; Monte Carlo Method; Phantoms, Imaging; Positron-Emission Tomography; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Positron-Emission Tomography/instrumentation; Algorithms; Humans; Computer Simulation
9.
Phys Med Biol ; 68(15)2023 07 21.
Article in English | MEDLINE | ID: mdl-37406637

ABSTRACT

Objective. Deep image prior (DIP) has recently attracted attention as an unsupervised positron emission tomography (PET) image reconstruction method that does not require any prior training dataset. In this paper, we present the first attempt to implement an end-to-end DIP-based fully 3D PET image reconstruction method that incorporates a forward-projection model into the loss function. Approach. A practical implementation of fully 3D PET image reconstruction is currently hindered by graphics processing unit memory limitations. Consequently, we modify the DIP optimization into a block iteration with sequential learning over an ordered sequence of block sinograms. Furthermore, a relative difference penalty (RDP) term is added to the loss function to enhance the quantitative accuracy of the PET image. Main results. We evaluated the proposed method using a Monte Carlo simulation with [18F]FDG PET data of a human brain and a preclinical study on monkey-brain [18F]FDG PET data. The proposed method was compared with maximum-likelihood expectation maximization (EM), maximum a posteriori EM with RDP, and hybrid DIP-based PET reconstruction methods. The simulation results showed that, compared with the other algorithms, the proposed method improved PET image quality by reducing statistical noise while better preserving the contrast of brain structures and inserted tumors. In the preclinical experiment, finer structures and better contrast recovery were obtained with the proposed method. Significance. The results indicate that the proposed method can produce high-quality images without a prior training dataset. Thus, it could be a key enabling technology for the straightforward and practical implementation of end-to-end DIP-based fully 3D PET image reconstruction.
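
The kind of loss described, a data term on the forward-projected network output plus a relative difference penalty, might look like the sketch below; the Poisson data term, `forward_project`, and the network `f` are assumptions, not the paper's code.

```python
import torch

def rdp(img: torch.Tensor, gamma: float = 2.0, eps: float = 1e-8) -> torch.Tensor:
    """Relative difference penalty over right/down neighbours of a 2D image."""
    penalty = img.new_tensor(0.0)
    for dx, dy in ((0, 1), (1, 0)):
        a = img[..., : img.shape[-2] - dx, : img.shape[-1] - dy]
        b = img[..., dx:, dy:]
        diff = a - b
        penalty = penalty + torch.sum(diff**2 / (a + b + gamma * diff.abs() + eps))
    return penalty

def dip_loss(f, z, y, forward_project, beta: float = 1e-3) -> torch.Tensor:
    x = f(z)                                      # CNN output = image estimate
    ybar = forward_project(x).clamp_min(1e-8)     # expected sinogram counts
    nll = torch.sum(ybar - y * torch.log(ybar))   # Poisson negative log-likelihood
    return nll + beta * rdp(x)
```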


Subjects
Fluorodeoxyglucose F18; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Positron-Emission Tomography/methods; Algorithms; Phantoms, Imaging
10.
IEEE Trans Med Imaging ; 42(6): 1822-1834, 2023 06.
Article in English | MEDLINE | ID: mdl-37022039

ABSTRACT

List-mode positron emission tomography (PET) image reconstruction is an important tool for PET scanners with many lines of response and additional information such as time of flight and depth of interaction. Deep learning is one possible solution for enhancing the quality of PET image reconstruction. However, the application of deep learning techniques to list-mode PET image reconstruction has not progressed because list data are a sequence of bit codes unsuitable for processing by convolutional neural networks (CNNs). In this study, we propose a novel list-mode PET image reconstruction method using an unsupervised CNN called deep image prior (DIP); this is the first attempt to integrate list-mode PET image reconstruction with a CNN. The proposed list-mode DIP reconstruction (LM-DIPRecon) method alternately iterates the regularized list-mode dynamic row-action maximum likelihood algorithm (LM-DRAMA) and a magnetic resonance imaging-conditioned DIP (MR-DIP) using the alternating direction method of multipliers. We evaluated LM-DIPRecon using both simulation and clinical data, and it achieved sharper images and better contrast-noise tradeoff curves than the LM-DRAMA, MR-DIP, and sinogram-based DIPRecon methods. These results indicate that LM-DIPRecon is useful for quantitative PET imaging with limited events while retaining accurate raw-data information. In addition, because list data have finer temporal information than dynamic sinograms, list-mode deep image prior reconstruction is expected to be useful for 4D PET imaging and motion correction.
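
At a high level, the alternation described above can be pictured as an ADMM-style loop; everything below is schematic, and `lm_drama_update` and `fit_mr_dip` are placeholders for the two sub-solvers named in the abstract, not real APIs.

```python
import numpy as np

def lm_dip_recon(list_data, mr_image, lm_drama_update, fit_mr_dip,
                 n_outer: int = 30, rho: float = 1.0):
    """Schematic ADMM loop alternating a list-mode update with an MR-conditioned DIP fit."""
    x = np.ones_like(mr_image)   # current PET image estimate
    z = x.copy()                 # network-constrained image (DIP output)
    u = np.zeros_like(x)         # scaled dual variable
    for _ in range(n_outer):
        x = lm_drama_update(list_data, prior_image=z - u, rho=rho)  # data-fit step
        z = fit_mr_dip(mr_image, target=x + u)                      # DIP projection step
        u = u + x - z                                               # dual update
    return z
```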


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Image Processing, Computer-Assisted/methods; Positron-Emission Tomography/methods; Motion; Computer Simulation; Algorithms; Phantoms, Imaging
11.
Ann Nucl Med ; 36(8): 746-755, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35698016

ABSTRACT

OBJECTIVE: Various motion correction (MC) algorithms for positron emission tomography (PET) have been proposed to improve diagnostic performance and to support research on brain activity and neurology. We have incorporated an MC system based on optical motion tracking into a brain-dedicated time-of-flight PET scanner. In this study, we evaluate the performance characteristics of the developed PET scanner when performing MC, in accordance with the standards and guidelines for brain PET scanners. METHODS: We evaluate the spatial resolution, scatter fraction, count rate characteristics, sensitivity, and image quality of the PET images. The MC evaluation focuses on spatial resolution and image quality, which are affected by movement. RESULTS: In the basic performance evaluation, the average spatial resolution with iterative reconstruction was 2.2 mm at the 10 mm offset position. The measured peak noise-equivalent count rate was 38.0 kcps at 16.7 kBq/mL. The scatter fraction and system sensitivity were 43.9% and 22.4 cps/(Bq/mL), respectively. The image contrast recovery ranged from 43.2% (10 mm sphere) to 72.0% (37 mm sphere). In the MC performance evaluation, the average spatial resolution was 2.7 mm at the 10 mm offset position when the phantom stage carrying the point source was translated ±15 mm along the y-axis. The image contrast recovery ranged from 34.2% (10 mm sphere) to 66.8% (37 mm sphere). CONCLUSIONS: The images reconstructed with MC were restored to nearly the same state as those acquired at rest. Therefore, we conclude that this scanner can observe more natural brain activity.


Subjects
Positron-Emission Tomography; Tomography, X-Ray Computed; Brain/diagnostic imaging; Head; Humans; Phantoms, Imaging; Positron-Emission Tomography/methods
12.
Radiol Phys Technol ; 15(1): 72-82, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35132574

ABSTRACT

Although deep learning for application in positron emission tomography (PET) image reconstruction has attracted the attention of researchers, the image quality must be further improved. In this study, we propose a novel convolutional neural network (CNN)-based fast time-of-flight PET (TOF-PET) image reconstruction method to fully utilize the direction information of coincidence events. The proposed method inputs view-grouped histo-images into a 3D CNN as a multi-channel image to use the direction information of such events. We evaluated the proposed method using Monte Carlo simulation data obtained from a digital brain phantom. Compared with a case without direction information, the peak signal-to-noise ratio and structural similarity were improved by 1.2 dB and 0.02, respectively, at a coincidence time resolution of 300 ps. The calculation times of the proposed method were significantly lower than those of a conventional iterative reconstruction. These results indicate that the proposed method improves both the speed and image quality of a TOF-PET image reconstruction.
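
The input format described, view-grouped histo-images stacked as CNN channels, can be sketched as below; the event fields and binning convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_view_grouped_histoimages(events, image_shape, n_view_groups: int) -> np.ndarray:
    """Bin TOF-localized events into per-view-group histo-images (the CNN channels).

    events: iterable of (ix, iy, iz, view_index) voxel indices per coincidence event.
    """
    channels = np.zeros((n_view_groups, *image_shape), dtype=np.float32)
    for ix, iy, iz, view in events:
        channels[view % n_view_groups, ix, iy, iz] += 1.0
    return channels  # shape (n_view_groups, X, Y, Z): multi-channel 3D CNN input
```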


Subjects
Deep Learning; Algorithms; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Phantoms, Imaging; Positron-Emission Tomography/methods
13.
Phys Med Biol ; 67(4)2022 02 11.
Article in English | MEDLINE | ID: mdl-35100575

ABSTRACT

Objective. Convolutional neural networks (CNNs) are a strong tool for improving the coincidence time resolution (CTR) of time-of-flight (TOF) positron emission tomography detectors. However, signal waveforms from multiple source positions are required for CNN training. Furthermore, there is concern that TOF estimation is biased near the edge of the training space, despite the reduced estimation variance (i.e., timing uncertainty). Approach. We propose a simple method for unbiased TOF estimation that combines a conventional leading-edge discriminator (LED) with a CNN that can be trained using waveforms collected from a single source position. The proposed method estimates and corrects the error of the time difference calculated by the LED rather than the absolute time difference. This model can eliminate the TOF estimation bias, because combining it with the LED converts the distribution of the label data from discrete values at each position into a continuous, symmetric distribution. Main results. Evaluation results using signal waveforms collected from scintillation detectors show that the proposed method can correctly estimate all source positions without bias using training data from a single source position. Moreover, the proposed method improves the CTR of the conventional LED. Significance. We believe that the improved CTR will not only increase the signal-to-noise ratio but will also contribute significantly to direct positron emission imaging.
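
The estimator can be pictured as an LED time difference plus a CNN-predicted correction of its error; the waveform handling and `cnn` below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def led_timestamp(waveform: np.ndarray, dt_ps: float, threshold: float) -> float:
    """Time (ps) of the first sample crossing the leading-edge threshold."""
    idx = int(np.argmax(waveform >= threshold))   # first index above threshold
    return idx * dt_ps

def estimate_tof(wf_a: np.ndarray, wf_b: np.ndarray, cnn,
                 dt_ps: float = 10.0, threshold: float = 0.1) -> float:
    """Coarse LED time difference minus the CNN-estimated error of that difference."""
    coarse = led_timestamp(wf_a, dt_ps, threshold) - led_timestamp(wf_b, dt_ps, threshold)
    correction = float(cnn(np.stack([wf_a, wf_b])))  # CNN trained to predict the LED error
    return coarse - correction
```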


Subjects
Photons; Scintillation Counting; Neural Networks, Computer; Positron-Emission Tomography/methods; Scintillation Counting/methods; Signal-To-Noise Ratio
14.
Med Image Anal ; 74: 102226, 2021 12.
Article in English | MEDLINE | ID: mdl-34563861

ABSTRACT

Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many pairs of low- and high-quality reference PET images. Herein, we propose an unsupervised 3D PET image denoising method based on an anatomical information-guided attention mechanism. The proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR guidance image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is fed into the network through an attention gate. In a Monte Carlo simulation of [18F]fluoro-2-deoxy-D-glucose (FDG), the proposed method achieved the highest peak signal-to-noise ratio and structural similarity (27.92 ± 0.44 dB/0.886 ± 0.007), compared with Gaussian filtering (26.68 ± 0.10 dB/0.807 ± 0.004), image-guided filtering (27.40 ± 0.11 dB/0.849 ± 0.003), deep image prior (DIP) (24.22 ± 0.43 dB/0.737 ± 0.017), and MR-DIP (27.65 ± 0.42 dB/0.879 ± 0.007). Furthermore, we experimentally visualized the behavior of the optimization process, which is often unknown in unsupervised CNN-based restoration problems. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using a common network architecture for various noisy PET images with one-tenth of the full counts. These results suggest that the proposed MR-GDD can considerably reduce PET scan times and PET tracer doses without adversely affecting patients.
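
A generic attention gate of the type described, in which the MR guidance modulates PET features through a learned spatial weight rather than being copied into the output, might look like this (an illustrative gate assuming matched feature sizes, not the MR-GDD network itself):

```python
import torch
import torch.nn as nn

class GuidanceAttentionGate(nn.Module):
    """Gate PET features with a weight map computed jointly from PET and MR features."""
    def __init__(self, pet_ch: int, mr_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv3d(pet_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(mr_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, pet_feat: torch.Tensor, mr_feat: torch.Tensor) -> torch.Tensor:
        # alpha in (0, 1): spatially varying attention derived from both inputs
        alpha = torch.sigmoid(self.psi(torch.relu(self.theta(pet_feat) + self.phi(mr_feat))))
        return pet_feat * alpha  # gated PET feature; MR content is not copied through
```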


Subjects
Image Processing, Computer-Assisted; Positron-Emission Tomography; Fluorodeoxyglucose F18; Humans; Neural Networks, Computer; Signal-To-Noise Ratio
15.
Transl Vis Sci Technol ; 10(2): 18, 2021 02 05.
Article in English | MEDLINE | ID: mdl-34003903

ABSTRACT

Purpose: Measurements of macular pigment optical density (MPOD) by the autofluorescence technique underestimate actual values in eyes with cataract. We applied deep learning (DL) to correct this error. Subjects and Methods: MPOD was measured by SPECTRALIS (Heidelberg Engineering, Heidelberg, Germany) in 197 eyes before and after cataract surgery. The nominal MPOD values (= preoperative values) were corrected by three methods: the regression equation (RE) method, the subjective classification (SC) method (described in our previous study), and the DL method. The errors between the corrected and true values (= postoperative values) were calculated for local MPODs at 0.25°, 0.5°, 1°, and 2° eccentricities and for the macular pigment optical volume (MPOV) within 9° eccentricity. Results: The mean error for MPODs at the four eccentricities was 32% without any correction, 15% with RE correction, 16% with SC correction, and 14% with DL correction. The mean error for MPOV was 21% without correction and 14%, 10%, and 10%, respectively, with correction by the same methods. The errors with any correction were significantly lower than those without correction (P < 0.001, linear mixed model with Tukey's test). The errors with DL correction were significantly lower than those with RE correction for MPOD at 1° eccentricity and for MPOV (P < 0.001) and were equivalent to those with SC correction. Conclusions: The objective DL-based method was useful for correcting MPOD values measured in older people. Translational Relevance: MPOD can be obtained with small errors in eyes with cataract using DL.


Subjects
Cataract; Deep Learning; Macular Pigment; Aged; Germany; Humans; Lutein; Zeaxanthins
16.
Ann Nucl Med ; 35(6): 691-701, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33811600

ABSTRACT

OBJECTIVES: Attenuation correction (AC) is crucial for ensuring the quantitative accuracy of positron emission tomography (PET) imaging. However, obtaining accurate µ-maps from brain-dedicated PET scanners without an AC acquisition mechanism is challenging. To overcome this problem, we developed a deep learning-based PET AC (deep AC) framework that synthesizes transmission computed tomography (TCT) images from non-AC (NAC) PET images using a convolutional neural network (CNN) trained on a large dataset of various radiotracers for brain PET imaging. METHODS: The proposed framework comprises three steps: (1) NAC PET image generation, (2) synthetic TCT generation using the CNN, and (3) PET image reconstruction. To avoid overfitting, we trained the CNN on a mixed image dataset of six radiotracers: [18F]FDG, [18F]BCPP-EF, [11C]raclopride, [11C]PIB, [11C]DPA-713, and [11C]PBB3. We used 1261 brain NAC PET and TCT images (1091 for training and 70 for testing). [11C]Methionine subjects were not included in the training dataset but were included in the testing dataset. RESULTS: The image quality of the synthetic TCT images obtained using the CNN trained on the mixed dataset of six radiotracers was superior to that obtained using CNNs trained on split datasets generated from each radiotracer. In the [18F]FDG study, the mean relative PET biases of the emission-segmented AC (ESAC) and deep AC were 8.46 ± 5.24 and -5.69 ± 4.97, respectively. The deep AC PET and TCT AC PET images exhibited excellent correlation for all seven radiotracers (R2 = 0.912-0.982). CONCLUSION: These results indicate that the proposed deep AC framework provides quantitatively superior PET images when the CNN is trained on a mixed dataset of PET tracers rather than on split, tracer-specific datasets.
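
The three-step framework reads as a simple pipeline; every function below is a placeholder standing for a stage named in the abstract, not an actual API.

```python
def deep_ac_pipeline(emission_data, cnn, reconstruct_nac, tct_to_mu_map,
                     reconstruct_with_ac):
    """Schematic deep-AC pipeline: NAC PET -> synthetic TCT -> AC reconstruction."""
    nac_pet = reconstruct_nac(emission_data)            # step 1: non-AC PET image
    synthetic_tct = cnn(nac_pet)                        # step 2: CNN synthesizes TCT
    mu_map = tct_to_mu_map(synthetic_tct)               # convert TCT to attenuation map
    return reconstruct_with_ac(emission_data, mu_map)   # step 3: AC reconstruction
```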


Subjects
Deep Learning; Image Processing, Computer-Assisted; Positron-Emission Tomography; Fluorodeoxyglucose F18; Multimodal Imaging
17.
Nat Photonics ; 15(12): 914-918, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35663419

ABSTRACT

X-ray and gamma-ray photons are widely used for imaging but require a mathematical reconstruction step, known as tomography, to produce cross-sectional images from the measured data. Theoretically, the back-to-back annihilation photons produced by positron-electron annihilation can be directly localized in three-dimensional space using time-of-flight information, without tomographic reconstruction. However, this has not yet been demonstrated owing to the insufficient timing performance of available radiation detectors. Here, we develop techniques based on detecting prompt Cherenkov photons, which, when combined with a convolutional neural network for timing estimation, resulted in an average timing precision of 32 picoseconds, corresponding to a spatial precision of 4.8 mm. We show this is sufficient to produce cross-sectional images of a positron-emitting radionuclide directly from the detected coincident annihilation photons, without using any tomographic reconstruction algorithm. The reconstruction-free imaging demonstrated here directly localizes positron emission and frees the design of an imaging system from the geometric and sampling constraints normally present for tomographic reconstruction.
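
The localization principle is purely geometric: the annihilation point lies on the line of response, offset from its midpoint by c·Δt/2 toward the detector that fired first. A minimal sketch (not the paper's code):

```python
import numpy as np

C_MM_PER_PS = 0.2998  # speed of light, mm per picosecond

def localize_annihilation(r1, r2, t1_ps: float, t2_ps: float) -> np.ndarray:
    """Annihilation position from detector positions r1, r2 (mm) and arrival times (ps)."""
    r1, r2 = np.asarray(r1, dtype=float), np.asarray(r2, dtype=float)
    direction = (r2 - r1) / np.linalg.norm(r2 - r1)
    offset = C_MM_PER_PS * (t2_ps - t1_ps) / 2.0   # positive if detector 1 fired first
    return (r1 + r2) / 2.0 - offset * direction    # shift toward the earlier detector
```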

18.
Phys Med Biol ; 66(1): 015006, 2021 01 14.
Article in English | MEDLINE | ID: mdl-33227725

ABSTRACT

Although convolutional neural networks (CNNs) demonstrate superior performance in denoising positron emission tomography (PET) images, supervised training of a CNN requires a large dataset of paired low- and high-quality PET images. As an unsupervised learning method, the deep image prior (DIP) has recently been proposed; it can perform denoising using only the target image. In this study, we propose an innovative procedure for the DIP approach with a four-dimensional (4D) branch CNN architecture, trained end to end, to denoise dynamic PET images. The proposed 4D CNN architecture can be applied to end-to-end dynamic PET image denoising by introducing a feature extractor and a reconstruction branch for each time frame of the dynamic PET image. In the proposed DIP method, it is not necessary to prepare a large set of high-quality, patient-related PET images. Instead, a subject's own static PET image is used as additional information, the dynamic PET images are treated as training labels, and the denoised dynamic PET images are obtained from the CNN outputs. Both a simulation with [18F]fluoro-2-deoxy-D-glucose (FDG) and preclinical data with [18F]FDG and [11C]raclopride were used to evaluate the proposed framework. The results showed that our 4D DIP framework quantitatively and qualitatively outperformed 3D DIP and other unsupervised denoising methods. The proposed 4D DIP framework thus provides a promising procedure for dynamic PET image denoising.
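
The "branch" idea, one shared feature extractor feeding a lightweight reconstruction branch per time frame, can be sketched roughly as follows (an illustrative PyTorch module, not the authors' architecture):

```python
import torch
import torch.nn as nn

class BranchDIP4D(nn.Module):
    """Shared 3D feature extractor with one reconstruction branch per dynamic frame."""
    def __init__(self, n_frames: int, feat_ch: int = 32):
        super().__init__()
        self.extractor = nn.Sequential(
            nn.Conv3d(1, feat_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_ch, feat_ch, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.branches = nn.ModuleList(
            [nn.Conv3d(feat_ch, 1, kernel_size=3, padding=1) for _ in range(n_frames)]
        )

    def forward(self, static_pet: torch.Tensor) -> torch.Tensor:
        feat = self.extractor(static_pet)                  # static PET image as network input
        frames = [branch(feat) for branch in self.branches]
        return torch.cat(frames, dim=1)                    # (N, n_frames, D, H, W)
```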


Subjects
Brain/diagnostic imaging; Fluorodeoxyglucose F18/metabolism; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Positron-Emission Tomography/methods; Animals; Brain/metabolism; Haplorhini; Humans; Radiopharmaceuticals/metabolism
19.
FEBS Open Bio ; 10(12): 2640-2655, 2020 12.
Article in English | MEDLINE | ID: mdl-33048473

ABSTRACT

Colorectal cancer was the third most commonly diagnosed malignant tumor and the fourth leading cause of cancer deaths worldwide in 2012. A human colorectal cancer cell line, RCM-1, was established from a colon cancer tissue diagnosed as a well-differentiated rectum adenocarcinoma. RCM-1 cells spontaneously form 'domes' (formerly designated 'ducts') resembling villiform structures. Two sulphur-containing compounds from Cucumis melo var. conomon (Katsura-uri, or Japanese pickling melon), referred to as 3-methylthiopropionic acid ethyl ester (MTPE) and methylthioacetic acid ethyl ester (MTAE), can induce the differentiation of the unorganized cell mass of an RCM-1 human colorectal cancer cell culture into a dome. However, the underlying molecular mechanisms of such dome formation have not been previously reported. Here, we performed a structure-activity relationship analysis, which indicated that methylthioacetic acid (MTA) was the lowest molecular weight compound with the most potent dome-inducing activity among 37 MTPE and MTAE analogues, and the methylthio group was essential for this activity. According to our microarray analysis, MTA resulted in down-regulation of 537 genes and up-regulation of 117 genes. Furthermore, MTA caused down-regulation of many genes involved in cell-cycle control, with the cyclin E2 (CCNE2) and cell division cycle 25A (CDC25A) genes being the most significantly reduced. Pharmacological analysis showed that the administration of two cell-cycle inhibitors for inactivating CDC25A phosphatase (NSC95397) and the cyclin E2/cyclin-dependent kinase 2 complex (purvalanol A) increased the dome number independently of MTA. Altogether, our results indicate that MTA is the minimum unit required to induce dome formation, with the down-regulation of CDC25A and possibly CCNE2 being important steps in this process.


Subjects
Antineoplastic Agents/pharmacology; Colorectal Neoplasms/drug therapy; Cucumis melo/chemistry; Sulfur Compounds/pharmacology; Antineoplastic Agents/chemistry; Cell Differentiation/drug effects; Colorectal Neoplasms/metabolism; Colorectal Neoplasms/pathology; Drug Screening Assays, Antitumor; Esters/chemistry; Esters/pharmacology; Humans; Propionates/chemistry; Propionates/pharmacology; Sulfur Compounds/chemistry; Tumor Cells, Cultured
20.
Front Nutr ; 7: 115, 2020.
Article in English | MEDLINE | ID: mdl-32850936

ABSTRACT

Shinkiku (Massa Medicata Fermentata) is a traditional crude drug used to treat anorexia and dyspepsia in elderly patients in East Asia. Shinkiku is generally prepared by the microbial fermentation of wheat and herbs. It is also used in Japanese Kampo medicine as a component of Hangebyakujutsutemmato. However, the quality of shinkiku varies by manufacturer because there are no reference standards for controlling the quality of medicinal shinkiku. We therefore aimed to characterize the quality of various commercially available shinkiku products by chemical and microbial analysis. We collected 13 shinkiku products manufactured in China and Korea and investigated their microbial structure and chemical constituents. Amplicon sequence analysis revealed that Aspergillus sp. was a common microorganism in shinkiku products. Digestive enzymes (α-amylase, protease, and lipase), organic acids (ferulic acid, citric acid, lactic acid, and acetic acid), and 39 volatile compounds were commonly found in the products. Although there were some commonalities among shinkiku products, the microbial and chemical characteristics differed considerably by manufacturer. Aspergillus sp. was predominant in Korean products, which showed higher enzyme activities than Chinese products. Meanwhile, Bacillus sp. was commonly detected in Chinese shinkiku, and ferulic acid was higher in Chinese products. Principal component analysis based on the GC-MS peak areas of the volatiles also clearly distinguished shinkiku products manufactured in China from those manufactured in Korea. Chinese products contained higher amounts of benzaldehyde and anethole than Korean ones. Korean products were further separated into two groups: one with relatively higher linalool and terpinen-4-ol, and another with higher hexanoic acid and 1-octen-3-ol. Thus, our study revealed both the commonality and the diversity of commercial shinkiku products; the commonalities could serve as reference standards for quality control of shinkiku, and the diversity suggests the importance of microbial management to stabilize its quality.
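
The principal component analysis step is a standard one: the GC-MS peak areas form a samples-by-compounds matrix that is scaled and projected onto its first components. A minimal sketch under that assumption (not the study's script):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scores(peak_areas: np.ndarray, n_components: int = 2) -> np.ndarray:
    """PCA scores for a (n_samples, n_compounds) matrix of GC-MS peak areas."""
    scaled = StandardScaler().fit_transform(peak_areas)
    return PCA(n_components=n_components).fit_transform(scaled)
```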
