Results 1-20 of 4,163
1.
Phys Med Biol ; 69(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38640915

ABSTRACT

Objective. Beam hardening (BH) artifacts in computed tomography (CT) images originate from the polychromatic nature of x-ray photons. In a CT system with a bowtie filter, residual BH artifacts remain when polynomial fits are used. These artifacts degrade image quality, reduce contrast, and distort CT numbers. This work proposes a pixel-by-pixel correction (PPC) method to reduce the residual BH artifacts caused by a bowtie filter. Approach. The energy spectrum for each pixel at the detector after the photons pass through the bowtie filter was calculated. The spectrum was then filtered through a series of water slabs of different thicknesses, yielding the polychromatic projection corresponding to each water-slab thickness for every detector pixel. Next, a water-slab experiment at a single energy E = 69 keV was carried out to obtain the monochromatic projection. The polychromatic and monochromatic projections were then fitted with a 2nd-order polynomial. The proposed method was evaluated on digital phantoms in a virtual CT system and on physical phantoms in a real CT machine. Main results. In the virtual CT system, the standard deviation of the line profile was reduced by 23.8%, 37.3%, and 14.3%, respectively, in water phantoms of different shapes. The difference in linear attenuation coefficients (LAC) between the central and peripheral areas of an image was reduced from 0.010 to 0.003 cm-1 in the biological tissue phantom and from 0.007 cm-1 to 0 in the human phantom. The method was also validated using CT projection data obtained from an Activion16 scanner (Canon Medical Systems, Japan), where the difference in LAC between the central and peripheral areas was reduced by a factor of two. Significance. The proposed PPC method successfully removes cupping artifacts in both virtual and real CT images, and the technique is insensitive to the scanned object's shape and material.
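The core of the PPC method described above is a per-pixel 2nd-order polynomial mapping from polychromatic to monochromatic projections. A minimal sketch, with invented attenuation values and a toy beam-hardening model (not the paper's measured spectra):

```python
import numpy as np

# Toy sketch of the per-pixel polynomial correction: for one detector pixel,
# map polychromatic projections p_poly (from water slabs of known thickness)
# onto monochromatic projections p_mono = mu_mono * thickness with a
# 2nd-order polynomial. mu_mono and the hardening model are assumptions.
mu_mono = 0.2                                # water LAC near 69 keV, cm^-1 (approx.)
thickness = np.linspace(0.0, 30.0, 31)       # water slab thicknesses, cm
p_mono = mu_mono * thickness                 # ideal monochromatic projection

# Simulated polychromatic projection: beam hardening makes it grow sub-linearly.
p_poly = p_mono - 0.004 * p_mono**2

coeffs = np.polyfit(p_poly, p_mono, deg=2)   # fit p_mono as a quadratic in p_poly
p_corrected = np.polyval(coeffs, p_mono - 0.004 * p_mono**2)

print(np.max(np.abs(p_corrected - p_mono)))  # small residual after correction
```

In the actual method this fit is repeated for every detector pixel, since the bowtie filter gives each pixel its own spectrum.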


Subject(s)
Artifacts; Image Processing, Computer-Assisted; Phantoms, Imaging; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Humans
2.
Comput Assist Surg (Abingdon) ; 29(1): 2327981, 2024 12.
Article in English | MEDLINE | ID: mdl-38468391

ABSTRACT

Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is deemed to be safe for patients, making it suitable for the delivery of fractional doses. However, limitations such as a narrow field of view, beam hardening, scattered-radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensities to Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow field of view (FOV) systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects without considering intra-patient longitudinal variability, and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To showcase the viability of the approach using real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points in replicating the prescribed dose before and after calibration (53.78% vs. 90.26%). Real data confirmed this, with slightly inferior performance for the same criteria (65.36% vs. 87.20%).
These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
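The 3%/2 mm gamma pass rate quoted above combines a dose-difference tolerance with a distance-to-agreement tolerance. A simplified 1-D global version can be sketched as follows (toy Gaussian profiles, not clinical data or a validated implementation):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dose_tol=0.03, dist_tol=2.0):
    """Simplified 1-D global gamma analysis (3%/2 mm by default).

    dose_ref/dose_eval are dose profiles on the same grid x (mm). This is a
    toy sketch of the pass-rate metric, not a clinical tool.
    """
    d_max = dose_ref.max()
    passed = []
    for xi, dr in zip(x, dose_ref):
        dd = (dose_eval - dr) / (dose_tol * d_max)   # dose-difference term
        dx = (x - xi) / dist_tol                     # distance-to-agreement term
        gamma = np.sqrt(dd**2 + dx**2).min()         # best match over all points
        passed.append(gamma <= 1.0)
    return 100.0 * np.mean(passed)

x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50.0) / 20.0) ** 2)       # reference (planning CT) profile
shifted = np.exp(-((x - 51.0) / 20.0) ** 2)   # 1 mm shift: within 2 mm tolerance
print(gamma_pass_rate(ref, shifted, x))       # → 100.0
```

A profile that is merely shifted within the distance tolerance passes everywhere, while one with a large dose error (e.g. `0.8 * ref`) fails near the peak.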


Subject(s)
Protons; Spiral Cone-Beam Computed Tomography; Humans; Radiotherapy Dosage; Artificial Intelligence; Feasibility Studies; Image Processing, Computer-Assisted/methods
3.
Hum Brain Mapp ; 45(2): e26582, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339904

ABSTRACT

Preclinical evidence suggests that inter-individual variation in the structure of the hypothalamus at birth is associated with variation in the intrauterine environment, with downstream implications for future disease susceptibility. However, scientific advancement in humans is limited by a lack of validated methods for the automatic segmentation of the newborn hypothalamus. N = 215 healthy full-term infants with paired T1-/T2-weighted MR images across four sites were considered for primary analyses (mean postmenstrual age = 44.3 ± 3.5 weeks; n_male/n_female = 110/106). The outputs of FreeSurfer's hypothalamic subunit segmentation tools designed for adults (segFS) were compared against those of a novel registration-based pipeline developed here (segATLAS) and against manually edited segmentations (segMAN) as reference. Comparisons were made using Dice Similarity Coefficients (DSCs) and through expected associations with postmenstrual age at scan. In addition, we aimed to demonstrate the validity of the segATLAS pipeline by testing for the stability of inter-individual variation in hypothalamic volume across the first year of life (n = 41 longitudinal datasets available). SegFS and segATLAS segmentations demonstrated a wide spread in agreement (mean DSC = 0.65 ± 0.14 SD; range = 0.03-0.80). SegATLAS volumes were more highly correlated with postmenstrual age at scan than segFS volumes (n = 215 infants; R² = 65% for segATLAS vs. R² = 40% for segFS), and segATLAS volumes demonstrated a higher degree of agreement with segMAN reference segmentations at the whole-hypothalamus level (segATLAS DSC = 0.89 ± 0.06 SD; segFS DSC = 0.68 ± 0.14 SD) and at the subunit level (segATLAS DSC = 0.80 ± 0.16 SD; segFS DSC = 0.40 ± 0.26 SD). In addition, segATLAS (but not segFS) volumes demonstrated stability from near birth to ~1 year of age (n = 41; R² = 25%; p < 10^-3).
These findings highlight segATLAS as a valid and publicly available (https://github.com/jerodras/neonate_hypothalamus_seg) pipeline for the segmentation of hypothalamic subunits using human newborn MRI up to 3 months of age collected at resolutions on the order of 1 mm isotropic. Because the hypothalamus is traditionally understudied due to a lack of high-quality segmentation tools during the early life period, and because the hypothalamus is of high biological relevance to human growth and development, this tool may stimulate developmental and clinical research by providing new insight into the unique role of the hypothalamus and its subunits in shaping trajectories of early life health and disease.
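The Dice Similarity Coefficient used for these comparisons is straightforward to compute from binary masks. A minimal sketch with toy masks standing in for the automatic and manual segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks (illustrative)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((10, 10), bool); auto[2:8, 2:8] = True      # automatic mask
manual = np.zeros((10, 10), bool); manual[3:8, 2:8] = True  # manual reference
print(round(dice(auto, manual), 3))  # → 0.909
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why values like 0.89 indicate close agreement with the manual reference.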


Subject(s)
Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Adult; Infant, Newborn; Infant; Humans; Male; Female; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Hypothalamus/diagnostic imaging
4.
Radiat Oncol ; 19(1): 20, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38336759

ABSTRACT

OBJECTIVE: This study aimed to present a deep-learning network called contrastive learning-based cycle generative adversarial networks (CLCGAN) to mitigate streak artifacts and correct the CT values in four-dimensional cone beam computed tomography (4D-CBCT) for dose calculation in lung cancer patients. METHODS: 4D-CBCT and 4D computed tomography (CT) scans of 20 patients with locally advanced non-small cell lung cancer were used as paired data to train the deep-learning model. The lung tumors were located in the right upper lobe, right lower lobe, left upper lobe, left lower lobe, or mediastinum. Data from an additional five patients were used to create 4D synthetic computed tomography (sCT) images for testing. Using the 4D-CT as the ground truth, the quality of the 4D-sCT images was evaluated by quantitative and qualitative assessment methods. The correction of CT values was evaluated both holistically and locally. To further validate the accuracy of the dose calculations, we compared the dose distributions and calculations of 4D-CBCT and 4D-sCT with those of 4D-CT. RESULTS: The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) of the 4D-sCT increased from 87% and 22.31 dB to 98% and 29.15 dB, respectively. Compared with cycle-consistent generative adversarial networks, CLCGAN enhanced SSIM and PSNR by 1.1% (p < 0.01) and 0.42% (p < 0.01). Furthermore, CLCGAN significantly decreased the absolute mean differences of CT values in lungs, bones, and soft tissues. The dose calculation results revealed a significant improvement in 4D-sCT compared to 4D-CBCT. CLCGAN was the most accurate in dose calculations for the left lung (V5Gy), right lung (V5Gy), right lung (V20Gy), PTV (D98%), and spinal cord (D2%), with relative dose differences reduced by 6.84%, 3.84%, 1.46%, 0.86%, and 3.32%, respectively, compared to 4D-CBCT.
CONCLUSIONS: Based on the satisfactory results obtained in terms of image quality and CT value accuracy, it can be concluded that CLCGAN-corrected 4D-CBCT can be utilized for dose calculation in lung cancer.
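The PSNR figure reported above follows directly from the mean squared error. A minimal sketch, with synthetic images standing in for the 4D-CT ground truth and the uncorrected/corrected scans (the noise levels are arbitrary assumptions):

```python
import numpy as np

def psnr(img, ref, data_range=255.0):
    """Peak signal-to-noise ratio in dB (illustrative)."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 255.0, (64, 64))            # stand-in for 4D-CT
noisy = ref + rng.normal(0.0, 10.0, ref.shape)     # stand-in for raw 4D-CBCT
corrected = ref + rng.normal(0.0, 2.0, ref.shape)  # stand-in for corrected 4D-sCT
print(psnr(noisy, ref), psnr(corrected, ref))      # correction raises PSNR
```

Higher PSNR means the image is closer to the reference, which is why the jump from 22.31 dB to 29.15 dB reflects a substantial quality gain.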


Subject(s)
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Spiral Cone-Beam Computed Tomography; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Four-Dimensional Computed Tomography; Radiotherapy Planning, Computer-Assisted/methods
5.
Analyst ; 149(6): 1837-1848, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38345564

ABSTRACT

Radix glycyrrhizae (licorice) is extensively employed in traditional Chinese medicine, and serves as a crucial raw material in industries such as food and cosmetics. The quality of licorice from different origins varies greatly, so classification of its geographical origin is particularly important. This study proposes a technique for fine structure recognition and segmentation of hyperspectral images of licorice using deep-learning U-Net neural networks to segment the tissue structure patterns (phloem, xylem, and pith). Firstly, the three partitions were separately labeled using the Labelme tool, and these labels were used to train the U-Net model. Secondly, the optimal U-Net model obtained was applied to predict the three partitions of all samples. Lastly, various machine learning models (LDA, SVM, and PLS-DA) were trained on the segmented hyperspectral data. In addition, a threshold method and a circumcircle method were applied to segment the licorice hyperspectral images for comparison. The results revealed that, compared with the threshold segmentation method (which yielded SVM classifier accuracies of 99.17%, 91.15%, and 92.50% on the training set, validation set, and test set, respectively), the U-Net segmentation method significantly enhanced the accuracy of origin classification (99.06%, 94.72%, and 96.07%). Conversely, the circumcircle segmentation method did not effectively improve the accuracy of origin classification (99.65%, 91.16%, and 92.13%). By integrating Raman imaging of licorice, it can be inferred that the U-Net model, designed for region segmentation based on the inherent tissue structure of licorice, can effectively improve the accuracy of origin classification, which is of positive significance for the intelligent, information-driven quality control of Chinese medicine.
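Threshold segmentation, one of the baseline methods the U-Net approach is compared against, is often done with Otsu's method. A toy sketch on one-dimensional intensity data (the bin count and class parameters are arbitrary assumptions):

```python
import numpy as np

# Illustrative Otsu thresholding: choose the cut that maximizes the
# between-class variance of a bimodal intensity histogram.
def otsu_threshold(values, nbins=64):
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = p[:i].sum(), p[i:].sum()          # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0      # class means
        m1 = (p[i:] * centers[i:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(0.2, 0.05, 1000),   # background mode
                         rng.normal(0.7, 0.05, 1000)])  # tissue mode
t = otsu_threshold(pixels)
print(0.3 < t < 0.6)   # threshold lands between the two modes
```

Otsu works only on intensity, which is one reason a structure-aware U-Net segmentation can outperform it on tissue with overlapping intensity ranges.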


Subject(s)
Glycyrrhiza; Hyperspectral Imaging; Glycyrrhiza/chemistry; Neural Networks, Computer; Machine Learning; Plant Roots; Image Processing, Computer-Assisted/methods
6.
Med Phys ; 51(3): 1653-1673, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38323878

ABSTRACT

BACKGROUND: Dual-energy (DE) detection of bone marrow edema (BME) would be a valuable new diagnostic capability for the emerging orthopedic cone-beam computed tomography (CBCT) systems. However, this imaging task is inherently challenging because of the narrow energy separation between water (edematous fluid) and fat (healthy yellow marrow), requiring precise artifact correction and dedicated material decomposition approaches. PURPOSE: We investigate the feasibility of BME assessment using kV-switching DE CBCT with a comprehensive CBCT artifact correction framework and a two-stage projection- and image-domain three-material decomposition algorithm. METHODS: DE CBCT projections of quantitative BME phantoms (water containers 100-165 mm in size with inserts presenting various degrees of edema) and an animal cadaver model of BME were acquired on a CBCT test bench emulating the standard wrist imaging configuration of a Multitom Rax twin robotic x-ray system. The slow kV-switching scan protocol involved a 60 kV low-energy (LE) beam and a 120 kV high-energy (HE) beam switched every 0.5° over a 200° angular span. The DE CBCT data preprocessing and artifact correction framework consisted of (i) projection interpolation onto matched LE and HE projection views, (ii) lag and glare deconvolutions, and (iii) efficient Monte Carlo (MC)-based scatter correction. Virtual non-calcium (VNCa) images for BME detection were then generated by projection-domain decomposition into an aluminum (Al) and polyethylene basis (to remove beam hardening) followed by three-material image-domain decomposition into water, Ca, and fat. Feasibility of BME detection was quantified in terms of VNCa image contrast and receiver operating characteristic (ROC) curves. Robustness to object size, position in the field of view (FOV), and beam collimation (varied 20-160 mm) was investigated.
RESULTS: The MC-based scatter correction delivered > 69% reduction of cupping artifacts for moderate to wide collimations (> 80 mm beam width), which was essential to achieve accurate DE material decomposition. In a forearm-sized object, a 20% increase in water concentration (edema) of a trabecular bone-mimicking mixture presented as ∼15 HU VNCa contrast using 80-160 mm beam collimations. The variability with respect to object position in the FOV was modest (< 15% coefficient of variation). The areas under the ROC curve were > 0.9. A femur-sized object presented a somewhat more challenging task, resulting in increased sensitivity to object positioning at 160 mm collimation. In animal cadaver specimens, areas of VNCa enhancement consistent with BME were observed in DE CBCT images in regions of MRI-confirmed edema. CONCLUSION: Our results indicate that the proposed artifact correction and material decomposition pipeline can overcome the challenges of scatter and limited spectral separation to achieve relatively accurate and sensitive BME detection in DE CBCT. This study provides an important baseline for clinical translation of musculoskeletal DE CBCT to quantitative, point-of-care bone health assessment.
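The image-domain three-material decomposition step can be illustrated as a per-voxel linear solve: two energy measurements plus a volume-conservation constraint give three equations for three volume fractions. The basis LAC values below are invented placeholders, not the calibrated values used in the study:

```python
import numpy as np

# Per-voxel three-material decomposition (water, Ca, fat):
#   mu_LE = f_w*mu_w_LE + f_ca*mu_ca_LE + f_f*mu_f_LE
#   mu_HE = f_w*mu_w_HE + f_ca*mu_ca_HE + f_f*mu_f_HE
#   1     = f_w + f_ca + f_f            (volume conservation)
A = np.array([
    [0.28, 1.10, 0.25],   # LE LACs of water, Ca-rich bone, fat (made-up, cm^-1)
    [0.18, 0.48, 0.16],   # HE LACs (made-up)
    [1.00, 1.00, 1.00],   # volume fractions sum to 1
])

truth = np.array([0.6, 0.1, 0.3])    # ground-truth fractions for a test voxel
measured = A @ truth                 # simulated (mu_LE, mu_HE, 1) measurement
fractions = np.linalg.solve(A, measured)
print(np.allclose(fractions, truth))
```

A virtual non-calcium image is then formed by discarding the Ca component, so residual edema (excess water) becomes visible against the fatty marrow; the near-singularity of the system for spectrally similar materials is exactly why accurate scatter correction matters.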


Subject(s)
Bone Marrow; Cone-Beam Computed Tomography; Humans; Bone Marrow/diagnostic imaging; Feasibility Studies; Cone-Beam Computed Tomography/methods; Algorithms; Phantoms, Imaging; Edema; Cadaver; Water; Scattering, Radiation; Image Processing, Computer-Assisted/methods
7.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(1): 114-120, 2024 Feb 25.
Article in Chinese | MEDLINE | ID: mdl-38403611

ABSTRACT

The automatic segmentation of auricular acupoint divisions is the basis for realizing intelligent auricular acupoint therapy. However, due to the large number of ear acupuncture areas and the lack of clear boundaries, existing solutions face challenges in automatically segmenting auricular acupoints. Therefore, a fast and accurate approach for the automatic segmentation of auricular acupuncture divisions is needed. A deep learning-based approach for automatic segmentation of auricular acupoint divisions is proposed, which mainly includes three stages: ear contour detection, anatomical part segmentation and keypoint localization, and image post-processing. In the anatomical part segmentation and keypoint localization stage, K-YOLACT was proposed to improve operating efficiency. Experimental results showed that the proposed approach achieved automatic segmentation of 66 acupuncture points in the frontal image of the ear, with a segmentation effect better than that of existing solutions. At the same time, the mean average precision (mAP) of the anatomical part segmentation of K-YOLACT was 83.2%, the mAP of keypoint localization was 98.1%, and the running speed was significantly improved. The implementation of this approach provides a reliable solution for the accurate segmentation of auricular point images, and provides strong technical support for the modern development of traditional Chinese medicine.


Subject(s)
Acupuncture, Ear; Deep Learning; Acupuncture Points; Acupuncture, Ear/methods; Image Processing, Computer-Assisted/methods
8.
J Struct Biol ; 216(1): 108057, 2024 03.
Article in English | MEDLINE | ID: mdl-38182035

ABSTRACT

Ctfplotter in the IMOD software package is a flexible program for determination of CTF parameters in tilt series images. It uses a novel approach to find astigmatism by measuring defocus in one-dimensional power spectra rotationally averaged over a series of restricted angular ranges. Comparisons with Ctffind, Gctf, and Warp show that Ctfplotter's estimated astigmatism is generally more reliable than that found by these programs that fit CTF parameters to two-dimensional power spectra, especially at higher tilt angles. In addition to that intrinsic advantage, Ctfplotter can reduce the variability in astigmatism estimates further by summing results over multiple tilt angles (typically 5), while still finding defocus for each individual image. Its fitting strategy also produces better phase estimates. The program now includes features for tuning the sampling of the power spectrum so that it is well-represented for analysis, and for determining an appropriate fitting range that can vary with tilt angle. It can thus be used automatically in a variety of situations, not just for fitting tilt series, and has been integrated into the SerialEM acquisition software for real-time determination of focus and astigmatism.
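The idea of measuring astigmatism from 1-D power spectra averaged over restricted angular ranges can be sketched with a schematic astigmatic ring pattern: when defocus varies with azimuth, the first minimum of the sector-averaged spectrum shifts in radius between sectors. All constants below are arbitrary illustration values, not Ctfplotter's actual CTF model:

```python
import numpy as np

# Schematic astigmatic power spectrum on a 256x256 frequency grid.
n = 256
f = np.fft.fftshift(np.fft.fftfreq(n))
kx, ky = np.meshgrid(f, f)
k = np.hypot(kx, ky)
theta = np.arctan2(ky, kx)

d0, d1 = 1.0, 0.2                              # mean defocus, astigmatism (arb. units)
defocus = d0 + d1 * np.cos(2 * theta)
power = np.sin(200.0 * defocus * k**2) ** 2    # oscillating Thon-ring-like pattern

def sector_profile(center_deg, half_width=15.0, nbins=40):
    """1-D power spectrum averaged over a +/-half_width wedge (both sides)."""
    ang = np.degrees(np.arctan2(np.sin(theta - np.radians(center_deg)),
                                np.cos(theta - np.radians(center_deg))))
    mask = (np.abs(ang) < half_width) | (np.abs(np.abs(ang) - 180.0) < half_width)
    bins = np.linspace(0.03, 0.16, nbins + 1)
    idx = np.digitize(k[mask], bins)
    prof = np.array([power[mask][idx == i].mean() if np.any(idx == i) else 1.0
                     for i in range(1, nbins + 1)])
    return 0.5 * (bins[:-1] + bins[1:]), prof

c0, p0 = sector_profile(0)      # sector along the long-defocus axis
c90, p90 = sector_profile(90)   # perpendicular sector
# The first zero appears at a smaller radius where the defocus is larger.
print(c0[np.argmin(p0)], c90[np.argmin(p90)])
```

Fitting defocus independently in each sector and then fitting a sinusoid over sector angle recovers the astigmatism magnitude and axis, which is the essence of the 1-D approach described above.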


Subject(s)
Algorithms; Astigmatism; Plant Extracts; Humans; Astigmatism/diagnosis; Software; Image Processing, Computer-Assisted/methods; Cryoelectron Microscopy/methods
9.
Sci Rep ; 14(1): 2514, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38291147

ABSTRACT

Improving the quality of medical images is crucial for accurate clinical diagnosis; however, medical images are often disrupted by various types of noise, posing challenges to the reliability and diagnostic accuracy of the images. This study aims to enhance the Black Widow optimization algorithm and apply it to the task of denoising medical images to improve both the quality of medical images and the accuracy of diagnostic results. By introducing Tent mapping, we refined the Black Widow optimization algorithm to better adapt to the complex features of medical images. The algorithm's denoising capabilities for various types of noise were enhanced through the combination of multiple filters, all without the need for training each time to achieve preset goals. Simulation results, based on processing a dataset containing 1588 images with Gaussian, salt-and-pepper, Poisson, and speckle noise, demonstrated a reduction in Mean Squared Error (MSE) by 0.439, an increase in Peak Signal-to-Noise Ratio (PSNR) by 4.315, an improvement in Structural Similarity Index (SSIM) by 0.132, an enhancement in Edge-to-Noise Ratio (ENL) by 0.402, and an increase in Edge Preservation Index (EPI) by 0.614. Simulation experiments verified that the proposed algorithm has a certain advantage in terms of computational efficiency. The improvement, incorporating Tent mapping and a combination of multiple filters, successfully elevated the performance of the Black Widow algorithm in medical image denoising, providing an effective solution for enhancing medical image quality and diagnostic accuracy.
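The Tent map introduced above is a simple chaotic map often used to seed or perturb an optimizer's population more uniformly than pseudo-random draws. A minimal sketch (the parameter r and the initial value are arbitrary choices, not the paper's settings):

```python
# Tent map: piecewise-linear, chaotic on [0, 1] for 0 < r < 1.
def tent_map(x, r=0.7):
    return x / r if x < r else (1.0 - x) / (1.0 - r)

seq = [0.37]                      # arbitrary initial value
for _ in range(999):
    seq.append(tent_map(seq[-1]))

# The orbit stays in [0, 1] while wandering over the interval.
print(min(seq) >= 0.0, max(seq) <= 1.0)
```

In a population-based optimizer such as the Black Widow algorithm, successive map values can initialize candidate positions, trading clustered random starts for a more space-filling spread.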


Subject(s)
Black Widow Spider; Animals; Reproducibility of Results; Algorithms; Computer Simulation; Signal-To-Noise Ratio; Image Processing, Computer-Assisted
10.
Schizophr Res ; 264: 266-271, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38198878

ABSTRACT

AIM: We aimed to investigate potential discrepancies in the volumes of thalamic nuclei between individuals with schizophrenia and healthy controls. METHODS: The imaging data for this study were obtained from the MCICShare data repository within SchizConnect. We employed the probabilistic mapping technique developed by Iglesias et al. (2018). The analytical component entailed volumetric segmentation of the thalamus using the FreeSurfer image analysis suite. Our analysis focused on evaluating differences in the volumes of various thalamic nuclei groups, specifically the anterior, intralaminar, medial, posterior, lateral, and ventral groups in both the right and left thalami, between schizophrenia patients and healthy controls. We employed MANCOVA to analyse these dependent variables (volumes of 12 distinct thalamic nuclei groups), with diagnosis (SCZ vs. HCs) as the main explanatory variable, while controlling for covariates such as eTIV and age. RESULTS: The assumptions of MANCOVA, including the homogeneity of covariance matrices, were met. Specific univariate tests for the right thalamus revealed significant differences in the medial (F[1, 200] = 26.360, p < 0.001) and ventral groups (F[1, 200] = 4.793, p = 0.030). For the left thalamus, the medial (F[1, 200] = 22.527, p < 0.001), posterior (F[1, 200] = 8.227, p = 0.005), lateral (F[1, 200] = 7.004, p = 0.009), and ventral groups (F[1, 200] = 9.309, p = 0.003) showed significant differences. CONCLUSION: These findings suggest that particular thalamic nuclei groups in both the right and left thalami may be most affected in schizophrenia, with more pronounced differences observed in the left thalamic nuclei. FUNDING: The authors received no financial support for the research.
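A univariate slice of the analysis above — one nucleus volume, a diagnosis dummy, and eTIV/age covariates — can be sketched with an ordinary least-squares fit on synthetic data (all numbers below are invented, not study values):

```python
import numpy as np

# Synthetic covariate-adjusted group comparison: volume modeled as
# intercept + diagnosis effect + eTIV + age, mirroring one dependent
# variable of the MANCOVA with its covariates.
rng = np.random.default_rng(1)
n = 202
diagnosis = np.repeat([0, 1], n // 2)     # 0 = healthy control, 1 = SCZ
etiv = rng.normal(1500.0, 120.0, n)       # estimated total intracranial volume
age = rng.normal(35.0, 10.0, n)
volume = (900.0 + 0.3 * etiv - 1.0 * age
          - 25.0 * diagnosis + rng.normal(0.0, 20.0, n))

X = np.column_stack([np.ones(n), diagnosis, etiv, age])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(beta[1])   # covariate-adjusted group difference (true effect: -25)
```

MANCOVA extends this to all 12 nucleus-group volumes jointly, testing the diagnosis effect across the multivariate outcome rather than one volume at a time.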


Subject(s)
Schizophrenia; Humans; Schizophrenia/diagnostic imaging; Thalamic Nuclei/diagnostic imaging; Thalamus/diagnostic imaging; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods
11.
Neural Netw ; 170: 349-363, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38016230

ABSTRACT

Visual images observed by humans can be reconstructed from their brain activity. However, the visualization (externalization) of mental imagery is challenging. Only a few studies have reported successful visualization of mental imagery, and their visualizable images have been limited to specific domains such as human faces or alphabetical letters. Therefore, visualizing mental imagery for arbitrary natural images stands as a significant milestone. In this study, we achieved this by enhancing a previous method. Specifically, we demonstrated that the visual image reconstruction method proposed in the seminal study by Shen et al. (2019) heavily relied on low-level visual information decoded from the brain and could not efficiently utilize the semantic information that would be recruited during mental imagery. To address this limitation, we extended the previous method to a Bayesian estimation framework and introduced the assistance of semantic information into it. Our proposed framework successfully reconstructed both seen images (i.e., those observed by the human eye) and imagined images from brain activity. Quantitative evaluation showed that our framework could identify seen and imagined images highly accurately compared to the chance accuracy (seen: 90.7%, imagery: 75.6%, chance accuracy: 50.0%). In contrast, the previous method could only identify seen images (seen: 64.3%, imagery: 50.4%). These results suggest that our framework would provide a unique tool for directly investigating the subjective contents of the brain such as illusions, hallucinations, and dreams.
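The identification accuracies quoted above can be illustrated with a toy pairwise analysis: a reconstruction counts as identified when it correlates more with its true stimulus than with a distractor, so 50% is chance level (synthetic data, not the study's reconstructions):

```python
import numpy as np

# Toy pairwise identification: reconstructions are noisy copies of stimuli.
rng = np.random.default_rng(0)
n_images, n_pix = 50, 500
stimuli = rng.normal(size=(n_images, n_pix))
recons = 0.6 * stimuli + 0.8 * rng.normal(size=(n_images, n_pix))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

hits, trials = 0, 0
for i in range(n_images):
    r_true = corr(recons[i], stimuli[i])        # correlation with true stimulus
    for j in range(n_images):
        if j == i:
            continue
        trials += 1
        hits += r_true > corr(recons[i], stimuli[j])  # vs. distractor
accuracy = 100.0 * hits / trials
print(accuracy > 50.0)
```

Even fairly noisy reconstructions identify well above chance on this metric, which is why the gap between seen (90.7%) and imagined (75.6%) accuracy is informative about how much stimulus-specific information each condition carries.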


Subject(s)
Brain Mapping; Imagination; Humans; Bayes Theorem; Brain Mapping/methods; Brain/diagnostic imaging; Neural Networks, Computer; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods
12.
Article in English | MEDLINE | ID: mdl-38083316

ABSTRACT

Automatic segmentation of sublingual images and color quantification of the sublingual vein are of great significance for disease diagnosis in traditional Chinese medicine. With the development of computer vision, automatic sublingual image processing provides a noninvasive way to observe a patient's tongue and is convenient for both doctors and patients. However, current sublingual image segmentation methods are not accurate enough. Besides, differences in the subjective judgments of different doctors bring further difficulties to the color analysis of sublingual veins. In this paper, we propose a sublingual image segmentation method based on a modified UNet++ network to improve segmentation accuracy, a color classification approach based on a triplet network, and a color quantization method for the sublingual vein based on linear discriminant analysis to provide intuitive one-dimensional results. Our methods achieve 88.2% mean intersection over union (mIoU) and 94.1% pixel accuracy on tongue dorsum segmentation, and 69.8% mIoU and 82.7% pixel accuracy on sublingual vein segmentation. Compared with state-of-the-art methods, the segmentation mIoUs are improved by 5.8% and 5.3%, respectively. Our sublingual vein color classification method has the highest overall accuracy of 81.2% and the highest recall for the minority class of 77.5%, and the accuracy of color quantization is 90.5%. Clinical Relevance: The methods provide accurate and quantified information from the sublingual image, which can assist doctors in diagnosis.
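The mIoU metric used above can be computed directly from integer label maps. A minimal sketch on a tiny toy example:

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean intersection-over-union across classes (illustrative)."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:                       # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 2, 2]])
pred   = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [2, 2, 2, 0]])
print(round(mean_iou(pred, target, 3), 3))  # → 0.717
```

Unlike pixel accuracy, mIoU weights each class equally, so a small structure like the sublingual vein cannot be masked by a large, easy background class.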


Subject(s)
Image Processing, Computer-Assisted; Tongue; Humans; Color; Image Processing, Computer-Assisted/methods; Tongue/diagnostic imaging; Tongue/blood supply; Medicine, Chinese Traditional/methods; Jugular Veins
13.
Nat Methods ; 20(12): 2011-2020, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37985712

ABSTRACT

Maps of the nervous system that identify individual cells along with their type, subcellular components and connectivity have the potential to elucidate fundamental organizational principles of neural circuits. Nanometer-resolution imaging of brain tissue provides the necessary raw data, but inferring cellular and subcellular annotation layers is challenging. We present segmentation-guided contrastive learning of representations (SegCLR), a self-supervised machine learning technique that produces representations of cells directly from 3D imagery and segmentations. When applied to volumes of human and mouse cortex, SegCLR enables accurate classification of cellular subcompartments and achieves performance equivalent to a supervised approach while requiring 400-fold fewer labeled examples. SegCLR also enables inference of cell types from fragments as small as 10 µm, which enhances the utility of volumes in which many neurites are truncated at boundaries. Finally, SegCLR enables exploration of layer 5 pyramidal cell subtypes and automated large-scale analysis of synaptic partners in mouse visual cortex.


Subject(s)
Neuropil; Visual Cortex; Humans; Animals; Mice; Neurites; Pyramidal Cells; Supervised Machine Learning; Image Processing, Computer-Assisted
14.
Med Phys ; 50(12): 7498-7512, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37669510

ABSTRACT

BACKGROUND: The bowtie filter in cone-beam CT (CBCT) produces a spatially nonuniform x-ray beam, often leading to eclipse artifacts in the reconstructed image. The artifacts are further confounded by patient scatter and are therefore patient-dependent as well as system-specific. PURPOSE: In this study, we propose a dual-domain network for reducing the bowtie-filter-induced artifacts in CBCT images. METHODS: In the projection domain, the network compensates for the filter-induced beam hardening that is highly related to the eclipse artifacts. The output of the projection-domain network was used for image reconstruction, and the reconstructed images were fed into the image-domain network. In the image domain, the network further reduces the remaining cupping artifacts that are associated with the scatter. A single image-domain-only network was also implemented for comparison. RESULTS: The proposed approach successfully enhanced soft-tissue contrast with much-reduced image artifacts. In the numerical study, the proposed method decreased the perceptual loss and root-mean-square error (RMSE) of the images by 84.5% and 84.9%, respectively, and increased the structural similarity index measure (SSIM) by 0.26 compared to the original input images on average. In the experimental study, the proposed method decreased the perceptual loss and RMSE of the images by 87.2% and 92.1%, respectively, and increased the SSIM by 0.58 compared to the original input images on average. CONCLUSIONS: We have proposed a deep-learning-based dual-domain framework to reduce the bowtie-filter artifacts and to increase soft-tissue contrast in CBCT images. The performance of the proposed method has been successfully demonstrated in both numerical and experimental studies.


Subject(s)
Neural Networks, Computer; Quality Improvement; Humans; Cone-Beam Computed Tomography/methods; Image Processing, Computer-Assisted/methods; X-Rays; Algorithms; Phantoms, Imaging; Artifacts
15.
J Affect Disord ; 339: 495-501, 2023 10 15.
Article in English | MEDLINE | ID: mdl-37459978

ABSTRACT

BACKGROUND: Despite cognitive behavioral therapy (CBT) being a standard treatment for major depressive disorder (MDD), nearly half of patients do not respond. As one of the predictors of CBT's efficacy is amygdala reactivity to positive information, which is often decreased in MDD, we explored whether real-time fMRI neurofeedback (rtfMRI-nf) training to increase amygdala responses during positive memory recall prior to CBT would enhance its efficacy. METHODS: In a double-blind, placebo-controlled, randomized clinical trial, 35 adults with MDD received two sessions of rtfMRI-nf training to increase their amygdala (experimental group, n = 16) or parietal (control group, n = 19) responses during positive memory neurofeedback prior to receiving 10 CBT sessions. Depressive symptomatology was monitored between the rtfMRI sessions, at the first three, 9th, and 10th sessions of CBT, and at 6-month and 1-year follow-up. RESULTS: Participants in the experimental group showed decreased depressive symptomatology and higher remission rates at 6-month and 1-year follow-up than the control group. Analysis of CBT content highlighted that participants in the experimental group focused more on positive thinking and behaviors than the control group. LIMITATIONS: The study was relatively small and not sufficiently powered to detect small effects. CONCLUSIONS: CBT, when combined with amygdala neurofeedback, results in sustained clinical changes and leads to long-lasting clinical improvement, potentially by increasing focus on positive memories and cognitions.


Subject(s)
Depressive Disorder, Major; Neurofeedback; Adult; Humans; Neurofeedback/methods; Depressive Disorder, Major/diagnostic imaging; Depressive Disorder, Major/therapy; Depression; Image Processing, Computer-Assisted; Amygdala/diagnostic imaging; Magnetic Resonance Imaging/methods
16.
Ultrasonics ; 134: 107103, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37437399

ABSTRACT

This study investigates the feasibility of combined segmentation for separating lesions from non-ablated regions, which allows surgeons to easily distinguish, measure, and evaluate the lesion area, thereby improving the quality of high-intensity focused ultrasound (HIFU) surgery used for non-invasive tumor treatment. Given that the flexible shape of the Gamma mixture model (GΓMM) fits the complex statistical distribution of samples, a method combining the GΓMM with a Bayes framework is constructed to classify samples and obtain the segmentation result. An appropriate normalization range and parameter settings rapidly yield good GΓMM segmentation performance. The proposed method's scores on four metrics (Dice score: 85%, Jaccard coefficient: 75%, recall: 86%, accuracy: 96%) are better than those of conventional approaches, including Otsu thresholding and region growing. Furthermore, the statistics of sample intensity indicate that the GΓMM result is similar to that obtained manually. These results indicate the stability and reliability of the GΓMM combined with the Bayes framework for segmenting HIFU lesions in ultrasound images, and show the potential of this combination for segmenting lesion areas and evaluating the effect of therapeutic ultrasound.
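As a hedged illustration (not the authors' implementation), a two-component Gamma mixture with a Bayes decision rule classifies a pixel intensity by its largest posterior; all shapes, scales, and priors below are invented:

```python
import math

def gamma_pdf(x, shape, scale):
    # Gamma density: x^(k-1) * exp(-x/theta) / (Gamma(k) * theta^k)
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def classify(x, components, priors):
    # Bayes rule: posterior_i is proportional to prior_i * p(x | shape_i, scale_i);
    # pick the class with the largest posterior
    posts = [p * gamma_pdf(x, k, th) for p, (k, th) in zip(priors, components)]
    return max(range(len(posts)), key=lambda i: posts[i])

# Two hypothetical classes: non-ablated (low mean) vs lesion (high mean)
components = [(2.0, 10.0), (8.0, 15.0)]   # (shape, scale): means 20 and 120
priors = [0.7, 0.3]
```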


Subject(s)
Algorithms; Hyperthermia, Induced; Bayes Theorem; Reproducibility of Results; Ultrasonography/methods; Image Processing, Computer-Assisted/methods
17.
J Vis Exp ; (194)2023 04 14.
Article in English | MEDLINE | ID: mdl-37125807

ABSTRACT

Tongue diagnosis is an essential technique in traditional Chinese medicine (TCM), and the need to objectify tongue images through image processing technology is growing. The present study provides an overview of the progress made in tongue objectification over the past decade and compares segmentation models. Various deep learning models are constructed to verify and compare algorithms on real tongue image sets, and the strengths and weaknesses of each model are analyzed. The findings indicate that the U-Net algorithm outperforms the other models on pixel accuracy (PA), recall, and mean intersection over union (MIoU). However, despite the significant progress in tongue image acquisition and processing, a uniform standard for objectifying tongue diagnosis has yet to be established. To facilitate the widespread use of tongue images captured with mobile devices in tongue diagnosis objectification, further research could address the challenges posed by tongue images captured in complex environments.
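The MIoU metric used in the comparison has a standard definition; a small numpy sketch on hypothetical label maps:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # Mean intersection-over-union, averaged over classes present in the union
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```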


Subject(s)
Algorithms; Tongue; Medicine, Chinese Traditional/methods; Image Processing, Computer-Assisted/methods; Data Analysis
18.
Neural Netw ; 163: 205-218, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37062179

ABSTRACT

Detecting subpixel targets is a considerably challenging issue in hyperspectral image processing and interpretation. Most existing hyperspectral subpixel target detection methods construct detectors based on the linear mixing model, which regards a pixel as a linear combination of different spectral signatures. However, due to multiple scattering, the linear mixing model cannot capture the multiple-material interactions that are nonlinear and widespread in real-world hyperspectral images, which can result in unsatisfactory subpixel target detection. To alleviate this problem, this work presents a novel collaborative-guided spectral abundance learning model (denoted CGSAL) for subpixel target detection in hyperspectral images, based on the bilinear mixing model. The proposed CGSAL detects subpixel targets by learning the spectral abundance of the target signature in each pixel. In CGSAL, virtual endmembers and their abundances model the nonlinear scattering that accounts for multiple-material interactions, following the bilinear mixing model. In addition, we impose a collaborative term on the spectral abundance learning model to emphasize the collaborative relationships between different endmembers, which contributes to accurate spectral abundance learning and further helps to detect subpixel targets. Extensive experiments and analyses are conducted on three real-world hyperspectral datasets and one synthetic dataset to evaluate the effectiveness of CGSAL in subpixel target detection. The experimental results demonstrate that CGSAL achieves competitive performance in detecting subpixel targets and outperforms other state-of-the-art hyperspectral subpixel target detectors.
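The bilinear mixing model extends the linear one with pairwise endmember interaction terms; a sketch under invented endmembers and coefficients (the `b_ij` values and `(i, j)` pairs are assumptions for illustration, not the paper's notation):

```python
import numpy as np

def bilinear_mix(endmembers, abundances, bilinear_coeffs):
    """Bilinear mixing: linear term plus pairwise endmember interactions.

    endmembers: (p, bands) array, abundances: (p,) array,
    bilinear_coeffs: dict mapping (i, j) index pairs to interaction weights.
    """
    pixel = abundances @ endmembers                 # linear mixing term
    for (i, j), b in bilinear_coeffs.items():       # nonlinear cross terms
        pixel = pixel + b * endmembers[i] * endmembers[j]  # element-wise product
    return pixel

E = np.array([[1.0, 2.0, 3.0],      # invented endmember spectra, 3 bands
              [4.0, 5.0, 6.0]])
a = np.array([0.5, 0.5])            # invented abundances
mixed = bilinear_mix(E, a, {(0, 1): 0.1})
```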


Subject(s)
Algorithms; Interdisciplinary Placement; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted; Linear Models
19.
Int J Hyperthermia ; 40(1): 2194595, 2023.
Article in English | MEDLINE | ID: mdl-37080550

ABSTRACT

PURPOSE: In the presence of respiratory motion, temperature mapping is altered by in-plane and through-plane displacements between successive acquisitions, together with periodic phase variations. A fast 2D echo planar imaging (EPI) sequence can accommodate intra-scan motion, but limited volume coverage and inter-scan motion remain challenges during free-breathing acquisition, since position offsets can arise between the different slices. METHOD: To address this limitation, we evaluated a 2D simultaneous multi-slice EPI sequence with multiband (MB) acceleration during radiofrequency ablation on a mobile gel and in the liver of a volunteer (no heating). The sequence was evaluated in terms of resulting inter-scan motion, temperature uncertainty and elevation, potential false-positive heating, and repeatability. Lastly, to account for potential through-plane motion, a 3D motion compensation pipeline was implemented and evaluated. RESULTS: In-plane motion was compensated regardless of the MB factor, and the temperature distribution was in agreement during both the heating and cooling periods. No obvious false-positive temperature was observed under the conditions investigated. Repeated measurements yielded a 95% uncertainty below 2 °C for MB1 and MB2; uncertainty up to 4.5 °C was observed with MB3, together with aliasing artifacts. Lastly, fast simultaneous multi-slice EPI combined with 3D motion compensation reduced residual out-of-plane motion. CONCLUSION: Volumetric temperature imaging (12 slices/700 ms) could be performed with an accuracy of 2 °C or better, offering tradeoffs between acquisition time and volume coverage. Such a strategy is expected to increase procedure safety by monitoring large volumes more rapidly during MR-guided thermotherapy of mobile organs.
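The abstract does not state the thermometry method, but proton-resonance-frequency (PRF) shift thermometry is the standard approach for MR temperature mapping; a sketch of the usual phase-to-temperature conversion, with an assumed field strength and echo time:

```python
import math

def prf_delta_temp(delta_phi, b0=1.5, te=0.02, alpha=-0.01e-6):
    """Temperature change (degC) from a phase difference (rad), PRF-shift model.

    b0: field strength in T, te: echo time in s,
    alpha: PRF thermal coefficient, about -0.01 ppm/degC in aqueous tissue.
    """
    gamma = 2 * math.pi * 42.576e6   # proton gyromagnetic ratio, rad/s/T
    return delta_phi / (gamma * alpha * b0 * te)

# A phase shift of about -0.08 rad maps to roughly +1 degC at 1.5 T, TE = 20 ms
dT = prf_delta_temp(-0.0802536)
```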


Subject(s)
Echo-Planar Imaging; Thermometry; Humans; Echo-Planar Imaging/methods; Thermometry/methods; Thermography/methods; Temperature; Body Temperature; Brain; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted
20.
J Neuroeng Rehabil ; 20(1): 40, 2023 04 11.
Article in English | MEDLINE | ID: mdl-37038142

ABSTRACT

Electroencephalogram (EEG) signals are used in a variety of medical and engineering applications. However, one of the challenges of working with EEG is the difficulty of recording large amounts of data, and data augmentation, which increases the amount of available data, is a potential solution. Inspired by the success of Generative Adversarial Networks (GANs) in image processing applications, generating artificial EEG data from limited recordings with GANs has seen recent success. This article provides an overview of GAN-based techniques and approaches for augmenting EEG signals. We focus on the utility of GANs in different applications, including Brain-Computer Interface (BCI) paradigms such as motor imagery and P300-based systems, as well as emotion recognition, epileptic seizure detection and prediction, and various other applications. We address how GANs have been used in each study, the impact of using GANs on model performance, the limitations of each algorithm, and future possibilities for developing new algorithms. We emphasize the utility of GANs in augmenting the limited EEG data typically available in the studied applications.
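As a toy illustration only (no real GAN training), the augmentation flow of appending generator output to recorded trials can be sketched as follows; the linear tanh "generator" is a stand-in for a trained network, and all shapes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w):
    # Toy linear "generator": maps latent noise to an EEG-like segment
    return np.tanh(z @ w)

def augment(real_eeg, n_fake, latent_dim=16):
    # Append generated segments to the real trials along the trial axis
    n_samples = real_eeg.shape[1]
    w = rng.standard_normal((latent_dim, n_samples)) * 0.1  # untrained weights
    z = rng.standard_normal((n_fake, latent_dim))           # latent noise
    fake = generator(z, w)
    return np.concatenate([real_eeg, fake], axis=0)

real = rng.standard_normal((8, 128))   # 8 real trials, 128 time points each
augmented = augment(real, n_fake=4)    # 12 trials total after augmentation
```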


Subject(s)
Algorithms; Brain-Computer Interfaces; Humans; Electroencephalography/methods; Image Processing, Computer-Assisted/methods; Imagery, Psychotherapy