Results 1 - 20 of 144,082
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subject(s)
Histological Techniques; Microscopy; Animals; Flow Cytometry; Image Processing, Computer-Assisted
2.
J Biomed Opt ; 29(4): 046001, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38585417

ABSTRACT

Significance: Endoscopic screening for esophageal cancer (EC) may enable early cancer diagnosis and treatment. While optical microendoscopic technology has shown promise in improving specificity, the limited field of view (<1 mm) significantly reduces the ability to survey large areas efficiently in EC screening. Aim: To improve the efficiency of endoscopic screening, we propose the novel concept of an end-expandable endoscopic optical fiber probe for a larger field of visualization and, for the first time, evaluate a deep-learning-based image super-resolution (DL-SR) method to overcome the issue of limited sampling capability. Approach: To demonstrate the feasibility of the end-expandable optical fiber probe, DL-SR was applied to simulated low-resolution microendoscopic images to generate super-resolved (SR) ones. By varying the degradation model of image data acquisition, we identified the optimal parameters for optical fiber probe prototyping. The proposed screening method was validated with a human pathology reading study. Results: For the various degradation parameters considered, the DL-SR method demonstrated different levels of improvement in traditional measures of image quality. The endoscopists' interpretations of the SR images were comparable to their interpretations of the high-resolution ones. Conclusions: This work suggests avenues for the development of DL-SR-enabled sparse image reconstruction to improve high-yield EC screening and similar clinical applications.
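
A minimal sketch of the kind of forward degradation model such a study might vary when simulating low-resolution microendoscopic images (optical blur, sparse sampling, sensor noise); the specific operators and parameter values here are assumptions, not the paper's actual model.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.transform import rescale

def degrade(image, scale=0.25, blur_sigma=1.5, noise_std=0.01, rng=None):
    """Simulate a low-resolution acquisition from a high-resolution image:
    optical blur, spatial undersampling, then additive sensor noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    blurred = gaussian(image, sigma=blur_sigma)             # optical blur
    low_res = rescale(blurred, scale, anti_aliasing=True)   # sparse sampling
    noisy = low_res + rng.normal(0.0, noise_std, low_res.shape)
    return np.clip(noisy, 0.0, 1.0)

# A DL-SR network would be trained on (degrade(hr), hr) pairs; sweeping
# scale/blur_sigma mimics varying the degradation model as in the study.
```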


Subject(s)
Barrett Esophagus; Deep Learning; Esophageal Neoplasms; Humans; Optical Fibers; Esophageal Neoplasms/diagnostic imaging; Barrett Esophagus/pathology; Image Processing, Computer-Assisted
3.
Opt Express ; 32(7): 11934-11951, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38571030

ABSTRACT

Optical coherence tomography (OCT) can resolve three-dimensional biological tissue structures, but it is inevitably plagued by speckle noise that degrades image quality and obscures biological structure. Unsupervised deep learning methods have recently become popular for OCT despeckling, but they still require unpaired noisy-clean images or paired noisy-noisy images. To address this problem, we propose what we believe to be a novel unsupervised deep learning method for OCT despeckling, termed Double-free Net, which eliminates the need for ground truth data and repeated scanning by sub-sampling noisy images and synthesizing noisier images. Compared with existing unsupervised methods, Double-free Net achieves superior denoising performance when trained on datasets of retinal and human tissue images without clean images. The efficacy of Double-free Net holds significant promise for diagnostic applications in retinal pathologies and enhances the accuracy of retinal layer segmentation. Results demonstrate that Double-free Net outperforms state-of-the-art methods and adapts readily across different OCT images.
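
A sketch of the two data-generation ideas named in the abstract, under the assumption that they resemble Neighbor2Neighbor-style sub-sampling and Noisier2Noise-style noise synthesis; Double-free Net's actual recipe may differ.

```python
import numpy as np

def subsample_pair(noisy, rng=None):
    """Split one noisy image into two half-resolution sub-images by
    randomly picking one pixel from each 2x2 cell (Neighbor2Neighbor-style),
    yielding a training pair with independent noise realizations."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = noisy.shape[0] // 2 * 2, noisy.shape[1] // 2 * 2
    cells = noisy[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    cells = cells.reshape(h // 2, w // 2, 4)
    idx = rng.integers(0, 4, size=(h // 2, w // 2))
    pick = np.take_along_axis(cells, idx[..., None], axis=2)[..., 0]
    rest = (cells.sum(axis=2) - pick) / 3.0   # mean of the remaining pixels
    return pick, rest

def synthesize_noisier(noisy, sigma=0.05, rng=None):
    """Add extra synthetic noise so a network can learn the noisier -> noisy
    mapping, from which the clean image is extrapolated (Noisier2Noise-style)."""
    rng = rng if rng is not None else np.random.default_rng(1)
    return noisy + rng.normal(0.0, sigma, noisy.shape)
```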


Subject(s)
Algorithms; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Retina/diagnostic imaging; Radionuclide Imaging; Image Processing, Computer-Assisted/methods
4.
PLoS One ; 19(4): e0299099, 2024.
Article in English | MEDLINE | ID: mdl-38564618

ABSTRACT

Individual muscle segmentation is the process of partitioning medical images into regions representing each muscle. It can be used to isolate spatially structured quantitative muscle characteristics, such as volume, geometry, and the level of fat infiltration. These features are pivotal to measuring muscle functional health and to tracking the body's response to musculoskeletal and neuromusculoskeletal disorders. The gold-standard approach to muscle segmentation requires manual processing of large numbers of images and suffers from significant operator repeatability issues and high time requirements. Deep learning-based techniques have recently been suggested as capable of automating the process, which would catalyse research into the effects of musculoskeletal disorders on the muscular system. In this study, three convolutional neural networks were explored for their capacity to automatically segment twenty-three lower limb muscles of the hips, thighs, and calves from magnetic resonance images. The three neural networks (UNet, Attention UNet, and a novel Spatial Channel UNet) were trained independently with augmented images to segment six subjects, and segmented the muscles with an average Relative Volume Error (RVE) between -8.6% and 2.9%, average Dice Similarity Coefficient (DSC) between 0.70 and 0.84, and average Hausdorff Distance (HD) between 12.2 and 46.5 mm, with performance dependent on both the subject and the network used. The convolutional neural networks designed and the data used in this study are openly available, either for re-training on other medical images or for application to automatically segment new T1-weighted lower limb magnetic resonance images captured with similar acquisition parameters.
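
A sketch of common formulations of the three reported metrics (RVE, DSC, HD) for binary masks; the abstract does not give the study's exact implementations, so details such as the symmetric HD and voxel-spacing handling are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def relative_volume_error(pred, ref):
    """RVE (%): signed volume difference relative to the reference mask.
    Both inputs are boolean arrays of the same shape."""
    return 100.0 * (pred.sum() - ref.sum()) / ref.sum()

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def hausdorff(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) via Euclidean distance transforms:
    worst-case distance from either mask to the other."""
    d_to_ref = distance_transform_edt(~ref, sampling=spacing)
    d_to_pred = distance_transform_edt(~pred, sampling=spacing)
    return max(d_to_ref[pred].max(), d_to_pred[ref].max())
```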


Subject(s)
Deep Learning; Humans; Female; Animals; Cattle; Image Processing, Computer-Assisted/methods; Postmenopause; Thigh/diagnostic imaging; Muscles; Magnetic Resonance Imaging/methods
5.
Biomed Eng Online ; 23(1): 39, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38566181

ABSTRACT

BACKGROUND: Congenital heart disease (CHD) is one of the most common birth defects in the world and the leading cause of infant mortality, necessitating early diagnosis for timely intervention. Prenatal screening using ultrasound is the primary method for CHD detection, but its effectiveness is heavily reliant on the expertise of physicians, leading to subjective interpretations and potential underdiagnosis. A method for automatic analysis of fetal cardiac ultrasound images is therefore highly desirable to support an objective and effective CHD diagnosis. METHOD: In this study, we propose a deep learning-based framework for the identification and segmentation of the three vessels (the pulmonary artery (PA), aorta (Ao), and superior vena cava (SVC)) in the ultrasound three-vessel view (3VV) of the fetal heart. In the first stage of the framework, the object detection model Yolov5 is employed to identify the three vessels and localize the region of interest (ROI) within the original full-sized ultrasound images. In the second stage, a modified Deeplabv3 equipped with our novel AMFF (Attentional Multi-scale Feature Fusion) module segments the three vessels within the cropped ROI images. RESULTS: We evaluated our method on a dataset of 511 fetal heart 3VV images. Compared to existing models, our framework exhibits superior performance in segmenting all three vessels, with Dice coefficients of 85.55%, 89.12%, and 77.54% for the PA, Ao, and SVC, respectively. CONCLUSIONS: Our experimental results show that the proposed framework can automatically and accurately detect and segment the three vessels in fetal heart 3VV images. This method has the potential to assist sonographers in enhancing the precision of vessel assessment during fetal heart examinations.
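
A sketch of the detect-then-segment control flow described above; `detector` and `segmenter` are hypothetical stand-ins for the trained Yolov5 and modified Deeplabv3 models, and the ROI padding margin is an assumption.

```python
import numpy as np

def detect_then_segment(image, detector, segmenter, margin=16):
    """Two-stage pipeline: localize each vessel ROI, then segment the crop.
    `detector` returns integer (x0, y0, x1, y1) boxes; `segmenter` maps a
    crop to a per-pixel uint8 label map. Both are model placeholders."""
    full_mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for x0, y0, x1, y1 in detector(image):
        # Pad the box so vessel boundaries are not clipped at the ROI edge.
        x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
        x1 = min(image.shape[1], x1 + margin)
        y1 = min(image.shape[0], y1 + margin)
        crop_mask = segmenter(image[y0:y1, x0:x1])
        full_mask[y0:y1, x0:x1] = np.maximum(full_mask[y0:y1, x0:x1], crop_mask)
    return full_mask
```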


Subject(s)
Deep Learning; Pregnancy; Female; Humans; Vena Cava, Superior; Ultrasonography; Ultrasonography, Prenatal/methods; Fetal Heart/diagnostic imaging; Image Processing, Computer-Assisted/methods
6.
PLoS One ; 19(4): e0299360, 2024.
Article in English | MEDLINE | ID: mdl-38557660

ABSTRACT

Ovarian cancer is a highly lethal malignancy. Segmentation of ovarian medical images is a necessary prerequisite for diagnosis and treatment planning, so accurately segmenting ovarian tumors is of utmost importance. In this work, we propose a hybrid network called PMFFNet to improve the segmentation accuracy of ovarian tumors. PMFFNet uses an encoder-decoder architecture. The encoder incorporates the ViTAEv2 model to extract inter-layer multi-scale features from the feature pyramid. To address the limited information interaction imposed by a fixed window size, we introduce Varied-Size Window Attention (VSA) into the ViTAEv2 model to capture rich contextual information. Recognizing the significance of multi-scale features, we also introduce a Multi-scale Feature Fusion Block (MFB) module. The MFB module enhances the network's capacity to learn intricate features by capturing both local and multi-scale information, enabling more precise segmentation of ovarian tumors. In conjunction with our designed decoder, the model achieves scores of 97.24%, 91.15%, and 87.25% on the mACC, mIoU, and mDice metrics, respectively, on the MMOTU dataset. Compared with several UNet-based and other advanced models, our approach demonstrates the best segmentation performance.
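
A PyTorch sketch in the spirit of a multi-scale feature fusion block: parallel dilated convolutions capture local and multi-scale context, and a 1x1 convolution fuses the branches. This is an illustrative stand-in, not the paper's actual MFB design.

```python
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates see local and
    progressively wider context; a 1x1 convolution fuses the branches and a
    residual connection preserves the input features."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1)) + x

# feats = MultiScaleFusionBlock(64)(torch.randn(1, 64, 32, 32))
```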


Subject(s)
Ovarian Neoplasms; Female; Humans; Ovarian Neoplasms/diagnostic imaging; Benchmarking; Learning; Medical Oncology; Image Processing, Computer-Assisted
7.
Sci Data ; 11(1): 330, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38570515

ABSTRACT

Variations in the color and texture of histopathology images are caused by differences in staining conditions and imaging devices between hospitals. These biases decrease the robustness of machine learning models exposed to out-of-domain data. To address this issue, we introduce a comprehensive histopathology image dataset named PathoLogy Images of Scanners and Mobile phones (PLISM). The dataset consists of 46 human tissue types stained under 13 hematoxylin and eosin conditions and captured using 13 imaging devices. Precisely aligned image patches from different domains allow for an accurate evaluation of color and texture properties in each domain. Variation across PLISM domains was assessed and found to be substantial, particularly between whole-slide images and smartphones. Furthermore, we assessed the mitigation of domain shift achieved by a convolutional neural network pre-trained on PLISM. PLISM is a valuable resource that facilitates the precise evaluation of domain shifts in digital pathology and contributes significantly towards the development of robust machine learning models that can effectively address domain shift in histological image analysis.
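
A small sketch of one way to quantify per-domain color properties on aligned patches (per-channel statistics in CIELAB space and a simple between-domain distance); the metrics actually used for PLISM are not specified in the abstract, so this is purely illustrative.

```python
import numpy as np
from skimage.color import rgb2lab

def domain_color_stats(patches):
    """Summarize the color properties of aligned RGB patches from one
    domain: per-channel mean and std in CIELAB space."""
    lab = np.stack([rgb2lab(p) for p in patches])          # (N, H, W, 3)
    return lab.mean(axis=(0, 1, 2)), lab.std(axis=(0, 1, 2))

def domain_shift(stats_a, stats_b):
    """Scalar shift between two domains: Euclidean distance between
    their mean Lab colors (a deliberately simple summary)."""
    return float(np.linalg.norm(stats_a[0] - stats_b[0]))
```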


Subject(s)
Histological Techniques; Image Processing, Computer-Assisted; Machine Learning; Neural Networks, Computer; Staining and Labeling; Humans; Eosine Yellowish-(YS); Image Processing, Computer-Assisted/methods; Histology
8.
PLoS One ; 19(4): e0301019, 2024.
Article in English | MEDLINE | ID: mdl-38573957

ABSTRACT

Automatic and accurate segmentation of medical images plays an essential role in disease diagnosis and treatment planning. Convolutional neural networks have achieved remarkable results in medical image segmentation over the past decade, and deep learning models based on the Transformer architecture have also succeeded tremendously in this domain. However, due to ambiguous boundaries and highly complex anatomical structures in medical images, effective structure extraction and accurate segmentation remain open problems. In this paper, we propose a novel Dual Encoder Network named DECTNet to alleviate this problem. Specifically, DECTNet comprises four components: a convolution-based encoder, a Transformer-based encoder, a feature fusion decoder, and a deep supervision module. The convolutional encoder extracts fine spatial contextual details in images, while the Transformer encoder, designed on a hierarchical Swin Transformer architecture, models global contextual information. The novel feature fusion decoder integrates the multi-scale representations from the two encoders and selects features relevant to the segmentation task via a channel attention mechanism. Further, the deep supervision module is used to accelerate the convergence of the proposed method. Extensive experiments demonstrate that, compared to seven other models, the proposed method achieves state-of-the-art results on four segmentation tasks: skin lesion segmentation, polyp segmentation, Covid-19 lesion segmentation, and MRI cardiac segmentation.
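
A PyTorch sketch of a channel-attention fusion step of the kind the decoder description suggests (squeeze-and-excitation-style gating over concatenated CNN and Transformer features); DECTNet's actual decoder may differ.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Concatenate CNN and Transformer feature maps, then reweight channels
    with a squeeze-and-excitation-style gate so later decoder stages can
    emphasize the channels most useful for segmentation."""
    def __init__(self, cnn_ch, trans_ch, reduction=8):
        super().__init__()
        ch = cnn_ch + trans_ch
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),             # squeeze: global context
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
            nn.Sigmoid(),                        # excitation: channel weights
        )

    def forward(self, f_cnn, f_trans):
        x = torch.cat([f_cnn, f_trans], dim=1)   # assumes same spatial size
        return x * self.gate(x)
```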


Subject(s)
COVID-19; Physical Examination; Humans; Electric Power Supplies; Heart; Neural Networks, Computer; Image Processing, Computer-Assisted
9.
Sci Rep ; 14(1): 7906, 2024 04 04.
Article in English | MEDLINE | ID: mdl-38575710

ABSTRACT

This paper delves into the specialized domain of human action recognition, focusing on the identification of Indian classical dance poses, specifically in Bharatanatyam. Within the dance context, a "Karana" embodies a synchronized and harmonious movement encompassing body, hands, and feet, as defined by the Natyashastra. The essence of a Karana lies in the amalgamation of nritta hasta (hand movements), sthaana (body postures), and chaari (leg movements). The Natyashastra codifies 108 karanas, showcased in the intricate stone carvings adorning the Nataraj temples of Chidambaram, which depict Lord Shiva's association with these movements. Automating pose identification in Bharatanatyam is challenging owing to the vast array of variations, encompassing hand and body postures, mudras (hand gestures), facial expressions, and head gestures. To simplify this intricate task, this research employs image processing and automation techniques. The proposed methodology comprises four stages: acquisition and pre-processing of images, involving skeletonization and data augmentation; feature extraction from images; classification of dance poses using a convolutional neural network model (InceptionResNetV2); and visualization of 3D models through mesh creation from point clouds. Advanced technologies, such as the MediaPipe library for body key point detection and deep learning networks, streamline the identification process. Data augmentation, a pivotal step, expands small datasets and enhances the model's accuracy. The convolutional neural network model proved effective in accurately recognizing intricate dance movements, paving the way for streamlined analysis and interpretation. This approach not only simplifies the identification of Bharatanatyam poses but also sets a precedent for enhancing accessibility and efficiency for practitioners and researchers in Indian classical dance.
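
A minimal sketch of body key point extraction with the MediaPipe Pose solution, the library named above; the file name is illustrative, and a downstream classifier would consume these landmarks or the skeletonized image.

```python
import cv2
import mediapipe as mp

def extract_pose_keypoints(image_path):
    """Run MediaPipe Pose on one image and return (x, y, visibility) for
    each of its 33 body landmarks, or None if no body is detected."""
    image = cv2.imread(image_path)
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    return [(lm.x, lm.y, lm.visibility)
            for lm in results.pose_landmarks.landmark]

# keypoints = extract_pose_keypoints("karana_001.jpg")  # path is illustrative
```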


Subject(s)
Augmented Reality; Humans; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Head; Gestures
10.
PLoS One ; 19(4): e0300122, 2024.
Article in English | MEDLINE | ID: mdl-38578724

ABSTRACT

We introduce the concept of photophysical image analysis (PIA) and an associated pipeline for unsupervised probabilistic image thresholding for images recorded by electron-multiplying charge-coupled device (EMCCD) cameras. We base our approach on a closed-form analytic expression for the characteristic function (Fourier transform of the probability mass function) of the image counts recorded by an EMCCD camera, which takes into account both stochasticity in the arrival of photons at the imaging camera and subsequent noise induced by the camera's detection system. The only assumption in our method is that the background photon arrival is described by a stationary Poisson process (we make no assumption about the photon statistics of the signal). We estimate the background photon statistics parameter, λbg, from an image containing both background and signal pixels by use of a novel truncated fit procedure with an automatically determined image count threshold. Prior to this, the camera noise model parameters are estimated in a calibration step. Utilizing the estimates for the camera parameters and λbg, we then introduce a probabilistic thresholding method where, for the first time, the fraction of misclassified pixels can be determined a priori for a general image in an unsupervised way. We use synthetic images to validate our a priori estimates and to benchmark against the Otsu method, a popular unsupervised non-probabilistic image thresholding method that provides no a priori error-rate estimates. For completeness, we lastly present a simple heuristic general-purpose segmentation method based on the thresholding results, which we apply to segmentation of synthetic images and experimental images of fluorescent beads and lung cell nuclei. Our publicly available software enables fully automated, unsupervised, probabilistic photophysical image analysis.
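
A toy sketch of the central idea: when background counts follow a Poisson process with known rate λbg, a threshold can be chosen whose background misclassification rate is known a priori, unlike Otsu's. The EMCCD gain and readout-noise stages are omitted for brevity, so this simplifies the paper's full camera model.

```python
import numpy as np
from scipy.stats import poisson
from skimage.filters import threshold_otsu

def poisson_background_threshold(lambda_bg, alpha=1e-3):
    """Smallest integer t such that a Poisson(lambda_bg) background pixel
    exceeds t with probability <= alpha, so the background error rate is
    known a priori (camera gain/readout noise omitted here)."""
    return int(poisson.ppf(1.0 - alpha, lambda_bg))

rng = np.random.default_rng(0)
lambda_bg, signal = 5.0, 30.0
image = rng.poisson(lambda_bg, (128, 128)).astype(float)
image[40:60, 40:60] = rng.poisson(signal, (20, 20))    # synthetic "cell"

t_prob = poisson_background_threshold(lambda_bg, alpha=1e-3)
t_otsu = threshold_otsu(image)                         # no error guarantee
print(f"probabilistic threshold: {t_prob}, Otsu threshold: {t_otsu:.1f}")
# Fraction of pure-background pixels misclassified, ~<= alpha by design:
print("background error rate:", np.mean(image[:40] > t_prob))
```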


Subject(s)
Diagnostic Imaging; Electrons; Image Processing, Computer-Assisted/methods; Fourier Analysis
11.
PLoS One ; 19(4): e0300767, 2024.
Article in English | MEDLINE | ID: mdl-38578733

ABSTRACT

Semantic segmentation of cityscapes via deep learning is an essential research topic that offers a more nuanced comprehension of urban landscapes. Deep learning techniques tackle urban complexity and diversity, unlocking a broad range of applications, including urban planning, transportation management, autonomous driving, and smart city efforts. Through rich context and insights, semantic segmentation helps decision-makers and stakeholders make informed decisions for sustainable and effective urban development. This study presents an in-depth exploration of cityscape image segmentation using the U-Net deep learning model. The proposed U-Net architecture comprises an encoder and a decoder. The encoder uses convolutional layers and downsampling to extract hierarchical information from input images; each downsampling step reduces spatial dimensions and increases feature depth, aiding context acquisition. Batch normalization and dropout layers stabilize training and prevent overfitting during encoding. The decoder reconstructs higher-resolution feature maps using "UpSampling2D" layers. Through extensive experimentation and evaluation on the Cityscapes dataset, this study demonstrates the effectiveness of the U-Net model in achieving state-of-the-art image segmentation results. The results show that the proposed model attains higher accuracy, mean IoU, and mean Dice scores than existing models.
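
A Keras sketch of one encoder step and one decoder step matching the description above (convolutions with batch normalization and dropout on the way down, "UpSampling2D" with a skip connection on the way up); filter counts and the dropout rate are assumptions.

```python
from tensorflow.keras import layers

def encoder_block(x, filters, dropout=0.2):
    """Two convolutions, then downsampling: spatial size halves while
    feature depth grows, as described for the encoder."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    skip = x                          # kept for the decoder's skip connection
    return layers.MaxPooling2D(2)(x), skip

def decoder_block(x, skip, filters):
    """UpSampling2D restores resolution; the skip connection re-injects
    fine spatial detail lost during downsampling."""
    x = layers.UpSampling2D(2)(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.BatchNormalization()(x)
```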


Subject(s)
Deep Learning; Semantics; City Planning; Empirical Research; Hydrolases; Image Processing, Computer-Assisted
12.
Sci Rep ; 14(1): 8253, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589478

ABSTRACT

This work presents a deep learning approach for rapid and accurate estimation of muscle water T2, with subject-specific fat T2 calibration, from multi-spin-echo acquisitions. The method addresses the computational limitations of conventional bi-component Extended Phase Graph fitting methods (nonlinear least squares and dictionary-based) by leveraging fully connected neural networks for fast processing with minimal computational resources. We validated the approach through in vivo experiments using two different MRI vendors. The results showed strong agreement between our deep learning approach and the reference methods, summarized by Lin's concordance correlation coefficients ranging from 0.89 to 0.97. Further, the deep learning method achieved a significant computational speed-up, processing data 116 and 33 times faster than the nonlinear least squares and dictionary methods, respectively. In conclusion, the proposed approach demonstrated significant time and resource efficiency improvements over conventional methods while maintaining similar accuracy. This methodology makes the processing of water T2 data faster and easier for the user and will facilitate the use of quantitative muscle water T2 mapping in clinical and research studies.
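
A PyTorch sketch of the general approach: a small fully connected network maps a multi-echo decay curve directly to the fitted parameters, replacing per-voxel nonlinear or dictionary fitting. The layer sizes, echo count, and output parameterization are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class T2RegressionMLP(nn.Module):
    """Fully connected network mapping a normalized multi-spin-echo decay
    curve (one value per echo) to [water T2, water fraction]."""
    def __init__(self, n_echoes=17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_echoes, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2),
        )

    def forward(self, echoes):
        return self.net(echoes)

# Training would use curves simulated with a bi-component EPG model as
# inputs and the known parameters as regression targets; inference is
# then a single batched forward pass over all voxels.
model = T2RegressionMLP()
t2_and_fraction = model(torch.rand(1024, 17))   # batch of voxel curves
```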


Subject(s)
Algorithms; Deep Learning; Water; Calibration; Magnetic Resonance Imaging/methods; Muscles/diagnostic imaging; Phantoms, Imaging; Image Processing, Computer-Assisted/methods; Brain
13.
BMC Med Imaging ; 24(1): 83, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589793

ABSTRACT

This research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To isolate leukocytes, background, erythrocyte, and leukocyte masks are created from the blood cell images. Isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the model. A deep convolutional neural network (CNN) model is employed on the augmented dataset for effective feature extraction and classification. The deep CNN model consists of four convolutional blocks comprising eleven convolutional layers, eight batch normalization layers, eight Rectified Linear Unit (ReLU) layers, and four dropout layers to capture increasingly complex patterns. The experiments used a publicly available Kaggle dataset of 12,444 images of the four leukocyte types. The results showcase the robustness of the proposed framework, achieving an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing establishes a foundation for future developments in medical image analysis.
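
An OpenCV sketch of the kind of mask-and-contour pipeline the abstract names (thresholding, morphological noise removal, contouring); the channel choice, kernel size, and area threshold are illustrative assumptions, not the paper's exact steps.

```python
import cv2
import numpy as np

def isolate_leukocytes(bgr):
    """Rough leukocyte isolation: leukocyte nuclei stain more strongly than
    erythrocytes, so Otsu-threshold the saturation channel, clean the mask
    with morphology, and contour the remaining blobs."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, mask = cv2.threshold(hsv[:, :, 1], 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cells = [c for c in contours if cv2.contourArea(c) > 200]
    return mask, cells
```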


Subject(s)
Deep Learning; Humans; Data Curation; Leukocytes; Neural Networks, Computer; Blood Cells; Image Processing, Computer-Assisted/methods
14.
PLoS One ; 19(4): e0298287, 2024.
Article in English | MEDLINE | ID: mdl-38593135

ABSTRACT

Cryo-electron micrographs present varying sizes, shapes, and distribution densities of individual particles, severe background noise, high levels of impurities, irregular particle shapes, blurred edges, and particles similar in color to the background. Picking single particles from multiple types of cryo-electron micrographs with good adaptability therefore remains a challenge in image analysis. This paper uses the MixUp hybrid augmentation algorithm to enrich image feature information in the pre-processing stage; builds a feature perception network based on a channel self-attention mechanism in the feed-forward network of the Swin Transformer model, enabling adaptive adjustment of self-attention across different single particles and increasing the network's tolerance to noise; incorporates the PReLU activation function to enhance information exchange between pixel blocks of different single particles; and combines the cross-entropy loss with the softmax function to construct a Swin Transformer-based classification network for single-particle detection in cryo-electron micrographs (Swin-cryoEM), achieving mixed detection of multiple types of single particles. Swin-cryoEM adapts well to picking single particles in many types of cryo-electron micrographs, improves the accuracy and generalization ability of single-particle picking, and provides high-quality data support for single-particle three-dimensional reconstruction. Ablation and comparison experiments were designed to evaluate Swin-cryoEM in detail and comprehensively on multiple datasets. Average Precision is an important evaluation index of the model: the optimal Average Precision reached 95.5% in the training stage, and single-particle picking performance was also superior in the prediction stage. The model inherits the advantages of the Swin Transformer detection model and outperforms mainstream models such as Faster R-CNN and YOLOv5 in single-particle detection on cryo-electron micrographs.
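
A minimal sketch of MixUp augmentation as commonly defined (a convex combination of two examples and their labels); the Beta parameter and its application to micrograph/label pairs are assumptions about how the pre-processing stage might look.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """MixUp: blend two training examples and their labels with a weight
    drawn from Beta(alpha, alpha), enriching feature information and
    regularizing the downstream detector."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2      # blended micrographs
    y = lam * y1 + (1.0 - lam) * y2      # blended (one-hot) labels
    return x, y
```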


Subject(s)
Algorithms; Electrons; Cryoelectron Microscopy/methods; Image Processing, Computer-Assisted/methods
15.
Sci Rep ; 14(1): 8348, 2024 04 09.
Article in English | MEDLINE | ID: mdl-38594373

ABSTRACT

Single molecule fluorescence in situ hybridisation (smFISH) has become a valuable tool to investigate the mRNA expression of single cells. However, it requires a considerable amount of programming expertise to use currently available open-source analytical software packages to extract and analyse quantitative data about transcript expression. Here, we present FISHtoFigure, a new software tool developed specifically for the analysis of mRNA abundance and co-expression in QuPath-quantified, multi-labelled smFISH data. FISHtoFigure facilitates the automated spatial analysis of transcripts of interest, allowing users to analyse populations of cells positive for specific combinations of mRNA targets without the need for computational image analysis expertise. As a proof of concept and to demonstrate the capabilities of this new research tool, we have validated FISHtoFigure in multiple biological systems. We used FISHtoFigure to identify an upregulation in the expression of Cd4 by T-cells in the spleens of mice infected with influenza A virus, before analysing more complex data showing crosstalk between microglia and regulatory B-cells in the brains of mice infected with Trypanosoma brucei brucei. These analyses demonstrate the ease of analysing cell expression profiles using FISHtoFigure and the value of this new tool in the field of smFISH data analysis.
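
FISHtoFigure's input format is not described in the abstract, so purely as an illustration, here is the kind of per-cell co-expression query the tool automates, over a hypothetical QuPath-style per-cell spot-count table; all column names and thresholds are made up.

```python
import pandas as pd

# Hypothetical per-cell export with one transcript-count column per target;
# the column names are illustrative, not FISHtoFigure's actual schema.
cells = pd.DataFrame({
    "cell_id": [1, 2, 3, 4],
    "Cd4_spots": [12, 0, 7, 1],
    "Il10_spots": [0, 5, 9, 0],
})

def positive_for(df, targets, min_spots=3):
    """Select cells positive for every target in `targets`, i.e. with a
    spot count at or above a per-target threshold."""
    mask = pd.Series(True, index=df.index)
    for t in targets:
        mask &= df[f"{t}_spots"] >= min_spots
    return df[mask]

double_pos = positive_for(cells, ["Cd4", "Il10"])   # co-expressing cells
```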


Subject(s)
Image Processing, Computer-Assisted; Software; Animals; Mice; RNA, Messenger/metabolism; In Situ Hybridization, Fluorescence/methods; Up-Regulation
16.
Adv Neurobiol ; 36: 795-814, 2024.
Article in English | MEDLINE | ID: mdl-38468064

ABSTRACT

To explore questions asked in neuroscience, neuroscientists rely heavily on the tools available. One such toolset is ImageJ, free, open-source software for biological digital image analysis. Open-source software has matured alongside fractal analysis in neuroscience, and today ImageJ is not a niche tool but a foundation relied on by a substantial number of neuroscientists working in diverse fields, including fractal analysis. This is largely owing to two features of open-source software leveraged in ImageJ and vital to vigorous neuroscience: customizability and collaboration. With those notions in mind, this chapter's aim is threefold: (1) to introduce ImageJ, (2) to outline ways this software tool has influenced fractal analysis in neuroscience and shaped the questions researchers devote time to, and (3) to review a few examples of ways investigators have developed and used ImageJ for pattern extraction in fractal analysis. Throughout this chapter, the focus is on fostering a collaborative and creative mindset for translating knowledge of the fractal geometry of the brain into clinical reality.


Subject(s)
Fractals; Translational Research, Biomedical; Humans; Image Processing, Computer-Assisted/methods; Software
17.
Comput Assist Surg (Abingdon) ; 29(1): 2327981, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38468391

ABSTRACT

Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for imaging at each treatment fraction. However, limitations such as a narrow field of view (FOV), beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensities to Hounsfield Unit (HU) values. This study proposes a deep learning framework for calibrating CBCT images acquired with narrow-FOV systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects, without intra-patient longitudinal variability, and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To show the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points between uncalibrated and calibrated scans (53.78% vs 90.26%). Real data confirmed this with slightly lower performance for the same criteria (65.36% vs 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
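
A PyTorch sketch of the cycle-consistency term at the heart of cGAN-based CBCT correction, which is what allows the HU remapping to be learned from unpaired scans; the generator modules are placeholders, and the full objective would add adversarial (and typically identity) losses.

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(g_cbct2ct, g_ct2cbct, cbct, ct):
    """CycleGAN's unpaired-training constraint: translating an image to the
    other domain and back must reproduce the input. `g_cbct2ct` and
    `g_ct2cbct` are placeholder generator networks."""
    l1 = nn.L1Loss()
    return (l1(g_ct2cbct(g_cbct2ct(cbct)), cbct) +
            l1(g_cbct2ct(g_ct2cbct(ct)), ct))

# Full training adds adversarial losses from two discriminators, one per
# image domain; this term alone enforces the cycle constraint.
```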


Subject(s)
Protons; Spiral Cone-Beam Computed Tomography; Humans; Radiotherapy Dosage; Artificial Intelligence; Feasibility Studies; Image Processing, Computer-Assisted/methods
18.
Exp Biol Med (Maywood) ; 249: 10064, 2024.
Article in English | MEDLINE | ID: mdl-38463389

ABSTRACT

Ultrasonographic characteristics of skeletal muscles are related to their health status and functional capacity, but they still provide limited information on muscle composition during the inflammatory process. It has been demonstrated that an alteration in muscle composition or structure can have disparate effects on different ranges of ultrasonogram pixel intensities. Therefore, monitoring specific clusters or bands of pixel intensity values could help detect echotextural changes in skeletal muscles associated with neurogenic inflammation. Here we compare two methods of ultrasonographic image analysis: the echointensity (EI) segmentation approach (EI banding method) and detection of selective pixel intensity ranges correlated with the expression of inflammatory regulators using an in-house developed computer algorithm (r-Algo). This study utilized an experimental model of neurogenic inflammation in segmentally linked myotomes (i.e., the rectus femoris (RF) muscle) of rats subjected to lumbar facet injury. Our results show no significant differences in RF echotextural variables for different EI bands (with 50- or 25-pixel intervals) between surgery and sham-operated rats, and no significant correlations between individual EI band pixel characteristics and the protein expression of the inflammatory regulators studied. However, mean numerical pixel values for the pixel intensity ranges identified with the proprietary r-Algo computer program correlated with the protein expression of ERK1/2 and substance P (both 86-101-pixel ranges) and CaMKII (86-103-pixel range) in RF, and were greater (p < 0.05) in surgery rats than in their sham-operated counterparts. Our findings indicate that computer-aided identification of specific pixel intensity ranges was critical for ultrasonographic detection of changes in the expression of inflammatory mediators in neurosegmentally linked skeletal muscles of rats after facet injury.
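
The r-Algo program is proprietary and not described in detail, so as an illustration only, here is a numpy/scipy sketch contrasting the two analyses: fixed EI bands versus a brute-force search for the intensity range whose per-image mean best correlates with a protein marker.

```python
import numpy as np
from scipy.stats import pearsonr

def band_means(image, width=25):
    """Mean pixel value inside fixed echointensity bands (EI banding)."""
    out = {}
    for lo in range(0, 256, width):
        band = image[(image >= lo) & (image < lo + width)]
        out[(lo, lo + width - 1)] = band.mean() if band.size else np.nan
    return out

def best_correlated_range(images, marker, step=5, min_width=5):
    """Coarse brute-force analogue of the range search: find the intensity
    range whose per-image mean pixel value correlates best with a marker."""
    best_r, best_range = 0.0, None
    for lo in range(0, 256 - min_width, step):
        for hi in range(lo + min_width, 256, step):
            means = [img[(img >= lo) & (img <= hi)].mean() for img in images]
            if np.any(np.isnan(means)):
                continue                      # skip empty-band combinations
            r, _ = pearsonr(means, marker)
            if abs(r) > abs(best_r):
                best_r, best_range = r, (lo, hi)
    return best_r, best_range
```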


Subject(s)
Neurogenic Inflammation; Quadriceps Muscle; Rats; Animals; Quadriceps Muscle/diagnostic imaging; Muscle, Skeletal/diagnostic imaging; Muscle, Skeletal/physiology; Ultrasonography/methods; Image Processing, Computer-Assisted
19.
Biomed Res Int ; 2024: 9267554, 2024.
Article in English | MEDLINE | ID: mdl-38464681

ABSTRACT

Purpose: Segmentation of hepatocellular carcinoma (HCC) is crucial; however, manual segmentation is subjective and time-consuming. Accurate, automatic lesion contouring for HCC is desirable in clinical practice. In response to this need, our study introduced a segmentation approach for HCC that combines deep convolutional neural networks (DCNNs) with radiologist intervention in magnetic resonance imaging (MRI). We sought to design a deep learning segmentation method that segments automatically from manual location information, to support moderately experienced radiologists, and we verified the viability of this method for assisting radiologists in accurate and fast lesion segmentation. Method: We developed a semiautomatic approach for segmenting HCC using a DCNN in conjunction with radiologist intervention in dual-phase gadolinium-ethoxybenzyl-diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced MRI. We developed a DCNN and a deep fusion network (DFN) trained on full-size images, namely DCNN-F and DFN-F. Furthermore, the DFN was applied to image blocks containing tumor lesions roughly contoured by a radiologist with 10 years of experience in abdominal MRI; this method was named DFN-R. Another radiologist with five years of experience (moderate experience) contoured the tumor lesions for comparison with our proposed methods. The ground truth was contoured by an experienced radiologist and reviewed by an independent experienced radiologist. Results: The mean DSC of DCNN-F, DFN-F, and DFN-R was 0.69 ± 0.20 (median, 0.72), 0.74 ± 0.21 (median, 0.77), and 0.83 ± 0.13 (median, 0.88), respectively. The mean DSC of the segmentation by the radiologist with moderate experience was 0.79 ± 0.11 (median, 0.83), lower than the performance of DFN-R. Conclusions: Deep learning using dual-phase MRI shows great potential for HCC lesion segmentation. The radiologist-aided semiautomated method (DFN-R) achieved better performance than manual contouring by the radiologist with moderate experience, although the difference was not statistically significant.


Subject(s)
Carcinoma, Hepatocellular; Deep Learning; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/diagnostic imaging; Liver Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Radiologists
20.
J Vis Exp ; (204)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38465926

ABSTRACT

This study introduces cone-beam computed tomography (CBCT) digitization and integration with digital dental images (DDI) based on artificial intelligence (AI)-based registration (ABR), and evaluates the reliability and reproducibility of this method compared with surface-based registration (SBR). This retrospective study used CBCT images and DDI of 17 patients who had undergone computer-aided bimaxillary orthognathic surgery. The digitization of CBCT images and their integration with DDI were repeated using an AI-based program, with the CBCT images and DDI integrated by point-to-point registration. With the SBR method, in contrast, three landmarks were identified manually on the CBCT and DDI, which were then integrated with the iterative closest point method. After two repeated integrations with each method, the three-dimensional coordinate values of the first maxillary molars and central incisors and their differences were obtained. Intraclass correlation coefficient (ICC) testing was performed to evaluate intra-observer reliability with each method's coordinates and to compare reliability between ABR and SBR. Intra-observer reliability was significant and almost perfect for each method. The mean difference between the first and second registrations was not significant within ABR or SBR, or between the two methods; however, the ranges were narrower with ABR than with SBR. This study shows that AI-based digitization and integration are reliable and reproducible.
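
A numpy sketch of rigid point-to-point (landmark) registration via the Kabsch least-squares solution, the kind of step both workflows rely on (ICP iterates it with nearest-neighbor matching); this is illustrative, not the AI program's implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch) mapping source landmarks to
    corresponding destination landmarks; returns rotation R and shift t
    with dst ~= src @ R.T + t. Points must correspond row-by-row."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)            # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, dst_c - src_c @ r.T

# Example: three corresponding landmarks picked on DDI and CBCT (values
# illustrative); a pure translation recovers R = identity.
src = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0]])
R, t = rigid_register(src, src + [1.0, 2.0, 3.0])
```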


Subject(s)
Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Humans; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Artificial Intelligence; Retrospective Studies; Cone-Beam Computed Tomography/methods