Results 1 - 20 of 142,090
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subjects
Histological Techniques; Microscopy; Animals; Flow Cytometry; Image Processing, Computer-Assisted
2.
Sci Rep ; 14(1): 5068, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38429362

ABSTRACT

Using deep learning to segment oral CBCT images for clinical diagnosis and treatment is an important research direction in clinical dentistry. However, blurred contours and scale differences limit the segmentation accuracy of current methods at the crown edge and the root, making these regions difficult-to-segment samples in the oral CBCT segmentation task. To address these problems, this work proposes a Difficult-to-Segment Focus Network (DSFNet) for segmenting oral CBCT images. The network uses a Feature Capturing Module (FCM) to efficiently capture local and long-range features, enhancing feature extraction, and a Multi-Scale Feature Fusion Module (MFFM) to merge multi-scale feature information. To further increase the loss contribution of difficult-to-segment samples, a hybrid loss function combining Focal Loss and Dice Loss is proposed. With this hybrid loss, DSFNet achieves a 91.85% Dice Similarity Coefficient (DSC) and 0.216 mm Average Symmetric Surface Distance (ASSD) in oral CBCT segmentation tasks. Experimental results show that the proposed method outperforms current dental CBCT image segmentation techniques and has real-world applicability.
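A hybrid Focal + Dice loss of the kind described above can be sketched as follows (a minimal NumPy formulation for binary masks; the weighting factor `alpha` and focusing parameter `gamma` are illustrative defaults, not values from the paper):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2|X ∩ Y| / (|X| + |Y|); penalises poor region overlap
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-6):
    # Focal loss down-weights easy pixels via the (1 - p_t)^gamma factor,
    # so difficult-to-segment samples contribute more to the total loss
    p = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, p, 1.0 - p)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(pred, target, alpha=0.5, gamma=2.0):
    # Weighted combination: the focal term focuses on hard pixels,
    # the dice term keeps global region overlap
    return alpha * focal_loss(pred, target, gamma) + (1.0 - alpha) * dice_loss(pred, target)
```

In an actual training loop the same formulation would be written with the framework's differentiable tensor operations rather than NumPy.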


Subjects
Spiral Cone-Beam Computed Tomography; Technology; Image Processing, Computer-Assisted
3.
PLoS One ; 19(3): e0295536, 2024.
Article in English | MEDLINE | ID: mdl-38466697

ABSTRACT

Brain extraction is an important prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion identification, localization, and segmentation. Traditional image segmentation methods are fast at extraction but lack robustness, while fully convolutional neural networks (FCNs) are robust and accurate but relatively slow. To address this, this paper proposes an adaptive mask-based brain extraction method, AMBBEM. The method first uses threshold segmentation, median filtering, and morphological closing to generate an initial mask, then refines the mask by combining a ResNet50 model, a region-growing algorithm, and image-property analysis, and finally completes brain extraction by multiplying the original image by the mask. The algorithm was tested on 22 test sets containing different lesions, achieving MPA = 0.9963, MIoU = 0.9924, and MBF = 0.9914, equivalent to the extraction quality of the Deeplabv3+ model. Moreover, the method can process approximately 6.16 head CT images per second, much faster than the Deeplabv3+, U-net, and SegNet models. In summary, this method achieves accurate brain extraction from head CT images more quickly, creating good conditions for subsequent brain volume measurement and feature extraction of intracranial lesions.
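The classical first-pass mask generation (threshold, median filter, morphological closing, then mask multiplication) might look like the following sketch with `scipy.ndimage`; the HU thresholds and the largest-component step are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np
from scipy import ndimage

def initial_brain_mask(ct_slice, lo=0.0, hi=100.0):
    # 1) Threshold: keep a soft-tissue HU range (bounds are illustrative)
    mask = (ct_slice > lo) & (ct_slice < hi)
    # 2) Median filter suppresses salt-and-pepper noise in the binary mask
    mask = ndimage.median_filter(mask.astype(np.uint8), size=3)
    # 3) Morphological closing fills small holes along the boundary
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    # 4) Keep the largest connected component (assumed to be the brain)
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# Brain extraction is then the product of image and mask:
# extracted = ct_slice * initial_brain_mask(ct_slice)
```

In the paper this initial mask is further refined with ResNet50, region growing, and image-property analysis before the final multiplication.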


Assuntos
Encéfalo , Cabeça , Encéfalo/diagnóstico por imagem , Encéfalo/patologia , Redes Neurais de Computação , Algoritmos , Processamento de Imagem Assistida por Computador/métodos , Tomografia Computadorizada por Raios X/métodos
4.
PLoS One ; 19(3): e0297331, 2024.
Article in English | MEDLINE | ID: mdl-38466735

ABSTRACT

KRAS is a pathogenic gene frequently implicated in non-small cell lung cancer (NSCLC). However, biopsy as a diagnostic method has practical limitations. Therefore, it is important to accurately determine the mutation status of the KRAS gene non-invasively by combining NSCLC CT images and genetic data for early diagnosis and subsequent targeted therapy of patients. This paper proposes a Semi-supervised Multimodal Multiscale Attention Model (S2MMAM). S2MMAM comprises a Supervised Multilevel Fusion Segmentation Network (SMF-SN) and a Semi-supervised Multimodal Fusion Classification Network (S2MF-CN). S2MMAM facilitates the execution of the classification task by transferring the useful information captured in SMF-SN to the S2MF-CN to improve the model prediction accuracy. In SMF-SN, we propose a Triple Attention-guided Feature Aggregation module for obtaining segmentation features that incorporate high-level semantic abstract features and low-level semantic detail features. Segmentation features provide pre-guidance and key information expansion for S2MF-CN. S2MF-CN shares the encoder and decoder parameters of SMF-SN, which enables S2MF-CN to obtain rich classification features. S2MF-CN uses the proposed Intra and Inter Mutual Guidance Attention Fusion (I2MGAF) module to first guide segmentation and classification feature fusion to extract hidden multi-scale contextual information. I2MGAF then guides the multidimensional fusion of genetic data and CT image data to compensate for the lack of information in single modality data. S2MMAM achieved 83.27% AUC and 81.67% accuracy in predicting KRAS gene mutation status in NSCLC. This method uses medical image CT and genetic data to effectively improve the accuracy of predicting KRAS gene mutation status in NSCLC.


Assuntos
Carcinoma Pulmonar de Células não Pequenas , Neoplasias Pulmonares , Humanos , Carcinoma Pulmonar de Células não Pequenas/diagnóstico por imagem , Carcinoma Pulmonar de Células não Pequenas/genética , Neoplasias Pulmonares/diagnóstico por imagem , Neoplasias Pulmonares/genética , Proteínas Proto-Oncogênicas p21(ras)/genética , Biópsia , Mutação , Processamento de Imagem Assistida por Computador
5.
Adv Neurobiol ; 36: 795-814, 2024.
Article in English | MEDLINE | ID: mdl-38468064

ABSTRACT

To explore questions asked in neuroscience, neuroscientists rely heavily on the tools available. One such toolset is ImageJ: free, open-source, biological digital image analysis software. Open-source software has matured alongside fractal analysis in neuroscience, and today ImageJ is not a niche tool but a foundation relied on by a substantial number of neuroscientists for work in diverse fields, including fractal analysis. This is largely owing to two features of open-source software leveraged in ImageJ and vital to vigorous neuroscience: customizability and collaboration. With those notions in mind, this chapter's aim is threefold: (1) it introduces ImageJ, (2) it outlines ways this software tool has influenced fractal analysis in neuroscience and shaped the questions researchers devote time to, and (3) it reviews a few examples of ways investigators have developed and used ImageJ for pattern extraction in fractal analysis. Throughout this chapter, the focus is on fostering a collaborative and creative mindset for translating knowledge of the fractal geometry of the brain into clinical reality.


Assuntos
Fractais , Pesquisa Translacional Biomédica , Humanos , Processamento de Imagem Assistida por Computador/métodos , Software
6.
Comput Assist Surg (Abingdon) ; 29(1): 2327981, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38468391

ABSTRACT

Radiotherapy commonly uses cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for fractionated delivery. However, limitations such as a narrow field of view (FOV), beam hardening, scattered-radiation artifacts, and variability in pixel intensity prevent the direct use of raw CBCT for dose recalculation during treatment. Reliable correction techniques are therefore necessary to remove artifacts and remap pixel intensities to Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow-FOV systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects, without intra-patient longitudinal variability, and to produce a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To show the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity to organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points after calibration (53.78% vs. 90.26%); real data confirmed this with slightly lower performance for the same criteria (65.36% vs. 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
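The 3%/2 mm gamma pass rate quoted above can be computed, in principle, with a brute-force global gamma analysis like this sketch (illustrative only; clinical tools add dose interpolation and optimized search):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing=1.0, dose_tol=0.03, dta=2.0):
    # Global gamma analysis on small grids (brute force).
    # dose_tol: dose-difference criterion, as a fraction of max reference dose
    # dta: distance-to-agreement criterion in mm; spacing: grid step in mm
    ref = np.asarray(ref, dtype=float)
    ev = np.asarray(ev, dtype=float)
    dd = dose_tol * ref.max()
    coords = np.array(list(np.ndindex(ref.shape))) * spacing
    flat_ref = ref.ravel()
    passed = 0
    for idx, dose in zip(np.ndindex(ev.shape), ev.ravel()):
        pos = np.array(idx) * spacing
        dist_term = np.sum((coords - pos) ** 2, axis=1) / dta ** 2
        dose_term = (flat_ref - dose) ** 2 / dd ** 2
        gamma = np.sqrt(np.min(dist_term + dose_term))
        passed += int(gamma <= 1.0)  # a point passes when gamma <= 1
    return float(100.0 * passed / ev.size)
```

A pass rate of 100% means every evaluated point finds a reference point within the combined dose/distance tolerance ellipsoid.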


Assuntos
Prótons , Tomografia Computadorizada de Feixe Cônico Espiral , Humanos , Dosagem Radioterapêutica , Inteligência Artificial , Estudos de Viabilidade , Processamento de Imagem Assistida por Computador/métodos
7.
Int J Nanomedicine ; 19: 2137-2148, 2024.
Article in English | MEDLINE | ID: mdl-38476277

ABSTRACT

Purpose: Magnetic particle imaging (MPI) is an emerging medical imaging modality on the verge of clinical use. In recent years, cardiovascular applications such as intraprocedural imaging guidance of stent placement have shown huge potential. Because commercial medical instruments generate no MPI signal of their own, nano-modifications have so far been necessary to visualize them. In this work, we investigate whether commercial interventional devices can be tracked with MPI without any nano-modification. Material and Methods: Nine endovascular metal stents were tested for MPI signal generation in a commercial MPI scanner. Two of the stents produced sufficient MPI signal; because one of the two showed relevant heating, the imaging experiments were carried out with a single stent model (Boston Scientific/Wallstent-Uni Endoprothesis, diameter: 16 mm, length: 60 mm). The nitinol stent and its delivery system were investigated in seven different scenarios, with the samples placed by a robot at 49 defined spatial positions in a meandering pattern during MPI scans. Images were reconstructed, and the mean absolute errors (MAE) between the signals' centers of mass (COM) and the ground-truth positions were calculated. The stent material was characterized by magnetic particle spectroscopy (MPS) and vibrating sample magnetometry (VSM), and nondestructive testing via computed tomography was performed to detect metallic components within the delivery system. Results: The stent and its delivery system could be tracked without any nano-modification. The MAE of the COM was 1.49 mm for the stent mounted on the delivery system, 3.70 mm for the expanded stent, and 1.46 mm for the delivery system without the stent. The MPS and VSM measurements indicate that, besides material properties, eddy currents appear to be responsible for signal generation. Conclusion: Medical instruments with dedicated designs can be imaged by MPI without modification. This enables a variety of applications without compromising the mechanical and biocompatible properties of the instruments.
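The centre-of-mass error metric reported above can be sketched as follows (a hypothetical helper; it assumes reconstructed images and ground-truth robot positions expressed on a common coordinate grid):

```python
import numpy as np
from scipy import ndimage

def com_mae(images, truth_positions, voxel_size=1.0):
    # Mean absolute error (in mm) between each reconstruction's
    # intensity-weighted centre of mass and the known robot position
    errors = []
    for img, gt in zip(images, truth_positions):
        com = np.array(ndimage.center_of_mass(img)) * voxel_size
        errors.append(np.abs(com - np.asarray(gt, dtype=float)))
    return float(np.mean(errors))
```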


Assuntos
Stents , Tomografia Computadorizada por Raios X , Processamento de Imagem Assistida por Computador/métodos , Magnetismo , Fenômenos Magnéticos
8.
Sci Rep ; 14(1): 6086, 2024 03 13.
Article in English | MEDLINE | ID: mdl-38480847

ABSTRACT

Research on machine learning (ML) methods has become incredibly popular during the past few decades. However, for researchers not familiar with statistics, it can be difficult to know how to evaluate the performance of ML models and compare them with each other. Here, we introduce the most common evaluation metrics for typical supervised ML tasks, including binary, multi-class, and multi-label classification, regression, image segmentation, object detection, and information retrieval. We explain how to choose a suitable statistical test for comparing models, how to obtain enough values of the metric for testing, and how to perform the test and interpret its results. We also present practical examples of comparing convolutional neural networks used to classify X-rays with different lung infections and to detect cancer tumors in positron emission tomography images.
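As a concrete illustration of the model-comparison step, two models evaluated on the same cross-validation folds can be compared with a paired non-parametric test (the fold scores below are invented for illustration):

```python
import numpy as np
from scipy import stats

def compare_models(scores_a, scores_b, alpha=0.05):
    # Paired comparison of two models evaluated on identical resampling
    # splits. The Wilcoxon signed-rank test avoids the normality
    # assumption required by the paired t-test.
    stat, p = stats.wilcoxon(scores_a, scores_b)
    return {"statistic": float(stat), "p_value": float(p),
            "significant": bool(p < alpha)}
```

With very few folds the test has little power, which is why obtaining enough metric values matters before testing.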


Assuntos
Processamento de Imagem Assistida por Computador , Aprendizado de Máquina , Processamento de Imagem Assistida por Computador/métodos , Redes Neurais de Computação , Aprendizado de Máquina Supervisionado , Tomografia por Emissão de Pósitrons
9.
PLoS One ; 19(3): e0299970, 2024.
Article in English | MEDLINE | ID: mdl-38478519

ABSTRACT

The accuracy of traditional CT image segmentation algorithms is hindered by issues such as low contrast and high noise. While numerous deep learning-based CT image segmentation algorithms have been introduced, they still face challenges, particularly in achieving high edge accuracy and avoiding pixel classification errors. To tackle these issues, this study proposes MIS-Net (Medical Images Segment Net), a deep learning-based model. MIS-Net incorporates multi-scale atrous convolution into a symmetric encoder-decoder structure, enabling comprehensive extraction of multi-scale features from CT images and improving the accuracy of lung and liver edge segmentation. On the COVID-19 CT Lung and Infection Segmentation dataset, the left and right lung segmentation results show that MIS-Net achieves a Dice Similarity Coefficient (DSC) of 97.61; on the public Liver Tumor Segmentation Challenge 2017 dataset, the DSC of MIS-Net reaches 98.78.


Assuntos
COVID-19 , Aprendizado Profundo , Neoplasias Hepáticas , Humanos , Algoritmos , COVID-19/diagnóstico por imagem , Neoplasias Hepáticas/diagnóstico por imagem , Tomografia Computadorizada por Raios X , Processamento de Imagem Assistida por Computador
10.
Biomed Eng Online ; 23(1): 31, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38468262

ABSTRACT

BACKGROUND: Ultrasound three-dimensional visualization, a cutting-edge technology in medical imaging, enhances diagnostic accuracy by providing a more comprehensive and readable portrayal of anatomical structures than traditional two-dimensional ultrasound. Crucial to this visualization is the segmentation of multiple targets. However, multi-target segmentation of ultrasound images faces challenges such as noise interference, inaccurate boundaries, and difficulty segmenting small structures. Using neck ultrasound images, this study investigates multi-target segmentation methods for the thyroid and surrounding tissues. METHOD: We improved Unet++ to propose PA-Unet++, which enhances multi-target segmentation accuracy for the thyroid and its surrounding tissues by addressing ultrasound noise interference. This involves integrating multi-scale feature information with a pyramid pooling module to facilitate segmentation of structures of various sizes, and applying an attention gate mechanism to each decoding layer to progressively highlight target tissues and suppress the impact of background pixels. RESULTS: Video data obtained from serial 2D ultrasound scans of the thyroid served as the dataset: 4600 images containing 23,000 annotated regions were divided into training and test sets at a ratio of 9:1. Compared with U-net++, the Dice of our model increased from 78.78% to 81.88% (+3.10%), the mIOU increased from 73.44% to 80.35% (+6.91%), and the PA index increased from 92.95% to 94.79% (+1.84%). CONCLUSIONS: Accurate segmentation is fundamental for various clinical applications, including disease diagnosis, treatment planning, and monitoring. This study will have a positive impact on 3D visualization capabilities and on clinical decision-making and research in the context of ultrasound imaging.


Assuntos
Imageamento Tridimensional , Glândula Tireoide , Glândula Tireoide/diagnóstico por imagem , Projetos de Pesquisa , Tecnologia , Processamento de Imagem Assistida por Computador
11.
Int J Mol Sci ; 25(5)2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38474256

ABSTRACT

The aim of this work was to use and optimize a 1.5 Tesla magnetic resonance imaging (MRI) system for three-dimensional (3D) imaging of small samples obtained from breast cell cultures in vitro. The basis of this study was to design MRI equipment enabling imaging of MCF-7 breast cancer cell cultures (about 1 million cells) in 1.5 and 2 mL glass tubes and/or bioreactors with an external diameter of less than 20 mm, and to develop software to calculate longitudinal and transverse relaxation times. Imaging tests were performed using a clinical OPTIMA 360 MRI scanner manufactured by GEMS. Due to the size of the tested objects, additional receiving circuits had to be designed to allow the study of MCF-7 cell cultures placed in glass bioreactors; the examined samples' volume did not exceed 2.0 mL, nor did the number of cells exceed 1 million. The work also included a modification of the sequence to allow analysis of T1 and T2 relaxation times. The analysis was performed using the MATLAB package (produced by MathWorks). The application is based on medical MR images saved in the DICOM 3.0 standard, which ensures that the analyzed data are reliable and cannot be changed unintentionally in ways that could affect the measurement results. The use of a 1.5 T MRI system for cell culture research, providing quantitative information from in vitro studies, was realized. For a FOV of 5 cm, a scanning resolution of less than 0.1 mm/pixel was achieved. Receiving elements were built allowing the acquisition of data for MRI image reconstruction, confirmed by images of a phantom with known structure and geometry. Magnetic resonance sequences were modified for the saturation recovery (SR) method in order to determine relaxation times, and a MATLAB application was developed to analyze T1 and T2 relaxation times.
The relaxation times of cell cultures were determined over a 6-week period. In the first week, the T1 time value was 1100 ± 40 ms, which decreased to 673 ± 59 ms by the sixth week. For T2, the results were 171 ± 10 ms and 128 ± 12 ms, respectively.
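T1 estimation from saturation-recovery data can be sketched with the standard SR signal model (a generic illustration in Python, not the authors' MATLAB code):

```python
import numpy as np
from scipy.optimize import curve_fit

def sr_signal(tr, s0, t1):
    # Saturation-recovery model: S(TR) = S0 * (1 - exp(-TR / T1))
    return s0 * (1.0 - np.exp(-tr / t1))

def fit_t1(tr_values, signals):
    # Least-squares fit of S0 and T1 over a series of repetition times;
    # T1 is returned in the same units as tr_values (e.g. ms)
    popt, _ = curve_fit(sr_signal, tr_values, signals,
                        p0=[float(np.max(signals)), 500.0])
    return float(popt[1])
```

T2 is obtained analogously by fitting the exponential decay S(TE) = S0 * exp(-TE / T2) over a series of echo times.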


Assuntos
Processamento de Imagem Assistida por Computador , Imageamento por Ressonância Magnética , Tamanho da Amostra , Imageamento por Ressonância Magnética/métodos , Imagens de Fantasmas , Técnicas de Cultura de Células
12.
Sensors (Basel) ; 24(5)2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38475010

ABSTRACT

This article presents the development of a vision system designed to enhance the autonomous navigation capabilities of robots in complex forest environments. Leveraging RGB-D and thermal cameras, specifically the Intel RealSense 435i and FLIR ADK, the system integrates diverse visual sensors with advanced image processing algorithms. This integration enables robots to make real-time decisions, recognize obstacles, and dynamically adjust their trajectories during operation. The article focuses on the architectural aspects of the system, emphasizing the role of the sensors and the formulation of algorithms crucial for safe robot navigation in challenging forest terrain. Additionally, the article discusses the training of models on two datasets specifically tailored to forest environments, aiming to evaluate their impact on autonomous navigation. Tests conducted in real forest conditions affirm the effectiveness of the developed vision system, underscoring its contribution to the autonomous navigation of robots in forest environments.


Assuntos
Dispositivos Ópticos , Robótica , Robótica/métodos , Agricultura Florestal , Algoritmos , Processamento de Imagem Assistida por Computador
13.
Sensors (Basel) ; 24(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38475040

ABSTRACT

Livestock's live body dimensions are a pivotal indicator of economic output. Manual measurement is labor-intensive and time-consuming, often eliciting stress responses in the livestock. With the advancement of computer technology, the techniques for livestock live body dimension measurement have progressed rapidly, yielding significant research achievements. This paper presents a comprehensive review of the recent advancements in livestock live body dimension measurement, emphasizing the crucial role of computer-vision-based sensors. The discussion covers three main aspects: sensing data acquisition, sensing data processing, and sensing data analysis. The common techniques and measurement procedures in, and the current research status of, live body dimension measurement are introduced, along with a comparative analysis of their respective merits and drawbacks. Livestock data acquisition is the initial phase of live body dimension measurement, where sensors are employed as data collection equipment to obtain information conducive to precise measurements. Subsequently, the acquired data undergo processing, leveraging techniques such as 3D vision technology, computer graphics, image processing, and deep learning to calculate the measurements accurately. Lastly, this paper addresses the existing challenges within the domain of livestock live body dimension measurement in the livestock industry, highlighting the potential contributions of computer-vision-based sensors. Moreover, it predicts the potential development trends in the realm of high-throughput live body dimension measurement techniques for livestock.


Assuntos
Computadores , Gado , Animais , Processamento de Imagem Assistida por Computador , Inquéritos e Questionários , Indústrias
14.
Sensors (Basel) ; 24(5)2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38475138

ABSTRACT

GPU (graphics processing unit)-based parallel processing uses more than one processor to overcome the computational complexity of the medical imaging methods that make up an overall job. It is extremely important for several medical imaging tasks, such as image classification, object detection, image segmentation, registration, and content-based image retrieval, since the parallel approach allows software to complete multiple computations at once, making computation time-efficient. Magnetic resonance imaging (MRI), in turn, is a non-invasive imaging technology that can depict the shape of an anatomy and the biological processes of the human body. Implementing GPU-based parallel processing in brain MRI analysis with these medical imaging techniques can help achieve immediate and timely image processing. This extended review (an extension of the IWBBIO2023 conference paper) therefore offers a thorough overview of the literature, with an emphasis on the expanding use of GPU-based parallel processing methods for the medical analysis of brain MRIs with the imaging techniques mentioned above, given the need for faster computation to obtain early and real-time feedback in medicine. We examined articles from 2019 to 2023 in a literature matrix that includes the tasks, techniques, MRI sequences, and processing results. The methods discussed in this review demonstrate the advances achieved so far in minimizing computing runtime, as well as the obstacles and problems still to be solved.


Assuntos
Algoritmos , Gráficos por Computador , Humanos , Software , Encéfalo , Imageamento por Ressonância Magnética/métodos , Processamento de Imagem Assistida por Computador/métodos
15.
Exp Biol Med (Maywood) ; 249: 10064, 2024.
Article in English | MEDLINE | ID: mdl-38463389

ABSTRACT

Ultrasonographic characteristics of skeletal muscles are related to their health status and functional capacity, but they still provide limited information on muscle composition during the inflammatory process. It has been demonstrated that an alteration in muscle composition or structure can have disparate effects on different ranges of ultrasonogram pixel intensities. Therefore, monitoring specific clusters or bands of pixel intensity values could help detect echotextural changes in skeletal muscles associated with neurogenic inflammation. Here we compare two methods of ultrasonographic image analysis, namely, the echointensity (EI) segmentation approach (EI banding method) and detection of selective pixel intensity ranges correlated with the expression of inflammatory regulators using an in-house developed computer algorithm (r-Algo). This study utilized an experimental model of neurogenic inflammation in segmentally linked myotomes (i.e., rectus femoris (RF) muscle) of rats subjected to lumbar facet injury. Our results show that there were no significant differences in RF echotextural variables for different EI bands (with 50- or 25-pixel intervals) between surgery and sham-operated rats, and no significant correlations among individual EI band pixel characteristics and protein expression of inflammatory regulators studied. However, mean numerical pixel values for the pixel intensity ranges identified with the proprietary r-Algo computer program correlated with protein expression of ERK1/2 and substance P (both 86-101-pixel ranges) and CaMKII (86-103-pixel range) in RF, and were greater (p < 0.05) in surgery rats compared with their sham-operated counterparts. Our findings indicate that computer-aided identification of specific pixel intensity ranges was critical for ultrasonographic detection of changes in the expression of inflammatory mediators in neurosegmentally-linked skeletal muscles of rats after facet injury.
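The two image-analysis approaches compared above reduce, in essence, to computing mean pixel values either over fixed-width intensity bands or over a selected narrow intensity range; a minimal sketch follows (the band limits echo the abstract, but the helper names are ours, and this is not the proprietary r-Algo program):

```python
import numpy as np

def band_mean(image, lo, hi):
    # Mean numerical pixel value within a selected intensity range
    # (e.g. the 86-101 band reported to correlate with ERK1/2 and
    # substance P expression); NaN when no pixels fall in the band
    band = image[(image >= lo) & (image <= hi)]
    return float(band.mean()) if band.size else float("nan")

def ei_bands(image, width=25, max_val=255):
    # Echointensity (EI) segmentation: mean pixel value for each
    # fixed-width band (the study used 25- and 50-pixel intervals)
    return {(lo, min(lo + width - 1, max_val)):
            band_mean(image, lo, min(lo + width - 1, max_val))
            for lo in range(0, max_val + 1, width)}
```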


Assuntos
Inflamação Neurogênica , Músculo Quadríceps , Ratos , Animais , Músculo Quadríceps/diagnóstico por imagem , Músculo Esquelético/diagnóstico por imagem , Músculo Esquelético/fisiologia , Ultrassonografia/métodos , Processamento de Imagem Assistida por Computador
16.
Biomed Res Int ; 2024: 9267554, 2024.
Article in English | MEDLINE | ID: mdl-38464681

ABSTRACT

Purpose: Segmentation of hepatocellular carcinoma (HCC) is crucial, but manual segmentation is subjective and time-consuming; accurate, automatic lesion contouring for HCC is therefore desirable in clinical practice. In response to this need, our study introduced a segmentation approach for HCC combining deep convolutional neural networks (DCNNs) and radiologist intervention in magnetic resonance imaging (MRI). We sought to design a deep learning segmentation method that segments automatically from manually supplied location information, and we verified its viability for assisting radiologists in accurate and fast lesion segmentation. Method: We developed a semiautomatic approach for segmenting HCC using a DCNN in conjunction with radiologist intervention in dual-phase gadolinium-ethoxybenzyl-diethylenetriamine penta-acetic acid- (Gd-EOB-DTPA-) enhanced MRI. We trained a DCNN and a deep fusion network (DFN) on full-size images, namely DCNN-F and DFN-F. Furthermore, the DFN was applied to image blocks containing tumor lesions roughly contoured by a radiologist with 10 years of experience in abdominal MRI; this method was named DFN-R. Another radiologist with five years of experience (moderate experience) contoured the tumor lesions for comparison with our proposed methods. The ground truth was contoured by an experienced radiologist and reviewed by an independent experienced radiologist. Results: The mean DSC of DCNN-F, DFN-F, and DFN-R was 0.69 ± 0.20 (median, 0.72), 0.74 ± 0.21 (median, 0.77), and 0.83 ± 0.13 (median, 0.88), respectively. The mean DSC of the segmentation by the radiologist with moderate experience was 0.79 ± 0.11 (median, 0.83), lower than the performance of DFN-R. Conclusions: Deep learning using dual-phase MRI shows great potential for HCC lesion segmentation. The radiologist-aided semiautomated method (DFN-R) achieved improved performance compared with manual contouring by the radiologist with moderate experience, although the difference was not statistically significant.


Assuntos
Carcinoma Hepatocelular , Aprendizado Profundo , Neoplasias Hepáticas , Humanos , Carcinoma Hepatocelular/diagnóstico por imagem , Neoplasias Hepáticas/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Imageamento por Ressonância Magnética/métodos , Radiologistas
17.
J Vis Exp ; (204)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38465926

ABSTRACT

This study aimed to introduce cone-beam computed tomography (CBCT) digitization and integration with digital dental images (DDI) based on artificial intelligence (AI)-based registration (ABR), and to evaluate the reliability and reproducibility of this method compared with surface-based registration (SBR). This retrospective study comprised CBCT images and DDI of 17 patients who had undergone computer-aided bimaxillary orthognathic surgery. The digitization of CBCT images and their integration with DDI were repeated using an AI-based program, with CBCT images and DDI integrated by point-to-point registration. With the SBR method, in contrast, three landmarks were identified manually on the CBCT and DDI, which were then integrated with the iterative closest point method. After two repeated integrations with each method, the three-dimensional coordinate values of the first maxillary molars and central incisors, and their differences, were obtained. Intraclass correlation coefficient (ICC) testing was performed on each method's coordinates to evaluate intra-observer reliability and to compare reliability between ABR and SBR. The intra-observer reliability showed significant and almost perfect ICCs for each method. There was no significant mean difference between the first and second registrations within either ABR or SBR, or between the two methods; however, the ranges were narrower with ABR than with SBR. This study shows that AI-based digitization and integration are reliable and reproducible.
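Point-to-point registration of corresponding landmarks, as in the ABR workflow, is classically solved in closed form with the Kabsch algorithm; a generic sketch (not the AI program's implementation):

```python
import numpy as np

def rigid_register(source, target):
    # Kabsch algorithm: find rotation R and translation t minimising
    # || R @ s_i + t - t_i || over corresponding landmarks (n x 3 arrays)
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)   # SVD of cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = target.mean(axis=0) - r @ source.mean(axis=0)
    return r, t
```

The iterative closest point (ICP) method used by SBR alternates this closed-form step with re-estimating correspondences from nearest neighbours.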


Subjects
Computer-Assisted Image Processing , Three-Dimensional Imaging , Humans , Reproducibility of Results , Computer-Assisted Image Processing/methods , Three-Dimensional Imaging/methods , Artificial Intelligence , Retrospective Studies , Cone-Beam Computed Tomography/methods
18.
Appl Opt ; 63(6): A7-A15, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38437352

ABSTRACT

Accurate and efficient counting of shrimp larvae is crucial for monitoring reproduction patterns, assessing growth rates, and evaluating the performance of aquaculture. Traditional density-estimation methods are ineffective at high densities, and images captured with a point or line light source contain bright spots. Therefore, in this paper an automated shrimp counting platform based on optics and image processing is designed for the task of counting shrimp larvae. First, an area light source ensures a uniformly illuminated environment, which helps to obtain high-resolution shrimp images. Then, a counting algorithm based on improved k-means clustering and a side window filter (SWF) is designed to obtain an accurate count of the shrimp in the light box. Specifically, the SWF technique is introduced to preserve the body contours of the shrimp larvae while eliminating noise such as water impurities and the larvae's eyes. Finally, the shrimp larvae are divided into two groups, independent and interdependent (touching), and counted separately. Experimental results show that the designed optical counting system is excellent in terms of both visual effect and objective evaluation.
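The final independent/interdependent counting step can be sketched as follows. This is not the paper's algorithm (the improved k-means segmentation and SWF denoising are not reproduced): it assumes a binary mask is already available and splits oversized blobs by area ratio, with the toy mask and the single-larva area being invented values.

```python
import numpy as np

def label_components(binary):
    """4-connected component labeling via iterative flood fill."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    h, w = binary.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and labels[y, x] == 0:
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def count_larvae(binary, single_area):
    """Count blobs; a blob much larger than one larva is treated as several
    touching ('interdependent') individuals, estimated by area ratio."""
    labels, n = label_components(binary)
    total = 0
    for k in range(1, n + 1):
        area = int((labels == k).sum())
        total += max(1, round(area / single_area))
    return total

# Toy binary mask: two isolated 4-pixel larvae and one 8-pixel merged pair.
img = np.zeros((8, 12), dtype=bool)
img[1:3, 1:3] = True      # one larva (area 4)
img[5:7, 2:4] = True      # one larva (area 4)
img[2:4, 7:11] = True     # two touching larvae (area 8)
print(count_larvae(img, single_area=4))  # → 4
```

Real segmentations would estimate `single_area` from the isolated (independent) blobs rather than hard-coding it.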


Subjects
Algorithms , Aquaculture , Animals , Eye , Computer-Assisted Image Processing , Larva
19.
J Opt Soc Am A Opt Image Sci Vis ; 41(3): 414-423, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38437432

ABSTRACT

The extraction of 3D human pose and body shape from a single monocular image is a significant challenge in computer vision. Traditional methods use RGB images, but these are constrained by varying lighting and occlusions. Cutting-edge developments in imaging technology, however, have introduced techniques such as single-pixel imaging (SPI) that can surmount these hurdles. In the near-infrared (NIR) spectrum, SPI demonstrates impressive capabilities in capturing 3D human pose: this wavelength range can penetrate clothing and is less affected by lighting variations than visible light, providing a reliable means of capturing body shape and pose data even in difficult settings. In this work, we explore the use of an SPI camera operating in the NIR with time-of-flight (TOF) in the 850-1550 nm band to detect humans in nighttime environments. The proposed system uses a vision transformer (ViT) model to detect humans and extract their characteristic features, which are fitted to the SMPL-X 3D body model through deep-learning-based body shape regression. To evaluate the efficacy of NIR-SPI 3D image reconstruction, we constructed a laboratory scenario simulating nighttime conditions, enabling us to test the feasibility of employing NIR-SPI as a vision sensor in outdoor environments. By assessing the results obtained from this setup, we aim to demonstrate the potential of NIR-SPI as an effective tool for detecting humans in nighttime scenarios and capturing their accurate 3D body pose and shape.
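The TOF principle the sensor relies on reduces to a one-line conversion: a NIR pulse's round-trip time gives distance as d = c·t/2. The sketch below illustrates only this physical relation, with an invented pulse delay; it says nothing about the SPI reconstruction or the ViT/SMPL-X pipeline.

```python
# Time-of-flight depth recovery assumed by an SPI-TOF sensor:
# the round-trip time of a NIR pulse gives distance d = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_ns):
    """Convert a round-trip pulse time (nanoseconds) to depth in metres."""
    return C * round_trip_ns * 1e-9 / 2.0

# A return delayed by ~13.34 ns corresponds to a subject about 2 m away.
print(tof_depth(13.3425638))
```

In practice the measured delay is noisy, which is one reason the paper fits the resulting point data to a parametric body model rather than using raw depth alone.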


Subjects
Deep Learning , Humans , Diagnostic Imaging , Computer-Assisted Image Processing , Electric Power Supplies , Light
20.
Opt Express ; 32(4): 5460-5480, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38439272

ABSTRACT

It is well known that photoacoustic tomography (PAT) can circumvent the photon-scattering problem in optical imaging and achieve high-contrast, high-resolution imaging at centimeter depths. However, after two decades of development, the long-standing question of PAT's imaging depth limit in biological tissues remains open. Here we propose a numerical framework for evaluating the imaging depth limit of PAT in the visible and first near-infrared windows. The established framework simulates the physical process of PAT and consists of seven modules: tissue modelling, photon transport, photon-to-ultrasound conversion, acoustic field propagation, signal reception, image reconstruction, and imaging depth evaluation. The framework can simulate the imaging depth limits in general tissues, such as the human breast, human abdomen-liver tissue, and the whole rodent body, and provides accurate evaluation results. The study elucidates the fundamental imaging depth limit of PAT in biological tissues and can provide useful guidance for practical experiments.
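An order-of-magnitude version of such a depth-limit evaluation can be sketched in a few lines: optical fluence decays roughly exponentially with the effective attenuation coefficient, the generated pressure follows p0 = Γ·μa·F, and the depth limit is where p0 falls below the detector noise floor. This is a back-of-envelope sketch, not the paper's seven-module framework, and every parameter value below is an illustrative assumption rather than one of the paper's tissue models.

```python
import numpy as np

GAMMA = 0.2          # Grueneisen parameter (dimensionless), assumed
MU_A = 0.1           # absorber mu_a, 1/cm, assumed
MU_EFF = 1.0         # effective attenuation of breast-like tissue, 1/cm, assumed
F0 = 20.0            # surface fluence, mJ/cm^2 (ANSI-limit scale), assumed
NOISE_FLOOR = 1e-4   # noise-equivalent pressure, same arbitrary units, assumed

def p0(depth_cm):
    """Initial photoacoustic pressure at depth (arbitrary units):
    p0 = Gamma * mu_a * F0 * exp(-mu_eff * z)."""
    return GAMMA * MU_A * F0 * np.exp(-MU_EFF * depth_cm)

depths = np.linspace(0, 15, 1501)            # 0-15 cm in 0.01 cm steps
visible = depths[p0(depths) >= NOISE_FLOOR]  # depths still above the noise floor
print(f"estimated depth limit ~ {visible[-1]:.1f} cm")
```

A full simulation additionally has to model acoustic attenuation, transducer bandwidth, and reconstruction artifacts, which is why the paper's framework needs the remaining modules.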


Subjects
Computer-Assisted Image Processing , X-Ray Computed Tomography , Humans , Optical Imaging , Photons