Results 1 - 20 of 145,748
1.
J Biomed Opt ; 30(Suppl 1): S13703, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39034959

ABSTRACT

Significance: Standardization of fluorescence molecular imaging (FMI) is critical for ensuring quality control in guiding surgical procedures. To accurately evaluate system performance, two metrics, the signal-to-noise ratio (SNR) and contrast, are widely employed. However, there is currently no consensus on how these metrics can be computed. Aim: We aim to examine the impact of SNR and contrast definitions on the performance assessment of FMI systems. Approach: We quantified the SNR and contrast of six near-infrared FMI systems by imaging a multi-parametric phantom. Based on approaches commonly used in the literature, we quantified seven SNRs and four contrast values considering different background regions and/or formulas. Then, we calculated benchmarking (BM) scores and respective rank values for each system. Results: We show that the performance assessment of an FMI system changes depending on the background locations and the applied quantification method. For a single system, the different metrics can vary by up to ∼35 dB (SNR), ∼8.65 a.u. (contrast), and ∼0.67 a.u. (BM score). Conclusions: The definition of precise guidelines for FMI performance assessment is imperative to ensure successful clinical translation of the technology. Such guidelines can also enable quality control for the already clinically approved indocyanine green-based fluorescence image-guided surgery.
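The abstract's central point, that SNR and contrast depend on the chosen formula and background region, can be illustrated with a short sketch. The definitions below are common literature conventions, not necessarily the exact ones benchmarked in the paper, and the image, ROI locations and array names are hypothetical placeholders.

```python
import numpy as np

def snr_db(signal_roi, background_roi):
    """Amplitude SNR in dB: mean signal over the standard deviation of a background ROI."""
    return 20 * np.log10(signal_roi.mean() / background_roi.std())

def weber_contrast(signal_roi, background_roi):
    """Weber contrast: (I_signal - I_background) / I_background."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.mean()

def michelson_contrast(signal_roi, background_roi):
    """Michelson contrast: (I_signal - I_background) / (I_signal + I_background), using ROI means."""
    s, b = signal_roi.mean(), background_roi.mean()
    return (s - b) / (s + b)

# Hypothetical fluorescence image with a bright inclusion and two candidate backgrounds.
rng = np.random.default_rng(0)
img = rng.normal(100, 5, (256, 256))
img[100:140, 100:140] += 400                 # fluorescent target
signal = img[100:140, 100:140]
bg_near = img[60:90, 100:140]                # background adjacent to the target
bg_far = img[10:40, 10:40]                   # background far from the target

# The same image yields different scores depending on definition and background choice.
print(snr_db(signal, bg_near), snr_db(signal, bg_far))
print(weber_contrast(signal, bg_near), michelson_contrast(signal, bg_near))
```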


Subject(s)
Benchmarking; Molecular Imaging; Optical Imaging; Phantoms, Imaging; Signal-To-Noise Ratio; Molecular Imaging/methods; Molecular Imaging/standards; Optical Imaging/methods; Optical Imaging/standards; Image Processing, Computer-Assisted/methods
2.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subject(s)
Histological Techniques; Microscopy; Animals; Flow Cytometry; Image Processing, Computer-Assisted
3.
IEEE J Biomed Health Inform ; 28(7): 4170-4183, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954557

ABSTRACT

Efficient medical image segmentation aims to provide accurate pixel-wise predictions with a lightweight implementation framework. However, existing lightweight networks generally overlook the generalizability of cross-domain medical segmentation tasks. In this paper, we propose Generalizable Knowledge Distillation (GKD), a novel framework for enhancing the performance of lightweight networks on cross-domain medical segmentation by distilling generalizable knowledge from powerful teacher networks. Considering the domain gaps between different medical datasets, we propose Model-Specific Alignment Networks (MSAN) to obtain domain-invariant representations. Meanwhile, a customized Alignment Consistency Training (ACT) strategy is designed to promote MSAN training. Based on the domain-invariant vectors in MSAN, we propose two generalizable distillation schemes, Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD). In DCGD, two implicit contrastive graphs are designed to model the intra-coupling and inter-coupling semantic correlations. Then, in DICD, the domain-invariant semantic vectors are reconstructed from the two networks (i.e., teacher and student) in a crossover manner to hierarchically achieve simultaneous generalization of the lightweight networks. Moreover, a metric named Fréchet Semantic Distance (FSD) is tailored to verify the effectiveness of the regularized domain-invariant features. Extensive experiments conducted on the Liver, Retinal Vessel and Colonoscopy segmentation datasets demonstrate the superiority of our method in terms of the performance and generalization ability of lightweight networks.
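For readers unfamiliar with the distillation backbone that GKD builds on, the sketch below shows a generic teacher-student distillation loss (soft-label KL divergence plus segmentation cross-entropy) in PyTorch. It illustrates the general technique only, not the paper's DCGD/DICD schemes; the tensor shapes, temperature and weighting are arbitrary assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic knowledge distillation for per-pixel segmentation logits (N, C, H, W).

    Combines the usual hard-label cross-entropy with a KL divergence between
    temperature-softened teacher and student class distributions.
    """
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kl

# Hypothetical shapes: batch of 2 images, 3 classes, 64x64 predictions.
student = torch.randn(2, 3, 64, 64, requires_grad=True)
teacher = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 3, (2, 64, 64))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```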


Subject(s)
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer; Databases, Factual; Deep Learning
4.
J Contemp Dent Pract ; 25(4): 320-325, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38956845

ABSTRACT

AIM: The aim of the present research was to assess the mesiodistal angulation of the maxillary anterior teeth utilizing the ImageJ computer software, a profile projector, and a custom-made jig. MATERIALS AND METHODS: A total of 34 subjects (17 males and 17 females) aged 18-30 years with bilateral Angle Class I molar and canine relationships were chosen. One manual approach (custom-made jig) and two digital methods (ImageJ computer software and profile projector) were used to record the mesiodistal angulation in the incisal view. Alginate impressions were made for each individual, and a facebow was used to capture the maxilla's spatial relationship with the cranium. The articulated cast was transferred to the specially customized jig with the help of a mounting ring, and the angulations were measured in the incisal view after the casts were placed in a semi-adjustable articulator. Data were recorded and statistically analyzed. RESULTS: The mesiodistal angulation in the incisal view measured by the three methods was statistically significantly different between the 17 males and 17 females. Although the mesiodistal angulation for the maxillary lateral incisor and canine did not show any statistically significant difference, the maximum and minimum values obtained were always greater in males than in females. This indicates that the positions of the six maxillary anterior teeth in males created an upward sweep of the incisal edges of the central and lateral incisors, also referred to as the "smiling line", producing a more squared and vigorous masculine surface anatomy, whereas the feminine surface anatomy was more rounded, soft, and pleasant. There was no statistically significant difference between the right and left sides, indicating bilateral arch symmetry and the symmetrical placement of the right teeth compared with the corresponding teeth on the left side. CONCLUSION: In conclusion, according to the current study's findings, all three approaches can measure the mesiodistal angulations of maxillary anterior teeth in the incisal view with clinically acceptable accuracy. The digital methods, which included using the ImageJ computer software and the profile projector, achieved more accurate results than the manual method. CLINICAL SIGNIFICANCE: The mesiodistal angulations obtained in this study can be used as a reference for placing teeth in both fully and partially edentulous conditions. This study contributes to a better understanding of the importance of achieving ideal occlusion in the Indian population by placing the maxillary anterior teeth at the proper mesiodistal angulation. How to cite this article: Shadaksharappa SH, Lahiri B, Kamath AG, et al. Evaluation of Mesiodistal Angulation of Maxillary Anterior Teeth in Incisal View Using Manual and Digital Methods: An In Vivo Study. J Contemp Dent Pract 2024;25(4):320-325.


Asunto(s)
Incisivo , Maxilar , Humanos , Masculino , Femenino , Maxilar/anatomía & histología , Adolescente , Incisivo/anatomía & histología , Adulto Joven , Adulto , Programas Informáticos , Procesamiento de Imagen Asistido por Computador/métodos , Diente Canino/anatomía & histología
5.
Zhongguo Ying Yong Sheng Li Xue Za Zhi ; 40: e20240008, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38952174

ABSTRACT

The numerous and varied forms of neurodegenerative illness pose a considerable challenge to contemporary healthcare. The emergence of artificial intelligence has fundamentally changed the diagnostic picture by providing effective and early means of identifying these crippling illnesses. As a subset of computational intelligence, machine-learning algorithms have become very effective tools for the analysis of large datasets that include genetic, imaging, and clinical data. Moreover, multi-modal data integration, which combines information from brain imaging (MRI, PET scans), genetic profiles, and clinical evaluations, is made easier by computational intelligence. This consolidative method enables a thorough understanding of the course of the illness and facilitates the creation of predictive models for early medical evaluation and outcome prediction. Furthermore, the application of artificial intelligence to neuroimaging analysis has shown a great deal of promise. Sophisticated image processing methods combined with machine learning algorithms make it possible to identify functional and structural anomalies in the brain, which often act as early indicators of neurodegenerative diseases. This chapter examines how computational intelligence plays a critical role in improving the diagnosis of neurodegenerative diseases such as Parkinson's, Alzheimer's, etc. To sum up, computational intelligence provides a revolutionary approach to improving the identification of neurodegenerative illnesses. In the battle against these difficult disorders, embracing and improving these computational techniques will surely pave the path for more individualized and more successful therapies.


Asunto(s)
Biología Computacional , Aprendizaje Automático , Enfermedades Neurodegenerativas , Neuroimagen , Humanos , Enfermedades Neurodegenerativas/diagnóstico , Enfermedades Neurodegenerativas/diagnóstico por imagen , Biología Computacional/métodos , Neuroimagen/métodos , Algoritmos , Inteligencia Artificial , Encéfalo/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos , Imagen por Resonancia Magnética/métodos
6.
Zhongguo Xue Xi Chong Bing Fang Zhi Za Zhi ; 36(3): 251-258, 2024 Jun 07.
Article in Chinese | MEDLINE | ID: mdl-38952311

ABSTRACT

OBJECTIVE: To investigate the feasibility of developing a grading diagnostic model for schistosomiasis-induced liver fibrosis based on B-mode ultrasonographic images and clinical laboratory indicators. METHODS: Ultrasound images and clinical laboratory testing data were captured from schistosomiasis patients admitted to the Second People's Hospital of Duchang County, Jiangxi Province from 2018 to 2022. Patients with grade I schistosomiasis-induced liver fibrosis were enrolled in Group 1, and patients with grade II and III schistosomiasis-induced liver fibrosis were enrolled in Group 2. Machine learning binary classification tasks were created using patients' radiomics and clinical laboratory data from 2018 to 2021 as the training set and patients' radiomics and clinical laboratory data from 2022 as the validation set. The ultrasonographic images were annotated with the ITK-SNAP software, and image features were extracted using Python 3.7 and the PyRadiomics toolkit. Differences in ultrasonographic image features between groups were compared with the t test or Mann-Whitney U test, and key imaging features were selected with the least absolute shrinkage and selection operator (LASSO) regression algorithm. Four machine learning models were created using the Scikit-learn repository, including the support vector machine (SVM), random forest (RF), linear regression (LR) and extreme gradient boosting (XGBoost) models. The optimal machine learning model was screened with the receiver operating characteristic (ROC) curve, and the features contributing most to the differentiation of ultrasound images in the machine learning models were identified with the SHapley Additive exPlanations (SHAP) method. RESULTS: The ultrasonographic imaging data and clinical laboratory testing data from 491 schistosomiasis patients from 2019 to 2022 were included in the study, and a total of 851 radiomics features and 54 clinical laboratory indicators were captured. Following statistical tests (t = -5.98 to 4.80, U = 6,550 to 20,994, all P values < 0.05) and screening of key features with LASSO regression, 44 features or indicators were included for subsequent modeling. The areas under the ROC curve (AUCs) were 0.763 and 0.611 for the training and validation sets of the SVM model based on clinical laboratory indicators, 0.951 and 0.892 for the training and validation sets of the SVM model based on radiomics, and 0.960 and 0.913 for the training and validation sets of the multimodal SVM model. The 10 greatest-contributing features or indicators in the machine learning models included 2 clinical laboratory indicators and 8 radiomics features. CONCLUSIONS: Multimodal machine learning models created from ultrasound-based radiomics and clinical laboratory indicators are feasible for intelligent identification of schistosomiasis-induced liver fibrosis and effectively improve on the classification performance of single-modality models.
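The abstract names the building blocks of such a pipeline (PyRadiomics features, LASSO selection, a Scikit-learn SVM, ROC evaluation). A minimal Scikit-learn sketch of that kind of workflow is shown below; it assumes the radiomics and laboratory features are already assembled into a numeric matrix, uses synthetic placeholder data, and is not the authors' actual code or parameter choices.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 491 patients x 905 features, label 0 = grade I, 1 = grade II-III.
rng = np.random.default_rng(1)
X = rng.normal(size=(491, 905))
y = (X[:, :5].sum(axis=1) + rng.normal(size=491) > 0).astype(int)  # 5 informative features
X_train, X_val, y_train, y_val = X[:380], X[380:], y[:380], y[380:]

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(Lasso(alpha=0.05)),      # LASSO-based key-feature selection
    SVC(kernel="rbf", probability=True),     # SVM classifier on the retained features
)
model.fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC: {auc:.3f}")
```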


Asunto(s)
Cirrosis Hepática , Aprendizaje Automático , Esquistosomiasis , Ultrasonografía , Humanos , Esquistosomiasis/diagnóstico , Esquistosomiasis/diagnóstico por imagen , Cirrosis Hepática/parasitología , Cirrosis Hepática/diagnóstico por imagen , Cirrosis Hepática/diagnóstico , Ultrasonografía/métodos , Masculino , Femenino , Persona de Mediana Edad , Adulto , Máquina de Vectores de Soporte , Procesamiento de Imagen Asistido por Computador/métodos , Radiómica
7.
PLoS One ; 19(7): e0304915, 2024.
Article in English | MEDLINE | ID: mdl-38950045

ABSTRACT

A trademark's image is usually the first type of indirect contact between a consumer and a product or a service. Companies rely on graphical trademarks as a symbol of quality and instant recognition, seeking to protect them from copyright infringements. A popular defense mechanism is graphical searching, where an image is compared to a large database to find potential conflicts with similar trademarks. Despite not being a new subject, the state of the art in image retrieval lacks reliable solutions in the Industrial Property (IP) sector, where datasets are practically unrestricted in content, with abstract images for which modeling human perception is a challenging task. Existing Content-based Image Retrieval (CBIR) systems still present several problems, particularly in terms of efficiency and reliability. In this paper, we propose a new CBIR system that overcomes these major limitations. It follows a modular methodology, composed of a set of individual components tasked with the retrieval, maintenance and gradual optimization of trademark image searching, working on large-scale, unlabeled datasets. Its generalization capacity is achieved using multiple feature descriptions, weighted separately and combined to represent a single similarity score. Images are evaluated for general features, edge maps, and regions of interest, using a method based on Watershedding K-Means segments. We propose an image recovery process that relies on a new similarity measure between all feature descriptions. New trademark images are added every day to ensure up-to-date results. The proposed system showcases a timely retrieval speed, with 95% of searches presented within 10 seconds and a mean average precision of 93.7%, supporting its applicability to real-world IP protection scenarios.
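The core retrieval idea, several feature descriptions weighted separately and combined into a single similarity score, can be sketched as follows. The descriptor names, dimensions and weights are hypothetical placeholders, not the paper's actual features or similarity measure.

```python
import numpy as np

def cosine_similarity(query_vec, db_matrix):
    """Cosine similarity between a query vector and each row of a database matrix."""
    q = query_vec / np.linalg.norm(query_vec)
    m = db_matrix / np.linalg.norm(db_matrix, axis=1, keepdims=True)
    return m @ q

def combined_score(query_feats, db_feats, weights):
    """Weighted sum of per-descriptor similarities (e.g. global, edge-map, region features)."""
    total = np.zeros(next(iter(db_feats.values())).shape[0])
    for name, w in weights.items():
        total += w * cosine_similarity(query_feats[name], db_feats[name])
    return total

# Hypothetical database of 1,000 trademarks described by three feature types.
rng = np.random.default_rng(2)
db = {"global": rng.normal(size=(1000, 128)),
      "edges": rng.normal(size=(1000, 64)),
      "regions": rng.normal(size=(1000, 32))}
query = {k: rng.normal(size=v.shape[1]) for k, v in db.items()}
scores = combined_score(query, db, {"global": 0.5, "edges": 0.3, "regions": 0.2})
top10 = np.argsort(scores)[::-1][:10]   # indices of the ten most similar trademarks
print(top10)
```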


Asunto(s)
Propiedad Intelectual , Humanos , Almacenamiento y Recuperación de la Información/métodos , Bases de Datos Factuales , Algoritmos , Procesamiento de Imagen Asistido por Computador/métodos
8.
PeerJ ; 12: e17557, 2024.
Article in English | MEDLINE | ID: mdl-38952993

ABSTRACT

Imagery has become one of the main data sources for investigating seascape spatial patterns. This is particularly true in deep-sea environments, which are only accessible with underwater vehicles. On the one hand, using collaborative web-based tools and machine learning algorithms, biological and geological features can now be massively annotated on 2D images with the support of experts. On the other hand, geomorphometrics such as slope or rugosity derived from 3D models built with structure-from-motion (SfM) methodology can then be used to answer spatial distribution questions. However, precise georeferencing of 2D annotations on 3D models has proven challenging for deep-sea images, due to a large mismatch between navigation obtained from underwater vehicles and the reprojected navigation computed in the process of building 3D models. In addition, although 3D models can be directly annotated, the process becomes challenging due to the low resolution of textures and the large size of the models. In this article, we propose a streamlined, open-access processing pipeline to reproject 2D image annotations onto 3D models using ray tracing. Using four underwater image datasets, we assessed the accuracy of annotation reprojection on 3D models and achieved successful georeferencing to centimetric accuracy. The combination of photogrammetric 3D models and accurate 2D annotations would allow the construction of a 3D representation of the landscape and could provide new insights into understanding species microdistribution and biotic interactions.
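Reprojecting a 2D annotation onto a 3D model amounts to casting a ray from the camera centre through the annotated pixel and intersecting it with the mesh. The sketch below shows the classic Möller-Trumbore ray-triangle intersection that such a ray-tracing step can rely on; the camera pose, ray and triangle are hypothetical, and the paper's actual pipeline is not reproduced here.

```python
import numpy as np

def ray_triangle_intersection(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore: return the intersection point of a ray with a triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv_det
    return origin + t * direction if t > eps else None

# Hypothetical example: camera above the origin looking down -z onto one mesh triangle.
cam_origin = np.array([0.0, 0.0, 2.0])
pixel_ray = np.array([0.0, 0.0, -1.0])         # ray through an annotated pixel
tri = [np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(ray_triangle_intersection(cam_origin, pixel_ray, *tri))   # -> [0. 0. 0.]
```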


Asunto(s)
Imagenología Tridimensional , Imagenología Tridimensional/métodos , Algoritmos , Aprendizaje Automático , Procesamiento de Imagen Asistido por Computador/métodos , Océanos y Mares
9.
Sci Rep ; 14(1): 15010, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38951163

ABSTRACT

Diffusion tensor imaging (DTI) metrics and tractography can be biased due to low signal-to-noise ratio (SNR) and systematic errors resulting from image artifacts and imperfections in magnetic field gradients. The imperfections include non-uniformity and nonlinearity, effects caused by eddy currents, and the influence of background and imaging gradients. We investigated the impact of systematic errors on the DTI metrics of an isotropic phantom and the DTI metrics and tractography of a rat brain measured at high resolution. We tested denoising and Gibbs ringing removal methods combined with the B matrix spatial distribution (BSD) method for magnetic field gradient calibration. The results showed that the performance of the BSD method depends on whether Gibbs ringing is removed and on the effectiveness of stochastic error removal. Region of interest (ROI)-based analysis of the DTI metrics showed that, depending on the size of the ROI and its location in space, correction methods can remove systematic bias to varying degrees. The preprocessing pipeline proposed for and dedicated to this type of data, together with the BSD method, resulted in a decrease in fractional anisotropy (FA) of as much as 90% (both globally and locally) in the isotropic phantom and 45% in the rat brain. The largest global changes in the rat brain tractogram compared to the standard method without preprocessing (sDTI) were noticed after denoising. The direction of the first eigenvector obtained from DTI after denoising, Gibbs ringing removal and BSD differed by an average of 56 and 10 degrees in the ROI from sDTI and from sDTI after denoising and Gibbs ringing removal, respectively. The latter can be identified with the amount of improvement in tractography due to the elimination of systematic errors related to imperfect magnetic field gradients. Based on the results, the systematic bias for high-resolution data mainly depended on SNR, but the influence of non-uniform gradients could also be seen. After denoising, the BSD method was able to further correct both the metrics and the tractography of the diffusion tensor in the rat brain by taking into account the actual distribution of magnetic field gradients, which is independent of the examined object and uniquely dependent on the scanner and sequence. This means that in vivo studies are also subject to this type of error, which should be taken into account when processing such data.
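Fractional anisotropy, the metric whose bias is quantified in this study, is computed from the eigenvalues of the fitted diffusion tensor. A minimal numpy sketch of that standard DTI formula is given below; the example tensors are hypothetical, and this is not the authors' BSD-corrected pipeline.

```python
import numpy as np

def fractional_anisotropy(tensor):
    """FA from a 3x3 symmetric diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    evals = np.linalg.eigvalsh(tensor)
    md = evals.mean()                        # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    return np.sqrt(1.5) * num / den

# Hypothetical tensors: isotropic diffusion (FA ~ 0) vs. a strongly anisotropic fibre (FA -> 1).
iso = np.diag([1.0e-3, 1.0e-3, 1.0e-3])
fibre = np.diag([1.7e-3, 0.2e-3, 0.2e-3])
print(fractional_anisotropy(iso), fractional_anisotropy(fibre))
```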


Asunto(s)
Artefactos , Encéfalo , Imagen de Difusión Tensora , Fantasmas de Imagen , Relación Señal-Ruido , Animales , Imagen de Difusión Tensora/métodos , Ratas , Encéfalo/diagnóstico por imagen , Procesamiento de Imagen Asistido por Computador/métodos , Anisotropía , Masculino
10.
Sci Rep ; 14(1): 15013, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38951526

ABSTRACT

Vision Transformers (ViT) have made remarkable achievements in the field of medical image analysis. However, ViT-based methods have poor classification results on some small-scale medical image classification datasets. Meanwhile, many ViT-based models sacrifice computational cost for superior performance, which is a great challenge in practical clinical applications. In this paper, we propose an efficient medical image classification network based on an alternating mixture of CNN and Transformer blocks in tandem, called Eff-CTNet. Specifically, existing ViT-based methods still rely mainly on multi-head self-attention (MHSA), whose attention maps are highly similar, leading to computational redundancy. Therefore, we propose a group cascade attention (GCA) module that splits the feature maps and provides them to different attention heads, which further improves the diversity of attention and reduces the computational cost. In addition, we propose an efficient CNN (EC) module to enhance the capability of the model and extract local detail information in medical images. Finally, we connect them and design an efficient hybrid medical image classification network, namely Eff-CTNet. Extensive experimental results show that our Eff-CTNet achieves advanced classification performance with less computational cost on three public medical image classification datasets.
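The general idea of splitting feature maps across attention heads and cascading their outputs can be sketched in PyTorch as below. This follows the broad "cascaded group attention" pattern only; the dimensions, projections and cascade rule are assumptions for illustration, not Eff-CTNet's actual GCA module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupCascadeAttentionSketch(nn.Module):
    """Illustrative sketch: each head attends over its own channel split of the tokens,
    and each head's output is cascaded into the next head's input."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.ModuleList([nn.Linear(self.head_dim, 3 * self.head_dim)
                                  for _ in range(num_heads)])
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, tokens, dim)
        chunks = x.chunk(self.num_heads, dim=-1)
        outputs, carry = [], 0
        for head, chunk in zip(self.qkv, chunks):
            q, k, v = head(chunk + carry).chunk(3, dim=-1)
            attn = F.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
            carry = attn @ v                   # cascade: feed this head's output forward
            outputs.append(carry)
        return self.proj(torch.cat(outputs, dim=-1))

tokens = torch.randn(2, 196, 64)               # hypothetical 14x14 patch tokens, 64 channels
out = GroupCascadeAttentionSketch(64, num_heads=4)(tokens)
print(out.shape)                               # torch.Size([2, 196, 64])
```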


Asunto(s)
Redes Neurales de la Computación , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos , Diagnóstico por Imagen/métodos , Interpretación de Imagen Asistida por Computador/métodos
11.
Sci Rep ; 14(1): 14993, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38951574

ABSTRACT

Spinal magnetic resonance (MR) scans are a vital tool for diagnosing the cause of back pain for many diseases and conditions. However, interpreting clinically useful information from these scans can be challenging, time-consuming and hard to reproduce across different radiologists. In this paper, we alleviate these problems by introducing a multi-stage automated pipeline for analysing spinal MR scans. This pipeline first detects and labels vertebral bodies across several commonly used sequences (e.g. T1w, T2w and STIR) and fields of view (e.g. lumbar, cervical, whole spine). Using these detections it then performs automated diagnosis for several spinal disorders, including intervertebral disc degenerative changes in T1w and T2w lumbar scans, and spinal metastases, cord compression and vertebral fractures. To achieve this, we propose a new method of vertebrae detection and labelling, using vector fields to group together detected vertebral landmarks and a language-modelling inspired beam search to determine the corresponding levels of the detections. We also employ a new transformer-based architecture to perform radiological grading which incorporates context from multiple vertebrae and sequences, as a real radiologist would. The performance of each stage of the pipeline is tested in isolation on several clinical datasets, each consisting of 66 to 421 scans. The outputs are compared to manual annotations of expert radiologists, demonstrating accurate vertebrae detection across a range of scan parameters. Similarly, the model's grading predictions for various types of disc degeneration and detection of spinal metastases closely match those of an expert radiologist. To aid future research, our code and trained models are made publicly available.


Asunto(s)
Imagen por Resonancia Magnética , Humanos , Imagen por Resonancia Magnética/métodos , Enfermedades de la Columna Vertebral/diagnóstico por imagen , Enfermedades de la Columna Vertebral/patología , Columna Vertebral/diagnóstico por imagen , Columna Vertebral/patología , Degeneración del Disco Intervertebral/diagnóstico por imagen , Degeneración del Disco Intervertebral/patología , Procesamiento de Imagen Asistido por Computador/métodos , Interpretación de Imagen Asistida por Computador/métodos
12.
Sci Rep ; 14(1): 14995, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38951630

ABSTRACT

Transmission electron microscopy (TEM) is an imaging technique used to visualize and analyze nano-sized structures and objects such as virus particles. Light microscopy can be used to diagnose diseases or characterize, e.g., blood cells. Since samples under microscopes exhibit certain symmetries, such as global rotation invariance, equivariant neural networks are presumed to be useful. In this study, a baseline convolutional neural network is constructed in the form of the commonly used VGG16 classifier. Thereafter, it is modified to be equivariant to the p4 symmetry group of rotations by multiples of 90° using group convolutions. This yields a number of benefits on a TEM virus dataset, including a top validation set accuracy that is on average 7.6% higher and a training convergence time that is on average 23.1% of that of the baseline. Similarly, when training and testing on images of blood cells, the convergence time for the equivariant neural network is 7.9% of that of the baseline. From this it is concluded that augmentation strategies for rotation can be skipped. Furthermore, when modelling accuracy versus the amount of TEM virus training data with a power law, the equivariant network has a slope of -0.43 compared to -0.26 for the baseline. Thus the equivariant network learns faster than the baseline when more training data is added. This study extends previous research on equivariant neural networks applied to images that exhibit symmetries under isometric transformations.
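The p4 lifting convolution behind this equivariance can be written compactly by convolving the input with all four 90°-rotated copies of each filter and stacking the results over a new rotation axis. The code below is a minimal illustration of that idea with arbitrary shapes, not the paper's VGG16-based implementation.

```python
import torch
import torch.nn.functional as F

def p4_lifting_conv(x, weight, bias=None):
    """Lifting convolution for the p4 group: convolve with the filter rotated by
    0, 90, 180 and 270 degrees and stack the results over a rotation axis.

    x: (batch, in_ch, H, W); weight: (out_ch, in_ch, k, k)
    returns: (batch, out_ch, 4, H, W) for odd k with 'same' padding.
    """
    outputs = []
    for r in range(4):
        w_rot = torch.rot90(weight, r, dims=(2, 3))   # rotate the spatial filter
        outputs.append(F.conv2d(x, w_rot, bias=bias, padding=weight.shape[-1] // 2))
    return torch.stack(outputs, dim=2)

# Rotating the input rotates the feature maps and cyclically shifts the rotation axis,
# which is the equivariance property exploited by the paper's network.
x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
y = p4_lifting_conv(x, w)
y_rot = p4_lifting_conv(torch.rot90(x, 1, dims=(2, 3)), w)
print(y.shape, y_rot.shape)     # torch.Size([1, 8, 4, 32, 32]) twice
```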


Asunto(s)
Microscopía Electrónica de Transmisión , Redes Neurales de la Computación , Microscopía Electrónica de Transmisión/métodos , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos , Rotación , Humanos
13.
Cell Metab ; 36(7): 1482-1493.e7, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38959862

ABSTRACT

Although human core body temperature is known to decrease with age, the age dependency of facial temperature and its potential to indicate aging rate or aging-related diseases remains uncertain. Here, we collected thermal facial images of 2,811 Han Chinese individuals 20-90 years old, developed the ThermoFace method to automatically process and analyze images, and then generated thermal age and disease prediction models. The ThermoFace deep learning model for thermal facial age has a mean absolute deviation of about 5 years in cross-validation and 5.18 years in an independent cohort. The difference between predicted and chronological age is highly associated with metabolic parameters, sleep time, and gene expression pathways like DNA repair, lipolysis, and ATPase in the blood transcriptome, and it is modifiable by exercise. Consistently, ThermoFace disease predictors forecast metabolic diseases like fatty liver with high accuracy (AUC > 0.80), with predicted disease probability correlated with metabolic parameters.


Asunto(s)
Envejecimiento , Cara , Enfermedades Metabólicas , Humanos , Persona de Mediana Edad , Anciano , Adulto , Masculino , Femenino , Anciano de 80 o más Años , Adulto Joven , Aprendizaje Profundo , Temperatura Corporal , Procesamiento de Imagen Asistido por Computador
14.
Sci Data ; 11(1): 722, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956115

ABSTRACT

Around 20% of complete blood count samples necessitate visual review using light microscopes or digital pathology scanners. There is currently no technological alternative to the visual examination of red blood cell (RBC) morphology/shapes. True/non-artifact teardrop-shaped RBCs and schistocytes/fragmented RBCs are commonly associated with serious medical conditions that could be fatal, while increased ovalocytes are associated with almost all types of anemias. 25 distinct blood smears, each from a different patient, were manually prepared, stained, and then sorted into four groups. Each group underwent imaging using different cameras integrated into light microscopes with 40X microscopic lenses, resulting in a total of 47K+ field images/patches. Two hematologists processed the images cell by cell to provide one million+ segmented RBCs with their XYWH coordinates and classified 240K+ RBCs into nine shapes. This dataset (Elsafty_RBCs_for_AI) enables the development/testing of deep learning-based (DL) automation of RBC morphology/shape examination, including specific normalization of blood smear stains (different from histopathology stains), detection/counting, segmentation, and classification. Two codes are provided (Elsafty_Codes_for_AI), one for semi-automated image processing and another for training/testing of a DL-based image classifier.
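Since the dataset provides per-cell XYWH coordinates, a downstream user would typically crop individual RBC patches from each field image before training a classifier. A hypothetical loading sketch is shown below; the file names and annotation layout are placeholders, not the published dataset's actual schema.

```python
import csv
from PIL import Image

def crop_rbcs(field_image_path, annotation_csv_path):
    """Crop individual red-blood-cell patches from a field image using XYWH boxes.

    The CSV layout (columns x, y, w, h, shape_label) is a hypothetical example of
    how such annotations might be stored, not the dataset's exact format.
    """
    image = Image.open(field_image_path)
    patches = []
    with open(annotation_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            x, y, w, h = (int(row[k]) for k in ("x", "y", "w", "h"))
            patches.append((image.crop((x, y, x + w, y + h)), row["shape_label"]))
    return patches

# Usage (paths are placeholders):
# cells = crop_rbcs("smear_01_field_0001.png", "smear_01_field_0001_annotations.csv")
```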


Asunto(s)
Eritrocitos , Eritrocitos/citología , Humanos , Microscopía , Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador
15.
Sci Rep ; 14(1): 15219, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956117

ABSTRACT

Blinding eye diseases are often related to changes in retinal structure, which can be detected by analysing retinal blood vessels in fundus images. However, existing techniques struggle to accurately segment these delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on specific operations can limit its ability to capture crucial details such as vessel edges. This paper introduces LMBiS-Net, a lightweight convolutional neural network designed for the segmentation of retinal vessels. LMBiS-Net achieves exceptional performance with a remarkably low number of learnable parameters (only 0.172 million). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for the information flow between the encoder and decoder. In addition, we have optimised the efficiency of the model by carefully selecting the number of filters to avoid filter overlap. This optimisation significantly reduces training time and improves computational efficiency. To assess LMBiS-Net's robustness and ability to generalise to unseen data, we conducted comprehensive evaluations on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. The proposed LMBiS-Net achieves significant performance metrics on these datasets. It obtains sensitivity values of 83.60%, 84.37%, 86.05%, and 83.48%, specificity values of 98.83%, 98.77%, 98.96%, and 98.77%, accuracy (acc) scores of 97.08%, 97.69%, 97.75%, and 96.90%, and AUC values of 98.80%, 98.82%, 98.71%, and 88.77% on the DRIVE, STARE, CHASE_DB1, and HRF datasets, respectively. In addition, it records F1 scores of 83.43%, 84.44%, 83.54%, and 78.73% on the same datasets. Our evaluations demonstrate that LMBiS-Net achieves high segmentation accuracy (acc) while exhibiting both robustness and generalisability across various retinal image datasets. This combination of qualities makes LMBiS-Net a promising tool for various clinical applications.
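The metrics reported above (sensitivity, specificity, accuracy, F1) all follow directly from the pixel-wise confusion matrix between a binary vessel prediction and its ground truth. A short numpy sketch of those standard definitions, using synthetic masks in place of real model output and DRIVE/STARE labels:

```python
import numpy as np

def vessel_segmentation_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity, accuracy and F1 for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

# Hypothetical masks: a sparse vessel map and a prediction with 2% of pixels flipped.
rng = np.random.default_rng(3)
truth = rng.random((584, 565)) < 0.1
pred = truth ^ (rng.random(truth.shape) < 0.02)
print(vessel_segmentation_metrics(pred, truth))
```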


Asunto(s)
Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador , Redes Neurales de la Computación , Vasos Retinianos , Vasos Retinianos/diagnóstico por imagen , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos
16.
Sci Rep ; 14(1): 15094, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956139

ABSTRACT

With the increase in dependency on digital devices, the incidence of myopia, a precursor of various ocular diseases, has risen significantly. Because myopia and eyeball volume are related, myopia progression can be monitored through eyeball volume estimation. However, existing methods are limited because the eyeball shape is disregarded during estimation. We propose an automated eyeball volume estimation method for computed tomography images that incorporates prior knowledge of the actual eyeball shape. This study involves data preprocessing, image segmentation, and volume estimation steps, which include a truncated cone formula and an integral equation. We obtained eyeball image masks using U-Net, HFCN, DeepLab v3+, SegNet, and HardNet-MSEG. Data from 200 subjects were used for volume estimation, and manually extracted eyeball volumes were used for validation. U-Net performed best among the segmentation models, and the proposed volume estimation method outperformed comparative methods on all evaluation metrics, with a correlation coefficient of 0.819, mean absolute error of 0.640, and mean squared error of 0.554. The proposed method surpasses existing methods, provides an accurate eyeball volume estimation for monitoring the progression of myopia, and could potentially aid in the diagnosis of ocular diseases. It could be extended to volume estimation of other ocular structures.
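The truncated-cone (frustum) formula mentioned in the abstract, V = πh/3·(R² + Rr + r²), estimates the volume between two adjacent slices from their cross-sectional radii. The sketch below sums slice-wise frustum volumes under the simplifying assumption of roughly circular cross-sections; the areas and slice thickness are hypothetical, and this is an illustration of the geometric idea rather than the authors' full method.

```python
import math

def frustum_volume(r1, r2, h):
    """Volume of a truncated cone with end radii r1, r2 and height h:
    V = pi * h / 3 * (r1**2 + r1*r2 + r2**2)."""
    return math.pi * h / 3.0 * (r1 * r1 + r1 * r2 + r2 * r2)

def eyeball_volume_from_slices(areas_mm2, slice_thickness_mm):
    """Sum frustum volumes between consecutive slices, assuming circular cross-sections
    so that each slice radius is sqrt(area / pi)."""
    radii = [math.sqrt(a / math.pi) for a in areas_mm2]
    return sum(frustum_volume(r1, r2, slice_thickness_mm)
               for r1, r2 in zip(radii[:-1], radii[1:]))

# Hypothetical segmented cross-sectional areas (mm^2) of one eyeball on 2.4 mm CT slices.
areas = [20, 120, 260, 380, 430, 440, 420, 360, 250, 110, 15]
print(f"estimated volume: {eyeball_volume_from_slices(areas, 2.4) / 1000:.2f} mL")
```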


Asunto(s)
Ojo , Miopía , Redes Neurales de la Computación , Tomografía Computarizada por Rayos X , Humanos , Tomografía Computarizada por Rayos X/métodos , Ojo/diagnóstico por imagen , Miopía/diagnóstico por imagen , Miopía/patología , Femenino , Masculino , Adulto , Procesamiento de Imagen Asistido por Computador/métodos , Persona de Mediana Edad , Adulto Joven
17.
Sci Rep ; 14(1): 15057, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38956224

ABSTRACT

Image segmentation is a critical and challenging endeavor in the field of medicine. Magnetic resonance imaging (MRI) is currently a helpful method for locating abnormal brain tissue. Diagnosing and classifying a tumor from several images is a difficult undertaking for radiologists. This work develops an intelligent method for accurately identifying brain tumors. This research investigates the identification of brain tumor types from MRI data using convolutional neural networks and optimization strategies. Two novel approaches are presented: the first is a novel segmentation technique based on firefly optimization (FFO) that assesses segmentation quality based on many parameters, and the other is a combination of two types of convolutional neural networks to categorize tumor traits and identify the kind of tumor. These upgrades are intended to raise the general efficacy of the MRI scan technique and increase identification accuracy. Testing is carried out using MRI scans from BRATS2018, and the suggested approach has shown improved performance with an average accuracy of 98.6%.
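The firefly optimization referenced above moves each candidate solution toward brighter (better-scoring) ones with an attractiveness that decays with distance, plus a small random perturbation. The sketch below shows that standard update rule on a toy objective standing in for a segmentation-quality score; it is generic FFO with arbitrary parameters, not the paper's segmentation-specific fitness function.

```python
import numpy as np

def firefly_optimize(objective, dim, n_fireflies=20, iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimise `objective` with the standard firefly update:
    x_i <- x_i + beta0*exp(-gamma*r_ij^2)*(x_j - x_i) + alpha*(rand - 0.5),
    applied whenever firefly j has a better (lower) objective value than firefly i."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_fireflies, dim))
    for _ in range(iters):
        f = np.array([objective(xi) for xi in x])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                           # j is "brighter"
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    f[i] = objective(x[i])
        alpha *= 0.97                                     # gradually damp the random walk
    best = np.argmin([objective(xi) for xi in x])
    return x[best]

# Toy objective: minimum at the zero vector.
sphere = lambda v: float(np.sum(v ** 2))
print(firefly_optimize(sphere, dim=3))
```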


Asunto(s)
Neoplasias Encefálicas , Imagen por Resonancia Magnética , Redes Neurales de la Computación , Imagen por Resonancia Magnética/métodos , Neoplasias Encefálicas/diagnóstico por imagen , Neoplasias Encefálicas/patología , Neoplasias Encefálicas/clasificación , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos , Encéfalo/diagnóstico por imagen , Encéfalo/patología
18.
F1000Res ; 13: 691, 2024.
Article in English | MEDLINE | ID: mdl-38962692

ABSTRACT

Background: Non-contrast computed tomography (NCCT) plays a pivotal role in assessing central nervous system disorders and is a crucial diagnostic method. Iterative reconstruction (IR) methods have enhanced image quality (IQ) but may result in a blotchy appearance and decreased resolution for subtle contrasts. The deep-learning image reconstruction (DLIR) algorithm, which integrates a convolutional neural network (CNN) into the reconstruction process, generates high-quality images with minimal noise. Hence, the objective of this study was to assess the IQ of the Precise Image (DLIR) and the IR technique (iDose4) for NCCT brain. Methods: This is a prospective study. Thirty patients who underwent NCCT brain were included. The images were reconstructed using DLIR-standard and iDose4. Qualitative IQ analysis parameters, such as overall image quality (OQ), subjective image noise (SIN), and artifacts, were measured. Quantitative IQ analysis parameters such as computed tomography (CT) attenuation (HU), image noise (IN), posterior fossa index (PFI), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in the basal ganglia (BG) and centrum semiovale (CSO) were measured. Paired t-tests were performed for qualitative and quantitative IQ analyses between iDose4 and DLIR-standard. Kappa statistics were used to assess inter-observer agreement for the qualitative analysis. Results: Quantitative IQ analysis showed significant differences (p<0.05) in IN, SNR, and CNR between iDose4 and DLIR-standard at the BG and CSO levels. With DLIR-standard, IN was reduced (by 41.8-47.6%), and SNR (by 65-82%) and CNR (by 68-78.8%) were increased. PFI was reduced (by 27.08%) with DLIR-standard. Qualitative IQ analysis showed significant differences (p<0.05) in OQ, SIN, and artifacts between DLIR-standard and iDose4. DLIR-standard showed higher qualitative IQ scores than iDose4. Conclusion: DLIR-standard yielded superior quantitative and qualitative IQ compared to the IR technique (iDose4). DLIR-standard significantly reduced IN and artifacts compared to iDose4 in NCCT brain.
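The quantitative IQ metrics above follow common ROI-based definitions: image noise is the standard deviation of HU within a homogeneous ROI, SNR is mean HU divided by that noise, and CNR contrasts a target ROI against an adjacent background. A small sketch of those conventional definitions is shown below; the HU samples are synthetic, and the study's exact ROI placement and formulas may differ.

```python
import numpy as np

def roi_stats(hu_values):
    """Mean CT attenuation (HU) and image noise (standard deviation) within one ROI."""
    return float(np.mean(hu_values)), float(np.std(hu_values))

def snr(roi):
    mean, noise = roi_stats(roi)
    return mean / noise

def cnr(roi_target, roi_background):
    """Contrast-to-noise ratio between e.g. grey matter (basal ganglia) and adjacent white matter."""
    m_t, _ = roi_stats(roi_target)
    m_b, sd_b = roi_stats(roi_background)
    return abs(m_t - m_b) / sd_b

# Hypothetical HU samples drawn for two ROIs at the basal-ganglia level.
rng = np.random.default_rng(4)
grey = rng.normal(38, 4, 500)     # target ROI: mean 38 HU, noise 4 HU
white = rng.normal(28, 4, 500)    # background ROI: mean 28 HU, noise 4 HU
print(f"SNR={snr(grey):.1f}, CNR={cnr(grey, white):.1f}")
```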


Asunto(s)
Encéfalo , Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador , Tomografía Computarizada por Rayos X , Humanos , Proyectos Piloto , Femenino , Tomografía Computarizada por Rayos X/métodos , Masculino , Estudios Prospectivos , Persona de Mediana Edad , Encéfalo/diagnóstico por imagen , Adulto , Procesamiento de Imagen Asistido por Computador/métodos , Anciano , Relación Señal-Ruido , Algoritmos
19.
J Biomed Opt ; 29(7): 076002, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38966847

ABSTRACT

Significance: Optical coherence tomography has great utility for capturing dynamic processes, but such applications are particularly data-intensive. Samples such as biological tissues exhibit temporal features at varying time scales, which makes data reduction challenging. Aim: We propose a method for capturing short- and long-term correlations of a sample in a compressed way using non-uniform temporal sampling to reduce scan time and memory overhead. Approach: The proposed method separates the relative contributions of white noise, fluctuating features, and stationary features. The method is demonstrated on mammary epithelial cell spheroids in three-dimensional culture for capturing intracellular motility without loss of signal integrity. Results: Results show that the spatial patterns of motility are preserved and that hypothesis tests of spheroids treated with blebbistatin, a motor protein inhibitor, are unchanged with up to eightfold compression. Conclusions: The ability to measure short- and long-term correlations compressively will enable new applications in (3+1)D imaging and high-throughput screening.


Asunto(s)
Tomografía de Coherencia Óptica , Tomografía de Coherencia Óptica/métodos , Humanos , Esferoides Celulares/efectos de los fármacos , Movimiento Celular/fisiología , Movimiento Celular/efectos de los fármacos , Procesamiento de Imagen Asistido por Computador/métodos , Células Epiteliales/efectos de los fármacos , Algoritmos , Compuestos Heterocíclicos de 4 o más Anillos
20.
Sci Rep ; 14(1): 15402, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38965305

ABSTRACT

The diagnosis of leukemia is a serious matter that requires immediate and accurate attention. This research presents a revolutionary method for diagnosing leukemia using a Capsule Neural Network (CapsNet) with an optimized design. CapsNet is a cutting-edge neural network that effectively captures complex features and spatial relationships within images. To improve the CapsNet's performance, a Modified version of the Osprey Optimization Algorithm (MOA) has been utilized. The suggested approach has been tested on the ALL-IDB database, a widely recognized dataset for leukemia image classification. Comparative analysis with various machine learning techniques, including a combined MobileNetV2 and ResNet18 (MBV2/Res) network, a depth-wise convolution model, a hybrid model that combines a genetic algorithm with ResNet-50V2 (ResNet/GA), and SVM/JAYA, demonstrated the superiority of our method across several metrics. As a result, the proposed method is a robust and powerful tool for diagnosing leukemia from medical images.
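CapsNet's defining nonlinearity is the squash function, which rescales a capsule's output vector so its length lies in [0, 1) and can be read as a class-presence probability. A minimal sketch of that generic CapsNet building block is given below; it is standard CapsNet machinery with hypothetical shapes, not the paper's MOA-optimized design.

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    """CapsNet squashing: v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Short vectors shrink toward zero, long vectors approach unit length."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    norm = torch.sqrt(norm_sq + eps)
    return (norm_sq / (1.0 + norm_sq)) * (s / norm)

# Hypothetical capsule outputs: 2 images, 10 class capsules, 16-dimensional each.
capsules = torch.randn(2, 10, 16)
v = squash(capsules)
class_probabilities = v.norm(dim=-1)     # vector length ~ probability the class is present
print(class_probabilities.shape)         # torch.Size([2, 10])
```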


Subject(s)
Algorithms; Leukemia; Neural Networks, Computer; Humans; Leukemia/diagnostic imaging; Machine Learning; Image Processing, Computer-Assisted/methods; Databases, Factual