1.
Nat Biomed Eng ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514775

ABSTRACT

Training machine-learning models with synthetically generated data can alleviate the problem of data scarcity when acquiring diverse and sufficiently large datasets is costly and challenging. Here we show that cascaded diffusion models can be used to synthesize realistic whole-slide image tiles from latent representations of RNA-sequencing data from human tumours. Alterations in gene expression affected the composition of cell types in the generated synthetic image tiles, which accurately preserved the distribution of cell types and maintained the cell fraction observed in bulk RNA-sequencing data, as we show for lung adenocarcinoma, kidney renal papillary cell carcinoma, cervical squamous cell carcinoma, colon adenocarcinoma and glioblastoma. Machine-learning models pretrained with the generated synthetic data performed better than models trained from scratch. Synthetic data may accelerate the development of machine-learning models in scarce-data settings and allow for the imputation of missing data modalities.
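Illustrative sketch for entry 1: the key mechanism is a denoising diffusion model whose noise predictor is conditioned on a latent code derived from RNA-sequencing data. A minimal PyTorch version of one training step follows; the tiny FiLM-conditioned denoiser, tensor shapes, and noise schedule are assumptions for the sketch, not the authors' released architecture (which is cascaded, far larger, and also conditions on the timestep).

```python
# Minimal sketch of one training step for an RNA-conditioned diffusion model.
# The timestep embedding is omitted for brevity; real models condition on t.
import torch
import torch.nn as nn

class FiLMDenoiser(nn.Module):
    """Toy noise predictor conditioned on an RNA latent via FiLM."""
    def __init__(self, rna_dim=200, channels=32):
        super().__init__()
        self.inp = nn.Conv2d(3, channels, 3, padding=1)
        self.film = nn.Linear(rna_dim, 2 * channels)  # per-channel scale/shift
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x_noisy, rna_latent):
        h = torch.relu(self.inp(x_noisy))
        scale, shift = self.film(rna_latent).chunk(2, dim=1)
        h = h * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.out(h)

model = FiLMDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
betas = torch.linspace(1e-4, 0.02, 1000)       # linear noise schedule
alpha_bar = torch.cumprod(1 - betas, dim=0)

def train_step(tiles, rna_latent):
    """tiles: (B, 3, H, W) image tiles; rna_latent: (B, rna_dim) codes."""
    t = torch.randint(0, 1000, (tiles.shape[0],))
    noise = torch.randn_like(tiles)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    x_noisy = ab.sqrt() * tiles + (1 - ab).sqrt() * noise  # forward process
    loss = ((model(x_noisy, rna_latent) - noise) ** 2).mean()  # eps-prediction
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```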

2.
Cell Rep Methods ; 4(2): 100695, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38278157

ABSTRACT

In this study, we develop a 3D beta variational autoencoder (beta-VAE) to advance lung cancer imaging analysis, countering the constraints of conventional radiomics methods. The autoencoder extracts information from public lung computed tomography (CT) datasets without additional labels. It reconstructs 3D lung nodule images with high quality (structural similarity: 0.774, peak signal-to-noise ratio: 26.1, and mean-squared error: 0.0008). The model effectively encodes lesion sizes in its latent embeddings, with a significant correlation with lesion size found after applying uniform manifold approximation and projection (UMAP) for dimensionality reduction. Additionally, the beta-VAE can synthesize new lesions of varying sizes by manipulating the latent features. The model can predict multiple clinical endpoints, including pathological N stage or KRAS mutation status, on the Stanford radiogenomics lung cancer dataset. Comparisons with other methods show that the beta-VAE performs equally well in these tasks, suggesting its potential as a pretrained model for predicting patient outcomes in medical imaging.


Subject(s)
Image Processing, Computer-Assisted; Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Mutation; Projection; Radiomics
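Illustrative sketch for entry 2: the beta-VAE objective is the standard VAE loss with the KL term up-weighted by a factor beta. A minimal fully-connected version follows; the paper's model is 3D convolutional, and the layer sizes and beta value here are assumptions.

```python
import torch
import torch.nn as nn

class BetaVAE(nn.Module):
    """Toy beta-VAE; the paper's version is 3D convolutional over CT volumes."""
    def __init__(self, in_dim=32 ** 3, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, latent_dim)
        self.logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

def beta_vae_loss(x, recon, mu, logvar, beta=4.0):
    recon_loss = ((recon - x) ** 2).mean()
    # KL divergence between q(z|x) and the standard normal prior
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl  # beta > 1 pressures disentangled latents
```

Synthesizing lesions of varying sizes then amounts to decoding latents moved along the direction correlated with lesion size.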
3.
bioRxiv ; 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-37808782

ABSTRACT

Cancer is a heterogeneous disease that demands precise molecular profiling for better understanding and management. Recently, deep learning has demonstrated potential for cost-efficient prediction of molecular alterations from histology images. While transformer-based deep learning architectures have enabled significant progress in non-medical domains, their application to histology images remains limited due to small dataset sizes coupled with the explosion of trainable parameters. Here, we develop SEQUOIA, a transformer model to predict cancer transcriptomes from whole-slide histology images. To enable the full potential of transformers, we first pre-train the model using data from 1,802 normal tissues. Then, we fine-tune and evaluate the model in 4,331 tumor samples across nine cancer types. The prediction performance is assessed at the individual gene and pathway levels through Pearson correlation analysis and root mean square error. The generalization capacity is validated across two independent cohorts comprising 1,305 tumors. In predicting the expression levels of 25,749 genes, the highest performance is observed in cancers from breast, kidney and lung, where SEQUOIA accurately predicts the expression of 11,069, 10,086 and 8,759 genes, respectively. The accurately predicted genes are associated with the regulation of the inflammatory response, the cell cycle and metabolism. While the model is trained at the tissue level, we showcase its potential in predicting spatial gene expression patterns using spatial transcriptomics datasets. Leveraging the prediction performance, we develop a digital gene expression signature that predicts the risk of recurrence in breast cancer. SEQUOIA deciphers clinically relevant gene expression patterns from histology images, opening avenues for improved cancer management and personalized therapies.
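Illustrative sketch for entry 3: the per-gene evaluation (Pearson correlation and RMSE across samples, gene by gene) is straightforward with NumPy/SciPy. The "well predicted" cutoff below is a placeholder, not SEQUOIA's exact significance procedure.

```python
import numpy as np
from scipy import stats

def evaluate_per_gene(y_true, y_pred, r_threshold=0.4):
    """y_true, y_pred: (n_samples, n_genes) expression matrices.
    Returns per-gene Pearson r, RMSE, and a 'well predicted' mask."""
    n_genes = y_true.shape[1]
    r = np.empty(n_genes)
    p = np.empty(n_genes)
    for g in range(n_genes):
        r[g], p[g] = stats.pearsonr(y_true[:, g], y_pred[:, g])
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
    well_predicted = (r > r_threshold) & (p < 0.05)  # illustrative cutoff
    return r, rmse, well_predicted
```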

5.
Cell Rep Methods ; 3(8): 100534, 2023 08 28.
Article in English | MEDLINE | ID: mdl-37671024

ABSTRACT

In this work, we propose an approach to generate whole-slide image (WSI) tiles by using deep generative models infused with matched gene expression profiles. First, we train a variational autoencoder (VAE) that learns a latent, lower-dimensional representation of multi-tissue gene expression profiles. Then, we use this representation to infuse generative adversarial networks (GANs) that generate lung and brain cortex tissue tiles, resulting in a new model that we call RNA-GAN. Tiles generated by RNA-GAN were preferred by expert pathologists compared with tiles generated using traditional GANs, and in addition, RNA-GAN needs fewer training epochs to generate high-quality tiles. Finally, RNA-GAN was able to generalize to gene expression profiles outside of the training set, showing imputation capabilities. A web-based quiz is available for users to play a game distinguishing real and synthetic tiles: https://rna-gan.stanford.edu/, and the code for RNA-GAN is available here: https://github.com/gevaertlab/RNA-GAN.


Subject(s)
Brain; Transcriptome; Cerebral Cortex; Learning; RNA
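Illustrative sketch for entry 5: "infusing" a GAN with expression profiles typically means feeding the VAE's gene-expression latent to the generator alongside the noise vector. The toy generator below shows the conditioning pattern only; it is not the RNA-GAN architecture from the linked repository.

```python
import torch
import torch.nn as nn

class RNAConditionedGenerator(nn.Module):
    """Toy generator: concatenates GAN noise with a gene-expression latent."""
    def __init__(self, noise_dim=128, rna_latent_dim=64, out_hw=64):
        super().__init__()
        self.out_hw = out_hw
        self.net = nn.Sequential(
            nn.Linear(noise_dim + rna_latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * out_hw * out_hw), nn.Tanh(),  # pixels in [-1, 1]
        )

    def forward(self, z, rna_latent):
        x = self.net(torch.cat([z, rna_latent], dim=1))
        return x.view(-1, 3, self.out_hw, self.out_hw)

# Usage: one tile conditioned on one patient's expression latent
# (in practice the latent comes from the trained VAE encoder).
gen = RNAConditionedGenerator()
tile = gen(torch.randn(1, 128), torch.randn(1, 64))
```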
6.
Nat Commun ; 14(1): 4122, 2023 07 11.
Article in English | MEDLINE | ID: mdl-37433817

ABSTRACT

Intra-tumoral heterogeneity and cell-state plasticity are key drivers for the therapeutic resistance of glioblastoma. Here, we investigate the association between spatial cellular organization and glioblastoma prognosis. Leveraging single-cell RNA-seq and spatial transcriptomics data, we develop a deep learning model to predict transcriptional subtypes of glioblastoma cells from histology images. Employing this model, we phenotypically analyze 40 million tissue spots from 410 patients and identify consistent associations between tumor architecture and prognosis across two independent cohorts. Patients with poor prognosis exhibit higher proportions of tumor cells expressing a hypoxia-induced transcriptional program. Furthermore, a clustering pattern of astrocyte-like tumor cells is associated with worse prognosis, while dispersion and connection of the astrocytes with other transcriptional subtypes correlate with decreased risk. To validate these results, we develop a separate deep learning model that utilizes histology images to predict prognosis. Applying this model to spatial transcriptomics data reveals survival-associated regional gene expression programs. Overall, our study presents a scalable approach to unravel the transcriptional heterogeneity of glioblastoma and establishes a critical connection between spatial cellular architecture and clinical outcomes.


Subject(s)
Glioblastoma; Humans; Glioblastoma/genetics; Astrocytes; Cell Plasticity; Cluster Analysis; Gene Expression Profiling
7.
Cancer Imaging ; 23(1): 66, 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37365659

ABSTRACT

BACKGROUND: Patients with pancreatic ductal carcinoma have a poor prognosis, given the difficulty of early detection and the lack of early symptoms. Digital pathology is routinely used by pathologists to diagnose the disease. However, visually inspecting the tissue is a time-consuming task, which slows down the diagnostic procedure. With the advances in artificial intelligence, specifically deep learning models, and the growing availability of public histology data, clinical decision support systems are being created. However, the generalization capabilities of these systems are not always tested, nor is the integration of publicly available datasets for pancreatic ductal carcinoma (PDAC) detection. METHODS: In this work, we explored the performance of two weakly supervised deep learning models using the two most widely available datasets with pancreatic ductal carcinoma histology images, The Cancer Genome Atlas Project (TCGA) and the Clinical Proteomic Tumor Analysis Consortium (CPTAC). In order to have sufficient training data, the TCGA dataset was integrated with the Genotype-Tissue Expression (GTEx) project dataset, which contains healthy pancreatic samples. RESULTS: We showed that the model trained on CPTAC generalizes better than the one trained on the integrated dataset, obtaining an inter-dataset accuracy of 90.62% ± 2.32 and an outer-dataset accuracy of 92.17% when evaluated on TCGA + GTEx. Furthermore, we tested the performance on another dataset formed by tissue microarrays, obtaining an accuracy of 98.59%. We showed that the features learned on the integrated dataset differentiate not between the classes but between the datasets, indicating that stronger normalization might be needed when creating clinical decision support systems with datasets obtained from different sources. To mitigate this effect, we proposed training on the three available datasets, improving the detection performance and generalization capabilities of a model trained only on TCGA + GTEx and achieving performance similar to that of the model trained only on CPTAC. CONCLUSIONS: The integration of datasets in which both classes are present can mitigate the batch effect that arises when combining datasets, improving classification performance and accurately detecting PDAC across different datasets.


Subject(s)
Carcinoma, Pancreatic Ductal; Deep Learning; Pancreatic Neoplasms; Humans; Artificial Intelligence; Carcinoma, Pancreatic Ductal/diagnosis; Carcinoma, Pancreatic Ductal/pathology; Proteomics; Pancreatic Neoplasms/diagnosis; Pancreatic Neoplasms
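Illustrative sketch for entry 7: the protocol that exposes the batch effect is to train on one source and test on the others, rather than validating within a single source. The sketch below is generic scikit-learn; the random feature matrices stand in for whatever slide-level representations the weakly supervised models produce.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def cross_dataset_eval(datasets):
    """datasets: dict name -> (X, y). Train on each source, test on the
    others to expose batch effects that in-dataset validation hides."""
    for train_name, (X_tr, y_tr) in datasets.items():
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        for test_name, (X_te, y_te) in datasets.items():
            if test_name != train_name:
                acc = accuracy_score(y_te, clf.predict(X_te))
                print(f"train={train_name} test={test_name} acc={acc:.3f}")

# Hypothetical slide-level features for the three sources in the study.
rng = np.random.default_rng(0)
datasets = {name: (rng.normal(size=(100, 32)), rng.integers(0, 2, 100))
            for name in ["CPTAC", "TCGA+GTEx", "TMA"]}
cross_dataset_eval(datasets)
```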
8.
Cancer Res ; 83(17): 2970-2984, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37352385

ABSTRACT

In prostate cancer, there is an urgent need for objective prognostic biomarkers that identify the metastatic potential of a tumor at an early stage. While recent analyses indicated TP53 mutations as candidate biomarkers, molecular profiling in a clinical setting is complicated by tumor heterogeneity. Deep learning models that predict the spatial presence of TP53 mutations in whole slide images (WSI) offer the potential to mitigate this issue. To assess the potential of WSIs as proxies for spatially resolved profiling and as biomarkers for aggressive disease, we developed TiDo, a deep learning model that achieves state-of-the-art performance in predicting TP53 mutations from WSIs of primary prostate tumors. In an independent multifocal cohort, the model showed successful generalization at both the patient and lesion level. Analysis of model predictions revealed that false positive (FP) predictions could at least partially be explained by TP53 deletions, suggesting that some FPs carry an alteration that leads to the same histological phenotype as TP53 mutations. Comparative expression and histologic cell type analyses identified a TP53-like cellular phenotype triggered by expression of pathways affecting stromal composition. Together, these findings indicate that WSI-based models might not be able to perfectly predict the spatial presence of individual TP53 mutations, but they have the potential to elucidate the prognosis of a tumor by depicting a downstream phenotype associated with aggressive disease biomarkers. SIGNIFICANCE: Deep learning models predicting TP53 mutations from whole slide images of prostate cancer capture histologic phenotypes associated with stromal composition, lymph node metastasis, and biochemical recurrence, indicating their potential as in silico prognostic biomarkers. See related commentary by Bordeleau, p. 2809.


Subject(s)
Prostatic Neoplasms; Male; Humans; Mutation; Prostatic Neoplasms/genetics; Prostatic Neoplasms/pathology; Prognosis; Prostate/pathology; Phenotype; Tumor Suppressor Protein p53/genetics
9.
Nat Med ; 29(3): 738-747, 2023 03.
Article in English | MEDLINE | ID: mdl-36864252

ABSTRACT

Undetected infection and delayed isolation of infected individuals are key factors driving the monkeypox virus (now termed mpox virus or MPXV) outbreak. To enable earlier detection of MPXV infection, we developed an image-based deep convolutional neural network (named MPXV-CNN) for the identification of the characteristic skin lesions caused by MPXV. We assembled a dataset of 139,198 skin lesion images, split into training/validation and testing cohorts, comprising non-MPXV images (n = 138,522) from eight dermatological repositories and MPXV images (n = 676) from the scientific literature, news articles, social media and a prospective cohort of the Stanford University Medical Center (n = 63 images from 12 patients, all male). In the validation and testing cohorts, the sensitivity of the MPXV-CNN was 0.83 and 0.91, the specificity was 0.965 and 0.898 and the area under the curve was 0.967 and 0.966, respectively. In the prospective cohort, the sensitivity was 0.89. The classification performance of the MPXV-CNN was robust across various skin tones and body regions. To facilitate the usage of the algorithm, we developed a web-based app by which the MPXV-CNN can be accessed for patient guidance. The capability of the MPXV-CNN for identifying MPXV lesions has the potential to aid in MPXV outbreak mitigation.


Subject(s)
Deep Learning; Mpox; Humans; Male; Prospective Studies; Monkeypox virus; Algorithms
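Illustrative sketch for entry 9: the reported sensitivity, specificity and AUC follow directly from the confusion matrix and ROC analysis. The scores below are synthetic stand-ins for MPXV-CNN outputs.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_metrics(y_true, scores, threshold=0.5):
    """Sensitivity, specificity and AUC for a binary classifier."""
    y_pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity, roc_auc_score(y_true, scores)

# Synthetic MPXV-vs-non-MPXV scores, for illustration only.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
s = np.clip(y * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)
print(binary_metrics(y, s))
```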
10.
bioRxiv ; 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-36711711

ABSTRACT

Data scarcity presents a significant obstacle in the field of biomedicine, where acquiring diverse and sufficient datasets can be costly and challenging. Synthetic data generation offers a potential solution to this problem by expanding dataset sizes, thereby enabling the training of more robust and generalizable machine learning models. Although previous studies have explored synthetic data generation for cancer diagnosis, they have predominantly focused on single-modality settings, such as whole-slide image tiles or RNA-Seq data. To bridge this gap, we propose a novel approach, the RNA-Cascaded-Diffusion-Model or RNA-CDM, for performing RNA-to-image synthesis in a multi-cancer context, drawing inspiration from successful text-to-image synthesis models used in natural images. In our approach, we employ a variational auto-encoder to reduce the dimensionality of a patient's gene expression profile, effectively distinguishing between different types of cancer. Subsequently, we employ a cascaded diffusion model to synthesize realistic whole-slide image tiles using the latent representation derived from the patient's RNA-Seq data. Our results demonstrate that the generated tiles accurately preserve the distribution of cell types observed in real-world data, with state-of-the-art cell identification models successfully detecting important cell types in the synthetic samples. Furthermore, we illustrate that the synthetic tiles maintain the cell fraction observed in bulk RNA-Seq data and that modifications in gene expression affect the composition of cell types in the synthetic tiles. Next, we utilize the synthetic data generated by RNA-CDM to pretrain machine learning models and observe improved performance compared to training from scratch. Our study emphasizes the potential usefulness of synthetic data in developing machine learning models in scarce-data settings, while also highlighting the possibility of imputing missing data modalities by leveraging the available information. In conclusion, our proposed RNA-CDM approach for synthetic data generation in biomedicine, particularly in the context of cancer diagnosis, offers a novel and promising solution to address data scarcity. By generating synthetic data that aligns with real-world distributions and leveraging it to pretrain machine learning models, we contribute to the development of robust clinical decision support systems and potential advancements in precision medicine.
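Illustrative sketch for entry 10: the pretraining claim corresponds to a standard two-stage loop, first on abundant synthetic tiles, then on the scarce real ones. The loader names and hyperparameters below are assumptions.

```python
import torch
import torch.nn as nn

def run_epochs(model, loader, epochs, lr):
    """Generic supervised training loop (cross-entropy classification)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Stage 1: pretrain on synthetic RNA-CDM tiles; stage 2: fine-tune on real
# tiles with a smaller learning rate. `model`, `synthetic_loader` and
# `real_loader` are hypothetical and assumed to exist.
# run_epochs(model, synthetic_loader, epochs=20, lr=1e-3)
# run_epochs(model, real_loader, epochs=5, lr=1e-4)
```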

11.
Trends Mol Med ; 29(2): 141-151, 2023 02.
Article in English | MEDLINE | ID: mdl-36470817

ABSTRACT

Sequencing of the human genome in the early 2000s enabled probing of the genetic basis of disease on a scale previously unimaginable. Now, two decades later, after interrogating millions of markers in thousands of individuals, a significant portion of disease heritability still remains hidden. Recent efforts to unravel this 'missing heritability' have focused on garnering new insight from merging different data types, including medical imaging. Imaging offers promising intermediate phenotypes to bridge the gap between genetic variation and disease pathology. In this review we outline this fusion and provide examples of imaging genomics in a range of diseases, from oncology to cardiovascular and neurodegenerative disease. Finally, we discuss how ongoing revolutions in data science and sharing are primed to advance the field.


Subject(s)
Genetic Variation; Neurodegenerative Diseases; Humans; Genetic Predisposition to Disease; Imaging Genomics; Phenotype; Genome-Wide Association Study
12.
J Dent ; 124: 104213, 2022 09.
Article in English | MEDLINE | ID: mdl-35793761

ABSTRACT

OBJECTIVE: To determine the visual 50:50% perceptibility and acceptability CIEDE2000 lightness, chroma and hue thresholds for human gingiva. METHODS: A psychophysical experiment based on visual assessments of simulated images of human gingiva on a calibrated display was performed. A 20-observer panel (dentists and laypersons; n = 10) evaluated three subsets of simulated human gingiva: a lightness subset (|ΔL'/ΔE00| ≥ 0.98), a chroma subset (|ΔC'/ΔE00| ≥ 0.98) and a hue subset (|ΔH'/ΔE00| ≥ 0.96), using ΔE00 < 5 units. A Takagi-Sugeno-Kang (TSK) fuzzy approximation model was used as the fitting procedure, and the 50:50% perceptibility threshold (PT) and acceptability threshold (AT) were calculated. Data were statistically analyzed using the t-test (p ≤ 0.05). RESULTS: The 50:50% PTs were ΔL' = 0.74 (95% confidence interval (CI) 0.39-1.09); ΔC' = 1.10 (95% CI 0.57-1.46); ΔH' = 2.40 (95% CI 1.66 to >3.85). The 50:50% ATs were ΔL' = 2.57 (95% CI 2.00-3.06) and ΔC' = 2.70 (95% CI 2.19-3.38); the AT for ΔH' may be considered not computable. PT values differed significantly among the three metric differences (p ≤ 0.05). No difference was found between observers for PT values. CONCLUSIONS: Statistically significant differences in perceptual limits were found among hue, lightness and chroma for human gingiva. Thus, observers seem to show lower sensitivity to changes in hue (ΔH') than in chroma (ΔC') and lightness (ΔL') in the gingiva color space. CLINICAL SIGNIFICANCE: PTs and ATs for lightness, chroma and hue specific to human gingiva should be used when evaluating natural gingiva, pink gingival shade guides or pink materials, since the thresholds of perceptibility and acceptability for teeth are not suitable.


Subject(s)
Gingiva; Tooth; Color; Humans
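For reference on entries 12, 17, 18 and 20: the CIEDE2000 total color difference combines lightness, chroma and hue terms, which the gingiva study thresholds separately:

```latex
\Delta E_{00} = \sqrt{\left(\frac{\Delta L'}{k_L S_L}\right)^2
  + \left(\frac{\Delta C'}{k_C S_C}\right)^2
  + \left(\frac{\Delta H'}{k_H S_H}\right)^2
  + R_T \frac{\Delta C'}{k_C S_C} \cdot \frac{\Delta H'}{k_H S_H}}
```

The subset criteria in entry 12 (e.g. |ΔL'/ΔE00| ≥ 0.98) select sample pairs whose overall difference is dominated by a single term, so each threshold can be measured in isolation.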
13.
J Pers Med ; 12(4)2022 Apr 08.
Article in English | MEDLINE | ID: mdl-35455716

ABSTRACT

Differentiation between the various non-small-cell lung cancer subtypes is crucial for providing an effective treatment to the patient. For this purpose, machine learning techniques have been used in recent years over the available biological data from patients. However, in most cases this problem has been treated using a single-modality approach, not exploiting the multi-scale and multi-omic nature of cancer data for the classification. In this work, we study the fusion of five multi-scale and multi-omic modalities (RNA-Seq, miRNA-Seq, whole-slide imaging, copy number variation, and DNA methylation) by using a late fusion strategy and machine learning techniques. We train an independent machine learning model for each modality and explore the interactions and gains that can be obtained by fusing their outputs incrementally, using a novel optimization approach to compute the parameters of the late fusion. The final classification model, using all modalities, obtains an F1 score of 96.81 ± 1.07, an AUC of 0.993 ± 0.004, and an AUPRC of 0.980 ± 0.016, improving on the results obtained by each independent model and on those reported in the literature for this problem. These results show that leveraging the multi-scale and multi-omic nature of cancer data can enhance the performance of single-modality clinical decision support systems in personalized medicine, consequently improving the diagnosis of the patient.
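Illustrative sketch for entry 13: late fusion can be realized as a weighted average of per-modality class probabilities, with the weights fitted on validation data. The softmax-parameterized log-loss optimization below is a stand-in for the paper's own optimization approach, which the abstract does not specify.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

def fit_fusion_weights(probs, y_val):
    """probs: list of (n_samples, n_classes) validation probabilities, one
    per modality; y_val: integer labels. Returns weights summing to 1."""
    P = np.stack(probs)                      # (n_modalities, n, k)

    def neg_log_likelihood(theta):
        w = softmax(theta)                   # positive weights, sum to 1
        fused = np.tensordot(w, P, axes=1)   # (n, k) weighted average
        return -np.mean(np.log(fused[np.arange(len(y_val)), y_val] + 1e-12))

    res = minimize(neg_log_likelihood, np.zeros(len(probs)),
                   method="Nelder-Mead")
    return softmax(res.x)

def fuse(probs, weights):
    """Apply fitted weights to a new set of per-modality probabilities."""
    return np.tensordot(weights, np.stack(probs), axes=1)
```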

14.
J Esthet Restor Dent ; 34(1): 259-280, 2022 01.
Article in English | MEDLINE | ID: mdl-34842324

ABSTRACT

OBJECTIVE: To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with broad insight into the different advances that these technologies and tools have produced, paying special attention to the area of esthetic dentistry and color research. MATERIALS AND METHODS: The comprehensive review was conducted in the MEDLINE/PubMed, Web of Science, and Scopus databases, for papers published in English in the last 20 years. RESULTS: Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS: The reviewed work reports outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry lies in the design of integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from these advances by developing models allowing a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE: The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with wide application in treatment planning and esthetic dentistry procedures.


Subject(s)
Artificial Intelligence; Dentistry; Forecasting; Humans; Machine Learning
15.
BMC Bioinformatics ; 22(1): 454, 2021 Sep 22.
Article in English | MEDLINE | ID: mdl-34551733

ABSTRACT

BACKGROUND: Adenocarcinoma and squamous cell carcinoma are the two most prevalent lung cancer types, and their distinction requires different screenings, such as the visual inspection of histology slides by an expert pathologist, the analysis of gene expression or computed tomography scans, among others. In recent years, increasing amounts of biological data have been gathered for decision support systems in diagnosis (e.g. histology imaging, next-generation sequencing data, clinical information, etc.). Using all these sources to design integrative classification approaches may improve the final diagnosis of a patient, in the same way that doctors can use multiple types of screenings to reach a final decision on the diagnosis. In this work, we present a late fusion classification model using histology and RNA-Seq data for adenocarcinoma, squamous-cell carcinoma and healthy lung tissue. RESULTS: The classification model improves results over using each source of information separately, reducing the diagnostic error rate by up to 64% compared with the standalone histology classifier and by 24% compared with the standalone gene expression classifier, reaching a mean F1-score of 95.19% and a mean AUC of 0.991. CONCLUSIONS: These findings suggest that a classification model using a late fusion methodology can considerably help clinicians in distinguishing between the aforementioned lung cancer subtypes, compared with using each source of information separately. This approach can also be applied to any cancer type or disease with heterogeneous sources of information.


Subject(s)
Adenocarcinoma; Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/genetics; Humans; Lung Neoplasms/genetics; Probability; RNA-Seq
16.
Comput Biol Med ; 133: 104387, 2021 06.
Article in English | MEDLINE | ID: mdl-33872966

ABSTRACT

The KnowSeq R/Bioc package is designed as a powerful, scalable and modular software package focused on automating and assembling renowned bioinformatic tools with new features and functionalities. It comprises a unified environment to perform complex gene expression analyses, covering all the processing steps needed to identify a gene signature for a specific disease and to extract understandable knowledge. This process may be initiated from raw files, either available at well-known platforms or provided by the users themselves, in either case coming from different information sources and different transcriptomic technologies. The pipeline makes use of a set of advanced algorithms, including the adaptation of a novel procedure for the selection of the most representative genes in a given multiclass problem. Similarly, it embeds an intelligent system able to classify new patients, giving the user the opportunity to choose among a number of well-known and widespread classification and feature selection methods in bioinformatics. Furthermore, KnowSeq is engineered to automatically generate a complete and detailed HTML report of the whole process, which is also modular and scalable. Biclass breast cancer and multiclass lung cancer case studies were addressed to rigorously assess the usability and efficiency of KnowSeq. The models built using the Differentially Expressed Genes obtained from both experiments reach high classification rates. Furthermore, biological knowledge was extracted in terms of Gene Ontology terms, pathways and related diseases, with the aim of helping the expert in the decision-making process. KnowSeq is available at Bioconductor (https://bioconductor.org/packages/KnowSeq), GitHub (https://github.com/CasedUgr/KnowSeq) and Docker (https://hub.docker.com/r/casedugr/knowseq).


Subject(s)
Computational Biology; Software; Algorithms; Humans; Transcriptome
17.
J Dent ; 108: 103640, 2021 05.
Article in English | MEDLINE | ID: mdl-33757865

ABSTRACT

OBJECTIVE: To evaluate the influence of neutral color backgrounds on the perception of color differences in dentistry. METHODS: Software was developed for this study to calculate the perceptibility (PT) and acceptability (AT) thresholds of color differences between a pair of computer-simulated incisor samples (n = 60 pairs) over three neutral color (white, gray and black) backgrounds. The CIELAB and CIEDE2000 color difference formulas were used. Five groups of volunteer observers (N = 100) participated in the psychophysical experiment (n = 20): dentists, dental students, dental auxiliaries, dental technicians, and laypersons. The psychophysical experiment was performed in a dark environment on a calibrated high-resolution screen. To determine PT and AT values, the 60 pairs of samples were randomly presented to each observer over the different backgrounds. The data were fitted (TSK fuzzy model) and analyzed statistically using Student's t-test and ANOVA (α = 0.05). RESULTS: Regardless of the metric and the background used, the PT values showed no difference among different observers (p > 0.05). Dentists showed the lowest PT values. Dental technicians showed the lowest AT (p ≤ 0.05) and similar values for the three backgrounds (p > 0.05), regardless of the metric used. The other groups of observers showed the lowest and the highest AT values when using black and white backgrounds, respectively (p ≤ 0.05). CONCLUSIONS: The lowest AT values being obtained with a black background indicates that this background allows for the evaluation of slight color differences, and it should be used for challenging color differences in esthetic dentistry. This study showed the influence of observer experience on color evaluation in dentistry. CLINICAL SIGNIFICANCE: There was no influence of the background color on the perceptibility threshold. However, dentists and dental technicians showed greater ability to perceive slight color differences compared to other groups of observers.


Subject(s)
Color Perception; Esthetics, Dental; Color; Humans; Incisor
18.
J Esthet Restor Dent ; 33(6): 836-843, 2021 09.
Article in English | MEDLINE | ID: mdl-33283966

ABSTRACT

OBJECTIVE: To evaluate the color, lightness, chroma, hue, and translucency adjustment potential of resin composites using the CIEDE2000 color difference formula. METHODS: Three resin composites (Filtek Universal, Harmonize, and Omnichroma) were tested. Two types of specimens were prepared: an outer base shade with an inner hole filled with test shades, and single-composite specimens of all shades. Spectroradiometric reflectance measurements were performed, and CIELAB color coordinates and the translucency parameter (TP) were subsequently computed. Color (CAP00), lightness, chroma, hue, and translucency (TAP00) adjustment potential using the CIEDE2000 color difference were computed. Color and translucency differences among composite materials and shades were statistically tested (p < 0.05). RESULTS: Positive CAP00 and TAP00 values were found for the majority of tested materials. CAP00 values ranged from -0.14 to 0.89, with the highest values found for Omnichroma (>0.75 in all cases). TAP00 values ranged from -0.06 to 0.86, with significant translucency differences between dual and single specimens. Omnichroma exhibited the highest adjustment potential for all color dimensions studied. CONCLUSIONS: Lightness, hue, chroma, and translucency adjustment potential have been introduced using the CIEDE2000 color difference formula, and have shown their usefulness for evaluating the blending effect in dentistry. Color coordinate and translucency adjustment potential were dependent on the dental material. Omnichroma exhibited the most pronounced blending effect. CLINICAL SIGNIFICANCE: Resin composites with increased color and translucency adjustment may simplify shade selection, making this process easier and less time consuming. Furthermore, these materials might facilitate challenging and complex color matching situations.


Subject(s)
Composite Resins; Color
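Note on entry 18: the abstract does not spell out the adjustment-potential computation. A common formulation in the blending-effect literature, given here as an assumption rather than the paper's exact definition, compares the color difference of the test shade measured inside the surrounding base (dual specimen) against the same shade measured alone (single specimen):

```latex
\mathrm{AP}_{00} = 1 - \frac{\Delta E_{00}^{\,\text{dual}}}{\Delta E_{00}^{\,\text{single}}}
```

Values near 1 indicate strong blending toward the surrounding shade; values near 0 or negative indicate little or none, which is consistent with the reported range of -0.14 to 0.89.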
19.
Front Physiol ; 11: 606287, 2020.
Article in English | MEDLINE | ID: mdl-33329060

ABSTRACT

The mDurance® system is an innovative digital tool that combines wearable surface electromyography (sEMG), mobile computing and cloud analysis to streamline and automate the assessment of muscle activity. The tool is particularly devised to support clinicians and sport professionals in their daily routines, as an assessment tool in prevention, monitoring, rehabilitation and training. This study aimed at determining the validity of the mDurance system for measuring muscle activity by comparing its sEMG output with that of a reference sEMG system, the Delsys® system. Fifteen participants were tested during isokinetic knee extensions at three different speeds (60, 180, and 300 deg/s), for two muscles (rectus femoris [RF] and vastus lateralis [VL]) and two different electrode locations (proximal and distal placement). A maximum voluntary isometric contraction was carried out for the normalization of the signal, followed by dynamic isokinetic knee extensions for each speed. The sEMG output for both systems was obtained from the raw sEMG signal following mDurance's processing and filtering. The mean, median, first quartile, third quartile and 90th percentile were calculated from the sEMG amplitude signals for each system. The results show an almost perfect ICC relationship for the VL (ICC > 0.81) and substantial to almost perfect for the RF (ICC > 0.762) for all variables and speeds. The Bland-Altman plots revealed heteroscedasticity of error for the mean, third quartile and 90th percentile (60 and 300 deg/s) for the RF, and for the mean and 90th percentile for the VL (300 deg/s). In conclusion, the results indicate that the mDurance® sEMG system is a valid tool to measure muscle activity during dynamic contractions over a range of speeds. This innovative system frees up time for clinicians (e.g., for interpreting patients' pathologies) and sport trainers (e.g., for advising athletes), thanks to automatic processing and filtering of the raw sEMG signal and the generation of muscle activity reports in real time.
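Illustrative sketch for entry 19: the agreement analysis pairs ICC with Bland-Altman limits of agreement. A hypothetical helper for the Bland-Altman part:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two paired measurement arrays:
    returns the mean difference (bias) and the 95% limits of agreement."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired sEMG amplitudes (mDurance vs Delsys), 15 subjects.
rng = np.random.default_rng(2)
ref = rng.normal(50, 10, 15)
test = ref + rng.normal(1.0, 3.0, 15)
print(bland_altman(test, ref))
```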

20.
J Dent ; 102: 103475, 2020 11.
Article in English | MEDLINE | ID: mdl-32961261

ABSTRACT

OBJECTIVE: To evaluate the influence of neutral color backgrounds on visual thresholds using three color difference metrics: CIELAB, CIEDE2000(1:1:1) and CIEDE2000(2:1:1). METHODS: Sixty observers (dentists and laypersons; n = 30) from three countries participated in the study. A psychophysical experiment based on visual assessments of simulated images of teeth on a calibrated display was performed. Images of simulated upper central incisors (SUCI) were consecutively displayed in pairs (60) on three neutral color backgrounds (black, grey and white). Three color difference metrics (CIELAB, CIEDE2000(1:1:1), and CIEDE2000(2:1:1)) were used to calculate the visual thresholds (PT, perceptibility threshold; and AT, acceptability threshold) with 95% confidence intervals (CI), and a Takagi-Sugeno-Kang (TSK) fuzzy approximation model was used as the fitting procedure. Data were statistically analyzed using the paired t-test (α = 0.05). RESULTS: The 50:50% PT values were significantly lower over the white background than over the black background. The 50:50% AT values were significantly greater over the white background than over the grey and black backgrounds. In most cases, the threshold (PT and AT) values were significantly different for each color difference metric over each background color (p ≤ 0.05). No difference was found between observers for PT and AT values. CONCLUSIONS: The perceptibility and acceptability thresholds in dentistry are affected by the color difference metric and by the background color. SIGNIFICANCE: Dental color mismatches are more difficult to accept over a white background.


Subject(s)
Color Perception; Incisor; Color
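Illustrative sketch for entries 12, 17 and 20: a 50:50% threshold is the color difference at which observers split evenly between responding "acceptable" (or "perceptible") and not. The papers fit a Takagi-Sugeno-Kang fuzzy model; the sketch below swaps in an ordinary logistic fit to show the idea, and is explicitly not the authors' procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Probability of a 'yes' response at color difference x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def threshold_5050(delta_e, yes_fraction):
    """Fit the psychometric curve and return the 50:50% point (x0)."""
    (x0, k), _ = curve_fit(logistic, delta_e, yes_fraction, p0=[2.0, -1.0])
    return x0

# Synthetic acceptability data: 'yes' fraction falls as Delta E00 grows.
de = np.linspace(0, 5, 12)
frac = logistic(de, 2.6, -1.8) + np.random.default_rng(3).normal(0, 0.03, 12)
print(threshold_5050(de, frac))
```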