Results 1 - 9 of 9
1.
JAMA Netw Open; 7(3): e242609, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38488790

ABSTRACT

Importance: The lack of standardized genetics training in pediatrics residencies, along with a shortage of medical geneticists, necessitates innovative educational approaches. Objective: To compare pediatric resident recognition of Kabuki syndrome (KS) and Noonan syndrome (NS) after 1 of 4 educational interventions, including generative artificial intelligence (AI) methods. Design, Setting, and Participants: This comparative effectiveness study used generative AI to create images of children with KS and NS. From October 1, 2022, to February 28, 2023, US pediatric residents were provided images through a web-based survey to assess whether these images helped them recognize genetic conditions. Interventions: Participants categorized 20 images after exposure to 1 of 4 educational interventions (text-only descriptions, real images, and 2 types of images created by generative AI). Main Outcomes and Measures: Associations of educational interventions with accuracy and self-reported confidence. Results: Of 2515 contacted pediatric residents, 106 and 102 completed the KS and NS surveys, respectively. For KS, the sensitivity of text description was 48.5% (128 of 264), which was not significantly different from random guessing (odds ratio [OR], 0.94; 95% CI, 0.69-1.29; P = .71). Sensitivity was thus compared for real images vs random guessing (60.3% [188 of 312]; OR, 1.52; 95% CI, 1.15-2.00; P = .003) and 2 types of generative AI images vs random guessing (57.0% [212 of 372]; OR, 1.32; 95% CI, 1.04-1.69; P = .02 and 59.6% [193 of 324]; OR, 1.47; 95% CI, 1.12-1.94; P = .006) (denominators differ according to survey responses). The sensitivity of the NS text-only description was 65.3% (196 of 300).
Compared with text-only, the sensitivity of the real images was 74.3% (205 of 276; OR, 1.53; 95% CI, 1.08-2.18; P = .02), and the sensitivity of the 2 types of images created by generative AI was 68.0% (204 of 300; OR, 1.13; 95% CI, 0.77-1.66; P = .54) and 71.0% (247 of 328; OR, 1.30; 95% CI, 0.92-1.83; P = .14). For specificity, no intervention was statistically different from text only. After the interventions, the number of participants who reported being unsure about important diagnostic facial features decreased from 56 (52.8%) to 5 (7.6%) for KS (P < .001) and 25 (24.5%) to 4 (4.7%) for NS (P < .001). There was a significant association between confidence level and sensitivity for real and generated images. Conclusions and Relevance: In this study, real and generated images helped participants recognize KS and NS; real images appeared most helpful. Generated images were noninferior to real images and could serve an adjunctive role, particularly for rare conditions.
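A minimal sketch recomputing two figures from the abstract above: sensitivity as a simple proportion, and an odds ratio versus random guessing (where the odds of a correct call are 1). The counts are taken from the abstract; the helper functions are illustrative, not the authors' analysis code (which reported regression-based CIs and P values).

```python
def sensitivity(correct: int, total: int) -> float:
    """Fraction of affected-image presentations correctly recognized."""
    return correct / total

def odds_vs_chance(correct: int, total: int) -> float:
    """Odds ratio of correct recognition versus 50/50 random guessing."""
    wrong = total - correct
    return (correct / wrong) / 1.0  # odds under chance = 1

# KS, text-only description: 128 of 264 correct
print(round(sensitivity(128, 264), 3))     # 0.485
print(round(odds_vs_chance(128, 264), 2))  # 0.94, matching the reported OR
# KS, real images: 188 of 312 correct
print(round(odds_vs_chance(188, 312), 2))  # 1.52, matching the reported OR
```

This reproduces why the text-only condition was indistinguishable from guessing (OR near 1) while real images were not.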


Subject(s)
Abnormalities, Multiple; Artificial Intelligence; Face/abnormalities; Hematologic Diseases; Learning; Vestibular Diseases; Humans; Child; Recognition, Psychology; Educational Status
2.
PLoS Genet; 20(2): e1011168, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38412177

ABSTRACT

Artificial intelligence (AI) for facial diagnostics is increasingly used in the genetics clinic to evaluate patients with potential genetic conditions. Current approaches focus on one type of AI called deep learning (DL). While DL-based facial diagnostic platforms have a high accuracy rate for many conditions, less is understood about how this technology assesses and classifies (categorizes) images, and how this compares to humans. To compare human and computer attention, we performed eye-tracking analyses of geneticist clinicians (n = 22) and non-clinicians (n = 22) who viewed images of people with 10 different genetic conditions, as well as images of unaffected individuals. We calculated the Intersection-over-Union (IoU) and Kullback-Leibler divergence (KL) to compare the visual attention of the two participant groups, and then that of the clinician group against the saliency maps of our deep learning classifier. We found that human visual attention differs greatly from the DL model's saliency results. Averaged over all test images, the IoU and KL metrics for successful (accurate) clinician visual attention versus the saliency maps were 0.15 and 11.15, respectively. Individuals also tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians (IoU and KL of clinicians versus non-clinicians were 0.47 and 2.73, respectively). This study shows that humans (at different levels of expertise) and a computer vision model examine images differently. Understanding these differences can improve the design and use of AI tools and lead to more meaningful interactions between clinicians and AI technologies.
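A minimal sketch of the two comparison metrics named in the abstract, computed on toy attention data. The masks and distributions below are invented for illustration; the study computed these metrics over eye-tracking heatmaps and DL saliency maps.

```python
import math

def iou(mask_a, mask_b):
    """Intersection-over-Union of two equal-length binary attention masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 0.0

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two normalized attention distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

clinician_mask = [1, 1, 0, 0]  # toy: clinician fixates the first two regions
model_mask     = [1, 0, 1, 0]  # toy: model weights regions 1 and 3
print(iou(clinician_mask, model_mask))  # 1 shared region over 3 in the union
```

Low IoU and high KL, as reported for clinicians versus the model (0.15 and 11.15), indicate that the two attend to largely different image regions.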


Subject(s)
Artificial Intelligence; Computers; Humans; Computer Simulation
3.
medRxiv; 2023 Aug 02.
Article in English | MEDLINE | ID: mdl-37790417

ABSTRACT

Artificial intelligence (AI) is used in an increasing number of areas, with recent interest in generative AI, such as using ChatGPT to generate programming code or DALL-E to make illustrations. We describe the use of generative AI in medical education. Specifically, we sought to determine whether generative AI could help train pediatric residents to better recognize genetic conditions. From publicly available images of individuals with genetic conditions, we used generative AI methods to create new images, which were checked for accuracy with an external classifier. We selected two conditions for study, Kabuki (KS) and Noonan (NS) syndromes, which are clinically important conditions that pediatricians may encounter. In this study, pediatric residents completed 208 surveys, where they each classified 20 images following exposure to one of 4 possible educational interventions, including with and without generative AI methods. Overall, we found that generative images performed similarly to, but appeared slightly less helpful than, real images. Most participants reported that the images were useful, although real images were felt to be more helpful. We conclude that generative AI images may serve as an adjunctive educational tool, particularly for less familiar conditions, such as KS.

4.
medRxiv; 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37577564

ABSTRACT

Deep learning (DL) and other types of artificial intelligence (AI) are increasingly used in many biomedical areas, including genetics. One frequent use in medical genetics involves evaluating images of people with potential genetic conditions to help with diagnosis. A central question involves better understanding how AI classifiers assess images compared to humans. To explore this, we performed eye-tracking analyses of geneticist clinicians and non-clinicians. We compared results to DL-based saliency maps. We found that human visual attention when assessing images differs greatly from the parts of images weighted by the DL model. Further, individuals tend to have a specific pattern of image inspection, and clinicians demonstrate different visual attention patterns than non-clinicians.

5.
Genet Med; 24(8): 1593-1603, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35612590

ABSTRACT

Deep learning (DL) is applied in many biomedical areas. We performed a scoping review of DL in medical genetics. We first assessed 14,002 articles, of which 133 involved DL in medical genetics; such applications increased rapidly during the studied period. In medical genetics, DL has largely been applied to small data sets of affected individuals (mean = 95, median = 29) with genetic conditions (71 different genetic conditions were studied; 24 articles studied multiple conditions). A variety of data types have been used, including radiologic (20%), ophthalmologic (14%), microscopy (8%), and text-based data (4%); the most common data type was patient facial photographs (46%). DL authors and research subjects overrepresent certain geographic areas (United States, Asia, and Europe). Convolutional neural networks (89%) were the most common method. Results were compared with human performance in 31% of studies. In total, 51% of articles provided data access; 16% released source code. To further explore DL in genomics, we conducted an additional analysis, the results of which highlight future opportunities for DL in medical genetics. To aid data curation, we also evaluated DL, random-forest, and rule-based classifiers at categorizing article abstracts. Finally, we expect DL applications to increase in the future.
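A minimal sketch of a rule-based abstract classifier of the kind the review evaluated (alongside DL and random-forest models) for curating articles. The keyword rules below are illustrative assumptions, not the authors' actual criteria.

```python
DL_TERMS = ("deep learning", "neural network", "convolutional")
GENETICS_TERMS = ("genetic", "genomic", "syndrome", "variant")

def is_dl_in_medical_genetics(abstract: str) -> bool:
    """Flag an abstract when it mentions at least one deep-learning term
    and at least one medical-genetics term (toy rule set)."""
    text = abstract.lower()
    return (any(t in text for t in DL_TERMS)
            and any(t in text for t in GENETICS_TERMS))

print(is_dl_in_medical_genetics(
    "A convolutional neural network classified facial photographs of a "
    "genetic syndrome."))  # True
print(is_dl_in_medical_genetics(
    "A cohort study of dietary patterns in adults."))  # False
```

Rule-based screening like this is transparent and cheap, which is why reviews often compare it against learned classifiers before committing to a curation pipeline.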


Subject(s)
Deep Learning; Genetics, Medical; Asia; Genomics; Humans; Neural Networks, Computer
6.
Front Genet; 13: 864092, 2022.
Article in English | MEDLINE | ID: mdl-35480315

ABSTRACT

Background: In medical genetics, one application of neural networks is the diagnosis of genetic diseases based on images of patient faces. While these applications have been validated in the literature with primarily pediatric subjects, it is not known whether they can accurately diagnose patients across a lifespan. We aimed to extend previous work to determine whether age plays a factor in facial diagnosis and to explore other factors that may contribute to overall diagnostic accuracy. Methods: To investigate this, we chose two relatively common conditions, Williams syndrome and 22q11.2 deletion syndrome. We built a neural network classifier trained on images of affected and unaffected individuals of different ages and compared its accuracy to that of clinical geneticists. We analyzed the results of saliency maps and the use of generative adversarial networks to boost accuracy. Results: Our classifier outperformed clinical geneticists at recognizing face images of these two conditions within each of the age groups (performance varied between the age groups): 1) under 2 years old, 2) 2-9 years old, 3) 10-19 years old, 4) 20-34 years old, and 5) ≥35 years old. The overall accuracy improvement of our classifier over the clinical geneticists was 15.5% and 22.7% for Williams syndrome and 22q11.2 deletion syndrome, respectively. Additionally, comparison of saliency maps revealed that the key facial features learned by the neural network differed with respect to age. Finally, jointly training on real images and multiple types of fake images created by a generative adversarial network yielded up to a 3.25% gain in classification accuracy. Conclusion: The ability of clinical geneticists to diagnose these conditions is influenced by the age of the patient. Deep learning technologies such as our classifier can more accurately identify patients across the lifespan based on facial features. Saliency maps reveal that the syndromic facial features attended to by the computer vision model change with the age of the patient. Modest improvements in classifier accuracy were observed when joint training was carried out with both real and fake images. Our findings highlight the need for a greater focus on age as a confounder in facial diagnosis.

7.
HGG Adv; 3(1): 100053, 2022 Jan 13.
Article in English | MEDLINE | ID: mdl-35047844

ABSTRACT

Neural networks have shown strong potential in research and in healthcare. Mainly due to the need for large datasets, these applications have focused on common medical conditions, where more data are typically available. Leveraging publicly available data, we trained a neural network classifier on images of rare genetic conditions with skin findings, using approximately 100 images per condition to classify 6 different genetic conditions. We analyzed both preprocessed images that were cropped to show only the skin lesions and more complex images showing features such as the entire body segment, the person, and/or the background. The classifier construction process included attribution methods to visualize which pixels were most important for computer-based classification. Our classifier was significantly more accurate than pediatricians or medical geneticists for both types of images, and our results suggest steps for further research involving clinical scenarios and other applications.

8.
Mol Carcinog; 49(4): 315-9, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20146250

ABSTRACT

Nonmelanoma skin cancers (NMSCs) consist of a variety of tumor types including basal cell carcinoma, squamous cell carcinoma, a variety of hair follicle tumors, and sebaceous gland tumors. Genetic alterations that alter the fate of multipotent stem cells are believed to influence NMSC phenotype. We previously generated a transgenic mouse line which constitutively expressed c-myc under the control of the K14 promoter (K14.MYC2). These mice exhibited an increase in size and number of sebaceous glands, suggesting that c-myc diverted multipotential stem cells to a sebaceous lineage. Our goal in the current study was to determine if alterations in the commitment of multipotent stem cells to different cell fates would influence tumor phenotype. To this end, we exposed K14.MYC2 mice to a chemical carcinogenesis protocol and discovered that these mice were predisposed to develop sebaceous adenomas. Our data demonstrate that genetic alterations that alter the fate of multipotent stem cells during embryonic development can markedly influence the phenotype of NMSC that develop following exposure to carcinogens.


Subject(s)
Proto-Oncogene Proteins c-myc/genetics; Skin Neoplasms/genetics; Skin Neoplasms/pathology; Stem Cells/pathology; 9,10-Dimethyl-1,2-benzanthracene/toxicity; Adenocarcinoma, Sebaceous/pathology; Animals; Carcinogens/toxicity; Cell Differentiation/genetics; Cell Lineage/genetics; Crosses, Genetic; Female; Heterozygote; Male; Mice; Mice, Inbred ICR; Mice, Inbred Strains; Mice, Transgenic; Multipotent Stem Cells/pathology; Papilloma/pathology; Phenotype; Sebaceous Gland Neoplasms/pathology; Tetradecanoylphorbol Acetate/pharmacology; Transgenes
9.
Eur J Immunol; 32(1): 104-12, 2002 Jan.
Article in English | MEDLINE | ID: mdl-11754009

ABSTRACT

The homing properties of subsets of lymphocytes and dendritic cells (DC) are regulated in part by the profile of chemokine receptors expressed. To determine how CCR6 influences cell trafficking, a mutant allele of the mouse CCR6 gene was produced that includes an enhanced green fluorescent protein (EGFP) reporter under the control of the CCR6 promoter. In mice heterozygous for the EGFP/CCR6 knock-in, CCR6 expression was detected on all mature B cells, subpopulations of splenic CD4(+) and CD8(+) T cells, and on some CD11c(+) DC. Most CD11b(+) myeloid DC expressed CCR6, but CD8alpha(+) lymphoid DC were negative for CCR6. Among myeloid DC, the CD4(+) subset was uniformly positive for CCR6 expression and the CD4(-) subset was mostly CCR6 positive. Epidermal Langerhans cells (LC) also expressed CCR6, but at lower levels than splenic myeloid DC. Culture of bone marrow precursors from the knock-in mice with GM-CSF for 4 to 6 days led to the appearance of a subset of CD11c(+) DC expressing CCR6. The differences in CCR6 expression among the major DC subsets indicate that CCR6 and its chemokine ligand MIP-3alpha participate in determining the positioning of DC subsets in epithelial and lymphoid tissues.


Subject(s)
Dendritic Cells/metabolism; Gene Expression; Lymphocytes/metabolism; Myeloid Cells/metabolism; Receptors, Chemokine/genetics; Animals; Cell Lineage; Dendritic Cells/cytology; Dendritic Cells/drug effects; Epidermal Cells; Female; Gene Targeting; Genes, Reporter; Granulocyte-Macrophage Colony-Stimulating Factor/pharmacology; Green Fluorescent Proteins; Luminescent Proteins/genetics; Lymphocytes/cytology; Lymphocytes/drug effects; Male; Mice; Mice, Inbred C57BL; Myeloid Cells/cytology; Myeloid Cells/drug effects; Peyer's Patches/cytology; Receptors, CCR6; Spleen/cytology