Results 1-3 of 3
1.
JMIR Form Res; 8: e59914, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39293049

ABSTRACT

BACKGROUND: Labeling color fundus photographs (CFP) is an important step in developing artificial intelligence screening algorithms for the detection of diabetic retinopathy (DR). Most studies use the International Classification of Diabetic Retinopathy (ICDR) to assign labels to CFP, plus the presence or absence of macular edema (ME). Images can then be grouped as referable or nonreferable according to these classifications. There is little guidance in the literature about how to collect and use metadata as part of the CFP labeling process.

OBJECTIVE: This study aimed to improve the quality of the Multimodal Database of Retinal Images in Africa (MoDRIA) by determining whether the availability of metadata during the image labeling process influences the accuracy, sensitivity, and specificity of image labels. MoDRIA was developed as one of the inaugural research projects of the Mbarara University Data Science Research Hub, part of the Data Science for Health Discovery and Innovation in Africa (DS-I Africa) initiative.

METHODS: This was a crossover assessment with 2 groups and 2 phases. Each group had 10 randomly assigned labelers who provided an ICDR score and the presence or absence of ME for each of the 50 CFP in a test set, both with and without metadata (blood pressure, visual acuity, glucose, and medical history). Sensitivity and specificity for referable retinopathy (based on ICDR scores) and for ME were calculated using a 2-sided t test. Sensitivity and specificity for ICDR scores and ME with and without metadata were compared for each participant using the Wilcoxon signed rank test. Statistical significance was set at P<.05.

RESULTS: The sensitivity for identifying referable DR with metadata was 92.8% (95% CI 87.6-98.0) compared with 93.3% (95% CI 87.6-98.9) without metadata, and the specificity was 84.9% (95% CI 75.1-94.6) with metadata compared with 88.2% (95% CI 79.5-96.8) without metadata. The sensitivity for identifying the presence of ME was 64.3% (95% CI 57.6-71.0) with metadata compared with 63.1% (95% CI 53.4-73.0) without metadata, and the specificity was 86.5% (95% CI 81.4-91.5) with metadata compared with 87.7% (95% CI 83.9-91.5) without metadata. The sensitivity and specificity of the ICDR score and the presence or absence of ME were calculated for each labeler with and without metadata. No findings were statistically significant.

CONCLUSIONS: The sensitivity and specificity scores for the detection of referable DR were slightly better without metadata, but the difference was not statistically significant. We cannot draw definitive conclusions about the impact of metadata on the sensitivity and specificity of image labels in our study. Given the importance of metadata in clinical situations, we believe that metadata may benefit labeling quality. A more rigorous study to determine the sensitivity and specificity of CFP labels with and without metadata is recommended.
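The statistical comparison described in METHODS can be illustrated with a short sketch. The code below is not the study's analysis code; the per-labeler values, data layout, and function names are hypothetical. It shows how per-labeler sensitivity and specificity for referable DR might be computed against a reference standard, and how the paired with-metadata versus without-metadata values could be compared using the Wilcoxon signed rank test at P<.05.

```python
# Illustrative sketch only: per-labeler sensitivity/specificity for referable DR
# and a paired Wilcoxon signed rank comparison between labeling conditions.
# All data values here are hypothetical.
import numpy as np
from scipy.stats import wilcoxon

def sens_spec(reference, labels):
    """Sensitivity and specificity of binary referable-DR labels vs. a reference standard."""
    reference = np.asarray(reference, dtype=bool)
    labels = np.asarray(labels, dtype=bool)
    tp = np.sum(labels & reference)
    tn = np.sum(~labels & ~reference)
    fp = np.sum(labels & ~reference)
    fn = np.sum(~labels & reference)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-labeler sensitivities under each condition (one value per labeler).
sens_with_meta = [0.94, 0.90, 0.96, 0.92, 0.88, 0.95, 0.93, 0.91, 0.97, 0.92]
sens_without_meta = [0.95, 0.91, 0.94, 0.93, 0.90, 0.96, 0.92, 0.93, 0.96, 0.93]

# Paired, nonparametric comparison across labelers; P < .05 treated as significant.
stat, p_value = wilcoxon(sens_with_meta, sens_without_meta)
print(f"Wilcoxon signed-rank: statistic={stat:.2f}, P={p_value:.3f}")
```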


Subject(s)
Diabetic Retinopathy , Metadata , Humans , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/diagnosis , Uganda , Female , Male , Cross-Over Studies , Databases, Factual , Middle Aged , Fundus Oculi , Adult , Sensitivity and Specificity , Retina/diagnostic imaging , Retina/pathology
2.
PLOS Digit Health; 3(1): e0000417, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38236824

ABSTRACT

The study provides a comprehensive review of OpenAI's Generative Pre-trained Transformer 4 (GPT-4) technical report, with an emphasis on applications in high-risk settings like healthcare. A diverse team, including experts in artificial intelligence (AI), natural language processing, public health, law, policy, social science, healthcare research, and bioethics, analyzed the report against established peer review guidelines. The GPT-4 report shows a significant commitment to transparent AI research, particularly in creating a systems card for risk assessment and mitigation. However, it reveals limitations such as restricted access to training data, inadequate confidence and uncertainty estimations, and concerns over privacy and intellectual property rights. Key strengths identified include the considerable time and economic investment in transparent AI research and the creation of a comprehensive systems card. On the other hand, the lack of clarity about training processes and data raises concerns about encoded biases and interests in GPT-4. The report also lacks confidence and uncertainty estimations, which are crucial in high-risk areas like healthcare, and fails to address potential privacy and intellectual property issues. Furthermore, this study emphasizes the need for diverse, global involvement in developing and evaluating large language models (LLMs) to ensure broad societal benefits and mitigate risks. The paper presents recommendations such as improving data transparency, developing accountability frameworks, establishing confidence standards for LLM outputs in high-risk settings, and enhancing industry research review processes. It concludes that while GPT-4's report is a step towards open discussions on LLMs, more extensive interdisciplinary reviews are essential for addressing bias, harm, and risk concerns, especially in high-risk domains. The review aims to expand the understanding of LLMs in general and highlights the need for new forms of reflection on how LLMs are reviewed, the data required for effective evaluation, and ways to address critical issues like bias and risk.
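As a hedged illustration of the recommendation to establish confidence standards for LLM outputs in high-risk settings, the sketch below shows one way a downstream application could aggregate per-token log probabilities into a crude confidence score and defer low-confidence answers to human review. The aggregation rule, threshold, and example values are assumptions for illustration, not part of GPT-4's reported behavior or the paper's proposals.

```python
# Hypothetical confidence gate for an LLM answer in a high-risk setting.
# The scoring rule, threshold, and log probabilities below are illustrative assumptions.
import math

def aggregate_confidence(token_logprobs: list[float]) -> float:
    """Geometric-mean token probability as a crude whole-answer confidence score."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def triage(answer: str, token_logprobs: list[float], threshold: float = 0.85) -> str:
    """Route low-confidence answers to human review instead of returning them directly."""
    if aggregate_confidence(token_logprobs) < threshold:
        return "DEFER: confidence below threshold; refer to a human reviewer."
    return answer

# Example usage with made-up log probabilities.
print(triage("Dose: 500 mg twice daily", [-0.05, -0.10, -0.30, -0.02]))  # high confidence
print(triage("Dose: 500 mg twice daily", [-1.2, -0.9, -2.1, -0.7]))      # deferred
```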

3.
PLOS Digit Health; 3(1): e0000346, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38175828

ABSTRACT

In recent years, technology has been increasingly incorporated within healthcare to provide safe and efficient delivery of services. Although this adoption can be attributed to the benefits such technology offers, digital technology also has the potential to exacerbate and reinforce preexisting health disparities. Previous work has highlighted how sociodemographic, economic, and political factors affect individuals' interactions with digital health systems; these are termed social determinants of health (SDOH). However, there is a paucity of literature addressing how the intrinsic design, implementation, and use of technology interact with SDOH to influence health outcomes. Such interactions are termed digital determinants of health (DDOH). This paper will, for the first time, propose a definition of DDOH and provide a conceptual model characterizing its influence on healthcare outcomes. Specifically, DDOH is implicit in the design of artificial intelligence systems, mobile phone applications, telemedicine, digital health literacy (DHL), and other forms of digital technology. A better appreciation of DDOH by the various stakeholders at the individual and societal levels can be channeled towards policies that are more digitally inclusive. In tandem with ongoing work to minimize the digital divide caused by existing SDOH, further work is necessary to recognize digital determinants as an important and distinct entity.
