ABSTRACT
Background: Laryngeal cancer accounts for a third of all head and neck malignancies, necessitating timely detection for effective treatment and enhanced patient outcomes. Machine learning shows promise in medical diagnostics, but the impact of model complexity on diagnostic efficacy in laryngeal cancer detection remains ambiguous. Methods: In this study, we examine the relationship between model sophistication and diagnostic efficacy by evaluating three approaches on laryngeal cancer detection in computed tomography (CT) images: logistic regression, a small 4-layer neural network, and a more complex 50-layer convolutional neural network. Results: Logistic regression achieved 82.5% accuracy. The 4-layer NN reached 87.2% accuracy, while ResNet-50, a deep learning architecture, achieved the highest accuracy at 92.6%; its deep architecture excelled at discerning fine-grained CT image features. Conclusion: Our study highlights the trade-offs involved in selecting a laryngeal cancer detection model. Logistic regression is interpretable but may struggle with complex patterns; the 4-layer NN balances complexity and accuracy; ResNet-50 excels in image classification but demands more computational resources. This research advances understanding of the effect machine learning model complexity can have on learning laryngeal tumor features in contrast CT images for disease prediction.
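To illustrate how such a complexity comparison is typically set up, the sketch below contrasts a logistic regression with a small multilayer network on synthetic tabular data using scikit-learn. It is purely illustrative: the synthetic features, layer sizes, and hyperparameters are stand-ins, not the study's CT-image pipeline (training a ResNet-50 would require actual image data).

```python
# Illustrative model-complexity comparison on a synthetic binary task.
# All data and hyperparameters here are placeholders, not the study's.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for extracted image features.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Simple, interpretable baseline.
logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Small multilayer network (hidden sizes are an illustrative choice).
mlp = MLPClassifier(hidden_layer_sizes=(64, 32, 16), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

acc_lr = logreg.score(X_te, y_te)
acc_mlp = mlp.score(X_te, y_te)
print(f"LogReg accuracy: {acc_lr:.3f}, MLP accuracy: {acc_mlp:.3f}")
```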
ABSTRACT
PURPOSE: This study assesses the diagnostic efficacy of offline Medios Artificial Intelligence (AI) glaucoma software in a primary eye care setting, using nonmydriatic fundus images from Remidio's Fundus-on-Phone (FOP NM-10). AI results were compared with tele-ophthalmologists' diagnoses and with a glaucoma specialist's assessment for those participants referred to a tertiary eye care hospital. DESIGN: Prospective cross-sectional study. PARTICIPANTS: Three hundred three participants from 6 satellite vision centers of a tertiary eye hospital. METHODS: At the vision center, participants underwent comprehensive eye evaluations, including clinical history, visual acuity measurement, slit lamp examination, intraocular pressure measurement, and fundus photography using the FOP NM-10 camera. Medios AI-Glaucoma software analyzed 42-degree disc-centric fundus images, categorizing them as normal, glaucoma, or suspect. Tele-ophthalmologists, who were glaucoma fellows with a minimum of 3 years of ophthalmology training and 1 year of glaucoma fellowship training, masked to AI results, remotely diagnosed subjects based on the history and disc appearance. All participants labeled as disc suspects or glaucoma by AI or tele-ophthalmologists underwent further comprehensive glaucoma evaluation at the base hospital, including clinical examination, Humphrey visual field analysis, and OCT. AI and tele-ophthalmologist diagnoses were then compared with a glaucoma specialist's diagnosis. MAIN OUTCOME MEASURES: Sensitivity and specificity of Medios AI. RESULTS: Of 303 participants, 299 with at least one eye of sufficient image quality were included in the study; the remaining 4 lacked sufficient image quality in both eyes. Medios AI identified 39 participants (13%) with referable glaucoma.
The AI exhibited a sensitivity of 0.91 (95% confidence interval [CI]: 0.71-0.99) and a specificity of 0.93 (95% CI: 0.89-0.96) in detecting referable glaucoma (definite perimetric glaucoma) when compared with the tele-ophthalmologists. The agreement between AI and the glaucoma specialist was 80.3%, surpassing the 55.3% agreement between the tele-ophthalmologists and the glaucoma specialist among those participants who were referred to the base hospital. Both AI and the tele-ophthalmologists relied on fundus photos for diagnoses, whereas the glaucoma specialist's assessments at the base hospital were aided by additional tools such as Humphrey visual field analysis and OCT. Furthermore, AI had fewer false positive referrals (2 out of 10) than the tele-ophthalmologists (9 out of 10). CONCLUSIONS: Medios offline AI exhibited promising sensitivity and specificity in detecting referable glaucoma from remote vision centers in southern India when compared with tele-ophthalmologists. It also demonstrated better agreement with the glaucoma specialist's diagnosis for referable glaucoma participants. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
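As background on how sensitivity/specificity estimates of this kind are derived, the sketch below computes both statistics with Wilson score 95% confidence intervals from a 2x2 confusion table. The counts are invented for illustration and are not the study's data.

```python
# Sensitivity/specificity with Wilson 95% CIs from confusion counts.
# The counts below are hypothetical, chosen only to illustrate the math.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

tp, fn, tn, fp = 21, 2, 257, 19  # hypothetical confusion counts
sens = tp / (tp + fn)            # true positive rate
spec = tn / (tn + fp)            # true negative rate
print(f"Sensitivity {sens:.2f}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"Specificity {spec:.2f}, 95% CI {wilson_ci(tn, tn + fp)}")
```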
ABSTRACT
BACKGROUND: Many morphological and histological changes take place in aging skin. Topical tretinoin is the gold standard anti-aging agent used to reduce signs of aging through stimulation of epidermal growth and differentiation and inhibition of collagenase. OBJECTIVE: The aim of this systematic review is to summarize studies evaluating the efficacy of tretinoin compared with other topical medications and cosmeceuticals in reducing the appearance of skin aging. METHODS: A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The literature search was conducted using the PubMed and Embase databases from inception to December 2023. Studies were included if they compared anti-aging outcomes of topical medications with those of topical tretinoin (also called all-trans retinoic acid and retinoic acid). Studies were excluded if they compared non-topical anti-aging treatments with tretinoin or were conducted on animal models. RESULTS: The literature search resulted in 25 studies that met all inclusion and exclusion criteria. The most common study comparators to tretinoin included other forms of vitamin A. Outcomes were reported on the basis of visual reduction of aging signs, histological assessment of the epidermis and dermis, and protein expression. Although comparators to tretinoin had variable efficacy (greater in 7 studies, equivalent in 13 studies, and less in 3 studies), most studies found the comparator to be less irritating and better tolerated by patients than tretinoin. DISCUSSION: Tretinoin is currently the gold standard therapy for the treatment of photoaging, but its poor tolerability often limits its use. Unfortunately, given that most studies comparing topical therapies with tretinoin are of poor quality and/or demonstrate bias, there is a lack of substantial evidence to support an alternative first-line therapy.
However, given there are some data to support the efficacy of retinoid precursors, namely retinaldehyde, pro-retinal nanoparticles, and conjugated alpha-hydroxy acid and retinoid (AHA-ret), these agents can be considered a second-line option for anti-aging treatment in patients who cannot tolerate tretinoin.
Subject(s)
Cutaneous Administration, Skin Aging, Tretinoin, Humans, Tretinoin/administration & dosage, Tretinoin/pharmacology, Skin Aging/drug effects, Treatment Outcome, Keratolytic Agents/administration & dosage, Dermatologic Agents/administration & dosage, Dermatologic Agents/pharmacology, Skin/drug effects, Skin/pathology, Skin/radiation effects, Cosmeceuticals/administration & dosage, Cosmeceuticals/pharmacology
ABSTRACT
OBJECTIVES: Despite global research on early detection of age-related macular degeneration (AMD), not enough is being done for large-scale screening. Automated analysis of retinal images captured via smartphone presents a potential solution; however, to our knowledge, such an artificial intelligence (AI) system has not been evaluated. The study aimed to assess the performance of an AI algorithm in detecting referable AMD on images captured on a portable fundus camera. DESIGN, SETTING: A retrospective image database from the Age-Related Eye Disease Study (AREDS) and the target device was used. PARTICIPANTS: The algorithm was trained on two distinct data sets with macula-centric images: initially on 108,251 images (55% referable AMD) from AREDS and then fine-tuned on 1108 images (33% referable AMD) captured on Asian eyes using the target device. The model was designed to indicate the presence of referable AMD (intermediate and advanced AMD). Following the first training step, the test set consisted of 909 images (49% referable AMD); for the fine-tuning step, it consisted of 238 images (34% referable AMD). The reference standard for the AREDS data set was fundus image grading by the central reading centre, and for the target device, it was consensus image grading by specialists. OUTCOME MEASURES: Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of the algorithm. RESULTS: Before fine-tuning, the deep learning (DL) algorithm exhibited a test set (from AREDS) sensitivity of 93.48% (95% CI: 90.8% to 95.6%), specificity of 82.33% (95% CI: 78.6% to 85.7%) and AUC of 0.965 (95% CI: 0.95 to 0.98). After fine-tuning, the DL algorithm displayed a test set (from the target device) sensitivity of 91.25% (95% CI: 82.8% to 96.4%), specificity of 84.18% (95% CI: 77.5% to 89.5%) and AUC of 0.947 (95% CI: 0.911 to 0.982). CONCLUSION: The DL algorithm shows promising results in detecting referable AMD from a portable smartphone-based imaging system.
This approach can potentially bring effective and affordable AMD screening to underserved areas.
Subject(s)
Algorithms, Deep Learning, Macular Degeneration, Smartphone, Humans, Macular Degeneration/diagnosis, Macular Degeneration/diagnostic imaging, Retrospective Studies, Aged, Fundus Oculi, Female, Sensitivity and Specificity, Photography/instrumentation, Male, ROC Curve, Middle Aged, Mass Screening/methods, Mass Screening/instrumentation
ABSTRACT
PURPOSE: This study aimed to determine the generalizability of an artificial intelligence (AI) algorithm trained on an ethnically diverse dataset to screen for referable diabetic retinopathy (RDR) in the Armenian population, which was unseen during AI development. METHODS: This study comprised 550 patients with diabetes mellitus visiting the polyclinics of Armenia over 10 months who required diabetic retinopathy (DR) screening. The Medios AI-DR algorithm was developed using a robust, diverse, ethnically balanced dataset with no inherent bias and deployed offline on a smartphone-based fundus camera. The algorithm analyzed the retinal images captured using the target device for the presence of RDR (i.e., moderate non-proliferative diabetic retinopathy (NPDR) and/or clinically significant diabetic macular edema (CSDME), or more severe disease) and sight-threatening DR (STDR; i.e., severe NPDR and/or CSDME, or more severe disease). The AI output was compared with a consensus or majority image grading by three expert graders according to the International Clinical Diabetic Retinopathy severity scale. RESULTS: Among the 478 subjects included in the analysis, the algorithm achieved a high classification sensitivity of 95.30% (95% CI: 91.9%-98.7%) and a specificity of 83.89% (95% CI: 79.9%-87.9%) for the detection of RDR. The sensitivity for STDR detection was 100%. CONCLUSION: The study showed that the Medios AI-DR algorithm yields good accuracy in screening for RDR in the Armenian population. In our literature search, this is the only smartphone-based, offline AI model validated in different populations.
Subject(s)
Algorithms, Artificial Intelligence, Diabetic Retinopathy, Humans, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/ethnology, Male, Female, Middle Aged, Mass Screening/methods, Ethnicity, Aged, Adult
ABSTRACT
Diagnostic accuracy is vital in otorhinolaryngology for effective patient care, yet diagnostic mismatches between non-otorhinolaryngology clinicians and ENT specialists can occur, and studies investigating such mismatches in low-resource healthcare environments are limited. This study aims to analyze diagnostic mismatches in otorhinolaryngology within a low-resource healthcare environment. A publicly available dataset assessing diagnostic outcomes from non-otorhinolaryngology clinicians and ENT specialists was analyzed. The dataset included demographic characteristics, referral diagnoses, and final ENT specialist diagnoses. Descriptive statistics and appropriate statistical tests were employed to assess the prevalence of diagnostic mismatches and associated factors. The analysis comprised 1544 cases. The prevalence of diagnostic mismatches between non-otorhinolaryngology clinicians and ENT specialists was 67.4%. Certain ENT diseases demonstrated higher frequencies of diagnostic mismatches, and factors such as the referral diagnosis and patient compliance were found to influence their occurrence. This study highlights the presence of diagnostic mismatches in otorhinolaryngology within a low-resource healthcare environment. The prevalence of these mismatches underscores the need for improved diagnostic practices in such settings. Factors contributing to diagnostic mismatches should be further explored to develop strategies for enhancing diagnostic accuracy and reducing diagnostic errors in otorhinolaryngology.
ABSTRACT
Lung cancer, a treacherous malignancy of the respiratory system, has a devastating impact on an individual's health and well-being. Owing to the lack of automated and noninvasive diagnostic tools, healthcare professionals rely on biopsy as the gold standard for diagnosis. However, biopsy can be a traumatic and expensive process. Additionally, limited dataset availability and diagnostic inaccuracy are major drawbacks experienced by researchers. The objective of the proposed research is to develop an automated diagnostic tool for lung cancer screening with optimized hyperparameters, such that the convolutional neural network (CNN) model generalizes well to universally obtained computed tomography (CT) slices of lung pathologies. This objective is achieved in the following ways: (i) a preprocessing methodology specific to lung CT scans is formulated to avoid the loss of information due to random image smoothing, and (ii) the sine cosine algorithm (SCA) is integrated into the CNN model to optimally select its tuning parameters, with the error rate as the objective function the SCA seeks to minimize. The proposed method achieved an average classification accuracy of 99% in classifying lung scans into normal, benign, and malignant classes. Further, the generalization ability of the proposed model was tested on an unseen dataset, achieving promising results. The quantitative results demonstrate the system's suitability for use by radiologists in a clinical scenario.
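The SCA update rule referenced above can be sketched in a few lines. In this hedged sketch it minimizes a toy sphere function in place of the CNN error rate, and the population size, bounds, and iteration count are illustrative choices, not the paper's settings.

```python
# Minimal sine cosine algorithm (SCA) sketch: each candidate oscillates
# toward the best-known solution via sine/cosine steps whose amplitude
# (r1) decays over iterations, shifting from exploration to exploitation.
import math
import random

def sca_minimize(obj, dim=4, pop=20, iters=200, lb=-5.0, ub=5.0, a=2.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=obj)[:]
    for t in range(iters):
        r1 = a - t * a / iters  # step amplitude decays linearly
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0.0, 2.0 * math.pi)
                r3 = rng.uniform(0.0, 2.0)
                step = r1 * (math.sin(r2) if rng.random() < 0.5 else math.cos(r2))
                x[j] += step * abs(r3 * best[j] - x[j])
                x[j] = min(max(x[j], lb), ub)  # keep within bounds
            if obj(x) < obj(best):
                best = x[:]
    return best, obj(best)

sphere = lambda v: sum(c * c for c in v)  # toy stand-in for CNN error rate
best, val = sca_minimize(sphere)
print(best, val)
```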
ABSTRACT
Rare muscular disorders (RMDs) are disorders that affect a small percentage of the population. These disorders, often attributed to genetic mutations, manifest as progressive weakness and atrophy of skeletal and heart muscles. RMDs include disorders such as Duchenne muscular dystrophy (DMD), GNE myopathy, spinal muscular atrophy (SMA), and limb girdle muscular dystrophy. Due to the infrequent occurrence of these disorders, the development of therapeutic approaches attracts less attention than for more prevalent diseases. In recent times, however, improved understanding of pathogenesis has led to greater advances in developing therapeutic options to treat such diseases. Exon skipping, gene augmentation, and gene editing have taken the spotlight in drug development for rare neuromuscular disorders. The recent innovation in targeting and repairing mutations with the advent of CRISPR technology has opened new possibilities in the development of gene therapy approaches for these disorders. Although these treatments show satisfactory therapeutic effects, susceptibility to degradation, instability, and toxicity limits their application, so an appropriate delivery vector is required for these cargoes. Viral vectors are considered potential delivery systems for gene therapy; however, the associated immunogenic response and other limitations have paved the way for the application of non-viral systems such as lipids, polymers, cell-penetrating peptides (CPPs), and other organic and inorganic materials. This review focuses on non-viral vectors for the delivery of therapeutic cargoes to treat muscular dystrophies.
Subject(s)
Spinal Muscular Atrophy, Duchenne Muscular Dystrophy, Nucleic Acids, Humans, Rare Diseases/drug therapy, Rare Diseases/genetics, Duchenne Muscular Dystrophy/drug therapy, Duchenne Muscular Dystrophy/genetics, Spinal Muscular Atrophy/genetics, Spinal Muscular Atrophy/therapy, Muscles
ABSTRACT
BACKGROUND/OBJECTIVES: An affordable and scalable screening model is critical for detecting undiagnosed glaucoma. This study evaluated the performance of an offline, smartphone-based AI system for the detection of referable glaucoma against two benchmarks: specialist diagnosis following a full glaucoma workup and consensus image grading. SUBJECTS/METHODS: This prospective study (tertiary glaucoma centre, India) included 243 subjects with varying severity of glaucoma and a control group without glaucoma. Disc-centred images were captured using a validated smartphone-based fundus camera, analysed by the AI system, and graded by specialists. The diagnostic ability of the AI in detecting referable glaucoma (confirmed glaucoma) versus non-referable glaucoma (suspects and no glaucoma) was evaluated against both the final diagnosis (comprehensive glaucoma workup) and majority image grading by glaucoma specialists using pre-defined criteria. RESULTS: The AI system demonstrated a sensitivity and specificity of 93.7% (95% CI: 87.6-96.9%) and 85.6% (95% CI: 78.6-90.6%), respectively, in the detection of referable glaucoma when compared against the final diagnosis following a full glaucoma workup. The true negative rate in definite non-glaucoma cases was 94.7% (95% CI: 87.2-97.9%). Among the false negatives were 4 early and 3 moderate glaucoma cases. When the same set of images provided to the AI was also provided to the specialists for image grading, specialists detected 60% (67/111) of true glaucoma cases versus a detection rate of 94% (104/111) by the AI. CONCLUSION: The AI tool showed robust performance when compared against a stringent benchmark, with modest over-referral of normal subjects despite being challenged with fundus images alone. The next step involves a population-level assessment.
Subject(s)
Diabetic Retinopathy, Glaucoma, Humans, Artificial Intelligence, Prospective Studies, Smartphone, Diabetic Retinopathy/diagnosis, Mass Screening/methods, Glaucoma/diagnosis
ABSTRACT
In this study, we present nanocomposites of bioactive glass (BG) and hyaluronic acid (HA) (nano-BGHA) for effective delivery of HA to skin and bone. The nanocomposites were synthesized through a bio-inspired method, a modification of the traditional Stöber synthesis that avoids using ethanol, ammonia, synthetic surfactants, or high-temperature calcination. This environmentally friendly, bio-inspired route allowed the synthesis of mesoporous nanocomposites with an average hydrodynamic radius of ~190 nm and an average net surface charge of ~-21 mV. Most nanocomposites are amorphous and bioactive in nature, with over 70% cellular viability for skin and bone cell lines even at high concentrations, along with high cellular uptake (90-100%). Furthermore, the nanocomposites could penetrate skin cells in a transwell set-up and an artificial human skin membrane (Strat-M®), representing an attractive strategy for the delivery of HA to the skin. The purpose of the study is to develop nanocomposites of HA and BG with potential applications in non-invasive treatments that require the delivery of high-molecular-weight HA, such as osteoarthritis, sports injury treatments, eye drops, wound healing, and some anticancer treatments, if further investigated. The presence of BG further extends the range to bone-related applications. Additionally, the nanocomposites may have cosmeceutical applications where HA is abundantly used, for instance in moisturizers, dermal fillers, shampoos, and anti-wrinkle creams.
Subject(s)
Hyaluronic Acid, Nanocomposites, Humans, Skin, Bones, Wound Healing, Artificial Membranes, Glass
ABSTRACT
Purpose: The primary objective of this study was to develop and validate an AI algorithm as a screening tool for the detection of retinopathy of prematurity (ROP). Participants: Images were collected from infants enrolled in the KIDROP tele-ROP screening program. Methods: We developed a deep learning (DL) algorithm with 227,326 wide-field images from multiple camera systems obtained from the KIDROP tele-ROP screening program in India over an 11-year period. 37,477 temporal retina images were utilized, with the dataset split into training (n = 25,982, 69.33%), validation (n = 4,006, 10.69%), and an independent test set (n = 7,489, 19.98%). The algorithm consists of a binary classifier that distinguishes between the presence of ROP (Stages 1-3) and the absence of ROP. The image labels were retrieved from the daily registers of the tele-ROP program and consist of per-eye diagnoses provided by trained ROP graders based on all images captured during the screening session. Infants requiring treatment and a proportion of those not requiring urgent referral had an additional confirmatory diagnosis from an ROP specialist. Results: Of the 7,489 temporal images analyzed in the test set, 2,249 (30.0%) showed the presence of ROP. The sensitivity and specificity to detect ROP were 91.46% (95% CI: 90.23%-92.59%) and 91.22% (95% CI: 90.42%-91.97%), respectively, while the positive predictive value (PPV) was 81.72% (95% CI: 80.37%-83.00%), the negative predictive value (NPV) was 96.14% (95% CI: 95.60%-96.61%), and the AUROC was 0.970. Conclusion: The novel ROP screening algorithm demonstrated high sensitivity and specificity in detecting the presence of ROP. It has the potential to lower the number of screening sessions a specialist must conduct for a high-risk preterm infant, thus significantly improving workflow efficiency; a prospective clinical validation in a real-world tele-ROP platform is under consideration.
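As a consistency check on the figures above, PPV and NPV follow from sensitivity, specificity, and prevalence via Bayes' rule; plugging in the reported sensitivity (91.46%), specificity (91.22%), and ROP prevalence (30.0%) closely reproduces the reported predictive values. This is a sketch of the relationship, not the study's code.

```python
# PPV/NPV from sensitivity, specificity, and prevalence (Bayes' rule).
def ppv_npv(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Values reported in the abstract above.
ppv, npv = ppv_npv(sens=0.9146, spec=0.9122, prev=0.30)
print(f"PPV {ppv:.4f}, NPV {npv:.4f}")  # close to the reported 81.72% / 96.14%
```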
ABSTRACT
INTRODUCTION: Numerous studies have demonstrated the use of artificial intelligence (AI) for early detection of referable diabetic retinopathy (RDR). A direct comparison of these automated diabetic retinopathy (DR) image assessment softwares (ARIAs) is, however, challenging. We retrospectively compared the performance of two modern ARIAs, IDx-DR and Medios AI. METHODS: In this retrospective comparative study, retinal images with sufficient image quality were run on both ARIAs. They were captured in 811 consecutive patients with diabetes visiting diabetic clinics in Poland. For each patient, four non-mydriatic images with a 45° field of view were captured using a Topcon NW400, i.e., two sets of one optic disc-centered and one macula-centered image. Images were manually graded by certified graders for severity of DR as no DR, any DR (mild non-proliferative diabetic retinopathy [NPDR] or more severe disease), RDR (moderate NPDR or more severe disease and/or clinically significant diabetic macular edema [CSDME]), or sight-threatening DR (severe NPDR or more severe disease and/or CSDME). The ARIA output was compared to manual consensus image grading (reference standard). RESULTS: Of 807 patients, based on consensus grading, there was no evidence of DR in 543 (67%). Any DR was seen in 264 (33%) patients, of which 174 (22%) were RDR and 41 (5%) were sight-threatening DR. For Medios AI, the sensitivity of detecting RDR against reference standard grading was 95% (95% CI: 91, 98%) and the specificity was 80% (95% CI: 77, 83%). For IDx-DR, they were 99% (95% CI: 96, 100%) and 68% (95% CI: 64, 72%), respectively. CONCLUSION: Both ARIAs achieved satisfactory accuracy, with few false negatives. Although false-positive results generate additional costs and workload, missed cases raise the most concern whenever automated screening is debated.
Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Macular Edema, Humans, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Retrospective Studies, Mass Screening/methods, Macular Edema/diagnosis, Software
ABSTRACT
We here introduce a novel bioreducible polymer-based gene delivery platform enabling widespread transgene expression in multiple brain regions with therapeutic relevance following intracranial convection-enhanced delivery. Our bioreducible nanoparticles provide markedly enhanced gene delivery efficacy in vitro and in vivo compared to nonbiodegradable nanoparticles primarily due to the ability to release gene payloads preferentially inside cells. Remarkably, our platform exhibits competitive gene delivery efficacy in a neuron-rich brain region compared to a viral vector under previous and current clinical investigations with demonstrated positive outcomes. Thus, our platform may serve as an attractive alternative for the intracranial gene therapy of neurological disorders.
Subject(s)
Gene Transfer Techniques, Polymers, Polymers/metabolism, Genetic Therapy, Brain/metabolism
ABSTRACT
This letter is in response to the article "Enhancing India's Health Care during COVID Era: Role of Artificial Intelligence and Algorithms". While the integration of AI has the potential to improve patient outcomes and reduce the workload of healthcare professionals, there is a need for significant training and upskilling of healthcare providers. There are ethical and privacy concerns related to the use of AI in healthcare, which must be accompanied by rigorous guidelines. One solution to the overburdened healthcare systems in India is the use of new language generation models like ChatGPT to assist healthcare workers in writing discharge summaries. By using these technologies responsibly, we can improve healthcare outcomes and alleviate the burden on overworked healthcare professionals.
ABSTRACT
This study investigates public sentiment on laryngeal cancer via tweets posted in 2022, using machine learning. A novel dataset was created for this study by scraping all tweets from 1 January 2022 onward that included the hashtags #throatcancer, #laryngealcancer, #supraglotticcancer, #glotticcancer, and #subglotticcancer in their text. After a fourfold data cleaning process, the tweets were analyzed using natural language processing and sentiment analysis techniques to classify them into positive, negative, or neutral categories and to identify common themes and topics related to laryngeal cancer. The study analyzed a corpus of 733 tweets related to laryngeal cancer. The sentiment analysis revealed that 53% of the tweets were neutral, 34% were positive, and 13% were negative. The most common themes identified were treatment and therapy, risk factors, symptoms and diagnosis, prevention and awareness, and emotional impact. This study highlights the potential of social media platforms like Twitter as a valuable source of real-time, patient-generated data that can inform healthcare research and practice. Our findings suggest that while Twitter is a popular platform, the limited number of tweets related to laryngeal cancer indicates that a better strategy could be developed for online communication among netizens regarding awareness of laryngeal cancer.
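As a toy illustration of the positive/negative/neutral labeling described above, the sketch below uses a tiny hand-made lexicon. Real pipelines rely on far richer lexicons or trained models; the word lists and example tweets here are invented for illustration only.

```python
# Toy lexicon-based sentiment classifier: count positive vs negative
# words in a tweet and label by the sign of the difference.
POSITIVE = {"hope", "survivor", "recovered", "awareness", "support", "grateful"}
NEGATIVE = {"pain", "loss", "scared", "died", "risk", "smoking"}

def classify(tweet: str) -> str:
    # Normalize: lowercase and strip common punctuation/hashtag marks.
    words = {w.strip(".,!?#").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = [
    "Grateful to be a survivor, spreading #laryngealcancer awareness!",
    "Smoking is a major risk factor for #throatcancer",
    "New #laryngealcancer clinic opens downtown",
]
labels = [classify(t) for t in tweets]
print(labels)
```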
ABSTRACT
Accurate classification of laryngeal cancer is a critical step for diagnosis and appropriate treatment. Radiomics is a rapidly advancing field in medical image processing that uses various algorithms to extract many quantitative features from radiological images. The high-dimensional features extracted tend to cause overfitting and increase the complexity of the classification model; feature selection therefore plays an integral part in selecting relevant features for the classification problem. In this study, we explore the predictive capabilities of radiomics on computed tomography (CT) images of laryngeal cancer to predict the histopathological grade and T stage of the tumour, and investigate whether radiomic features extracted from CT images can be used for the classification of laryngeal tumours. Working with a pilot dataset of 20 images, an experienced radiologist carefully annotated the supraglottic lesions in the three-dimensional plane. Over 280 radiomic features quantifying shape, intensity, and texture were extracted from each image. Machine learning classifiers were built and tested to predict the stage and grade of the malignant tumour based on the calculated radiomic features. Of the 280 features extracted from every image in the dataset, 24 were found to be potential classifiers of laryngeal tumour stage and 12 were good classifiers of the histopathological grade of the laryngeal tumour. The novelty of this paper lies in the ability to create these classifiers before the surgical biopsy procedure, giving the clinician valuable, timely information.
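A univariate feature-selection step of the kind described above can be sketched with scikit-learn. The 280-feature matrix and binary labels below are randomly generated stand-ins for the shape, intensity, and texture features; the study's own selection method is not specified here.

```python
# Sketch: select the k most discriminative "radiomic" features by ANOVA
# F-test. The data are synthetic stand-ins, not the study's CT features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# 20 cases (as in the pilot set) x 280 candidate features.
X, y = make_classification(n_samples=20, n_features=280, n_informative=12,
                           random_state=0)

# Keep the 12 features most associated with the binary label.
selector = SelectKBest(score_func=f_classif, k=12).fit(X, y)
selected = selector.get_support(indices=True)
print(f"Selected {len(selected)} of {X.shape[1]} features:", selected)
```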
ABSTRACT
PRCIS: The offline artificial intelligence (AI) on a smartphone-based fundus camera shows good agreement and correlation with the vertical cup-to-disc ratio (vCDR) from spectral-domain optical coherence tomography (SD-OCT) and manual grading by experts. PURPOSE: The purpose of this study is to assess the agreement of vCDR measured by a new AI software from optic disc images obtained using a validated smartphone-based imaging device, with SD-OCT vCDR measurements and manual grading by experts on a stereoscopic fundus camera. METHODS: In a prospective, cross-sectional study, participants above 18 years (glaucoma and normal) underwent a dilated fundus evaluation, followed by optic disc imaging including a 42-degree monoscopic disc-centered image (Remidio NM-FOP-10), a 30-degree stereoscopic disc-centered image (Kowa nonmyd WX-3D desktop fundus camera), and disc analysis (Cirrus SD-OCT). Remidio FOP images were analyzed for vCDR using the new AI software, and Kowa stereoscopic images were manually graded by 3 fellowship-trained glaucoma specialists. RESULTS: We included 473 eyes of 244 participants. The vCDR values from the new AI software showed strong agreement with SD-OCT measurements [95% limits of agreement (LoA) = -0.13 to 0.16]. The agreement with SD-OCT was marginally better in eyes with higher vCDR (95% LoA = -0.15 to 0.12 for vCDR > 0.8). The intraclass correlation coefficient was 0.90 (95% CI, 0.88-0.91). The vCDR values from the AI software showed a good correlation with the manual segmentation by experts (intraclass correlation coefficient = 0.89; 95% CI, 0.87-0.91) on stereoscopic images (95% LoA = -0.18 to 0.11), with agreement better for eyes with vCDR > 0.8 (LoA = -0.12 to 0.08). CONCLUSIONS: The new AI software vCDR measurements had excellent agreement and correlation with the SD-OCT and manual grading. The ability of the Medios AI to work offline, without requiring cloud-based inferencing, is an added advantage.
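The limits-of-agreement statistics quoted above come from a Bland-Altman analysis, sketched below on synthetic paired vCDR readings. The noise level and sample size are invented; the study's LoA were computed on 473 real eyes.

```python
# Bland-Altman analysis: bias (mean difference) and 95% limits of
# agreement between two measurement methods, on synthetic paired data.
import numpy as np

rng = np.random.default_rng(0)
vcdr_ai = rng.uniform(0.3, 0.9, 50)            # hypothetical AI readings
vcdr_oct = vcdr_ai + rng.normal(0, 0.04, 50)   # hypothetical SD-OCT readings

diff = vcdr_ai - vcdr_oct
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
print(f"Bias {bias:.3f}, 95% LoA [{loa_low:.3f}, {loa_high:.3f}]")
```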
Subject(s)
Glaucoma, Optic Disc, Optic Nerve Diseases, Humans, Optical Coherence Tomography/methods, Artificial Intelligence, Prospective Studies, Cross-Sectional Studies, Optic Nerve Diseases/diagnosis, Intraocular Pressure, Glaucoma/diagnosis, Software, Photography/methods, Reproducibility of Results
ABSTRACT
BACKGROUND: Refraction is one of the key components of a comprehensive eye examination. Auto refractometers that are reliable and affordable can be beneficial, especially in a low-resource community setting. The study aimed to validate the accuracy of a novel wave-front aberrometry-based auto refractometer, Instaref R20, against an open-field system and subjective refraction in an adult population. METHODS: All the participants underwent a comprehensive eye examination including objective refraction, subjective acceptance, and anterior and posterior segment evaluation. Refraction was performed without cycloplegia using the WAM5500 open-field auto refractometer (OFAR) and Instaref R20, the study device. Agreement between both methods was evaluated using Bland-Altman analysis. The repeatability of the device, based on three measurements in a subgroup of 40 adults, was assessed. RESULTS: The refractive error was measured in 132 participants (mean age 30.53 ± 9.36 years, 58.3% female). The paired mean difference of the refraction values of the study device against OFAR was -0.13D for M, -0.0002D for J0, and -0.13D for J45, and against subjective refraction (SR) was -0.09D (M), 0.06D (J0), and 0.03D (J45). The device agreed within ±0.50D of OFAR in 78% of eyes for M, 79% for J0, and 78% for J45. The device agreed within ±0.50D of SR values for M (84%), J0 (86%), and J45 (89%). CONCLUSION: This study found good agreement between the measurements obtained with the portable autorefractor and both the open-field refractometer and SR values. It has a potential application in population-based community vision screening programs for refractive error correction without the need for highly trained personnel.
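The "agreed within ±0.50D" figures above are simple tolerance proportions over paired readings. A short sketch, using hypothetical dioptre values rather than the study's data:

```python
import numpy as np

def pct_within(a, b, tol=0.50):
    """Percentage of paired readings whose difference is within ±tol dioptres."""
    d = np.abs(np.asarray(a) - np.asarray(b))
    return 100.0 * (d <= tol).mean()

# Hypothetical spherical-equivalent (M) readings from two methods
device = np.array([-1.25, -0.50, 0.75, -2.00, 0.25])
openfield = np.array([-1.00, -0.75, 0.50, -2.75, 0.25])
print(f"{pct_within(device, openfield):.0f}% within ±0.50D")  # → 80% within ±0.50D
```

The same computation applies per vector component (M, J0, J45) after converting sphere/cylinder/axis into power-vector form.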
Subject(s)
Refractive Errors, Vision Screening, Humans, Adult, Female, Young Adult, Male, Prospective Studies, Aberrometry, Reproducibility of Results, Ocular Refraction, Refractive Errors/diagnosis, Vision Tests, Vision Screening/methods
ABSTRACT
Purpose: InstaRef R20 is a handheld, affordable auto refractometer based on Shack-Hartmann aberrometry. The study's objective was to compare InstaRef R20's performance in identifying refractive error in a paediatric population against standard subjective and objective refraction under both pre- and post-cycloplegic conditions. Methods: Refraction was performed using 1) the standard clinical procedure consisting of retinoscopy followed by subjective refraction (SR) under pre- and post-cycloplegic conditions and 2) InstaRef R20. Agreement between both methods was evaluated using Bland-Altman analysis. The repeatability of the device, based on three measurements in a subgroup of 20 children, was assessed. Results: The refractive error was measured in 132 children (mean age 12.31 ± 3 years). The spherical equivalent (M) and cylindrical components (J0 and J45) of the device had clinically acceptable differences (within ±0.50D) and acceptable agreement compared with standard pre- and post-cycloplegic manual retinoscopy and SR. The device agreed within ±0.50D of retinoscopy in 67% of eyes for M, 78% for J0, and 80% for J45, and within ±0.50D of SR in 70% for M and 77% for the cylindrical components. Conclusion: InstaRef R20 has acceptable agreement with standard retinoscopy in a paediatric population. The measurements from this device can be used as a starting point for subjective acceptance. The device, being simple to use, portable, reliable, and affordable, has the potential for large-scale community-based refractive error detection.
ABSTRACT
Purpose: To evaluate the performance of a validated Artificial Intelligence (AI) algorithm, developed for a smartphone-based camera, on images captured using a standard desktop fundus camera to screen for diabetic retinopathy (DR). Participants: Subjects with established diabetes mellitus. Methods: Images captured on a desktop fundus camera (Topcon TRC-50DX, Japan) for a previous study of 135 consecutive patients (233 eyes) with established diabetes mellitus, with or without DR, were analysed by the AI algorithm. The performance of the AI algorithm in detecting any DR, referable DR (RDR, i.e., worse than mild non-proliferative diabetic retinopathy (NPDR) and/or diabetic macular edema (DME)), and sight-threatening DR (STDR, i.e., severe NPDR or worse and/or DME) was assessed against both image-based consensus grades by two fellowship-trained vitreoretinal specialists and clinical examination. Results: The sensitivity was 98.3% (95% CI 96%, 100%) and the specificity 83.7% (95% CI 73%, 94%) for RDR against image grading. The specificity for RDR decreased to 65.2% (95% CI 53.7%, 76.6%) and the sensitivity marginally increased to 100% (95% CI 100%, 100%) when compared against clinical examination. The sensitivity for detection of any DR was 97.6% (95% CI 95%, 100%) against both image-based consensus grading and clinical examination. The specificity for any DR detection was 90.9% (95% CI 82.3%, 99.4%) against image grading and 88.9% (95% CI 79.7%, 98.1%) against clinical examination. The sensitivity for STDR was 99.0% (95% CI 96%, 100%) against image grading and 100% (95% CI 100%, 100%) against clinical examination. Conclusion: The AI algorithm screened for RDR and any DR with robust performance on images captured on a desktop fundus camera when compared with image grading, despite having previously been optimized for a smartphone-based camera.
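Sensitivity and specificity with normal-approximation 95% confidence intervals, as reported throughout these abstracts, can be computed directly from confusion-matrix counts. A minimal sketch; the counts below are hypothetical, not the study's data:

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with normal-approximation 95% CIs."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)   # Wald interval half-width
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}

# Hypothetical counts for a referable-DR screen vs. a reference grading
result = sens_spec_ci(tp=95, fn=5, tn=80, fp=20)
se, se_lo, se_hi = result["sensitivity"]
print(f"Sensitivity {se:.1%} (95% CI {se_lo:.1%}-{se_hi:.1%})")
```

The Wald interval used here matches the symmetric CIs quoted above; for proportions near 0% or 100% (e.g. the 100% sensitivities reported), exact or Wilson intervals are generally preferred because the normal approximation degenerates.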