Results 1 - 20 of 32
1.
J Neural Eng ; 21(1), 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38237175

ABSTRACT

Peripheral nerve interfaces (PNIs) are electrical systems designed to integrate with peripheral nerves in patients, such as following central nervous system (CNS) injuries to augment or replace CNS control and restore function. We review the literature for clinical trials and studies containing clinical outcome measures to explore the utility of human applications of PNIs. We discuss the various types of electrodes currently used for PNI systems and their functionalities and limitations. We discuss important design characteristics of PNI systems, including biocompatibility, resolution and specificity, efficacy, and longevity, to highlight their importance in the current and future development of PNIs. The clinical outcomes of PNI systems are also discussed. Finally, we review relevant PNI clinical trials that were conducted, up to the present date, to restore the sensory and motor function of upper or lower limbs in amputees, spinal cord injury patients, or intact individuals and describe their significant findings. This review highlights the current progress in the field of PNIs and serves as a foundation for future development and application of PNI systems.


Subject(s)
Amputees , Peripheral Nerves , Humans , Amputation, Surgical , Electrodes , Paralysis/surgery
2.
IEEE Trans Image Process ; 33: 241-256, 2024.
Article in English | MEDLINE | ID: mdl-38064329

ABSTRACT

Accurate classification of nuclei communities is an important step towards the timely treatment of cancer spread. Graph theory provides an elegant way to represent and analyze nuclei communities within the histopathological landscape in order to perform tissue phenotyping and tumor profiling tasks. Many researchers have worked on recognizing nuclei regions within histology images in order to grade cancerous progression. However, due to the high structural similarities between nuclei communities, defining a model that can accurately differentiate between pathological nuclei patterns remains an open problem. To surmount this challenge, we present a novel approach, dubbed neural graph refinement, that enhances the capabilities of existing models to perform nuclei recognition tasks by employing graph representational learning and broadcasting processes. Based on the physical interaction of the nuclei, we first construct a fully connected graph in which nodes represent nuclei and adjacent nodes are connected to each other via an undirected edge. For each edge and node pair, appearance and geometric features are computed and then utilized to generate the neural graph embeddings. These embeddings are used to diffuse contextual information to the neighboring nodes, along a path traversing the whole graph, to infer global information over an entire nuclei network and predict pathologically meaningful communities. Through rigorous evaluation of the proposed scheme across four public datasets, we show that learning such communities through neural graph refinement produces results that outperform state-of-the-art methods.


Subject(s)
Cell Nucleus , Learning , Histological Techniques
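To make the graph construction and context-diffusion idea in the abstract above concrete, here is a minimal NumPy sketch: nuclei are nodes of a fully connected graph, edge weights come from a Gaussian of centroid distance, and each diffusion step mixes a node's features with a weighted average of the other nodes' features. The weighting scheme, step count, and mixing factor are illustrative assumptions, not the paper's learned embeddings.

```python
import numpy as np

def diffuse_node_features(node_feats, centroids, steps=2, sigma=50.0):
    """Toy message passing over a fully connected nuclei graph.

    node_feats: (N, D) per-nucleus appearance features.
    centroids:  (N, 2) nucleus positions used for geometric edge weights.
    """
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))   # closer nuclei exchange more information
    np.fill_diagonal(w, 0.0)
    w /= w.sum(axis=1, keepdims=True)            # row-normalise edge weights
    h = node_feats.astype(float).copy()
    for _ in range(steps):                       # broadcast context across the graph
        h = 0.5 * h + 0.5 * (w @ h)
    return h

# Example on synthetic data: 50 nuclei with 8-dimensional features.
feats = np.random.rand(50, 8)
pos = np.random.rand(50, 2) * 500.0
context_aware = diffuse_node_features(feats, pos)
```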
3.
IEEE J Biomed Health Inform ; 28(2): 952-963, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37999960

ABSTRACT

Early-stage cancer diagnosis potentially improves the chances of survival for many cancer patients worldwide. Manual examination of Whole Slide Images (WSIs) is a time-consuming task for analyzing the tumor microenvironment. To overcome this limitation, the conjunction of deep learning with computational pathology has been proposed to assist pathologists in efficiently prognosing the cancerous spread. Nevertheless, existing deep learning methods are ill-equipped to handle fine-grained histopathology datasets. This is because these models are constrained by the conventional softmax loss function, which cannot push them to learn distinct representational embeddings of similarly textured WSIs with an imbalanced data distribution. To address this problem, we propose a novel center-focused affinity loss (CFAL) function that 1) constructs uniformly distributed class prototypes in the feature space, 2) penalizes difficult samples, 3) minimizes intra-class variations, and 4) places greater emphasis on learning minority class features. We evaluated the performance of the proposed CFAL loss function on two publicly available breast and colon cancer datasets having varying levels of class imbalance. The proposed CFAL function shows better discrimination abilities than popular loss functions such as ArcFace, CosFace, and Focal loss. Moreover, it outperforms several state-of-the-art (SOTA) methods for histology image classification across both datasets.


Subject(s)
Breast , Neoplasms , Humans , Breast/diagnostic imaging , Histological Techniques , Tumor Microenvironment , Neoplasms/diagnostic imaging
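As a rough illustration of how a prototype-centred, imbalance-aware loss of this kind can be assembled, the PyTorch sketch below combines cosine affinity to fixed class prototypes, focal-style down-weighting of easy samples, and per-class weights for minority classes. It is only a sketch under these assumptions; the paper's exact CFAL formulation is not reproduced here.

```python
import torch
import torch.nn.functional as F

def prototype_affinity_loss(embeddings, labels, prototypes, class_weights,
                            gamma=2.0, temp=0.1):
    """Illustrative prototype-based loss (not the paper's exact CFAL)."""
    emb = F.normalize(embeddings, dim=1)           # (N, D) sample embeddings
    proto = F.normalize(prototypes, dim=1)         # (C, D) class prototypes
    logits = emb @ proto.t() / temp                # affinity of each sample to each prototype
    log_p = F.log_softmax(logits, dim=1)
    p_true = log_p.gather(1, labels[:, None]).exp().squeeze(1)
    focal = (1.0 - p_true) ** gamma                # emphasise difficult samples
    ce = F.nll_loss(log_p, labels, weight=class_weights, reduction="none")
    return (focal * ce).mean()

# Example: 4 tissue classes, 16-dimensional embedding space.
emb = torch.randn(32, 16)
labels = torch.randint(0, 4, (32,))
protos = torch.randn(4, 16)                        # e.g. fixed, uniformly spread vectors
weights = torch.tensor([0.5, 1.0, 2.0, 4.0])       # heavier weight for minority classes
loss = prototype_affinity_loss(emb, labels, protos, weights)
```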
4.
Sci Rep ; 13(1): 22885, 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38129680

ABSTRACT

Tomatoes are a major crop worldwide, and accurately classifying their maturity is important for many agricultural applications, such as harvesting, grading, and quality control. In this paper, the authors propose a novel method for tomato maturity classification using a convolutional transformer. The convolutional transformer is a hybrid architecture that combines the strengths of convolutional neural networks (CNNs) and transformers. Additionally, this study introduces a new tomato dataset named KUTomaData, explicitly designed to train deep-learning models for tomato segmentation and classification. KUTomaData is a compilation of images sourced from a greenhouse in the UAE, with approximately 700 images available for training and testing. The dataset is prepared under various lighting conditions and viewing perspectives and employs different mobile camera sensors, distinguishing it from existing datasets. The contributions of this paper are threefold: firstly, the authors propose a novel method for tomato maturity classification using a modular convolutional transformer. Secondly, the authors introduce a new tomato image dataset that contains images of tomatoes at different maturity levels. Lastly, the authors show that the convolutional transformer outperforms state-of-the-art methods for tomato maturity classification. The effectiveness of the proposed framework in handling cluttered and occluded tomato instances was evaluated using two additional public datasets, Laboro Tomato and Rob2Pheno Annotated Tomato, as benchmarks. The evaluation results across these three datasets demonstrate the exceptional performance of our proposed framework, surpassing the state-of-the-art by 58.14%, 65.42%, and 66.39% in terms of mean average precision scores for KUTomaData, Laboro Tomato, and Rob2Pheno Annotated Tomato, respectively. This work can potentially improve the efficiency and accuracy of tomato harvesting, grading, and quality control processes.
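For readers unfamiliar with the hybrid CNN-plus-transformer idea, the following PyTorch sketch shows one generic way such a classifier can be wired: a small convolutional stem produces a grid of local features, a transformer encoder models global context between them, and a linear head predicts the maturity class. The layer sizes and depths are illustrative assumptions and are not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ConvTransformerClassifier(nn.Module):
    """Toy hybrid convolutional-transformer classifier (sizes are illustrative)."""
    def __init__(self, num_classes=4, dim=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                          # x: (B, 3, H, W)
        feats = self.stem(x)                       # (B, dim, h, w) local CNN features
        tokens = feats.flatten(2).transpose(1, 2)  # (B, h*w, dim) tokens for the encoder
        tokens = self.encoder(tokens)              # global context via self-attention
        return self.head(tokens.mean(dim=1))       # pooled representation -> class logits

model = ConvTransformerClassifier()
logits = model(torch.randn(2, 3, 128, 128))        # -> (2, 4)
```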

5.
J Neurosurg Case Lessons ; 6(26), 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38145561

ABSTRACT

BACKGROUND: Cancer-related or postoperative pain can occur following sacral chordoma resection. Despite a lack of current recommendations for cancer pain treatment, spinal cord stimulation (SCS) has demonstrated effectiveness in addressing cancer-related pain. OBSERVATIONS: A 76-year-old female with a sacral chordoma underwent anterior osteotomies and partial en bloc sacrectomy. She subsequently presented with chronic pain affecting both buttocks and posterior thighs and legs, significantly impeding her daily activities. She underwent a staged epidural SCS paddle trial and permanent system placement using intraoperative neuromonitoring. The utilization of percutaneous leads was not viable because of her history of spinal fluid leakage, multiple lumbosacral surgeries, and previous complex plastic surgery closure. The patient reported a 62.5% improvement in her lower-extremity pain per the modified Quadruple Visual Analog Scale and a 50% improvement in the modified Pain and Sleep Questionnaire 3-item index during the SCS trial. Following permanent SCS system placement and removal of her externalized lead extenders, she had an uncomplicated postoperative course and reported notable improvements in her pain symptoms. LESSONS: This case provides a compelling illustration of the successful treatment of chronic pain using SCS following radical sacral chordoma resection. Surgeons may consider this treatment approach in patients presenting with refractory pain following spinal tumor resection.

6.
Cureus ; 15(9): e45962, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37900519

ABSTRACT

Spinal surgical procedures are steadily increasing globally due to the broad indications of certain techniques encompassing a wide spectrum of conditions, including degenerative spine disorders, congenital anomalies, spinal metastases, and traumatic spinal fractures. The two specialties, neurosurgery (NS) and orthopedic surgery (OS), both possess the clinical adeptness to perform these procedures. With the advancing focus on comparative effectiveness research, it is vital to compare patient outcomes in spine surgeries performed by orthopedic surgeons and neurosurgeons, given their distinct approaches and training backgrounds, to guide hospital programs and physicians in considering surgeon specialty when making informed decisions. Our review of the available literature revealed no significant difference in postoperative outcomes in terms of blood loss, neurological deficit, dural injury, intraoperative complications, and postoperative wound dehiscence in procedures performed by neurosurgeons and orthopedic surgeons. An increase in blood transfusion rates among patients operated on by orthopedic surgeons and a longer operative time for procedures performed by neurosurgeons were consistent findings across several studies. Other findings include a prolonged hospital stay, higher hospital readmission rates, and a lower cost of procedures in patients operated on by orthopedic surgeons. A few studies revealed lower sepsis and unplanned intubation rates and a higher incidence of urinary tract infections (UTIs) and pneumonia postoperatively among patient cohorts operated on by neurosurgeons. Certain limitations were identified in the studies, including the use of large databases with incomplete information related to patient and surgeon demographics. Hence, it is imperative to account for these confounding variables in future studies to alleviate any biases. Nevertheless, it is essential to embrace a multidisciplinary approach integrating the surgical expertise of the two specialties and to develop standardized management guidelines and techniques for spinal disorders to mitigate complications and enhance patient outcomes.

7.
Proc (Bayl Univ Med Cent) ; 36(6): 722-727, 2023.
Article in English | MEDLINE | ID: mdl-37829212

ABSTRACT

Purpose: To compare the lobbying expenditures and political action committee (PAC) campaign finance activities of the American Academy of Ophthalmology (AAO), American Society of Cataract and Refractive Surgery (ASCRS), and American Optometric Association (AOA) from 2015 to 2022. Methods: Financial data were collected from the Federal Election Commission and OpenSecrets database. Analysis was performed to characterize and compare financial activity among the organizations. P < 0.05 was considered significant and all analyses were two-sided. Results: From 2015 to 2022, the AAO, ASCRS, and AOA spent $6,745,000, $5,354,406, and $13,335,000 on lobbying, respectively. The AOA's annual lobbying expenditure (median, $1,725,000) was significantly greater than AAO's ($842,500, P = 0.03) and ASCRS's ($694,289, P < 0.001). In PAC donations, OPHTHPAC, affiliated with AAO, received $3,221,737 from 2079 donors (median, $900); eyePAC, affiliated with ASCRS, received $506,255 from 349 donors ($500); and AOA-PAC received $6,642,588 from 3641 donors ($825). Compared to eyePAC, median donations to OPHTHPAC (P = 0.01) and AOA-PAC (P = 0.04) were significantly higher. In campaign spending, OPHTHPAC contributed $2,728,500 to 326 campaigns (median, $5000), eyePAC contributed $293,500 to 58 campaigns ($3000), and AOA-PAC contributed $5,128,673 to 617 campaigns ($5500). eyePAC's median campaign contribution was significantly lower than the AOA's (P < 0.001) and AAO's (P = 0.007). Every PAC directed most of its contributions toward Republican campaigns; eyePAC donated the highest proportion (64.9%). Conclusions: AOA was more assertive in shaping policy by increasing lobbying expenditures, fundraising, and donating to a greater number of election campaigns.

8.
Cancers (Basel) ; 15(17), 2023 Aug 27.
Article in English | MEDLINE | ID: mdl-37686561

ABSTRACT

BACKGROUND: The outcomes of orbital exenteration (OE) in patients with craniofacial lesions (CFLs) remain unclear. The present review summarizes the available literature on the clinical outcomes of OE, including surgical outcomes and overall survival (OS). METHODS: Relevant articles were retrieved from Medline, Scopus, and Cochrane according to PRISMA guidelines. A systematic review and meta-analysis were conducted on the clinical characteristics, management, and outcomes. RESULTS: A total of 33 articles containing 957 patients who underwent OE for CFLs were included (weighted mean age: 64.3 years [95% CI: 59.9-68.7]; 58.3% were male). The most common lesion was squamous cell carcinoma (31.8%), and the most common symptom was disturbed vision/reduced visual acuity (22.5%). Of the patients, 302 (31.6%) had total OE, 248 (26.0%) had extended OE, and 87 (9.0%) had subtotal OE. Free flaps (33.3%), endosseous implants (22.8%), and split-thickness skin grafts (17.2%) were the most used reconstructive methods. Sino-orbital or sino-nasal fistula (22.6%), flap or graft failure (16.9%), and hyperostosis (13%) were the most reported complications. Regarding tumor recurrences, 38.6% were local, 32.3% were distant, and 6.7% were regional. The perineural invasion rate was 17.4%, while the lymphovascular invasion rate was 5.0%. Over a weighted mean follow-up period of 23.6 months (95% CI: 13.8-33.4), a weighted overall mortality rate of 39% (95% CI: 28-50%) was observed. The 5-year OS rate was 50% (median: 61 months [95% CI: 46-83]). The OS multivariable analysis did not show any significant findings. CONCLUSIONS: Although OE is a disfiguring procedure with devastating outcomes, it is a viable option for carefully selected patients with advanced CFLs. A patient-tailored approach based on tumor pathology, extension, and overall patient condition is warranted.

9.
Brain Inform ; 10(1): 25, 2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37689601

ABSTRACT

Early identification of mental disorders, based on subjective interviews, is extremely challenging in the clinical setting. There is growing interest in developing automated screening tools for potential mental health problems based on biological markers. Here, we demonstrate the feasibility of an AI-powered diagnosis of different mental disorders using EEG data. Specifically, this work aims to accurately classify different mental disorders in the following ecological context: (1) using raw EEG data, (2) collected during rest, (3) during both eyes-open and eyes-closed conditions, (4) at a short 2-min duration, (5) on participants with different psychiatric conditions, (6) with some overlapping symptoms, and (7) with strongly imbalanced classes. To tackle this challenge, we designed and optimized a transformer-based architecture, where class imbalance is addressed through focal loss and class weight balancing. Using the recently released TDBRAIN dataset (n = 1274 participants), our method classifies each participant as either neurotypical or suffering from major depressive disorder (MDD), attention deficit hyperactivity disorder (ADHD), subjective memory complaints (SMC), or obsessive-compulsive disorder (OCD). We evaluate the performance of the proposed architecture at both the window level and the patient level. The classification of the 2-min raw EEG data into five classes achieved a window-level accuracy of 63.2% and 65.8% for the eyes-open and eyes-closed conditions, respectively. When the classification is limited to three main classes (MDD, ADHD, SMC), window-level accuracy improved to 75.1% and 69.9% for the eyes-open and eyes-closed conditions, respectively. Our work paves the way for developing novel AI-based methods for accurately diagnosing mental disorders using raw resting-state EEG data.
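The abstract mentions focal loss combined with class-weight balancing to handle the strongly imbalanced classes; a generic PyTorch version of that combination is sketched below. The weights and focusing parameter are placeholders, not the values used in the study.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, class_weights, gamma=2.0):
    """Focal loss with per-class weights, a common remedy for class imbalance."""
    log_p = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_p, targets, weight=class_weights, reduction="none")
    p_true = log_p.gather(1, targets[:, None]).exp().squeeze(1)
    return ((1.0 - p_true) ** gamma * ce).mean()   # down-weight easy samples

# Example: 5 classes (neurotypical, MDD, ADHD, SMC, OCD); the weights are
# illustrative, e.g. roughly inverse to assumed class frequencies.
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
weights = torch.tensor([0.5, 1.0, 1.2, 2.0, 3.0])
loss = weighted_focal_loss(logits, targets, weights)
```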

11.
J Neurosurg Case Lessons ; 5(26), 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37399140

ABSTRACT

BACKGROUND: Schwannomas are common peripheral nerve sheath tumors. Imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) can help to distinguish schwannomas from other types of lesions. However, there have been several reported cases describing the misdiagnosis of aneurysms as schwannomas. OBSERVATIONS: A 70-year-old male with ongoing pain despite spinal fusion surgery underwent MRI. A lesion was noted along the left sciatic nerve, which was believed to be a sciatic nerve schwannoma. During the surgery for planned neurolysis and tumor resection, the lesion was noted to be pulsatile. Electromyography mapping and intraoperative ultrasound confirmed vascular pulsations and turbulent flow within the aneurysm, so the surgery was aborted. A formal CT angiogram revealed the lesion to be an internal iliac artery (IIA) branch aneurysm. The patient underwent coil embolization with complete obliteration of the aneurysm. LESSONS: The authors report the first case of an IIA aneurysm misdiagnosed as a sciatic nerve schwannoma. Surgeons should be aware of this potential misdiagnosis and potentially use other imaging modalities to confirm the lesion before proceeding with surgery.

12.
Biomed Signal Process Control ; 85: 104855, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36987448

ABSTRACT

Chest X-rays (CXRs) are the most commonly used imaging methodology in radiology to diagnose pulmonary diseases, with close to 2 billion CXRs taken every year. The recent upsurge of COVID-19 and its variants, accompanied by pneumonia and tuberculosis, can be fatal in some cases, and lives could be saved through early detection and appropriate intervention in advanced cases. Thus, CXRs can be used for automated severity grading of pulmonary diseases, which can aid radiologists in making better and more informed diagnoses. In this article, we propose a single framework for disease classification and severity scoring produced by segmenting the lungs into six regions. We present a modified progressive learning technique in which the amount of augmentation at each step is capped. Our base network in the framework is first trained using modified progressive learning and can then be tweaked for new data sets. Furthermore, the segmentation task makes use of an attention map generated within and by the network itself. This attention mechanism allows the network to achieve segmentation results that are on par with networks having at least an order of magnitude more parameters. We also propose, with the help of radiologists, severity score grading for four thoracic diseases that provides a single-digit score corresponding to the spread of opacity in different lung segments. The proposed framework is evaluated using the BRAX data set for segmentation and classification into six classes, with severity grading for a subset of the classes. On the BRAX validation data set, we achieve F1 scores of 0.924 and 0.939 without and with fine-tuning, respectively. A mean matching score of 80.8% is obtained for severity score grading, while an average area under the receiver operating characteristic curve of 0.88 is achieved for classification.
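One simple reading of "an attention map generated within and by the network itself" is a self-gating block in which the network predicts a sigmoid spatial map from its own features and re-weights them with it. The sketch below shows that generic pattern only, not the paper's actual segmentation architecture.

```python
import torch
import torch.nn as nn

class SelfAttentionGate(nn.Module):
    """Generic self-generated spatial attention gate (illustrative only)."""
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)   # 1x1 conv -> attention logits

    def forward(self, feats):                                # feats: (B, C, H, W)
        attn_map = torch.sigmoid(self.attn(feats))           # (B, 1, H, W) in [0, 1]
        return feats * attn_map, attn_map                    # re-weighted features + map

gate = SelfAttentionGate(channels=64)
gated, attn = gate(torch.randn(1, 64, 32, 32))               # gated: (1, 64, 32, 32)
```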

13.
Comput Biol Med ; 150: 106124, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36208597

ABSTRACT

Prostate cancer (PCa) is one of the deadliest cancers in men, and identifying cancerous tissue patterns at an early stage can assist clinicians in treating the spread of PCa in a timely manner. Many researchers have developed deep learning systems for mass-screening PCa. These systems, however, are commonly trained with well-annotated datasets in order to produce accurate results. Obtaining such data for training is often time- and resource-demanding in clinical settings and can result in compromised screening performance. To address these limitations, we present a novel knowledge distillation-based instance segmentation scheme that allows conventional semantic segmentation models to perform instance-aware segmentation to extract stroma, benign, and cancerous prostate tissues from whole slide images (WSIs) with incremental few-shot training. The extracted tissues are then used to compute majority and minority Gleason scores, which are subsequently used to grade the PCa as per clinical standards. The proposed scheme has been thoroughly tested on two datasets, containing around 10,516 and 11,000 WSI scans, respectively. Across both datasets, the proposed scheme outperforms state-of-the-art methods by 2.01% and 4.45%, respectively, in terms of the mean IoU score for identifying prostate tissues, and by 10.73% and 11.42% in terms of F1 score for grading PCa according to clinical standards. Furthermore, the applicability of the proposed scheme was tested in a blind experiment with a panel of expert pathologists, where it achieved statistically significant Pearson correlations of 0.9192 and 0.8984 with the clinicians' grading.


Subject(s)
Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Neoplasm Grading
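The clinical standard referred to above maps the majority (primary) and minority (secondary) Gleason patterns to an ISUP grade group. A small helper implementing that standard mapping, not the authors' code, could look like this:

```python
def isup_grade_group(primary, secondary):
    """Map primary and secondary Gleason patterns (1-5) to the ISUP grade group."""
    total = primary + secondary
    if total <= 6:
        return 1                                    # Gleason <= 6
    if total == 7:
        return 2 if (primary, secondary) == (3, 4) else 3   # 3+4 vs 4+3
    if total == 8:
        return 4                                    # 4+4, 3+5, 5+3
    return 5                                        # Gleason 9-10

assert isup_grade_group(3, 4) == 2
assert isup_grade_group(4, 3) == 3
```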
14.
Med Image Anal ; 79: 102480, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35598521

ABSTRACT

Identification of nuclear components in the histology landscape is an important step towards developing computational pathology tools for profiling the tumor micro-environment. Most existing methods for the identification of such components are limited in scope due to the heterogeneous nature of the nuclei. Graph-based methods offer a natural way to formulate the nucleus classification problem so as to incorporate both the appearance and the geometric locations of the nuclei. The main challenge is to define models that can handle such an unstructured domain. Current approaches focus on learning better features and then employ well-known classifiers for identifying distinct nuclear phenotypes. In contrast, we propose a message passing network that is a fully learnable framework built on a classical network flow formulation. Based on the physical interaction of the nuclei, a nearest neighbor graph is constructed such that the nodes represent the nuclei centroids. For each edge and node, appearance and geometric features are computed, which are then used to construct the messages utilized for diffusing contextual information to the neighboring nodes. Such an algorithm can infer global information over an entire network and predict biologically meaningful nuclear communities. We show that learning such communities improves the performance of the nucleus classification task in histology images. The proposed algorithm can be used as a component in existing state-of-the-art methods, resulting in improved nucleus classification performance across four different publicly available datasets.


Subject(s)
Histological Techniques , Neural Networks, Computer , Algorithms , Cell Nucleus , Humans
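As a concrete illustration of the nearest-neighbor graph construction over nuclei centroids described above, the sketch below uses SciPy's k-d tree. The choice of k and the Euclidean metric are assumptions made for the example, not values taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_knn_graph(centroids, k=5):
    """Build an undirected k-nearest-neighbor edge list over nuclei centroids."""
    tree = cKDTree(centroids)
    # Query k+1 neighbors because the closest point returned is the node itself.
    _, idx = tree.query(centroids, k=k + 1)
    edges = set()
    for i, neighbours in enumerate(idx):
        for j in neighbours[1:]:
            edges.add(tuple(sorted((i, int(j)))))   # store each undirected edge once
    return sorted(edges)

centroids = np.random.rand(100, 2) * 1000.0         # synthetic nuclei positions
edges = build_knn_graph(centroids, k=5)
```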
15.
Sci Rep ; 12(1): 4132, 2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35260715

ABSTRACT

This paper presents a deep learning-driven portable, accurate, low-cost, and easy-to-use device that performs Reverse-Transcription Loop-Mediated Isothermal Amplification (RT-LAMP) to facilitate rapid detection of COVID-19. The 3D-printed device, powered using only a 5 V AC-DC adapter, can perform 16 simultaneous RT-LAMP reactions and can be used multiple times. Moreover, the experimental protocol is devised to obviate the need for separate, expensive equipment for RNA extraction, in addition to eliminating sample evaporation. The entire process, from sample preparation to the qualitative assessment of the LAMP amplification, takes only 45 min (10 min for pre-heating and 35 min for the RT-LAMP reactions). The completion of the amplification reaction yields a fuchsia color for negative samples and either a yellow or orange color for positive samples, based on a pH indicator dye. The device is coupled with a novel deep learning system that automatically analyzes the amplification results and pays attention to the pH indicator dye to screen subjects for COVID-19. The proposed device has been rigorously tested on 250 RT-LAMP clinical samples, where it achieved an overall specificity and sensitivity of 0.9666 and 0.9722, respectively, with a recall of 0.9892 for Ct < 30. The proposed system can thus be widely used as an accurate, sensitive, rapid, and portable tool to detect COVID-19 in settings where access to a lab is difficult or results are urgently required.


Subject(s)
COVID-19/diagnosis , Deep Learning , Molecular Diagnostic Techniques/methods , Nucleic Acid Amplification Techniques/methods , SARS-CoV-2/genetics , Area Under Curve , COVID-19 Testing , Coloring Agents/chemistry , Humans , Molecular Diagnostic Techniques/instrumentation , Nasopharynx/virology , Nucleic Acid Amplification Techniques/instrumentation , Point-of-Care Systems , Printing, Three-Dimensional , RNA, Viral/analysis , RNA, Viral/metabolism , ROC Curve , SARS-CoV-2/isolation & purification , Sensitivity and Specificity
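For reference, the specificity, sensitivity, and recall figures quoted above follow the standard confusion-matrix definitions; a tiny helper with placeholder counts is shown below (the study's actual confusion matrix is not given here).

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Standard confusion-matrix definitions of the reported metrics."""
    sensitivity = tp / (tp + fn)   # true positive rate, same as recall
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# Placeholder counts, only to illustrate the formulas.
sens, spec = sensitivity_specificity(tp=70, fp=6, tn=172, fn=2)
```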
16.
Sensors (Basel) ; 22(4), 2022 Feb 21.
Article in English | MEDLINE | ID: mdl-35214568

ABSTRACT

Human beings tend to learn incrementally from a rapidly changing environment without compromising or forgetting already learned representations. Although deep learning also has the potential to mimic such human behaviors to some extent, it suffers from catastrophic forgetting, whereby its performance on already learned tasks drastically decreases as it learns new knowledge. Many researchers have proposed promising solutions to eliminate such catastrophic forgetting during the knowledge distillation process. However, to the best of our knowledge, there is no literature available to date that exploits the complex relationships between these solutions and utilizes them for effective learning that spans multiple datasets and even multiple domains. In this paper, we propose a continual learning objective that encompasses a mutual distillation loss to capture these complex relationships and allows deep learning models to effectively retain prior knowledge while adapting to new classes, new datasets, and even new applications. The proposed objective was rigorously tested on nine publicly available, multi-vendor, and multimodal datasets spanning three applications, and it achieved a top-1 accuracy of 0.9863 and an F1-score of 0.9930.


Subject(s)
Neural Networks, Computer , Humans
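To show what a distillation-style continual-learning objective generally looks like, here is a baseline sketch that mixes task cross-entropy with a temperature-softened KL term against a frozen teacher. The paper's mutual distillation loss is more elaborate, so treat this purely as background.

```python
import torch
import torch.nn.functional as F

def distillation_objective(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Generic knowledge-distillation objective (not the paper's mutual distillation loss)."""
    ce = F.cross_entropy(student_logits, targets)          # loss on the new task labels
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                            # match the old model's soft outputs
    return alpha * ce + (1.0 - alpha) * kd

student = torch.randn(8, 10)
teacher = torch.randn(8, 10)                               # e.g. frozen copy of the old model
targets = torch.randint(0, 10, (8,))
loss = distillation_objective(student, teacher, targets)
```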
17.
Comput Biol Med ; 136: 104727, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34385089

ABSTRACT

BACKGROUND: In anti-vascular endothelial growth factor (anti-VEGF) therapy, an accurate estimation of multi-class retinal fluid (MRF) is required for the activity prescription and intravitreal dose. This study proposes an end-to-end deep learning-based retinal fluids segmentation network (RFS-Net) to segment and recognize three MRF lesion manifestations, namely intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED), from multi-vendor optical coherence tomography (OCT) imagery. The proposed image analysis tool will optimize anti-VEGF therapy and contribute to reducing inter- and intra-observer variability. METHOD: The proposed RFS-Net architecture integrates atrous spatial pyramid pooling (ASPP), residual, and inception modules in the encoder path to learn better features and conserve more global information for precise segmentation and characterization of MRF lesions. The RFS-Net model is trained and validated using OCT scans from multiple vendors (Topcon, Cirrus, Spectralis), collected from three publicly available datasets. The first dataset, consisting of OCT volumes obtained from 112 subjects (a total of 11,334 B-scans), is used for both training and evaluation purposes. The remaining two datasets are used only for evaluation, to check the trained RFS-Net's generalizability on unseen OCT scans. The two evaluation datasets contain a total of 1572 OCT B-scans from 1255 subjects. The performance of the proposed RFS-Net model is assessed through various evaluation metrics. RESULTS: The proposed RFS-Net model achieved mean F1 scores of 0.762, 0.796, and 0.805 for segmenting IRF, SRF, and PED, respectively. Moreover, with the automated segmentation of the three retinal manifestations, RFS-Net brings a considerable gain in efficiency compared to the tedious and demanding manual segmentation of MRF. CONCLUSIONS: Our proposed RFS-Net is a potential diagnostic tool for the automatic segmentation of MRF (IRF, SRF, and PED) lesions. It is expected to strengthen inter-observer agreement, and standardization of dosimetry is envisaged as a result.


Subject(s)
Deep Learning , Tomography, Optical Coherence , Humans , Radionuclide Imaging , Retina/diagnostic imaging , Subretinal Fluid/diagnostic imaging
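For context on the encoder components named in the abstract, the following is a minimal atrous spatial pyramid pooling (ASPP) block in PyTorch; the dilation rates and channel sizes are illustrative and not necessarily those of RFS-Net.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal ASPP block: parallel dilated convolutions fused by a 1x1 projection."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates                                  # growing receptive fields
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]   # multi-scale context
        return self.project(torch.cat(feats, dim=1))        # fuse back to out_ch channels

aspp = ASPP(in_ch=256, out_ch=64)
out = aspp(torch.randn(1, 256, 32, 32))                     # -> (1, 64, 32, 32)
```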
18.
Comput Biol Med ; 134: 104435, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34010791

ABSTRACT

The human respiratory network is a vital system that provides oxygen supply and nourishment to the whole body. Pulmonary diseases can cause severe respiratory problems, leading to sudden death if not treated in a timely manner. Many researchers have utilized deep learning systems (in both transfer learning and fine-tuning modes) to diagnose pulmonary disorders using chest X-rays (CXRs). However, such systems require exhaustive training efforts on large-scale (and well-annotated) data to effectively diagnose chest abnormalities at the inference stage. Furthermore, procuring such large-scale data in a clinical setting is often infeasible and impractical, especially for rare diseases. With the recent advances in incremental learning, researchers have periodically tuned deep neural networks to learn different classification tasks with few training examples. Although such systems can resist catastrophic forgetting, they treat the knowledge representations (which the network learns periodically) independently of each other, and this limits their classification performance. Also, to the best of our knowledge, there is no incremental learning-driven image diagnostic framework to date that is specifically designed to screen pulmonary disorders from CXRs. To address this, we present a novel framework that can learn to screen different chest abnormalities incrementally (via few-shot training). In addition, the proposed framework is penalized through an incremental learning loss function that applies Bayes' theorem to recognize structural and semantic inter-dependencies between incrementally learned knowledge representations, allowing it to diagnose pulmonary diseases effectively at the inference stage, regardless of the scanner specifications. We tested the proposed framework on five public CXR datasets containing different chest abnormalities, where it achieved an accuracy of 0.8405 and an F1 score of 0.8303, outperforming various state-of-the-art incremental learning schemes. It also achieved a highly competitive performance compared to conventional fine-tuning (transfer learning) approaches while significantly reducing the training and computational requirements.


Subject(s)
Deep Learning , Lung Diseases , Bayes Theorem , Humans , Lung Diseases/diagnostic imaging , Neural Networks, Computer , Radiography
19.
IEEE J Biomed Health Inform ; 25(1): 108-120, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32224467

ABSTRACT

The identification of retinal lesions plays a vital role in accurately classifying and grading retinopathy. Many researchers have presented studies on optical coherence tomography (OCT)-based retinal image analysis in the past. However, to the best of our knowledge, no framework is yet available that can extract retinal lesions from multi-vendor OCT scans and utilize them for intuitive severity grading of the human retina. To address this gap, we propose a deep retinal analysis and grading framework (RAG-FW). RAG-FW is a hybrid convolutional framework that extracts multiple retinal lesions from OCT scans and utilizes them for lesion-influenced grading of retinopathy as per clinical standards. RAG-FW has been rigorously tested on 43,613 scans from five highly complex, publicly available, multi-vendor datasets, where it achieved a mean intersection-over-union score of 0.8055 for extracting retinal lesions and an accuracy of 98.70% for the correct severity grading of retinopathy.


Subject(s)
Retina , Retinal Diseases , Humans , Image Processing, Computer-Assisted , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging , Tomography, Optical Coherence
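The mean intersection-over-union score reported above is the standard multi-class segmentation metric; a toy NumPy implementation (not the authors' evaluation code) is shown below.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU between predicted and ground-truth label maps."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                                   # class absent from both masks
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 3, size=(64, 64))          # synthetic label maps
gt = np.random.randint(0, 3, size=(64, 64))
print(mean_iou(pred, gt, num_classes=3))
```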
20.
IEEE Trans Biomed Eng ; 68(7): 2140-2151, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33044925

ABSTRACT

OBJECTIVE: Glaucoma is the second leading cause of blindness worldwide. Glaucomatous progression can be easily monitored by analyzing the degeneration of retinal ganglion cells (RGCs). Many researchers have screened glaucoma by measuring cup-to-disc ratios from fundus and optical coherence tomography scans. In contrast, this paper presents a novel strategy that focuses on RGC atrophy for screening glaucomatous pathologies and grading their severity. METHODS: The proposed framework encompasses a hybrid convolutional network that extracts the retinal nerve fiber layer, the ganglion cell with inner plexiform layer, and the ganglion cell complex regions, thus allowing a quantitative screening of glaucomatous subjects. Furthermore, the severity of glaucoma in screened cases is objectively graded by analyzing the thickness of these regions. RESULTS: The proposed framework was rigorously tested on the publicly available Armed Forces Institute of Ophthalmology (AFIO) dataset, where it achieved an F1 score of 0.9577 for diagnosing glaucoma, a mean Dice coefficient of 0.8697 for extracting the RGC regions, and an accuracy of 0.9117 for grading glaucomatous progression. Furthermore, the performance of the proposed framework was clinically verified against the markings of four expert ophthalmologists, achieving a statistically significant Pearson correlation coefficient of 0.9236. CONCLUSION: An automated assessment of RGC degeneration yields better glaucomatous screening and grading compared to state-of-the-art solutions. SIGNIFICANCE: An RGC-aware system not only screens glaucoma but can also grade its severity; here we present an end-to-end solution that is thoroughly evaluated on a standardized dataset and clinically validated for analyzing glaucomatous pathologies.


Subject(s)
Deep Learning , Glaucoma , Diagnostic Techniques, Ophthalmological , Glaucoma/diagnostic imaging , Humans , Intraocular Pressure , Retinal Ganglion Cells , Tomography, Optical Coherence
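The Dice coefficient used above for evaluating RGC-region extraction is likewise a standard overlap metric; a generic binary-mask implementation is sketched below (a worked example, not the paper's evaluation code).

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example with synthetic masks.
a = np.random.rand(64, 64) > 0.5
b = np.random.rand(64, 64) > 0.5
print(dice_coefficient(a, b))
```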