Results 1 - 20 of 34
1.
Sensors (Basel) ; 22(4)2022 Feb 21.
Article in English | MEDLINE | ID: mdl-35214568

ABSTRACT

Human beings tend to learn incrementally from a rapidly changing environment without compromising or forgetting already learned representations. Although deep learning can mimic such human behavior to some extent, it suffers from catastrophic forgetting, whereby its performance on already learned tasks drops drastically while it acquires new knowledge. Many researchers have proposed promising solutions to eliminate such catastrophic forgetting during the knowledge distillation process. However, to the best of our knowledge, no literature to date exploits the complex relationships between these solutions and uses them for effective learning that spans multiple datasets and even multiple domains. In this paper, we propose a continual learning objective that incorporates a mutual distillation loss to capture these complex relationships and allows deep learning models to retain prior knowledge while adapting to new classes, new datasets, and even new applications. The proposed objective was rigorously tested on nine publicly available, multi-vendor, multimodal datasets spanning three applications, achieving a top-1 accuracy of 0.9863 and an F1 score of 0.9930.
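The distillation idea described above can be illustrated with a minimal sketch of a continual-learning objective: a cross-entropy term for the new task plus a KL term that keeps the adapting model close to a frozen copy trained on earlier data. This is a generic sketch, not the paper's exact loss; the function name, temperature, and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def continual_distillation_loss(student_logits, teacher_logits, targets,
                                temperature=2.0, alpha=0.5):
    """Cross-entropy on the new task plus a KL term that preserves the
    frozen (old) model's predictions. alpha and temperature are
    illustrative hyper-parameters, not values from the paper."""
    ce = F.cross_entropy(student_logits, targets)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return (1 - alpha) * ce + alpha * kd

# Usage idea: teacher_logits come from a frozen copy of the previously
# trained model; the student is the model being adapted to new data.
```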


Subject(s)
Neural Networks, Computer; Humans
2.
Sensors (Basel) ; 20(22)2020 Nov 12.
Article in English | MEDLINE | ID: mdl-33198071

ABSTRACT

Screening baggage for potential threats has become one of the prime aviation security concerns worldwide, and manual detection of prohibited items is a time-consuming and hectic process. Many researchers have developed autonomous systems to recognize baggage threats using security X-ray scans. However, these frameworks remain vulnerable when screening cluttered and concealed contraband items. Furthermore, to the best of our knowledge, no framework can recognize baggage threats across multiple scanner specifications without an explicit retraining process. To overcome this, we present a novel meta-transfer learning-driven tensor-shot detector that decomposes the candidate scan into dual-energy tensors and employs a meta-one-shot classification backbone to recognize and localize cluttered baggage threats. In addition, the proposed detection framework generalizes well to multiple scanner specifications because it generates object proposals from the unified tensor maps rather than from diversified raw scans. We rigorously evaluated the proposed tensor-shot detector on the publicly available SIXray and GDXray datasets (containing a cumulative total of 1,067,381 grayscale and colored baggage X-ray scans). On the SIXray dataset, the proposed framework achieved a mean average precision (mAP) of 0.6457, and on the GDXray dataset it achieved a precision of 0.9441 and an F1 score of 0.9598. Furthermore, it outperforms state-of-the-art frameworks by 8.03% in mAP on SIXray and by 1.49% in precision and 0.573% in F1 score on GDXray.
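A hedged sketch of the one-shot classification step is shown below: a query embedding (for example, derived from the tensor maps) is matched against per-class support prototypes by cosine similarity. The embeddings, class names, and matching rule are illustrative assumptions rather than the paper's actual backbone.

```python
import numpy as np

def one_shot_classify(query_embedding, support_embeddings, support_labels):
    """Assign the query to the class whose single-shot prototype is closest
    in cosine similarity. support_embeddings has shape (n_classes, d)."""
    q = query_embedding / np.linalg.norm(query_embedding)
    protos = support_embeddings / np.linalg.norm(support_embeddings, axis=1, keepdims=True)
    sims = protos @ q
    return support_labels[int(np.argmax(sims))]

# Example with random embeddings standing in for tensor-map features.
rng = np.random.default_rng(0)
support = rng.normal(size=(3, 128))           # one exemplar per threat class
labels = np.array(["gun", "knife", "benign"])
query = support[1] + 0.05 * rng.normal(size=128)
print(one_shot_classify(query, support, labels))   # expected: "knife"
```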

3.
Sensors (Basel) ; 19(13)2019 Jul 05.
Article in English | MEDLINE | ID: mdl-31284442

ABSTRACT

Macular edema (ME) is a retinal condition in which the patient's central vision is affected. ME leads to the accumulation of fluid in the surrounding macular region, resulting in a swollen macula. Optical coherence tomography (OCT) and fundus photography are the two most widely used retinal examination techniques that can effectively detect ME. Many researchers have used retinal fundus and OCT imaging to detect ME. However, to the best of our knowledge, no work in the literature fuses the findings from both retinal imaging modalities for a more effective and reliable diagnosis of ME. In this paper, we propose an automated framework for classifying ME and healthy eyes using retinal fundus and OCT scans. The proposed framework is based on deep ensemble learning, where the input fundus and OCT scans are first recognized by a deep convolutional neural network (CNN) and processed accordingly. The processed scans are then passed to the second layer of the deep CNN model, which extracts the required feature descriptors from both images. The extracted descriptors are concatenated and passed to a supervised hybrid classifier built as an ensemble of artificial neural networks, support vector machines, and naïve Bayes. The proposed framework was trained on 73,791 retinal scans and validated on 5,100 scans from the publicly available Zhang and Rabbani datasets. It achieved an accuracy of 94.33% for distinguishing ME from healthy subjects, and mean dice coefficients of 0.9019 ± 0.04 for extracting retinal fluids, 0.7069 ± 0.11 for extracting hard exudates, and 0.8203 ± 0.03 for extracting retinal blood vessels against the clinical markings.
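The fusion-and-ensemble stage described above can be sketched as follows: CNN-derived descriptors from the fundus and OCT scans are concatenated and fed to a soft-voting ensemble of a neural network, an SVM, and naïve Bayes. The random features and hyper-parameters are placeholders; only the overall structure mirrors the abstract.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-ins for CNN feature descriptors extracted from paired scans.
rng = np.random.default_rng(42)
fundus_feats = rng.normal(size=(200, 64))
oct_feats = rng.normal(size=(200, 64))
X = np.concatenate([fundus_feats, oct_feats], axis=1)   # fused descriptor
y = rng.integers(0, 2, size=200)                        # 0 = healthy, 1 = ME

ensemble = VotingClassifier(
    estimators=[
        ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
        ("svm", SVC(kernel="rbf", probability=True)),
        ("nb", GaussianNB()),
    ],
    voting="soft",   # average predicted class probabilities
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```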


Subject(s)
Diagnostic Techniques, Ophthalmological; Image Processing, Computer-Assisted/methods; Macular Edema/diagnostic imaging; Retina/diagnostic imaging; Bayes Theorem; Databases, Factual; Deep Learning; Fundus Oculi; Humans; Neural Networks, Computer; Photography/methods; Retina/pathology; Support Vector Machine; Tomography, Optical Coherence/methods
5.
J Digit Imaging ; 31(4): 464-476, 2018 08.
Article in English | MEDLINE | ID: mdl-29204763

ABSTRACT

Age-related macular degeneration (ARMD) is one of the most common retinal syndromes in elderly people. Different eye examination techniques, such as fundus photography and optical coherence tomography (OCT), are used to clinically examine ARMD-affected patients. Many researchers have worked on detecting ARMD from fundus images, and a few have also worked on detecting ARMD from OCT images. However, only a few systems establish a correspondence between fundus and OCT images to give an accurate prediction of ARMD pathology. In this paper, we present a fully automated decision support system that detects ARMD by establishing correspondence between OCT and fundus imagery. The proposed system also distinguishes between early, suspect, and confirmed ARMD by correlating OCT B-scans with the respective region of the fundus image. In the first phase, the proposed system uses different B-scan-based features along with a support vector machine (SVM) to detect the presence of drusen and classify the case as ARMD or normal. If the input OCT scan is classified as ARMD, the region of interest from the corresponding fundus image is considered for further evaluation. The fundus image is analyzed using contrast enhancement and adaptive thresholding to detect possible drusen, and the proposed system finally classifies the case as early-stage or advanced-stage ARMD. The proposed system was tested on a local dataset of 100 patients with 100 fundus images and 6,800 OCT B-scans. It detects ARMD with accuracy, sensitivity, and specificity of 98.0%, 100%, and 97.14%, respectively.
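The fundus-analysis step (contrast enhancement followed by adaptive thresholding) might look like the following scikit-image sketch; the block size, minimum area, and any rule mapping candidate counts to a disease stage are illustrative assumptions, not the paper's tuned pipeline.

```python
from skimage import exposure, filters, measure

def drusen_candidates(green_channel, block_size=51, min_area=20):
    """Contrast-enhance a fundus ROI (green channel, uint8 or float in
    [0, 1]) and apply local (adaptive) thresholding to flag bright
    drusen-like blobs. Parameters are illustrative, not tuned values."""
    enhanced = exposure.equalize_adapthist(green_channel)   # CLAHE
    local_thr = filters.threshold_local(enhanced, block_size)
    mask = enhanced > local_thr
    labeled = measure.label(mask)
    regions = [r for r in measure.regionprops(labeled) if r.area >= min_area]
    return mask, regions

# A larger count or total area of candidate regions would push the case
# toward the advanced-stage label in a rule- or SVM-based decision step.
```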


Asunto(s)
Fondo de Ojo , Degeneración Macular/diagnóstico por imagen , Degeneración Macular/patología , Máquina de Vectores de Soporte , Tomografía de Coherencia Óptica/métodos , Anciano , Sistemas de Apoyo a Decisiones Clínicas , Femenino , Humanos , Masculino , Persona de Mediana Edad , Índice de Severidad de la Enfermedad
6.
J Med Syst ; 42(11): 223, 2018 Oct 04.
Article in English | MEDLINE | ID: mdl-30284052

ABSTRACT

Maculopathy refers to a group of diseases that affect a person's central vision and are often associated with diabetes. Many researchers have reported automated diagnosis of maculopathy from optical coherence tomography (OCT) images. However, to the best of our knowledge, no literature presents a complete 3D suite for both the extraction and the diagnosis of the macula. Therefore, this paper presents a fully autonomous system, based on multilayered convolutional neural networks (CNNs), structure tensors, Delaunay triangulation, and morphing, that extracts up to nine retinal and choroidal layers along with the macular fluids. Furthermore, the proposed system uses the extracted retinal information for the automated diagnosis of maculopathy as well as for robust 3D reconstruction of the macula. The proposed system was validated on 41,921 retinal OCT scans acquired from different OCT machines and significantly outperformed existing state-of-the-art solutions, achieving a mean accuracy of 95.27% for extracting retinal and choroidal layers, a mean dice coefficient of 0.90 for extracting fluid pathology, and an overall accuracy of 96.07% for maculopathy diagnosis. To the best of our knowledge, the proposed framework is the first of its kind to provide a fully automated, complete, and integrated 3D solution for extracting the candidate macula and diagnosing it against different macular syndromes.
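For the 3D reconstruction part, a minimal sketch of how segmented layer-boundary points can be turned into a triangulated surface with Delaunay triangulation is given below; the toy boundary data and grid spacing are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical layer-boundary samples: for each (x, y) position across the
# B-scan stack, z is the segmented boundary depth of one retinal layer.
rng = np.random.default_rng(1)
x, y = np.meshgrid(np.arange(0, 50, 2), np.arange(0, 50, 2))
z = 5 * np.sin(x / 10.0) + 0.2 * rng.normal(size=x.shape)   # toy surface
points_2d = np.column_stack([x.ravel(), y.ravel()])

tri = Delaunay(points_2d)                # triangulate in the (x, y) plane
surface = (points_2d, z.ravel(), tri.simplices)  # vertices + faces of the mesh
print(tri.simplices.shape)               # (n_triangles, 3) vertex indices
```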


Subject(s)
Neural Networks, Computer; Retinal Diseases/diagnosis; Humans; Research Design; Retina; Tomography, Optical Coherence
7.
J Opt Soc Am A Opt Image Sci Vis ; 33(4): 455-63, 2016 04 01.
Article in English | MEDLINE | ID: mdl-27140751

ABSTRACT

Macular edema (ME) and central serous retinopathy (CSR) are two macular diseases that affect a person's central vision if left untreated. Optical coherence tomography (OCT) imaging is the latest eye examination technique; it shows a cross-sectional view of the retinal layers and can be used to detect many retinal disorders at an early stage. Many researchers have conducted clinical studies on ME and CSR and reported significant findings in macular OCT scans, but this paper proposes an automated method for classifying ME and CSR from OCT images using a support vector machine (SVM) classifier. Five distinct features (three based on the thickness profiles of the sub-retinal layers and two based on cyst fluids within the sub-retinal layers) are extracted from 30 labeled images (10 ME, 10 CSR, and 10 healthy), and the SVM is trained on them. We applied the proposed algorithm to 90 time-domain OCT (TD-OCT) images (30 ME, 30 CSR, 30 healthy) of 73 patients. The algorithm correctly classified 88 of the 90 subjects, with accuracy, sensitivity, and specificity of 97.77%, 100%, and 93.33%, respectively.
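A minimal sketch of the classification stage is given below, assuming a 5-dimensional feature vector per scan (three thickness-profile features and two cyst-fluid features) fed to an SVM with cross-validation; the synthetic features and hyper-parameters are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row is a 5-dimensional descriptor per OCT scan, e.g.
# [mean thickness, max thickness, thickness std, cyst area, cyst count].
rng = np.random.default_rng(7)
X = rng.normal(size=(90, 5))                 # stand-in feature vectors
y = rng.integers(0, 3, size=90)              # 0 = healthy, 1 = ME, 2 = CSR

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```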


Subject(s)
Central Serous Chorioretinopathy/diagnostic imaging; Image Processing, Computer-Assisted/methods; Macular Edema/diagnostic imaging; Tomography, Optical Coherence; Adult; Algorithms; Automation; Case-Control Studies; Female; Humans; Male
8.
Appl Opt ; 55(3): 454-61, 2016 Jan 20.
Article in English | MEDLINE | ID: mdl-26835917

ABSTRACT

Macular edema (ME) is considered one of the major indications of proliferative diabetic retinopathy and is commonly caused by diabetes. ME causes retinal swelling due to the accumulation of protein deposits within subretinal layers. Optical coherence tomography (OCT) imaging provides early detection of ME by showing a cross-sectional view of macular pathology. Many researchers have worked on automated identification of macular edema from fundus images, whereas this paper proposes a fully automated method for extracting and analyzing subretinal layers from OCT images using coherent tensors. These subretinal layers are then used to predict ME in candidate images using a support vector machine (SVM) classifier. A total of 71 OCT images of 64 patients were collected locally, of whom 15 have ME and 49 are healthy. The proposed system has an overall accuracy of 97.78% in correctly classifying ME patients and healthy persons. We also tested the proposed implementation on spectral-domain OCT (SD-OCT) images of the Duke dataset, consisting of 109 images from 10 patients, and it correctly classified all healthy and ME images in the dataset.


Asunto(s)
Procesamiento de Imagen Asistido por Computador , Edema Macular/diagnóstico , Retina/patología , Anciano , Automatización , Coroides/patología , Humanos , Persona de Mediana Edad , Reproducibilidad de los Resultados , Máquina de Vectores de Soporte , Tomografía de Coherencia Óptica
9.
IEEE J Biomed Health Inform ; 28(2): 952-963, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37999960

ABSTRACT

Early-stage cancer diagnosis potentially improves the chances of survival for many cancer patients worldwide. Manual examination of whole slide images (WSIs) is a time-consuming task for analyzing the tumor microenvironment. To overcome this limitation, combining deep learning with computational pathology has been proposed to assist pathologists in efficiently prognosing cancerous spread. Nevertheless, existing deep learning methods are ill-equipped to handle fine-grained histopathology datasets: these models are constrained by the conventional softmax loss function, which does not drive them to learn distinct representational embeddings for similarly textured WSIs with an imbalanced data distribution. To address this problem, we propose a novel center-focused affinity loss (CFAL) function that 1) constructs uniformly distributed class prototypes in the feature space, 2) penalizes difficult samples, 3) minimizes intra-class variations, and 4) places greater emphasis on learning minority-class features. We evaluated the proposed CFAL loss function on two publicly available breast and colon cancer datasets with varying levels of class imbalance. The proposed CFAL function shows better discrimination ability than popular loss functions such as ArcFace, CosFace, and focal loss. Moreover, it outperforms several state-of-the-art methods for histology image classification across both datasets.
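A hedged PyTorch sketch of a prototype-based loss in the spirit of CFAL is given below: fixed unit-norm class prototypes, cosine-similarity logits, focal-style emphasis on difficult samples, a pull term for intra-class compactness, and per-class weights for minority classes. The scale factor, gamma, and weighting are assumptions; this is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

class PrototypeAffinityLoss(torch.nn.Module):
    """Illustrative prototype-based loss: unit-norm class prototypes,
    scaled cosine-similarity logits, focal-style emphasis on hard samples,
    an intra-class pull term, and per-class (minority) weights."""

    def __init__(self, num_classes, feat_dim, class_weights, gamma=2.0):
        super().__init__()
        protos = torch.randn(num_classes, feat_dim)
        self.register_buffer("prototypes", F.normalize(protos, dim=1))
        self.register_buffer("class_weights", class_weights)  # tensor, one weight per class
        self.gamma = gamma

    def forward(self, features, targets):
        feats = F.normalize(features, dim=1)
        logits = feats @ self.prototypes.t() * 10.0            # scaled cosine similarity
        log_p = F.log_softmax(logits, dim=1)
        log_p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
        focal = (1.0 - log_p_t.exp()) ** self.gamma            # emphasize difficult samples
        pull = (feats - self.prototypes[targets]).pow(2).sum(1)  # intra-class variation
        w = self.class_weights[targets]                         # minority-class emphasis
        return (w * (focal * (-log_p_t) + 0.1 * pull)).mean()
```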


Subject(s)
Breast; Neoplasms; Humans; Breast/diagnostic imaging; Histological Techniques; Tumor Microenvironment; Neoplasms/diagnostic imaging
10.
IEEE Trans Image Process ; 33: 241-256, 2024.
Article in English | MEDLINE | ID: mdl-38064329

ABSTRACT

Accurate classification of nuclei communities is an important step toward timely treatment of cancer spread. Graph theory provides an elegant way to represent and analyze nuclei communities within the histopathological landscape for tissue phenotyping and tumor profiling. Many researchers have worked on recognizing nuclei regions within histology images in order to grade cancerous progression. However, because of the high structural similarity between nuclei communities, defining a model that can accurately differentiate between nuclei pathological patterns remains an open problem. To surmount this challenge, we present a novel approach, dubbed neural graph refinement, that enhances the ability of existing models to perform nuclei recognition by employing graph representational learning and broadcasting processes. Based on the physical interaction of the nuclei, we first construct a fully connected graph in which nodes represent nuclei and adjacent nodes are connected via undirected edges. For each edge and node pair, appearance and geometric features are computed and then used to generate neural graph embeddings. These embeddings diffuse contextual information to neighboring nodes along paths traversing the whole graph, inferring global information over the entire nuclei network and predicting pathologically meaningful communities. Through rigorous evaluation of the proposed scheme across four public datasets, we show that learning such communities through neural graph refinement produces results that outperform state-of-the-art methods.
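One round of the context-diffusion idea can be sketched as a message-passing step over a fully connected graph, where edge-conditioned messages are aggregated into each node embedding; the feature sizes, weight matrices, and update rule below are illustrative assumptions rather than the published model.

```python
import numpy as np

def message_passing_step(node_feats, edge_feats, W_node, W_edge):
    """One round of context diffusion over a fully connected nuclei graph:
    each node aggregates edge-conditioned messages from all neighbours and
    mixes them with its own embedding. Shapes: node_feats (n, d),
    edge_feats (n, n, d), weights (d, d). Purely illustrative."""
    messages = np.einsum("ijd,de->ije", edge_feats, W_edge)   # transform edge features
    aggregated = messages.mean(axis=1)                        # average over neighbours
    return np.tanh(node_feats @ W_node + aggregated)

rng = np.random.default_rng(3)
n, d = 6, 16                                  # 6 nuclei, 16-d appearance/geometry features
nodes = rng.normal(size=(n, d))
edges = rng.normal(size=(n, n, d))
W_n, W_e = rng.normal(size=(d, d)), rng.normal(size=(d, d))
print(message_passing_step(nodes, edges, W_n, W_e).shape)     # (6, 16)
```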


Subject(s)
Cell Nucleus; Learning; Histological Techniques
11.
Heliyon ; 10(12): e32500, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38994043

ABSTRACT

As the population of Somaliland continues to grow rapidly, the demand for electricity is anticipated to rise exponentially over the next few decades. The provision of reliable and cost-effective electricity service is at the core of Somaliland's economic and social development, and wind energy might offer a sustainable solution to its exceptionally high electricity prices. In this study, a techno-economic assessment of the wind energy potential in parts of the western region of Somaliland is performed. Measured wind speed and wind direction data for three sites around the capital city of Hargeisa are used to characterize the resource with Weibull distribution functions, and the technical and economic performance of several commercial wind turbines is examined. Of the three sites, Xumba Weyne stands out as the most favorable for wind energy harnessing, with average annual power and energy densities at 80 m hub height of 317 W/m² and 2782 kWh/m², respectively. Wind turbines installed at Xumba Weyne yielded the lowest levelized cost of electricity (LCOE), no more than 0.07 $/kWh, the shortest payback times (less than 7.2 years), and a minimum return on investment (ROI) of approximately 150%.
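A worked sketch of the underlying arithmetic is given below, under assumed air density, discount rate, and turbine lifetime: the Weibull parameters yield a mean power density, the reported ~317 W/m² is consistent with the ~2782 kWh/m² annual energy density, and LCOE follows from a standard capital-recovery-factor formulation.

```python
from math import gamma

RHO = 1.225                       # air density, kg/m^3 (sea-level assumption)

def weibull_power_density(k, c):
    """Mean wind power density (W/m^2) for a Weibull(k, c) wind speed."""
    return 0.5 * RHO * c**3 * gamma(1 + 3 / k)

# A power density of ~0.317 kW/m^2 sustained over a year gives
# 0.317 * 8760 ≈ 2777 kWh/m^2, consistent with the reported 2782 kWh/m^2.
print(round(0.317 * 8760))        # ~2777 kWh/m^2

def lcoe(capex, annual_opex, annual_energy_kwh, rate=0.08, lifetime=20):
    """Levelized cost of electricity ($/kWh) via a capital recovery factor.
    Discount rate and lifetime here are illustrative assumptions."""
    crf = rate * (1 + rate) ** lifetime / ((1 + rate) ** lifetime - 1)
    return (capex * crf + annual_opex) / annual_energy_kwh
```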

12.
Cureus ; 16(5): e59768, 2024 May.
Article in English | MEDLINE | ID: mdl-38846243

ABSTRACT

Cerebrovascular accidents (CVAs) often occur suddenly and abruptly, leaving patients with long-lasting disabilities that place a huge emotional and economic burden on everyone involved. CVAs result when emboli or thrombi travel to the brain and impede blood flow; the subsequent lack of oxygen supply leads to ischemia and eventually tissue infarction. The most important factor determining the prognosis of CVA patients is time, specifically the time from disease onset to treatment. Artificial intelligence (AI)-assisted neuroimaging alleviates the analysis time constraints of traditional diagnostic imaging modalities, thus shortening the time from diagnosis to treatment. Numerous recent studies support the increased accuracy and processing capabilities of AI-assisted imaging modalities. However, the learning curve is steep, and substantial barriers still prevent full-scale implementation of this technology. Thus, the potential for AI to revolutionize medicine and healthcare delivery demands attention. This paper aims to elucidate the progress of AI-powered imaging in CVA diagnosis in light of traditional imaging techniques and to suggest methods for overcoming adoption barriers, in the hope that AI-assisted neuroimaging will become normal practice in the near future. There are multiple modalities for AI neuroimaging, all of which require collecting sufficient data to establish inclusive, accurate, and uniform detection platforms. Future efforts must focus on developing methods for data harmonization and standardization. Furthermore, transparency and explainability of these technologies need to be established to build trust between physicians and AI-powered technology. This requires considerable resources, both financial and in expertise, which are not available everywhere.

13.
J Neural Eng ; 21(1)2024 02 01.
Article in English | MEDLINE | ID: mdl-38237175

ABSTRACT

Peripheral nerve interfaces (PNIs) are electrical systems designed to integrate with peripheral nerves, for example following central nervous system (CNS) injuries, to augment or replace CNS control and restore function. We review the literature for clinical trials and studies containing clinical outcome measures to explore the utility of PNIs in human applications. We discuss the various types of electrodes currently used in PNI systems, along with their functionalities and limitations. We discuss important design characteristics of PNI systems, including biocompatibility, resolution and specificity, efficacy, and longevity, to highlight their importance in the current and future development of PNIs, and we also discuss the clinical outcomes of PNI systems. Finally, we review relevant PNI clinical trials conducted to date to restore sensory and motor function of the upper or lower limbs in amputees, spinal cord injury patients, or intact individuals, and describe their significant findings. This review highlights current progress in the field of PNIs and serves as a foundation for the future development and application of PNI systems.


Subject(s)
Amputees; Peripheral Nerves; Humans; Amputation, Surgical; Electrodes; Paralysis/surgery
14.
Brain Inform ; 10(1): 25, 2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37689601

ABSTRACT

Early identification of mental disorders based on subjective interviews is extremely challenging in the clinical setting. There is growing interest in developing automated screening tools for potential mental health problems based on biological markers. Here, we demonstrate the feasibility of AI-powered diagnosis of different mental disorders using EEG data. Specifically, this work aims to accurately classify different mental disorders in the following ecological context: (1) using raw EEG data, (2) collected during rest, (3) under both eyes-open and eyes-closed conditions, (4) at a short 2-min duration, (5) from participants with different psychiatric conditions, (6) with some overlapping symptoms, and (7) with strongly imbalanced classes. To tackle this challenge, we designed and optimized a transformer-based architecture in which class imbalance is addressed through focal loss and class-weight balancing. Using the recently released TDBRAIN dataset (n = 1274 participants), our method classifies each participant as either neurotypical or suffering from major depressive disorder (MDD), attention deficit hyperactivity disorder (ADHD), subjective memory complaints (SMC), or obsessive-compulsive disorder (OCD). We evaluate the performance of the proposed architecture at both the window level and the patient level. Classification of the 2-min raw EEG data into five classes achieved window-level accuracies of 63.2% and 65.8% for the eyes-open and eyes-closed conditions, respectively. When the classification is limited to the three main classes (MDD, ADHD, SMC), window-level accuracy improved to 75.1% and 69.9% for the eyes-open and eyes-closed conditions, respectively. Our work paves the way for novel AI-based methods for accurately diagnosing mental disorders using raw resting-state EEG data.
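Handling class imbalance with focal loss and class-weight balancing, as described above, can be sketched as follows; the gamma value and the example weights are assumptions, not the study's settings.

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, class_weights, gamma=2.0):
    """Focal loss with per-class weights: easy samples are down-weighted
    by (1 - p_t)^gamma and minority classes get larger weights."""
    log_p = F.log_softmax(logits, dim=1)
    log_p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_t = log_p_t.exp()
    weights = class_weights[targets]
    return (-weights * (1.0 - p_t) ** gamma * log_p_t).mean()

# Example: 5 diagnostic classes with illustrative inverse-frequency weights.
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
class_weights = torch.tensor([0.5, 1.0, 1.2, 2.0, 3.0])
print(weighted_focal_loss(logits, targets, class_weights))
```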

15.
Biomed Signal Process Control ; 85: 104855, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36987448

ABSTRACT

Chest X-rays (CXRs) are the most commonly used imaging modality in radiology for diagnosing pulmonary diseases, with close to 2 billion CXRs taken every year. The recent upsurge of COVID-19 and its variants, accompanied by pneumonia and tuberculosis, can be fatal in some cases, and lives could be saved through early detection and appropriate intervention in advanced cases. Thus, CXRs can be used for automated severity grading of pulmonary diseases to aid radiologists in making better and more informed diagnoses. In this article, we propose a single framework for disease classification and severity scoring produced by segmenting the lungs into six regions. We present a modified progressive learning technique in which the amount of augmentation at each step is capped. The base network in our framework is first trained using modified progressive learning and can then be tweaked for new datasets. Furthermore, the segmentation task makes use of an attention map generated within, and by, the network itself. This attention mechanism achieves segmentation results on par with networks having an order of magnitude or more parameters. We also propose severity score grading for four thoracic diseases, providing a single-digit score corresponding to the spread of opacity in different lung segments, developed with the help of radiologists. The proposed framework is evaluated on the BRAX dataset for segmentation and classification into six classes, with severity grading for a subset of the classes. On the BRAX validation set, we achieve F1 scores of 0.924 and 0.939 without and with fine-tuning, respectively. A mean matching score of 80.8% is obtained for severity grading, and an average area under the receiver operating characteristic curve of 0.88 is achieved for classification.
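The capped progressive-learning idea can be sketched as a schedule in which augmentation strength grows with training progress but saturates at a cap; the specific start, end, and cap values below are illustrative assumptions.

```python
def augmentation_magnitude(step, total_steps, start=0.1, end=0.8, cap=0.5):
    """Progressive schedule: augmentation strength grows linearly with
    training progress but is capped, mirroring the idea of limiting the
    amount of augmentation at each step. Values are illustrative."""
    progress = step / max(total_steps - 1, 1)
    return min(start + (end - start) * progress, cap)

# Strength ramps up from 0.1 and saturates at the cap of 0.5.
print([round(augmentation_magnitude(s, 10), 2) for s in range(10)])
```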

16.
Proc (Bayl Univ Med Cent) ; 36(6): 722-727, 2023.
Article in English | MEDLINE | ID: mdl-37829212

ABSTRACT

Purpose: To compare the lobbying expenditures and political action committee (PAC) campaign finance activities of the American Academy of Ophthalmology (AAO), American Society of Cataract and Refractive Surgery (ASCRS), and American Optometric Association (AOA) from 2015 to 2022. Methods: Financial data were collected from the Federal Election Commission and OpenSecrets database. Analysis was performed to characterize and compare financial activity among the organizations. P < 0.05 was considered significant and all analyses were two-sided. Results: From 2015 to 2022, the AAO, ASCRS, and AOA spent $6,745,000, $5,354,406, and $13,335,000 on lobbying, respectively. The AOA's annual lobbying expenditure (median, $1,725,000) was significantly greater than AAO's ($842,500, P = 0.03) and ASCRS's ($694,289, P < 0.001). In PAC donations, OPHTHPAC, affiliated with AAO, received $3,221,737 from 2079 donors (median, $900); eyePAC, affiliated with ASCRS, received $506,255 from 349 donors ($500); and AOA-PAC received $6,642,588 from 3641 donors ($825). Compared to eyePAC, median donations to OPHTHPAC (P = 0.01) and AOA-PAC (P = 0.04) were significantly higher. In campaign spending, OPHTHPAC contributed $2,728,500 to 326 campaigns (median, $5000), eyePAC contributed $293,500 to 58 campaigns ($3000), and AOA-PAC contributed $5,128,673 to 617 campaigns ($5500). eyePAC's median campaign contribution was significantly lower than the AOA's (P < 0.001) and AAO's (P = 0.007). Every PAC directed most of its contributions toward Republican campaigns; eyePAC donated the highest proportion (64.9%). Conclusions: AOA was more assertive in shaping policy by increasing lobbying expenditures, fundraising, and donating to a greater number of election campaigns.

17.
Sci Rep ; 13(1): 22885, 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38129680

ABSTRACT

Tomatoes are a major crop worldwide, and accurately classifying their maturity is important for many agricultural applications, such as harvesting, grading, and quality control. In this paper, the authors propose a novel method for tomato maturity classification using a convolutional transformer, a hybrid architecture that combines the strengths of convolutional neural networks (CNNs) and transformers. Additionally, this study introduces a new tomato dataset named KUTomaData, explicitly designed to train deep-learning models for tomato segmentation and classification. KUTomaData is a compilation of images sourced from a greenhouse in the UAE, with approximately 700 images available for training and testing. The dataset was captured under various lighting conditions and viewing perspectives, using different mobile camera sensors, distinguishing it from existing datasets. The contributions of this paper are threefold: first, the authors propose a novel method for tomato maturity classification using a modular convolutional transformer; second, they introduce a new tomato image dataset containing images of tomatoes at different maturity levels; and third, they show that the convolutional transformer outperforms state-of-the-art methods for tomato maturity classification. The effectiveness of the proposed framework in handling cluttered and occluded tomato instances was evaluated using two additional public datasets, Laboro Tomato and Rob2Pheno Annotated Tomato, as benchmarks. The evaluation results across these three datasets demonstrate the exceptional performance of the proposed framework, surpassing the state of the art by 58.14%, 65.42%, and 66.39% in mean average precision for KUTomaData, Laboro Tomato, and Rob2Pheno Annotated Tomato, respectively. This work can potentially improve the efficiency and accuracy of tomato harvesting, grading, and quality control processes.
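A minimal PyTorch sketch of a convolution-plus-transformer hybrid of the kind described above is shown below: a small convolutional stem produces patch tokens, a transformer encoder models global context, and a linear head predicts the maturity class. The depths, widths, and head are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConvTransformerClassifier(nn.Module):
    """Minimal CNN-stem + transformer-encoder hybrid for image
    classification; all dimensions are illustrative."""

    def __init__(self, num_classes=3, dim=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=7, stride=4, padding=3), nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1), nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                              # x: (B, 3, H, W)
        tokens = self.stem(x)                          # (B, dim, H/8, W/8)
        tokens = tokens.flatten(2).transpose(1, 2)     # (B, N, dim) patch tokens
        encoded = self.encoder(tokens)                 # global context via attention
        return self.head(encoded.mean(dim=1))          # pool tokens, classify

model = ConvTransformerClassifier()
print(model(torch.randn(2, 3, 128, 128)).shape)        # torch.Size([2, 3])
```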

18.
J Neurosurg Case Lessons ; 6(26)2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38145561

ABSTRACT

BACKGROUND: Cancer-related or postoperative pain can occur following sacral chordoma resection. Despite a lack of current recommendations for cancer pain treatment, spinal cord stimulation (SCS) has demonstrated effectiveness in addressing cancer-related pain. OBSERVATIONS: A 76-year-old female with a sacral chordoma underwent anterior osteotomies and partial en bloc sacrectomy. She subsequently presented with chronic pain affecting both buttocks and posterior thighs and legs, significantly impeding her daily activities. She underwent a staged epidural SCS paddle trial and permanent system placement using intraoperative neuromonitoring. The utilization of percutaneous leads was not viable because of her history of spinal fluid leakage, multiple lumbosacral surgeries, and previous complex plastic surgery closure. The patient reported a 62.5% improvement in her lower-extremity pain per the modified Quadruple Visual Analog Scale and a 50% improvement in the modified Pain and Sleep Questionnaire 3-item index during the SCS trial. Following permanent SCS system placement and removal of her externalized lead extenders, she had an uncomplicated postoperative course and reported notable improvements in her pain symptoms. LESSONS: This case provides a compelling illustration of the successful treatment of chronic pain using SCS following radical sacral chordoma resection. Surgeons may consider this treatment approach in patients presenting with refractory pain following spinal tumor resection.

19.
J Neurosurg Case Lessons ; 5(26)2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37399140

ABSTRACT

BACKGROUND: Schwannomas are common peripheral nerve sheath tumors. Imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT) can help to distinguish schwannomas from other types of lesions. However, there have been several reported cases describing the misdiagnosis of aneurysms as schwannomas. OBSERVATIONS: A 70-year-old male with ongoing pain despite spinal fusion surgery underwent MRI. A lesion was noted along the left sciatic nerve, which was believed to be a sciatic nerve schwannoma. During the surgery for planned neurolysis and tumor resection, the lesion was noted to be pulsatile. Electromyography mapping and intraoperative ultrasound confirmed vascular pulsations and turbulent flow within the aneurysm, so the surgery was aborted. A formal CT angiogram revealed the lesion to be an internal iliac artery (IIA) branch aneurysm. The patient underwent coil embolization with complete obliteration of the aneurysm. LESSONS: The authors report the first case of an IIA aneurysm misdiagnosed as a sciatic nerve schwannoma. Surgeons should be aware of this potential misdiagnosis and potentially use other imaging modalities to confirm the lesion before proceeding with surgery.

20.
Cancers (Basel) ; 15(17)2023 Aug 27.
Article in English | MEDLINE | ID: mdl-37686561

ABSTRACT

BACKGROUND: The outcomes of orbital exenteration (OE) in patients with craniofacial lesions (CFLs) remain unclear. The present review summarizes the available literature on the clinical outcomes of OE, including surgical outcomes and overall survival (OS). METHODS: Relevant articles were retrieved from Medline, Scopus, and Cochrane according to PRISMA guidelines. A systematic review and meta-analysis were conducted on the clinical characteristics, management, and outcomes. RESULTS: A total of 33 articles containing 957 patients who underwent OE for CFLs were included (weighted mean age: 64.3 years [95% CI: 59.9-68.7]; 58.3% were male). The most common lesion was squamous cell carcinoma (31.8%), and the most common symptom was disturbed vision/reduced visual acuity (22.5%). Of the patients, 302 (31.6%) had total OE, 248 (26.0%) had extended OE, and 87 (9.0%) had subtotal OE. Free flaps (33.3%), endosseous implants (22.8%), and split-thickness skin grafts (17.2%) were the most used reconstructive methods. Sino-orbital or sino-nasal fistula (22.6%), flap or graft failure (16.9%), and hyperostosis (13%) were the most reported complications. Regarding tumor recurrences, 38.6% were local, 32.3% were distant, and 6.7% were regional. The perineural invasion rate was 17.4%, while the lymphovascular invasion rate was 5.0%. Over a weighted mean follow-up period of 23.6 months (95% CI: 13.8-33.4), a weighted overall mortality rate of 39% (95% CI: 28-50%) was observed. The 5-year OS rate was 50% (median: 61 months [95% CI: 46-83]). The OS multivariable analysis did not show any significant findings. CONCLUSIONS: Although OE is a disfiguring procedure with devastating outcomes, it is a viable option for carefully selected patients with advanced CFLs. A patient-tailored approach based on tumor pathology, extension, and overall patient condition is warranted.
