Results 1 - 5 of 5
1.
Appl Intell (Dordr) ; 51(5): 2864-2889, 2021.
Article in English | MEDLINE | ID: mdl-34764572

ABSTRACT

The novel coronavirus (COVID-19) outbreak started in Wuhan, a city in China, and spread rapidly to people in other countries. Today, around 215 countries are affected by COVID-19, and the WHO has reported approximately 11,274,600 cases worldwide. Because case counts in hospitals rise rapidly every day, the resources available to control COVID-19 are limited. It is therefore essential to develop an accurate diagnosis for COVID-19, and early diagnosis of COVID-19 patients is important for preventing the disease from spreading to others. In this paper, we propose a deep learning based approach that can differentiate COVID-19 patients from viral pneumonia, bacterial pneumonia, and healthy (normal) cases. The approach adopts deep transfer learning. For experimentation, we used binary and multi-class datasets organized into four collections: (i) 728 X-ray images, comprising 224 images with confirmed COVID-19 and 504 normal-condition images; (ii) 1428 X-ray images, comprising 224 images with confirmed COVID-19, 700 images with confirmed common bacterial pneumonia, and 504 normal-condition images; (iii) 1442 X-ray images, comprising 224 images with confirmed COVID-19, 714 images with confirmed bacterial or viral pneumonia, and 504 normal-condition images; (iv) 5232 X-ray images, comprising 2358 images with confirmed bacterial pneumonia, 1345 images with confirmed viral pneumonia, and 1346 normal-condition images. We evaluated nine convolutional neural network architectures (AlexNet, GoogleNet, ResNet-50, Se-ResNet-50, DenseNet121, Inception V4, Inception ResNet V2, ResNeXt-50, and Se-ResNeXt-50). Experimental results indicate that the pre-trained Se-ResNeXt-50 model achieves the highest classification accuracy among all pre-trained models: 99.32% for the binary task and 97.55% for the multi-class task.

2.
Int J Comput Assist Radiol Surg ; 17(10): 1801-1811, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35635639

ABSTRACT

PURPOSE: Surgeons' skill in the operating room is a major determinant of patient outcomes. Assessing surgeons' skill is necessary to improve patient outcomes and quality of care through surgical training and coaching. Methods for video-based assessment of surgical skill can provide objective and efficient tools for surgeons. Our work introduces a new method based on attention mechanisms and provides a comprehensive comparative analysis of state-of-the-art methods for video-based assessment of surgical skill in the operating room. METHODS: Using a dataset of 99 videos of capsulorhexis, a critical step in cataract surgery, we evaluated image feature-based methods and two deep learning methods for assessing skill from RGB videos. In the first method, we detect instrument tips as keypoints and predict surgical skill using temporal convolutional neural networks. In the second method, we propose a frame-wise encoder (a 2D convolutional neural network) followed by a temporal model (a recurrent neural network), both augmented by visual attention mechanisms. We computed the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and predictive values through fivefold cross-validation. RESULTS: For classifying a binary skill label (expert vs. novice), AUC estimates for the image feature-based methods ranged from 0.49 (95% confidence interval; CI = 0.37 to 0.60) to 0.76 (95% CI = 0.66 to 0.85); none of these methods achieved consistently high sensitivity and specificity. For the deep learning methods, the AUC was 0.79 (95% CI = 0.70 to 0.88) using keypoints alone, and 0.78 (95% CI = 0.69 to 0.88) and 0.75 (95% CI = 0.65 to 0.85) with and without attention mechanisms, respectively. CONCLUSION: Deep learning methods are necessary for video-based assessment of surgical skill in the operating room. Attention mechanisms improved the discrimination ability of the network. Our findings should be evaluated for external validity in other datasets.
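The second method's architecture, a frame-wise 2D CNN encoder feeding a recurrent temporal model with attention, might be sketched roughly as below. The tiny layer sizes and the additive temporal-attention form are assumptions for illustration only; the paper's actual encoder and attention design are not reproduced here.

```python
import torch
import torch.nn as nn

class SkillClassifier(nn.Module):
    """Frame-wise CNN encoder + GRU with temporal attention (schematic)."""
    def __init__(self, feat_dim: int = 64, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(          # tiny stand-in for the 2D CNN
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)       # per-frame attention score
        self.head = nn.Linear(hidden, 1)       # expert-vs-novice logit

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        b, t = video.shape[:2]                 # video: (B, T, 3, H, W)
        feats = self.encoder(video.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(feats)                 # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1) # attention over time
        ctx = (w * h).sum(dim=1)               # weighted temporal summary
        return self.head(ctx)

model = SkillClassifier()
out = model(torch.randn(2, 5, 3, 32, 32))      # 2 clips of 5 frames each
print(out.shape)  # torch.Size([2, 1])
```

The attention weights `w` also give a crude per-frame saliency, which is one motivation for attention in assessment models.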


Subject(s)
Cataract Extraction; Ophthalmology; Surgeons; Capsulorhexis; Humans; Neural Networks, Computer
3.
Front Neurol ; 13: 963968, 2022.
Article in English | MEDLINE | ID: mdl-36034311

ABSTRACT

Background: Nystagmus identification and interpretation are challenging for non-experts who lack specific training in neuro-ophthalmology or neuro-otology. The challenge is magnified when the task is performed via telemedicine. Deep learning models have not been heavily studied in video-based eye movement detection. Methods: We developed, trained, and validated a deep-learning system (aEYE) to classify video recordings as normal or as bearing at least two consecutive beats of nystagmus. The videos were retrospectively collected from a subset of the monocular (right eye) video-oculography (VOG) recordings used in the Acute Video-oculography for Vertigo in Emergency Rooms for Rapid Triage (AVERT) clinical trial (#NCT02483429). Our model was derived from a preliminary dataset representing about 10% of the total AVERT videos (n = 435). The videos were trimmed into 10-second clips sampled at 60 Hz with a resolution of 240 × 320 pixels. We then created 8 variations of the videos by altering the sampling rate (i.e., 30 Hz and 15 Hz) and image resolution (i.e., 60 × 80 pixels and 15 × 20 pixels). The dataset was labeled as "nystagmus" or "no nystagmus" by one expert provider. We then used a filtered image-based motion classification approach to develop aEYE. The model's performance at detecting nystagmus was measured by the area under the receiver-operating characteristic curve (AUROC), sensitivity, specificity, and accuracy. Results: An ensemble of the ResNet soft-voting and VGG hard-voting models had the best performance metrics. The AUROC, sensitivity, specificity, and accuracy were 0.86, 88.4%, 74.2%, and 82.7%, respectively. Our validation folds had an average AUROC, sensitivity, specificity, and accuracy of 0.86, 80.3%, 80.9%, and 80.4%, respectively. Models created from the compressed videos decreased in accuracy as the sampling rate decreased from 60 Hz to 15 Hz, but there was only minimal change in accuracy when image resolution was decreased at a constant sampling rate. Conclusion: Deep learning is useful for detecting nystagmus in 60 Hz video recordings, as well as in videos with lower image resolutions and sampling rates, making it a potentially useful tool for future automated eye-movement-enabled neurologic diagnosis.
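The soft-voting and hard-voting ensembling this abstract mentions combines per-model class probabilities in two standard ways; a minimal sketch of generic ensembling (not the aEYE code) follows.

```python
import numpy as np

def soft_vote(prob_sets: np.ndarray) -> np.ndarray:
    """Average class probabilities across models, then take the argmax."""
    return np.mean(prob_sets, axis=0).argmax(axis=1)

def hard_vote(prob_sets: np.ndarray) -> np.ndarray:
    """Each model casts one vote (its argmax); the majority class wins."""
    votes = np.argmax(prob_sets, axis=2)         # (n_models, n_samples)
    n_classes = prob_sets.shape[2]
    return np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes).argmax(), 0, votes)

# 3 models x 2 clips x 2 classes ("no nystagmus", "nystagmus")
probs = np.array([[[0.9, 0.1], [0.4, 0.6]],
                  [[0.6, 0.4], [0.3, 0.7]],
                  [[0.2, 0.8], [0.55, 0.45]]])
print(soft_vote(probs))  # [0 1]
print(hard_vote(probs))  # [0 1]
```

Soft voting weighs each model by its confidence, while hard voting treats all models equally, so the two can disagree when one model is confidently wrong.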

4.
Phys Med Biol ; 67(18)2022 09 12.
Article in English | MEDLINE | ID: mdl-36093921

ABSTRACT

Objective. To establish an open framework for developing plan optimization models for knowledge-based planning (KBP). Approach. Our framework includes radiotherapy treatment data (i.e., reference plans) for 100 patients with head-and-neck cancer who were treated with intensity-modulated radiotherapy. The data also include high-quality dose predictions from 19 KBP models that were developed by different research groups using out-of-sample data during the OpenKBP Grand Challenge. The dose predictions were input to four fluence-based dose mimicking models to form 76 unique KBP pipelines that generated 7600 plans (76 pipelines × 100 patients). The predictions and KBP-generated plans were compared to the reference plans via the dose score, which is the average mean absolute voxel-by-voxel difference in dose; the deviation in dose-volume histogram (DVH) points; and the frequency with which clinical planning criteria were satisfied. We also performed a theoretical investigation to justify our dose mimicking models. Main results. The rank-order correlation of the dose score between predictions and their KBP pipelines ranged from 0.50 to 0.62, which indicates that the quality of the predictions was generally positively correlated with the quality of the plans. Additionally, compared to the input predictions, the KBP-generated plans performed significantly better (P < 0.05; one-sided Wilcoxon test) on 18 of 23 DVH points. Similarly, each optimization model generated plans that satisfied a higher percentage of criteria than the reference plans, which in turn satisfied 3.5% more criteria than the set of all dose predictions. Lastly, our theoretical investigation demonstrated that the dose mimicking models generate plans that are also optimal for an inverse planning model. Significance. This was the largest international effort to date to evaluate the combination of KBP prediction and optimization models. We found that the best performing models significantly outperformed the reference dose and dose predictions. In the interest of reproducibility, our data and code are freely available.
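The dose score defined above, the mean absolute voxel-by-voxel dose difference between a plan and its reference, reduces to a short function. This is a sketch; the optional region-of-interest mask is an assumed convenience, not part of the paper's definition.

```python
import numpy as np

def dose_score(pred, ref, mask=None) -> float:
    """Mean absolute voxel-by-voxel dose difference (Gy) between two dose
    grids, optionally restricted to a boolean region-of-interest mask."""
    diff = np.abs(np.asarray(pred, dtype=float) - np.asarray(ref, dtype=float))
    if mask is not None:
        diff = diff[mask]          # evaluate only inside the masked region
    return float(diff.mean())

pred = np.array([[60.0, 58.0], [20.0, 5.0]])   # predicted dose grid (Gy)
ref  = np.array([[60.0, 60.0], [18.0, 5.0]])   # reference-plan dose grid
print(dose_score(pred, ref))  # 1.0
```

A lower dose score means a plan (or prediction) tracks the reference dose more closely, which is why it can rank both predictions and KBP pipelines.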


Subject(s)
Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Humans; Knowledge Bases; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods; Reproducibility of Results
5.
Sci Rep ; 11(1): 13656, 2021 07 01.
Article in English | MEDLINE | ID: mdl-34211009

ABSTRACT

With over 3500 mosquito species described, accurate species identification of the few implicated in disease transmission is critical to mosquito-borne disease mitigation. Yet this task is hindered by limited global taxonomic expertise and by specimen damage consistent across common capture methods. Convolutional neural networks (CNNs) are promising for limited sets of species, but image database requirements restrict practical implementation. Using an image database of 2696 specimens from 67 mosquito species, we address the practical open-set problem with a detection algorithm for novel species. Closed-set classification of 16 known species achieved 97.04 ± 0.87% accuracy independently, and 89.07 ± 5.58% when cascaded with novelty detection. Closed-set classification of 39 species produced a macro F1-score of 86.07 ± 1.81%. This demonstrates an accurate, scalable, and practical computer vision solution for identifying wild-caught mosquitoes in biosurveillance and targeted vector control programs, without the need for extensive image database development for each new target region.
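The cascade of closed-set classification with novelty detection can be illustrated with a common stand-in: thresholding the classifier's top softmax probability and routing low-confidence samples to an "unknown species" bucket. The paper's actual novelty detector is not reproduced here, so treat this purely as a schematic.

```python
import numpy as np

def cascade_classify(probs, threshold: float = 0.5, novel_label: int = -1):
    """Open-set cascade (schematic): keep the closed-set prediction when the
    model is confident, otherwise flag the sample as a novel species."""
    probs = np.asarray(probs)
    labels = probs.argmax(axis=1)                 # closed-set prediction
    labels[probs.max(axis=1) < threshold] = novel_label
    return labels

# Two specimens: one confidently a known species, one likely novel.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
print(cascade_classify(probs, threshold=0.7))  # [ 0 -1]
```

Samples flagged as novel would then be routed to a taxonomist rather than force-fit into a known class, which is the practical point of the open-set setup.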


Subject(s)
Culicidae/classification; Neural Networks, Computer; Algorithms; Animals; Culicidae/anatomy & histology; Databases, Factual; Image Processing, Computer-Assisted/methods; Mosquito Vectors/anatomy & histology; Mosquito Vectors/classification