Results 1 - 20 of 123
1.
Am Nat ; 203(5): 618-627, 2024 May.
Article in English | MEDLINE | ID: mdl-38635364

ABSTRACT

Autonomous sensors provide opportunities to observe organisms across spatial and temporal scales that humans cannot directly observe. By processing large data streams from autonomous sensors with deep learning methods, researchers can make novel and important natural history discoveries. In this study, we combine automated acoustic monitoring with deep learning models to observe breeding-associated activity in the endangered Sierra Nevada yellow-legged frog (Rana sierrae), a behavior that current surveys do not measure. By deploying inexpensive hydrophones and developing a deep learning model to recognize breeding-associated vocalizations, we discover three undocumented R. sierrae vocalization types and find an unexpected temporal pattern of nocturnal breeding-associated vocal activity. This study exemplifies how the combination of autonomous sensor data and deep learning can shed new light on species' natural history, especially during times or in locations where human observation is limited or impossible.


Subjects
Ranidae, Animal Vocalization, Animals, Humans, Acoustics
2.
BMC Ophthalmol ; 24(1): 242, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38853240

ABSTRACT

BACKGROUND: Learning to perform strabismus surgery is an essential aspect of ophthalmologists' surgical training. An automated classification strategy for surgical steps can improve the effectiveness of training curricula and the efficiency of evaluating residents' performance. To this end, we aimed to develop and validate a deep learning (DL) model for automated detection of strabismus surgery steps in videos. METHODS: In this study, we gathered 479 strabismus surgery videos from Shanghai Children's Hospital, affiliated with Shanghai Jiao Tong University School of Medicine, spanning July 2017 to October 2021. The videos were manually cut into 3345 clips of the eight strabismus surgical steps based on the International Council of Ophthalmology's Ophthalmology Surgical Competency Assessment Rubrics (ICO-OSCAR: strabismus). The video dataset was randomly split at the eye level into training (60%), validation (20%), and testing (20%) sets. We evaluated two hybrid DL algorithms: a recurrent neural network (RNN)-based model and a Transformer-based model. The evaluation metrics included accuracy, area under the receiver operating characteristic curve (AUC), precision, recall, and F1-score. RESULTS: The DL models identified the steps in video clips of strabismus surgery with a macro-average AUC of 1.00 (95% CI 1.00-1.00) for the Transformer-based model and 0.98 (95% CI 0.97-1.00) for the RNN-based model. The Transformer-based model yielded a higher accuracy than the RNN-based model (0.96 vs. 0.83, p < 0.001). In detecting the different steps of strabismus surgery, the predictive ability of the Transformer-based model was better than that of the RNN. Precision ranged from 0.90 to 1 for the Transformer-based model and from 0.75 to 0.94 for the RNN-based model; the F1-score ranged from 0.93 to 1 and from 0.78 to 0.92, respectively. CONCLUSION: DL models can automatically identify the steps of strabismus surgery in videos with high accuracy, and Transformer-based algorithms show excellent performance when modeling the spatiotemporal features of video frames.
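
The abstract gives no implementation details; the following is a minimal, hypothetical PyTorch sketch of one common hybrid design for surgical-step classification, in which per-frame features from a pretrained CNN are aggregated by a Transformer encoder. The backbone choice (ResNet-18), dimensions, and eight-class head are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: CNN frame features + Transformer encoder for
# classifying a surgical video clip into one of eight steps.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

class ClipTransformerClassifier(nn.Module):
    def __init__(self, num_classes=8, d_model=512, n_heads=8, n_layers=2):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # 512-d per frame
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, clip):                                  # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1)).flatten(1)       # (B*T, 512)
        encoded = self.encoder(feats.view(b, t, -1))          # (B, T, 512)
        return self.head(encoded.mean(dim=1))                 # (B, num_classes)

logits = ClipTransformerClassifier()(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 8])
```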


Subjects
Deep Learning, Oculomotor Muscles, Ophthalmologic Surgical Procedures, Strabismus, Video Recording, Humans, Strabismus/surgery, Oculomotor Muscles/surgery, Ophthalmology/education, ROC Curve, Clinical Competence, Neural Networks (Computer), Algorithms, Internship and Residency, Graduate Medical Education/methods
3.
J Dairy Sci ; 107(5): 3140-3156, 2024 May.
Article in English | MEDLINE | ID: mdl-37949402

ABSTRACT

The objective of this diagnostic accuracy study was to develop and validate an alert to identify calves at risk for a diarrhea bout using milk feeding behavior data (behavior) from automated milk feeders (AMF). We enrolled Holstein calves (n = 259) as a convenience sample from 2 facilities that were health scored daily preweaning and offered either 10 or 15 L/d of milk replacer. For alert development, 132 calves were enrolled, and the ability of milk intake, drinking speed, and rewarded visits collected from the AMF to identify calves at risk for diarrhea was tested. Alerts that had high diagnostic accuracy in the alert development phase were validated using a holdout validation strategy with 127 different calves from the same facilities (all offered 15 L/d) for -3 to 1 d relative to diarrhea diagnosis. We enrolled calves that were either healthy or had a first diarrheal bout (loose feces ≥2 d or watery feces ≥1 d). Relative change and rolling dividends for each milk feeding behavior were calculated for each calf from the previous 2 d. Logistic regression models and receiver operating characteristic (ROC) curves were used to assess the diagnostic ability of relative change and rolling dividends of each behavior (relative to alert day) to classify calves at risk for a diarrhea bout from -2 to 0 d relative to diagnosis. To maximize sensitivity (Se), alert thresholds were based on the ROC-optimal classification cutoff. Diagnostic accuracy was met when the alert had a moderate area under the ROC curve (≥0.70), high accuracy (Acc; ≥0.80), high Se (≥0.80), and very high precision (Pre; ≥0.85). For alert development, deviations in the rolling dividend of milk intake combined with drinking speed had the best performance (10 L/d: ROC area under the curve [AUC] = 0.79, threshold ≤0.70; 15 L/d: ROC AUC = 0.82, threshold ≤0.60). Our diagnostic criteria were met only in calves offered 15 L/d (10 L/d: Se 75%, Acc 72%, Pre 92%, specificity [Sp] 55% vs. 15 L/d: Se 91%, Acc 91%, Pre 89%, Sp 73%). For holdout validation, the rolling dividend of milk intake with drinking speed met the diagnostic criteria for one facility (threshold ≤0.60, Se 86%, Acc 82%, Pre 94%, Sp 50%). However, no milk feeding behavior alert met the diagnostic criteria for the second facility owing to poor Se (relative change in milk intake, threshold -0.36, Se 71%, Acc 70%, Pre 97%). We suggest that changes in milk feeding behavior may indicate diarrhea bouts in dairy calves. Future research should validate this alert in commercial settings; furthermore, software updates, support, and new analytics might be required for on-farm application of these types of alerts.
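
The abstract reports alert thresholds chosen at the ROC-optimal classification cutoff; the sketch below shows one standard way to derive such a cutoff (Youden's J) from a behavioral deviation feature with scikit-learn. The feature, the synthetic data, and the use of Youden's index are assumptions for illustration, not the study's analysis code.

```python
# Illustrative only: pick an alert threshold for a feeding-behavior deviation
# feature by maximizing Youden's J (sensitivity + specificity - 1) on a ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical data: relative change in milk intake over the previous 2 d,
# and whether a diarrhea bout was diagnosed within the next 2 d.
rel_change = np.concatenate([rng.normal(0.0, 0.15, 200),    # healthy calves
                             rng.normal(-0.35, 0.20, 60)])  # calves near a bout
diarrhea = np.concatenate([np.zeros(200), np.ones(60)])

model = LogisticRegression().fit(rel_change.reshape(-1, 1), diarrhea)
risk = model.predict_proba(rel_change.reshape(-1, 1))[:, 1]

fpr, tpr, thresholds = roc_curve(diarrhea, risk)
best = np.argmax(tpr - fpr)                      # Youden's J
print(f"AUC = {roc_auc_score(diarrhea, risk):.2f}")
print(f"alert if predicted risk >= {thresholds[best]:.2f} "
      f"(Se = {tpr[best]:.2f}, Sp = {1 - fpr[best]:.2f})")
```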

4.
Sensors (Basel) ; 24(10)2024 May 11.
Article in English | MEDLINE | ID: mdl-38793908

ABSTRACT

Cervical auscultation is a simple, noninvasive method for diagnosing dysphagia, although the reliability of the method largely depends on the subjectivity and experience of the evaluator. Recently developed methods for the automatic detection of swallowing sounds facilitate a rough automatic diagnosis of dysphagia, but a reliable detection method tailored to the characteristic patterns of swallowing sounds under actual clinical conditions has not been established. We investigated a novel approach for automatically detecting swallowing sounds in which basic statistics and dynamic features were extracted from two sets of acoustic features, Mel-frequency cepstral coefficients (MFCCs) and Mel-frequency magnitude coefficients (MFMCs), and an ensemble learning model combining a support vector machine (SVM) and a multi-layer perceptron (MLP) was applied. Evaluation of the proposed method on a swallowing-sound database synchronized to a videofluorographic swallowing study, compiled from 74 advanced-age patients with dysphagia, demonstrated outstanding performance: it achieved a micro-averaged F1 of approximately 0.92 and an accuracy of 95.20%. The method, proven effective on the current clinical recording database, represents a significant advance in the objectivity of cervical auscultation. However, validating its efficacy on other databases is crucial for confirming its broad applicability and potential impact.
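
No feature-extraction or ensemble code is given in the abstract; the following is a minimal sketch, assuming the librosa and scikit-learn libraries, of how MFCC summary statistics could feed a soft-voting SVM + MLP ensemble. MFMC extraction is omitted because it is not available in standard libraries, and the synthetic segments and labels are placeholders.

```python
# Illustrative sketch: MFCC summary statistics + soft-voting SVM/MLP ensemble
# for classifying short audio segments as swallowing vs. non-swallowing sounds.
import numpy as np
import librosa
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def segment_features(y, sr=16000):
    """Basic statistics (mean, std) of MFCCs and their deltas for one segment."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.concatenate([mfcc, librosa.feature.delta(mfcc)])
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

# Synthetic 1-second segments standing in for annotated clips from a database;
# label 1 marks a swallowing sound, 0 anything else (alternating here).
rng = np.random.default_rng(0)
segments = [rng.normal(size=16000) * (1.0 if i % 2 else 0.1) for i in range(20)]
X = np.stack([segment_features(s) for s in segments])
y = np.array([i % 2 for i in range(20)])

ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=500))),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.predict(X))
```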


Subjects
Auscultation, Factual Databases, Deglutition Disorders, Deglutition, Humans, Deglutition/physiology, Deglutition Disorders/diagnosis, Deglutition Disorders/physiopathology, Auscultation/methods, Support Vector Machine, Male, Female, Aged, Machine Learning, Algorithms, Sound
5.
Sol Phys ; 298(11): 133, 2023.
Article in English | MEDLINE | ID: mdl-38028404

ABSTRACT

Coronal Holes (CHs) are regions of open magnetic-field lines, resulting in high-speed solar wind. Accurate detection of CHs is vital for space-weather prediction. This paper presents an intramethod ensemble for coronal-hole detection based on the Active Contours Without Edges (ACWE) segmentation algorithm. The purpose of this ensemble is to develop a confidence map that defines, for all on-disk regions of a solar extreme ultraviolet (EUV) image, the likelihood that each region belongs to a CH based on that region's proximity to, and homogeneity with, the core of identified CH regions. By relying on region homogeneity, and not intensity (which can vary due to various factors, including line-of-sight changes and stray light from nearby bright regions), to define the final confidence of any given region, this ensemble is able to provide robust, consistent delineations of the CH regions. Using the metrics of global consistency error (GCE), local consistency error (LCE), intersection over union (IOU), and the structural similarity index measure (SSIM), the method is shown to be robust to different spatial resolutions, maintaining a median IOU >0.75 and minimum SSIM >0.93 even when the segmentation process was performed on an EUV image decimated from 4096×4096 pixels down to 512×512 pixels. Furthermore, using the same metrics, the method is shown to be robust across short timescales, producing segmentations with a mean IOU of 0.826 from EUV images taken at a 1-h cadence and showing a smooth decay in similarity across all metrics as a function of time, indicating self-consistent segmentations even when corrections for exposure time have not been applied to the data. Finally, the accuracy of the segmentations and confidence maps is validated by considering the skewness (i.e., unipolarity) of the underlying magnetic field.
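
The robustness metrics named in the abstract (IOU, SSIM) are standard; the sketch below shows how they might be computed for two binary coronal-hole segmentations using NumPy and scikit-image. The masks here are random placeholders, not solar data.

```python
# Illustrative computation of IoU and SSIM between two binary CH segmentations.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(1)
seg_a = rng.random((512, 512)) > 0.7   # placeholder segmentation masks
seg_b = rng.random((512, 512)) > 0.7

intersection = np.logical_and(seg_a, seg_b).sum()
union = np.logical_or(seg_a, seg_b).sum()
iou = intersection / union if union else 1.0

ssim = structural_similarity(seg_a.astype(float), seg_b.astype(float), data_range=1.0)
print(f"IoU = {iou:.3f}, SSIM = {ssim:.3f}")
```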

6.
Microsc Microanal ; 29(2): 777-785, 2023 04 05.
Article in English | MEDLINE | ID: mdl-37749743

ABSTRACT

In hereditary spherocytosis (HS), genetic mutations in cell membrane and cytoskeleton proteins cause structural defects in red blood cells (RBCs). As a result, the cells are rigid and misshapen, usually with a characteristic spherical form (spherocytes), and too stiff to circulate through microcirculation regions, so they are prone to hemolysis and phagocytosis by splenic macrophages. Mild to severe anemia arises in HS, along with derived symptoms such as splenomegaly, jaundice, and cholelithiasis. Although abnormally shaped RBCs can be identified under conventional light microscopy, HS diagnosis relies on several clinical factors and sometimes on the results of complex molecular testing. It is especially challenging when other causes of anemia coexist or after recent blood transfusions. We propose two different approaches to characterize RBCs in HS: (i) an immunofluorescence assay targeting protein band 3, which is affected in most HS cases, and (ii) a three-dimensional morphology assay with living cells, staining the membrane with fluorescent dyes. Confocal laser scanning microscopy (CLSM) was used to carry out both assays, and to complement the latter, software was developed for the automated detection of spherocytes in blood samples. CLSM allowed the precise and unambiguous assessment of cell shape and protein expression.
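
The abstract does not describe the detection software; the sketch below illustrates one plausible approach, assuming scikit-image, in which segmented cells are flagged as spherocyte candidates when their 2D projections are highly circular. The segmentation, thresholds, and demo image are placeholders and do not reflect the authors' implementation.

```python
# Illustrative sketch: flag highly circular cells in a fluorescence image
# as spherocyte candidates using simple shape descriptors.
import numpy as np
from skimage import filters, measure

def spherocyte_candidates(image, min_area=100, min_circularity=0.92):
    """Label cells and keep regions whose 2D projection is nearly circular."""
    mask = image > filters.threshold_otsu(image)
    labels = measure.label(mask)
    candidates = []
    for region in measure.regionprops(labels):
        if region.area < min_area or region.perimeter == 0:
            continue
        circularity = 4 * np.pi * region.area / region.perimeter ** 2
        if circularity >= min_circularity:
            candidates.append((region.label, region.centroid, circularity))
    return candidates

# Placeholder image standing in for a maximum-intensity projection of a CLSM stack.
demo = np.zeros((200, 200))
rr, cc = np.ogrid[:200, :200]
demo[(rr - 60) ** 2 + (cc - 60) ** 2 < 15 ** 2] = 1.0   # one round "cell"
print(spherocyte_candidates(demo))
```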


Subjects
Erythrocytes, Membrane Proteins, Confocal Microscopy, Cell Membrane, Cell Shape
7.
Sensors (Basel) ; 23(15)2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37571620

ABSTRACT

With a view toward the post-COVID-19 world and probable future pandemics, this paper presents an Internet of Things (IoT)-based automated healthcare diagnosis model that employs a mixed approach using data augmentation, transfer learning, and deep learning techniques and does not require physical interaction between the patient and physician. Through a user-friendly graphical user interface and the availability of suitable computing power on smart devices, the embedded artificial intelligence allows the proposed model to be used effectively by a layperson without the need for a dental expert, by indicating any issues with the teeth and subsequent treatment options. The proposed method involves multiple processes, including data acquisition using IoT devices, data preprocessing, deep learning-based feature extraction, and classification through an unsupervised neural network. The dataset contains multiple periapical X-rays of five different types of lesions obtained through an IoT device mounted within the mouth guard. A pretrained AlexNet, a fast GPU implementation of a convolutional neural network (CNN), is fine-tuned using data augmentation and transfer learning and employed to extract a suitable feature set. The data augmentation avoids overtraining, whereas accuracy is improved by transfer learning. Later, support vector machine (SVM) and K-nearest neighbors (KNN) classifiers are trained for lesion classification. It was found that the proposed automated model based on the AlexNet extraction mechanism followed by the SVM classifier achieved an accuracy of 98%, showing the effectiveness of the presented approach.
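
As a rough illustration of the transfer-learning pipeline described (pretrained AlexNet features followed by an SVM), here is a minimal PyTorch/scikit-learn sketch. The image tensors and labels are placeholders, and the paper's actual fine-tuning and augmentation steps are not reproduced.

```python
# Illustrative sketch: extract AlexNet penultimate-layer features and train an SVM.
import torch
import torch.nn as nn
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.svm import SVC

model = alexnet(weights=AlexNet_Weights.DEFAULT)
model.classifier = nn.Sequential(*list(model.classifier.children())[:-1])  # drop final FC
model.eval()

# Placeholder batch standing in for preprocessed periapical X-ray patches (5 lesion classes).
images = torch.randn(8, 3, 224, 224)
labels = [0, 1, 2, 3, 4, 0, 1, 2]

with torch.no_grad():
    features = model(images).numpy()   # (8, 4096) feature vectors

clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:3]))
```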


Subjects
COVID-19, Deep Learning, Internet of Things, Humans, Artificial Intelligence, Cluster Analysis
8.
J Stroke Cerebrovasc Dis ; 32(12): 107396, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37883825

ABSTRACT

INTRODUCTION: The prompt detection of intracranial hemorrhage (ICH) on a non-contrast head CT (NCCT) is critical for the appropriate triage of patients, particularly in high-volume/high-acuity settings. Several automated ICH detection tools have been introduced; however, at present, most suffer from suboptimal specificity, leading to false-positive notifications. METHODS: NCCT scans from 4 large databases were evaluated for the presence of an ICH (IPH, IVH, SAH, or SDH) of >0.4 ml using the fully automated RAPID ICH 3.0, as compared to consensus detection by at least two neuroradiology experts. Scans were excluded for (1) severe CT artifacts, (2) prior neurosurgical procedures, or (3) recent intravenous contrast. ICH detection accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios were determined. RESULTS: A total of 881 studies were included. The automated software correctly identified 453/463 ICH-positive cases and 416/418 ICH-negative cases, resulting in a sensitivity of 97.84%, specificity of 99.52%, positive predictive value of 99.56%, and negative predictive value of 97.65% for ICH detection. The positive and negative likelihood ratios for ICH detection were similarly favorable, at 204.49 and 0.02, respectively. Mean processing time was <40 seconds. CONCLUSIONS: In this large dataset of nearly 900 patients, the automated software demonstrated high sensitivity and specificity for ICH detection, with rare false positives.
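
The reported metrics follow directly from the stated case counts; the short calculation below reproduces them (a worked check, not code from the study).

```python
# Reproduce the reported metrics from the case counts in the abstract.
tp, fn = 453, 463 - 453      # ICH-positive cases: correctly identified / missed
tn, fp = 416, 418 - 416      # ICH-negative cases: correctly identified / false alarms

sensitivity = tp / (tp + fn)               # 0.9784
specificity = tn / (tn + fp)               # 0.9952
ppv = tp / (tp + fp)                       # 0.9956
npv = tn / (tn + fn)                       # 0.9765
lr_pos = sensitivity / (1 - specificity)   # ~204
lr_neg = (1 - sensitivity) / specificity   # ~0.02

print(f"Se {sensitivity:.2%}, Sp {specificity:.2%}, PPV {ppv:.2%}, NPV {npv:.2%}")
print(f"LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}")
```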


Subjects
Intracranial Hemorrhages, X-Ray Computed Tomography, Humans, Intracranial Hemorrhages/diagnostic imaging, Predictive Value of Tests, X-Ray Computed Tomography/methods, Software, Retrospective Studies
9.
Int J Comput Dent ; 26(4): 311-317, 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-36749284

ABSTRACT

AIM: The present study aimed to evaluate the accuracy of automated detection of preparation finish lines in teeth with defective margins. MATERIALS AND METHODS: An extracted first molar was prepared for a full veneer crown, and marginal defects were created and scanned (discontinuity of finish line: 0.5, 1.0, and 1.5 mm; additional line angle: connected, partially connected, and disconnected). Six virtual defect models were entered into CAD software and the preparation finish line was designated by 20 clinicians (CAD-experienced group: n = 10; CAD-inexperienced group: n = 10) using the automated finish line detection method. The accuracy of automatic detection was evaluated by calculating the 3D deviation of the registered finish line. The Kruskal-Wallis and Mann-Whitney U tests were used for between-group comparisons (α = 0.05). RESULTS: The deviation values of the registered finish lines were significantly different according to conditions with different amounts of finish line discontinuity (P < 0.001). There was no statistical difference in the deviation of the registered finish line between models with additional line angles around the margin. Moreover, no statistical difference was found in the results between CAD-experienced and CAD-inexperienced operators. CONCLUSIONS: The accuracy of automated finish line detection for tooth preparation can differ when the finish line is discontinuous. The presence of an additional line angle around the preparation margin and prior experience in dental CAD software do not affect the accuracy of automated finish line detection.
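
The statistical comparisons named in the abstract (Kruskal-Wallis across discontinuity conditions, Mann-Whitney U between operator groups) can be run with SciPy as sketched below; the deviation values are synthetic placeholders, not the study data.

```python
# Illustrative nonparametric comparisons of finish-line deviation values.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(2)
# Placeholder 3D-deviation values (micrometres) for three discontinuity conditions.
dev_05, dev_10, dev_15 = (rng.normal(m, 10, 20) for m in (40, 55, 75))
print(kruskal(dev_05, dev_10, dev_15))           # across discontinuity amounts

# Placeholder deviations for CAD-experienced vs. CAD-inexperienced operators.
experienced, inexperienced = rng.normal(50, 12, 10), rng.normal(52, 12, 10)
print(mannwhitneyu(experienced, inexperienced))  # between operator groups
```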


Subjects
Crowns, Prosthodontic Tooth Preparation, Humans, Prosthodontic Tooth Preparation/methods, Dental Prosthesis Design, Computer-Aided Design, Dental Marginal Adaptation, Zirconium, Tooth Preparation
10.
Epilepsia ; 2022 Feb 23.
Article in English | MEDLINE | ID: mdl-35194778

ABSTRACT

OBJECTIVE: The objective of this study was to evaluate the accuracy of a semiautomated classification of nocturnal seizures using a hybrid system consisting of an artificial intelligence-based algorithm, which selects epochs with potential clinical relevance to be reviewed by human experts. METHODS: Consecutive patients with nocturnal motor seizures admitted for video-electroencephalographic long-term monitoring (LTM) were prospectively recruited. We determined the extent of data reduction by using the algorithm, and we evaluated the accuracy of seizure classification from the hybrid system compared with the gold standard of LTM. RESULTS: Forty consecutive patients (24 male; median age = 15 years) were analyzed. The algorithm reduced the duration of epochs to be reviewed to 14% of the total recording time (1874 h). There was a fair agreement beyond chance in seizure classification between the hybrid system and the gold standard (agreement coefficient = .33, 95% confidence interval = .20-.47). The hybrid system correctly identified all tonic-clonic and clonic seizures and 82% of focal motor seizures. However, there was low accuracy in identifying seizure types with more discrete or subtle motor phenomena. SIGNIFICANCE: Using a hybrid (algorithm-human) system for reviewing nocturnal video recordings significantly decreased the workload and provided accurate classification of major motor seizures (tonic-clonic, clonic, and focal motor seizures).

11.
Epilepsia ; 63(7): 1619-1629, 2022 07.
Article in English | MEDLINE | ID: mdl-35357698

ABSTRACT

OBJECTIVES: High counts of averaged interictal epileptiform discharges (IEDs) are key components of accurate interictal electric source imaging (ESI) in patients with focal epilepsy. Automated detections may be time-efficient, but they need to identify the correct IED types. Thus, we compared semiautomated and automated detection of IED types in long-term video-EEG (electroencephalography) monitoring (LTM) using an extended scalp EEG array and in short-term high-density EEG (hdEEG) with visual detection of IED types and the seizure-onset zone (SOZ). METHODS: We prospectively recruited consecutive patients from four epilepsy centers who underwent both LTM with 40-electrode scalp EEG and short-term hdEEG with 256 electrodes. Only patients with a single circumscribed SOZ in LTM were included. In LTM and hdEEG, IED types were identified visually, semiautomatically, and automatically. Concordances of semiautomated and automated detections in LTM and hdEEG, as well as of visual detections in hdEEG, were compared against visually detected IED types and the SOZ in LTM. RESULTS: Fifty-two of 62 patients with LTM and hdEEG were included. The most frequent IED types per patient, detected semiautomatically and automatically in LTM and visually in hdEEG, were significantly concordant with the most frequently visually identified IED type in LTM and with the SOZ. Semiautomated and automated detections of IED types in hdEEG were significantly concordant with visually identified IED types in LTM only when IED types with more than 50 detected single IEDs were selected. The threshold of 50 detected IEDs in hdEEG was reached in half of the patients. For all IED types per patient, agreement between visual and semiautomated detections in LTM was high. SIGNIFICANCE: Semiautomated and automated detections of IED types in LTM show significant agreement with visually detected IED types and the SOZ. In short-term hdEEG, semiautomated detections of IED types are concordant with visually detected IED types and the SOZ in LTM if high IED counts are detected.


Subjects
Partial Epilepsies, Scalp, Electroencephalography/methods, Partial Epilepsies/diagnosis, Humans, Magnetic Resonance Imaging/methods, Prospective Studies, Seizures
12.
Epilepsia ; 2022 Feb 17.
Article in English | MEDLINE | ID: mdl-35176173

ABSTRACT

OBJECTIVE: Our primary goal was to measure the accuracy of fully automated absence seizure detection, using a wearable electroencephalographic (EEG) device. As a secondary goal, we also tested the feasibility of automated behavioral testing triggered by the automated detection. METHODS: We conducted a phase 3 clinical trial (NCT04615442), with a prospective, multicenter, blinded study design. The input was the one-channel EEG recorded with dry electrodes embedded into a wearable headband device connected to a smartphone. The seizure detection algorithm was developed using artificial intelligence (convolutional neural networks). During the study, the predefined algorithm, with predefined cutoff value, analyzed the EEG in real time. The gold standard was derived from expert evaluation of simultaneously recorded full-array video-EEGs. In addition, we evaluated the patients' responsiveness to the automated alarms on the smartphone, and we compared it with the behavioral changes observed in the clinical video-EEGs. RESULTS: We recorded 102 consecutive patients (57 female, median age = 10 years) on suspicion of absence seizures. We recorded 364 absence seizures in 39 patients. Device deficiency was 4.67%, with a total recording time of 309 h. Average sensitivity per patient was 78.83% (95% confidence interval [CI] = 69.56%-88.11%), and median sensitivity was 92.90% (interquartile range [IQR] = 66.7%-100%). The average false detection rate was .53/h (95% CI = .32-.74). Most patients (n = 66, 64.71%) did not have any false alarms. The median F1 score per patient was .823 (IQR = .57-1). For the total recording duration, F1 score was .74. We assessed the feasibility of automated behavioral testing in 36 seizures; it correctly documented nonresponsiveness in 30 absence seizures, and responsiveness in six electrographic seizures. SIGNIFICANCE: Automated detection of absence seizures with a wearable device will improve seizure quantification and will promote assessment of patients in their home environment. Linking automated seizure detection to automated behavioral testing will provide valuable information from wearable devices.

13.
Epilepsia ; 63(5): 1064-1073, 2022 05.
Article in English | MEDLINE | ID: mdl-35184276

ABSTRACT

OBJECTIVE: To evaluate the diagnostic performance of artificial intelligence (AI)-based algorithms for identifying the presence of interictal epileptiform discharges (IEDs) in routine (20-min) electroencephalography (EEG) recordings. METHODS: We evaluated two approaches: a fully automated one and a hybrid approach, where three human raters applied an operational IED definition to assess the automated detections grouped into clusters by the algorithms. We used three previously developed AI algorithms: Encevis, SpikeNet, and Persyst. The diagnostic gold standard (epilepsy or not) was derived from video-EEG recordings of patients' habitual clinical episodes. We compared the algorithms with the gold standard at the recording level (epileptic or not). The independent validation data set (not used for training) consisted of 20-min EEG recordings containing sharp transients (epileptiform or not) from 60 patients: 30 with epilepsy (with a total of 340 IEDs) and 30 with nonepileptic paroxysmal events. We compared sensitivity, specificity, overall accuracy, and the review time-burden of the fully automated and hybrid approaches, with the conventional visual assessment of the whole recordings, based solely on unrestricted expert opinion. RESULTS: For all three AI algorithms, the specificity of the fully automated approach was too low for clinical implementation (16.67%; 63.33%; 3.33%), despite the high sensitivity (96.67%; 66.67%; 100.00%). Using the hybrid approach significantly increased the specificity (93.33%; 96.67%; 96.67%) with good sensitivity (93.33%; 56.67%; 76.67%). The overall accuracy of the hybrid methods (93.33%; 76.67%; 86.67%) was similar to the conventional visual assessment of the whole recordings (83.33%; 95% confidence interval [CI]: 71.48-91.70%; p > .5), yet the time-burden of review was significantly lower (p < .001). SIGNIFICANCE: The hybrid approach, where human raters apply the operational IED criteria to automated detections of AI-based algorithms, has high specificity, good sensitivity, and overall accuracy similar to conventional EEG reading, with a significantly lower time-burden. The hybrid approach is accurate and suitable for clinical implementation.


Subjects
Artificial Intelligence, Epilepsy, Algorithms, Electroencephalography/methods, Epilepsy/diagnosis, Humans, Video Recording
14.
Neuroradiology ; 64(4): 727-734, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34599377

ABSTRACT

PURPOSE: White matter hyperintensity (WMHI) lesions on MR images are an important indication of various types of brain diseases that involve inflammation and blood vessel abnormalities. Automated quantification of WMHI can be valuable for the clinical management of patients, but existing automated software is often developed for a single type of disease and may not be applicable to clinical scans with thick slices and different scanning protocols. The purpose of this study was to develop and validate an algorithm for automatic quantification of white matter hyperintensity suitable for heterogeneous MRI data covering different disease types. METHODS: We developed and evaluated "DeepWML", a deep learning method for fully automated white matter lesion (WML) segmentation of multicentre FLAIR images. We used MRI from 507 patients, including three distinct white matter diseases, obtained in 9 centres with a wide range of scanners and acquisition protocols. The automated delineation tool was evaluated through the quantitative parameters of Dice similarity, sensitivity, and precision compared to manual delineation (the gold standard). RESULTS: The overall median Dice similarity coefficient was 0.78 (range 0.64-0.86) across the three disease types and multiple centres. The median sensitivity and precision were 0.84 (range 0.67-0.94) and 0.81 (range 0.64-0.92), respectively. The tool's performance increased with larger lesion volumes. CONCLUSION: DeepWML was successfully applied to a wide spectrum of MRI data across the three white matter disease types, which has the potential to improve the practical workflow of white matter lesion delineation.
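
The Dice similarity coefficient used for evaluation is straightforward to compute; a minimal NumPy sketch (with placeholder masks, not the study's data) is shown below.

```python
# Illustrative Dice similarity between an automated and a manual lesion mask.
import numpy as np

def dice(pred, truth):
    """2*|A intersect B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

rng = np.random.default_rng(3)
auto_mask = rng.random((182, 218, 182)) > 0.995            # placeholder WML segmentation
manual_mask = auto_mask.copy()
manual_mask[90:100] = rng.random((10, 218, 182)) > 0.995   # simulated disagreement
print(f"Dice = {dice(auto_mask, manual_mask):.3f}")
```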


Subjects
Deep Learning, Leukoaraiosis, Leukoencephalopathies, White Matter, Algorithms, Brain/diagnostic imaging, Brain/pathology, Humans, Leukoaraiosis/pathology, Leukoencephalopathies/pathology, Magnetic Resonance Imaging/methods, White Matter/diagnostic imaging, White Matter/pathology
15.
BMC Med Imaging ; 22(1): 43, 2022 03 14.
Article in English | MEDLINE | ID: mdl-35282821

ABSTRACT

BACKGROUND: The aim of this study was to develop and evaluate a deep neural network model for the automated detection of pulmonary embolism (PE) from computed tomography pulmonary angiograms (CTPAs) using only weakly labelled training data. METHODS: We developed a deep neural network model consisting of two parts: a convolutional neural network architecture called InceptionResNet V2 and a long short-term memory (LSTM) network to process whole CTPA stacks as sequences of slices. Two versions of the model were created using either chest X-rays (Model A) or natural images (Model B) as pre-training data. We retrospectively collected 600 CTPAs for training and validation and 200 CTPAs for testing. CTPAs were annotated only with binary labels at both the stack and slice levels. Performance of the models was evaluated with ROC and precision-recall curves, specificity, sensitivity, accuracy, and positive and negative predictive values. RESULTS: Both models performed well at both the stack and slice levels. At the stack level, Model A reached a specificity and sensitivity of 93.5% and 86.6%, respectively, slightly outperforming Model B (specificity 90.7% and sensitivity 83.5%). However, the difference between their ROC AUC scores was not statistically significant (0.94 vs 0.91, p = 0.07). CONCLUSIONS: We show that a deep learning model trained with a relatively small, weakly annotated dataset can achieve excellent performance in detecting PE from CTPAs.
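
A minimal PyTorch sketch of the general CNN + LSTM design described (per-slice CNN features aggregated by an LSTM over the stack) is given below; a ResNet backbone stands in for InceptionResNet V2, which is not part of torchvision, and all dimensions are illustrative assumptions rather than the study's configuration.

```python
# Illustrative sketch: per-slice CNN features + LSTM over the CTPA stack,
# producing a single stack-level PE probability.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

class StackPEClassifier(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # 512-d per slice
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, stack):                        # stack: (B, S, 3, H, W)
        b, s = stack.shape[:2]
        feats = self.cnn(stack.flatten(0, 1)).flatten(1).view(b, s, -1)
        _, (h_n, _) = self.lstm(feats)               # last hidden state summarizes the stack
        return torch.sigmoid(self.head(h_n[-1]))     # (B, 1) PE probability

prob = StackPEClassifier()(torch.randn(1, 32, 3, 224, 224))
print(prob.shape)  # torch.Size([1, 1])
```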


Subjects
Deep Learning, Pulmonary Embolism, Angiography, Humans, Pulmonary Embolism/diagnostic imaging, Retrospective Studies, X-Ray Computed Tomography
16.
J Dairy Sci ; 105(10): 8411-8425, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36028340

ABSTRACT

The primary objective of this randomized controlled experiment was to evaluate the insemination dynamics and reproductive performance of cows managed with a targeted reproductive management (TRM) program designed to prioritize artificial insemination (AI) at detected estrus (AIE) and to optimize the timing of AI by grouping cows based on detection of estrus during the voluntary waiting period (VWP). Our secondary objective was to evaluate reproductive outcomes for cows with or without estrus during the VWP. Lactating Holstein cows fitted with an ear-attached sensor for detection of estrus were randomly assigned to a TRM treatment that prioritized AIE based on detection of estrus during the VWP (TP-AIE; n = 488), a non-TRM treatment that prioritized AIE (P-AIE; n = 489), or an all timed AI (TAI) treatment with extended VWP (ALL-TAI; n = 491). In TP-AIE, cows with or without automated estrus alerts (AEA) recorded during the VWP received AIE if detected in estrus for at least 31 ± 3 or 17 ± 3 d after a 49 d VWP, respectively. Cows not AIE, with or without AEA during the VWP, received TAI after Ovsynch with progesterone supplementation and 2 PGF2α treatments (P4-Ov) at 90 ± 3 or 74 ± 3 d in milk (DIM), respectively. In P-AIE, cows received AIE if detected in estrus for 24 ± 3 d after a 49 d VWP, and if not AIE received TAI at 83 ± 3 DIM after P4-Ov. In ALL-TAI, cows received TAI at 83 ± 3 DIM after a Double-Ovsynch protocol. Data were analyzed by logistic and Cox proportional hazards regression. The proportion of cows AIE did not differ between TP-AIE (71.0%) and P-AIE (74.6%). Overall P/AI at 39 d after first service was greater for the ALL-TAI (47.6%) than for the P-AIE (40.2%) and TP-AIE (39.5%) treatments. The hazard of pregnancy up to 150 DIM was greater for cows in TP-AIE (hazard ratio = 1.2; 95% confidence interval: 1.1-1.4) and P-AIE (hazard ratio = 1.2; 95% confidence interval: 1.1-1.4) than for cows in the ALL-TAI treatment, which resulted in median times to pregnancy of 89, 89, and 107 d, respectively. Conversely, the proportion of cows pregnant at 150 DIM did not differ (ALL-TAI 78.5%, P-AIE 76.3%, TP-AIE 76.0%). Except for a few outcomes for which no difference was observed, cows detected in estrus during the VWP had better performance than cows not detected in estrus. Cows with AEA during the VWP were more likely to receive AIE, had greater P/AI, and had a greater pregnancy rate up to 150 DIM regardless of first-service management. We conclude that a TRM program designed to prioritize AIE by grouping cows based on detection of estrus during the VWP was an effective strategy for submitting cows to first service, resulting in similar or improved performance compared with a non-TRM program that prioritized AIE or an all-TAI program with extended VWP. Also, AEA recorded during the VWP might be used as a strategy for identifying subgroups of cows with different reproductive performance.
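
The time-to-pregnancy analysis uses Cox proportional hazards regression; a minimal sketch with the lifelines package (hypothetical column names and data, not the trial dataset) is shown below.

```python
# Illustrative Cox proportional hazards model for days to pregnancy by treatment.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical records: days in milk at pregnancy (or censoring), event indicator,
# and dummy-coded treatment contrasts against the ALL-TAI reference group.
df = pd.DataFrame({
    "days_to_pregnancy": [89, 95, 150, 107, 80, 150, 92, 110],
    "pregnant":          [1,  1,  0,   1,   1,  0,   1,  1],
    "TP_AIE":            [1,  1,  1,   0,   0,  0,   0,  0],
    "P_AIE":             [0,  0,  0,   1,   1,  1,   0,  0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_pregnancy", event_col="pregnant")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```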


Subjects
Estrus Detection, Estrus Synchronization, Animals, Cattle, Dinoprost, Estrus, Estrus Detection/methods, Estrus Synchronization/methods, Female, Gonadotropin-Releasing Hormone, Artificial Insemination/methods, Artificial Insemination/veterinary, Lactation, Pregnancy, Progesterone, Prostaglandins F
17.
Sensors (Basel) ; 22(5)2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35270949

ABSTRACT

Diabetic retinopathy (DR) is a predominant cause of visual impairment and loss. Approximately 285 million people worldwide are affected by diabetes, and one-third of these patients have symptoms of DR. It tends to affect patients who have had diabetes for 20 years or more, but its impact can be reduced by early detection and proper treatment. Diagnosing DR by manual methods is a time-consuming and expensive task that requires trained ophthalmologists to observe and evaluate DR using digital fundus images of the retina. This study aims to systematically find and analyze high-quality research work on the diagnosis of DR using deep learning approaches. The review covers DR grading and staging protocols and presents a DR taxonomy. Furthermore, it identifies, compares, and investigates deep learning-based algorithms, techniques, and methods for classifying DR stages. Publicly available datasets used for deep learning are also analyzed and summarized to support descriptive and empirical understanding for real-time DR applications. Our in-depth study shows that in the last few years there has been an increasing inclination towards deep learning approaches: 35% of the studies used convolutional neural networks (CNNs), 26% implemented ensemble CNNs (ECNNs), and 13% used deep neural networks (DNNs), making these the most used algorithms for DR classification. Thus, deep learning algorithms for DR diagnostics hold future research potential for solutions based on early DR detection and prevention.


Subjects
Deep Learning, Diabetes Mellitus, Diabetic Retinopathy, Computers, Diabetic Retinopathy/diagnostic imaging, Fundus Oculi, Humans, Neural Networks (Computer)
18.
J Environ Manage ; 320: 115807, 2022 Oct 15.
Article in English | MEDLINE | ID: mdl-35944320

ABSTRACT

In the field of species conservation, unmanned aerial vehicles (UAVs) are increasingly popular as wildlife observation and monitoring tools. With the large datasets created by UAV-based species surveys, the need arose to automate the species detection process. Although the use of machine learning algorithms for wildlife detection from UAV-derived imagery is an increasing trend, it depends on a large amount of imagery of the species to train the object detector effectively. However, alternatives such as object-based image analysis (OBIA) software are available if a large amount of imagery of the species is not available to develop a learned object detector. This study tested the semi-automated detection of reintroduced Arabian oryx (O. leucoryx), using the species' coat sRGB colour profile as input for OBIA to identify adult O. leucoryx in UAV-acquired imagery. Our method uses laboratory-measured spectral reflectance values of hair samples collected from captive O. leucoryx as input for an OBIA ruleset that identifies adult O. leucoryx in UAV survey imagery using semi-automated supervised classification. Converting the mean CIE Lab reflectance spectrometry colour values of n = 50 hair samples of adult O. leucoryx to an 8-bit sRGB colour profile of the species resulted in a red-band value of 157.450, a green-band value of 151.390, and a blue-band value of 140.832. The sRGB values and a minimum-size parameter were used as input to the OBIA ruleset, which identified adult O. leucoryx with a high degree of efficiency when applied to three UAV census datasets. Using species sRGB colour profiles to identify reintroduced O. leucoryx and extract location data with a non-invasive UAV-based tool is a novel method with enormous application possibilities. Coat-reflectance sRGB colour profiles can be developed for a range of species and customised to automatically detect and classify the species in remote sensing data.
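
The Lab-to-sRGB conversion underlying the colour profile can be done with scikit-image as sketched below; the Lab values here are hypothetical placeholders, not the paper's measured hair-sample values.

```python
# Illustrative CIE Lab -> 8-bit sRGB conversion for a mean coat-colour measurement.
import numpy as np
from skimage.color import lab2rgb

# Hypothetical mean Lab values for a set of hair-sample reflectance measurements.
mean_lab = np.array([63.0, 1.5, 6.0])           # L*, a*, b*

rgb = lab2rgb(mean_lab.reshape(1, 1, 3))[0, 0]  # floats in [0, 1]
rgb_8bit = np.round(rgb * 255, 3)
print(f"sRGB colour profile (8-bit): R={rgb_8bit[0]}, G={rgb_8bit[1]}, B={rgb_8bit[2]}")
```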


Subjects
Algorithms, Remote Sensing Technology, Animals, Wild Animals, Computer-Assisted Image Processing, Remote Sensing Technology/methods, Software, Spectrum Analysis
19.
Medicina (Kaunas) ; 58(11)2022 Nov 19.
Article in English | MEDLINE | ID: mdl-36422216

ABSTRACT

Background and Objectives: The number of patients who undergo multiple operations on a knee is increasing. The objective of this study was to develop a deep learning algorithm that could detect 17 different surgical implants on plain knee radiographs. Materials and Methods: An internal dataset consisting of 5206 plain antero-posterior knee X-rays from a single tertiary institute was used for model development. An external set contained 238 X-rays from another tertiary institute. A total of 17 different implant types, including total knee arthroplasty, unicompartmental knee arthroplasty, plates, and screws, were labeled. The internal dataset was split into a training set, a validation set, and an internal test set at an approximate ratio of 7:1:2. You Only Look Once (YOLO) was selected as the detection network. Model performance on the validation set, internal test set, and external test set was compared. Results: Total accuracy, total sensitivity, and total specificity values for the validation set, internal test set, and external test set were (0.978, 0.768, 0.999), (0.953, 0.810, 0.990), and (0.956, 0.493, 0.975), respectively. Means ± standard deviations (SDs) of the diagonal components of the confusion matrix for these three subsets were 0.858 ± 0.242, 0.852 ± 0.182, and 0.576 ± 0.312, respectively. The true-positive rate for total knee arthroplasty, the most dominant class in the dataset, was higher than 0.99 on the internal subsets and 0.96 on the external test set. Conclusion: Implant identification on plain knee radiographs can be automated using a deep learning technique. The detection algorithm handled overlapping cases while maintaining high accuracy on total knee arthroplasty. This could be applied in future research that analyzes X-ray images with deep learning, which would help prompt decision-making in clinics.
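
The abstract names YOLO as the detector but gives no code; the sketch below shows how a comparable detector could be trained and run with the ultralytics package, one widely used YOLO implementation (not necessarily the version used in the study). The dataset YAML, weights file, and image path are placeholders.

```python
# Illustrative sketch: training and running a YOLO detector for knee-implant classes
# using the ultralytics package (a stand-in for the study's YOLO version).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # pretrained weights as a starting point
model.train(data="knee_implants.yaml",           # placeholder dataset config (17 classes)
            epochs=100, imgsz=640)

results = model.predict("knee_ap_xray.png", conf=0.25)   # placeholder radiograph
for box in results[0].boxes:
    print(int(box.cls), float(box.conf))         # predicted implant class and confidence
```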


Subjects
Knee Arthroplasty, Deep Learning, Humans, Radiography, Algorithms, Knee Joint/diagnostic imaging, Knee Joint/surgery
20.
J Med Virol ; 93(5): 2838-2847, 2021 05.
Article in English | MEDLINE | ID: mdl-33231312

ABSTRACT

The ongoing coronavirus disease 2019 (COVID-19) epidemic has made a huge impact on health, economies, and societies all over the world. Although reverse transcription-polymerase chain reaction (RT-PCR)-based nucleic acid detection has been primarily used in the diagnosis of COVID-19, it is time-consuming, has limited application scenarios, and must be performed by qualified personnel. Antibody testing, particularly point-of-care antibody testing, is a suitable complement to nucleic acid testing, as it provides rapid, portable, and cost-effective detection of infections. In this study, a Rapid Antibody Test Kit was developed based on fluorescence immunochromatography for the sensitive, accurate, and automated detection of immunoglobulin M (IgM) and IgG antibodies against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in human serum, plasma, and whole blood samples within 10 min. The sensitivity, specificity, precision, and stability of the test kit showed good performance, and no cross-reactivity or interference was observed. In a multicenter parallel study, 223 samples from hospitalized patients were used to evaluate the clinical specificity of the test. Both the SARS-CoV-2 IgM and IgG assays achieved a clinical specificity of 98.21%. The clinical sensitivities of SARS-CoV-2 IgM and IgG were 79.54% and 87.45%, respectively, among 733 RT-PCR-confirmed SARS-CoV-2 samples. For the combined IgM and IgG assays, the sensitivity and specificity were 89.22% and 96.86%, respectively. Our results demonstrate that the combined use of IgM and IgG could serve as a more suitable alternative detection method for patients with COVID-19, and the developed kit is of great public health significance for the prevention and control of the COVID-19 pandemic.


Subjects
Viral Antibodies/blood, COVID-19 Testing/methods, COVID-19/diagnosis, Immunofluorescence/methods, Immunoassay/methods, Immunoglobulin G/blood, Immunoglobulin M/blood, Diagnostic Reagent Kits, Adolescent, Adult, Aged, Aged 80 and over, Animals, COVID-19/immunology, Child, Preschool Child, Female, Fluorescence, Humans, Male, Mice, Middle Aged, Point-of-Care Testing, Recombinant Proteins, SARS-CoV-2/immunology, SARS-CoV-2/isolation & purification, Sensitivity and Specificity, Coronavirus Spike Glycoprotein/genetics, Coronavirus Spike Glycoprotein/immunology, Young Adult