Results 1 - 10 of 10
1.
Sci Rep ; 14(1): 14611, 2024 06 25.
Article in English | MEDLINE | ID: mdl-38918593

ABSTRACT

Residents learn the vesico-urethral anastomosis (VUA), a key step in robot-assisted radical prostatectomy (RARP), early in their training. VUA assessment and training significantly impact patient outcomes and have high educational value. This study aimed to develop objective prediction models for the Robotic Anastomosis Competency Evaluation (RACE) metrics using electroencephalogram (EEG) and eye-tracking data. Data were recorded from 23 participants performing robot-assisted VUA (henceforth 'anastomosis') on plastic models and animal tissue using the da Vinci surgical robot. EEG and eye-tracking features were extracted, and participants' anastomosis subtask performance was assessed by three raters using the RACE tool and operative videos. Random forest regression (RFR) and gradient boosting regression (GBR) models were developed to predict RACE scores using extracted features, while linear mixed models (LMM) identified associations between features and RACE scores. Overall performance scores significantly differed among inexperienced, competent, and experienced skill levels (P value < 0.0001). For plastic anastomoses, R2 values for predicting unseen test scores were: needle positioning (0.79), needle entry (0.74), needle driving and tissue trauma (0.80), suture placement (0.75), and tissue approximation (0.70). For tissue anastomoses, the values were 0.62, 0.76, 0.65, 0.68, and 0.62, respectively. The models could enhance RARP anastomosis training by offering objective performance feedback to trainees.
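Below is a minimal, hedged sketch of the kind of modeling pipeline this abstract describes: random forest and gradient boosting regressors predicting a RACE subtask score from EEG and eye-tracking features, with whole participants held out for testing. The feature matrix, score scale, and dimensions are illustrative placeholders, not the study's data.

```python
# Hedged sketch: predicting a RACE subtask score (e.g., needle driving) from
# EEG and eye-tracking features with random forest and gradient boosting
# regressors, holding out whole participants for testing. All values are
# synthetic placeholders, not the study's variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_trials, n_features = 230, 40               # e.g., 23 participants x 10 anastomoses
X = rng.normal(size=(n_trials, n_features))  # EEG band power + gaze features (placeholder)
y = rng.uniform(1, 5, size=n_trials)         # RACE subtask score on a 1-5 scale
groups = np.repeat(np.arange(23), 10)        # participant IDs, so no subject leaks into the test set

train_idx, test_idx = next(GroupShuffleSplit(test_size=0.2, random_state=0).split(X, y, groups))
for model in (RandomForestRegressor(n_estimators=300, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X[train_idx], y[train_idx])
    print(type(model).__name__, "R2:", round(r2_score(y[test_idx], model.predict(X[test_idx])), 2))
```

Holding out participants rather than individual trials mirrors the abstract's goal of predicting unseen test scores.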


Subjects
Anastomosis, Surgical; Clinical Competence; Electroencephalography; Machine Learning; Robotic Surgical Procedures; Urethra; Humans; Anastomosis, Surgical/methods; Robotic Surgical Procedures/education; Robotic Surgical Procedures/methods; Electroencephalography/methods; Male; Urethra/surgery; Eye-Tracking Technology; Prostatectomy/methods; Urinary Bladder/surgery
2.
Brain Res Bull ; 214: 110992, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38825253

ABSTRACT

Electroencephalogram (EEG) is an effective, non-invasive technology for studying mental workload. However, volume conduction, a common EEG artifact, confounds functional connectivity analysis of EEG data. EEG coherence has traditionally been used to investigate functional connectivity between brain areas associated with mental workload, while the weighted Phase Lag Index (wPLI) improves on coherence by reducing susceptibility to volume conduction. The goal of this study was to compare these two functional connectivity measures, wPLI and coherence, in the context of mental workload evaluation. The study involved developing models for mental workload domains and comparing their performance using coherence-based features, wPLI-based features, and a combination of both. A generalized linear mixed-effects model (GLMM) with the least absolute shrinkage and selection operator (LASSO) feature selection method was used for model development. Results indicated that the model developed using a combination of both feature types showed improved predictive performance across all mental workload domains compared with models that used each feature type individually. The R2 values were 0.82 for perceived task complexity, 0.71 for distraction, 0.91 for mental demand, 0.85 for physical demand, 0.74 for situational stress, and 0.74 for temporal demand. Furthermore, task complexity and functional connectivity patterns in different brain areas were identified as significant contributors to perceived mental workload (p-value < 0.05). The findings show the potential of EEG data for mental workload evaluation and suggest that combining coherence and wPLI can improve the accuracy of mental workload domain prediction. Future research should aim to validate these results on larger, more diverse datasets to confirm their generalizability and refine the predictive models.
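The contrast between coherence and wPLI can be illustrated with a small numerical sketch: for a purely zero-lag shared source (the volume-conduction scenario), coherence is high while wPLI stays near zero, because wPLI uses only the imaginary part of the cross-spectrum. The signals, epoch counts, and band edges below are illustrative assumptions, not the study's data.

```python
# Hedged sketch: coherence vs. weighted phase lag index (wPLI) for one channel
# pair, estimated across epochs from synthetic signals sharing a zero-lag source.
import numpy as np

fs, n_epochs, n_samples = 256, 60, 512
rng = np.random.default_rng(1)
common = rng.normal(size=(n_epochs, n_samples))            # shared (volume-conducted) source
x = common + 0.5 * rng.normal(size=(n_epochs, n_samples))
y = common + 0.5 * rng.normal(size=(n_epochs, n_samples))

Fx, Fy = np.fft.rfft(x, axis=1), np.fft.rfft(y, axis=1)
Sxy = Fx * np.conj(Fy)                                      # cross-spectrum per epoch
Sxx, Syy = np.abs(Fx) ** 2, np.abs(Fy) ** 2

coherence = np.abs(Sxy.mean(0)) ** 2 / (Sxx.mean(0) * Syy.mean(0))
wpli = np.abs(np.imag(Sxy).mean(0)) / (np.abs(np.imag(Sxy)).mean(0) + 1e-12)

freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
alpha = (freqs >= 8) & (freqs <= 13)
print("alpha-band coherence:", coherence[alpha].mean().round(3))   # high: shared source dominates
print("alpha-band wPLI     :", wpli[alpha].mean().round(3))        # near zero: no consistent phase lag
```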


Subjects
Brain; Electroencephalography; Workload; Humans; Electroencephalography/methods; Male; Female; Adult; Brain/physiology; Young Adult; Brain Mapping/methods
3.
NPJ Sci Learn ; 9(1): 3, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38242909

ABSTRACT

Existing performance evaluation methods in robot-assisted surgery (RAS) are mainly subjective, costly, and affected by shortcomings such as inconsistent results and dependency on raters' opinions. The aim of this study was to develop models for objective evaluation of performance and of the rate of learning RAS skills while practicing surgical simulator tasks. Electroencephalogram (EEG) and eye-tracking data were recorded from 26 subjects performing the Tubes, Suture Sponge, and Dots and Needles tasks. Performance scores were generated by the simulator program. Functional brain networks were extracted using EEG data and coherence analysis. These networks, along with community detection analysis, were then used to extract average search information and average temporal flexibility features at 21 Brodmann areas (BA) and four frequency bands. Twelve eye-tracking features were extracted and used to develop linear random intercept models for performance evaluation and multivariate linear regression models for evaluation of the learning rate. Results showed that subject-wise standardization of features improved the R2 of the models. Average pupil diameter and rate of saccades were associated with performance in the Tubes task (multivariate analysis; p-value = 0.01 and p-value = 0.04, respectively). Entropy of pupil diameter was associated with performance in the Dots and Needles task (multivariate analysis; p-value = 0.01). Average temporal flexibility and search information in several BAs and frequency bands were associated with performance and rate of learning. The models may be used to objectify evaluation of performance and learning rate in RAS once validated with a larger sample and a broader set of tasks.
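A hedged sketch of the linear random-intercept modeling described above, using statsmodels: a subject-wise standardized eye-tracking feature (average pupil diameter is assumed here) predicts a simulator performance score, with participant as the grouping factor. The data frame and column names are placeholders, not the study's dataset.

```python
# Hedged sketch: linear random-intercept model relating a subject-wise
# standardized eye-tracking feature to a simulator performance score.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "subject": np.repeat(np.arange(26), 5),
    "pupil_diameter": rng.normal(3.5, 0.6, size=26 * 5),
    "score": rng.normal(70, 10, size=26 * 5),
})
# Subject-wise standardization, which the abstract reports improved model R2
df["pupil_z"] = df.groupby("subject")["pupil_diameter"].transform(
    lambda s: (s - s.mean()) / s.std())

model = smf.mixedlm("score ~ pupil_z", df, groups=df["subject"]).fit()
print(model.summary())
```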

4.
J Robot Surg ; 17(6): 2963-2971, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37864129

ABSTRACT

The aim of this study was to develop machine learning classification models using electroencephalogram (EEG) and eye-gaze features to predict the level of surgical expertise in robot-assisted surgery (RAS). EEG and eye-gaze data were recorded from 11 participants who performed cystectomy, hysterectomy, and nephrectomy using the da Vinci robot. Skill level was evaluated by an expert RAS surgeon using the modified Global Evaluative Assessment of Robotic Skills (GEARS) tool, and data from three subtasks were extracted to classify skill levels using three classification models: multinomial logistic regression (MLR), random forest (RF), and gradient boosting (GB). The GB algorithm was also applied to a combination of EEG and eye-gaze data to classify skill levels, and differences between the models were tested using two-sample t-tests. The GB model using EEG features showed the best performance for blunt dissection (83% accuracy), retraction (85% accuracy), and burn dissection (81% accuracy). Combining EEG and eye-gaze features with the GB algorithm improved the accuracy of skill level classification to 88% for blunt dissection, 93% for retraction, and 86% for burn dissection. Implementing objective skill classification models in clinical settings may enhance the RAS surgical training process by providing surgeons and their teachers with objective feedback about performance.
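As a rough illustration of the combined-feature approach, the sketch below trains a gradient boosting classifier on concatenated EEG and eye-gaze feature blocks and estimates accuracy with participant-wise cross-validation. All arrays, labels, and dimensions are synthetic assumptions rather than the study's data.

```python
# Hedged sketch: classifying GEARS-derived skill levels from combined EEG and
# eye-gaze features with a gradient boosting classifier, leaving whole
# participants out of training.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(3)
n_segments = 11 * 12                        # e.g., 11 participants x 12 subtask segments
X_eeg = rng.normal(size=(n_segments, 30))   # spectral EEG features (placeholder)
X_gaze = rng.normal(size=(n_segments, 10))  # fixation / pupil features (placeholder)
X = np.hstack([X_eeg, X_gaze])              # combined feature set
y = rng.integers(0, 3, size=n_segments)     # novice / intermediate / expert labels
groups = np.repeat(np.arange(11), 12)       # participant IDs

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5), groups=groups)
print("mean accuracy:", scores.mean().round(2))
```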


Subjects
Robotic Surgical Procedures; Robotics; Surgeons; Female; Humans; Robotic Surgical Procedures/methods; Surgeons/education; Electroencephalography; Machine Learning; Clinical Competence
5.
NPJ Aging ; 9(1): 22, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803137

ABSTRACT

Cognition, defined as the ability to learn, remember, sustain attention, make decisions, and solve problems, is essential in daily activities and in learning new skills. The purpose of this study was to develop cognitive workload and performance evaluation models using features extracted from electroencephalogram (EEG) data through functional brain network and spectral analyses. The EEG data were recorded from 124 brain areas of 26 healthy participants conducting two cognitive tasks on a robot simulator. Functional brain network and power spectral density features were extracted from the EEG data using coherence and spectral analyses, respectively. Participants reported their perceived cognitive workload using the SURG-TLX questionnaire after each exercise, and the simulator generated actual performance scores. The extracted features, actual performance scores, and subjectively assessed cognitive workload values were used to develop linear models for evaluating performance and cognitive workload. Furthermore, Pearson correlation was used to assess the relationships among participants' age, performance, and cognitive workload. The findings demonstrated that combined EEG features retrieved from spectral analysis and functional brain networks can be used to evaluate cognitive workload and performance. Cognitive workload was correlated with age only for Matchboard level 3, which is more challenging than Matchboard level 2 (r = 0.54, p-value = 0.01). This finding may suggest that more challenging computer games are more helpful in identifying changes in cognitive workload caused by aging. The findings could open the door to a new era of objective evaluation and monitoring of cognitive workload and performance.
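A hedged sketch of the feature-extraction and correlation steps: Welch power spectral density features from multichannel EEG, plus Pearson correlations involving age, a workload measure, and an example spectral feature. The signal, channel count (reduced from the study's 124 brain areas), band edges, and workload values are illustrative assumptions.

```python
# Hedged sketch: band power features via Welch's method and Pearson correlations
# with age and a SURG-TLX-style workload score. All data are synthetic.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs = 256
rng = np.random.default_rng(4)
eeg = rng.normal(size=(26, 8, fs * 30))            # participants x channels x samples (8 channels to keep it small)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
theta = (freqs >= 4) & (freqs < 8)
theta_power = psd[:, :, theta].mean(axis=(1, 2))   # mean theta power per participant

age = rng.integers(22, 65, size=26)
workload = rng.uniform(20, 120, size=26)           # perceived workload total (placeholder)
r_age, p_age = pearsonr(age, workload)
r_theta, p_theta = pearsonr(theta_power, workload)
print(f"age vs. workload:         r = {r_age:.2f}, p = {p_age:.3f}")
print(f"theta power vs. workload: r = {r_theta:.2f}, p = {p_theta:.3f}")
```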

6.
Surg Endosc ; 37(11): 8447-8463, 2023 11.
Article in English | MEDLINE | ID: mdl-37730852

ABSTRACT

OBJECTIVE: This study explored the use of electroencephalogram (EEG) and eye-gaze features, experience-related features, and machine learning to evaluate performance and learning rates in fundamentals of laparoscopic surgery (FLS) and robotic-assisted surgery (RAS). METHODS: EEG and eye-tracking data were collected from 25 participants performing three FLS tasks and 22 participants performing two RAS tasks. Generalized linear mixed models, using L1-penalized estimation, were developed to objectify performance evaluation using EEG and eye-gaze features, and linear models were developed to objectify learning rate evaluation using these features and the performance scores at the first attempt. Experience metrics were added to evaluate their role in learning robotic surgery. Differences in performance across experience levels were tested using analysis of variance. RESULTS: EEG and eye-gaze features and experience-related features were important for evaluating performance in FLS and RAS tasks, yielding reasonable predictive performance. Residents outperformed faculty in FLS peg transfer (p-value = 0.04), while faculty and residents both excelled over pre-medical students in the FLS pattern cut (p-value = 0.01 and p-value < 0.001, respectively). Fellows outperformed pre-medical students in FLS suturing (p-value = 0.01). In RAS tasks, both faculty and fellows surpassed pre-medical students (p-values for the RAS pattern cut were 0.001 for faculty and 0.003 for fellows, and less than 0.001 for both groups in RAS tissue dissection), with residents also showing superior skills in tissue dissection (p-value = 0.03). CONCLUSION: These findings could be used to develop training interventions for improving surgical skills and have implications for understanding motor learning and designing interventions to enhance learning outcomes.
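The sketch below uses an L1-penalized linear model (a simple stand-in for the L1-penalized generalized linear mixed models described above) to select features predicting a learning-rate estimate, with the first-attempt performance score included as a predictor. Data, dimensions, and the learning-rate definition are assumptions for illustration.

```python
# Hedged sketch: L1-penalized feature selection for learning-rate evaluation,
# standing in for the penalized mixed models used in the study.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_subjects, n_features = 25, 35
X = rng.normal(size=(n_subjects, n_features))        # EEG / eye-gaze features at first attempt (placeholder)
first_score = rng.uniform(0, 100, size=(n_subjects, 1))
X_full = np.hstack([X, first_score])                 # features plus first-attempt performance score
learning_rate = rng.normal(size=n_subjects)          # slope of score vs. attempt number (placeholder)

model = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
model.fit(X_full, learning_rate)
n_selected = np.sum(model.named_steps["lassocv"].coef_ != 0)
print("features retained by the L1 penalty:", n_selected)
```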


Subjects
Laparoscopy; Robotic Surgical Procedures; Humans; Fixation, Ocular; Clinical Competence; Laparoscopy/methods; Electroencephalography; Machine Learning
7.
Ann Surg Open ; 4(2)2023 Jun.
Article in English | MEDLINE | ID: mdl-37305561

ABSTRACT

Objective: Assessment of surgical skills is crucial for improving training standards and ensuring the quality of primary care. This study aimed to develop a gradient boosting classification model (GBM) to classify surgical expertise into inexperienced, competent, and experienced levels in robot-assisted surgery (RAS) using visual metrics. Methods: Eye gaze data were recorded from 11 participants performing four subtasks: blunt dissection, retraction, cold dissection, and hot dissection, using live pigs and the da Vinci robot. The eye gaze data were used to extract the visual metrics. One expert RAS surgeon evaluated each participant's performance and expertise level using the modified Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. The extracted visual metrics were used to classify surgical skill levels and to evaluate individual GEARS metrics. Analysis of variance (ANOVA) was used to test the differences in each feature across skill levels. Results: Classification accuracies for blunt dissection, retraction, cold dissection, and hot dissection were 95%, 96%, 96%, and 96%, respectively. Completion time differed significantly among the three skill levels only for the retraction subtask (p-value = 0.04). Performance differed significantly across the three surgical skill levels for all subtasks (p-values < 0.01). The extracted visual metrics were strongly associated with GEARS metrics (R2 > 0.7 for the GEARS metric evaluation models). Conclusions: Machine learning (ML) algorithms trained on visual metrics of RAS surgeons can classify surgical skill levels and evaluate GEARS measures. The time to complete a surgical subtask may not be suitable as a stand-alone factor for skill level assessment.
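As a rough sketch of the paired analyses described here, the code below runs a one-way ANOVA on a visual metric (completion time is assumed) across three skill levels and cross-validates a gradient boosting classifier on a synthetic visual-metric matrix. Group sizes, metric counts, and values are illustrative only.

```python
# Hedged sketch: one-way ANOVA on a visual metric across skill levels, plus a
# gradient boosting classifier trained on visual metrics. Values are synthetic.
import numpy as np
from scipy.stats import f_oneway
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
time_inexperienced = rng.normal(180, 30, size=12)
time_competent = rng.normal(150, 30, size=12)
time_experienced = rng.normal(120, 30, size=12)
f_stat, p_val = f_oneway(time_inexperienced, time_competent, time_experienced)
print(f"completion time ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

X = rng.normal(size=(36, 12))               # 12 visual metrics per trial (placeholder)
y = np.repeat([0, 1, 2], 12)                # inexperienced / competent / experienced
acc = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=3)
print("classification accuracy:", acc.mean().round(2))
```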

8.
J Dairy Sci ; 105(10): 8257-8271, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36055837

ABSTRACT

Dry matter intake (DMI) is a fundamental component of the animal's feed efficiency, but measuring the DMI of individual cows is expensive. Mid-infrared reflectance spectroscopy (MIRS) on milk samples could be an inexpensive alternative for predicting DMI. The objectives of this study were (1) to assess whether milk MIRS data could improve DMI predictions of Canadian Holstein cows using artificial neural networks (ANN); (2) to investigate the ability of different ANN architectures to predict unobserved DMI; and (3) to validate the robustness of the developed prediction models. A total of 7,398 milk samples from 509 dairy cows distributed over Canada, Denmark, and the United States were analyzed. Data from Denmark and the United States were used to increase the training data size and variability to improve the generalization of the prediction models over the lactation. For each milk spectra record, the corresponding weekly average DMI (kg/d), test-day milk yield (MY, kg/d), fat yield (FY, g/d), protein yield (PY, g/d), metabolic body weight (MBW), age at calving, year of calving, season of calving, days in milk, lactation number, country, and herd were available. The weekly average DMI was predicted with various ANN architectures using 7 predictor sets, which were created by different combinations of MY, FY, PY, MBW, and MIRS data. All predictor sets also included age at calving and days in milk. In addition, the classification effects of season of calving, country, and lactation number were included in all models. The explored ANN architectures consisted of 3 training algorithms (Bayesian regularization, Levenberg-Marquardt, and scaled conjugate gradient), 2 types of activation functions (hyperbolic tangent and linear), and from 1 to 10 neurons in the hidden layer. In addition, partial least squares regression was also applied to predict the DMI. Models were compared using cross-validation based on leaving out 10% of records (validation A) and leaving out 10% of cows (validation B). The superior fitting statistics of models comprising MIRS information, compared with models fitting milk, fat, and protein yields, suggest that other unknown milk components may help explain variation in weekly average DMI. For instance, using MY, FY, PY, and MBW as predictor variables produced a predictive accuracy (r) ranging from 0.510 to 0.652 across ANN models and validation sets, whereas using MIRS together with MY, FY, PY, and MBW as predictors improved the fit (r = 0.679-0.777). Including MIRS data improved the weekly average DMI prediction of Canadian Holstein cows, but MIRS appears to predict DMI mostly through its association with milk production traits, so its utility for estimating a measure of feed efficiency that accounts for the level of production, such as residual feed intake, might be limited and needs further investigation. The better predictive ability of nonlinear ANN compared with linear ANN and partial least squares regression indicates possible nonlinear relationships between weekly average DMI and the predictor variables. In general, ANN using the Bayesian regularization and scaled conjugate gradient training algorithms yielded slightly better weekly average DMI predictions than ANN using the Levenberg-Marquardt training algorithm.
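The record-wise versus cow-wise validation contrast (validations A and B) can be sketched as below, using a small tanh feed-forward network and partial least squares regression as stand-ins for the MATLAB-style ANN training algorithms named in the abstract. The spectra, covariates, DMI values, and cow and record counts are synthetic assumptions.

```python
# Hedged sketch: leave-out-records vs. leave-out-cows cross-validation for DMI
# prediction from MIRS-like predictors, with an MLP and PLS regression.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, GroupKFold, cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n_cows, recs_per_cow, n_wavenumbers = 100, 8, 100
cow = np.repeat(np.arange(n_cows), recs_per_cow)                     # cow IDs for grouping
spectra = rng.normal(size=(n_cows * recs_per_cow, n_wavenumbers))    # MIRS stand-in
covariates = rng.normal(size=(n_cows * recs_per_cow, 4))             # MY, FY, PY, MBW stand-ins
X = np.hstack([covariates, spectra])
dmi = rng.normal(24, 3, size=n_cows * recs_per_cow)                  # weekly average DMI, kg/d (placeholder)

for name, cv in (("validation A (records)", KFold(n_splits=10, shuffle=True, random_state=0)),
                 ("validation B (cows)", GroupKFold(n_splits=10))):
    mlp = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh", max_iter=500, random_state=0)
    pred = cross_val_predict(mlp, X, dmi, cv=cv, groups=cow)
    print(name, "r =", round(pearsonr(dmi, pred)[0], 3))

pls_pred = cross_val_predict(PLSRegression(n_components=10), X, dmi, cv=10).ravel()
print("PLS regression r =", round(pearsonr(dmi, pls_pred)[0], 3))
```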


Subjects
Lactation; Milk; Animals; Bayes Theorem; Body Weight; Canada; Cattle; Diet/veterinary; Female; Milk/chemistry; Neural Networks, Computer; Spectrophotometry, Infrared/veterinary
9.
J Dairy Sci ; 105(10): 8272-8285, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36055858

ABSTRACT

Interest in reducing eructed CH4 is growing, but measuring CH4 emissions is expensive and difficult in large populations. In this study, we investigated the effectiveness of milk mid-infrared spectroscopy (MIRS) data for predicting CH4 emission in lactating Canadian Holstein cows. A total of 181 weekly average CH4 records from 158 Canadian cows and 217 records from 44 Danish cows were used. For each milk spectra record, the corresponding weekly average CH4 emission (g/d), test-day milk yield (MY, kg/d), fat yield (FY, g/d), and protein yield (PY, g/d) were available. The weekly average CH4 emission was predicted using various artificial neural networks (ANN), partial least squares regression, and different sets of predictors. The ANN architectures consisted of 3 training algorithms, 1 to 10 neurons with the hyperbolic tangent activation function in the hidden layer, and 1 neuron with the linear (purelin) activation function in the output layer. Random cross-validation was used to compare the predictor sets: MY (set 1); FY (set 2); PY (set 3); MY and FY (set 4); MY and PY (set 5); MY, FY, and PY (set 6); MIRS (set 7); and MY, FY, PY, and MIRS (set 8). All predictor sets also included age at calving and days in milk, in addition to country, season of calving, and lactation number as categorical effects. Using only MY (set 1), the predictive accuracy (r) ranged from 0.245 to 0.457 and the root mean square error (RMSE) ranged from 87.28 to 99.39 across all prediction models and validation sets. Replacing MY with FY (set 2; r = 0.288-0.491; RMSE = 85.94-98.04) improved the predictive accuracy, but using PY (set 3; r = 0.260-0.468; RMSE = 86.95-98.47) did not. Adding FY to MY (set 4; r = 0.272-0.469; RMSE = 87.21-100.76) led to a negligible improvement compared with sets 1 and 3, but slightly decreased accuracy compared with set 2. Adding PY to MY (set 5; r = 0.250-0.451; RMSE = 87.66-100.94) did not improve prediction ability. Combining MY, FY, and PY (set 6; r = 0.252-0.455; RMSE = 87.74-101.93) yielded accuracy slightly lower than sets 2 and 3. Using only MIRS data (set 7; r = 0.586-0.717; RMSE = 69.09-96.20) resulted in superior accuracy compared with all previous sets. Finally, combining MIRS data with MY, FY, and PY (set 8; r = 0.590-0.727; RMSE = 68.02-87.78) yielded accuracy similar to set 7. Overall, sets including the MIRS data yielded significantly better predictions than the other sets. To assess the predictive ability in a new, unseen herd, a limited block cross-validation was performed using 20 cows from the same Canadian herd; it yielded r = 0.229 and RMSE = 154.44, clearly much worse than the average r = 0.704 and RMSE = 70.83 obtained when predictions were made by random cross-validation. These results warrant further investigation when more data become available to allow a more comprehensive block cross-validation before applying the calibrated models for large-scale prediction of CH4 emissions.
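A hedged sketch of the predictor-set comparison: partial least squares regression is evaluated by random cross-validation on nested predictor sets, from milk yield alone up to yields plus MIRS. Only a few of the eight sets are shown, and all data are synthetic stand-ins for the study's records.

```python
# Hedged sketch: comparing predictor sets for CH4 prediction with PLS regression
# under random (record-wise) cross-validation. All data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
n = 400
my, fy, py = rng.normal(size=(3, n, 1))                # MY, FY, PY stand-ins
mirs = rng.normal(size=(n, 150))                       # MIRS wavenumber stand-in
ch4 = rng.normal(400, 80, size=n)                      # weekly average CH4, g/d (placeholder)

predictor_sets = {
    "set 1: MY": my,
    "set 6: MY+FY+PY": np.hstack([my, fy, py]),
    "set 7: MIRS": mirs,
    "set 8: MY+FY+PY+MIRS": np.hstack([my, fy, py, mirs]),
}
for name, X in predictor_sets.items():
    n_comp = min(10, X.shape[1])
    pred = cross_val_predict(PLSRegression(n_components=n_comp), X, ch4, cv=10).ravel()
    r = pearsonr(ch4, pred)[0]
    rmse = np.sqrt(np.mean((ch4 - pred) ** 2))
    print(f"{name}: r = {r:.3f}, RMSE = {rmse:.1f}")
```

A cow-wise or herd-wise grouping of the folds, as in the abstract's block cross-validation, would replace cv=10 with a grouped splitter keyed on herd or cow IDs.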


Subjects
Lactation; Milk; Animals; Canada; Cattle; Female; Lactation/metabolism; Methane/metabolism; Milk/chemistry; Neural Networks, Computer; Purines; Spectrophotometry, Infrared/veterinary
10.
J Anim Sci ; 99(2)2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33626149

ABSTRACT

Monitoring, recording, and predicting livestock body weight (BW) allows for timely intervention in diets and health, greater efficiency in genetic selection, and identification of optimal times to market animals, because animals that have already reached the point of slaughter represent a burden for the feedlot. There are currently two main approaches (direct and indirect) to measuring BW in livestock. Direct approaches rely on partial-weight or full-weight industrial scales placed in designated locations on large farms that measure the weight of livestock passively or dynamically. While these devices are very accurate, their acquisition costs, the operation size they are intended for, and the repeated calibration and maintenance required by their placement in corrosive environments with high temperature variability make them unaffordable and unsustainable for small and medium-size farms, and even for some commercial operators. As a more affordable alternative to direct weighing approaches, indirect approaches have been developed based on observed or inferred relationships between biometric and morphometric measurements of livestock and their BW. Initial indirect approaches involved manual measurements of animals using measuring tapes and tubes, together with regression equations that correlate such measurements with BW. While these approaches achieve good BW prediction accuracy, they are time consuming, require trained and skilled farm laborers, and can be stressful for both animals and handlers, especially when repeated daily. With the concomitant advancement of contactless electro-optical sensors (e.g., 2D, 3D, and infrared cameras), computer vision (CV) technologies, and artificial intelligence fields such as machine learning (ML) and deep learning (DL), 2D and 3D images have started to be used as biometric and morphometric proxies for BW estimation. This manuscript provides a review of CV-based and ML/DL-based BW prediction methods and discusses their strengths, weaknesses, and potential for industry application.
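As a toy illustration of the indirect approach the review covers, the sketch below regresses body weight on two morphometric proxies (heart girth and body length, as a computer-vision pipeline might extract them from 2D/3D images). The simulated allometric relationship, measurement ranges, and models are assumptions for illustration, not results from the reviewed literature.

```python
# Hedged sketch: indirect BW prediction from morphometric proxies using linear
# regression and a random forest. All measurements are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(9)
n = 500
heart_girth = rng.uniform(150, 210, size=n)            # cm, as if extracted from a depth image
body_length = rng.uniform(120, 170, size=n)            # cm
bw = 0.00012 * heart_girth**2 * body_length + rng.normal(0, 15, size=n)  # toy allometric rule, kg

X = np.column_stack([heart_girth, body_length])
X_tr, X_te, y_tr, y_te = train_test_split(X, bw, test_size=0.2, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(type(model).__name__, "MAE (kg):", round(mae, 1))
```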


Subjects
Artificial Intelligence; Livestock; Animals; Body Weight; Machine Learning; Selection, Genetic