Results 1 - 20 of 316,517
1.
Rev. esp. patol ; 57(2): 77-83, Apr-Jun, 2024. tab, illus
Article in Spanish | IBECS | ID: ibc-232410

ABSTRACT

Introduction: In a pathology department, the workload in medical time is analyzed as a function of the complexity of the samples received, and its distribution among pathologists is assessed, presenting a new computer algorithm that favors an equitable distribution. Methods: Following the second edition of the Spanish guidelines for the estimation of workload in cytopathology and histopathology (medical time) according to the Spanish Pathology Society-International Academy of Pathology (SEAP-IAP) catalog of samples and procedures, we determined the workload units (UCL) per pathologist and the overall UCL of the service, the average workload borne by the service (MU factor), the time each pathologist dedicated to healthcare activity, and the optimal number of pathologists for the service's workload. Results: We determined 12,197 total annual UCL for the chief pathologist and 14,702 and 13,842 UCL for the associate pathologists, for an overall service total of 40,742 UCL. The calculated MU factor is 4.97. The chief pathologist devoted 72.25% of his working day to healthcare activity, while the associate pathologists dedicated 87.09% and 82.01% of their working hours. The optimal number of pathologists for the service is 3.55. Conclusions: The results demonstrate medical work overload and a non-equitable distribution of UCL among pathologists. We propose a computer algorithm, linked to the laboratory information system, capable of distributing the workload equitably while taking into account the type of specimen, its complexity, and each pathologist's dedication to healthcare activity.(AU)


Subject(s)
Humans , Male , Female , Pathology , Workload , Pathologists , Pathology Department, Hospital , Algorithms
2.
J Tradit Chin Med ; 44(3): 505-514, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38767634

ABSTRACT

OBJECTIVE: To evaluate the quality of Moyao (Myrrh) by identifying the geographical origin and the processing method of the products. METHODS: Raw Moyao (Myrrh) and two kinds of vinegar-processed Moyao (Myrrh) from three countries were identified using near-infrared (NIR) spectroscopy combined with chemometric techniques. Principal component analysis (PCA) was used to reduce the dimensionality of the data and visualize the clustering of samples from different categories. A classical chemometric algorithm, partial least squares discriminant analysis (PLS-DA), and two machine learning algorithms [K-nearest neighbor (KNN) and support vector machine] were used to classify the near-infrared spectra of the Moyao (Myrrh) samples, and their discriminative performance was evaluated. RESULTS: Judged by the accuracy, precision, recall, and F1 score of each model, both the classical chemometric algorithm and the machine learning algorithms obtained positive results. Across all chemometric analyses, NIR spectra preprocessed by standard normal variate (SNV) or multiplicative scatter correction (MSC) combined with KNN achieved the highest accuracy in identifying geographical origin, and the KNN model built after first-derivative pretreatment identified the processing technology best. The best accuracies for geographical origin discrimination and processing technology discrimination were 0.9853 and 0.9706, respectively. CONCLUSIONS: NIR spectroscopy combined with chemometric techniques can be an important tool for tracing the origin and processing technology of Moyao (Myrrh), and can also serve as a reference for evaluating its quality and clinical use.
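
As a rough illustration of the pipeline described above, the following sketch preprocesses spectra with SNV and classifies them with KNN using scikit-learn; the file names, array layout, and hyperparameters are assumptions for illustration, not the paper's actual data or settings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# X: (n_samples, n_wavelengths) NIR absorbance matrix; y: origin labels.
X = np.load("myrrh_nir_spectra.npy")   # placeholder file names, for illustration only
y = np.load("myrrh_origins.npy")

X_tr, X_te, y_tr, y_te = train_test_split(snv(X), y, test_size=0.25,
                                          stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```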


Subject(s)
Spectroscopy, Near-Infrared , Spectroscopy, Near-Infrared/methods , Principal Component Analysis , Chemometrics/methods , Drugs, Chinese Herbal/chemistry , Geography , Algorithms , China
3.
PLoS One ; 19(5): e0300924, 2024.
Article in English | MEDLINE | ID: mdl-38768105

ABSTRACT

Identifying hydrogenation catalyst information has long been an important task in the chemical industry. To help researchers efficiently screen high-performance catalyst carriers, an intelligent method for recognizing hydrogenation catalyst images is needed. To address the low recognition accuracy caused by adhesion and stacking of hydrogenation catalysts, this paper proposes an image recognition algorithm for hydrogenation catalysts based on FPNC Net. A ResNet-50 backbone network was used to extract features, and spatially separable convolution kernels were used to extract multi-scale features of the catalyst fringes. In addition, to effectively segment the adhesive regions of the stripes, a feature pyramid network (FPN) was added to the backbone for deep and shallow feature fusion. An attention module that adaptively adjusts weights was introduced to highlight the target features of the catalyst. The experimental results showed that the FPNC Net model achieved an accuracy of 94.2% and an AP improvement of 19.37% over the original CenterNet model. The improved model thus detects hydrogenation catalyst targets with substantially higher accuracy.
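
For readers unfamiliar with FPN-style fusion, here is a minimal toy sketch of one top-down lateral connection in PyTorch; it is a generic illustration of the idea, not the authors' FPNC Net, and the channel counts are assumed:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    """One top-down FPN step: 1x1 lateral convs align channels, the deeper map
    is upsampled and added to the shallower one, then smoothed by a 3x3 conv."""
    def __init__(self, c_deep, c_shallow, c_out=256):
        super().__init__()
        self.lat_deep = nn.Conv2d(c_deep, c_out, kernel_size=1)
        self.lat_shallow = nn.Conv2d(c_shallow, c_out, kernel_size=1)
        self.smooth = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)

    def forward(self, deep, shallow):
        top = F.interpolate(self.lat_deep(deep), size=shallow.shape[-2:], mode="nearest")
        return self.smooth(self.lat_shallow(shallow) + top)

# e.g., fusing two ResNet-50 stage outputs (channel counts assumed)
fused = TinyFPN(2048, 1024)(torch.randn(1, 2048, 7, 7), torch.randn(1, 1024, 14, 14))
```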


Subject(s)
Algorithms , Deep Learning , Catalysis , Hydrogenation , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
4.
PLoS One ; 19(5): e0300017, 2024.
Article in English | MEDLINE | ID: mdl-38768119

ABSTRACT

With the increasing application of traffic scene image classification in intelligent transportation systems, there is growing demand for accuracy and robustness in this task. However, because of weather conditions, time of day, lighting variations, and annotation costs, traditional deep learning methods remain limited in extracting complex traffic scene features and achieving high recognition accuracy. Previous classification methods for traffic scene images left gaps in multi-scale feature extraction and in combining frequency-domain, spatial, and channel attention. To address these issues, this paper proposes a multi-scale and multi-attention model based on Res2Net. Our framework introduces an Adaptive Feature Refinement Pyramid Module (AFRPM) to enhance multi-scale feature extraction and thus improve classification accuracy. Additionally, we integrate frequency-domain and spatial-channel attention mechanisms to better recognize complex backgrounds, objects of different scales, and local details in traffic scene images. We evaluate the model on the traffic scene classification task using the Traffic-Net dataset. The experimental results demonstrate an accuracy of 96.88%, an improvement of approximately 2% over the baseline Res2Net network. Furthermore, we validate the effectiveness of the proposed modules through ablation experiments.
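
As a generic illustration of channel attention (one ingredient of the spatial-channel mechanisms mentioned), a squeeze-and-excitation-style gate can be sketched in PyTorch; this is not the paper's module, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel gate: global average pooling feeds
    a small bottleneck whose sigmoid output reweights the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

out = ChannelAttention(256)(torch.randn(2, 256, 32, 32))  # shape preserved
```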


Subject(s)
Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Algorithms , Deep Learning , Neural Networks, Computer , Humans
5.
PLoS One ; 19(5): e0297462, 2024.
Article in English | MEDLINE | ID: mdl-38768117

ABSTRACT

Considering the advantages of the q-rung orthopair fuzzy 2-tuple linguistic set (q-RFLS), which combines linguistic and numeric data to describe evaluations, this article designs a new decision-making methodology that integrates the Vlsekriterijumska Optimizacija I Kompromisno Resenje (VIKOR) and qualitative flexible (QUALIFLEX) methods, based on revised aggregation operators, to solve multiple-criteria group decision making (MCGDM). To accomplish this, we first revise the existing operational laws of q-RFLSs to remedy their shortcomings. Based on the novel operational laws, we develop q-rung orthopair fuzzy 2-tuple linguistic (q-RFL) weighted averaging and geometric operators and derive the corresponding results. Next, we develop a maximizing-deviation model to determine the criterion weights in the decision-making procedure, which accommodates partially unknown weight information. The VIKOR and QUALIFLEX methodologies are then combined: the concordance index of each ranking combination is assessed using the group utility and the maximum individual regret of each alternative, and the ranking result is obtained from the general concordance index values of each permutation. A case study on selecting the best bike-sharing recycling supplier demonstrates the applicability and feasibility of the proposed VIKOR-QUALIFLEX MCGDM method. Finally, sensitivity and comparative analyses demonstrate the validity and superiority of the proposed method.
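
For orientation, the crisp (non-fuzzy) core of VIKOR can be sketched as follows; the paper's method adds 2-tuple linguistic arithmetic and the QUALIFLEX concordance step on top of this skeleton, and the example matrix and weights are invented:

```python
import numpy as np

def vikor(decision, weights, v=0.5):
    """Crisp VIKOR over benefit criteria: normalized weighted distances to the
    ideal give group utility S and individual regret R, blended into Q.
    decision: (m alternatives, n criteria); returns alternatives best-first."""
    best, worst = decision.max(axis=0), decision.min(axis=0)
    norm = weights * (best - decision) / (best - worst)
    S, R = norm.sum(axis=1), norm.max(axis=1)
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return np.argsort(Q)   # lower Q = better compromise

# Invented 3-supplier, 3-criterion example with weights summing to 1.
ranking = vikor(np.array([[7., 8., 6.], [9., 6., 7.], [8., 7., 9.]]),
                np.array([0.4, 0.3, 0.3]))
```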


Subject(s)
Decision Making , Fuzzy Logic , Linguistics , Humans , Algorithms
6.
PLoS One ; 19(5): e0303143, 2024.
Article in English | MEDLINE | ID: mdl-38768124

ABSTRACT

In response to increasingly complex social emergencies, this study optimizes logistics information flow and resource allocation by constructing an Emergency Logistics Information Traceability Model (ELITM-CBT) based on consortium blockchain technology. Using the decentralized, immutable, and transparent characteristics of consortium blockchains, this research overcomes the limitations of traditional emergency logistics models and improves the accuracy and efficiency of information management. Combined with a hybrid genetic simulated annealing algorithm (HGASA), the improved model shows clear advantages in emergency logistics scenarios, particularly in total transportation time, total cost, and fairness of resource allocation. Simulation results verify the model's efficiency in timeliness of emergency response and accuracy of resource allocation, providing theoretical support and a practical scheme for the field of emergency logistics. Future research will explore more efficient consensus mechanisms and combine big data and artificial intelligence technology to further improve the performance and adaptability of emergency logistics systems.
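
A plain simulated annealing loop, the backbone that HGASA hybridizes with genetic operators, might look as follows; the toy allocation problem, neighborhood move, and cooling schedule are assumptions for illustration:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=100.0, cooling=0.95, steps=2000):
    """Plain simulated annealing; the paper's HGASA additionally interleaves
    genetic crossover between candidate solutions, which is omitted here."""
    x, best, t = x0, x0, t0
    for _ in range(steps):
        cand = neighbor(x)
        delta = cost(cand) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = cand
            if cost(x) < cost(best):
                best = x
        t *= cooling
    return best

# Toy problem: allocate relief units across 4 depots to match demand (hypothetical).
demand = [4, 2, 3, 1]

def neighbor(alloc):
    alloc = list(alloc)
    i = random.randrange(len(alloc))
    alloc[i] = max(0, alloc[i] + random.choice([-1, 1]))
    return alloc

best = simulated_annealing(lambda a: sum(abs(x - d) for x, d in zip(a, demand)),
                           neighbor, x0=[3, 3, 2, 2])
```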


Subject(s)
Algorithms , Blockchain , Resource Allocation , Emergencies , Models, Theoretical , Humans
7.
PLoS One ; 19(5): e0303276, 2024.
Article in English | MEDLINE | ID: mdl-38768166

ABSTRACT

Binary classification methods encompass various algorithms to categorize data points into two distinct classes. Binary prediction, in contrast, estimates the likelihood of a binary event occurring. We introduce a novel graphical and quantitative approach, the U-smile method, for assessing prediction improvement stratified by binary outcome class. The U-smile method utilizes a smile-like plot and novel coefficients to measure the relative and absolute change in prediction compared with the reference method. The likelihood-ratio test was used to assess the significance of the change in prediction. Logistic regression models using the Heart Disease dataset and generated random variables were employed to validate the U-smile method. The receiver operating characteristic (ROC) curve was used to compare the results of the U-smile method. The likelihood-ratio test demonstrated that the proposed coefficients consistently generated smile-shaped U-smile plots for the most informative predictors. The U-smile plot proved more effective than the ROC curve in comparing the effects of adding new predictors to the reference method. It effectively highlighted differences in model performance for both non-events and events. Visual analysis of the U-smile plots provided an immediate impression of the usefulness of different predictors at a glance. The U-smile method can guide the selection of the most valuable predictors. It can also be helpful in applications beyond prediction.
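
The likelihood-ratio test used to assess whether added predictors improve a nested logistic model can be sketched as follows (a generic illustration assuming scikit-learn >= 1.2 for penalty=None and well-behaved fitted probabilities; the U-smile coefficients themselves are defined in the paper):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression

def lr_test(X_ref, X_new, y):
    """Likelihood-ratio test that adding the columns in X_new improves a
    logistic model over X_ref alone. Generic nested-model test; returns
    the test statistic and its chi-squared p-value."""
    def loglik(X):
        model = LogisticRegression(penalty=None, max_iter=1000).fit(X, y)
        p = model.predict_proba(X)[:, 1]
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    stat = 2 * (loglik(np.hstack([X_ref, X_new])) - loglik(X_ref))
    return stat, chi2.sf(stat, df=X_new.shape[1])
```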


Subject(s)
ROC Curve , Humans , Logistic Models , Algorithms , Likelihood Functions , Heart Diseases
8.
J R Soc Interface ; 21(214): 20230732, 2024 May.
Article in English | MEDLINE | ID: mdl-38774958

ABSTRACT

The concept of an autocatalytic network of reactions that can form and persist, starting from just an available food source, has been formalized by the notion of a reflexively autocatalytic and food-generated (RAF) set. The theory and algorithmic results concerning RAFs have been applied to a range of settings, from metabolic questions arising at the origin of life, to ecological networks, and cognitive models in cultural evolution. In this article, we present new structural and algorithmic results concerning RAF sets, by studying more complex modes of catalysis that allow certain reactions to require multiple catalysts (or to not require catalysis at all), and discuss the differing ways catalysis has been viewed in the literature. We also focus on the structure and analysis of minimal RAFs and derive structural results and polynomial-time algorithms. We then apply these new methods to a large metabolic network to gain insights into possible biochemical scenarios near the origin of life.
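
The standard iterative algorithm for computing the maximal RAF within a reaction network (well established in the RAF literature; the data layout below is our own, chosen for clarity) can be sketched as:

```python
def max_raf(reactions, food):
    """Maximal reflexively autocatalytic and food-generated (RAF) subset,
    computed by iterative reduction: repeatedly drop reactions that are not
    both food-generated and catalyzed within the current closure.
    reactions: {name: (reactants, products, catalysts)} with molecule sets."""
    R = dict(reactions)
    while True:
        closure = set(food)                  # food closure under current reactions
        changed = True
        while changed:
            changed = False
            for reactants, products, _ in R.values():
                if reactants <= closure and not products <= closure:
                    closure |= products
                    changed = True
        # keep reactions whose reactants and at least one catalyst lie in the closure
        keep = {name: r for name, r in R.items()
                if r[0] <= closure and r[2] & closure}
        if keep == R:
            return R
        R = keep

# Toy example: two reactions on food {f1, f2} that catalyze each other form a RAF.
raf = max_raf({"r1": ({"f1"}, {"a"}, {"b"}),
               "r2": ({"f2"}, {"b"}, {"a"})},
              food={"f1", "f2"})
```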


Subject(s)
Algorithms , Catalysis , Models, Biological , Biochemistry , Origin of Life
9.
Skin Res Technol ; 30(5): e13690, 2024 May.
Article in English | MEDLINE | ID: mdl-38716749

ABSTRACT

BACKGROUND: The response of AI in situations that mimic real-life scenarios is poorly explored in highly diverse populations. OBJECTIVE: To assess the accuracy and validate the relevance of an automated, algorithm-based analysis of facial attributes devoted to the adornment routines of women. METHODS: In a cross-sectional study, two diversified groups with similar distributions of age, ancestry, skin phototype, and geographical location were created from the selfie images of 1041 women in a US population. 521 images were analyzed as part of a new training dataset aimed at improving the original algorithm, and 520 were used to validate the performance of the AI. All images were analyzed for 23 facial attributes (16 continuous and 7 categorical) by 24 make-up experts and by the automated descriptor tool. RESULTS: For all facial attributes, both the new and the original automated tools surpassed the grading of the experts on a diverse population of women. For the 16 continuous attributes, the gradings obtained by the new system correlated strongly with the assessments made by make-up experts (r ≥ 0.80; p < 0.0001), supported by a low error rate. For the seven categorical attributes, the overall accuracy of the AI facial descriptor was improved by enriching the training dataset. However, weaker performance in spotting certain facial attributes was noted. CONCLUSION: The AI-automated facial descriptor tool was deemed accurate for the analysis of facial attributes in diverse women, although some skin complexion, eye color, and hair features require further fine-tuning.


Subject(s)
Algorithms , Face , Humans , Female , Cross-Sectional Studies , Adult , Face/anatomy & histology , Face/diagnostic imaging , United States , Middle Aged , Young Adult , Photography , Reproducibility of Results , Artificial Intelligence , Adolescent , Aged , Skin Pigmentation/physiology
10.
Rev Saude Publica ; 58: 17, 2024.
Article in English, Portuguese | MEDLINE | ID: mdl-38716929

ABSTRACT

OBJECTIVE: This study aims to integrate the concepts of planetary health and big data into the Donabedian model to evaluate the Brazilian dengue control program in the state of São Paulo. METHODS: Data science methods were used to integrate and analyze dengue-related data, adding context to the structure and outcome components of the Donabedian model. These data, covering the period from 2010 to 2019, were collected from sources such as the Department of Informatics of the Unified Health System (DATASUS), the Brazilian Institute of Geography and Statistics (IBGE), WorldClim, and MapBiomas, and were integrated into a data warehouse. The K-means algorithm was used to identify groups of municipalities with similar contexts. Statistical analyses and spatial visualizations of the groups were then performed, considering socioeconomic and demographic variables, land use, health structure, and dengue cases. RESULTS: Using climate variables, the K-means algorithm identified four groups of municipalities with similar characteristics. Comparing their indicators revealed certain patterns in the municipalities with the worst dengue case outcomes. Although they presented better economic conditions, these municipalities had a lower average number of community healthcare agents and basic health units per inhabitant; economic conditions were thus not reflected in a better health structure across the three indicators studied. These municipalities were also more urbanized, with a higher share of urban population and more urbanization-related human activity. CONCLUSIONS: This methodology identified important deficiencies in the implementation of the dengue control program in the state of São Paulo. Integrating several databases and applying data science methods allowed a large-scale evaluation of the program that considers the context in which activities are conducted. Public administrators can use these data to plan actions and invest according to the deficiencies of each location.
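
The clustering step might be reproduced along the following lines with scikit-learn; the file name and climate column names are hypothetical placeholders:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical layout: one row per municipality, climate columns only.
df = pd.read_csv("municipalities_climate.csv")   # placeholder file name
X = StandardScaler().fit_transform(df[["temp_mean", "precip_mean", "humidity"]])

# Four climate-context groups of municipalities, as in the study.
df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(df.groupby("cluster").size())
```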


Subject(s)
Big Data , Dengue , Humans , Dengue/prevention & control , Dengue/epidemiology , Brazil/epidemiology , Program Evaluation , Socioeconomic Factors , National Health Programs , Algorithms
11.
PLoS One ; 19(5): e0302595, 2024.
Article in English | MEDLINE | ID: mdl-38718024

ABSTRACT

Diabetes mellitus is one of the oldest diseases known to humankind, dating back to ancient Egypt. The disease is a chronic metabolic disorder that heavily burdens healthcare providers worldwide due to the steady increase in patients every year. Worryingly, diabetes affects not only the aging population but also children. Controlling this problem is imperative, as diabetes can lead to many health complications. Computer technology is increasingly integrated into healthcare: artificial intelligence helps healthcare diagnose diabetes patients more efficiently, deliver better care, and become more patient-centric. Among advanced data mining techniques, stacking is one of the most prominent methods applied in the diabetes domain, and this study investigates the potential of stacking ensembles. The aims are to reduce the high complexity inherent in stacking, which contributes to long training times, and to remove outliers in the diabetes data to improve classification performance. To address these concerns, a novel machine learning method called Stacking Recursive Feature Elimination-Isolation Forest was introduced for diabetes prediction. Stacking is applied with Recursive Feature Elimination to design an efficient diagnostic model that uses fewer features, and Isolation Forest is incorporated to remove outliers. The study uses accuracy, precision, recall, F1 measure, training time, and standard deviation to assess classification performance. The proposed method achieved an accuracy of 79.077% on the PIMA Indians Diabetes dataset and 97.446% on the Diabetes Prediction dataset, outperforming many existing methods and demonstrating its effectiveness in the diabetes domain.
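
A sketch of the recipe, with Isolation Forest outlier removal followed by RFE feature selection feeding a stacking ensemble; the base and meta learners and all hyperparameters are our illustrative choices, not necessarily the paper's:

```python
from sklearn.ensemble import IsolationForest, RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def fit_stacking_rfe_iforest(X, y):
    """Outline of the recipe: Isolation Forest screens outliers, RFE selects
    features, and a stacking ensemble does the classification.
    X: (n, p) numpy feature matrix, y: binary labels."""
    inliers = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
    X, y = X[inliers], y[inliers]
    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000))
    model = make_pipeline(
        RFE(LogisticRegression(max_iter=1000), n_features_to_select=5), stack)
    return model.fit(X, y)
```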


Subject(s)
Diabetes Mellitus , Machine Learning , Humans , Diabetes Mellitus/diagnosis , Algorithms , Data Mining/methods , Support Vector Machine , Male
12.
PLoS One ; 19(5): e0302513, 2024.
Article in English | MEDLINE | ID: mdl-38718032

ABSTRACT

Recent advances in aerial robotics and wireless transceivers have generated enormous interest in networks of multiple compact unmanned aerial vehicles (UAVs). UAV ad hoc networks, i.e., aerial networks with dynamic topology and no centralized control, suit a unique set of applications, yet their operation is vulnerable to cyberattacks. In many applications, such as IoT networks or emergency failover networks, UAVs augment and support the sensor nodes or mobile nodes of the ground network in data acquisition and also improve overall network performance. In this setting, ensuring the security of the ad hoc UAV network and the integrity of its data is paramount to accomplishing network mission objectives. In this paper, we propose a novel approach to securing UAV ad hoc networks, referred to as the blockchain-assisted security framework (BCSF). We demonstrate that the proposed system provides security without sacrificing network performance, by adapting the blockchain protocol to the priority of the message to be communicated over the ad hoc UAV network. A theoretical analysis of average latency based on queueing-theory models is followed by simulations that establish the superior performance of the proposed methodology in terms of transaction delay, data secrecy, data recovery, and energy efficiency.
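
As a minimal example of the kind of queueing-theory result such a latency analysis rests on, the mean time in an M/M/1 queue can be computed directly (a textbook formula, not the paper's exact model; the rates are invented):

```python
def mm1_mean_latency(arrival_rate, service_rate):
    """Mean time a message spends in an M/M/1 queue: W = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    return 1.0 / (service_rate - arrival_rate)

# e.g., 40 priority messages/s offered to a 50 messages/s consensus pipeline
print(f"mean latency: {mm1_mean_latency(40.0, 50.0) * 1000:.1f} ms")
```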


Subject(s)
Blockchain , Computer Communication Networks , Computer Security , Unmanned Aerial Devices , Wireless Technology , Algorithms
13.
PLoS One ; 19(5): e0302656, 2024.
Article in English | MEDLINE | ID: mdl-38718081

ABSTRACT

The rapid growth of traffic trajectory data and advances in positioning technology have driven demand for trajectory analysis. However, current applications suffer from deviations between positioning data and the real road network, and existing traffic prediction models for trajectory data have low accuracy. This study therefore proposes a map matching algorithm based on hidden Markov models. Starting from the global route, the algorithm selects the K nearest candidate paths and identifies candidate points along them, then uses changes in speed, angle, and other information to generate a state transition matrix that matches trajectory points to the actual route. In the experiments, K = 5 was selected as the optimal value; the algorithm took 51 ms and reached an accuracy of 95.3%. The algorithm performed well under a variety of road conditions, especially on parallel and mixed road sections, with an accuracy above 96%. Although its running time is slightly higher than that of the traditional algorithm, its accuracy is stable: under four different road conditions, the proposed algorithm reached accuracies of 98.3%, 97.5%, 94.8%, and 96%, versus 95.9%, 95.7%, 95.4%, and 94.6% for the algorithm based on traditional hidden Markov models. The experiments show that the proposed HMM-based map matching algorithm is superior to the alternatives in matching accuracy, making the processing of traffic trajectory data more accurate.
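
The decoding step of HMM map matching is typically a Viterbi search over candidate roads; a generic sketch (not the paper's exact K-nearest-candidate construction) follows:

```python
import numpy as np

def viterbi(log_prior, log_transition, log_emission):
    """Most likely sequence of candidate roads for a GPS trajectory under an HMM.
    log_prior: (K,), log_transition: (K, K), log_emission: (T, K)."""
    T, K = log_emission.shape
    score = log_prior + log_emission[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_transition      # score of (prev -> next) pairs
        back[t] = cand.argmax(axis=0)               # best predecessor per next state
        score = cand.max(axis=0) + log_emission[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                               # road index per trajectory point
```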


Subject(s)
Algorithms , Markov Chains , Humans , Data Analysis
14.
PLoS One ; 19(5): e0297999, 2024.
Article in English | MEDLINE | ID: mdl-38718099

ABSTRACT

A narrow-band seismograph, whose flat response range is limited, cannot precisely record ground motion: it outputs records with the low-frequency components cut down. A transfer function is usually used to spread the spectrum of narrow-band seismic records, but the accuracy of the commonly used transfer function is not high. To solve this problem, the authors derive a new transfer function based on the Laplace transform, the bilinear transform, and Nyquist sampling theory, and use it to correct narrow-band velocity records from Hi-net. The corrected velocity records are compared with velocities integrated from broad-band acceleration recorded synchronously at the same Hi-net station, and with records corrected by the Nakata transfer function. The results show that the calculation accuracy of the derived transfer function is higher than that of the Nakata transfer function. However, when the signal-to-noise ratio is below 24, its accuracy diminishes and it cannot recover signals within the 0.05-0.78 Hz frequency band.
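
The bilinear transform at the heart of such a correction maps an analog instrument response to digital filter coefficients; a sketch with SciPy, using a hypothetical second-order response and an assumed 100 Hz sampling rate, follows:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

fs = 100.0                      # assumed sampling rate in Hz, for illustration

# Hypothetical second-order high-pass instrument response:
# H(s) = s^2 / (s^2 + 2*h*w0*s + w0^2), with natural frequency f0 and damping h.
f0, h = 1.0, 0.7
w0 = 2 * np.pi * f0
b_analog = [1.0, 0.0, 0.0]
a_analog = [1.0, 2 * h * w0, w0 ** 2]

# The bilinear transform maps the s-plane response to digital filter coefficients.
b_dig, a_dig = bilinear(b_analog, a_analog, fs=fs)

# Forward-filter a stand-in velocity record (restitution would apply the inverse).
corrected = lfilter(b_dig, a_dig, np.random.randn(1024))
```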


Subject(s)
Algorithms , Models, Theoretical , Signal-To-Noise Ratio
15.
Sci Adv ; 10(19): eadj1424, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38718126

ABSTRACT

The ongoing expansion of human genomic datasets propels therapeutic target identification; however, extracting gene-disease associations from gene annotations remains challenging. Here, we introduce Mantis-ML 2.0, a framework integrating AstraZeneca's Biological Insights Knowledge Graph and numerous tabular datasets, to assess gene-disease probabilities throughout the phenome. We use graph neural networks, capturing the graph's holistic structure, and train them on hundreds of balanced datasets via a robust semi-supervised learning framework to provide gene-disease probabilities across the human exome. Mantis-ML 2.0 incorporates natural language processing to automate disease-relevant feature selection for thousands of diseases. The enhanced models demonstrate a 6.9% average classification power boost, achieving a median receiver operating characteristic (ROC) area under curve (AUC) score of 0.90 across 5220 diseases from Human Phenotype Ontology, OpenTargets, and Genomics England. Notably, Mantis-ML 2.0 prioritizes associations from an independent UK Biobank phenome-wide association study (PheWAS), providing a stronger form of triaging and mitigating underpowered PheWAS associations. Results are made available through an interactive web resource.


Subject(s)
Biological Specimen Banks , Neural Networks, Computer , Humans , Genome-Wide Association Study/methods , Phenotype , United Kingdom , Phenomics/methods , Genetic Predisposition to Disease , Genomics/methods , Databases, Genetic , Algorithms , Computational Biology/methods , UK Biobank
16.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720020

ABSTRACT

Research on video analytics, especially human behavior recognition, has recently become increasingly popular. It is widely applied in virtual reality, video surveillance, and video retrieval. With the advancement of deep learning algorithms and computer hardware, the conventional two-dimensional convolution used to train video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features, and the use of 3D convolution for human behavior recognition has attracted growing interest. However, the added dimension brings challenges: a dramatic increase in the number of parameters, higher time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction; training can be considerably slow without powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and compresses data by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
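
A crude stand-in for the idea of dropping temporally redundant frames (the actual ATC module is adaptive and integrated into the architecture; the fixed threshold here is an assumption) could be:

```python
import numpy as np

def compress_frames(frames, threshold=8.0):
    """Keep a frame only if it differs enough from the last kept frame
    (mean absolute pixel difference). frames: (T, H, W[, C]) uint8 array."""
    kept = [0]
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(np.float32)
                      - frames[kept[-1]].astype(np.float32)).mean()
        if diff > threshold:
            kept.append(t)
    return frames[kept]
```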


Subject(s)
Algorithms , Data Compression , Video Recording , Humans , Data Compression/methods , Human Activities , Deep Learning , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods
17.
Platelets ; 35(1): 2344512, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38722090

ABSTRACT

The last decade has seen increasing use of advanced imaging techniques in platelet research. However, there has been a lag in the development of image analysis methods, leaving much of the information trapped in images. Herein, we present a robust analytical pipeline for finding and following individual platelets over time in growing thrombi. Our pipeline covers four steps: detection, tracking, estimation of tracking accuracy, and quantification of platelet metrics. We detect platelets using a deep learning network for image segmentation, which we validated with proofreading by multiple experts. We then track platelets using a standard particle tracking algorithm and validate the tracks with custom image sampling - essential when following platelets within a dense thrombus. We show that our pipeline is more accurate than previously described methods. To demonstrate the utility of our analytical platform, we use it to show that in vivo thrombus formation is much faster than that ex vivo. Furthermore, platelets in vivo exhibit less passive movement in the direction of blood flow. Our tools are free and open source and written in the popular and user-friendly Python programming language. They empower researchers to accurately find and follow platelets in fluorescence microscopy experiments.


In this paper we describe computational tools to find and follow individual platelets in blood clots recorded with fluorescence microscopy. Our tools work in a diverse range of conditions, both in living animals and in artificial flow chamber models of thrombosis. Our work uses deep learning methods to achieve excellent accuracy. We also provide tools for visualizing data and estimating error rates, so you don't have to just trust the output. Our workflow measures platelet density, shape, and speed, which we use to demonstrate differences in the kinetics of clotting in living vessels versus a synthetic environment. The tools we wrote are open source, written in the popular Python programming language, and freely available to all. We hope they will be of use to other platelet researchers.
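
The frame-to-frame linking at the core of particle tracking can be sketched with a greedy nearest-neighbor match (the pipeline described above uses an established tracking algorithm; this shows only the underlying idea, with an assumed displacement bound):

```python
import numpy as np
from scipy.spatial import cKDTree

def link_frames(points_a, points_b, max_disp=5.0):
    """Greedy nearest-neighbor linking of platelet centroids between two frames.
    points_a, points_b: (N, 2) arrays of (x, y); returns (i_a, i_b) index pairs."""
    dist, idx = cKDTree(points_b).query(points_a, distance_upper_bound=max_disp)
    links, used = [], set()
    for i, (d, j) in enumerate(zip(dist, idx)):
        if np.isfinite(d) and j not in used:   # unmatched points return d = inf
            links.append((i, int(j)))
            used.add(j)
    return links
```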


Subject(s)
Blood Platelets , Deep Learning , Thrombosis , Blood Platelets/metabolism , Thrombosis/blood , Humans , Image Processing, Computer-Assisted/methods , Animals , Ratones , Algorithms
18.
Neurosurg Rev ; 47(1): 200, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38722409

ABSTRACT

Appropriate needle manipulation to avoid abrupt deformation of fragile vessels is a critical determinant of the success of microvascular anastomosis. However, no study has yet evaluated the area changes in surgical objects using surgical videos. The present study therefore aimed to develop a deep learning-based semantic segmentation algorithm to assess the area change of vessels during microvascular anastomosis for objective surgical skill assessment with regard to "respect for tissue." The semantic segmentation algorithm was trained based on a ResNet-50 network using microvascular end-to-side anastomosis training videos with artificial blood vessels. Using the created model, video parameters during a single-stitch completion task, including the coefficient of variation of vessel area (CV-VA), the relative change in vessel area per unit time (ΔVA), and the number of tissue deformation errors (TDE), as defined by a ΔVA threshold, were compared between expert and novice surgeons. A high validation accuracy (99.1%) and Intersection over Union (0.93) were obtained for the auto-segmentation model. During the single-stitch task, the expert surgeons displayed lower values of CV-VA (p < 0.05) and ΔVA (p < 0.05). Additionally, experts committed significantly fewer TDEs than novices (p < 0.05), and completed the task in a shorter time (p < 0.01). Receiver operating characteristic curve analyses indicated relatively strong discriminative capabilities for each video parameter and task completion time, while the combined use of the task completion time and video parameters demonstrated complete discriminative power between experts and novices. In conclusion, the assessment of changes in the vessel area during microvascular anastomosis using a deep learning-based semantic segmentation algorithm is presented as a novel concept for evaluating microsurgical performance. This will be useful in future computer-aided devices to enhance surgical education and patient safety.
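
The video parameters described can be computed straightforwardly once a per-frame vessel-area series is available; a sketch, in which the ΔVA-threshold value for counting TDEs is illustrative rather than the paper's calibrated one:

```python
import numpy as np

def stitch_metrics(areas, fps, tde_threshold=0.05):
    """Video metrics for one stitch: coefficient of variation of vessel area
    (CV-VA), mean relative area change per unit time (dVA), and number of
    tissue deformation errors (TDE, counted against a dVA threshold)."""
    areas = np.asarray(areas, dtype=float)
    cv_va = areas.std() / areas.mean()
    dva = np.abs(np.diff(areas)) / areas[:-1] * fps    # relative change per second
    return cv_va, dva.mean(), int((dva > tde_threshold).sum())
```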


Subject(s)
Algorithms , Anastomosis, Surgical , Deep Learning , Humans , Anastomosis, Surgical/methods , Pilot Projects , Microsurgery/methods , Microsurgery/education , Needles , Clinical Competence , Semantics , Vascular Surgical Procedures/methods , Vascular Surgical Procedures/education
19.
AAPS J ; 26(3): 53, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38722435

ABSTRACT

The standard errors (SE) of the maximum likelihood estimates (MLE) of the population parameter vector in nonlinear mixed effect models (NLMEM) are usually estimated using the inverse of the Fisher information matrix (FIM). However, at a finite distance, i.e., far from the asymptotic regime, the FIM can underestimate the SE of NLMEM parameters. Alternatively, the standard deviation of the posterior distribution, obtained in Stan via the Hamiltonian Monte Carlo algorithm, has been shown to be a proxy for the SE, since, under some regularity conditions on the prior, the limiting distributions of the MLE and of the maximum a posteriori estimator in a Bayesian framework are equivalent. In this work, we develop a similar method using the Metropolis-Hastings (MH) algorithm run in parallel to the stochastic approximation expectation maximisation (SAEM) algorithm, implemented in the saemix R package. We assess this method in different simulation scenarios and on data from a real case study, comparing it to other SE computation methods. The simulation study shows that our method improves on the results obtained with frequentist methods at finite distance. However, it performed poorly in a scenario with the high variability and correlations observed in the real case study, stressing the need for calibration.
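
A random-walk Metropolis-Hastings kernel of the kind described, with the posterior standard deviation of the draws used as an SE proxy, can be sketched as follows (a generic sampler on a user-supplied log-posterior, not the saemix implementation):

```python
import numpy as np

def metropolis_hastings_sd(logpost, theta0, n_iter=20000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings; returns the posterior standard
    deviation of the draws, a proxy for the parameter standard errors."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = logpost(theta)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        draws.append(theta.copy())
    return np.array(draws).std(axis=0)
```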


Subject(s)
Algorithms , Computer Simulation , Monte Carlo Method , Nonlinear Dynamics , Uncertainty , Likelihood Functions , Bayes Theorem , Humans , Models, Statistical
20.
J Biomed Opt ; 29(6): 066004, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38751827

ABSTRACT

Significance: Scanning laser optical tomography (SLOT) is a volumetric multi-modal imaging technique comparable to optical projection tomography and computed tomography. Image quality depends crucially on matching the refractive indices (RIs) of the sample and the surrounding medium, but RI matching often requires considerable effort and is never perfect. Aim: Reducing the burden of RI matching between the immersion medium and the sample in biomedical imaging is a challenging and interesting task. We aim to implement a post-processing strategy for correcting SLOT measurements with errors caused by RI mismatch. Approach: To better understand the problems with poorly matched RIs, simulated SLOT measurements with imperfect RI matching of sample and medium are performed and presented here. A method to correct the distorted measurements was developed and is presented and evaluated, then applied to a sample containing fluorescent polystyrene beads and a sample made of polydimethylsiloxane with embedded fluorescent nanoparticles. Results: The simulations show that measurements with an RI mismatch larger than 0.02 and no correction yield considerably worse results than perfectly matched measurements, and RI mismatches larger than 0.05 make it almost impossible to resolve finer details and structures. By contrast, the simulations imply that a measurement with an RI mismatch of up to 0.1 can still yield reasonable results if the presented correction method is applied. The experiments validate the simulated results for an RI mismatch of about 0.09. Conclusions: The method significantly improves SLOT image quality for samples with imperfectly matched RIs. Although the best imaging quality is achieved with perfect RI matching, these results pave the way for imaging in SLOT under RI mismatch while maintaining high image quality.


Subject(s)
Refractometry , Tomography, Optical , Tomography, Optical/methods , Refractometry/methods , Image Processing, Computer-Assisted/methods , Algorithms , Computer Simulation , Phantoms, Imaging