Results 1 - 20 of 297,251
1.
Rev. bras. med. esporte ; 29(spe1): e2022_0194, 2023. tab, graf
Article in English | LILACS | ID: biblio-1394852


ABSTRACT Introduction In medicine, Deep Learning is a type of machine learning that aims to train computers to perform human tasks by simulating the human brain. Gait recognition and gait motion simulation are among the most interesting research areas in the field of biometrics and can benefit from this technology. Objective To use Deep Learning to build and validate a model based on the dynamic characteristics of gait. Methods Gait was used for identity recognition, and gait recognition based on kinematic and dynamic gait parameters was performed through pattern recognition, including the position and intensity value of maximum pressure points, the pressure center point, and the pressure ratio. Results The investigation shows that the energy consumption of gait can be modeled and analyzed, yielding a model of gait energy consumption that is jointly affected by motion parameters and individual feature parameters. Conclusion Real-time energy measurement can be obtained for most people while walking. The research shows that the gait frequency and body parameters derived from the tactile parameters of gait biomechanics can more accurately estimate the energy metabolism of exercise and yield a metabolic formula for exercise. Assessing energy metabolism through the tactile parameters of gait has good application prospects. Level of evidence II; Therapeutic studies - investigating treatment outcomes.



Subject(s)
Humans; Energy Metabolism/physiology; Gait Analysis; Biomechanical Phenomena; Algorithms
2.
Rev. bras. med. esporte ; 29(spe1): e2022_0198, 2023. tab, graf
Article in English | LILACS | ID: biblio-1394847


ABSTRACT Introduction Many countries have increased their investments in human resources and technology for the internal development of competitive sports, leading the world sports scene to increasingly fierce competition. Coaches and research assistants must place importance on feedback tools for the frequent training of college athletes, and deep learning algorithms are an important resource to consider. Objective To develop and validate a swarm algorithm to examine the fitness of athletes during periods of competition. Methods Based on the swarm intelligence algorithm, the concept, composition, and content of physical exercises were analyzed. Combined with the characteristics of events, body function files and a comprehensive evaluation system for high-level athletes were established. Results The results indicate that athletes' constant mastery of the most advanced techniques and tactics is an important feature of modern competitive sports. Physical fitness is not only a valuable asset for athletes but also one of the keys to success in competition. Conclusion Fitness has become an increasingly prominent issue in competition, and the scientific training of contemporary competitive sports has been increasingly refined. Level of evidence II; Therapeutic studies - investigation of treatment outcomes.



Subject(s)
Humans; Adult; Young Adult; Algorithms; Exercise/physiology; Athletic Performance/physiology; Deep Learning; Athletic Injuries; Sports/physiology; Muscle Strength; Athletes
3.
Rev. bras. med. esporte ; 29(spe1): e2022_0199, 2023. tab, graf
Article in English | LILACS | ID: biblio-1394846


ABSTRACT Introduction Nowadays, more people are concerned with physical exercise, and swimming competitions, as major sporting events, have become a focus of attention. Such competitions require special attention to their athletes, and the use of computational algorithms assists in this task. Objective To design and validate an algorithm to evaluate changes in the vital capacity and blood markers of athletes after swimming matches, based on combined learning. Methods The data integration algorithm was used to analyze changes in vital capacity and blood acid after the combined-learning swimming competition, followed by the construction of an information system model to calculate and process this algorithm. Results Comparative experiments show that the neural network algorithm reduces the calculation time relative to the original approach; the latest tests run in about 10 seconds, greatly reducing the total calculation time. Conclusion According to the model requirements of the designed algorithm, its practical value was demonstrated by building a computational model. The algorithm can be selected and optimized according to the calculation model and the realities of the application. Level of evidence II; Therapeutic studies - investigation of treatment outcomes.



Subject(s)
Humans; Swimming/physiology; Algorithms; Biomarkers/analysis; Deep Learning; Athletic Performance/physiology; Athletes
4.
Rev. bras. med. esporte ; 29(spe1): e2022_0197, 2023. tab, graf
Article in English | LILACS | ID: biblio-1394845


ABSTRACT Introduction The recent development of the deep learning algorithm, a new multilayer network machine learning algorithm, has reduced the tendency of traditional training algorithms to fall into local minima, becoming a new direction in the learning field. Objective To design and validate an artificial intelligence model for deep learning of the impacts of weekly load training on students' biological systems. Methods Based on the physiological and biochemical indices of athletes in the training process, this paper analyzes actual data on athletes' training load over the annual preparation period. The characteristics of athletes' training load in the preparation period were discussed, along with the value, significance, composition factors, arrangement principle, and method of calculating and determining weekly load density using the deep learning algorithm. Results The results showed that the randomly sampled daily 24-hour load comprised moderate-intensity, low-intensity, and high-intensity training, and enhanced the physical-motor system and neural reactivity. Conclusion The research shows that there can be two activities, "teaching" and "training", in physical education and sports training. Sports biology monitoring research proves to be a growth point of sports training research with great potential for expansion in future research. Level of evidence II; Therapeutic studies - investigation of treatment outcomes.



Subject(s)
Humans; Algorithms; Computational Biology/methods; Athletic Performance/physiology; Deep Learning; Physical Education and Training/methods
5.
Spectrochim Acta A Mol Biomol Spectrosc ; 284: 121733, 2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36029745

ABSTRACT

Nitrogen plays an important role in rice growth, and determination of nitrogen content in rice plants is of great significance in assessing plant nutritional status and allowing precision cultivation. Traditional chemical methods for determining nitrogen content have the disadvantages of destructive sampling and lengthy analysis times. Here, the feasibility of rapid nitrogen content analysis by near-infrared (NIR) spectroscopy of rice plants was studied. Spectral data from 447 rice samples at several growth stages were used to establish a predictive model. Different spectral preprocessing methods and characteristic selection methods were compared, such as interval partial least-squares (iPLS), synergy interval partial least-squares (SiPLS), and moving-window partial least-squares (mwPLS). The SiPLS method exhibited better performance than mwPLS or iPLS. Specifically, the combination of four subintervals (7, 26, 27, and 28), with characteristic bands at 5299-4451 cm-1 and 10445-10423 cm-1, resulted in the best model. The optimal SiPLS model had a correlation coefficient of 0.9533 and a root mean square error of prediction (RMSEP) of 0.1952 on the prediction set. Compared to using the full spectra, using SiPLS reduced the number of characteristics by 87 % in the model, and RMSEP was reduced from 0.2284 to 0.1952. The results demonstrate that NIR spectroscopy combined with the SiPLS algorithm can be applied to quickly determine nitrogen content in rice plants. This study provides a technical framework to guide future precision agriculture efforts with respect to nitrogen application.


Subject(s)
Oryza; Spectroscopy, Near-Infrared; Algorithms; Least-Squares Analysis; Nitrogen; Oryza/chemistry; Spectroscopy, Near-Infrared/methods
6.
Spectrochim Acta A Mol Biomol Spectrosc ; 284: 121788, 2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36058170

ABSTRACT

The quantification of a single oil in high-order edible blend oil is a challenging task. In this research, a novel swarm intelligence algorithm, the discretized whale optimization algorithm (WOA), was developed to reduce irrelevant variables and improve prediction accuracy for hexanary edible blend oil samples. The WOA is inspired by the hunting strategy of humpback whales, which mainly includes three behaviors: encircling prey, bubble-net attacking, and searching for prey. In the discretized WOA, whale positions were updated and then discretized by an arctangent function. The whale population performance, iteration number, and whale number of the WOA were investigated. To validate the performance of the selected variables, partial least squares (PLS) was used to build models and predict single oil contents in hexanary blend oil. Results show that WOA-PLS provides better prediction accuracy than full-spectrum PLS, continuous wavelet transform-PLS (CWT-PLS), uninformative variable elimination-PLS (UVE-PLS), Monte Carlo uninformative variable elimination-PLS (MCUVE-PLS), and randomization test-PLS (RT-PLS). Furthermore, CWT-WOA-PLS produces even better results with fewer variables than WOA-PLS.
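The discretization step described in the abstract (an arctangent mapping from continuous whale positions to binary keep/drop decisions) can be sketched as follows. Everything here is illustrative: the least-squares fitness, the data, and the constants are assumptions, not the paper's PLS-based objective, and only the encircling-prey update is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(mask, X, y):
    # Least-squares fit on the kept columns, penalizing mask size
    # (a stand-in objective; the paper scores PLS prediction error).
    keep = mask.astype(bool)
    if not keep.any():
        return np.inf
    beta, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    resid = y - X[:, keep] @ beta
    return float(resid @ resid + 0.1 * keep.sum())

def discretize(pos):
    # Arctangent transfer function: squash each continuous coordinate
    # into [0, 1) and threshold it into a binary keep/drop decision.
    prob = np.abs(np.arctan(pos)) / (np.pi / 2)
    return (prob > 0.5).astype(int)

n_whales, n_vars, n_iter = 20, 15, 60
X = rng.normal(size=(100, n_vars))
y = X[:, :5] @ np.ones(5) + 0.01 * rng.normal(size=100)  # 5 informative vars

pos = rng.normal(size=(n_whales, n_vars))
best_mask = discretize(pos[0])
best_fit = init_fit = fitness(best_mask, X, y)
for t in range(n_iter):
    a = 2.0 * (1 - t / n_iter)          # control parameter shrinks 2 -> 0
    for i in range(n_whales):
        A = 2 * a * rng.random(n_vars) - a
        C = 2 * rng.random(n_vars)
        # Encircling-prey update toward the best whale found so far.
        pos[i] = best_mask - A * np.abs(C * best_mask - pos[i])
        mask = discretize(pos[i])
        f = fitness(mask, X, y)
        if f < best_fit:
            best_fit, best_mask = f, mask
print(best_mask, best_fit)
```

The bubble-net spiral and random search phases of the full WOA are omitted; they alternate with the update above in the complete algorithm.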


Subject(s)
Algorithms; Spectroscopy, Near-Infrared; Intelligence; Least-Squares Analysis; Monte Carlo Method; Spectroscopy, Near-Infrared/methods
7.
Spectrochim Acta A Mol Biomol Spectrosc ; 284: 121785, 2023 Jan 05.
Article in English | MEDLINE | ID: mdl-36058172

ABSTRACT

Repeated reuse of hotpot oil causes serious harm to human health. To enable rapid, non-destructive testing of hotpot oil quality, a modeling method combining fluorescence hyperspectral technology with machine learning algorithms was proposed. Five preprocessing algorithms were used to preprocess the original spectral data, achieving denoising and reducing the influence of baseline drift and tilt. The feature bands extracted from the spectral data showed that the best feature bands for the two-class and six-class models were concentrated at 469-962 nm and 534-809 nm, respectively. Using the PCA algorithm to visualize the spectral data intuitively showed the distribution of the six types of samples and indicated that the data could be classified. Modeling analysis of the feature bands showed that the best two-class and six-class models were the MF-RF-RF and MF-XGBoost-LGB models, respectively, with classification accuracy reaching 100 %. Compared with the traditional model, the error was greatly reduced, and calculation time was also saved. This study confirmed that fluorescence hyperspectral technology combined with machine learning algorithms can effectively detect reused hotpot oil.
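The pipeline the abstract outlines (preprocessing, PCA visualization, then classification) is not specified in code; a minimal sketch with synthetic stand-in spectra and scikit-learn's PCA and random forest (the paper's MF-RF-RF and MF-XGBoost-LGB model stacks are not reproduced here):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical stand-in for fluorescence hyperspectral data: two classes
# of oil spectra over 300 bands, separable in a narrow band window.
n_per_class, n_bands = 60, 300
fresh = rng.normal(size=(n_per_class, n_bands))
reused = rng.normal(size=(n_per_class, n_bands))
reused[:, 150:180] += 3.0                  # reuse-related spectral shift
X = np.vstack([fresh, reused])
y = np.array([0] * n_per_class + [1] * n_per_class)

# PCA projection for a quick visual check of class separability.
scores = PCA(n_components=2).fit_transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

Plotting `scores` colored by `y` would give the kind of two-dimensional separability check the abstract describes.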


Subject(s)
Algorithms; Support Vector Machine; Fluorescence; Humans; Machine Learning; Technology
8.
Food Chem ; 398: 133870, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-35963216

ABSTRACT

Food safety and quality assessment mechanisms are unmet needs that industries and countries have faced continuously in recent years. Our study aimed at developing a platform using Machine Learning algorithms to analyze Mass Spectrometry data for classifying tomatoes as organic or non-organic. Tomato samples were analyzed using silica gel plates and the direct-infusion electrospray-ionization mass spectrometry technique. A Decision Tree algorithm was tailored for data analysis. This model achieved 92% accuracy, 94% sensitivity, and 90% precision in determining to which group each fruit belonged. Potential biomarkers evidenced differences in treatment and production for each group.
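A minimal sketch of the reported evaluation: a decision tree scored by accuracy, sensitivity (recall), and precision. The data are synthetic stand-ins; the study's actual MS features and its 92/94/90% figures cannot be reproduced from the abstract:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# Hypothetical stand-in for direct-infusion MS data: intensities of
# 40 m/z peaks, with two peaks carrying the organic/non-organic signal.
n, n_peaks = 200, 40
X = rng.normal(size=(n, n_peaks))
y = (X[:, 3] + X[:, 11] + 0.25 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
pred = tree.predict(X_te)
print(accuracy_score(y_te, pred),
      recall_score(y_te, pred),     # sensitivity
      precision_score(y_te, pred))
```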


Subject(s)
Lycopersicon esculentum; Algorithms; Food Safety; Lycopersicon esculentum/chemistry; Machine Learning; Spectrometry, Mass, Electrospray Ionization
9.
Methods Mol Biol ; 2553: 21-39, 2023.
Article in English | MEDLINE | ID: mdl-36227537

ABSTRACT

This chapter outlines the myriad applications of machine learning (ML) in synthetic biology, specifically in engineering cell and protein activity, and metabolic pathways. Though by no means comprehensive, the chapter highlights several prominent computational tools applied in the field and their potential use cases. The examples detailed reinforce how ML algorithms can enhance synthetic biology research by providing data-driven insights into the behavior of living systems, even without detailed knowledge of their underlying mechanisms. By doing so, ML promises to increase the efficiency of research projects by modeling hypotheses in silico that can then be tested through experiments. While challenges related to training dataset generation and computational costs remain, ongoing improvements in ML tools are paving the way for smarter and more streamlined synthetic biology workflows that can be readily employed to address grand challenges across manufacturing, medicine, engineering, agriculture, and beyond.


Subject(s)
Machine Learning; Synthetic Biology; Algorithms; Metabolic Networks and Pathways
10.
Methods Mol Biol ; 2553: 441-452, 2023.
Article in English | MEDLINE | ID: mdl-36227554

ABSTRACT

Integrative method approaches are continuously evolving to provide accurate insights from the data obtained through experimentation on various biological systems. Multi-omics data can be integrated with predictive machine learning algorithms to provide results with high accuracy. This protocol chapter defines the steps required for the ML-multi-omics integration methods applied to biological datasets, covering both the analysis and the visual interpretation of the results obtained.


Subject(s)
Algorithms; Machine Learning; Metabolic Networks and Pathways
11.
Ultrasonics ; 127: 106826, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36058188

ABSTRACT

Carotid artery atherosclerosis is a significant cause of stroke. Ultrasound imaging has been widely used in the diagnosis of atherosclerosis. Therefore, segmenting the atherosclerotic carotid plaque in an ultrasound image is an important task. Accurate plaque segmentation is helpful for the measurement of carotid plaque burden. This study proposes an automatic method for atherosclerotic plaque segmentation by using correntropy-based level sets (CLS) with learning-based initialization. We introduce the CLS model, containing the point-based local bias-field corrected image fitting method and correntropy-based distance measurement, to overcome the limitations of the ultrasound images. A supervised learning algorithm is employed to solve the automatic initialization problem of the variational methods. The proposed atherosclerotic plaque segmentation method is validated on 29 carotid ultrasound images, obtaining a Dice ratio of 90.6 ± 1.9% and an overlap index of 83.6 ± 3.2%. Moreover, by comparing the standard deviation of each evaluation index, it can be found that the proposed method is more robust for segmenting the atherosclerotic plaque. Our work shows that our proposed method can be more helpful than other variational models for measuring the carotid plaque burden.
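The reported Dice ratio and overlap index can be computed directly from binary masks. Jaccard is used as the overlap index below, which is one common definition; the paper's exact definition is not stated in the abstract, and the toy masks are illustrative:

```python
import numpy as np

def dice_and_overlap(a, b):
    """Dice ratio and overlap (Jaccard) index for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

# Toy masks standing in for an automatic plaque segmentation vs. a
# manual reference: two offset 6x6 squares on a 10x10 grid.
auto = np.zeros((10, 10), dtype=int)
auto[2:8, 2:8] = 1          # 36 pixels
ref = np.zeros((10, 10), dtype=int)
ref[3:9, 3:9] = 1           # 36 pixels, intersection = 25 pixels
d, j = dice_and_overlap(auto, ref)
print(round(d, 4), round(j, 4))  # → 0.6944 0.5319
```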


Subject(s)
Atherosclerosis; Plaque, Atherosclerotic; Algorithms; Atherosclerosis/diagnostic imaging; Carotid Arteries/diagnostic imaging; Humans; Plaque, Atherosclerotic/diagnostic imaging; Ultrasonography/methods
12.
Ultrasonics ; 127: 106837, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36075161

ABSTRACT

In this article, a novel ultrasound computed tomography (USCT) reconstruction algorithm for breast imaging is proposed. The algorithm is based on an ultrasound propagation model, the refract-ray model (RRM). In this model, the imaging field is assumed to be piecewise homogeneous and is divided into several regions. Ultrasound propagation paths are treated as polylines that refract only at the borders of the regions. The edge information is provided by B-mode imaging. Both simulations and experiments were implemented to validate the proposed algorithm. Compared with the traditional bent-ray model (BRM), reconstruction time using RRM decreases by over 90 %. In simulations, the imaging quality of RRM and BRM is comparable in terms of the root mean square error, the Tenengrad value, and the deformation of the digital phantom. In the experiments, a cylindrical agar phantom was imaged using a customized imaging system. When imaging with RRM, the estimated phantom radius was in error by about 0.1 mm, versus about 0.3 mm with BRM. Moreover, the Tenengrad value of the RRM result is much higher than that of BRM (9.76 compared to 0.79). The results show that the proposed algorithm can better delineate the phantom within a water bath. In future work, further experiments are required to validate the method for improving imaging quality under breast-mimicking conditions.
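Of the evaluation metrics mentioned, the Tenengrad value can be illustrated directly; a minimal sketch using SciPy's Sobel filters (the mean-squared-gradient normalization is an assumption, as the abstract does not state the exact variant used):

```python
import numpy as np
from scipy import ndimage

def tenengrad(img):
    """Tenengrad focus/sharpness measure: mean squared Sobel gradient
    magnitude (one common variant)."""
    gx = ndimage.sobel(img, axis=1, mode="nearest")
    gy = ndimage.sobel(img, axis=0, mode="nearest")
    return float(np.mean(gx ** 2 + gy ** 2))

# A sharp step edge vs. the same edge after Gaussian smoothing: the
# sharper image should score higher, mirroring the RRM-vs-BRM comparison.
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0
blurred = ndimage.gaussian_filter(sharp, sigma=3.0)
print(tenengrad(sharp), tenengrad(blurred))
```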


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Agar; Algorithms; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Tomography, X-Ray Computed/methods; Water
13.
Ultrasonics ; 127: 106838, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36126437

ABSTRACT

Coherent plane-wave compounding (CPWC) is a widely used technique in medical ultrasound imaging due to its high frame rate. It is well known that increasing the number of plane waves improves image quality; however, image quality in CPWC still needs further improvement, and a variety of methods have been proposed to this end. In this paper, a new compressive sensing (CS) based approach is introduced in combination with the adaptive minimum variance (MV) algorithm to further improve image quality in terms of resolution and contrast. In the proposed method, called the CS-based MV technique, the CS method is used in the receive direction to produce the beamformed data for each plane wave. The MV algorithm is then applied in the plane-wave transmit angle direction to coherently compound the images and improve the resolution. Moreover, to deal with the high computational complexity and the need for large memory space during CS implementation, an approximation is introduced that considerably reduces the computational burden and memory requirements. Results from simulated point targets show that the proposed method improves resolution by about 71%, 5.5%, and 37% compared with the DAS, DAS+MV, and CS+DAS beamformers, respectively. Quantitative results from the experimental contrast phantom in the plane wave imaging challenge in medical ultrasound (PICMUS) dataset show improvements in the contrast ratio metric of 3.02 dB, 2.57 dB, and 2.24 dB over the DAS, DAS+MV, and double-MV methods, respectively, indicating the good performance of the proposed method in improving image quality.
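The adaptive minimum variance (MV) step can be sketched in isolation. This is a generic Capon beamformer on toy narrowband snapshots, not the paper's CS-based CPWC pipeline; the steering vector, diagonal loading factor, and signal model are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def mv_weights(snapshots, a, loading=1e-3):
    """Minimum-variance (Capon) weights w = R^-1 a / (a^H R^-1 a),
    with diagonal loading for a well-conditioned covariance estimate."""
    n_ch = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    R = R + loading * np.trace(R).real / n_ch * np.eye(n_ch)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Toy example: 8 pre-delayed channels (steering vector of ones), with a
# desired signal, a strong off-axis interferer, and additive noise.
n_ch, n_snap = 8, 200
a = np.ones(n_ch, dtype=complex)
interf = np.exp(1j * np.pi * 0.8 * np.arange(n_ch))
x = (np.outer(a, rng.normal(size=n_snap))
     + 3.0 * np.outer(interf, rng.normal(size=n_snap))
     + 0.1 * rng.normal(size=(n_ch, n_snap)))
w = mv_weights(x, a)
# The distortionless constraint holds: unit gain in the look direction.
print(round(float(np.abs(np.vdot(w, a))), 6))  # → 1.0
```

In the paper's setting the same weight computation would be applied across transmit angles rather than across receive channels.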


Subject(s)
Algorithms , Data Compression , 4-Acetamido-4'-isothiocyanatostilbene-2,2'-disulfonic Acid/analogs & derivatives , Computer-Assisted Image Processing/methods , Imaging Phantoms , Ultrasonography/methods
14.
Radiographics ; 43(1): e220060, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36331878

ABSTRACT

The use of digital breast tomosynthesis (DBT) in breast cancer screening has become widely accepted, facilitating increased cancer detection and lower recall rates compared with those achieved by using full-field digital mammography (DM). However, the use of DBT, as compared with DM, raises new challenges, including a larger number of acquired images and thus longer interpretation times. While most current artificial intelligence (AI) applications are developed for DM, there are multiple potential opportunities for AI to augment the benefits of DBT. During the diagnostic steps of lesion detection, characterization, and classification, AI algorithms may not only assist in the detection of indeterminate or suspicious findings but also aid in predicting the likelihood of malignancy for a particular lesion. During image acquisition and processing, AI algorithms may help reduce radiation dose and improve lesion conspicuity on synthetic two-dimensional DM images. The use of AI algorithms may also improve workflow efficiency and decrease the radiologist's interpretation time. There has been significant growth in research that applies AI to DBT, with several algorithms approved by the U.S. Food and Drug Administration for clinical implementation. Further development of AI models for DBT has the potential to lead to improved practice efficiency and ultimately improved patient health outcomes of breast cancer screening and diagnostic evaluation. See the invited commentary by Bahl in this issue. ©RSNA, 2022.


Subject(s)
Artificial Intelligence , Breast Neoplasms , Humans , Female , Mammography/methods , Early Detection of Cancer/methods , Breast Neoplasms/pathology , Algorithms , Breast/diagnostic imaging
15.
J Theor Biol ; 556: 111328, 2023 Jan 07.
Article in English | MEDLINE | ID: mdl-36273593

ABSTRACT

Multi-omics clustering plays an important role in cancer subtyping. However, data from different omics are often correlated, and these correlations can degrade clustering performance, so it is crucial to eliminate the redundant information they introduce. We propose an RSC-based differential model with correlation removal for improving multi-omics clustering (RSC-MCR). The method first uses RSC to calculate the pairwise correlations of all features and decomposes them to obtain the pairwise correlations between features of different omics, thereby building the connection between omics. Then, to remove the redundant correlation, a differential model calculates the degree of difference between the original feature matrix and the correlation matrix containing the most relevant cross-omics information. We compared the performance of RSC-MCR with other decorrelation methods across several clustering methods (CC, FCM, SNF, NMF, LRAcluster). Experimental results on five cancer datasets show the efficiency of RSC-MCR and its improvements over the other decorrelation methods.


Subject(s)
Algorithms , Neoplasms , Humans , Cluster Analysis , Neoplasms/genetics
16.
J Environ Manage ; 325(Pt A): 116428, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36272289

ABSTRACT

Recent advances in earth observation have enabled spatially explicit mapping of species' fundamental niche limits for nature conservation and management applications. This study investigates the use of ecosystem functional variables retrieved from Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data to map the distribution of two alpine treeline species, Betula utilis D.Don and Rhododendron campanulatum D.Don, over the Himalayan biodiversity hotspot. We developed forty-nine Novel Earth Observation Variables (NEOVs) from MODIS products. To determine the effectiveness and ecological significance of NEOV combinations, we built and compared four models: a bioclimatic model (BCM) with bioclimatic predictors, a phenology model (PhenoM) with earth-observation-derived phenological predictors, a biophysical model (BiophyM) with earth-observation-derived biophysical predictors, and a hybrid model (HM) combining selected predictors from the BCM, PhenoM, and BiophyM. All models included topographical variables by default. Models incorporating NEOVs were competitive for the focal species, whereas models without NEOVs performed considerably worse in both predictive performance and explanatory strength. To identify the most accurate predictions, we assessed the congruence of predictions through pairwise comparisons of model performance. Among the three machine learning algorithms tested (artificial neural networks, generalised boosting model, and maximum entropy), maximum entropy produced the most promising predictions for the BCM, PhenoM, BiophyM, and HM, with area under the curve (AUC) ≥0.9 and true skill statistic (TSS) ≥0.6 for the focal species.
The overall investigation revealed the competency of NEOVs in accurately predicting species' fundamental niches, whereas conventional bioclimatic variables could not achieve this level of precision. A principal component analysis of environmental spaces showed that the niches of the focal species substantially overlapped. We demonstrate that combining satellite-derived biotic and abiotic variables with species occurrence data can provide precision and resolution for species distribution mapping at a scale that is ecologically relevant and at the operational scale of most conservation and management actions.
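The true skill statistic (TSS) reported above is simply sensitivity plus specificity minus one. A minimal sketch on a hypothetical presence/absence confusion matrix:

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1, ranging from -1 to 1.

    tp/fn: presences correctly/incorrectly predicted
    tn/fp: absences correctly/incorrectly predicted
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

# Hypothetical counts for one species distribution model
tss = true_skill_statistic(tp=45, fp=10, fn=5, tn=40)  # 0.9 + 0.8 - 1 = 0.7
```

A model with TSS ≥ 0.6, as reported for the focal species, correctly separates presences from absences well above chance (TSS = 0).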


Subject(s)
Biodiversity , Ecosystem , Satellite Imagery , Algorithms
17.
Mol Phylogenet Evol ; 178: 107636, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36208695

ABSTRACT

Phylogenetic trees are essential tools in evolutionary biology that convey information on evolutionary events among organisms and molecules. For a dataset of n sequences, (2n-5)!! possible tree topologies exist, so determining the optimum topology by brute force is infeasible. Recently, a recursive graph cut on a graph-represented similarity matrix has proven accurate in reconstructing phylogenetic trees containing distantly related sequences. However, identifying the optimum graph cut is challenging, and approximate solutions are currently used. Here, a phylogenetic tree was reconstructed with an improved graph cut using a quantum-inspired computer, the Fujitsu Digital Annealer (DA), and the algorithm was named the "Normalized-Minimum cut by Digital Annealer (NMcutDA)" method. First, the criterion for the graph cut, the normalized cut value, was compared with existing clustering methods. Based on this cut, we verified that the simulated phylogenetic tree could be reconstructed with the highest accuracy when sequences were divergent. Moreover, for some real data from the structure-based protein classification database, only NMcutDA clustered sequences into the correct superfamilies. In conclusion, NMcutDA reconstructed better phylogenetic trees than the other methods by optimizing the graph cut. We anticipate that when sequence diversity is sufficiently high, NMcutDA can be used with high efficiency.
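The normalized cut criterion behind NMcutDA can be illustrated on a toy similarity graph. The matrix below is invented for illustration, and the search over partitions (performed by the Digital Annealer in the paper) is omitted; only the cut value itself is computed:

```python
import numpy as np

def normalized_cut(W, part):
    """Normalized cut value of a binary partition of a similarity graph.

    W    : (n, n) symmetric similarity matrix with zero diagonal
    part : boolean array, True for nodes in A, False for nodes in B
    Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)
    """
    A, B = part, ~part
    cut = W[np.ix_(A, B)].sum()        # total weight crossing the partition
    assoc_A = W[A, :].sum()            # total weight attached to A
    assoc_B = W[B, :].sum()            # total weight attached to B
    return cut / assoc_A + cut / assoc_B

# Two tight clusters {0,1} and {2,3} joined by weak cross edges
W = np.array([[0, 5, 1, 0],
              [5, 0, 0, 1],
              [1, 0, 0, 5],
              [0, 1, 5, 0]], float)
good = normalized_cut(W, np.array([True, True, False, False]))  # cuts weak edges
bad  = normalized_cut(W, np.array([True, False, True, False]))  # cuts strong edges
```

The partition respecting the two clusters yields a much smaller Ncut, which is why minimizing it recovers meaningful sequence groupings.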


Subject(s)
Algorithms , Computers , Phylogeny , Cluster Analysis , Protein Databases
18.
Mol Phylogenet Evol ; 178: 107643, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36216302

ABSTRACT

Phylogenetic inference, which involves time-consuming calculations, is a field where parallelization can speed up the resolution of many problems. TNT (a widely used program for phylogenetic analysis under parsimony) allows parallelization under the PVM system (Parallel Virtual Machine). However, as the basic aspects of the implementation remain unpublished, few studies have taken advantage of the parallelization routines of TNT. In addition, the PVM system is deprecated by many system administrators. One of the most common standards for high performance computing is now MPI (Message Passing Interface). To facilitate the use of the parallel analyses offered by TNT, this paper describes the basic aspects of the implementation, as well as a port of the parallelization interface of TNT into MPI. The use of the new routines is illustrated by reanalysis of seven significant datasets, either recent phylogenomic datasets with many characters (up to 2,509,064 characters) or datasets with large numbers of taxa (up to 13,921 taxa). Versions of TNT including the MPI functionality are available at: http://www.lillo.org.ar/phylogeny/tnt/.


Subject(s)
Algorithms , Software , Phylogeny , Computing Methodologies
19.
Sci Total Environ ; 857(Pt 2): 159448, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36252662

ABSTRACT

As an essential environmental property, aqueous solubility quantifies the hydrophobicity of a compound and can be used to evaluate the ecological risk and toxicity of organic pollutants. Concerned about the proliferation of organic contaminants in water and the associated technical burden, researchers have developed QSPR models to predict aqueous solubility. However, there are no standard procedures or best practices for comprehensively evaluating such models. Hence, the CRITIC-TOPSIS comprehensive assessment method, based on a variety of statistical parameters, is proposed here for the first time in the environmental-modeling field. Thirty-nine models, based on 13 ML algorithms (belonging to four tribes) and three descriptor-screening methods, were developed to reliably calculate aqueous solubility values (log Kws) for organic chemicals and to verify the effectiveness of the comprehensive assessment method. The evaluations showed better predictive accuracy and external competitiveness for the MLR-1, XGB-1, DNN-1, and kNN-1 models relative to the other prediction models in each tribe. Further, the XGB model based on SRM (XGB-1, C = 0.599) was selected as the optimal pathway for predicting aqueous solubility. We hope that the proposed comprehensive evaluation approach can serve as a promising tool for selecting the optimum environmental property prediction methods.
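The TOPSIS half of the assessment ranks alternatives by closeness to an ideal solution. A minimal sketch with hypothetical model scores and equal weights; the CRITIC step, which would derive the weights from contrast and conflict among criteria, is omitted here:

```python
import numpy as np

def topsis(X, weights, benefit):
    """Closeness of each alternative to the ideal solution (higher is better).

    X       : (n_alternatives, n_criteria) decision matrix
    weights : criteria weights summing to 1
    benefit : boolean per criterion, True if larger values are better
    """
    norm = X / np.sqrt((X ** 2).sum(axis=0))        # vector normalization
    V = norm * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                   # closeness C in [0, 1]

# Three hypothetical models scored on R^2 (benefit) and RMSE (cost)
X = np.array([[0.90, 0.30],
              [0.85, 0.25],
              [0.70, 0.50]])
C = topsis(X, np.array([0.5, 0.5]), np.array([True, False]))
best = int(np.argmax(C))
```

The model balancing high R^2 with low RMSE wins, which mirrors how a single C value (like the reported C = 0.599) summarizes many statistical parameters at once.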


Subject(s)
Algorithms , Quantitative Structure-Activity Relationship , Solubility , Water/chemistry , Machine Learning
20.
Sci Total Environ ; 857(Pt 2): 159493, 2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36257423

ABSTRACT

A good understanding of eco-hydrological processes requires significant knowledge of the geospatial distribution of soil moisture (SM). However, SM monitoring remains challenging due to its large spatial variability and dynamic temporal response. This study assessed the performance of a particle swarm optimization (PSO)-based optimized Cerebellar Model Articulation Controller (CMAC) in generating high-resolution surface SM estimates from Sentinel-2 imagery over a Mediterranean agro-ecosystem. The results were compared with those of a PSO-optimized group method of data handling (GMDH), a more common data-driven method. Two modeling approaches were examined: modeling in homogeneous clusters (the local approach) and modeling the entire area as a single entity (the global approach). Candidate predictors, namely Sentinel-2 spectral bands, the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI), a digital elevation model (DEM), slope, and aspect, were used as input variables to estimate SM. An intensive field survey was conducted to gather in-situ SM data using a time-domain reflectometer (TDR). K-fold validation based on the in-situ SM measurements demonstrated the reasonability of the SM estimates produced by the proposed methodology. Homogeneous areas were detected using genetic and particle swarm optimization algorithms. The synthesized SM product of PSO-GMDH showed mean normalized root-mean-square errors (NRMSE) of 13.6 and 8.91 for the global and local approaches, respectively, in the test phase. The PSO-CMAC method, with mean NRMSEs of 12.47 and 8.72 for the global and local approaches, showed the highest accuracy and outperformed the PSO-GMDH method under both approaches.
Overall, the results revealed that clustering the study area before running machine learning (ML) models, coupled with optical satellite imagery and geophysical properties, boosts predictive performance and can lead to more accurate SM mapping in heterogeneous areas. The results also showed that the global approach had only moderate performance in capturing SM heterogeneity.
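The NRMSE metric used to compare the two methods can be computed as below. Normalization conventions vary between studies (observed range vs. mean), so range normalization is an assumption here, and the observation/prediction values are invented for illustration:

```python
import numpy as np

def nrmse_percent(obs, pred):
    """RMSE normalized by the observed range, expressed as a percentage."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    return 100.0 * rmse / (obs.max() - obs.min())

# Hypothetical volumetric soil-moisture observations vs. model predictions
obs  = [0.10, 0.20, 0.30, 0.40]
pred = [0.12, 0.18, 0.33, 0.38]
err = nrmse_percent(obs, pred)
```

Because NRMSE is dimensionless, it allows the local and global approaches to be compared across clusters with different SM ranges.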


Subject(s)
Ecosystem , Soil , Satellite Imagery/methods , Water/analysis , Algorithms