Results 1 - 19 of 19
1.
Sensors (Basel) ; 24(7)2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38610379

ABSTRACT

Recent advances in deep learning and aerial Light Detection And Ranging (LiDAR) have opened the possibility of refining the classification and segmentation of 3D point clouds to support the monitoring of complex environments. In this context, the present study focuses on developing an ordinal classification model for forest areas, where LiDAR point clouds can be classified into four distinct ordinal classes: ground, low vegetation, medium vegetation, and high vegetation. To do so, an effective soft labeling technique based on a novel generalized exponential function (CE-GE) is applied to the PointNet network architecture. Statistical analyses based on the Kolmogorov-Smirnov and Student's t-tests reveal that the CE-GE method achieves the best results for all evaluation metrics compared with the other methodologies. Regarding the confusion matrices of the best alternative and the standard categorical cross-entropy method, the smoothed ordinal classification is more consistent than the nominal approach. Thus, the proposed methodology significantly improves the point-by-point classification of PointNet, reducing the errors in distinguishing between the middle classes (low vegetation and medium vegetation).
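The soft labeling idea can be pictured with a small sketch: a unimodal label vector whose mass decays exponentially with the ordinal distance to the true class. The abstract does not give the exact CE-GE parameterisation, so the `alpha` and `p` values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def soft_ordinal_labels(true_class, n_classes, alpha=1.0, p=2.0):
    """Unimodal soft label vector: mass decays exponentially with
    ordinal distance to the true class (illustrative generalized
    exponential; not necessarily the paper's exact CE-GE form)."""
    k = np.arange(n_classes)
    weights = np.exp(-alpha * np.abs(k - true_class) ** p)
    return weights / weights.sum()

# Four ordinal classes: ground, low, medium, high vegetation
print(soft_ordinal_labels(1, 4))
```

Training against such a target penalises predictions far from the true class more than adjacent ones, which is what reduces confusion between the middle classes.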

2.
Article in English | MEDLINE | ID: mdl-38347692

ABSTRACT

Real-world classification problems may involve different hierarchical levels in which the categories are arranged in an ordinal structure. However, no specific deep learning (DL) models simultaneously learn hierarchical and ordinal constraints while improving generalization performance. To fill this gap, we propose two novel ordinal-hierarchical DL methodologies, namely the hierarchical cumulative link model (HCLM) and hierarchical ordinal binary decomposition (HOBD), which are able to model the ordinal structure within the different hierarchical levels of the labels. In particular, we decompose the hierarchical-ordinal problem into local and global graph paths that may encode an ordinal constraint for each hierarchical level, and frame the problem as the simultaneous minimization of global and local losses. Furthermore, the ordinal constraints are imposed by two approaches, ordinal binary decomposition (OBD) and the cumulative link model (CLM), within each global and local function. The effectiveness of the proposed approach is measured on four real-world datasets from the industrial, biomedical, computer vision, and financial domains. The results demonstrate a statistically significant improvement over state-of-the-art nominal, ordinal, and hierarchical approaches.
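The OBD step mentioned above has a standard encoding that can be sketched directly: a K-class ordinal label becomes K-1 cumulative binary targets of the form "rank greater than k" (the hierarchical machinery of HCLM/HOBD is beyond this snippet).

```python
import numpy as np

def ordinal_binary_targets(y, n_classes):
    """Ordinal binary decomposition: encode each label as K-1
    binary indicators [y > 0, y > 1, ..., y > K-2]."""
    thresholds = np.arange(n_classes - 1)
    return (np.asarray(y)[:, None] > thresholds).astype(int)

print(ordinal_binary_targets([0, 2, 3], 4))
# → [[0 0 0]
#    [1 1 0]
#    [1 1 1]]
```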

3.
Expert Syst Appl ; 225: 120103, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37090447

ABSTRACT

The health emergency caused by COVID-19 has affected countries worldwide and generated a global health and economic crisis. To support countries' responses, numerous lines of research have been developed. The spotlight has been put on effectively and rapidly diagnosing and predicting the evolution of the pandemic, one of the most challenging problems of recent months. This work contributes to the existing literature by developing a two-step methodology to analyze the transmission rate, designing models applied to territories with similar pandemic behavior. Virus transmission is treated analogously to bacterial growth curves in order to understand the spread of the virus and to make predictions about its future evolution. Hence, an analytical clustering procedure is first applied to create groups of locations where the virus transmission rate behaved similarly in the different outbreaks. A curve decomposition process based on an iterative polynomial procedure is then applied, obtaining meaningful forecasting features. Information from the territories belonging to the same cluster is merged to build models capable of simultaneously predicting the 14-day incidence in several locations using evolutionary artificial neural networks. The methodology is applied to Andalusia (Spain), although it is applicable to any region in the world. Individual models trained for a specific territory are included for comparison purposes. The results demonstrate that this methodology achieves statistically similar, or even better, performance for most of the locations. In addition to being highly competitive, the main advantage of the proposal lies in its reduced complexity: the total number of parameters to be estimated is reduced by up to 93.51% for short-term and 93.31% for mid-term forecasting, and the number of required models is reduced by 73.53% and 58.82% for the short- and mid-term forecasting horizons, respectively.
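The grouping step can be pictured with a toy example: hypothetical incidence curves for four territories are compared pairwise and each is matched to its most similar neighbour. The paper's analytical clustering procedure is more elaborate; this only conveys the intuition of grouping territories with similar transmission behaviour.

```python
import numpy as np

# Hypothetical cumulative incidence curves for four territories
curves = np.array([[1, 2, 4, 8],
                   [1, 2, 5, 9],
                   [9, 7, 4, 2],
                   [8, 6, 4, 1]], dtype=float)

# Pairwise Euclidean distances between curves, then the most
# similar territory for each one (diagonal excluded)
d = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nearest = d.argmin(axis=1)
print(nearest)  # territories 0-1 and 2-3 behave similarly
```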

4.
IEEE Trans Neural Netw Learn Syst ; 34(3): 1478-1488, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34428161

ABSTRACT

Activation functions lie at the core of every neural network model, from shallow to deep convolutional neural networks. Their properties and characteristics shape the output range of each layer and, thus, the network's capabilities. Modern approaches rely mostly on a single function choice for the whole network, usually ReLU or a similar alternative. In this work, we propose two new activation functions, analyze their properties, and compare them with 17 function proposals from the recent literature on six distinct problems with different characteristics, with the objective of shedding some light on their comparative performance. The results show that the proposed functions achieve better performance than the most commonly used ones.
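The abstract does not name the two proposed functions, so the snippet below only illustrates the kind of functions involved in such comparisons, using ReLU and the well-known Swish alternative:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return np.maximum(0.0, x)

def swish(x):
    """Swish, x * sigmoid(x): a smooth ReLU alternative often
    included in activation-function comparisons."""
    return x / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))   # → [0. 0. 2.]
print(swish(x))
```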

5.
Sci Rep ; 12(1): 17327, 2022 Oct 15.
Article in English | MEDLINE | ID: mdl-36243880

ABSTRACT

Modelling extreme value distributions, such as wave height time series in which the highest waves are much less frequent than the lower ones, has traditionally been tackled with Peak-Over-Threshold (POT) methodologies, where modelling is based only on the values above a threshold. This threshold is usually predefined by the user, while the remaining values are ignored. In this paper, we propose a new method to estimate the distribution of the complete time series, including both extreme and regular values. The methodology assumes that the time series can be modelled by a mixture of a normal distribution and a uniform one. The resulting theoretical distribution is then used to set the threshold for the POT methodology. The methodology is tested on nine real-world time series collected in the Gulf of Alaska, Puerto Rico and Gibraltar (Spain), provided by the National Data Buoy Center (USA) and Puertos del Estado (Spain). Using the Kolmogorov-Smirnov statistical test, the results confirm that the time series can be modelled with this type of mixed distribution. Based on this, the return values and confidence intervals for wave height over different periods of time are also calculated.
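A small sketch of the peaks-over-threshold idea on a synthetic series: "regular" values from a normal component mixed with extremes from a uniform one, with the threshold placed where the normal component becomes negligible. The mean + 3 sd rule used here is an illustrative choice, not the paper's estimated threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic wave-height series: normal "regular" values plus
# uniformly distributed extremes, mimicking the assumed mixture
regular = rng.normal(loc=2.0, scale=0.5, size=900)
extreme = rng.uniform(low=3.5, high=8.0, size=100)
heights = np.concatenate([regular, extreme])

# POT keeps only the exceedances over the threshold
threshold = 2.0 + 3 * 0.5          # mean + 3 sd of the normal part
peaks = heights[heights > threshold]
print(len(peaks), round(float(peaks.min()), 2))
```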

6.
Expert Syst Appl ; 207: 117977, 2022 Nov 30.
Article in English | MEDLINE | ID: mdl-35784094

ABSTRACT

A great deal of research has been carried out to combat the COVID-19 pandemic since the first outbreak was detected in Wuhan, China. Anticipating the evolution of an outbreak helps to devise suitable economic, social and health care strategies to mitigate the effects of the virus. For this reason, predicting the SARS-CoV-2 transmission rate has become one of the most important and challenging problems of recent months. In this paper, we apply a two-stage mid- and long-term forecasting framework to the epidemic situation in eight districts of Andalusia, Spain. First, an analytical procedure is performed iteratively to fit polynomial curves to the cumulative curve of contagions. Then, the extracted information is used to estimate the parameters and structure of an evolutionary artificial neural network with hybrid architectures (i.e., with different basis functions for the hidden nodes), considering both single and simultaneous time-horizon estimations. The results demonstrate that including the polynomial information extracted during the training stage significantly improves the mid- and long-term estimations in seven of the eight districts considered. The increase in average accuracy (for the joint mid- and long-term horizon forecasts) is 37.61% and 35.53% for the single and simultaneous forecast approaches, respectively.
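The polynomial-curve step can be sketched as fitting a low-degree polynomial to a hypothetical cumulative contagion curve and using its coefficients as compact forecasting features; the paper's iterative, segment-wise procedure is not reproduced here.

```python
import numpy as np

# Hypothetical cumulative case counts over ten days
days = np.arange(10)
cumulative = np.array([1, 3, 8, 18, 35, 60, 95, 140, 196, 260],
                      dtype=float)

# Fit a cubic to the cumulative curve; the coefficients act as
# compact features describing the outbreak's shape
coeffs = np.polyfit(days, cumulative, deg=3)
fitted = np.polyval(coeffs, days)
print(coeffs.round(3))
```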

7.
PLoS One ; 16(5): e0252068, 2021.
Article in English | MEDLINE | ID: mdl-34019601

ABSTRACT

Donor-Recipient (D-R) matching is one of the main challenges in liver transplantation today. Given the increasing number of recipients and the small number of donors, the allocation method is crucial. In this paper, to establish a fair comparison, the United Network for Organ Sharing database was used with four different end-points (3 months, and 1, 2 and 5 years), comprising a total of 39,189 D-R pairs and 28 donor and recipient variables. Modelling techniques were divided into two groups: 1) classical statistical methods, including Logistic Regression (LR) and Naïve Bayes (NB), and 2) standard machine learning techniques, including Multilayer Perceptron (MLP), Random Forest (RF), Gradient Boosting (GB) and Support Vector Machines (SVM), among others. The methods were compared with standard scores: MELD, SOFT and BAR. For the 5-year end-point, LR (AUC = 0.654) outperformed several machine learning techniques, such as MLP (AUC = 0.599), GB (AUC = 0.600), SVM (AUC = 0.624) and RF (AUC = 0.644), among others. Moreover, LR also outperformed the standard scores. The same pattern was reproduced for the other three end-points. Complex machine learning methods were not able to improve the performance of liver allocation, probably due to the implicit limitations associated with the collection process of the database.
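The AUC values quoted above can be computed from scores and labels alone; a minimal sketch of the metric via the rank (Mann-Whitney) identity:

```python
import numpy as np

def auc(scores, labels):
    """AUC as the probability that a random positive is scored above
    a random negative (ties count one half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0
```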


Subject(s)
Histocompatibility Testing/statistics & numerical data , Liver Transplantation/statistics & numerical data , Support Vector Machine , Tissue Donors/statistics & numerical data , Tissue and Organ Procurement/statistics & numerical data , Transplant Recipients/statistics & numerical data , Bayes Theorem , Data Interpretation, Statistical , Databases, Factual , Histocompatibility Testing/methods , Humans , Liver Transplantation/ethics , Logistic Models , Tissue Donors/supply & distribution , Tissue and Organ Procurement/methods , Transplant Recipients/psychology
8.
Sci Rep ; 11(1): 7067, 2021 03 29.
Article in English | MEDLINE | ID: mdl-33782476

ABSTRACT

Parkinson's disease is characterised by a decrease in the density of presynaptic dopamine transporters in the striatum. Frequently, the corresponding diagnosis is performed using a qualitative analysis of the 3D images obtained after the administration of [Formula: see text]I-ioflupane, considering a binary classification problem (absence or presence of Parkinson's disease). In this work, we propose a new methodology for classifying these images into three classes depending on the severity of the disease in the image. To tackle this problem, we use an ordinal classifier, given the natural order of the class labels. A novel feature selection strategy is developed to cope with the large number of voxels in the image, and a method for generating synthetic images is proposed to improve the quality of the classifier. The methodology is tested on 434 studies conducted between September 2015 and January 2019, divided into three groups: 271 without alteration of the presynaptic nigrostriatal pathway, 73 with slight alteration and 90 with severe alteration. The results confirm that the methodology improves on state-of-the-art algorithms and is able to find informative voxels outside the standard regions of interest used for this problem. The differences are assessed by statistical tests, which show that the proposed image ordinal classification could be considered a decision-support system in medicine.


Subject(s)
Imaging, Three-Dimensional/methods , Parkinson Disease/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods
9.
IEEE Trans Cybern ; 51(11): 5409-5422, 2021 Nov.
Article in English | MEDLINE | ID: mdl-31945011

ABSTRACT

Time-series clustering is the process of grouping time series with respect to their similarity or characteristics. Previous approaches usually combine a specific distance measure for time series and a standard clustering method. However, these approaches do not take the similarity of the different subsequences of each time series into account, which can be used to better compare the time-series objects of the dataset. In this article, we propose a novel technique of time-series clustering consisting of two clustering stages. In a first step, a least-squares polynomial segmentation procedure is applied to each time series, which is based on a growing window technique that returns different-length segments. Then, all of the segments are projected into the same dimensional space, based on the coefficients of the model that approximates the segment and a set of statistical features. After mapping, a first hierarchical clustering phase is applied to all mapped segments, returning groups of segments for each time series. These clusters are used to represent all time series in the same dimensional space, after defining another specific mapping process. In a second and final clustering stage, all the time-series objects are grouped. We consider internal clustering quality to automatically adjust the main parameter of the algorithm, which is an error threshold for the segmentation. The results obtained on 84 datasets from the UCR Time Series Classification Archive have been compared against three state-of-the-art methods, showing that the performance of this methodology is very promising, especially on larger datasets.
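A simplified version of the growing-window segmentation described above (straight-line fits instead of the paper's polynomials, and a single error threshold) can look like this:

```python
import numpy as np

def segment(series, max_error=0.5, min_len=4):
    """Growing-window segmentation: each segment is extended while a
    least-squares line still fits within max_error; then a new
    segment starts. Returns half-open (start, end) index pairs."""
    series = np.asarray(series, dtype=float)
    segments, start, n = [], 0, len(series)
    while start < n:
        end = min(start + min_len, n)
        while end < n:
            x = np.arange(start, end + 1)
            coeffs = np.polyfit(x, series[start:end + 1], 1)
            resid = np.abs(series[start:end + 1]
                           - np.polyval(coeffs, x)).max()
            if resid > max_error:
                break
            end += 1
        segments.append((start, end))
        start = end
    return segments

series = np.r_[np.linspace(0, 9, 10), np.linspace(9, 0, 10)]
print(segment(series))  # → [(0, 10), (10, 20)]
```

Each segment would then be mapped to a feature vector (fit coefficients plus statistics) before the two clustering stages.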


Subject(s)
Algorithms , Cluster Analysis , Time Factors
10.
Curr Opin Organ Transplant ; 25(4): 399-405, 2020 08.
Article in English | MEDLINE | ID: mdl-32618714

ABSTRACT

PURPOSE OF REVIEW: Machine learning techniques play an important role in organ transplantation. Analysing the main tasks for which they are being applied, together with the advantages and disadvantages of their use, can be of crucial interest to clinical practitioners. RECENT FINDINGS: In the last 10 years, there has been an explosion of interest in the application of machine-learning techniques to organ transplantation. Several approaches have been proposed in the literature aiming to find universal models by considering multicenter cohorts or cohorts from different countries. Moreover, deep learning has recently been applied, demonstrating a notable ability to deal with vast amounts of information. SUMMARY: Organ transplantation can benefit from machine learning by improving the current procedures for donor-recipient matching or by improving standard scores. However, correct preprocessing is needed to provide consistent, high-quality databases for machine-learning algorithms, aiming at robust and fair approaches to support expert decision-making systems.


Subject(s)
Machine Learning , Organ Transplantation/methods , Donor Selection/methods , Donor Selection/statistics & numerical data , Humans , Organ Transplantation/statistics & numerical data , Tissue Donors , Tissue and Organ Procurement/methods , Tissue and Organ Procurement/statistics & numerical data
11.
PLoS One ; 15(1): e0227188, 2020.
Article in English | MEDLINE | ID: mdl-31923277

ABSTRACT

Several European countries have established criteria for prioritising initiation of treatment in patients infected with the hepatitis C virus (HCV) by grouping patients according to clinical characteristics. Based on neural network techniques, our objective was to identify those factors for HIV/HCV co-infected patients (to which clinicians have given careful consideration before treatment uptake) that have not been included among the prioritisation criteria. This study was based on the Spanish HERACLES cohort (NCT02511496) (April-September 2015, 2940 patients) and involved application of different neural network models with different basis functions (product-unit, sigmoid unit and radial basis function neural networks) for automatic classification of patients for treatment. An evolutionary algorithm was used to determine the architecture and estimate the coefficients of the model. This machine learning methodology found that radial basis function neural networks provided a very simple model in terms of the number of patient characteristics considered by the classifier (in this case, six), returning a good overall classification accuracy of 0.767 and a minimum sensitivity (for the classification of the minority class, untreated patients) of 0.550. Finally, the area under the ROC curve was 0.802, which proved to be exceptional. The parsimony of the model makes it especially attractive, using just eight connections. The independent variable "recent PWID" is always included, owing to its importance. The simplicity of the model makes it possible to analyse the relationship between patient characteristics and the probability of belonging to the treated group.


Subject(s)
AIDS-Related Opportunistic Infections/complications , AIDS-Related Opportunistic Infections/drug therapy , Antiviral Agents/therapeutic use , Hepatitis C/complications , Hepatitis C/drug therapy , Machine Learning , Adolescent , Adult , Aged , Coinfection , Decision Support Techniques , Female , Follow-Up Studies , HIV/genetics , Hepacivirus/genetics , Humans , Male , Middle Aged , Neural Networks, Computer , Prospective Studies , Spain , Young Adult
12.
IEEE Trans Med Imaging ; 35(4): 1036-45, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26672031

ABSTRACT

Melanoma thickness is the most important factor associated with survival in patients with melanoma. It is most commonly reported as a depth measurement in millimeters (mm), computed by pathological examination after a biopsy of the suspected lesion. In order to avoid an invasive method for estimating melanoma thickness before surgery, we propose a computational image analysis system based on dermoscopic images. The proposed feature extraction is based on clinical findings that correlate certain characteristics present in dermoscopic images with tumor depth. Two supervised classification schemes are proposed: a binary classification in which melanomas are classified into thin or thick, and a three-class scheme (thin, intermediate, and thick). The performance of several nominal classification methods, including a recent interpretable method combining logistic regression with artificial neural networks (Logistic regression using Initial variables and Product Units, LIPU), is compared. For the three-class problem, a set of ordinal classification methods (considering the ordering relation between the three classes) is included. For the binary case, LIPU outperforms all the other methods with an accuracy of 77.6%, while for the second scheme, although LIPU reports the highest overall accuracy, the ordinal classification methods achieve a better balance between the performances of all classes.


Subject(s)
Dermoscopy/methods , Image Processing, Computer-Assisted/methods , Melanoma/diagnostic imaging , Algorithms , Humans , Machine Learning
13.
IEEE Trans Neural Netw Learn Syst ; 27(9): 1947-61, 2016 09.
Article in English | MEDLINE | ID: mdl-26316222

ABSTRACT

The imbalanced nature of some real-world data is one of the current challenges for machine learning researchers. One common approach oversamples the minority class through convex combination of its patterns. We explore the general idea of synthetic oversampling in the feature space induced by a kernel function (as opposed to input space). If the kernel function matches the underlying problem, the classes will be linearly separable and synthetically generated patterns will lie on the minority class region. Since the feature space is not directly accessible, we use the empirical feature space (EFS) (a Euclidean space isomorphic to the feature space) for oversampling purposes. The proposed method is framed in the context of support vector machines, where the imbalanced data sets can pose a serious hindrance. The idea is investigated in three scenarios: 1) oversampling in the full and reduced-rank EFSs; 2) a kernel learning technique maximizing the data class separation to study the influence of the feature space structure (implicitly defined by the kernel function); and 3) a unified framework for preferential oversampling that spans some of the previous approaches in the literature. We support our investigation with extensive experiments over 50 imbalanced data sets.
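The convex-combination idea behind the oversampling can be shown in input space for brevity; the paper performs it in the empirical feature space induced by the kernel, which this sketch does not reproduce.

```python
import numpy as np

def oversample_convex(X_min, n_new, rng=None):
    """Generate synthetic minority patterns as convex combinations
    of random pairs of existing minority patterns (input-space
    analogue of the paper's feature-space oversampling)."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_min), size=n_new)
    lam = rng.random((n_new, 1))
    return lam * X_min[i] + (1 - lam) * X_min[j]

X_min = np.array([[0.0, 0.0], [1.0, 1.0]])
synthetic = oversample_convex(X_min, n_new=5, rng=0)
```

Every synthetic pattern lies on a segment between two minority patterns, so it stays inside the minority region when the classes are linearly separable in the working space.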

14.
Neural Comput ; 27(4): 954-81, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25734495

ABSTRACT

In this letter, we explore the idea of modeling slack variables in support vector machine (SVM) approaches. The study is motivated by SVM+, which models the slacks through a smooth correcting function determined by additional (privileged) information about the training examples that is not available in the test phase. We take a closer look at the meaning and consequences of smooth modeling of slacks, as opposed to determining them in an unconstrained manner through the SVM optimization program. To better understand this difference, we only allow the determination and modeling of slack values on the same information, that is, using the same training input in the original input space. We also explore whether it is possible to improve classification performance by combining (in a convex combination) the original SVM slacks with the modeled ones. We show experimentally that this approach not only leads to improved generalization performance but also yields more compact, lower-complexity models. Finally, we extend this idea to the context of ordinal regression, where a natural order exists among the classes. The experimental results confirm the principal findings from the binary case.

15.
Neural Netw ; 59: 51-60, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25078110

ABSTRACT

Threshold models are one of the most common approaches for ordinal regression. They are based on projecting patterns onto the real line and dividing this line into consecutive intervals, one for each class. However, finding such a one-dimensional projection can be too harsh an imposition for some datasets. This paper proposes a multidimensional latent space representation with the purpose of relaxing this projection, where the different classes are arranged in concentric hyperspheres, each class containing the previous classes in the ordinal scale. The proposal is implemented through a neural network model, each dimension being a linear combination of a common set of basis functions. The model is compared with a nominal neural network, a neural network based on the proportional odds model, and other state-of-the-art ordinal regression methods on a total of 12 datasets. The proposed latent space shows an improvement on the two performance metrics considered, and the model based on the three-dimensional latent space obtains competitive performance when compared with the other methods.
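The concentric-hypersphere arrangement can be read as: a latent point's ordinal class is determined by which spherical shell it falls in. A toy decision rule under that reading (the radii here are hypothetical, not learned as in the paper):

```python
import numpy as np

radii = np.array([1.0, 2.0, 3.0])  # hypothetical shell boundaries, 4 classes

def hypersphere_class(z, radii):
    """Ordinal class of latent point z: index of the first shell
    boundary not yet exceeded by the norm ||z||."""
    return int(np.searchsorted(radii, np.linalg.norm(z)))

print(hypersphere_class([0.5, 0.0], radii))  # → 0
print(hypersphere_class([0.0, 2.5], radii))  # → 2
```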


Subject(s)
Models, Neurological , Neural Networks, Computer , Algorithms , Artificial Intelligence , Databases, Factual , Support Vector Machine
16.
IEEE Trans Cybern ; 44(5): 681-94, 2014 May.
Article in English | MEDLINE | ID: mdl-23807481

ABSTRACT

The classification of patterns into naturally ordered labels is referred to as ordinal regression. This paper proposes an ensemble methodology specifically adapted to this type of problem, based on computing different classification tasks through the formulation of different order hypotheses. Each model is trained to distinguish between one given class (k) and all the remaining ones, while grouping the latter into those classes with a rank lower than k and those with a rank higher than k. It can therefore be considered a reformulation of the well-known one-versus-all scheme. The base algorithm for the ensemble could be any threshold (or even probabilistic) method, such as those selected in this paper: kernel discriminant analysis, support vector machines and logistic regression (LR), all reformulated to deal with ordinal regression problems. The method is shown to be competitive when compared with other state-of-the-art methodologies (both ordinal and nominal), using six measures and a total of 15 ordinal datasets. Furthermore, an additional set of experiments is used to study the potential scalability and interpretability of the proposed method when using LR as the base methodology for the ensemble.
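The relabelling used for member k of the ensemble can be sketched directly: every pattern becomes "below k", "class k" or "above k".

```python
import numpy as np

def targets_for_member(y, k):
    """Relabel ordinal classes for ensemble member k:
    0 = rank below k, 1 = class k, 2 = rank above k."""
    y = np.asarray(y)
    return np.where(y < k, 0, np.where(y == k, 1, 2))

print(targets_for_member([0, 1, 2, 3, 4], k=2))  # → [0 0 1 2 2]
```

One such three-class task is built per class, and the members' outputs are then combined to recover the full ordinal prediction.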


Subject(s)
Logistic Models , Pattern Recognition, Automated/methods , Algorithms , Discriminant Analysis , Support Vector Machine
17.
IEEE Trans Neural Netw Learn Syst ; 24(11): 1836-49, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24808616

ABSTRACT

In this paper, two neural network threshold ensemble models are proposed for ordinal regression problems. In the first ensemble method, the thresholds are fixed a priori and are not modified during training. The second one considers the thresholds of each member of the ensemble as free parameters, allowing their modification during the training process. This is achieved through a reformulation of these tunable thresholds, which avoids the constraints they must fulfill for the ordinal regression problem. During training, the diversity that exists among the projections generated by the different members is taken into account in the parameter update. This diversity is promoted explicitly using a diversity-encouraging error function, extending the well-known negative correlation learning framework to ordinal regression and inheriting many of its good properties. Experimental results demonstrate that the proposed algorithms achieve competitive generalization performance on four ordinal regression metrics.
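The threshold reformulation can be illustrated with a common trick for enforcing the ordering b1 ≤ b2 ≤ ... without explicit constraints: free parameters whose squares give the increments. The paper's exact reparameterisation is not detailed in the abstract, so this is only a representative example of the idea.

```python
import numpy as np

def thresholds_from_free(a):
    """Ordered thresholds from unconstrained parameters:
    b_1 = a_1, b_k = b_{k-1} + a_k**2 for k > 1."""
    a = np.asarray(a, dtype=float)
    return a[0] + np.concatenate([[0.0], np.cumsum(a[1:] ** 2)])

print(thresholds_from_free([0.5, 1.0, 2.0]))  # → [0.5 1.5 5.5]
```

Because the increments are squares, the resulting thresholds are always non-decreasing, so a gradient-based optimizer can tune the free parameters without ever violating the ordering.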


Subject(s)
Algorithms , Data Interpretation, Statistical , Models, Statistical , Neural Networks, Computer , Pattern Recognition, Automated/methods , Regression Analysis , Computer Simulation
18.
IEEE Trans Neural Netw ; 22(2): 246-63, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21138802

ABSTRACT

This paper proposes a hybrid multilogistic methodology, named logistic regression using initial and radial basis function (RBF) covariates. The process for obtaining the coefficients is carried out in three steps. First, an evolutionary programming (EP) algorithm is applied in order to produce an RBF neural network (RBFNN) with a reduced number of RBF transformations and the simplest structure possible. Then, the initial attribute space (or, as it is commonly known in the logistic regression literature, the covariate space) is augmented by adding the nonlinear transformations of the input variables given by the RBFs of the best individual in the final generation. Finally, a maximum likelihood optimization method determines the coefficients associated with a multilogistic regression model built in this augmented covariate space. In this final step, two different multilogistic regression algorithms are applied: one considers all initial and RBF covariates (multilogistic initial-RBF regression), and the other incrementally constructs the model and applies cross-validation, resulting in automatic covariate selection (simplelogistic initial-RBF regression, SLIRBF). Both methods include a regularization parameter, which is also optimized. The proposed methodology is tested on 18 benchmark classification problems from well-known machine learning repositories and two real agronomical problems. The results are compared with the corresponding multilogistic regression methods applied to the initial covariate space, with the RBFNNs obtained by the EP algorithm, and with other probabilistic classifiers, including different RBFNN design methods (e.g., relaxed variable kernel density estimation, support vector machines, and sparse multinomial logistic regression) and a procedure similar to SLIRBF but using product unit basis functions. The SLIRBF models are found to be competitive when compared with the corresponding multilogistic regression methods and the RBFEP method. A measure of statistical significance indicates that SLIRBF reaches the state of the art.
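The covariate augmentation step can be sketched as follows; the centre and width below are hypothetical placeholders, since in the actual method the EP algorithm evolves them.

```python
import numpy as np

def add_rbf_covariates(X, centers, width=1.0):
    """Append Gaussian RBF transformations of the inputs to the
    original covariates, yielding the augmented design matrix."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.hstack([X, np.exp(-d2 / (2.0 * width ** 2))])

X = np.array([[0.0, 0.0], [1.0, 0.0]])
centers = np.array([[0.0, 0.0]])       # one hypothetical RBF centre
print(add_rbf_covariates(X, centers))
```

The multilogistic regression is then fitted on these augmented columns, so the linear model can express the nonlinearities captured by the RBFs.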


Subject(s)
Algorithms , Artificial Intelligence , Logistic Models , Neural Networks, Computer , Pattern Recognition, Automated/methods , Mathematical Computing , Pattern Recognition, Automated/statistics & numerical data , Regression Analysis , Software Design , Software Validation
19.
IEEE Trans Neural Netw ; 21(5): 750-70, 2010 May.
Article in English | MEDLINE | ID: mdl-20227976

ABSTRACT

This paper proposes a multiclassification algorithm using multilayer perceptron neural network models. It tries to boost two conflicting objectives of multiclassifiers: a high overall correct classification rate and a high classification rate for each individual class. This second objective is not usually optimized in classification, but it is considered here given the need to obtain high precision in each class in real problems. To solve this machine learning problem, we use a Pareto-based multiobjective optimization methodology based on a memetic evolutionary algorithm, namely a memetic Pareto approach based on the NSGA2 evolutionary algorithm (MPENSGA2). Once the Pareto front is built, two strategies for automatic individual selection are used: the best model in accuracy and the best model in sensitivity (the extremes of the Pareto front). These methodologies are applied to 17 classification benchmark problems obtained from the University of California at Irvine (UCI) repository and one complex real classification problem. The models obtained show high accuracy and a high classification rate for each class.
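Extracting the extremes of a Pareto front is straightforward once dominance is defined; a small sketch with hypothetical (accuracy, minimum sensitivity) pairs:

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points when maximising every objective."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        # p is dominated if some point is >= on all objectives
        # and strictly > on at least one
        dominated = ((pts >= p).all(axis=1) & (pts > p).any(axis=1)).any()
        if not dominated:
            front.append(i)
    return front

models = [[0.90, 0.50], [0.80, 0.70], [0.70, 0.60]]  # (accuracy, min sensitivity)
print(pareto_front(models))  # → [0, 1]
```

The best-in-accuracy and best-in-sensitivity models mentioned above are simply the two extremes of the returned front.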


Subject(s)
Algorithms , Biological Evolution , Neural Networks, Computer , ROC Curve , Artificial Intelligence , Computer Communication Networks/statistics & numerical data , Computer Simulation , Humans , Models, Genetic , Mutation/genetics , Reproducibility of Results