1.
PLoS One ; 19(4): e0299000, 2024.
Article En | MEDLINE | ID: mdl-38630761

In this article, the extremal problem of finding the optimal placement plan for 5G base stations at certain points within a linear area of finite length is formulated. A fundamental feature of the authors' formulation is that it takes into account not only the points of potential placement of base stations but also the possibility of selecting station instances to be placed at a specific point from a defined excess set, as well as the inseparable interaction of the placed 5G base stations within a self-organizing network (SON). The formulation of this extremal problem is reduced to a specific combinatorial model. The article proposes an adapted branch-and-bound method that, during the synthesis of the architecture of a linearly oriented 5G network segment, selects the best base station placement options for further evaluation of the resulting placement plans against defined performance indicators. As the final stage of synthesizing the optimal plan of a linearly oriented wireless network segment from the sequence of best placements, it is proposed to expand the parametric space of the design task with technical parameters specific to the 5G platform. The article presents a numerical example of solving an instance of the corresponding extremal problem. It is shown that the presented mathematical apparatus allows a set of optimal placements to be formed, taking into account the size of the non-coverage of the target area. To calculate this characteristic parameter, both an exact approach and two approximate approaches are formalized. The experimental results showed that, for high-dimensional problems, the approximate approach reduces the computational complexity of the adapted branch-and-bound method by more than six times, with only a slight loss of accuracy in the optimal solution.
The article is structured as follows: Section 1 (introduction and state of the art), Section 2 (problem statement and the proposed models and methods), Section 3 (numerical experiment and analysis of results), and Section 4 (conclusions and further research).
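The branch-and-bound search over placement plans described in the abstract can be illustrated with a minimal sketch. The model below is a simplification, assuming each candidate station is a (position, radius) pair covering an interval of the line, with non-coverage of the target segment as the objective; the pruning bound used here (sum of the largest remaining coverage diameters) is illustrative, not the authors' bound.

```python
def uncovered_length(area_len, stations):
    """Length of the target segment [0, area_len] left uncovered by the
    chosen (position, radius) stations."""
    intervals = sorted((max(0.0, p - r), min(area_len, p + r))
                       for p, r in stations)
    covered, cursor = 0.0, 0.0
    for lo, hi in intervals:
        if hi <= cursor:          # interval already swallowed
            continue
        covered += hi - max(lo, cursor)
        cursor = hi
    return area_len - covered

def branch_and_bound(area_len, candidates, k):
    """Pick k of the candidate stations minimizing non-coverage.
    Branches on include/exclude per candidate; prunes a subtree when
    even disjoint coverage by the largest remaining diameters cannot
    beat the incumbent plan."""
    cands = sorted(candidates, key=lambda s: -2 * s[1])  # big radii first
    best = {"plan": None, "gap": float("inf")}

    def search(idx, chosen):
        gap = uncovered_length(area_len, chosen)
        if len(chosen) == k:
            if gap < best["gap"]:
                best["plan"], best["gap"] = list(chosen), gap
            return
        remaining = k - len(chosen)
        if idx + remaining > len(cands):
            return                # not enough candidates left
        # optimistic bound: remaining stations cover disjoint spans
        if gap - sum(2 * r for _, r in cands[idx:idx + remaining]) >= best["gap"]:
            return
        search(idx + 1, chosen + [cands[idx]])   # include candidate
        search(idx + 1, chosen)                  # exclude candidate

    search(0, [])
    return best["plan"], best["gap"]
```

For example, on a 10-unit segment with candidates `[(2, 2), (5, 1), (8, 2), (5, 3)]` and `k = 2`, no pair of stations can cover more than 8 units, so the minimal non-coverage returned is 2.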


Mathematics
2.
Entropy (Basel) ; 25(12)2023 Nov 21.
Article En | MEDLINE | ID: mdl-38136447

Measurement is a typical way of gathering information about an investigated object, generalized by a finite set of characteristic parameters. The result of each measurement iteration is an instance of the class of the investigated object, in the form of a set of values of the characteristic parameters. An ordered set of instances forms a collection whose dimensionality, for a real object, is a factor that cannot be ignored. Managing the dimensionality of data collections, alongside classification, regression, and clustering, is a fundamental problem of machine learning. Compactification is the approximation of the original data collection by an equivalent collection (with a reduced dimension of characteristic parameters) under control of the accompanying losses of information capacity. Related to compactification is the data completeness verification procedure, which is characteristic of data reliability assessment. If there are stochastic parameters among the characteristic parameters of the initial data collection, the compactification procedure becomes more complicated. To take this into account, this study proposes a model of a structured collection of stochastic data defined in terms of relative entropy. The compactification of such a data model is formalized as an iterative procedure aimed at maximizing the relative entropy of the sequential application of direct and reverse projections of the data collection, taking into account estimates of the probability density functions of its attributes. A procedure for approximating the relative entropy function of compactification is proposed to reduce the computational complexity of the latter. To assess compactification qualitatively, this study undertakes a formal analysis that uses the information capacity of the data collection, together with the absolute and relative share of information losses due to compaction, as its metrics. Given the semantic connection between compactification and completeness, the proposed metric is also relevant to the task of assessing data reliability. Testing of the proposed compactification procedure demonstrated both its stability and its efficiency in comparison with previously used analogues, such as the principal component analysis method and the random projection method.
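As a point of reference for the baselines named above, the sketch below shows the random projection analogue together with a crude loss metric. This is not the authors' relative-entropy procedure: the distortion of pairwise squared distances is used here only as an illustrative proxy for the "relative share of information losses", and the function names are hypothetical.

```python
import numpy as np

def random_projection(X, d_out, seed=0):
    """Gaussian random projection: compacts n instances with d_in
    characteristic parameters down to d_out parameters (one of the
    baseline analogues mentioned alongside PCA)."""
    rng = np.random.default_rng(seed)
    d_in = X.shape[1]
    # 1/sqrt(d_out) scaling preserves squared distances in expectation
    R = rng.normal(0.0, 1.0 / np.sqrt(d_out), size=(d_in, d_out))
    return X @ R

def relative_information_loss(X, Y):
    """Relative loss proxy: mean absolute distortion of pairwise
    squared distances between the original and compacted collections,
    normalized by the mean original distance."""
    def pdist2(Z):
        diff = Z[:, None, :] - Z[None, :, :]
        return (diff ** 2).sum(-1)
    Dx, Dy = pdist2(X), pdist2(Y)
    mask = Dx > 0                       # ignore self-distances
    return float(np.abs(Dy[mask] - Dx[mask]).mean() / Dx[mask].mean())
```

By the Johnson–Lindenstrauss argument, the distortion proxy shrinks roughly as the square root of the reduced dimension grows, which gives a simple dial between compaction and loss.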

3.
Bioengineering (Basel) ; 10(7)2023 Jul 15.
Article En | MEDLINE | ID: mdl-37508865

The development of information technology has had a significant impact on various areas of human activity, including medicine. It has led to the emergence of the phenomenon of Industry 4.0, which has in turn led to the development of the concept of Medicine 4.0. Medicine 4.0, or smart medicine, can be considered a structural association of such areas as AI-based medicine, telemedicine, and precision medicine. Each of these areas has its own characteristic data, along with specifics of their processing and analysis. Nevertheless, at present, all these types of data must be processed simultaneously in order to provide the most complete picture of the health of each individual patient. In this paper, after a brief analysis of the topic of medical data, a new classification method is proposed that allows the processing of the maximum number of data types. The specificity of this method is its use of a fuzzy classifier. The effectiveness of the method is confirmed by an analysis of classification results for various types of data across medical applications and health problems. As an illustration of the proposed method, a fuzzy decision tree is used as the fuzzy classifier. In terms of classification accuracy, the proposed method, based on a fuzzy classifier, outperforms crisp classifiers.
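The fuzzy-classifier idea can be sketched minimally with a single soft split in a two-leaf fuzzy decision tree: instead of a crisp threshold test, a sample descends both branches with membership degrees, and the leaf class scores are blended. The sigmoid membership function, the threshold and width values, and the leaf class distributions below are hypothetical placeholders, not the classifier from the article.

```python
import math

def soft_split(x, threshold, width):
    """Membership degree of x in the 'low' branch; the complement goes
    to the 'high' branch (a soft version of a crisp x < threshold test)."""
    return 1.0 / (1.0 + math.exp((x - threshold) / width))

def fuzzy_tree_predict(x):
    """Minimal two-leaf fuzzy decision tree over an illustrative
    feature (e.g. some lab measurement). Leaf class distributions are
    made-up placeholders."""
    mu_low = soft_split(x, threshold=5.0, width=1.0)
    leaf_low  = {"healthy": 0.9, "ill": 0.1}   # class scores at 'low' leaf
    leaf_high = {"healthy": 0.2, "ill": 0.8}   # class scores at 'high' leaf
    scores = {c: mu_low * leaf_low[c] + (1.0 - mu_low) * leaf_high[c]
              for c in leaf_low}
    return max(scores, key=scores.get), scores
```

Because both branches contribute in proportion to membership, samples near the threshold yield graded scores rather than an abrupt class flip, which is the property that lets the method mix heterogeneous data types more gracefully than a crisp tree.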

...