Recovering a dynamic system from observations of its functioning is a problem of current interest in the theory of control systems. A discrete dynamic system has been proposed as a behavior model of a gene network regulatory circuit: its coordinates correspond to the concentrations of substances, while special functions, which depend on the system state at the previous moment, account for their increase or decrease. Pseudo-polynomial recovery algorithms for discrete dynamic systems with additive and multiplicative functions have been obtained earlier. This article considers the generalized case of arbitrary threshold functions. Algorithms for recovering significant variables and adjusting the weights of threshold functions, with pseudo-polynomial testing complexity, are given. These algorithms allow one either to recover the system completely or to lower the threshold function dimension.
We consider the problem of estimating risky behavior intensity from respondents' answers about their behavior. Maximum likelihood estimation is used to derive the estimate, with the likelihood function describing the likelihood of a particular system of answers being realized. The likelihood function is derived for two situations: when data about several last episodes of the behavior are available, and when data about the single last episode together with the recorded intervals between consecutive episodes occurring within a fixed time interval are available.
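A minimal sketch of the likelihood construction, under the assumption (not stated in the abstract) that behavior episodes follow a homogeneous Poisson process with intensity lam, so that inter-episode intervals are exponential and the gap from the last episode to the end of the observation window is right-censored. The data values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sketch only: assumes episodes form a homogeneous Poisson process with
# intensity lam, so inter-episode intervals are Exp(lam).  The data are the
# intervals between consecutive episodes recorded within a fixed window;
# the time from the last episode to the end of the window is right-censored.
def neg_log_likelihood(lam, intervals, censored_tail):
    if lam <= 0:
        return np.inf
    ll = np.sum(np.log(lam) - lam * np.asarray(intervals))  # observed intervals
    ll += -lam * censored_tail                               # censored last gap
    return -ll

intervals = [2.1, 0.7, 1.5, 3.0]   # hypothetical inter-episode intervals
censored_tail = 0.9                # time since the last episode in the window

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 100.0),
                      args=(intervals, censored_tail), method="bounded")
print("MLE of intensity:", res.x)
```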
The basic requirements for airborne satellite navigation equipment based on the existing Russian and international regulations are reviewed. The functions of airborne equipment supporting aircraft operations in accordance with the modern performance-based navigation concept are described.
The importance of an efficient network resource allocation strategy has grown significantly with the rapid advancement of cellular network technology and the widespread use of mobile devices. Efficient resource allocation is crucial for enhancing user services and optimizing network performance. The primary objective is to optimize the power allocation so as to maximize the sum rate across all users in the network. In recent years, graph-based deep learning approaches have shown great promise in addressing the challenge of network resource allocation. Graph neural networks (GNNs) have particularly excelled in handling graph-structured data, benefiting from the inherent topological characteristics of mobile networks. However, many of these methodologies focus predominantly on node features during learning, overlooking or oversimplifying edge attributes, which are just as vital as nodes in network modeling. To address this limitation, we introduce a novel framework, the Heterogeneous Edge Feature Enhanced Graph Attention Network (HEGAT), which establishes a direct connection between the evolving network topology and the optimal power allocation strategy throughout the learning process. Extensive simulation results show that the proposed HEGAT approach achieves improved performance and significant generalization capability.
The possibility and expediency of forecasting in stock markets are analyzed using the methods and approaches of statistical mechanics. The apparatus of statistical mechanics is applied to analyze and forecast one of the most important market indicators: the distribution of its logarithmic returns. The Lotka-Volterra model, used in ecology to describe predator-prey systems, was taken as the initial model; it approximates market dynamics adequately. The article exploits its Hamiltonian property, which makes it possible to apply the apparatus of statistical mechanics. This apparatus (via the principle of maximum entropy) enables a probabilistic approach adapted to the conditions of stock market uncertainty. The canonical variables of the Hamiltonian are taken to be the logarithms of stock and bond prices, and the joint probability distribution of stock and bond prices is obtained as a Gibbs distribution. The Boltzmann factor entering the Gibbs distribution allows us to estimate the probability of occurrence of particular stock and bond prices and to obtain an analytical expression for the logarithmic return, which gives more accurate results than the widely used normal (Gaussian) distribution. In its characteristics, the resulting distribution resembles the Laplace distribution. Its main characteristics are calculated: the mean, variance, skewness, and kurtosis. The mathematical results are presented graphically. An explanation is given of the cause-and-effect mechanism that drives changes in market returns. For this, Theodore Modis's idea of competition between stocks and bonds for the attention and money of investors is developed (by analogy with the turnover of biomass in predator-prey models in biology). The results of the study are of interest to investors and to theorists and practitioners of the stock market. They allow thoughtful and balanced investment decisions thanks to a more realistic idea of the expected return and a more adequate assessment of investment risk.
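An illustrative sketch of the claim that a Laplace-type law describes log returns better than a Gaussian: it fits both laws to hypothetical return data and reports sample skewness and excess kurtosis. The data, parameters, and the Laplace stand-in itself are assumptions; the paper's own Gibbs-type distribution is not reproduced here.

```python
import numpy as np
from scipy import stats

# Illustrative sketch only: compare Laplace and normal fits on hypothetical
# daily log returns (a stand-in for the Gibbs-type distribution of the paper).
rng = np.random.default_rng(0)
log_returns = rng.laplace(loc=0.0005, scale=0.01, size=2000)  # stand-in data

mu, b = stats.laplace.fit(log_returns)        # location and scale of Laplace fit
m, s = stats.norm.fit(log_returns)            # Gaussian fit for comparison

print("Laplace fit: loc=%.5f scale=%.5f" % (mu, b))
print("Normal  fit: mean=%.5f std=%.5f" % (m, s))
print("sample skewness=%.3f excess kurtosis=%.3f"
      % (stats.skew(log_returns), stats.kurtosis(log_returns)))
# For a Laplace law the excess kurtosis is 3, i.e. heavier tails than Gaussian.
```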
A new approach is presented for applying methods of the theory of semi-Markov processes to the applied problem of assessing the functional stability of elements of an information infrastructure operating under multiple computer attacks. The task of assessing functional stability is reduced to finding the survivability function of the element under study and determining its extreme values. The relevance of the study is substantiated by the observation that quantitative methods for studying the stability of technical systems based on reliability theory cannot always be used to assess survivability. The concepts of "stability" and "computer attack" are clarified. Verbal and formal statements of the research tasks are formulated. The novelty of the results lies in applying well-known methods to a practically significant problem in a new formulation that accounts for the limited resource allocated to maintaining the survivability of the element under study, with arbitrary distribution laws for the random times of computer attack implementation and for the recovery times of the functional element. Recommendations on forming the initial data, the content of the enlarged modeling stages, and a test case demonstrating the performance of the model are given. The results of the test simulation are presented as graphs of the survivability function. The resulting apparatus can be used in practice to construct the survivability function for up to three computer attacks, as well as a tool for evaluating the reliability of analogous statistical models. The limitation is explained by the progressive growth of the dimension of the analytical model and the decreasing possibility of its meaningful interpretation.
Ensuring the robustness of digital audio watermarking under interference, various transformations, and possible attacks is an urgent problem. One of the most widely used and fairly robust marking methods is the patchwork method. Its robustness is ensured by the use of spreading bipolar numerical sequences when forming and embedding a watermark into a digital audio signal and by correlation detection when detecting and extracting the watermark. An analysis of the patchwork method showed that the absolute values of the ratio of the maximum of the autocorrelation function (ACF) to its minimum for the spreading bipolar sequences and extended marker sequences used in traditional digital watermarking approach 2 with high accuracy. This made it possible to formulate criteria for searching for special spreading bipolar sequences with improved correlation properties and greater robustness. The article develops a mathematical apparatus for searching for and constructing limit spreading bipolar sequences used in robust digital audio watermarking by the patchwork method. Limit bipolar sequences are defined as sequences whose autocorrelation functions have the maximum possible absolute ratio of maximum to minimum. Theorems and corollaries are formulated and proved: on the existence of an upper bound on the minimum values of the autocorrelation functions of limit bipolar sequences, and on the values of the first and second lobes of the ACF. On this basis, a rigorous mathematical definition of limit bipolar sequences is given. A method for finding the complete set of limit bipolar sequences based on rational search and a method for constructing limit bipolar sequences of arbitrary length using generating functions are developed. Computer simulation results are presented for the absolute ratio of the maximum to the minimum of the autocorrelation and cross-correlation functions of the studied bipolar sequences for blind reception. It is shown that the proposed limit bipolar sequences have better correlation properties than the traditionally used bipolar sequences and are more robust.
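A brute-force sketch of the selection criterion described above: for small lengths it enumerates bipolar sequences and ranks them by the absolute ratio of the maximum to the minimum of the aperiodic ACF. This is not the paper's own search or construction method, only an illustration of the criterion.

```python
import numpy as np
from itertools import product

# Brute-force sketch for small lengths only: evaluates bipolar (+1/-1) sequences
# by |max/min| of their aperiodic autocorrelation function (ACF), the kind of
# criterion used to single out "limit" bipolar sequences.
def acf(seq):
    s = np.asarray(seq, dtype=float)
    r = np.correlate(s, s, mode="full")
    return r[len(s) - 1:]               # lags 0..n-1; lag 0 equals n

def max_to_min_ratio(seq):
    r = acf(seq)
    if r.min() >= 0:                    # skip degenerate cases (e.g. all ones)
        return -np.inf
    return abs(r.max() / r.min())

n = 10
best = max(product([-1, 1], repeat=n), key=max_to_min_ratio)
print("best length-%d sequence:" % n, best)
print("ACF:", acf(best), "|max/min| =", max_to_min_ratio(best))
```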
An attention-based random survival forest (Att-RSF) is presented in the paper. The first main idea behind this model is to adapt Nadaraya-Watson kernel regression to the random survival forest so that the regression weights or kernels can be regarded as trainable attention weights, under the important condition that the predictions of the random survival forest are represented as functions, for example, the survival function and the cumulative hazard function. Each trainable weight assigned to a tree and a training or testing example is defined by two factors: the predictive ability of the corresponding tree and the peculiarity of the example falling into a leaf of that tree. The second main idea behind Att-RSF is to apply Huber's contamination model to represent the attention weights as a linear function of the trainable attention parameters. Harrell's C-index (concordance index), which measures the prediction quality of the random survival forest, is used to form the loss function for training the attention weights. The C-index together with the contamination model leads to a standard quadratic optimization problem for computing the weights, for which many simple solution algorithms exist. Numerical experiments with real survival datasets illustrate Att-RSF.
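A minimal sketch of the aggregation step only: per-tree survival functions are combined with attention weights written through Huber's contamination model. The kernel weights, trainable parameters, and survival curves below are hypothetical, and the quadratic C-index training of the parameters is omitted.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): aggregate per-tree survival
# functions S_k(t | x) with Nadaraya-Watson-style attention weights expressed
# through Huber's contamination model,
#     w_k(x) = (1 - eps) * a_k(x) + eps * v_k,
# where a_k(x) are fixed kernel weights and v = (v_1, ..., v_K) are trainable
# parameters on the probability simplex (trained in the paper via the C-index).
def attention_survival(tree_survival, kernel_weights, v, eps=0.3):
    a = np.asarray(kernel_weights, dtype=float)
    a = a / a.sum()                        # normalize kernel weights
    w = (1.0 - eps) * a + eps * np.asarray(v)
    return w @ np.asarray(tree_survival)   # weighted sum of survival curves

# Hypothetical example: 3 trees, survival functions on a common time grid.
S_trees = [[1.0, 0.9, 0.7, 0.4],
           [1.0, 0.8, 0.6, 0.5],
           [1.0, 0.95, 0.8, 0.3]]
a_x = [0.5, 0.3, 0.2]                      # kernel weights for an example x
v = [1/3, 1/3, 1/3]                        # untrained (uniform) parameters
print(attention_survival(S_trees, a_x, v))
```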
Active deployment of machine learning systems poses the task of protecting them against various types of attacks that threaten the confidentiality, integrity, and availability of both the processed data and the trained models. One promising way to provide such protection is to develop privacy-preserving machine learning systems that use homomorphic encryption schemes to protect data and models. However, such schemes can only evaluate polynomial functions, which means that polynomial approximations must be constructed for the nonlinear functions used in neural models. The goal of this paper is to construct precise approximations of several widely used neural network activation functions while limiting the degree of the approximation polynomials, and to evaluate the impact of the approximation precision on the output of the whole neural network. In contrast to previous publications, we study and compare different ways of constructing polynomial approximations, introduce precision metrics, and present exact formulas for the approximation polynomials as well as exact values of the corresponding precisions. We compare our results with previously published ones. Finally, for a simple convolutional network we experimentally evaluate the impact of the approximation precision on the deviation of the network's output neuron values from the original ones. Our results show that the best approximation for ReLU is obtained with the numeric method, and for the sigmoid and hyperbolic tangent with Chebyshev polynomials. At the same time, among the three functions the best approximation is obtained for ReLU. The results can be used to construct polynomial approximations of activation functions in privacy-preserving machine learning systems.
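A sketch of one of the compared construction methods: a Chebyshev interpolation of the sigmoid on a bounded interval, with the maximum absolute deviation as a simple precision metric. The interval and degree are illustrative, not the paper's exact setup or its reported formulas.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Sketch: homomorphic encryption schemes evaluate only polynomials, so the
# sigmoid is replaced by a bounded-degree Chebyshev interpolant on [-5, 5].
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

deg, domain = 7, [-5.0, 5.0]
p = Chebyshev.interpolate(sigmoid, deg, domain=domain)

xs = np.linspace(domain[0], domain[1], 2001)
max_err = np.max(np.abs(sigmoid(xs) - p(xs)))   # a simple precision metric
print("degree-%d Chebyshev approximation, max |error| on %s: %.4e"
      % (deg, domain, max_err))
print("Chebyshev coefficients:", p.coef)
```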
This paper focuses on capturing the meaning of Natural Language Understanding (NLU) text features to detect duplicate reports in an unsupervised manner. The NLU features are compared with lexical approaches to determine the most suitable classification technique. A transfer-learning approach is utilized to train feature extraction on the Semantic Textual Similarity (STS) task. All features are evaluated on two types of datasets: Bosch bug reports and Wikipedia article reports. This study aims to structure recent research efforts by comparing NLU concepts for representing text semantics and applying them to information retrieval (IR). The main contribution of this paper is a comparative study of semantic similarity measurements. The experimental results report the Term Frequency-Inverse Document Frequency (TF-IDF) feature results on both datasets with a reasonable vocabulary size and indicate that a Bidirectional Long Short-Term Memory (BiLSTM) network can learn sentence structure to improve the classification.
Spectral analysis of signals is used as one of the main methods for studying systems and objects of various physical natures. Under conditions of a priori statistical uncertainty, the signals are subject to random changes and noise. Spectral analysis of such signals involves estimating the power spectral density (PSD). One of the classical methods for estimating the PSD is the periodogram method. The algorithms implementing this method in digital form are based on the discrete Fourier transform. Digital multiplication operations are performed in bulk in these algorithms, and the use of window functions further increases their number. Multiplications are among the most time-consuming operations; they are the dominant factor determining the computational capabilities of an algorithm and define its multiplicative complexity. The paper addresses the problem of reducing the multiplicative complexity of computing the periodogram estimate of the PSD with window functions. The problem is solved by using binary-sign stochastic quantization to convert the signal into digital form. This two-level signal quantization is carried out without systematic error. Based on the theory of discrete-event modeling, the result of binary-sign stochastic quantization in time is considered as a chronological sequence of significant events determined by the changes in its values. The use of a discrete-event model for the result of binary-sign stochastic quantization enabled analytical evaluation of the integration operations in the transition from the analog form of the periodogram PSD estimate to the mathematical procedures for computing it in discrete form. These procedures became the basis for developing a digital algorithm whose main computational operations are additions and subtractions. Reducing the number of multiplications decreases the overall computational complexity of the PSD estimation. Numerical experiments studying the operation of the algorithm were carried out using simulation of the discrete-event procedure of binary-sign stochastic quantization. The results of computing the PSD estimates are presented for a number of the best-known window functions. The results indicate that the developed algorithm computes periodogram PSD estimates with high accuracy and frequency resolution in the presence of additive white noise at a low signal-to-noise ratio. The algorithm is implemented in practice as a functionally independent software module, which can be used as part of complex metrologically significant software for operational analysis of the frequency composition of complex signals.
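A reference sketch of the conventional windowed periodogram PSD estimate that the described algorithm reproduces without multiplications; the signal, window, and parameters below are illustrative, and the binary-sign stochastic quantization itself is not implemented here.

```python
import numpy as np

# Conventional windowed periodogram estimate of the PSD (reference sketch).
# The paper's algorithm replaces the multiplications below by additions and
# subtractions via binary-sign stochastic quantization of the input signal.
fs = 1000.0                               # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.random.randn(t.size)  # noisy tone

w = np.hanning(x.size)                    # window function
xw = x * w
X = np.fft.rfft(xw)
psd = (np.abs(X) ** 2) / (fs * np.sum(w ** 2))   # periodogram PSD estimate
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

print("peak at %.1f Hz" % freqs[np.argmax(psd)])
```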
The recommendations on the application of methods of multidimensional estimation (MDE) of objects proposed in the paper by Velasquez M. and Hester P.T., "An Analysis of Multi-Criteria Decision Making Methods", are analyzed. The weak substantiation of these recommendations, resulting from a superficial systematization of MDE methods, is noted: the recommendations are focused not on classes of MDE methods but on various areas of activity, whereas each area of activity involves a wide range of tasks for evaluating objects of various natures. In this regard, the need for a more thorough systematization of MDE methods is recognized.
Taking into account the diversity of MDE methods, it was decided to limit ourselves to the systematization of methods that use evaluation functions (EF), and on this basis to offer general recommendations for their application.
Reviewing MDE methods from a unified position required clarifying the terminology used in them. On the basis of a formal model of the criterion, the relationship between the concepts of "preference", "criterion", and "indicator" is established. To distinguish the methods that use evaluation functions, the concept of the target value of an indicator is introduced. Depending on its location on the indicator scale, the concepts of ideal and real goals are introduced, and the criteria corresponding to these goals are divided into target and restrictive ones. Using the proposed terminology, a review of the best-known MDE methods is carried out, and a group of methods using evaluation functions is distinguished.
Variants of evaluation functions created on the basis of the criterion and the postulates of value and utility theory are considered. Based on the similarity of their domains of definition and meanings, the relationship between the EFs is established. With respect to the target value of the indicator, they are divided into functions of achieving the goal and functions of deviation from the goal, and their mutual complementarity is shown. A group of functions of deviation from the goal is highlighted that allows objects to be ordered separately by penalties and by rewards with respect to achieving a real goal. The concept of a norm is introduced for the correspondence relation. Using the example of medical test results, the practical application of deviation-from-norm functions is shown with both the minimax and the weighted-average generalizing function used to establish a rating on a set of objects.
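An illustrative sketch of the medical-test example: per-indicator deviations from a norm interval are aggregated by the minimax and weighted-average generalizing functions to rank objects. The norm ranges, weights, deviation formula, and patient data are all hypothetical.

```python
import numpy as np

# Sketch: each indicator has a norm (target) interval; deviations from the norm
# are aggregated either by the minimax or by a weighted-average function.
norms = {"hemoglobin": (120, 160), "glucose": (3.9, 5.5), "cholesterol": (2.0, 5.2)}
weights = np.array([0.4, 0.3, 0.3])     # hypothetical indicator weights

def deviation_from_norm(value, lo, hi):
    # zero inside the norm interval, relative distance to the violated bound outside
    if lo <= value <= hi:
        return 0.0
    bound = lo if value < lo else hi
    return abs(value - bound) / abs(bound)

patients = {"A": [118, 5.0, 6.0], "B": [130, 7.1, 5.0], "C": [125, 5.4, 5.1]}
for name, values in patients.items():
    d = np.array([deviation_from_norm(v, *norms[k]) for v, k in zip(values, norms)])
    print(name, "minimax =", round(d.max(), 3),
          " weighted average =", round(float(weights @ d), 3))
```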
The similarities and differences of the EFs revealed in the course of the study form the basis for the classification of the MDE methods that use them. The difference in EFs in terms of the complexity of creation is reflected in the proposed methodology for their application.
The paper considers the possibility of creating a speech-like interference for the means of vibro-acoustic protection of speech information based on tables of syllables and words of the Russian language. The choice of research directions and experimental conditions is substantiated: synthesis of sound files by random sampling of speech elements from a database, research of spectra of synthesized noise, algorithm for creating interference of the “speech choir” type, study of autocorrelation functions of synthesized speech-like interference, as well as their probability distribution density. It is shown that the spectral and statistical characteristics of the synthesized speech-like interference type "speech choir" of five voices are close to similar characteristics of real speech signals. At the same time, the speech choir was formed by averaging the instantaneous values of temporary realizations of sound files. It is shown that the spectral power density of the speech-like interference of the “speech choir” type practically is not changed with the number of averaged “voices” starting from five. The probability density distribution of the speech-like interference value with an increase in the number of voices in the “speech choir” approaches the normal law (unlike a real speech signal whose probability density is close to the Laplace distribution). Evaluation of the autocorrelation function gave a correlation interval of several milliseconds. The articulation tests of speech intelligibility using synthesized speech-like interference with different signal-to-noise ratios showed the possibility of reducing the integral noise level by 12-15 dB compared to noise-like interference. The dependencies of verbal intelligibility on the integral signal-to-noise ratio are constructed on the basis of polynomial and piecewise linear approximations. A preliminary assessment of a possible impact of speech-like interference on the psycho-emotional state of a person was performed. The direction of further research on increasing the efficiency of algorithms for generating speech-like interference is discussed.
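A small sketch of the "speech choir" formation step: the interference is the sample average of the instantaneous values of several independent speech-like realizations, and its distribution approaches the normal law as the number of voices grows. Random Laplace-distributed signals stand in for the synthesized voices; the paper assembles them from a database of Russian syllables and words.

```python
import numpy as np

# Sketch: average several independent speech-like realizations and watch the
# excess kurtosis tend to 0 (normal law) as the number of "voices" grows.
fs = 8000
rng = np.random.default_rng(3)
n_samples = int(fs * 2.0)

def synth_voice():
    # stand-in for one synthesized speech-like realization (Laplace-like amplitudes)
    return rng.laplace(scale=0.1, size=n_samples)

for n_voices in (1, 5, 10):
    choir = np.mean([synth_voice() for _ in range(n_voices)], axis=0)
    x = choir - choir.mean()
    kurt = np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0   # excess kurtosis
    print("voices:", n_voices, " excess kurtosis: %.2f" % kurt)
```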
A phase enlargement of semi-Markov systems that does not require determining the stationary distribution of the embedded Markov chain is considered. Phase enlargement is an equivalent replacement of a semi-Markov system with a general phase state space by a system with a discrete state space. Finding the stationary distribution of the embedded Markov chain for a system with a continuous phase state space is one of the most time-consuming and not always solvable stages, since in some cases it leads to solving integral equations whose kernels contain the sum and difference of variables.
To date, only particular solutions are known for such equations, and no general solution exists. To bypass this stage, a lemma on the form of the distribution function of the difference of two random variables, under the condition that the first variable is greater than the subtracted one, is used.
It is shown that, under the indicated condition, the form of the distribution function of the difference of two random variables depends on a single constant, which is determined by numerically solving the equation presented in the lemma.
Based on the lemma, a theorem on the difference of a random variable and a compound renewal flow is established. The use of this method is demonstrated by the example of modeling a technical system consisting of two series-connected process cells, provided that both cells cannot fail simultaneously. The distribution functions of the system residence times in the enlarged states, as well as in the subsets of working and non-working states, are determined. The simulation results obtained by the considered method and by the classical method proposed by V. Korolyuk are compared and show complete coincidence of the sought quantities.
The problem of reducing a linear time-invariant dynamic system is considered as a problem of approximating its initial rational transfer function with a similar function of lower order; both the initial and the reduced transfer functions are assumed to be rational. The approximation error is defined as the integral root-mean-square deviation of the transient responses of the initial and reduced transfer functions in the time domain. Two main types of approximation problems are formulated: a) the traditional problem of minimizing the approximation error for a given order of the reduced model; b) the proposed problem of minimizing the order of the model for a given tolerance on the approximation error.
Algorithms for solving the approximation problems based on the Gauss-Newton iterative process are developed. At each iteration, the current deviation of the transient responses is linearized with respect to the coefficients of the denominator of the reduced transfer function. The linearized deviations are used to obtain new values of the transfer function coefficients by the least-squares method in a functional space, using Gram-Schmidt orthogonalization. The general form of the expressions representing the linearized deviations of the transient responses is obtained.
To solve the problem of minimizing the order of the transfer function within the least-squares algorithm, the Gram-Schmidt process is also used; the process terminates when the given error tolerance is reached. It is shown that the sequence of process steps corresponding to alternating the coefficients of the numerator and denominator polynomials of the transfer function yields the minimum order of the transfer function.
The paper presents an extension of the developed algorithms to the case of a vector transfer function with a common denominator. An algorithm is presented in which the approximation error is defined as a geometric sum of the scalar errors. The use of the minimax form of the error estimate and the possibility of extending the proposed approach to the reduction of an irrational initial transfer function are discussed.
Experimental code implementing the proposed algorithms is developed, and the results of numerical evaluations of test examples of various types are obtained.
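A sketch of the approximation-error functional only: the root-mean-square integral deviation of the transient (step) responses of an initial and a reduced transfer function on a finite time grid. The transfer functions are illustrative, and the Gauss-Newton / Gram-Schmidt machinery of the paper is not reproduced.

```python
import numpy as np
from scipy import signal

# Error functional sketch: RMS integral deviation of step responses of the
# initial and reduced transfer functions (illustrative models only).
def step_error(num0, den0, num_r, den_r, t_end=20.0, n=2000):
    t = np.linspace(0.0, t_end, n)
    _, y0 = signal.step(signal.TransferFunction(num0, den0), T=t)
    _, yr = signal.step(signal.TransferFunction(num_r, den_r), T=t)
    dt = t[1] - t[0]
    return np.sqrt(np.sum((y0 - yr) ** 2) * dt)   # simple Riemann-sum quadrature

# third-order "initial" model vs a first-order candidate reduced model
num0, den0 = [2.0], [1.0, 3.0, 3.0, 2.0]
num_r, den_r = [1.0], [1.0, 1.0]
print("integral deviation of step responses:", step_error(num0, den0, num_r, den_r))
```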
The paper considers the problem of planning the movement of a mobile robot in a conflict environment, which is characterized by the presence of areas that impede the robot from completing its tasks. The main results on path planning in conflict environments are reviewed, with special attention to approaches based on risk functions and probabilistic methods. Conflict areas formed by point sources that create, in the general case, asymmetric fields of a continuous type are considered. A probabilistic description of such fields is proposed, examples being the probability of detection or defeat of the mobile robot. As a field description, the concept of the characteristic probability function of a source is introduced, which allows the robot's movement in the conflict environment to be optimized. The connection between the characteristic probability function of a source and the risk function, which can be used to formulate and solve simplified optimization problems, is demonstrated. An algorithm for mobile robot path planning that ensures a given probability of passing through the conflict environment is developed, and an upper bound for the probability of passing the given environment under fixed boundary conditions is obtained. A procedure for optimizing the robot path in the conflict environment is proposed; it achieves higher computational efficiency by replacing the search for an exact optimal solution with a suboptimal one. The proposed algorithms are implemented as a software simulator for a group of ground robots and are studied by numerical simulation.
In this paper we study one possible variant of smooth approximation of probability criteria in stochastic programming problems. The research applies to optimization problems for the probability function and the quantile function of a loss functional depending on the control vector and a one-dimensional absolutely continuous random variable. The main idea of the approximation is to replace the discontinuous Heaviside function in the integral representation of the probability function with a smooth function possessing such properties as continuity, smoothness, and easily computable derivatives. An example of such a function is the distribution function of a random variable following the logistic law with zero mean and finite variance, which is a sigmoid. The quantity inversely proportional to the square root of the variance is a parameter that controls the proximity of the original function and its approximation. This replacement yields a smooth approximation of the probability function whose derivatives with respect to the control vector and other parameters of the problem can easily be found. The article proves the convergence of the probability function approximation obtained by replacing the Heaviside function with the sigmoid to the original probability function, and an error estimate of this approximation is obtained. Next, approximate expressions for the derivatives of the probability function with respect to the control vector and the parameter of the function are derived, and their convergence to the true derivatives is proved under a number of conditions on the loss functional. Using known relations between the derivatives of probability and quantile functions, approximate expressions for the derivatives of the quantile function with respect to the control vector and the probability level are obtained. Examples demonstrate the applicability of the proposed estimates to stochastic programming problems with probability-function and quantile-function criteria, including the case of a multidimensional random variable.
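A minimal sketch of the smoothing idea with an illustrative loss functional: the Heaviside step inside the probability function is replaced by a logistic sigmoid, and both the smoothed probability and its derivative with respect to the control are estimated by Monte Carlo. The loss, parameter values, and sample are assumptions for illustration only.

```python
import numpy as np
from scipy.special import expit   # numerically stable logistic sigmoid

# Sketch: P(u) = P{ loss(u, X) <= phi } = E[ H(phi - loss) ] is approximated by
# replacing the Heaviside step H with sigma_k(z) = 1 / (1 + exp(-k z)); a larger
# k (smaller logistic variance) gives a closer but less smooth approximation.
rng = np.random.default_rng(1)
X = rng.normal(size=100_000)                  # one-dimensional random variable

loss = lambda u, x: (x - u) ** 2              # hypothetical loss functional
sigma = lambda z, k: expit(k * z)

def prob_smooth(u, phi, k=20.0):
    return np.mean(sigma(phi - loss(u, X), k))

def dprob_du(u, phi, k=20.0):
    s = sigma(phi - loss(u, X), k)
    # d/du sigma_k(phi - loss) = k*s*(1-s) * (-d loss/du), with d loss/du = -2(X - u)
    return np.mean(k * s * (1.0 - s) * 2.0 * (X - u))

u, phi = 0.3, 1.0
print("smooth P approx:", prob_smooth(u, phi))
print("exact  P       :", np.mean(loss(u, X) <= phi))
print("d/du approx    :", dprob_du(u, phi))
```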
The development of a methodical and mathematical apparatus for forming a set of diagnostic parameters of complex technical systems, based on processing the trajectories of the system's output processes using the theory of functional spaces, is considered in this paper. The trajectories of the output variables are treated as Lebesgue-measurable functions. This ensures a unified approach to obtaining diagnostic parameters regardless of the physical nature of these variables and of their jump-like changes (finite discontinuities of the trajectories), and it adequately accounts for the complexity of construction and the variety of physical principles and operating algorithms of such systems. A structure of factor spaces of measurable, square Lebesgue-integrable functions (L2 spaces) is defined on the sets of trajectories. The properties of these spaces make it possible to decompose the trajectories along a countable set of mutually orthogonal directions and to represent them as a convergent series. The choice of the set of diagnostic parameters as an ordered sequence of coefficients of the decomposition of trajectories into partial sums of Fourier series is substantiated. A procedure for forming the set of diagnostic parameters, improved in comparison with the initial variants, is presented for the case when a trajectory is decomposed into a partial sum of a Fourier series in an orthonormal Legendre basis. A method for numerically determining the cardinality of such a set is proposed.
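A sketch of forming diagnostic parameters as the first few Fourier coefficients of a sampled output trajectory in an orthonormal Legendre basis on a normalized interval, as described above. The trajectory, interval normalization, and number of parameters are hypothetical.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: diagnostic parameters = leading coefficients of the expansion of a
# sampled trajectory in the orthonormal Legendre basis on [-1, 1].
def legendre_diagnostic_params(y, n_params):
    t = np.linspace(-1.0, 1.0, y.size)            # normalized time
    coeffs = []
    for k in range(n_params):
        Pk = legendre.Legendre.basis(k)(t)        # Legendre polynomial P_k
        Pk *= np.sqrt((2 * k + 1) / 2.0)          # orthonormalize on [-1, 1]
        # Fourier coefficient <y, P_k> via a simple Riemann sum
        coeffs.append(np.sum(y * Pk) * (t[1] - t[0]))
    return np.array(coeffs)

t = np.linspace(-1.0, 1.0, 1001)
trajectory = np.exp(-2 * (t + 1)) + 0.3 * np.sign(t)   # trajectory with a jump
print(legendre_diagnostic_params(trajectory, n_params=6))
```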
New aspects of obtaining diagnostic information from the vibration processes of the system are revealed. A structure of spaces of continuous, square Riemann-integrable functions is defined on the sets of vibro-trajectories. Since these are subspaces of the aforementioned factor spaces, the general methodological basis for transforming vibro-trajectories remains unchanged, while the algorithmic component of choosing diagnostic parameters becomes more specific and observable. This is demonstrated by implementing a numerical procedure for decomposing vibro-trajectories in the orthogonal trigonometric basis contained in these spaces. The results of experimental studies of the vibration process are processed, and on this basis a subset of diagnostic parameters at one of the control points of the system is determined.
The materials of the article contribute to the theory of obtaining information about the technical condition of complex systems. The applied value of the proposed developments lies in the possibility of using them for the synthesis of algorithmic support for automated diagnostic tools.
Radiation fields of a uniform slab of arbitrary optical thickness are modeled for the case of weak absorption and strongly elongated phase functions. A simple modification of the classical Ambarzumian-Chandrasekhar invariance principle is used to derive new non-linear integral equations for the azimuthal Fourier harmonics of the generalized unified photometric function and of the photometric invariants. These quantities relate the intensities of the upgoing and downgoing radiation fields in a simple linear manner at arbitrary optical levels in mirror viewing directions, including fixed azimuthal angles and solar zenith distances. Parametrizations of the obtained non-linear integral equations demonstrate that, in the absence of a reflecting underlying surface at the lowest level of the slab, the angular-spatial properties of the unified photometric function and the corresponding photometric invariants can be expressed, taking into account the strong elongation of the phase functions near small scattering angles and the weak absorption of the slab, in terms of the intensities of the primary scattered radiation field and adaptive fitting multipliers. These functional adaptive corrections are caused by multiple light scattering within the uniform slab and have a clear physical interpretation. The use of the mirror reflection (symmetry) principle developed by the author and of the concept of the unified photometric function makes it possible to account for the above-mentioned peculiarities of real phase functions in the framework of numerical modeling of the photometric invariants. Analysis of the corresponding radiative modeling results shows the dominant influence of primary light scattering in the formation of anisotropically scattered radiation fields of a uniform slab of arbitrary optical thickness in the case of weak radiation absorption and strongly elongated phase functions.
Based on the mirror reflection principle and solutions of modified linear singular integral equations, numerical modeling of the unified exit function of the outgoing radiation field and of the photometric invariants of the brightness coefficients is carried out for a uniform slab of finite optical thickness. The efficacy of applying the angular discretization method to problems of numerical modeling of outgoing radiation fields in the "atmosphere - underlying surface" system is demonstrated. The new approach allows the basic results to be generalized to the particular case of a semi-infinite uniform slab. In this connection, the main mathematical aspects and computational peculiarities of the numerical realization of the angular discretization method are considered. Owing to the linearity of the underlying integral equations, the analysis can be generalized to scalar and polarized internal radiation fields, taking into account multiple anisotropic scattering of photons and their reflection from an arbitrary horizontally uniform underlying surface.
A considerable number of failures in interval train traffic control systems are associated with disturbing actions over a wide range of variation of the single informative feature characterizing the state of the rail lines. In the paper, it is proposed to determine the state of the controlled object by the principles of pattern recognition with multivariate informative features. The voltages and currents at the input and output of a two-port network are suggested as the features, and Hermite orthogonal polynomials as the polynomial of the decision function, which allows the depth of recognition to be increased and relative invariance to disturbing actions to be ensured by raising the order and dimension. In recognizing rail line states, the relative error of calculating class boundaries by the decision functions is used as the quality criterion.
The workability of the proposed method is demonstrated by the results of recognizing rail line states with a "trained" decision function.
The problem of estimating the vulnerability of confidential speech information is currently topical. However, when means of acoustic protection are used, i.e., under conditions of strong noise, the existing instrumental and computational methods give greater accuracy compared with the extremely labor-intensive articulation methods.
In the paper we study a method of estimating the security of voice data based on the Pearson correlation coefficient. This coefficient has poor sensitivity to the spectral properties of acoustic signals. Therefore, the author suggests an approach to defining the security indicator of voice data based on the mathematical apparatus of the coherence function of the source and noisy signals.
We propose to split the entire speech frequency range of the coherence function into separate octaves, to calculate the expectation of the coherence function components within each octave, and, on the basis of a convolution function, to obtain an expression for calculating the speech vulnerability index.
The proposed algorithm for determining the vulnerability index of voice data allows improving the assessment accuracy.
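A sketch of the octave-band processing step described above: the magnitude-squared coherence between the source and the noisy (protected) signal is averaged within standard octave bands of the speech range. The final convolution into the vulnerability index follows the paper's formula, which is not reproduced; a plain mean is used here as a placeholder, and the signals are stand-ins.

```python
import numpy as np
from scipy.signal import coherence

# Sketch: per-octave averaging of the coherence between source and noisy signal.
fs = 16000
rng = np.random.default_rng(2)
t = np.arange(0, 2.0, 1.0 / fs)
source = rng.standard_normal(t.size)                  # stand-in for speech
noisy = 0.4 * source + rng.standard_normal(t.size)    # source plus masking noise

f, Cxy = coherence(source, noisy, fs=fs, nperseg=1024)

octave_edges = [(125, 250), (250, 500), (500, 1000),
                (1000, 2000), (2000, 4000), (4000, 8000)]  # Hz
band_means = [Cxy[(f >= lo) & (f < hi)].mean() for lo, hi in octave_edges]

for (lo, hi), m in zip(octave_edges, band_means):
    print("%5d-%5d Hz: mean coherence %.3f" % (lo, hi, m))
print("placeholder vulnerability index:", np.mean(band_means))
```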
In this article we consider an approach to representing probability distributions as a two-level composition of an integral kernel and a phase function, which generalizes the concept of the density of a random parameter's distribution. The possibility of a hyper-delta approximation of the phase function and its interrelation with the formation of phase-type distributions are shown. A method for forming approximating distributions from an arbitrary phase function by the method of derivatives is offered.
The article presents the results of an experiment on processing the electromyographic (EMG) signal of facial muscles using a radial basis function neural network (NN). We studied the efficiency of using nine EMG features, computed as functions of time, as inputs for training the NN. The best result was obtained for the Maximum Peak Value criterion and the worst for the Mean Value criterion. We propose a gesture recognition algorithm; the resulting algorithm and the neural network based on it can be used in building a human-machine interface.
This article discusses approaches to recognizing the parameters of threshold k-valued functions, which can be used in building information processing and security units. The main focus is on proving that a k-valued function belongs to the threshold class. To solve this problem, it is proposed to use the input coefficients of expansion and increment, with the help of which the coefficients of the linear forms of the k-valued threshold function are procedurally approximated. Along with the proposed analytical approach, the article discusses an algorithmic method based on reducing the problem of finding a threshold representation of k-valued functions to a system of linear inequalities, which is solved by the ellipsoid method as modified by Khachiyan. A comparative analysis of the proposed methods is carried out on the basis of experiments.
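An illustrative special case (k = 2) of the reduction mentioned above: deciding whether a Boolean function given by its truth table is a threshold function reduces to the feasibility of a system of linear inequalities in the weights and the threshold. The paper treats general k-valued functions and solves the system with Khachiyan's ellipsoid method; ordinary linear programming is used here instead as a stand-in.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Feasibility test: is there (w, theta) with  w.x - theta >= 1  where f = 1
#                                             w.x - theta <= -1 where f = 0 ?
def is_threshold(f_values, n):
    A_ub, b_ub = [], []
    for x, fx in zip(product([0, 1], repeat=n), f_values):
        row = list(x) + [-1.0]                 # coefficients of (w, theta)
        if fx == 1:
            A_ub.append([-v for v in row]); b_ub.append(-1.0)
        else:
            A_ub.append(row); b_ub.append(-1.0)
    res = linprog(c=np.zeros(n + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    return res.success, (res.x if res.success else None)

# majority of three variables is a threshold function; XOR is not
maj = [1 if sum(x) >= 2 else 0 for x in product([0, 1], repeat=3)]
xor = [x[0] ^ x[1] ^ x[2] for x in product([0, 1], repeat=3)]
print("majority:", is_threshold(maj, 3))
print("xor     :", is_threshold(xor, 3))
```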
The axioms of the theory of multi-criteria selection on a finite set of alternatives are formulated. They distinguish this theory from the general theory of decision making. The theory unites all known methods of multi-criteria selection into a single system and forms the basis of the textbook "The Theory of Administrative Decision Making".
The article describes a method of protecting documentary file formats from unauthorized access based on indistinguishability obfuscation of program code. The application of indistinguishability obfuscation to the problem of protection from unauthorized access is substantiated. A mathematical model of indistinguishability obfuscation of program code, underlying the method of protecting documentary file formats from unauthorized access, is proposed.
The article considers the main ways of using intelligent techniques and the algorithms synthesized on their basis, and examines the presentation of network monitoring data for IT security risk management of secure multiservice networks. A mathematical model of intelligent data presentation is developed and examined for IT security risk investigation and assessment.
The functions used by different optimization methods are interpreted from the expected utility viewpoint. This makes it possible to distinguish two groups of methods: criterion choice and functional choice. The first group of methods sets preferences on the values of the criteria, and the second on the values of the functions representing the preferences of the decision maker on the attribute scales. Such an interpretation of the functions, independent of how they are created, allows methods of multi-criteria optimization and multi-dimensional utility to be considered from a unified position. The analytic hierarchy process uses a priority function calculated from the matrix of pairwise comparisons, so it also belongs to the group of functional choice methods. The resulting systematization allows the methods to be compared in terms of quality and their effectiveness in solving tasks to be evaluated.
To decide on the degree of membership of a test object in a class of identified conditions, its known characteristics must be aggregated; as a result, a diverse array of parameters can be reduced to a small number of generic classes that are functionally associated with the source data. This problem can be solved using fuzzy classification algorithms.
The problem of the influence of developed atmospheric turbulence and multiple molecular-aerosol light scattering on the spatial-frequency resolution of onboard satellite devices and on the quality of video information about the environment received from space is considered. The consideration is carried out in the framework of the theory of linear optical systems and Fourier transforms of optical signals. Representative data on the vertical dependence of the structure function of developed atmospheric turbulence and on the parameters of onboard satellite devices are used, including advanced modeling data on the spatial-frequency filtering by the molecular-aerosol atmosphere of the Earth in the visible spectral region (λ = 400-800 nm). Optical transfer functions of the complex system "turbulence - molecular-aerosol scattering - onboard satellite device" are calculated. It is shown that, during remote sensing of the Earth from space, the quality of satellite images of the environment and of spectral brightnesses, including the spatial-frequency resolution of onboard optical devices of medium (Δ ~ 10^1-10^2 m) and low (Δ ~ 10^3-10^4 m) spatial resolution, does not depend on atmospheric turbulence. An important result is that the influence of atmospheric turbulence, in comparison with incoherent multiple molecular-aerosol light scattering and the onboard satellite device, is significant only at high spatial frequencies and for small-scale fragments (Δ << 10 m) of space images and the corresponding spectral brightnesses.
In this paper, the influence of the parameters of a multi-iterative hashing algorithm with several modifiers on its cryptographic strength is considered. The relevance of applying such an algorithm and the need to study its parameters are justified, and a description of the algorithm is provided. The strength of a hash function against attacks that do not depend on the algorithm is determined by its bit length, i.e., by the number of unique hash values the function is able to generate. To estimate the algorithm's resistance to dictionary attacks and to brute-force and birthday attacks, the multi-iterative hashing algorithm with several modifiers is considered as an independent hash function. It is proposed to estimate the algorithm's strength for a given number of iterations by calculating the average bit length of an equivalently strong hash function. A description of the estimation method is provided. The experiments are performed using a truncated cryptographically strong hash function. The results of the experiments make it possible to compare the algorithm's strength metrics under different values of its parameters and to understand how the values of individual parameters, and their combinations, affect the algorithm's resistance to dictionary, brute-force, and birthday attacks. On the basis of the results obtained, conclusions can be drawn about the parameter values recommended for practical application of the algorithm. In conclusion, the paper presents the main results of the work. The authors believe that the algorithm can find application in the authentication subsystems of information systems, as well as in systems where the most important requirement is long-term strength.
OCR results for archival documents have to be corrected in order to improve accuracy. An algorithm is described that takes into account the peculiarities of the Russian language and allows large text corpora to be handled in a fully automatic mode. The correction process is divided into the stages of analyzing the entire text corpus, preparing data structures, selecting candidate words, and their final ranking. Using a rank-rating model to generate text corrections makes it possible to handle texts containing specific terminology from different subject areas.
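A simplified stand-in for the candidate-selection and ranking stages: for each misrecognized token, candidates are drawn from the corpus vocabulary by string similarity and ranked by a combination of similarity and corpus frequency. The paper's rank-rating model is more elaborate; this sketch uses a toy English corpus and generic scoring.

```python
import difflib
from collections import Counter

# Toy sketch: candidate generation by string similarity, ranking by
# similarity * corpus frequency (a generic stand-in for the rank-rating model).
corpus = "the archive stores historical documents and the documents are scanned".split()
freq = Counter(corpus)
vocabulary = list(freq)

def correct(token, n_candidates=3):
    candidates = difflib.get_close_matches(token, vocabulary, n=n_candidates, cutoff=0.6)
    scored = [(difflib.SequenceMatcher(None, token, c).ratio() * freq[c], c)
              for c in candidates]
    return max(scored)[1] if scored else token

print(correct("documcnts"))   # OCR error -> "documents"
print(correct("archlve"))     # OCR error -> "archive"
```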
The statement of the communication system synthesis task under Bayesian criteria with various loss functions is considered. Under sufficiently general conditions, significant differences are found in the structure and parameters of communication systems synthesized under Bayesian criteria with the simple loss function and with the uncertainty function.
A scheme for constructing a multi-voice speech synthesizer based on the synergy of integrating text-to-speech and voice conversion systems is presented in this article. Such an organization of the system allows synthesis and modification of the speech signal to be performed simultaneously, based on an integrated approach to its processing, significantly reducing the number of errors and artifacts that affect the resulting quality. This approach makes it possible to implement multi-voice speech synthesis without significant labor costs for training the speech database when adding new voices.
Works on discrete optimization problems in English and Russian are isolated in nature, and decision support systems are built on particular optimization methods, which makes it difficult to choose a proper method for solving a choice task. We propose to consider optimization methods in terms of utility functions. Based on a systematization of criteria, it is shown that the functions used in multi-criteria optimization methods can be interpreted as simple variants of the utility function. As a consequence, the utility function is shown to carry more information about the decision maker's preferences than the other functions used in optimization problems.
The paper deals with methods of simulating information system behavior. Attention is paid to the difficulties of applying dynamic system models, which are explained by the functional dependence of the temporal distribution on a large number of parameters and by the lack of initial data. An approach to attack detection based on the analysis of deviations from the autocorrelation function is proposed.
The application of signed utility functions is substantiated in this paper. The problem of convolving their values for multicriteria ordering is solved, including an assessment of whether multiplicative convolution is justified in this case.
In this article we consider how to develop a procedure of functional and parametric analysis based on calculating the effectiveness of the operation of a multifunctional complex during decision making; we offer relations for the calculation and note the practicability of adding support devices to information complexes.