The article investigates the effect of pulse interference on information reception under conditions of episodic synchronization of physical-level frames of a satellite communication channel with streams of radio pulses of unintended interference. An analytical model of this influence is proposed. Using the example of the DVB family of standards, the combined effect of noise and unintended impulse interference on the conditional error probabilities of receiving the sync group, the service part of the header, and the information part of the frame is shown. Estimates are given of the average number of physical-level frames per episodic synchronization interval, the number of episodic synchronization intervals, and the proportion of elementary signal elements in the frame exposed to interference, depending on the duration of the interference pulse. It is shown that there exist relations between the duration of the interference pulse and the duty cycle of the pulse sequence for which episodic synchronization of physical-level frames with the pulse interference stream has a significant impact on the functioning of the satellite communication channel. The dependences of the probability of erroneous reception of a physical-level frame on the signal-to-interference ratio at a fixed signal-to-noise ratio and on the duration of the interference pulse are obtained. It has been found that, at high signal-to-noise ratios, when the interference duration is comparable to the duration of the service part of the frame but significantly less than the frame duration, the probability of erroneous frame reception may be higher than at lower signal-to-noise ratios because of errors in receiving the service part of the frames.
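As a point of reference for how the conditional error probabilities named above combine, here is a minimal sketch assuming independent reception of the three frame parts (the notation is ours; the article's model additionally accounts for the correlation introduced by episodic synchronization):

```latex
P_{\mathrm{frame}} = 1 - \left(1 - P_{\mathrm{sync}}\right)\left(1 - P_{\mathrm{hdr}}\right)\left(1 - P_{\mathrm{data}}\right)
```

where $P_{\mathrm{sync}}$, $P_{\mathrm{hdr}}$, and $P_{\mathrm{data}}$ are the conditional error probabilities for the sync group, the service part of the header, and the information part of the frame. Under episodic synchronization these probabilities depend on which part of the frame the interference pulse overlaps, which is what drives the effect described in the last sentence.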
Introduction: Modern complex technical systems are often critical. Their criticality stems from the consequences of disrupting their functioning and of their failure to fulfill the required list of functions and tasks. Such systems are controlled and managed through communication systems and networks that thereby become critical themselves. There is a need to ensure the stable functioning of the complex technical systems themselves, of their control and monitoring systems, and of the communication systems and networks. The paper proposes a method for ensuring the functional stability of a communication system, based on identifying and eliminating conflicts that arise from the difference between the functioning profile and the actual profile of the system's functioning process. The proposed model of the communication system's functioning process allows the probability of ensuring the functional stability of the system to be determined on the basis of changes in the intensity of destabilizing factors affecting the system and of the identification and elimination of conflicts. The purpose of the study: to develop a methodology for ensuring the functional stability of a communication system under the influence of destabilizing factors and the emergence of conflicts, and a model of the system's functioning process that makes it possible to determine the probability of the system being in a functionally stable state. Methods: graph theory, matrix theory, and the theory of Markov processes. Results: an approach is proposed for assessing the functional stability of a communication system under the influence of destabilizing factors, and a technique has been developed to ensure the functional stability of a communication system. Practical significance: the results of the study can be used in the design and construction of complex technical systems, decision support systems, and systems of control, communication, and management.
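To illustrate the kind of Markov-process calculation such a model rests on, here is a minimal sketch: a three-state continuous-time chain whose stationary distribution gives the probability of the functionally stable state. The states and intensities are hypothetical, chosen only for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical states: 0 - functionally stable operation,
# 1 - conflict present, 2 - conflict being eliminated.
lam = 0.4   # intensity of destabilizing impacts (conflicts arising), 1/hour
mu_d = 2.0  # intensity of conflict identification, 1/hour
mu_r = 1.0  # intensity of conflict elimination (recovery), 1/hour

Q = np.array([
    [-lam,   lam,   0.0 ],   # stable -> conflict
    [ 0.0,  -mu_d,  mu_d],   # conflict -> elimination
    [ mu_r,  0.0,  -mu_r],   # elimination -> stable again
])

# Stationary distribution pi solves pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"P(functionally stable) = {pi[0]:.3f}")   # 0.625 for these intensities
```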
The huge volume of data produced by IoT applications needs the processing power and storage space provided by cloud, edge, and fog computing systems. Each of these computing paradigms has benefits as well as drawbacks. Cloud computing improves storage capacity and computational power but increases connection delay. Edge computing and fog computing offer similar advantages with decreased latency, but they have restricted storage, capacity, and coverage. Initially, optimization was employed to address the task offloading problem. However, conventional optimization cannot keep up with the tight latency requirements of decision-making in complex systems, which range from milliseconds to sub-seconds. As a result, ML algorithms, particularly reinforcement learning, are gaining popularity, since they can swiftly handle offloading issues in dynamic situations involving partially unknown data. We analyze the literature to examine the different techniques used to tackle latency-aware intelligent task offloading in cloud, edge, and fog computing. The lessons learned from this analysis are then presented in this report. Lastly, we identify additional avenues for study and the problems that must be overcome to attain the lowest latency in the task offloading system.
The article considers the problem of obtaining the best alternative using decision-making methods based on the experience of specialists and mathematical calculations. Group decision-making is appropriate for solving this problem. However, it can lead to the selection of several best alternatives (an ambiguous, multivariant result). Accounting for competence prioritizes the decisions of more competent participants and eliminates the emergence of several best alternatives in the process of group decision-making. The problem of determining the competence coefficients of participants in group decision-making has been formulated; it provides for the selection of the best alternative when the result is multivariant. A method for solving the problem has been developed. It involves discretizing the range of input variables and refining the competence coefficient values of the group decision-making participants in it, so that the best alternative is selected either by the majority principle or with the decision-maker's involvement. The competence coefficients of participants are then calculated by local linear interpolation of the refined competence coefficients at the surrounding points of the discretized range. The use of the proposed method is illustrated by an example of group decision-making under the main types of the majority principle for selecting an electrodeposition variant. The results show that the proposed method of calculating the competence coefficients of group decision-making participants through local linear interpolation is the most effective for selecting the best alternative with a multivariant result based on the relative majority.
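A minimal sketch of the interpolation step (the node values and data are illustrative, not from the article): competence coefficients of a participant, refined at the nodes of the discretized input range, are evaluated at an intermediate input value by local linear interpolation between the two surrounding nodes.

```python
import numpy as np

nodes = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # discretized input range
refined_k = np.array([0.9, 0.7, 0.8, 0.6, 0.75])   # refined coefficients at nodes

def competence(x: float) -> float:
    """Local linear interpolation of the refined competence coefficient."""
    i = np.searchsorted(nodes, x) - 1          # index of the left surrounding node
    i = min(max(i, 0), len(nodes) - 2)
    t = (x - nodes[i]) / (nodes[i + 1] - nodes[i])
    return (1 - t) * refined_k[i] + t * refined_k[i + 1]

print(competence(0.4))   # 0.76: interpolated between the nodes 0.25 and 0.5
```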
Scenario: System reliability monitoring focuses on determining the level at which the system works as expected (under certain conditions and over time) based on requirements. The edge computing environment is heterogeneous and distributed and may lack central control due to the scope, number, and volume of stakeholders. Objective: To identify and characterize the Real-time System Reliability Monitoring strategies that have considered Artificial Intelligence models for supporting decision-making processes. Methodology: An analysis based on a Systematic Mapping Study was performed on December 14, 2022. The IEEE and Scopus databases were considered in the exploration. Results: 50 articles published between 2013 and 2022 address the subject, with growing interest. The core use of this technology is related to the networking and health areas, articulating body sensor networks or data policy management (collecting, routing, transmission, and workload management) with edge computing. Conclusions: Real-time Reliability Monitoring in edge computing is ongoing and still nascent. It lacks standards but has gained importance and interest in the last two years. Most articles focused on push-based data collection methods for supporting centralized decision-making strategies. In addition to networking and health, it is concentrated and deployed in industrial and environmental monitoring. However, multiple opportunities and paths to improvement remain, e.g., data interoperability, federated and collaborative decision-making models, formalization of the experimental design for the measurement process, data sovereignty, organizational memory to capitalize on previous knowledge (and experiences), and calibration and recalibration strategies for data sources.
The paper is devoted to improving the accuracy of digital sensors with a time lag. The relevance of the topic is due to the widespread use of sensors of this type, which is largely driven by a sharp increase in measurement accuracy requirements. Its timeliness is also associated with the extensive application of digital information processing technologies in control systems, communications, monitoring, and many other fields. To eliminate the errors caused by the time delay of digital sensors, it is suggested to use an astatic high-speed corrector. The applicability of this corrector is justified by the properties of discrete-time dynamical systems. In this regard, the conditions are first considered under which discrete systems are physically realizable and have a transient of finite duration, since in the latter case they are the fastest. It is also shown that, in order to measure a polynomial signal of limited intensity with zero steady-state error, the astatism order of the sensor must be one greater than the degree of this signal. Based on the above conditions, the main result of the article is proved: a theorem establishing the conditions for the existence of the astatic high-speed corrector. When this corrector is connected at the output of the digital sensor, or when the sensor's software is corrected accordingly, an upgraded sensor is formed whose steady-state error is zero. This is because the corrector eliminates the error of the digital sensor caused by its time delay, which is assumed to be a multiple of the sampling period. The order of the corrector as a system is determined by the integer solution of an equation obtained in the work, which relates the degree of the measured polynomial signal, the time delay of the digital sensor, the permissible overshoot of the upgraded sensor, and the relative order of the desired corrector. This equation is solved for the cases where the degree of the measured signals is not greater than one, the overshoot takes frequently assigned values, and the time delay does not exceed four sampling periods. The corresponding order of the upgraded sensor is given in tabular form, which in many cases makes it possible to find the required corrector without solving the derived equation. The effectiveness of the suggested approach in improving the accuracy of digital sensors is shown by a numerical example. The zero error of the upgraded sensor is confirmed both by computer simulation and by numerical calculation. The results obtained can be used in the development of high-precision digital sensors of various physical quantities.
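The following sketch illustrates only the underlying idea of delay compensation for polynomial signals, not the corrector synthesized in the article: for a polynomial input of degree at most one delayed by d sampling periods, a first-order finite-difference extrapolator restores the current value with zero steady-state error.

```python
# Illustrative values: a ramp (polynomial of degree 1) measured by a sensor
# whose output is delayed by d sampling periods.
d = 3
signal = [0.5 * k for k in range(20)]                 # true input
delayed = [signal[max(k - d, 0)] for k in range(20)]  # digital sensor output

corrected = []
for k in range(20):
    prev = delayed[k - 1] if k > 0 else delayed[0]
    # Extrapolate d steps ahead using the first finite difference.
    corrected.append(delayed[k] + d * (delayed[k] - prev))

for k in (10, 15, 19):                 # steady-state samples, after the transient
    print(k, signal[k], corrected[k])  # corrected value equals the true signal
```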
The development trend of smart farms is for them to become fully autonomous, robotic enterprises. The prospects for the intellectualization of agricultural production, and of smart farms in particular, are today associated with the development of technologies for detecting and recognizing complex production situations and searching for effective solutions in these situations. The article presents the concept of such a decision support system for smart farms that uses decision support based on case-based reasoning (a CBR system). Its implementation requires solving a number of non-trivial tasks, first of all the tasks of formalizing the representation of situations and, on this basis, creating methods for comparing situations and retrieving them from the knowledge base. In this study, a smart farm is represented as a complex technological object consisting of interrelated components: the technological subsystems of the smart farm, the products produced, the objects of the operational environment, and the relationships between them. To implement algorithms for situational decision-making based on precedents, a formalized representation of the situation in the form of a multivector is proposed. This allowed us to develop a number of models of a trained similarity function between situations. The conducted experiments have shown the operability of the proposed models, on the basis of which an ensemble neural network architecture has been developed for comparing situations and selecting them from the knowledge base in decision-making processes. Of practical interest is monitoring the condition of plants by their video and photo images, which makes it possible to detect undesirable plant conditions (diseases) that can serve as a signal to activate the search for solutions in the knowledge base.
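A minimal sketch of the retrieval step under these assumptions (the block structure, weights, and data are illustrative): a situation is a multivector, i.e., a tuple of feature blocks describing the technological subsystems, products, and operational environment, and similarity is an aggregate of per-block cosine similarities. In the article the similarity function is trained; here the aggregation weights are fixed for illustration.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def situation_similarity(s1, s2, weights):
    """Weighted aggregate of per-block cosine similarities of two multivectors."""
    return sum(w * cosine(a, b) for w, (a, b) in zip(weights, zip(s1, s2)))

# Blocks: subsystem state, product state, environment state (illustrative).
case    = [np.array([1.0, 0.2, 0.0]), np.array([0.5, 0.5]), np.array([0.1, 0.9])]
current = [np.array([0.9, 0.3, 0.1]), np.array([0.4, 0.6]), np.array([0.2, 0.8])]
weights = [0.5, 0.3, 0.2]   # relative importance of the blocks

print(f"similarity = {situation_similarity(case, current, weights):.3f}")
# The case with the highest similarity is retrieved from the knowledge base.
```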
As the number of users on social media rises, information creation and circulation increase day after day on a massive scale. People share their ideas and opinions on these platforms. Social media and microblogging sites such as Facebook or Twitter are the favoured media for debating any important event, and information is shared immediately. This causes rumours to spread quickly and inaccurate information to circulate, making people uneasy. Thus, it is essential to evaluate and confirm the veracity of such information. Because of the complexity of the text, automated detection of rumours in their early phases is challenging. This research employs various NLP techniques to extract information from tweets and then applies various machine learning models to determine whether the information is a rumour. The classification is performed using three classifiers, SVC (Support Vector Classifier), Gradient Boosting, and Naive Bayes, for five different events from the PHEME dataset. These classifiers have drawbacks, including limited handling of imbalanced data, difficulty capturing complex linguistic patterns, lack of interpretability, difficulty handling large feature spaces, and insensitivity to word order and context. A stacking approach is used to overcome these drawbacks, in which the outputs of the combined classifiers are ensembled with an LSTM. The performance of the models has been analyzed. The experimental findings reveal that the ensemble model obtained efficient outcomes compared to the other classifiers, with an accuracy of 93.59%.
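A minimal sketch of the stacking idea on synthetic data. The paper ensembles the base classifiers' outputs with an LSTM; here a logistic-regression meta-learner stands in, since the combination scheme, not the meta-model, is what the sketch illustrates.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in for the extracted tweet features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The three base classifiers named in the abstract, combined by stacking.
stack = StackingClassifier(
    estimators=[("svc", SVC(probability=True)),
                ("gb", GradientBoostingClassifier()),
                ("nb", GaussianNB())],
    final_estimator=LogisticRegression(),   # stand-in for the LSTM meta-learner
)
stack.fit(X_tr, y_tr)
print(f"accuracy = {stack.score(X_te, y_te):.3f}")
```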
Information is given about a new approach to applying methods of the theory of semi-Markov processes to the applied problem of assessing the functional stability of elements of the information infrastructure functioning under the influence of multiple computer attacks. The task of assessing functional stability is reduced to finding the survivability function of the element under study and determining its extreme values. The relevance of the study is substantiated. The rationale is based on the assumption that quantitative methods of studying the stability of technical systems, which rest on reliability theory, cannot always be used to assess survivability. The concepts of "stability" and "computer attack" are clarified. Verbal and formal statements of the research tasks are formulated. The novelty of the results lies in the application of well-known methods to a practically significant problem in a new formulation, taking into account the limitations on the resource allocated to maintain the survivability of the element under study, provided that arbitrary distribution laws are adopted for the random times of computer attack implementation and the recovery times of the functional element. Recommendations on the formation of initial data, the content of the enlarged modeling stages, and a test case demonstrating the performance of the model are given. The results of the test simulation are presented as graphs of the survivability function. The resulting application can be used in practice to construct the survivability function for up to three computer attacks, as well as a tool for evaluating the reliability of analogous statistical models. The limitation is explained by the progressive increase in the dimension of the analytical model and the decreasing possibility of its meaningful interpretation.
The paper's goal is to develop a methodology and algorithm for recognizing objects in the environment while maintaining quality as the number of objects increases. For this purpose, the following problems were solved: recognition of shape features, estimation of relations between features, and matching of the found features and relations against defined templates (descriptions of complex and simple objects of the real world). A convolutional neural network is used for shape feature recognition. To train it, we used artificially generated images with shape features (3D primitive objects) randomly placed on scenes with different surface properties. The set of relations necessary to recognize objects that can be represented as combinations of shape features is formed. Testing on photos of real-world objects showed the ability to recognize them regardless of their type (in cases where different models and modifications are possible). This paper considers the example of outdoor luminaire recognition. The example shows the algorithm's ability not only to detect an object in the image but also to estimate the position of its components. This makes it possible to use the algorithm in object manipulation tasks performed by robotic systems.
A new approach to the synthesis of self-checking devices is considered, based on checking the computations of the monitored objects using Hamming codes whose check bits are described by self-dual functions. In this case, the structure operates in a pulsed mode, which effectively introduces temporal redundancy into the self-checking device. This, unfortunately, leads to some decrease in performance; however, it significantly improves controllability characteristics, which is especially important for devices and systems of critical application whose input data do not change very often. A brief review of methods for constructing built-in control circuits based on the self-duality property of the computed functions is given, along with the basic structures for organizing built-in control circuits. The proposed ways of developing the theory of synthesis of built-in control circuits are based on checking whether the computed functions belong to the class of self-dual Boolean functions. All possible values of the number of data bits have been established for which Hamming codes have the property of self-duality of the functions describing the check bits; encoders of such Hamming codes are self-dual devices. Since the check-bit functions of Hamming codes are linear, for them to be self-dual each must use an odd number of arguments. It is proved that the number of bits in the code words of Hamming codes with self-dual check functions is n = 3 + 4l, l ∈ N0. The results of simulating self-dual devices with built-in control circuits along two diagnostic parameters in the Multisim environment are presented. A method is proposed for modifying the structure of computation checking along two diagnostic parameters, which allows any linear block code to be used (not necessarily a Hamming code). It is based on retrofitting the encoder with a device that converts functions into self-dual ones; in fact, this is a code modification device. It is proved that, to obtain a modified Hamming code with self-dual check functions in the cases n ≠ 3 + 4l, l ∈ N0, it is enough to add, modulo M = 2, the non-self-dual check function to the function of the highest data bit.
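A quick check of the property the article relies on: a Boolean function f is self-dual iff f(~x1, ..., ~xn) = ~f(x1, ..., xn) for all inputs, and a linear (XOR) check function is self-dual exactly when it has an odd number of arguments.

```python
from itertools import product

def is_self_dual(f, n):
    """True iff f(complemented inputs) == complement of f(inputs) everywhere."""
    return all(f(*(1 - x for x in bits)) == 1 - f(*bits)
               for bits in product((0, 1), repeat=n))

def parity(*bits):
    """Linear check function: XOR (sum modulo 2) of its arguments."""
    return sum(bits) % 2

for n in range(1, 8):
    print(n, is_self_dual(parity, n))   # True for odd n, False for even n
```

This is why each check bit must cover an odd number of data and check positions for the encoder to be a self-dual device.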
Chatbot research has advanced significantly over the years. Enterprises have been investigating how to improve these tools' performance, adoption, and implementation to communicate with customers or internal teams through social media. Businesses also want to pay attention to quality reviews that customers post on social networks about products available on the market; from these they can select new methods to improve the service quality of their products and then pass them to publishing agencies to publish, based on the needs and evaluation of society. Although there have been numerous recent studies, not all of them address the issue of opinion evaluation in chatbot systems. The primary goal of this paper is to evaluate human comments in English via a chatbot system. The system's documents are preprocessed and opinion-matched to provide opinion judgments based on English comments. Grounded in practical needs and social conditions, this methodology aims to evolve chatbot content based on user interactions, allowing for a cyclic, human-supervised process with the following steps for evaluating comments in English. First, we preprocess the input data by collecting social media comments; then our system parses those comments according to the rating views for each topic covered. Finally, the system produces a rating and comment result for each comment entered. Experiments show that our method achieves an accuracy of 78.53%, improving on the referenced methods.
The article deals with the problem of forming a digital shadow of human movement. An analysis of the subject area showed the need to formalize the process of creating digital shadows for simulating human movements in virtual space, for testing software and hardware systems that operate on the basis of human actions, and for various systems of musculoskeletal rehabilitation. It was revealed that, among the existing approaches to human motion capture, none can be singled out as universal and stable under varied environmental conditions. A method for forming a digital shadow has been developed based on combining and synchronizing data from three motion capture systems: virtual reality trackers, a motion capture suit, and cameras with computer vision technologies. Combining these systems makes it possible to obtain a comprehensive assessment of a person's position and state regardless of environmental conditions (electromagnetic interference, illumination). To implement the proposed method, the digital shadow of the human movement process was formalized, including a description of the mechanisms for collecting and processing data from the various motion capture systems, as well as the stages of combining, filtering, and synchronizing the data. The scientific novelty of the method lies in the formalization of the process of collecting human movement data and of combining and synchronizing the hardware of the motion capture systems to create digital shadows of human movement. The obtained theoretical results will serve as a basis for a software abstraction of the digital shadow in information systems, for solving the problems of testing, simulating a person, and modeling his reaction to external stimuli by generalizing the collected movement data arrays.
Vectorization of objects in an image is necessary in many areas. The existing methods of satellite image vectorization do not provide the necessary quality of automation; therefore, manual labor is still required, yet the volume of incoming information usually exceeds the processing capacity. New approaches are needed to solve such problems. The article proposes a method for vectorizing objects in images using image decomposition into topological features. It splits the image into separate connected structures and relies on them for further work; as a result, already at this stage the image is organized into a tree-like structure. This method is unique in its way of working and is fundamentally different from traditional image vectorization methods. Most methods work using threshold binarization, and their main task is to select a threshold coefficient; the main problem arises when the image contains several objects that require different thresholds. The proposed method departs from direct work with the brightness characteristic in favor of analyzing the topological structure of each object, and it has a rigorous mathematical description based on algebraic topology. On the basis of the method, a geoinformation technology has been developed for the automatic vectorization of raster images in order to find the objects located in them. Testing was carried out on satellite images at different scales. The developed method was compared with the specialized vectorization tool R2V and showed higher average accuracy: the average percentage of automatic vectorization for the proposed method was 81%, versus 73% for the semi-automatic R2V module.
The task of automating and reducing the complexity of developing virtual training complexes is considered. The analysis of the subject area showed the need to move from a monolithic to a service-oriented architecture. It is found that using a monolithic architecture in the implementation of virtual training complexes limits the possibilities for modernizing the system, increases its software complexity, and makes it difficult to implement an interface for managing and monitoring the training process. The general concept of a microservice architecture for virtual training complexes is presented, and definitions of the main and secondary components are given. The scientific novelty of the research lies in the transition from the classical monolithic architecture in the subject area of virtual training complexes to a microservice architecture; in eliminating the shortcomings of this approach by implementing a single protocol for information exchange between modules; and in separating network interaction procedures into software libraries to unify and improve the reliability of the system. The use of isolated, loosely coupled microservices allows developers to use the best technologies, platforms, and frameworks for their implementation; to separate the graphical interface of the simulator instructor from the visualization and virtual reality system; and to flexibly replace the main components (visualization, interface, interaction with virtual reality) without changing the architecture or affecting other modules. The structural model of the microservice architecture is decomposed, and the specifics of the functioning of the main components are presented. The implementation of microservice networking libraries and a JSON-based data exchange protocol is considered. The practical significance of the proposed architecture lies in the possibility of parallelizing and reducing the complexity of developing and modernizing training complexes. The features of the functioning of systems implemented in the proposed microservice architecture are analyzed.
Machine learning and digital signal processing methods are used in various industries, including the analysis and classification of seismic signals from surface sources. The developed wave type analysis algorithm makes it possible to automatically identify and, accordingly, separate incoming seismic waves based on their characteristics. To distinguish wave types, a seismic measuring complex is used that determines the characteristics of the boundary waves of surface sources using special molecular electronic sensors of angular and linear oscillations. The results of the algorithm for processing data obtained by seismic observation using spectral analysis based on the Morlet wavelet are presented. The paper also describes an algorithm for classifying signal sources and determining the distance and azimuth to the point of excitation of surface waves; it considers the use of statistical characteristics and MFCC (Mel-frequency cepstral coefficient) parameters, as well as their joint application. The statistical characteristics of the signal were variance, kurtosis coefficient, entropy, and mean value; gradient boosting was chosen as the machine learning method, and a gradient boosting model using statistical and MFCC parameters was used to determine the distance to the signal source. Training was conducted on test data based on the selected special parameters of signals from sources of seismic excitation of surface waves. From a practical point of view, the new methods of seismic observation and boundary wave analysis make it possible to ensure a dense arrangement of sensors in hard-to-reach places, to fill the gaps in algorithms for processing data from seismic angular-motion sensors, to classify and systematize sources, to improve prediction accuracy, and to implement algorithms for locating and tracking sources. The aim of the work was to create seismic data processing algorithms for classifying signal sources and determining the distance and azimuth to the point of excitation of surface waves.
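A minimal sketch of this feature pipeline on synthetic signals (the labels and signal models are illustrative): each record is reduced to the statistical features named above, variance, kurtosis, entropy, and mean, and fed to a gradient boosting classifier; MFCC parameters would be concatenated to the same feature vector.

```python
import numpy as np
from scipy.stats import entropy, kurtosis
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def features(sig):
    """Statistical features: variance, kurtosis, entropy of the amplitude
    histogram, and mean value."""
    hist, _ = np.histogram(sig, bins=32, density=True)
    return [np.var(sig), kurtosis(sig), entropy(hist + 1e-12), np.mean(sig)]

# Two hypothetical source classes: a tonal component in noise vs. pure noise.
X = [features(np.sin(0.2 * np.arange(512)) + 0.5 * rng.standard_normal(512))
     for _ in range(40)]
X += [features(rng.standard_normal(512)) for _ in range(40)]
y = [0] * 40 + [1] * 40

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict([features(rng.standard_normal(512))]))  # most likely class 1
```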
Computer networks are based on technology that provides the technical infrastructure where routing protocols are used to transmit packets over the Internet. Routing protocols define how routers communicate with each other by distributing information: how they learn available routes, build routing tables, make routing decisions, and share information with neighbors. The main purpose of routing protocols is to determine the best route from source to destination. A routing protocol operating within an autonomous system is called an interior gateway protocol (IGP). The article analyzes the problem of correctly choosing a routing protocol. Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP) are considered the leading routing protocols for real-time applications, and for this reason they were chosen for study. The main objective of the study is to compare the two routing protocols and to evaluate them against different performance indicators. The assessment is carried out theoretically, by analyzing their characteristics and operation, and practically, through simulation experiments. After a study of the literature, the simulation scenarios and the quantitative indicators by which the performance of the protocols is compared are defined. First, a network model with OSPF is designed and simulated using the OPNET Modeler simulator. Second, EIGRP is implemented in the same network scenario and a new simulation is run. Running the scenarios collects the necessary results and allows the operation of the two protocols to be analyzed; the data are extracted, and an assessment and conclusions are drawn against the defined quantitative indicators.
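For orientation, here is a minimal sketch of how the two protocols cost a path, using the well-known default constants (the link values are illustrative). OSPF sums per-link costs derived from bandwidth against a reference bandwidth; EIGRP with default K-values (K1 = K3 = 1, K2 = K4 = K5 = 0) combines the minimum bandwidth along the path with the total delay.

```python
def ospf_cost(link_bandwidths_kbps, reference_bw_kbps=100_000):
    """OSPF path cost: sum of per-link costs, reference_bw / link_bw (min 1)."""
    return sum(max(reference_bw_kbps // bw, 1) for bw in link_bandwidths_kbps)

def eigrp_metric(link_bandwidths_kbps, total_delay_us):
    """EIGRP composite metric with default K-values:
    256 * (10^7 / min_bw_kbps + total_delay_us / 10)."""
    bw_term = 10_000_000 // min(link_bandwidths_kbps)
    delay_term = total_delay_us // 10
    return 256 * (bw_term + delay_term)

path_bw = [10_000, 1_544]        # 10 Mbps Ethernet link, T1 link
print(ospf_cost(path_bw))        # 10 + 64 = 74
print(eigrp_metric(path_bw, total_delay_us=21_000))
```

The structural difference visible here (OSPF is additive in bandwidth-derived costs, EIGRP is dominated by the bottleneck bandwidth plus cumulative delay) is one reason the two protocols can pick different best routes in the same topology.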
The paper considers the problem of the physical-layer security of a wireless communication system for a multipath signal propagation channel in the presence of a wiretap channel. To generalize the propagation effects, the Beaulieu-Xie shadowed channel model was assumed. To describe the security of the information transfer process, the secure outage probability was considered as the metric. An analytical expression for the secure outage probability was obtained and analyzed as a function of the characteristics of the channel and the communication system: the average signal-to-noise ratio in the main channel and in the wiretap channel, the effective path-loss exponent, the relative distance between the legitimate receiver and the eavesdropper, and the threshold rate normalized to the capacity of a flat Gaussian channel. The analysis considers sets of parameters that cover all practically important scenarios for the functioning of wireless communication systems: deep fading (corresponding to the hyper-Rayleigh scenario) and mild fading, a significant line-of-sight component with a large number of multipath clusters, significant shadowing of the dominant component with a small number of multipath waves, and all intermediate options. It is found that the energy requirements for guaranteed secure communication at a given rate are determined primarily by the power of the multipath components. It is also found that an irreducible secure outage probability of a communication session exists and grows for channels with strong overall shadowing of the signal components, which is important to take into account in practice when setting requirements for the signal-to-noise ratio and data transfer rate in the direct channel that provide the desired degree of security of the wireless communication session.
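For reference, the standard formulation of this metric (the notation is ours; the article's closed-form expression for the Beaulieu-Xie shadowed model is considerably more involved) defines the instantaneous secrecy capacity and the secure outage probability as

```latex
C_s = \left[\log_2\!\left(1+\gamma_M\right) - \log_2\!\left(1+\gamma_W\right)\right]^{+},
\qquad
P_{so}(R_s) = \Pr\left\{C_s < R_s\right\},
```

where $\gamma_M$ and $\gamma_W$ are the instantaneous signal-to-noise ratios in the main and wiretap channels, $R_s$ is the target secrecy rate, and $[x]^{+} = \max(x, 0)$.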
Structural dependences of the working outputs of combinational logic circuits were studied with the aim of subsequently identifying the types of possible errors. The types of manifested errors and a classification of the working outputs of combinational logic circuits are given. It is shown that internal structural connections in discrete devices increase the multiplicity of possible errors. The condition for determining the functional dependence of outputs with respect to the manifestation of errors of the studied multiplicity is given. It is noted that, of the many types of errors, unidirectional errors can appear at the outputs of such circuits. A well-known method for determining unidirectionally dependent working outputs of discrete device circuits is presented; its drawback is that each output must be compared pairwise with every other output of the set. To simplify the search for such outputs, the author proposes a new method for identifying unidirectionally dependent working outputs. This method differs from known ones in that it is applicable to any number of outputs and requires much less time to find the outputs in question. It is shown that combinational logic circuits can have functional features such that only unidirectional errors can appear at the working outputs. Accordingly, a new method for identifying any number of unidirectionally independent working outputs of combinational circuits is also proposed. The methods proposed in the article for finding unidirectionally dependent and unidirectionally independent outputs of combinational logic circuits require only simple mathematical calculations. In the Multisim environment, internal faults of the diagnosed circuits were simulated and all possible errors at the working outputs were recorded. The results of the experiments confirmed the validity of the theoretical results obtained.
The use of various types of heuristic algorithms based on soft computing technologies for distributing tasks in groups of mobile robots performing identical operations in a single workspace is considered: genetic algorithms, ant colony algorithms, and artificial neural networks. It is shown that this problem is NP-hard and that solving it by exhaustive search for a large number of tasks is infeasible. The initial problem is reduced to typical NP-complete problems: the generalized problem of finding the optimal group of closed routes from one depot, and the traveling salesman problem. A description of each of the selected algorithms and a comparison of their characteristics are presented. A step-by-step algorithm of operation is given, taking into account the selected genetic operators and their parameters for a given population size. The general structure of the developed algorithm is presented, which makes it possible to solve a multi-criteria optimization problem efficiently enough, taking into account time costs and an integral criterion of robot efficiency that accounts for energy costs, the functional saturation of each agent of the group, etc. The possibility of solving the initial problem using an ant colony algorithm and a generalized search for the optimal group of closed routes is shown. For multi-criteria optimization, the possibility of a linear convolution of the obtained vector optimality criterion is shown, achieved by introducing additional parameters characterizing group control: the overall efficiency of the functioning of all robots, the energy costs of the support group, and the energy for placing one robot on the work field. To solve the task distribution problem using a Hopfield neural network, the problem is represented as a graph obtained in the transition from the generalized problem of finding the optimal group of closed routes from one depot to the traveling salesman problem. The quality indicator is the total path traveled by each of the robots in the group.
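The linear convolution mentioned above takes its usual form (notation ours):

```latex
J(x) = \sum_{i=1}^{m} \lambda_i \, f_i(x), \qquad \lambda_i \ge 0, \quad \sum_{i=1}^{m} \lambda_i = 1,
```

where the $f_i$ are the normalized particular criteria (here: overall robot efficiency, energy costs of the support group, energy for placing one robot on the work field) and the $\lambda_i$ are the introduced weighting parameters, so that the vector optimality criterion collapses to a single scalar objective.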
The problem of classification using a compartmental spiking neuron model is considered. A review of the state of the art in spiking neural network analysis is carried out; it is concluded that there are very few works studying compartmental neuron models. The choice of a compartmental spiking model as the neuron model for this work is justified. A brief description of the model is given, and its main features are noted in terms of the possibility of its structural reconfiguration. The method of structural adaptation of the model to the input spike pattern is described. The general scheme for organizing compartmental spiking neurons into a network for solving the classification problem is given. The time-to-first-spike method is chosen for encoding numerical information into spike patterns, and a formula is given for calculating the delays of individual signals in the spike pattern. Brief results of experiments on solving the classification problem on publicly available datasets (Iris, MNIST) are presented, and the results are concluded to be comparable with existing classical methods. In addition, a detailed step-by-step description of experiments on determining the state of an autonomous uninhabited underwater vehicle is provided. Estimates of the computational costs of solving the classification problem with a compartmental spiking neuron model are given. It is concluded that spiking compartmental neuron models are promising for increasing the biological plausibility of implementing behavioral functions in neuromorphic control systems. Further promising directions for the development of neuromorphic systems based on the compartmental spiking neuron model are considered.
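A common variant of time-to-first-spike encoding (the article's exact delay formula may differ) maps each feature, normalized to [0, 1], to a spike delay within an encoding window $T_{\max}$:

```latex
t_i = T_{\max}\,\bigl(1 - \tilde{x}_i\bigr), \qquad
\tilde{x}_i = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}},
```

so larger feature values produce earlier spikes, and the whole numeric vector becomes a spike pattern of delays $t_1, \dots, t_n$.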
It is difficult or impossible to develop software without errors. Errors can lead to an abnormal order of machine code execution when data is passed to a program. The division of a program into routines makes attacks possible that use the return instructions of these routines. Most existing security tools need access to program source code to protect against such attacks. The proposed defensive method is intended as a comprehensive solution to the problem: firstly, it makes it difficult for an attacker to gain control over program execution, and secondly, it decreases the number of program routines that can be used in an attack. Specific security code is inserted at the beginning and end of the routines to make gaining control over program execution more complicated. The return address is kept secure during a call of a protected routine, and the protected routine is restored after its execution if it was damaged by the attacker. To reduce the number of routines suitable for attacks, it was suggested to use synonymous substitutions of instructions that contain dangerous values. It should be noted that the proposed defensive measures do not affect the original application's algorithm. To confirm the effectiveness of the described defensive method, a software implementation was developed and tested. Verification was carried out using synthetic tests, performance tests, and real programs. The test results demonstrated the reliability of the proposed measures: they eliminate program routines suitable for attacks and make it impossible to use standard return instructions for conducting attacks. Performance tests showed a 14% drop in operating speed, which approximately matches the level of the nearest analogues. The proposed solution reduces the number of possible attack scenarios, and its applicability is broader in comparison with analogues.
The receiving and transmitting paths of modern radio communication systems are built on the basis of an open structure that provides hierarchical differentiation of access to the provided telecommunication services. However, this approach does not exclude the possibility of unauthorized users accessing the transmitted content. Hiding information by cryptographic protection methods in such a situation only attracts additional interest to the transmission; therefore the most pragmatic solution is to use signals of complex structure, which significantly complicate or even exclude the extraction of information from them by third-party users. The problem of regulating access in the development and design of radio system elements is multifaceted and highly complex. One of the directions for solving problems in this subject area is based on well-known approaches to expanding the signal base; however, the algorithms for their practical implementation were obtained without taking into account the limitations on the allocated resource and the very fact that these algorithms are in use. Based on systems theory and the general theory of communication, an approach to the formation of signals of complex structure has been developed that increases their structural secrecy with respect to unauthorized users. At the same time, the known solutions at the physical level of signal spaces were refined, which made it possible to formalize the procedures for forming radio signals with specified properties. A method of formalizing the mapping function of the signal space based on extracting the stochastic properties of pseudo-random sequences has been substantiated, which makes it possible to ensure the uncertainty of their structure under unauthorized processing. The proposed approach is validated on the example of forming quadrature modulation signals, with a subsequent analysis of their properties from the positions of legitimate and illegitimate users. The results obtained confirm the uncertainty under illegitimate processing, with a slight deterioration in the noise immunity of the radio communication system; on the whole, this allows us to conclude that the theoretical solutions are adequate. As an example, constellation diagrams of signals at the output of a quadrature receiver are presented. The set of technical solutions presented in the work determines the novelty of the approach. The scientific problem being solved belongs to the class of problems of synthesizing signals of complex structure.
Modern methods for planning the execution of task packages in multi-stage systems are characterized by restrictions on problem dimension and by the impossibility of guaranteeing better results than fixed packages for different values of the tasks' input parameters. The paper solves the problem of optimizing the composition of task packages executed in multi-stage systems using the branch-and-bound method. Various ways of forming the order of execution of task packages in multi-stage systems (heuristic rules for ordering task packages in the sequences of their execution on the system's devices) have been studied. The method of ordering packages in the sequence of their execution (a heuristic rule) that minimizes the total time of performing actions with them on the devices is defined. Based on the obtained rule, a method is formulated for ordering the task types in which their packages are considered within the branch-and-bound procedure. A mathematical model of the process of performing actions with packages on the system's devices has been built, providing the calculation of its parameters. A method for forming all possible decisions on the composition of task packages for a given number of them has been constructed. Decisions on the composition of task packages of different types are interpreted in the branch-and-bound procedure in order to build the optimal combination of them. To implement the branch-and-bound method, a branching (splitting) procedure is formulated that forms subsets of decisions including packages of different compositions of tasks of the same type. Expressions are constructed for calculating the lower and upper estimates of the optimization criterion for the composition of packages for the subsets formed in the branching procedure. The dropout procedure excludes subsets whose lower estimate is not less than the record (the incumbent solution). To find optimal solutions, a breadth-first search strategy is applied, which examines all subsets of decisions that include various packages of tasks of the same type obtained by splitting the subsets not excluded after the dropout procedure. The developed algorithms are implemented in software, which made it possible to obtain plans for the execution of task packages in a multi-stage system that are on average 30% better than those with fixed packages.
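A schematic branch-and-bound skeleton with the breadth-first strategy and the dropout (pruning) step described above. The problem data are a toy stand-in, not the article's model: tasks of one type must be grouped into packages, each package costs a fixed setup time plus a time per task, and a package holds at most CAP tasks.

```python
from collections import deque

N, CAP, SETUP, PER_TASK = 7, 3, 4.0, 1.0   # illustrative problem data

def lower_bound(remaining, cost_so_far):
    """At least ceil(remaining / CAP) more packages are needed."""
    packages = -(-remaining // CAP)
    return cost_so_far + packages * SETUP + remaining * PER_TASK

best = float("inf")                # the record (incumbent solution)
queue = deque([(N, 0.0)])          # (tasks not yet packaged, cost so far)
while queue:                       # breadth-first search over partial plans
    remaining, cost = queue.popleft()
    if remaining == 0:
        best = min(best, cost)
        continue
    for size in range(1, min(CAP, remaining) + 1):   # branching: next package size
        child = (remaining - size, cost + SETUP + size * PER_TASK)
        if lower_bound(child[0], child[1]) < best:   # dropout procedure
            queue.append(child)

print(best)   # 19.0: e.g. packages of sizes 3 + 3 + 1
```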
When building autonomous real-time systems (RTS), it is necessary to solve the problem of optimal multitasking loading of a number of digital signal processors functioning in parallel. One of the reserves for achieving the desired result is to sample the signal magnitude from the sensors as rarely as possible in time. In this case, a linear or stepwise approximation of the signal from the samples with an acceptable reconstruction error must be provided. One of the system tasks of these processors is filtering signals, that is, limiting the spectrum to a cutoff frequency. A distinctive feature of the approach proposed in the article is the following condition: if measuring this frequency is difficult (for example, in the electromechanical means of the RTS), then for such signals it is proposed to match the maximum values of the parameters of a harmonic half-wave: the approximation error, the speed, and the acceleration. The study opens up the prospect of applying new approaches to time sampling of signals in the amplitude-time domain and to determining the equivalent cutoff frequency of the spectrum for such signals. In this article, dependences are obtained of the value of the unit of system time for data input-output on the degree of agreement between the maximum values of the signal parameters. A mathematical model of the extreme behavior of a signal between two adjacent samples is given in the form of a harmonic half-wave. The study is also extended to convex composite harmonic functions, according to which the signal can deviate from the results of a linear or stepwise approximation between samples. The models are compared by the value of the relative time sampling intervals, depending on the degree of matching of the maximum parameters of the harmonic half-wave. The comparison takes into account, in addition to these maximum parameters, the relationship of the maximum signal speed with the error of the stepwise approximation of the samples and the relationship of the maximum signal acceleration with the maximum error of the linear approximation. The results make it possible to determine the duration of the intervals of uniform time sampling of the signal based on the results of inspecting the control object, and to substantiate a significant increase in the sampling interval or a corresponding increase in the number of tasks solved per unit of system time.
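The relationships mentioned in the comparison are instances of the classical approximation bounds (notation ours): over a sampling interval $\Delta t$, a stepwise (zero-order) approximation and a linear interpolation of a signal with maximum speed $V_{\max}$ and maximum acceleration $A_{\max}$ satisfy

```latex
\varepsilon_{\text{step}} \le V_{\max}\,\Delta t,
\qquad
\varepsilon_{\text{lin}} \le \frac{A_{\max}\,\Delta t^{2}}{8},
```

so for a permissible error $\varepsilon$ the admissible sampling intervals are $\Delta t_{\text{step}} \le \varepsilon / V_{\max}$ and $\Delta t_{\text{lin}} \le \sqrt{8\varepsilon / A_{\max}}$, which is why linear approximation allows noticeably rarer sampling for smooth signals.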
An approach to the technical diagnostics of complex technical systems is proposed, based on the results of telemetry processing by an external monitoring and diagnostics system using hybrid network structures. The principle of constructing diagnostic complexes for complex technical systems is considered; it ensures the automation of the technical diagnostics process and is based on using models in the form of hybrid network structures for processing telemetric information, including multilayer neural networks and discrete Bayesian networks with stochastic learning. A model of changes in the parameters of the technical state of complex technical systems based on multilayer neural networks has been developed, which makes it possible to form a probabilistic assessment of attributing the current functioning situation of a complex technical system to the set of considered situations according to individual telemetry parameters. A multilevel hierarchical model of technical diagnostics based on a discrete Bayesian network with stochastic learning has also been developed, which aggregates the information received from the neural network models and recognizes the current functioning situation of the complex technical system. When emergencies occur in the functioning of the complex technical system, faulty functional units are localized from the results of telemetry processing, and an explanation of the cause of the emergency is formed. The stages of implementing the technical diagnostics of complex technical systems using the proposed hybrid network structures in telemetry processing are detailed. An example of using the developed approach for the technical diagnostics of spacecraft onboard systems is presented. The advantages of the proposed approach over the traditional one, based on analyzing whether telemetry parameter values fall within given tolerances, are shown.
Analysis of the application of smart home technology indicates an insufficient level of controllability of its infrastructure, which leads to excessive consumption of energy and information resources. The problem of managing the digital infrastructure of human living space is associated with the large number of highly specialized home automation solutions, which complicate the management process. A smart home is considered as a set of independent cyber-physical devices, each pursuing its own goal. For the coordinated work of the cyber-physical devices, it is proposed to organize their joint operation through a single information center. Simulating device operation modes in a digital environment preserves the resource of the physical devices by virtually evaluating all possible variants of interaction between the devices and with the physical environment. A methodology for controlling the microclimate of a smart home using an ensemble of fuzzy artificial neural networks is developed, using the example of the joint operation of air conditioning, ventilation, and heating. The neural network algorithm makes it possible to monitor the parameters of the physical environment, predict the operating modes of the cyber-physical devices, and generate control signals for each of them, ensuring joint operation of the devices with minimal resource consumption and information traffic. A variant of the practical implementation of a smart home climate control system is proposed, using the example of a multifunctional educational computer class. Hybrid neural networks for the air conditioning, ventilation, and heating systems were developed. The microclimate control system of a multifunctional university classroom was tested using hybrid neural networks, with a domestically produced programmable logic controller as the control device. The goal of control based on cooperating cyber-physical devices is to achieve a minimum of power consumption and information traffic during their joint operation.
This paper employs the overarching concept of communities to express the social contexts within which human creativity is exercised and learning happens. With the advent of digital technologies, these social contexts, the communities we engage in, change radically. The new landscape brought about by digital technologies is characterized by new qualities, new opportunities for action, new community affordances. The term onlife is adopted from the Onlife Manifesto and used to distinguish the new kind of communities brought about by modern digital technologies, the onlife communities. Design principles are presented to foster such communities and support their members. These principles constitute a framework that emphasizes the concept of performativity, i.e., knowledge is based on human performance and actions done within certain social contexts, rather than on the development of conceptual representations. To demonstrate the use of the framework and the corresponding principles, the paper presents how they can be used to analyze, evaluate, and reframe a concrete system addressing creativity and learning in the field of cultural heritage (history teaching and learning). One of the most significant results is the adoption of principles that facilitate students' engagement in rich learning experiences, moving from the role of end-user towards the role of expert-user with the support of so-called maieuta-designers. The result of this process is the use of the studied software not only to consume ready-made content but also to create new, student-generated content, offering new learning opportunities to the students. As the evaluation shows, these new learning opportunities enable students to develop a deeper understanding of the topics studied.
The current state of the problem of complex planning of the execution of task packets in multistage systems is characterized by the absence of universal methods for forming decisions on the composition of packets, by restrictions on the dimension of the problem, by the impossibility of guaranteeing effective solutions for various values of its input parameters, and by the inability to take into account the condition of forming sets from the results. In this article the authors solve the task of planning the execution of task packets in multistage systems with the formation of sets of results within specified deadlines. To solve the planning problem, the generalized function of the system was decomposed into a set of hierarchically interrelated subfunctions. This decomposition made it possible to use a hierarchical approach to planning the execution of task packets in multistage systems, which involves making decisions on the composition of packets at the top level of the hierarchy and scheduling the execution of the packets at the bottom level. To optimize the decisions on the compositions of task packets and the schedules of their execution, the theory of hierarchical games is used, and a system of criteria at the decision-making levels is built. The evaluation of the effectiveness of decisions on the composition of packets at the top level of the hierarchy is ensured by distributing the results of task execution by packets in accordance with the formed schedule. To evaluate the effectiveness of decisions on the composition of packets, a method for ordering the identifiers of the set types with registration of the deadlines has been formulated, together with a method for distributing the results of the tasks performed by packets, which calculates the moments when the formation of sets is completed and the delays in their formation relative to the specified deadlines. Studies of planning the execution of task packages in multistage systems have been carried out under the condition that the sets are formed within specified deadlines. On their basis, conclusions regarding the dependence of the planning efficiency on the input parameters of the problem were formulated.
The problem of stability analysis, with its components of reliability and survivability, is quite popular both in telecommunications and in other industries involved in the development and operation of complex networks. The most suitable network model for this type of problem is one that uses the postulates of graph theory. At the same time, the assumption of the random nature of failures of individual links of the telecommunications network allows it to be considered in the form of a generalized Erdos–Renyi model. It is well known that the probability of failure of elements can be interpreted as an availability coefficient or an operational availability coefficient, as well as in the form of other indicators that characterize the performance of telecommunications network elements. Most approaches consider only the case of two-pole connectivity, when it is necessary to ensure the interaction of two end destinations. In modern telecommunications networks, services such as virtual private networks come to the fore, for which multipoint connections are organized that do not fit into the concept of two-pole connectivity. In this regard, we propose to extend this approach to the analysis of multi-pole and all-pole connections. The approach for two-pole connectivity is based on a method that uses the connectivity matrix as a basis and, in fact, assumes a sequential search of all combinations of vertex sections, starting from the source and the sink. This method leads to the inclusion of non-minimal sections in the general composition, which required the introduction of an additional procedure for checking each added section for non-redundancy. The approach for all-pole connectivity is based on a method that uses the connectivity matrix as a basis and, in fact, assumes a sequential search of all combinations of vertex sections not including one of the vertices considered terminal. A simpler solution was to check each added section for uniqueness. The approach for multipolar connectivity is similar to that used in the formation of the set of minimal all-pole sections and differs only in the procedure for selecting the combinations used to form the section matrix, of which only those containing pole vertices are preserved. As a test communication network, the Rostelecom backbone network is used, deployed to form flows in the "Europe–Asia" direction. It is shown that multipolar sections are the most general concept with respect to two-pole and all-pole sections. Despite the possibility of such a generalization, in practical applications it is advisable to consider particular cases due to their lower computational complexity.
A lot of network management tasks require a description of the logical and physical computer network topology. Obtaining such a description automatically is complicated by the possible incompleteness and incorrectness of the initial data on the network structure. This article provides a study of the properties of incomplete initial data on network device connectivity at the link layer. Methods for the generalized handling of heterogeneous input data on the link layer are included. We describe models and methods for deriving a missing part of the data, as well as the conditions under which it is possible to obtain a single correct network topology description. The article includes algorithms for building a link layer topology description from incomplete data when this data can be completed up to the required level. We also provide methods for detecting and resolving ambiguity in the data and methods for improving incorrect initial data. Tests and evaluations provided in the article demonstrate the applicability and effectiveness of the developed methods for discovering various heterogeneous real-life networks. Additionally, we show the advantages of the provided methods over previous analogs: our methods are able to derive up to 99% of the data on link layer connectivity in polynomial time and can provide a correct solution from ambiguous data.
Currently, centrally reserved access to the medium in digital radio communication networks of the IEEE 802.11 family of standards is an alternative to random multiple access to the medium such as CSMA/CA and is mainly used for the transmission of voice and video messages in real time. Centrally reserved access to the medium makes it a target of interest for attackers. However, the effectiveness of centrally reserved access to the medium under potentially possible destructive impacts has not been assessed, and therefore it is impossible to estimate the contribution of such impacts to the decrease in the effectiveness of such access. Also, the stage of establishing centrally reserved access to the medium was not previously taken into account. The purpose of the work is the development of an analytical model of centrally reserved access to the medium under destructive impacts in digital radio communication networks of the IEEE 802.11 family of standards. A mathematical model of centrally reserved access to the medium has been developed, taking into account not only the stage of its functioning but also the stage of its formation under destructive impacts by an attacker. Moreover, in the model, the stage of establishing centrally reserved access to the medium reflects a sequential relationship between such access, synchronization elements in digital radio communication networks, and random multiple access to the medium of the CSMA/CA type. It was established that collisions in the data transmission channel caused by destructive impacts can eliminate centrally reserved access to the medium even at the stage of its establishment. The model is applicable in the design of digital radio communication networks of the IEEE 802.11 family of standards, the optimization of the operation of such networks, and the detection of potential destructive impacts by an attacker.
The analysis of networks of a diverse nature, be they citation networks, social networks or information and communication networks, includes the study of topological properties that allow one to assess the relationships between network nodes and to evaluate various characteristics, such as the density and diameter of the network, related subgroups of nodes, etc. For this, the network is represented as a graph: a set of vertices and edges between them. One of the most important tasks of network analysis is to estimate the significance of a node (or, in terms of graph theory, a vertex). For this, various measures of centrality have been developed, which make it possible to assess the degree of significance of the nodes of the network graph in the structure of the network under consideration.
The existing variety of measures of centrality gives rise to the problem of choosing the one that most fully describes the significance and centrality of the node.
The relevance of the work is due to the need to analyze the centrality measures to determine the significance of vertices, which is one of the main tasks of studying networks (graphs) in practical applications.
The study made it possible, using the principal component method, to identify collinear measures of centrality, which can then be excluded both to reduce the computational complexity of calculations, which is especially important for networks with a large number of nodes, and to increase the reliability of the interpretation of the results obtained when evaluating the significance of a node within the analyzed network in solving practical problems.
In the course of the study, patterns in the representation of various centrality measures in the principal component space were revealed, which allow the measures to be classified in terms of the proximity of the images of network nodes formed in the space determined by the centrality measures used.
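A minimal sketch of the analysis described above, under illustrative assumptions: a small stand-in network (the karate club graph) and five common centrality measures are projected into principal-component space; measures whose loading vectors are nearly parallel are candidates for exclusion as collinear.

```python
import numpy as np
import networkx as nx
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

G = nx.karate_club_graph()  # stand-in for the analyzed network
measures = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
    "pagerank": nx.pagerank(G),
}
X = np.column_stack([[m[v] for v in G] for m in measures.values()])
Z = StandardScaler().fit_transform(X)          # nodes x measures
pca = PCA().fit(Z)
print("explained variance:", pca.explained_variance_ratio_.round(3))
# Nearly parallel loading vectors indicate collinear centrality measures.
for name, loading in zip(measures, pca.components_.T):
    print(f"{name:12s}", loading.round(2))
```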
This paper presents a comparison between discrete Hidden Markov Models and Convolutional Neural Networks for the image classification task. By fragmenting an image into sections, it is feasible to obtain vectors that represent visual features locally, and, if a spatial sequence is established in a fixed way, it is possible to represent an image as a sequence of vectors. Using clustering techniques, we obtain an alphabet from said vectors, and then symbol sequences are constructed to obtain a statistical model that represents a class of images. Hidden Markov Models, combined with quantization methods, can treat noise and distortions in observations for computer vision problems such as the classification of images with lighting and perspective changes. We have tested architectures based on three, six and nine hidden states, favoring detection speed and low memory usage. Also, two types of ensemble models were tested. We evaluated the precision of the proposed methods using a public domain data set, obtaining competitive results with respect to fine-tuned Convolutional Neural Networks, but using significantly fewer computing resources. This is of interest in the development of mobile robots with computers with limited battery life, but requiring the ability to detect and add new objects to their classification systems.
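A minimal sketch of the quantization-plus-HMM pipeline, under stated assumptions: synthetic patch vectors stand in for real image fragments, and the HMM parameters are left uniform (in practice they would be trained, e.g. by Baum-Welch); the forward pass below is the standard log-space likelihood used to score a symbol sequence against a class model.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
patches = rng.normal(size=(400, 64))          # local feature vectors (synthetic)
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(patches)
symbols = kmeans.predict(patches[:50])        # one image as a symbol sequence

def log_likelihood(obs, pi, A, B):
    """Forward algorithm in log space for a discrete HMM."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) \
                + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

n_states, n_symbols = 3, 16                   # three hidden states, as tested
pi = np.full(n_states, 1 / n_states)          # untrained placeholder parameters
A = np.full((n_states, n_states), 1 / n_states)
B = np.full((n_states, n_symbols), 1 / n_symbols)
print(log_likelihood(symbols, pi, A, B))      # classify by max over class models
```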
The paper describes the main ways of organizing modern satellite communication systems and the methods of synchronization and transmission of service information used in them, as well as the frame synchronization mechanism from the viewpoint of noise immunity. Based on this analysis, a block diagram of a simulation model is proposed for studying the influence of unintentional interference on the channels of modern satellite communication systems. The proposed model of the impact of non-stationary interference on a satellite communication channel takes into account the effect of interference on symbol and frame synchronization and on the mechanisms for extracting frame boundaries, as well as the effect of modern error correction codes. The model allows evaluating the impact of non-stationary interference on both the information and the service parts of the frame of modern broadband satellite communication systems. As an indicator of the noise immunity of a satellite communication channel, the probability of frame loss was used, i.e., frame skipping due to a violation in the frame synchronization system, incorrect allocation of frame boundaries, or the presence of errors in the frame that were not repaired by corrective codes. Using this model, we studied the effect of non-stationary interference of various durations on the information and service parts of the frame and compared the results with the effect of white Gaussian noise. It is shown that non-stationary interference in the form of short noise pulses, which does not affect the information part of the frame because the errors are repaired by correction codes, can significantly reduce the reception quality due to disruption of frame synchronization and distortion of service information about the signal-code structure and frame length.
The main factors that expand the capabilities and increase the effectiveness of network reconnaissance in identifying the composition and structure of client-server computer networks, owing to the stationarity of their structural and functional characteristics, are analyzed. Based on the revealed features of protecting client-server computer networks at the present stage, which rest on the principles of spatial security provision and on the formalization and introduction of prohibiting regulations, the urgent problem of dynamic management of the structural and functional characteristics of client-server computer networks operating under network reconnaissance is substantiated.
A mathematical model that allows finding optimal modes of dynamic configuration of the structural and functional characteristics of client-server computer networks for various situations is presented, and calculation results are given. An algorithm is presented that makes it possible to solve the problem of dynamic configuration of the structural and functional characteristics of a client-server computer network, which reduces the time during which data obtained by network reconnaissance remains reliable. The results of practical tests of software developed on the basis of the dynamic configuration algorithm for client-server computer networks are shown. The obtained results show that the use of the presented solution for the dynamic configuration of client-server computer networks makes it possible to increase the effectiveness of protection by changing the structural and functional characteristics of client-server computer networks within several subnets, without breaking critical connections, over time intervals that are adaptively changed depending on the functioning conditions and the attacker's actions.
The novelty of the developed model lies in the application of the mathematical apparatus of the theory of Markov random processes and the solution of the Kolmogorov equations to justify the choice of dynamic configuration modes for the structural and functional characteristics of client-server computer networks. The novelty of the developed algorithm is the use of the dynamic configuration model of the structural and functional characteristics of client-server computer networks for the dynamic control of those characteristics under network reconnaissance.
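A minimal sketch of the underlying computation, with a hypothetical three-state configuration graph and assumed transition intensities: integrating the Kolmogorov forward equations dp/dt = p·Q gives the time evolution of the probabilities of the network configurations, from which an optimal reconfiguration mode could be chosen.

```python
import numpy as np
from scipy.integrate import solve_ivp

Q = np.array([[-0.5,  0.4,  0.1],   # transition intensities (illustrative)
              [ 0.3, -0.4,  0.1],
              [ 0.2,  0.2, -0.4]])
p0 = np.array([1.0, 0.0, 0.0])      # start in the nominal configuration

# Kolmogorov forward equations: dp/dt = p Q
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 20.0), p0, dense_output=True)
print(sol.y[:, -1])                 # approaches the stationary distribution
```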
Recently, there has been a rising interest in small satellites such as CubeSats in the aerospace community due to their small size and cost-effective operation. It is challenging to ensure precision performance for satellites with minimum cost and energy consumption. To support maneuverability, the CubeSat is equipped with a propellant tank, in which the fuel must be maintained in the appropriate temperature range. Simultaneously, the energy production should be maximized, such that the other components of the satellite are not overheated. To meet the technological requirements, we propose a multicriteria optimal control design using a nonlinear dynamical thermal model of the CubeSat system. First, a PID control scheme with an anti-windup compensation is employed to evaluate the minimum heat flux necessary to keep the propellant tank at a given reference temperature. Secondly, a linearization-based controller is designed for temperature control. Thirdly, the optimization of the solar cell area and constrained temperature control is solved as an integrated nonlinear model predictive control problem using the quasilinear parameter varying form of the state equations. Several simulation scenarios for different power limits and solar cell coverage cases are shown to illustrate the trade-offs in control design and to show the applicability of the approach.
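A minimal sketch of the first control step described above, not the authors' model: a PID loop with back-calculation anti-windup holding a first-order thermal plant at a reference temperature under a heater power limit. All plant parameters, gains, and temperatures are illustrative assumptions.

```python
kp, ki, kd = 8.0, 0.6, 2.0
kaw = 0.5                         # anti-windup back-calculation gain
u_max, dt = 5.0, 0.1              # heater power limit [W], time step [s]
T, T_ref, T_env = 250.0, 270.0, 250.0
integ, prev_err = 0.0, T_ref - T

for _ in range(2000):
    err = T_ref - T
    u_raw = kp * err + ki * integ + kd * (err - prev_err) / dt
    u = min(max(u_raw, 0.0), u_max)           # saturate the heat flux
    integ += (err + kaw * (u - u_raw)) * dt   # bleed integrator when saturated
    prev_err = err
    T += dt * (0.02 * (T_env - T) + 0.1 * u)  # first-order thermal dynamics

print(round(T, 2))                # settles near the 270 K reference
```

The back-calculation term keeps the integrator from winding up while the heater is pinned at its power limit, so the loop hands over smoothly to linear PID action as the tank approaches the reference temperature.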
Reliability, survivability, and stability analysis tasks are typical not only for telecommunications, but also for systems whose components are subject to one or more types of failures, such as transport, power, and mechanical systems, integrated circuits, and even software. The logical approach involves the decomposition of the system into a number of small functional elements; within telecommunications networks these are usually individual network devices (switches, routers, terminals, etc.), as well as the communication lines between them (copper-core, fiber-optic, and coaxial cables, wireless media, and other transmission media). Functional relationships also define logical relationships between the failures of individual elements and the failure of the network as a whole. The assumption is also used that device failures are relatively less likely than communication line failures, which allows assuming absolute stability (reliability, survivability) of these devices. A model of a telecommunication network in the form of the generalized Erdos–Renyi model is presented. In the context of the stability of a telecommunications network, the analyzed property is understood as the connectivity of the network in one form or another. Based on the concept of stochastic connectivity of a network, as the correspondence of a random graph to the connectivity property between a given set of vertices, three connectivity measures are traditionally distinguished: two-pole, multi-pole, and all-pole. The procedures for forming sets of paths and trees of arbitrary structure for networks are presented, as well as their generalization, multipolar trees. It is noted that multipolar trees are the most general concept with respect to simple chains and spanning trees. Solving such problems will allow us to proceed to calculating the probability of connectivity of graphs for various connectivity measures.
Modern enterprises apply network technologies to their automated industrial control systems. Along with the advantages of this approach, the risk of network attacks on automated control systems increases significantly. Hence there is an urgent need to develop automated monitoring means capable of detecting unauthorized access and responding to it adequately. The enterprise security system should take into account the interaction of components and involve the ability of self-renewal throughout the entire life cycle.
Partial models of the functioning of automated control systems of an enterprise under information threats are proposed, taking into account the parameters of the states of the enterprise at its different levels, the realization of network threats, counteraction measures, etc. For each model it is possible to form the state space of a part of the enterprise and, on the basis of a series of tests, to define state transition parameters, thus enabling model representation in the form of a marked graph. The sequences of states possess the properties of semi-Markov processes, so the semi-Markov apparatus is applicable. Probabilities of state transitions can be computed as a result of the numerical solution of the corresponding system of integral equations by the Lagrange-Stieltjes technique.
Application of the semi-Markov apparatus to the detection of non-authorized activities during data transfer under a network scanning attack proved the validity of the above methods. In addition, its application results in the creation of a set of security assurance measures to be undertaken. Once the state transition probabilities are obtained, the development of an integral security indicator becomes possible, thus contributing to the enhancement of enterprise performance.
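A minimal sketch of one standard semi-Markov computation, not the Lagrange-Stieltjes solution of the integral equations described above: given an assumed embedded-chain transition matrix P and assumed mean sojourn times m, the time-stationary state probabilities follow from pi_i being proportional to nu_i * m_i, where nu is the stationary distribution of the embedded chain.

```python
import numpy as np

P = np.array([[0.0, 0.7, 0.3],    # normal   -> {scanned, attacked} (assumed)
              [0.6, 0.0, 0.4],    # scanned  -> {normal, attacked}
              [0.9, 0.1, 0.0]])   # attacked -> {normal, scanned}
m = np.array([10.0, 2.0, 1.0])    # mean sojourn times per state (assumed)

w, v = np.linalg.eig(P.T)
nu = np.real(v[:, np.argmin(np.abs(w - 1))])
nu /= nu.sum()                    # embedded-chain stationary distribution
pi = nu * m / (nu * m).sum()      # time-stationary state probabilities
print(pi.round(3))
```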
A problem of reducing a linear time-invariant dynamic system is considered as a problem of approximating its initial rational transfer function with a similar function of a lower order. The initial transfer function is also assumed to be rational. The approximation error is defined as the standard integral deviation of the transient characteristics of the initial and reduced transfer function in the time domain. The formulations of two main types of approximation problems are considered: a) the traditional problem of minimizing the approximation error at a given order of the reduced model; b) the proposed problem of minimizing the order of the model at a given tolerance on the approximation error.
Algorithms for solving approximation problems based on the Gauss-Newton iterative process are developed. At the iteration step, the current deviation of the transient characteristics is linearized with respect to the coefficients of the denominator of the reduced transfer function. Linearized deviations are used to obtain new values of the transfer function coefficients using the least-squares method in a functional space based on Gram-Schmidt orthogonalization. The general form of expressions representing linearized deviations of transient characteristics is obtained.
To solve the problem of minimizing the order of the transfer function within the framework of the least-squares algorithm, the Gram-Schmidt process is also used. The criterion for completing the process is the achievement of a given error tolerance. It is shown that a sequence of process steps corresponding to the alternation of coefficients of the numerator and denominator polynomials of the transfer function provides the minimum order of the transfer function.
The paper presents an extension of the developed algorithms to the case of a vector transfer function with a common denominator. An algorithm is presented with the approximation error defined in the form of a geometric sum of scalar errors. The use of the minimax form for error estimation and the possibility of extending the proposed approach to the problem of reducing the irrational initial transfer function are discussed.
Experimental code implementing the proposed algorithms is developed, and the results of numerical evaluations of test examples of various types are obtained.
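A minimal sketch illustrating only the approximation-error functional used above (the L2 deviation of step responses in the time domain), not the Gauss-Newton iteration itself; both the 4th-order initial transfer function and the 2nd-order candidate reduction are illustrative assumptions.

```python
import numpy as np
from scipy import signal

t = np.linspace(0.0, 20.0, 2001)
G_full = signal.TransferFunction([1.0], [1.0, 3.0, 4.0, 3.0, 1.0])
G_red  = signal.TransferFunction([0.36], [1.0, 1.1, 0.36])  # same DC gain

_, y_full = signal.step(G_full, T=t)
_, y_red  = signal.step(G_red, T=t)
err = np.sqrt(np.trapz((y_full - y_red) ** 2, t))  # integral deviation
print(round(err, 4))
```

In the full algorithm this deviation is linearized with respect to the denominator coefficients of the reduced model at each iteration and minimized by least squares.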
The paper proposes a solution to the problem of selecting the bandwidth capacities of digital communication channels of a transport communication network, taking into account the imbalance of data traffic by priorities. The algorithm for selecting bandwidth guarantees the minimum costs associated with renting digital communication channels with optimal bandwidth, provided that the requirements for the quality of service of protocol data blocks of the first, second, and k-th priority in a transport communication network unbalanced in terms of priorities are met. At the first stage of solving the problem, using the method of Lagrange multipliers, an algorithm for selecting the capacities of digital communication channels for a transport network balanced in terms of priorities was developed. The high performance of this algorithm was ensured by applying algebraic operations on matrices (addition, multiplication, etc.). At the second stage, using the generalized Lagrange multipliers method, we compared the conditional extrema of the cost function for renting digital communication channels for single active quality of service requirements for protocol data blocks, for all possible pairs of active requirements, for all possible triples of active requirements, and so on, up to the case when all the quality of service requirements for protocol data units are active simultaneously. At the third stage, an example of selecting the bandwidth capacities of digital communication channels of a priority-unbalanced transport network consisting of eight routers serving protocol data blocks of three priorities was considered. At the fourth stage, the effectiveness of the developed algorithm was estimated by simulation modeling. To this end, in the environment of the network simulator OMNeT++, the priority-unbalanced transport communication network consisting of eight routers connected by twelve digital communication channels with optimal bandwidth was investigated.
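A minimal sketch of the classical single-priority capacity assignment by Lagrange multipliers (Kleinrock's square-root rule), analogous in spirit to the first stage described above but not the authors' multi-priority algorithm; the loads, rent costs, and delay target are illustrative assumptions.

```python
import numpy as np

lam = np.array([120.0, 80.0, 200.0, 60.0])   # channel loads [packets/s]
cost = np.array([1.0, 2.0, 1.5, 3.0])        # rent cost per unit capacity
T_target, gamma = 0.05, lam.sum()            # mean delay bound [s], total load

# Stationarity of the Lagrangian gives C_i = lam_i + sqrt(beta * lam_i / cost_i),
# and the delay constraint fixes sqrt(beta):
beta_sqrt = np.sqrt(lam * cost).sum() / (gamma * T_target)
C = lam + beta_sqrt * np.sqrt(lam / cost)    # optimal channel capacities

delay = (lam / (C - lam)).sum() / gamma      # resulting mean M/M/1 delay
print(C.round(1), round(delay, 4))           # delay meets the 0.05 s target
```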
The analysis of well-known methods for ensuring IT-security is presented, methods for evaluating security of IT-components and Cloud services in general are considered.
An attempt is made to analyze cloud services not from the commercial position of a popular marketing product, but from the position of system analysis. The previously introduced procedure for IT-component evaluation is not stable, since the end user does not have a 100% guarantee of access to all IT-components, let alone to a remote and uncontrolled Cloud service. A number of reviews point to increased efforts to create a secure network architecture and the ability to continuously monitor deviations from established business goals. In contrast to the Zero Trust and Zero Trust eXtended models, according to which additional security functions are superimposed on existing IT-components, it is proposed to consider the set of IT-components as a new entity – an Information Processing System. This will make it possible to move to formal processes for assessing the degree of compliance with the criteria of standards for both existing and prospective IT-components while ensuring the security of Cloud services.
A new evaluation method is proposed, based on the previously developed hybrid methodology using formal procedures built on two systems of criteria: assessment of the degree of compliance of management systems (based on the ISO/IEC 27001 series) and assessment of functional safety requirements (based on the IEC 61508 and ISO/IEC 15408 series). This method provides reproducible and objective assessments of the security risks of Cloud-based IT-components that can be presented to an independent group of evaluators for verification. The results obtained can be applied in independent assessment, including for critical information infrastructure objects.
One of the important tasks of such theories as pattern recognition theory and information security theory is the task of identifying the terminals of information and telecommunication networks.
The relevance of the topic is due to the need to study methods for identifying computer network terminals and build information security systems based on the knowledge gained.
The main parameters that allow uniquely identifying subscriber terminals in the network are address-switching information, as well as a number of parameters characterizing the software and hardware of the computer system. Based on the obtained parameters, digital fingerprints of subscriber terminals are generated.
The use of anonymous networks by subscriber terminal users, and the blocking of methods for generating and collecting digital fingerprint parameters, make it impossible in some cases to achieve the required degree of identification reliability.
Owing to the peculiarities of digital image formation in modern computer systems, many transformation parameters affect the output graphic primitive, thereby forming a digital fingerprint of the subscriber terminal; this fingerprint depends on the placement of samples within a pixel, on the algorithms used to calculate the degree of pixel influence, and on the image smoothing procedures used in the graphics subsystem.
In this paper, an original model of image formation by means of a subscriber terminal web browser is proposed, which makes it possible to increase the reliability of identification under conditions of anonymization of users of information and telecommunication networks.
Features of digital image formation in the graphics subsystems of modern computer systems are substantiated. These features allow identification under a priori uncertainty regarding the modes and parameters of information transfer.
The paper considers issues of brain-computer interface applications in assistive technologies, in particular for the control of robotic devices. Noninvasive brain-computer interfaces are built based on the classification of electroencephalographic signals, which reflect bioelectrical activity in different zones of the brain. After training, such brain-computer interfaces are able to decode electroencephalographic patterns corresponding to different imaginary movements and patterns corresponding to different audio-visual stimuli. The requirements which must be met by brain-computer interfaces operating in real time, so that biological feedback is effective and the user's brain can correctly associate responses with events, are formulated. The process of electroencephalographic signal processing in a noninvasive brain-computer interface is examined, including spatial and temporal filtering, artifact removal, feature selection, and classification. Descriptions and a comparison of classifiers based on support vector machines, artificial neural networks, and Riemannian geometry are presented. It is shown that such classifiers can provide accuracy at the level of 60-80% for the recognition of imaginary movements of two to four classes. Examples of the application of the classifiers to control robotic devices are presented. The approach is intended both to help healthy users perform daily functions better and to increase the quality of life of people with movement disabilities. Tasks for increasing the efficiency of technology application are formulated.
The issue of Internet of Things security is considered; it does not belong to the traditional problem of cybersecurity, as it concerns local or distributed control and/or monitoring of the state of physical systems connected via the Internet. The architecture of a Supervisory Control and Data Acquisition (SCADA) system was considered in the authors' previous studies. For SCADA system implementations, vulnerabilities and various options of cyberattacks on them were analyzed. As an example, a case study based on attack trees was considered, and the obtained results were summarized and visualized.
The purpose of the paper is to compare the new industrial technology of the Internet of Things (the Industrial Internet of Things) with the previously studied traditional SCADA systems.
The Industrial Internet of Things is a network of devices which are connected through communication technologies. Some of the most common security issues for the Industrial Internet of Things are presented in this paper.
A brief overview of the structure of the Industrial Internet of Things is presented; the basic principles of security and the main problems that can arise with Internet of Things devices are described. Based on research and analysis of the risk of threats in the field of the Industrial Internet of Things, a specific case of destructive impact is considered, with attack tree analysis as the main approach. A description of the creation of attack tree leaf node values and an analysis of the results are provided. An analysis of the scenario of changing an electronic record to increase the rate of an infusion pump, using a complexity index, is performed. The consequences are analyzed in comparison with the previous study of SCADA systems, and respective conclusions are made.
We consider one of the methods for communication network structure analysis and synthesis, based on the simplest approach to connectivity probability calculation: a full search of the typical network states. In this case, the typical states of the network are understood as the events of network graph connectivity and disconnection, which correspond to simple chains and sections of the graph. Despite the significant drawback of the typical state enumeration method, namely its considerable computational complexity, it is quite popular at the stage of debugging new analysis methods. In addition, on its basis it is possible to obtain boundary estimates of the network connectivity probability. Thus, the calculation of the Esary–Proschan bounds uses the full set of disconnected (for the upper bound) and connected (for the lower bound) communication network states. These bounds are based on the statement that the network connectivity probability under the same conditions is higher (lower) than that of a network composed of a serial (parallel) connection of the complete set of independent disconnected (connected) subgraphs. The calculation of the Litvak–Ushakov bounds uses only edge-disjoint sections (for the upper bound) and connected subgraphs (for the lower bound), i.e., subsets of elements such that no element occurs in two of them. These bounds take into account the well-known natural monotonicity property: network reliability decreases (increases) with a decrease (increase) in the reliability of any element. From a computational viewpoint, the Esary–Proschan bounds have a major drawback: they require the enumeration of all connected subgraphs to compute the upper bound and of all minimal cuts for the lower one, which is in itself non-trivial. The Litvak–Ushakov bounds are devoid of this drawback: when calculating them, we can stop at any step of the search over the sets of independent connected and disconnected graph states.
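A minimal sketch of the full state-enumeration approach described above: the exact two-pole (s-t) connectivity probability of a small network, summed over all 2^m edge states. The five-edge "bridge" topology and the per-link availability are illustrative assumptions.

```python
from itertools import product

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # bridge network
p = 0.9                                            # per-link availability

def connected(up, s=0, t=3):
    """Depth-first search over the operating edges only."""
    seen, stack = {s}, [s]
    while stack:
        v = stack.pop()
        for a, b in up:
            for u, w in ((a, b), (b, a)):
                if u == v and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return t in seen

prob = 0.0
for state in product([0, 1], repeat=len(edges)):   # all typical states
    up = [e for e, s in zip(edges, state) if s]
    weight = p ** sum(state) * (1 - p) ** (len(edges) - sum(state))
    if connected(up):
        prob += weight
print(round(prob, 6))   # exact s-t connectivity probability
```

The exponential growth of the state space with the number of edges is exactly the drawback noted above, which motivates the boundary estimates.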
The paper considers methods for comparing objects' images represented by sets of points using methods of computational topology. Algorithms for constructing sets of real-valued barcodes for the comparison of objects' images are proposed. The determination of barcodes of object forms allows us to study continuous and discrete structures, which makes it useful in computational topology. A distinctive feature of the proposed comparison methods, compared with the methods of algebraic topology, is that they provide more information about the objects' form. An important area of application of real-valued barcodes is the study of invariants of big data. The proposed method combines the technology of barcode construction with embedded non-geometrical information (color, time of formation, pen pressure) represented as functions on simplicial complexes. To do this, barcodes are extended with functions defined on simplexes to represent heterogeneous information. The proposed structure of extended barcodes increases the effectiveness of persistent homology methods in image comparison and pattern recognition. A modification of the Wasserstein method for finding the distance between images is proposed, which introduces non-geometric information about the distances between images through the inequalities of the functions of the source and terminal images of the corresponding simplexes. The geometric characteristics of an object can change under diffeomorphic deformations; the proposed algorithms for the formation of extended image barcodes are invariant to rotation and translation transformations. We also consider a method for determining the distance between sets of points representing curves, taking into account the orientation of the curves' segments. The article is intended for a reader who is familiar with the basic concepts of algebraic and computational topology, the theory of Lie groups, and diffeomorphic transformations.
Assessing the security of digital radio networks under destructive impacts is an important task. However, such an assessment has not been carried out for ALOHA-type random multiple access to the medium in digital radio networks. The paper presents an analytical model of ALOHA-type random multiple access to the medium of digital radio networks under destructive impacts. In this model, a complex measure, including the probability of a successful voice connection, the transfer of a service command, a text message or a multimedia file, the degree of filling, and the degree of overflow of digital radio network data packets, serves as the resulting indicator for evaluating the effectiveness of random multiple access to the medium. The new complex indicator of the probability of a successful voice connection, the transfer of a service command, a text message or a multimedia file takes into account the known probabilities of successful delivery of data packets, of the creation of a collision, and of a free channel, as well as new average transmission times for a sequence of data packets and for a collision formed during such transmission. The new indicators are the degree of filling and the degree of overflow of data packets of the digital radio communication network. For saturated and supersaturated data networks, they determine how close to (or far from) its maximum the probability of a successful voice connection, the transfer of a service command, a text message or a multimedia file is. The model takes into account the potential destructive impacts of an attacker by refining the analytical expressions for the known probabilistic and the new temporal characteristics. First, a quantitative relationship is established between the probability of a successful voice connection, the transfer of a service command, a text message or a multimedia file and the average duration of data channel collisions. Secondly, to guarantee the disabling of a digital radio network with ALOHA-type random multiple access to the medium, an attacker must carry out a destructive impact constantly. The results are applied in the design of digital radio communication networks operating under destructive impacts, as well as in the development of automatic systems for optimizing the operation of digital radio communication networks and protecting them from such impacts.
The paper considers research on the reliability of a combinatorial-metric algorithm for the recognition of multidimensional group point objects in a hierarchically organized feature space. The nature of the change of the reliability indicator is examined, as an example, using multilevel descriptions of simulated and real objects, under the condition that the recognition results obtained at one hierarchy level are used as input data at the next level.
The a priori uncertainty of the viewing angle, the incompleteness of composition, and the coordinate noise of objects determine the combinatorial procedures for the quantitative estimation of the proximity of a multidimensional group point object (GPO), representing the object of recognition, to a particular class.
The stability of the recognition algorithm is achieved through the possibility of changing the strategy of making a classification decision. For this purpose, we use the representation of a group point object at the lowest level of the hierarchy in the form of a sample, a composition of sample elements, or a complex a priori indicator. In order to increase the recognition accuracy, it was proposed to use the search of recognition results at the lower levels of the hierarchy. Experimental dependences of the a priori and a posteriori reliability indicators for various measurement conditions and states of the recognition objects are provided in the paper.
The authors developed an approach to the comparative analysis of scientific journal collections based on the analysis of the co-authorship graph and a text model. The use of time series of co-authorship graph metrics allowed the authors to analyze trends in the development of journal authorship. The text model was built using machine learning techniques. The journals' content was classified to determine the degree of authenticity of various journals and of different issues of a single journal via the text model. The authors developed a Content Authenticity Ratio metric, which allows quantifying the authenticity of journal collections in comparison. A comparative thematic analysis of the journal collections was carried out using a topic model with additive regularization. Based on the created topic model, the authors constructed thematic profiles of the journal archives in a single thematic basis. The developed approach was applied to the archives of two journals on rheumatology for the period 2000–2018. As a benchmark for comparing the co-authorship metrics, public data sets from the SNAP research laboratory at Stanford University were used. As a result, the authors adapted existing examples of effectively functioning author collaborations in order to improve the work of journal editorial staff. A quantitative comparison of large volumes of texts and metadata of scientific articles was carried out. The experiment conducted using the developed methods showed that the content authenticity of the selected journals is 89%, and that co-authorship in one of the journals has a pronounced centrality, which is a distinctive feature of its editorial policy. The clarity and consistency of the results confirm the effectiveness of the proposed approach. The Python code developed in the course of the experiment can be used for the comparative analysis of other collections of journals in Russian.
At present, adequate mathematical tools are not used to analyze the arrangement of components in arrays of naturally ordered data of a different nature, including words or letters in texts, notes in musical compositions, symbols in sign sequences, monitoring data, numbers representing ordered measurement results, and components in genetic texts. Therefore, it is difficult or impossible to measure and compare the order of messages contained in long information chains. The main approaches to comparing symbol sequences use probabilistic models and statistical tools, as well as pairwise and multiple alignment, which makes it possible to determine the degree of similarity of sequences using edit distance measures. The application of pseudospectral and fractal representations of symbolic sequences is somewhat exotic. "The curse of a priori unconscious knowledge" of the obvious orderliness of a sequence should be especially noted, as it is widespread in mathematical linguistics, bioinformatics (mathematical biology), and other similar fields of science. The noted approaches pay almost no attention to the study and detection of patterns in the specific arrangement of all the symbols, words, and components of the data sets that constitute a separate sequence. The object of study in our works is a specifically organized numerical tuple: the arrangement of components (the order) in a symbolic or numerical sequence. The intervals between the closest identical components of the order are used as the basis for the quantitative representation of the chain arrangement. Multiplying all the intervals, or summing their logarithms, yields numbers that uniquely reflect the arrangement of components in a particular sequence. These numbers allow us to obtain a whole set of normalized characteristics of the order, among which are the geometric mean interval and its logarithm. Such characteristics surprisingly accurately reflect the arrangement of the components in symbolic sequences. In this paper, we present an approach for quantitatively comparing the arrangement of arrays of naturally ordered data (information chains) of an arbitrary nature. Measures of similarity/distinction and a procedure for comparing the chain order, based on the selection of a list of subsequences (components) that are equal or similar in their order characteristics, are proposed. Rank distributions are used for faster selection of the list of matching components. The paper presents a toolkit for comparing the order of information chains and demonstrates some of its applications for studying the structure of nucleotide sequences.
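A minimal sketch of the interval-based order characterization described above: distances between nearest identical symbols and the geometric mean interval. The convention that a first occurrence contributes an interval of position + 1 is an assumption for illustration; the authors' exact convention may differ.

```python
import math

def order_intervals(seq):
    """Intervals between nearest identical components of a sequence."""
    last, intervals = {}, []
    for i, s in enumerate(seq):
        intervals.append(i - last[s] if s in last else i + 1)  # assumed convention
        last[s] = i
    return intervals

seq = "ABRACADABRA"
iv = order_intervals(seq)
g = math.exp(sum(math.log(d) for d in iv) / len(iv))  # geometric mean interval
print(iv, round(g, 3))
```

Summing the logarithms of the intervals, as done here, is numerically safer than multiplying the intervals directly for long chains.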
A frequent problem in the acquisition of traffic flow characteristics is data loss, which leads to unevenly sampled time series. An effective approach to the analysis of uneven data is spectral analysis, which requires a process with a constant sampling interval, obtained, for example, by restoring the missing data, which in turn leads to dating errors. Thus, the main purpose of this study is to develop a method and software for the wavelet analysis of traffic flow characteristics without restoring the missing data.
To analyze and interpret non-stationary uneven time series obtained from traffic monitoring systems, we propose the wavelet transformation method with adjustment of the sampling intervals, which results in a time-frequency domain with a constant sampling interval. Wavelet analysis is applied to the macroscopic traffic flow characteristics.
We developed the software for traffic flow wavelet analysis on the "ITSGIS" intelligent transport geo-information framework using the attribute-oriented approach.
Wavelet analysis of traffic flow characteristics using Morlet wavelets was carried out for data from the city of Aarhus, Denmark. Wavelet spectra and scalograms were constructed and analyzed; general dependencies in the frequency distribution of extremes and differences in spectral power were revealed.
The developed software is being experimentally tested in solving practical problems of municipalities and road agencies in Russia.
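A minimal sketch of the standard Morlet continuous wavelet transform step on synthetic, evenly sampled hourly traffic with a daily cycle; it does not include the sampling-interval adjustment for uneven data proposed above, and all signal parameters are illustrative.

```python
import numpy as np
import pywt

t = np.arange(0, 24 * 14)                       # two weeks, hourly samples
flow = 600 + 250 * np.sin(2 * np.pi * t / 24) \
       + 40 * np.random.default_rng(1).normal(size=t.size)

scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(flow, scales, "morl", sampling_period=1.0)
peak = scales[np.abs(coefs).mean(axis=1).argmax()]
print(coefs.shape, peak)    # dominant scale tracks the 24 h periodicity
```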
As a result of the analysis, it was revealed that social networks (Vkontakte, Facebook), thematic communities in microblogging networks (Twitter), resources for travelers (TripAdvisor), and transport portals (Autostrada) are a source of up-to-date and operational information about the traffic situation, the quality of transport services, and passenger satisfaction with the level of transport services. However, the existing transport monitoring systems do not contain software tools capable of collecting and analyzing traffic information located in the Internet environment. This paper discusses the task of building a system for automatically retrieving and classifying road traffic information from transport Internet portals, and of testing the developed system by analyzing the transport networks of Crimea and the city of Sevastopol. To solve this problem, an analysis of open source libraries for thematic data collection and analysis was carried out. An algorithm for extracting and analyzing texts was developed. A crawler was developed using the Scrapy package in Python3, and user feedback on the state of the transport system of Crimea and the city of Sevastopol was collected from the portal http://autostrada.info/ru. For text lemmatization and vector text transformation, the tf, idf, and tf-idf methods and their implementations in the Scikit-Learn library were considered: CountVectorizer and TfidfVectorizer. For word processing, Bag-of-Words and n-gram methods were considered. In developing the classifier model, the naive Bayes algorithm (MultinomialNB) and the linear classifier model with stochastic gradient descent optimization (SGDClassifier) were used. As a training sample, a corpus of 225,000 labeled texts from the Twitter resource was used. The classifier was trained using a cross-validation strategy and the ShuffleSplit method. Testing and comparison of the classification results were carried out. According to the validation results, the linear model with the n-gram scheme [1, 3] and the TF-IDF vectorizer turned out to be the best. During the approbation of the developed system, reviews related to the quality of the transport networks of the Republic of Crimea and the city of Sevastopol were collected and analyzed. Conclusions are drawn and prospects for the further functional development of the developed tools are defined.
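A minimal sketch of the winning configuration described above (TF-IDF vectorization with an n-gram range of (1, 3) feeding a linear SGD classifier); the tiny inline corpus and labels are placeholders standing in for the 225,000 labeled tweets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline

texts = ["traffic jam on the bridge", "road is clear and fast",
         "huge congestion near the port", "smooth ride, no delays"]
labels = ["negative", "positive", "negative", "positive"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),   # unigrams to trigrams
    ("sgd", SGDClassifier(random_state=0)),            # linear model with SGD
]).fit(texts, labels)

print(clf.predict(["terrible jam near the bridge"]))
```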
The article presents a differential ranging method for locating modern earth stations with narrow radiation patterns. It is proposed to calculate the earth station position by a maximum-likelihood solution of a system of three differential ranging equations using one of the numerical methods. In this case, a supplementary location parameter, obtained by measuring the mutual delay of the earth station signal relayed through a spacecraft in geostationary orbit and through a mobile repeater on an unmanned aerial vehicle, can improve the accuracy of the earth station coordinate estimate.
For the developed method, analytical expressions for the potential accuracy of earth station coordinate calculation are derived on the basis of the Cramér–Rao lower bound.
To measure the positioning accuracy of located emitters, it is suggested to use the error ellipsoid corresponding to the position of the radio emission source in space.
The analysis of typical routes of movement of a repeater on an unmanned aerial vehicle is carried out, and the conclusion is drawn that the best accuracy and the shortest route are achieved simultaneously if the unmanned aerial vehicle follows a circular trajectory around the control area.
The calculation of the potential positioning accuracy of the earth station for an area of 50 by 50 km is performed. It is shown that the error of the estimates obtained as a result of statistical tests does not exceed the semi-major axis of the error ellipsoid calculated using the analytical expressions.
The developed method can be applied in the implementation of software for radio monitoring systems to counteract the illegitimate use of the frequency resource of satellite repeater communication systems.
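A standard difference-ranging (TDOA) measurement model and the corresponding Cramér–Rao bound, given here only as a generic illustration of the quantities discussed above (the authors' exact formulation may differ): x is the emitter position, r_0 and r_i are the repeater positions at successive measurement points, c is the speed of light, and Σ is the covariance of the delay measurement errors.

```latex
\Delta t_i = \frac{\lVert \mathbf{x}-\mathbf{r}_i\rVert-\lVert \mathbf{x}-\mathbf{r}_0\rVert}{c}+\varepsilon_i,
\qquad i = 1,2,3,
\qquad
\operatorname{cov}(\hat{\mathbf{x}}) \succeq \left(\mathbf{J}^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{J}\right)^{-1},
\quad J_{ij}=\frac{\partial\,\Delta t_i}{\partial x_j}.
```

The level surfaces of the quadratic form defined by this bound are the error ellipsoids used above to characterize positioning accuracy.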
Virtual Reality (VR) and Augmented Reality (AR) Head-Mounted Displays (HMDs) have been emerging in the last years and are gaining increased popularity in many industries. HMDs are generally used in entertainment, social interaction, and education, but their use for work is also increasing in domains such as medicine, modeling, and simulation. Despite the recent release of many types of HMDs, two major problems are hindering their widespread adoption in the mainstream market: the extremely high costs and the user experience issues [1]. The illusion of a 3D display in HMDs is achieved with a technique called stereoscopy. Applications of stereoscopic imaging are such that data transfer rates and, in mobile applications, storage quickly become a bottleneck. Therefore, efficient image compression techniques are required. Standard image compression techniques are not suitable for stereoscopic images due to the discrete differences that occur between the compressed and uncompressed images. The issue is that the loss in lossy image compression may blur the minute differences between the left-eye and right-eye images that are crucial in establishing the illusion of 3D perception. However, in order to achieve more efficient coding, there are various coding techniques that can be adapted to stereoscopic images. Stereo image compression techniques that can be found in the literature utilize the discrete wavelet transform and morphological compression algorithms applied to the transform coefficients. This paper provides an overview and comparison of available techniques for the compression of stereoscopic images, as there is still no technique that is accepted as best for all criteria. We want to test the techniques with users who would actually be potential users of HMDs and therefore would be exposed to these techniques. We also focused our research on low-priced, consumer-grade HMDs, which should be available to a larger population.
Nowadays it is highly important for any organization to manage its resources effectively because of an unstable economy. There are two main resources of an organization: human resources and the knowledge that humans have. One of the ways of knowledge management is the formalization of the competence management process by means of information systems. The choice of system depends on future use cases and system requirements. The purpose of this research is to analyze competence management systems based on revealed common use cases and requirements. The result of the paper is a list of revealed common use cases and requirements, which could be useful for developing a new competence management system or for improving and modifying an existing one. Based on the determined use cases and requirements, a reference model of a context-oriented competence management system in expert networks and a context classification for formalizing the current situation have been developed. The developed reference model is oriented toward taking into account the current situation in the expert network. For this purpose, a context model has been proposed that distinguishes the participant context, the active context, and the project context. To estimate the efficiency of the reference model, the task of searching for an expert group with a needed competence set has been considered in the paper. With a small number of experts in the expert network, the classical system shows better results, but with a large number of experts, the proposed system is better.
Fault-tolerant coding methods are often used in the design of reliable and safe components of automatic control systems: both in the data transmission between system nodes and at the level of hardware and software architecture.
Redundant coding is widely used in the control of combinational logic devices. In this case, codes oriented toward error detection rather than error correction are used. Such features of the codes make it possible to implement checkable automation systems with acceptable redundancy that does not exceed the redundancy of duplication.
The paper highlights a method for the synthesis of self-checking combinational devices which makes it possible to take into account the features of the source device architecture, as well as the error detection properties of redundant codes, in solving the problem of synthesizing technical means of diagnosis. The paper gives basic information on the theory of the synthesis of checkable digital systems on the basis of redundant codes with summation.
The basic stages of the analysis of the topologies of diagnosed objects are determined, with the selection of groups of outputs — groups of structurally and functionally symmetrically independent device outputs. Formulas are given to determine the presence or absence of a symmetrical dependence of the outputs of the diagnosed object. An example illustrating the calculation process is given. The main stages of the analysis of the application of redundant codes to error detection on functionally symmetric dependent outputs are formulated. An algorithm for the synthesis of self-checking combinational devices, taking into account the structural features of the diagnosed object and the properties of the redundant codes, is proposed.
The article deals with the synthesis of a tracking control system for a nonlinear plant functioning under bounded external disturbances which are not available for measurement. The proposed solution is a robust modification of the backstepping approach with a similar controller design structure. The main changes are based on plant model transformations that make it possible to use only one filter in the control system and, along with it, an auxiliary loop method for disturbance evaluation and compensation. High-gain observers are used to measure the unknown signals together with their derivatives. The convergence of tracking errors and observation errors with adjustable accuracy within a finite transient time is proved. The efficiency of the algorithm is demonstrated by computer modeling in comparison with an analogue.
When solving problems related to the analysis and synthesis of communication networks for stability, a special place is occupied by simple and easily understood indicators that are weakly linked to the classical concept of the probability of leaving the operable state. Such deterministic indicators of stability (connectivity, pairwise connectivity, the linear connectivity functional, the number of spanning trees) allow, albeit very approximately, solving a complex of tasks related to the assessment of the reliability and survivability of complex networks. Due to the rather simple analytical form of the linear connectivity functional, the analytical method presented in this work can be used for the synthesis of structures. In the general formulation, the synthesis of connected graphs is stated as the maximization of the linear connectivity functional over all possible graphs with a given number of edges and vertices and with fixed values of their weighting coefficients.
Modern video coding standards have high coding efficiency, but encoding performance has to be improved to meet the demands of growing multimedia applications. The paper deals with the entropy encoding methods and algorithms in the video coding standards H.264/AVC and H.265/HEVC. Context-based Adaptive Variable Length Coding (CAVLC) for the H.264/AVC standard was originally designed for lossy video coding and, as such, does not yield adequate performance for lossless video coding. Context-Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the H.265/HEVC standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize, which limits its throughput. Accordingly, during the standardization of entropy coding for HEVC, both coding efficiency and throughput were considered. Based on an analysis of their advantages and disadvantages, an entropy coding method using enumerative coding with a hierarchical approach is proposed. The proposed algorithm combines the Context-Adaptive Binary Arithmetic Coding algorithm and an enumerative coding algorithm with a hierarchical approach. The proposed algorithm was tested in the Visual C++ development environment on various test video sequences. The results of the experiments showed greater efficiency in the coding of multimedia data (the proposed method reduces the storage volume by up to 15% on average compared to the traditional CABAC method), while requiring a longer coding time (approximately twice as long). The proposed method can be recommended for use in telecommunication systems for the storage, transmission, and processing of multimedia data where a high degree of compression is the primary requirement.
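A minimal sketch of the enumerative-coding idea referenced above (Cover's classical scheme, not the authors' hierarchical algorithm): a fixed-weight binary word is ranked to a single integer index, which is representable in fewer bits than the raw word.

```python
from math import comb

def enum_rank(bits):
    """Lexicographic index of a binary word among words of equal weight."""
    idx, ones = 0, bits.count(1)
    for i, b in enumerate(bits):
        n = len(bits) - i - 1       # positions remaining after this one
        if b == 1:
            idx += comb(n, ones)    # words with a 0 here precede this one
            ones -= 1
    return idx

word = [0, 1, 1, 0, 1, 0, 0, 1]
print(enum_rank(word))              # index among C(8, 4) = 70 such words
```

Since the index fits in ceil(log2(C(n, k))) bits, enumerative coding approaches the combinatorial entropy of the source, which is the property the hierarchical scheme above exploits.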
The paper considers algorithms for the objective evaluation of speech quality, based on the measurement of the dynamic and static characteristics of speech signals at the source codec output. The functional scheme for carrying out the experimental research is substantiated. The results of the analysis of the correlation between objective and subjective evaluations of speech quality are given. Modifications of the objective quality assessment are proposed on the basis of the correlation of the excitation of the MESC spectrum and of a modification of the exponent based on the calculation of the sensation function of the spectral dynamics of MFOSD. An algorithm for regression curve formation is proposed, which allows transforming the objective evaluation to the scale of the subjective evaluation of speech quality.
Based on the most accurate modifications of the speech quality assessment indicators for reconstructed speech signals, a complex algorithm for objective instrumental evaluation of speech quality is proposed for the case when broadband and low-frequency stationary and nonstationary acoustic interference acts on the microphone. It is shown that the complex algorithm makes it possible to obtain an objective evaluation of speech quality according to GOST R 50840-95 with an average error of no more than 0.35 points for signal-to-noise ratios from 30 dB down to -10 dB.
The article describes a computer model of using electromagnetic waves with wavelengths from 0.1 mm to 1 mm for the detection of internal defects in products made by additive technology.
Additive technology and 3D printing currently use materials transparent to terahertz waves (frequency 3·10^11-3·10^12 Hz, wavelength 0.1-1 mm). At the same time, defects in 3D-printed products have sizes comparable to the terahertz wavelength. Thus Fresnel diffraction can be observed when a product containing such defects is illuminated by monochromatic millimeter waves.
Thereby the simulated diffraction method can be applied to quality checking of 3D-printed products. The article describes the checking scheme, the diffraction pattern modeling algorithm using the Rayleigh – Sommerfeld integral, and the computer program implementing this algorithm. The determination of sizes and positions of defects in products from diffraction patterns is shown.
The proposed diffraction method is fully automated and low-cost, uses safe electromagnetic radiation and can compete with tomography methods.
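To illustrate the modeling step, the sketch below numerically evaluates the first Rayleigh – Sommerfeld integral for a one-dimensional slit containing an opaque defect. All geometric parameters are illustrative assumptions rather than values from the article, and the 1-D prefactor is only indicative of the 2-D kernel.

```python
import numpy as np

# Illustrative sketch (not the article's program): direct numerical evaluation
# of the first Rayleigh-Sommerfeld diffraction integral for a 1-D slit with an
# opaque defect, at a terahertz wavelength.
wavelength = 0.3e-3            # 0.3 mm, inside the stated 0.1-1 mm band
k = 2 * np.pi / wavelength
z = 50e-3                      # aperture-to-screen distance, 50 mm

x0 = np.linspace(-5e-3, 5e-3, 2000)        # aperture coordinates, 10 mm slit
dx0 = x0[1] - x0[0]
aperture = np.ones_like(x0)
aperture[np.abs(x0 - 1e-3) < 0.25e-3] = 0  # 0.5 mm opaque defect at x = 1 mm

x = np.linspace(-10e-3, 10e-3, 1000)       # observation-plane coordinates
r = np.sqrt(z**2 + (x[:, None] - x0[None, :])**2)
# RS-I kernel with obliquity factor z/r: (z / (i*lambda)) * exp(i*k*r) / r**2
field = (z / (1j * wavelength)) * (aperture * np.exp(1j * k * r) / r**2 * dx0).sum(axis=1)
intensity = np.abs(field)**2   # the defect shows up as a shifted diffraction signature
```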
The article describes a tool software complex that allows one to build, execute and integrate simulation models of the functioning of space systems' onboard equipment. It is based on the reuse technology defined by the international Simulation Model Portability (SMP2) standard. Along with the standard rules for building integrable models, we have designed additional original tools of information-graphical and intellectual modeling. In this way, we provide graphical construction of onboard systems' architecture models, specify the modes of model functioning and determine the options of command execution by onboard equipment.
This work is part of the creation of software for a problem-oriented simulation modeling infrastructure in the space industry. The software complex will allow designers not only to build their own onboard systems' models, but also to unite simulators of equipment produced by different companies and to run simulation tests for the preparation and analysis of technical projects. Our approach provides economic and technological advantages for the development of the space industry's knowledge-intensive production.
We consider the problem of community detection for a graph which is a fragment of the academic Web. The nodes of the graph are the sites of scientific organizations, and its arcs are hyperlinks. We propose a new approach based on the methods of coalition game theory to derive a Nash-stable coalition partition. The partition is determined by a preference function defined for any pair of vertices in the graph. The problem of finding a stable partition is connected with finding a maximum of a potential function. An algorithm for finding a stable partition is presented and its complexity is evaluated. The proposed method was compared with two well-known community detection methods. The efficiency of the new method is demonstrated on the fragment of the Web which consists of the official sites of the Siberian and Far East branches of RAS.
The paper offers an approach for assessing the cyber-resilience of computer networks based on analytical simulation of computer attacks using the stochastic networks conversion method. The concept of cyber-resilience of computer networks is justified. The mathematical foundations of its assessment, which allow calculating cyber-resilience indices by means of analytical expressions, are considered. The cyber-resilience serviceability coefficient is proposed as the key such indicator. The considered approach assumes the creation of analytical models of cyber-attacks, built by the method of stochastic networks conversion. The time distribution function and the average time of cyber-attack implementation are the simulation results; these estimates are then used to compute the cyber-resilience indices. Experimental results of the analytical simulation are given, showing that the offered approach has rather high accuracy and stability of the obtained solutions.
The article presents a digital audio watermarking method for air audio data transmission. The digital watermark occupies the whole frequency range of the audio signal and encodes one bit of information. A decision about the transmitted bit is based on the sign of the center value of the cross-correlation function. Two methods to design the digital audio watermark are presented. The low complexity of the presented digital audio watermarking methods makes them suitable for use in smartphones. The method can be used for digital audio watermarking of both speech and music. Information is embedded in the frequency domain of a host audio signal by amplitude modulation of its frequency constituents. The paper includes results of simulation modeling and natural experiments.
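A minimal sketch of the underlying idea follows. It assumes an informed detector that knows the host frame and the secret modulation pattern, which is a simplification for illustration, not a reproduction of the presented methods; the decision by the sign of the correlation value matches the decision rule described above.

```python
import numpy as np

# Hedged sketch: one watermark bit embedded by amplitude modulation of
# spectral components, detected by the sign of the correlation between the
# received spectral envelope and the known modulation pattern.
rng = np.random.default_rng(7)

def embed_bit(host, bit, alpha=0.05):
    spec = np.fft.rfft(host)
    pattern = rng.choice([-1.0, 1.0], size=spec.size)   # shared secret pattern
    spec *= 1.0 + alpha * (1 if bit else -1) * pattern  # modulate magnitudes
    return np.fft.irfft(spec, n=host.size), pattern

def detect_bit(received, host, pattern):
    ratio = np.abs(np.fft.rfft(received)) / (np.abs(np.fft.rfft(host)) + 1e-12)
    corr = np.dot(ratio - ratio.mean(), pattern)        # center correlation value
    return int(corr > 0)                                # decision by sign

host = rng.standard_normal(4096)            # stand-in for a speech/music frame
marked, pattern = embed_bit(host, bit=1)
assert detect_bit(marked, host, pattern) == 1
```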
Currently, crowd computing is gaining popularity. However, the quality of results obtained by means of crowd computing is often unpredictable, which limits the practical applicability of this technology. Therefore, systematization of information about modern methods for quality control in crowd computing is an important task that can pave the way for new research efforts in this area and thereby widen the scope of its applicability. The paper discusses the results of a systematic literature review of journal articles from the ScienceDirect and IEEE Xplore bibliographic databases published after 2012. The paper also identifies the main directions in crowd computing quality control and the corresponding models and assumptions. In particular, it shows that most scientific attention is concentrated around consensus methods and game-theoretic methods.
The effectiveness of practical implementations of integration methods for ordinary differential equations is studied. The algorithm implemented in the program realization of the Dormand–Prince method (one of the most popular MATLAB built-in integration procedures, «ode45») is analyzed. Structural methods for partitioned systems of ordinary differential equations are presented. They demand fewer computations per step than the Dormand–Prince method used in ode45. The structural methods are implemented on the basis of the same algorithmic and programming core as ode45 to provide a more objective comparison of the considered methods' effectiveness. For several test problems, better performance (in terms of the global error to computational cost ratio) of the considered structural methods compared to the «classical» Runge–Kutta methods is demonstrated.
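For reference, the Dormand–Prince pair used in ode45 is also available as scipy's RK45, which makes it easy to reproduce the kind of error-versus-cost measurement discussed. The harness below is purely illustrative and does not implement the structural methods themselves.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative harness: global-error / cost trade-off of the Dormand-Prince
# pair (scipy's RK45, the same pair as MATLAB's ode45) on a test problem
# with a known exact solution.
def rhs(t, y):                      # y'' = -y written as a first-order system
    return [y[1], -y[0]]

exact = lambda t: np.cos(t)        # y(0) = 1, y'(0) = 0  ->  y = cos(t)

for rtol in (1e-3, 1e-6, 1e-9):
    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], method="RK45",
                    rtol=rtol, atol=rtol)
    err = abs(sol.y[0, -1] - exact(20.0))
    print(f"rtol={rtol:.0e}  rhs evaluations={sol.nfev:5d}  global error={err:.2e}")
```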
This article considers a model of the impact of non-stationary interference on satellite communications channels of the DVB-S and DVB-RCS standards, as well as on channels with frequency-hopping spread spectrum (FHSS). The effects of unintentional non-stationary interference produced by stationary and mobile interference sources of the same average power are compared. The bit error probability is used as the measure of noise immunity of satellite communications channels. The article introduces the notion of the time coefficient of interference existence, which characterizes the degree of concentration of the interference energy within a certain time region of the desired signal; the bit error probability depends on this coefficient. For a certain range of the time coefficient of non-stationary interference existence and low signal-to-noise ratios, non-stationary interference can affect FHSS data links more severely than continuous interference, increasing the bit error probability.
This paper provides an analysis of the present state in the field of protection against false information in computer networks and formulates current problems related to this protection. An approach to assessing protection activities on the basis of the Markov chain of the disinformation process is proposed. The architecture of a future system of data analysis is described. It implies enhanced methods of text trustworthiness analysis. The proposed complex approach, based on the known and suggested methods, enables detecting false information in computer networks promptly. Furthermore, the proposed method can be used for countering terrorist activities and cybercrimes in order to search for network resources which may be involved in unlawful activities.
The purpose of this paper is to develop an algorithm for analytical design of a consecutive compensator for a control system with delay, based on modification of typical polynomial dynamic models. A formula relating the characteristic frequency and the cutoff frequency of the open-loop transfer function of the desired polynomial dynamic model was derived. Using this formula, the polynomial models were modified to take into account the value of the plant's delay element.
Control of a plant with delay using the consecutive compensator has several advantages: it requires a minimum amount of measured data and eliminates the need to introduce an observer; there is no problem of non-zero initial conditions, which may arise during a short-term disruption of the normal functioning of the system; and the construction procedure of the consecutive compensator is simple for both SISO and MIMO systems.
The article presents a methodology for substantiating the requirements for the technical vision system of a robotic complex. The technical vision system of a robotic complex is viewed as a combination of two subsystems: measurement and recognition. To implement the methodology, we developed: methods for calculating partial optimality criteria for substantiating the technical requirements and evaluating the search area of the optimal values of the characteristics of the measuring instruments of the technical vision system; a recursive procedure for choosing the optimal values of the measuring instruments' characteristics; and a trade-off scheme for evaluating the optimal technical requirements for advanced measurement instruments of the technical vision system under different technical and economic conceptions.
The search for the optimal solution is performed according to partial optimality criteria: recognition efficiency, and the cost and risks of creating measuring instruments. For the recursive procedure, based on the formulated assumptions and assertions, a criterion providing the search for Pareto-optimal solutions was synthesized. The developed methodology takes into account the existing (most suitable) technical and economic conceptions of creating a robotic complex while choosing trade-offs.
The article presents a vector barycentric method for solving the internal problem of electrodynamics, i.e. solving Maxwell's equations or the respective wave equations in a bounded computational domain with prescribed boundary conditions. The developed method belongs to the direct methods for solving boundary value problems of mathematical physics, the basis for which was laid by the results of V. Ritz, I.G. Bubnov and B.G. Galerkin. The basic idea of the method lies in approximating the vector potential by polynomials of the Lagrange type. The approximating polynomial is formed in the barycentric coordinate system for the entire region of analysis as a whole, without partitioning it into elementary sub-areas. It is assumed that the region of analysis has a piecewise linear boundary, and the dimension of the barycentric coordinate system is determined by the number of vertices of the analyzed region. The vector barycentric method is implemented both in the frequency and time domains. The solution to the problem of controlling the electromagnetic field in the approximation of the vector barycentric method is considered.
This article proposes a method for secure data transmission in the audible domain of the frequency spectrum of the air environment. Specifically, it proposes a method of construction, embedding, extraction and recovery of a hidden signal transmitted through the air audio channel. The hidden signal consists of two parts: one is used for synchronization, and the other carries information. The basis of the synchronization part is a Kasami sequence, whereas the basis of the information part is a code word of a binary BCH code. Both parts of the hidden signal are obtained through special encoding of their binary elements using Gold sequences and RZ codes. Speech or music is used as the carrier signal. The hidden signal is embedded in the frequency domain of the carrier signal; the embedding is based on amplitude modulation of individual spectral components of the carrier signal. The article discusses the possibility of restoring the hidden signal after transmission of the stego audio signal through the air audio channel, and presents the results of simulation modeling and field experiments of such transmission.
The article offers a new approach to simulation and design of infocommunication systems in which hierarchical multi-level routing is provided. The authors consider elements of a set-theoretic basis and a system of models of an infocommunication system, operating not only with traditional model elements (bipolar communication networks), but also with multiple segments such as circuit, star, ring and tree. Using provisions of set theory, the authors put the basic concepts and procedures of the open systems interconnection reference model in correspondence with mathematical objects, providing a strict formal description of an infocommunication system in which multipath multiple-address physical and logical connections "point-to-multipoint", "multipoint-to-point" and "multipoint-to-multipoint" are implemented. The constructibility, visualization capability and systemic character of the developed approach are shown on the example of simulating the structural reliability property of a specific infocommunication system.
The paper considers the evolution of design technologies of reconfigurable computer systems based on FPGAs of various families. Five FPGA-based generations of reconfigurable computer systems with high placement density, from Xilinx Virtex-E to modern Virtex UltraScale, are described. We show the results of designing reconfigurable computer systems with high real performance and energy efficiency. The main contribution is a liquid cooling system designed for Virtex UltraScale FPGAs. It provides independent circulation of the cooling liquid in a 19" 3U computational module for cooling 96-128 FPGA chips that generate 9.6-12.8 kW of heat in total. The distinctive features of the designed immersion liquid cooling system are high cooling efficiency with a power reserve for prospective FPGA families, resistance to leaks and their consequences, and compatibility with traditional water cooling systems based on industrial chillers.
For cloud computing systems with a web interface, a set of probabilistic models is proposed. A model of Java applications with a web interface based on servlets and filters is considered. These models are based on queuing theory and extend its applications by studying multichannel systems with "warm-up", "cooling" and phase-type approximation of Markovian and non-Markovian processes. Transition diagrams and matrices for the microstates of the queuing systems serving as models of applications with a web interface are described, and a scheme is developed for computing the stationary probability distributions of the number of requests, the waiting time and the sojourn time in the system. The paper discusses the computation results obtained with the proposed modeling approach and their application to assessing the performance of cloud systems using applications based on servlets and filters.
An algorithm for classification of samples of multidimensional group pointwise objects is presented. The search is carried out on the basis of a combinatorial search for proportionate fragments of matrices of pairwise relations on a set of templates. The decision on assigning a sample to a particular template is made according to the criterion of the minimum Euclidean distance. The presented approach to recognition allows one to synthesize invariant (with respect to rotation, scaling or offset of the coordinate system) descriptions of secondary features and to use the quite powerful toolkit of the theory of multidimensional and metric scaling to compensate distortions of the recognized images of group pointwise objects. The algorithm implements a Monte Carlo statistical test procedure, within which each point, allocated at random in a prospective neighborhood of the required coordinates, is checked against the condition of the minimum of the quadratic similarity measure. The paper gives an example and the results of using the algorithm for identification and recovery of distorted radio images exposed to coordinate noise and presented by a sampling of templates of "brilliant" points.
In terms of information security, embedded devices are elements of complex cyber-physical systems and Internet of Things systems, working in a potentially hostile environment. Therefore, the development of such devices is a challenging problem, often requiring expert solutions. The complexity of developing secure embedded devices is due to the different types of potential threats and attacks on the device, as well as the fact that in practice the security of embedded devices is usually considered in the final stages of the development process in the form of adding extra security features. In the paper, we propose a design technique aimed at the development of secure and energy-efficient cyber-physical and embedded devices. The technique organizes a search for the best combination of security components on the basis of solving an optimization problem. The efficiency of the proposed technique is demonstrated through the development of a secure system to protect a room perimeter.
The article considers the issue of automatic completion of an ontology with roles and concepts formed by an intelligent system when new facts are provided. Implementation of the specified calculations allows increasing the ontology's information content during data stream preprocessing.
In this paper, we propose a simple method for assigning importance weights to individuals of a population to determine the association between single nucleotide polymorphisms and quantitative traits in a genome-wide association study. At the first step, pairs of individuals in the population are compared in terms of distances between phenotypes and genotypes. At the second step, pairwise comparison matrices of individuals are constructed, and the weights of individuals are computed with respect to additive and multiplicative scales. It is shown how to modify the Lasso method using the weights. Numerical experiments with real data illustrate the proposed method.
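A hedged sketch of the final step only: the individual weights are assumed to come from the pairwise phenotype/genotype comparisons described above (here they are random placeholders), and the weighted Lasso is realized via per-sample weights.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sketch with synthetic data: a Lasso fit in which each individual enters
# with its importance weight (weights here are placeholders, not the
# paper's pairwise-comparison weights).
rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.integers(0, 3, size=(n, p)).astype(float)   # SNP genotypes coded 0/1/2
beta = np.zeros(p); beta[:3] = [1.0, -0.8, 0.5]     # a few causal SNPs
y = X @ beta + rng.standard_normal(n)               # quantitative trait

w = rng.uniform(0.5, 1.5, size=n)                   # placeholder importance weights

model = Lasso(alpha=0.05)
model.fit(X, y, sample_weight=w)                    # weighted Lasso
print(np.nonzero(model.coef_)[0][:10])              # indices of selected SNPs
```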
In this article we consider an approach to representing probability distributions in the form of a two-level composition of an integral kernel and a phase function, which is a generalization of the concept of the density of a random parameter distribution. The possibilities of hyper-delta approximation of the phase function and its interrelation with the formation of phase-type distributions are shown. A method of forming approximating distributions on the basis of an arbitrary phase function by the method of derivatives is offered.
The article describes the program complex that gives an opportunity to simulate scenarios of development of small innovative enterprises. A distinctive feature of the proposed solution is the possibility of defining points for making decisions on structural transformations and the use of "inverse" forecasting for defining initial conditions.
Deterministic nonlinear discrete mappings of continuous communication channels are formally described based on functional Volterra series. An evaluation of the complexity of carrying out nonlinear discrete-continuous and continuous-discrete transforms with a specified level of nonlinearity and signal dimension shows the significant computational complexity of these transforms. A block diagram of nonlinear discrete-continuous and continuous-discrete transforms on the basis of functional Volterra series is proposed.
The development of data-communication equipment meeting high requirements is necessary for solving the problems of group control of unmanned robots at various levels. In this paper, methods and algorithms for implementing a noise-immune communication channel are described. It is substantiated that the communication equipment for these channels has to be special-purpose and has to use effective signal-code constructions that can adapt to changing environments. Features and options of communications for controlling multiple unmanned ground vehicles (UGV) are described, and the advantages and disadvantages of time division multiple access and frequency division multiple access are considered.
The paper discusses the task of managing the consumption of sources in the process of deploying information support systems for complicated technical complexes (CTC). The application of a CTC, as well as the process of its information support, is usually limited by exactly prescribed terms, so delays are not allowable. Most often, a delay can be eliminated only by involving additional information sources at later stages. The developed algorithm is based on Bellman's principle of optimality, which allows one to define not a final correction schedule, but to operate a flexible program of control actions that depends on the concrete result at every stage whose duration exceeds the defined norm. This program can be used in the appropriate decision support systems and can be included in simulation models of CTC deployment and application. The paper describes a detailed algorithm for optimal correction corresponding to the normal distribution of stage durations.
Singular spectrum analysis (SSA) is a relatively new method of time series analysis. SSA is of particular interest in application to the analysis of non-stationary, short and noisy time series. One of the drawbacks of SSA is that both simple harmonic oscillations and complex components of the analyzed time series are decomposed into more than one component, which leads to the necessity of grouping related components for further analysis. This problem was partially addressed by Alexandrov and Golyandina (2005), mainly in application to the problem of identification of harmonic oscillations. In this paper, we present a more agile and generalized algorithm for automated grouping of components, which allows grouping not only harmonic oscillations, but also components corresponding to amplitude-modulated oscillations, fading oscillations and others. The algorithm was tested on synthetic time series composed of common components (harmonic, amplitude-modulated, and exponentially damped oscillations, the sum of two Gaussians) and their linear combinations. Experimental results on the quality of grouping were obtained, showing that the proposed algorithm gives on average 26% better grouping results than an existing algorithm.
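The decomposition step that produces the components to be grouped can be sketched as follows. This is basic SSA only (trajectory matrix, SVD, diagonal averaging); the proposed automated grouping logic is not reproduced here.

```python
import numpy as np

# Minimal basic-SSA sketch: decompose a series into elementary components
# whose sum restores the original series exactly.
def ssa_components(series, window):
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix of lagged vectors
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])        # rank-one piece
        # Anti-diagonal averaging back to a series of length n
        comp = np.array([np.mean(Xi[::-1].diagonal(j - window + 1))
                         for j in range(n)])
        comps.append(comp)
    return comps

t = np.arange(200)
series = np.sin(0.2 * t) + 0.05 * t                  # oscillation plus trend
parts = ssa_components(series, window=40)
assert np.allclose(sum(parts), series)               # exact reconstruction
```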
In the article, we consider an approach to justifying communication network modernization for the introduction of new communication services. The optimality criterion for network upgrades is economic efficiency. Problems of introducing new communication services are analyzed. A structural-functional model of communication network modernization and a model of decision making for choosing, by the set criterion, the optimal upgrades of each communication network system are given. The modeling results are illustrated by a computational example.
The article discusses approaches to long-term forecasting of quantitative and qualitative indices of the security subsystem of information and telecommunication systems. The possibility of using them for analyzing the protection of systems against unauthorized access is assessed.
A method for patient reception processing based on an infological system is proposed in the context of the infological approach. The method allows organizing an electronic queue for specialist attendance in health care facilities by semantic evaluation of patients' health complaints.
In the article, the model and implementation features of a parallelizing program shell are considered. Results are given of a comparative performance evaluation of solving the data access restoration task on various hardware, using both a sequential computing algorithm and an implementation based on the parallelizing program shell.
The research purpose was to describe the conditions determining the suitability of the main ways to store large binary sequences (BLOB or BFILE) in ORACLE databases. The solution is obtained by fitting multiple linear regression equations that take into account the significant conditions of use of the BLOB and BFILE datatypes. Practical recommendations are formulated to justify the selection of the datatype for storing large binary sequences.
The article proposes a method for analytical modeling of the virus propagation process in a computer network, which takes into account the characteristics of the network topology, the behavioral characteristics of viruses, the information security subsystems of the nodes, and the likelihood of initial infection of a set of nodes with different viruses. The method is based on representing the network topology in the form of a model with discrete states and branch transition times distributed by the generalized Erlang law of the n-th order.
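The generalized Erlang (hypoexponential) law of the n-th order named here is the distribution of a sum of n independent exponential phases with distinct rates. A short sampling check under illustrative rates:

```python
import numpy as np

# Sketch with illustrative rates: a branch time distributed by the
# generalized Erlang law of order n = 3, i.e. the sum of three independent
# exponential phases; its mean equals the sum of the phase means 1/r.
rng = np.random.default_rng(1)
rates = np.array([2.0, 5.0, 1.5])

samples = sum(rng.exponential(1.0 / r, size=100_000) for r in rates)
print(samples.mean(), sum(1.0 / r for r in rates))   # the two values agree
```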
Based on the performed image quality research, the paper offers an information noise model for a video stream and an algorithm for identifying the snapshots containing additional information. An identification procedure for snapshots with information noise is presented, based on the integral brightness of the snapshot image, correlation analysis of histograms and comparison of approximating polynomial indexes. It is shown that a combined application of the developed procedure and technologies ensuring facile transformation of one image into another supports quality enhancement of the available video communication services, thus protecting users from various negative effects.
The article describes the basics of creating operational river flood forecasting systems based on the integrated use of modern information technologies and integrated proactive modeling, and shows their practical implementation. The distinctive features of the proposed interdisciplinary approach are: a) the widespread use of heterogeneous data from a network of gauging stations and Earth remote sensing satellites; b) implementation of forecasting systems based on a service-oriented architecture; c) creation of an intelligent interface for selecting the type and parameter settings of hydrological models; d) convenient and accessible presentation of the forecast results using web services. Practical testing of the developed software prototype has confirmed the possibility of automatic high-precision operational (from several hours to several days) forecasting of flooding zones and depths of river valleys.
A method of autonomous indirect identification of the conversion factor of a pendulum compensating accelerometer is examined. The method makes it possible to determine this coefficient with high accuracy under orbital flight conditions using the built-in firmware means of the gauge and thus to decrease the error in determining the apparent velocity increment when a maneuver is performed by an automatic spacecraft.
The development of factorization mechanisms for composite integers is examined in this work. The author proposes a different approach, based on the study of the internal structure of the positive integers and the use of properties of numbers which do not depend on their digits (divisibility criteria). This approach converts the integer factorization task into the task of finding a special partition of a new characteristic of the number, the so-called f-invariant, which turns out to be a less complex problem.
To solve the problems of design and operational management of fiber-optic communication systems, a value of the error probability in the sub-detector region is sometimes necessary. Optimization of transmission systems for this parameter is particularly relevant for fragments of all-optical networks. In the article, an estimate of the bit error probability is presented for the worst case of optical signal processing. The obtained analytical model became the basis of a method for estimating the possibility of using multi-level optical signals in fiber-optic communication systems. The method allows one to determine the suitability of a given transmission system for reliable reception of signals with a given ensemble size.
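Under the textbook assumptions of Gaussian noise, equally spaced levels and midpoint thresholds (a simplification, not the article's worst-case model), the symbol error probability of an M-level signal can be estimated as follows:

```python
from math import erfc, sqrt

# Standard estimate: symbol error probability of an M-level signal with
# adjacent-level spacing d and noise standard deviation sigma,
# P = 2 * (1 - 1/M) * Q(d / (2*sigma)), where Q is the Gaussian tail.
def multilevel_ser(m, d, sigma):
    q = 0.5 * erfc(d / (2 * sigma) / sqrt(2))   # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 2 * (1 - 1 / m) * q

for m in (2, 4, 8):
    print(m, multilevel_ser(m, d=1.0, sigma=0.1))  # error grows with ensemble size
```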
The results of classifying kinesthetic motor imagery EEG patterns of one hand's finger and wrist movements executed in a given rhythm are presented in this study. The classifiers were based on the support vector machine method and on the developed neural network committee. It was shown that the accuracy of pairwise EEG pattern classification of imaginary movements by means of the neural network committee was higher on average than the accuracy of the support vector machine classifier. The possibility of improving the accuracy of fine motor imagery classification through an individual approach to the selection of EEG pattern classification parameters was revealed.
Analysis of the existing polymodal interfaces, their main characteristics and applications, as well as the results of common investigations in the field of multimodal interaction and interface design, leads to a conclusion about the possibility of building polymodal infocommunication systems based on multimodal architectures of subscriber terminals. To solve the tasks of interpersonal communication through technical means of communication, the principles of polymodal systems construction and a hierarchical system of their models are suggested in the article.
The development of factorization mechanisms for composite integers is considered in this work. The existing methods will not become faster or more efficient in the nearest decade, due to the narrow and inadequate mathematical approach to this problem, which is based on the so-called sieve of Eratosthenes. The mechanism suggested by the author uses a completely new method based on examination of the internal structure of the natural sequence and application of digit-independent features of numbers (divisibility criteria).
A numerical-analytical method for computing models of non-stationary queueing systems is presented. The solution of the Chapman—Kolmogorov equations is found in analytical form. The algorithm and its practical implementation in the Java language are discussed. Computation time and result precision are compared for the presented method and the Runge—Kutta type method used in Matlab.
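For the time-homogeneous building block of such models, the Chapman—Kolmogorov equations dp/dt = pQ admit the closed-form solution p(t) = p(0)e^{Qt}, which is the kind of analytical form such methods exploit. The sketch below evaluates it for an illustrative M/M/1/K generator; the parameters are assumptions, not the article's.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative M/M/1/K birth-death generator and the analytical solution
# p(t) = p(0) * expm(Q * t) of the Chapman-Kolmogorov equations.
lam, mu, K = 1.0, 1.5, 3                       # arrival rate, service rate, capacity
Q = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K: Q[i, i + 1] = lam                # arrival
    if i > 0: Q[i, i - 1] = mu                 # service completion
    Q[i, i] = -Q[i].sum()                      # rows of a generator sum to zero

p0 = np.eye(K + 1)[0]                          # start with an empty system
for t in (0.5, 2.0, 10.0):
    print(t, p0 @ expm(Q * t))                 # state probabilities at time t
```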
The article offers an approach to monitoring the integrity of dynamic objects using their metric standards. A standard is created by sequential conversion of a process from a memory dump to a transition automaton on a state graph, with calculation of structural, information and operational metrics. This allows revealing violations of the functional states of an object in the memory of a computing system. An algorithm for integrity monitoring of dynamic objects of the Dr.Web anti-virus tools is provided.
A complex of new models of non-stationary queuing systems with a finite source is presented. In contrast to traditional models of queuing theory, the proposed models allow describing the processes of customer servicing in a specified time interval under general assumptions on the distributions of the time between customer arrivals and the service time. The article presents the principles of development of such models, their graphical interpretation, and formulae for computing probabilistic and time characteristics, as well as the corresponding systems of Chapman—Kolmogorov differential equations.
Providing integrated security for complex industrial facilities is an extremely important problem, in particular for airport facilities (AF). A feature of AF is a significant set of requirements: aviation security (AS), personnel security, aircraft security and engineering infrastructure security. To ensure the functioning of AF for security purposes, integrated management systems (IMS) are applied, consisting of a set of management systems conforming to various standards, including international ones (ISAGO, ISO, ISO/IEC and others). It seems appropriate to consider a model-based IMS supplemented by both an AS block and a comprehensive audits block. This issue presents the results of calculations according to the presented IMS model with regard to the expanded criteria for AF. By a consensus of experts, the requirements of the "base" ISO standards are much lower in priority than the AF-"profile" requirements of ISAGO (IATA).
The paper introduces a situational conceptual model designed to investigate complicated spatial systems, Industry-Natural Complexes (INCs) in particular. The model provides automation of every modelling stage, with the possibility to treat equally information from calculating modules which simulate parts of an INC, from an integrated GIS and from an expert system. This approach features wide usage of expert knowledge and employment of the GIS not only for object mapping, but also for task setting, spatially dependent calculations and displaying modelling results.
Analysis of intrusion detection system techniques is a promising area for the protection of networks and network systems. This paper presents an overview of attack detection mechanisms based on the data mining approach. The novelty of this kind of mechanisms is the ability to create self-learning intrusion detection systems. The article also describes the basic elements of intrusion detection algorithms.
The article provides a review of existing computer sign language systems and identifies their advantages and disadvantages. The synchronic aspect of these systems is considered. The general case of two-way translation from spoken Russian to Russian sign language and vice versa is examined. A new method for constructing a semantic unit of computer sign language is proposed. Lexical meanings of words are defined to establish the "word-gesture" correspondence. Among the many alternatives, the necessary lexical meaning of each word is confirmed on the basis of a semantic analysis algorithm. Semantic analysis algorithms are developed for simple sentences. A method for translating Russian text into Russian sign language based on a comparison of syntactic structures is proposed. A corresponding library is developed to determine the syntactic constructions. To create the architecture of a future gesture recognition system, existing hardware and software tools are examined. It is noted that at this stage there is no solution that meets the specified requirements, so to obtain a more accurate result it is necessary to use a combination of these systems.
In this paper we propose an improved methodology and technology of simulation studies of complex systems, obtained by developing and improving the traditional methodology. The main difference of the improved methodology is consistent automation of the research process and integration of all programs into a single complex. Software systems created on the basis of this methodology have allowed cutting the duration of complex systems studies several times on average and significantly increasing the number of potential users of simulation.
Risk analysis of information security is now an especially hot topic, owing to the fact that insurance companies want to have more exact characteristics of the probable extent of damage and the necessary sum of insurance, while a company wishing to insure its information risks also wants to understand what it reasonably pays particular sums for at the conclusion of the insurance contract. Besides, neither of the named parties wants to lose its own resources. Thus, it is necessary to learn to obtain adequate and, at the same time, complex, aggregated estimates of information systems security. For this purpose, a comprehensive security analysis of both the software and hardware components of a system and its personnel (the socio-technical component) is necessary. The purpose of the present article is the development and improvement of the previously considered variant of the main relations in the "personnel - information system - critical documents" complex under a socio-engineering attack of a malefactor.
We provide a description of methods for modeling and estimating socially significant behavior on the basis of ultra-short incomplete sets of observations. We consider agent-based modeling, the statistical approach including small sample analysis, time series analysis, and their application to the described problem. Finally, we describe the advantages of probabilistic graphical models for representing socially significant behavior.
A survey of smart space prototypes intended for scientific and educational meetings and facilitated by means of automatic speech recording is presented in the paper. Analysis of the audio-visual signal processing means used and the user services realized allowed us to propose a classification of smart space prototypes. The peculiarities of the developed smart meeting room and its distinctive features compared to the considered prototypes are described.
The paper proposes a method and a set of patterns for the problem of ontology matching, identified in the previous work of the author. The proposed method integrates the lexical, structural and semantic approaches. The presented patterns for knowledge integration and ontology alignment can significantly facilitate the process of ontology matching due to usage of the schemes based on typical solutions.
Descriptions and results of a series of computing experiments devoted to the analysis of the lag effect of chaotic processes are presented. The materials of the article continue the research given in [1]. The essential difference from that work is the refusal of segmentation of the studied process change area. Such an approach allows tuning the system for analyzing the lag effect of chaotic dynamics more flexibly. The earlier conclusions about the existence of a smoothed dynamics lag effect are confirmed. The possibility of creating an effective control strategy on the basis of the obtained conclusions requires additional research connected with studying the dynamic properties of the trend.
This paper presents a technique for assessing the reliability of complex structures which cannot be reduced to a series-parallel connection of elements. In the case where the system elements can have three mutually exclusive (disjoint) states (up state, "fail-closed" mode, "fail-open" mode), there is a need and an opportunity to separate the system failure probability calculations for "break" and "closure" failures. It is shown that the developed method of reliability estimation can be used for structures of any complexity with very weak restrictions. Examples are given of orthogonalization of logic functions based on the incompatibility of individual variables. The correctness of the proposed methods was confirmed by solving the problems by full exhaustive search. Examples of automated modeling of a bridge structure and of a structure of two "stars" connected to a "delta" are solved using the ARBITR software package, which implements the algebra of mutually exclusive (disjoint) events.
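The full-exhaustive-search check mentioned above is easy to reproduce for the two-state ("break" only) special case. The sketch below enumerates all element states of the classical five-element bridge structure; the failure probabilities are illustrative assumptions.

```python
from itertools import product

# Verification-style sketch: probability that the classical 5-element bridge
# loses source-sink connectivity ("break" failure), with elements failing
# open independently, computed by full exhaustive search over 2^5 states.
EDGES = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
p_open = [0.1] * 5                                   # per-element failure probability

def connected(up_mask):
    reach = {"s"}
    changed = True
    while changed:
        changed = False
        for (u, v), up in zip(EDGES, up_mask):
            if up and (u in reach) != (v in reach):
                reach |= {u, v}; changed = True
    return "t" in reach

p_break = 0.0
for mask in product([0, 1], repeat=len(EDGES)):      # all element state vectors
    prob = 1.0
    for up, q in zip(mask, p_open):
        prob *= (1 - q) if up else q
    if not connected(mask):
        p_break += prob
print(p_break)   # matches the series-parallel-free analytical result ~0.0215
```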
On July 17, 2014, Prof. Rafael Midkhatovich Yusupov, doctor of technical sciences, corresponding member of the Russian Academy of Sciences, Honored Scientist of the RF, director of the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, celebrates his eightieth birthday.
The redundant variables method for checking and correcting computing processes in real time is considered, which is necessary for increasing the reliability of computing processes. The questions of equivalence of the initial and extended systems, noise immunity improvement, and ahead-of-time correction are considered. The redundant variables method is compared with other known methods of control, diagnosis and correction of computer systems.
The article deals with a methodology of teaching computer modeling to medical students, which improves students' information and technological competence and has a positive effect on the learning process in general. It shows the process of choosing a teaching method, identifies the types of students' educational activities and describes the technological actions that enhance knowledge acquisition.
The possibility of using a transition approach based on transformational rules for the specification and computer implementation of continuous processes is considered. The examples show techniques for converting initial process specifications into specifications in the form of sets of transformation rules. Starting specifications in the form of a physical model, a block diagram of dynamic links, and ordinary differential equations are considered. These examples demonstrate the simplicity, clarity and versatility of the approach. Some problems of implementing the processes specified by rules are briefly discussed. The resulting model implementations of processes are evaluated using analytical methods and compared with numerical solutions found using Matlab and MathCad.
Abandoning the traditional principle of separating transmitted information by services in favor of polymodal presentation of information requires the development of a new constructive theory of building polymodal infocommunication systems. One of its cornerstones is the quantitative research of the degree of achievement of the purpose of such systems and of the subscriber terminals' share in this result. This paper presents an approach to evaluating the effectiveness of polymodal systems on the basis of the prime cost index.
DDoS attacks are a widespread method of putting network information systems out of service. Furthermore, malefactors combine multiple types of attacks in order to increase the intrusion efficiency. This paper considers the network traffic parameters enabling system state monitoring and invasion tracking. Thresholds and conditions are defined that allow linking the parameters' behavior to the type of attack the system is exposed to.
Existing objective and subjective TV image quality assessment metrics are considered. New metrics for digital image quality testing are substantiated. A local entropy approach to forming objective TV image quality assessment metrics is demonstrated experimentally.
A method of conceptual modelling of technical systems (TS) is offered, based on the combined and joint application of known methods of structural-functional analysis and object-oriented modelling. As a result, a useful effect is gained which consists in simplifying the problems of ordering and structuring the knowledge used for constructing conceptual models of the design, of state management processes and of the controlled functioning of the TS.
The paper encompasses a design conception for combined embedded device security to be applied within the development process of protection mechanisms for systems and services of complex security on rail transport. The proposed model and technique are intended for configuring embedded device security components, developed taking into consideration expert knowledge in the embedded security field. The goal of the configuration process is to find a security configuration that meets all necessary security requirements and constraints of the device platform, satisfies the set resource consumption criteria and does not contain known types of security component inconsistencies.
The paper considers estimates of the sensitivity of the incoming network traffic spectrum characteristics to computer attacks. The spectrum is built by means of singular spectrum analysis ("the caterpillar" method) for various attacks and traffic functions. The discovered spectrum changes at the moment an attack begins and during its running can be useful for the development of intrusion detection systems.
The multimodality of traditional interpersonal communications points to the purposefulness of using polymodal dialogue in the process of communicative interaction of infocommunication subscribers. The creation of polymodal communication systems became possible due to the development of cognitive science and current results in the design of multimodal interaction interfaces. Application of the existing and expected results of signal processing tasks of different modalities in the synthesis of polymodal systems will support all parties of communication, and their further intellectualization will allow approaching the infocommunication interaction of subscribers to traditional interpersonal communication.
The paper comprises a technique of information flow verification for information and telecommunication systems with embedded devices. The goal of the technique is to evaluate the security level of the constructed system and check the compliance between real information flows and the set policies. The conducted verification is based on model checking with the use of SPIN tool. Implementation of such verification is fulfilled at initial design stages and provides earlier detection of contradictions in the used security policy and inconsistencies between the network topology and requirements of the information system.
A method for accelerating algorithms of hierarchical image segmentation is proposed. The algorithm is applicable when the functional of the decision rule does not require recalculation of segment features at each iteration.
An approach to improving the speed of telemetry information processing is developed, based on a procedure for verifying the spectral-correlation properties of the telemetered processes using the mathematical tools of the theory of runs. Implementation of the developed algorithms in special software for processing and analyzing launcher telemetry information on the active trajectory makes it possible, by reducing the redundancy of the TMI processing results used as input for computer-aided analysis systems, to promptly receive standard integrated conclusions about the controlled events occurring during launcher tests.
Leading manufacturers of computer systems and technologies realize the importance of adaptive control and self-organization in the information infrastructure of the XXI century. New adaptive technologies have already been called "natural" and "organic" (Organic IT). Modern evolution tendencies of information systems and technologies are analyzed in the paper. The conception and methods of integrated modeling of self-organizing computing technologies in critical applications are presented.
The current state of research in the field of information technologies for designing automated monitoring systems is analyzed. An approach is offered to creating a distributed information system for monitoring the condition of space objects and ground space infrastructure objects. The approach is based on modern information technology of automated gathering, integration and complex analysis of all types of circulating information, both within the loops of separate management information systems (MIS), namely the MIS of technological processes, the MIS of preparation and launch, and the MIS of the cosmodrome as a whole, and within a multilevel MIS of the cosmodrome built with the use of modern principles of organizing corporate information systems.
Methodological and technical foundations of the theory of monitoring and structural dynamics control of complex objects are proposed in this paper. These foundations include multiple-model descriptions, integrated methods and algorithms, and a new intelligent information technology used in practice for the automated design of space and ground based monitoring and control systems for complex technical-organizational objects in different environments. Some applied examples of the proposed intelligent information technology are given in the paper.
The formulation and solution of various classes of scheduling problems of the structural dynamics of complex objects (CO) are particularly urgent. Based on a generalized set-theoretic formulation of the problem of planning the structural-functional reconfiguration of a CO, the article considers a complex model of planning and managing the processing and transfer of material and/or information resources during restructuring, as well as a model of parametric synthesis of the CO image that provides robustness of its reconfiguration plans under optimistic and pessimistic scenarios of the structural dynamics of the CO.
The paper introduces the features of quantum infocommunication and the basic concept definitions, and discusses the historical preconditions, possible applications and development perspectives.
The article discusses the problem of compressing 3D video data streams using the existing well-known codecs. A comparison of the existing and proposed methods is given.
The article discusses the problem of matching the processes of 3D modeling and 3D prototyping of complex spatial forms in the technology of cognitive programming. The ways to solve this problem are shown on the example of the 3D Studio MAX environment.
Neural methods and logic-object methods for the description, analysis and recognition of complex images are considered. Algorithms are suggested for identification problems and simple pattern classification (for example, complex 3D scenes on 2D images of partially hidden objects). A comparative analysis of the algorithms' complexity is given and the results of computing experiments are described.
One of the approaches to network anomaly detection is the analysis of network functioning parameters. Characteristics calculated on wavelet coefficients are indeed more sensitive to changes in a series than characteristics calculated directly on the series, but they require more calculations, and the spectral-time algorithms certainly need to be optimized for application in real-time systems. In addition, there are different approaches to the implementation of wavelet expansions, each of which has its own standing in terms of informativeness (the number of qualifying ratios), the authenticity of the values, and the computational complexity of the transformations. The article offers a reasoned approach to the implementation of these algorithms for use in real-time anomaly detection systems.
The market of tools for the development of neural intelligent systems is represented by thousands of software titles, owing to the multiplicity of intelligent information processing tasks. The paper provides an overview of tools that are applicable for the development of neural components of intelligent security systems.
The paper considers recent research in the area of security metrics. A classification of the known metrics is suggested, along with a multilevel approach to security assessment based on attack graphs and service dependency graphs. The approach allows evaluating different aspects of system security, considering its topology, operation mode, historical data about incidents and other information.
The paper considers the technique of singular spectrum analysis ("the caterpillar" method) and its application to the analysis of network traffic time series in order to detect DDoS attacks against a web server. A decomposition of the source time series was carried out. Characteristics of the eigenfunctions and principal components of the series under different working conditions were revealed.
The paper introduces a new line of research, intelligent search of predictors of social events (Predictor Mining); this direction lies at the intersection of the Data Mining and Social Computing areas of research. The proposed approach includes automated composition of a predictor set for a social event and subsequent interpretation of the predictors by methods of Social Computing. Detection of protest tones on the Internet, estimation of parameters of socially significant behavior, and investigation of the specifics of the formation of values and strategies of the largest social networks' members are examples of problems that can be analyzed using the Predictor Mining techniques.
This paper shows the advantage of the method of randomized aggregate indices for evaluating the effectiveness of the publication activity of the scientific staff of various organizations. An example is considered of applying this approach to building a two-level hierarchy of aggregate indices which takes into account the opinions of two potential expert groups.
The redundant variables method for checking and correcting computing processes in real time is considered, which is necessary for increasing the reliability of computing processes. Systems with a reproducible function are synthesized and analyzed. The results of simulation for different methods of integration are considered. It is shown that the use of the redundant variables method permits increasing the accuracy of calculations.
The concept of interaction of two binary factors in an experiment with a binary outcome is considered. A mathematical model is developed for two binary factors within the sufficient component cause framework. A formalization of the binary experiment is provided in terms of the theory of Boolean functions. Axioms of symmetry of the interaction in a binary experiment are formulated and formalized as automorphisms over the corresponding free Boolean algebra. A complete classification of interaction types in a binary experiment is presented, as well as their geometric interpretation.
The article is devoted to the development of a complex speaker model for use in text-independent speaker identification. The complex speaker model is based on Gaussian mixture models. The model is formed from a preliminarily segmented speech signal, where each segment matches a certain broad phonetic class. A method of structuring speaker models is proposed: speaker models are structured as a tree, which allows identifying a speaker without running a full search over the set of models. Research has shown that dividing the acoustic space of a speaker's voice into a set of classes representing phonetic events increases the efficiency of voice identification, and the proposed structuring method accelerates the search operation.
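A hedged sketch of the GMM baseline only: one mixture per speaker over acoustic features, identification by maximum likelihood. The phonetic-class segmentation and the model tree proposed in the article are omitted, and the synthetic "MFCC" features are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Per-speaker GMMs over stand-in acoustic feature frames; the test utterance
# is assigned to the speaker whose model gives the highest mean log-likelihood.
rng = np.random.default_rng(3)

def features(speaker_shift, n=500, dim=13):
    # placeholder for MFCC frames of one speaker
    return rng.standard_normal((n, dim)) + speaker_shift

train = {s: features(shift) for s, shift in {"A": 0.0, "B": 0.7}.items()}
models = {s: GaussianMixture(n_components=8, random_state=0).fit(x)
          for s, x in train.items()}

test = features(0.7, n=100)                              # unseen frames of speaker B
scores = {s: m.score(test) for s, m in models.items()}   # mean log-likelihood
print(max(scores, key=scores.get))                       # -> "B"
```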
The paper describes an integrated approach and respective tools which provide methods and means for analysis and verification of representatives of all four major classes of languages used to describe telecom applications: languages of general-purpose executable specifications (SDL), languages for high-level descriptions of complex systems' behavior patterns and their interconnections (UCM), special-purpose languages for verification of telecom applications (interpreting MSC, communicating finite state machines, Dynamic-REAL), and industrial imperative languages (C/C++). Verification of specifications is complemented with automated construction of test suites, which ensure the specified test coverage rate of the source behavioral specifications and are optimal with respect to specified performance criteria. Tests are run in a specialized automated test-run environment, either on system models or on actual system implementations inserted into respective program shells which provide communication of the system under test with the test environment. The test shell also allows for automated analysis of the test-run results in the course of the test runs.
The museum exposition presents computing machinery from the mid-20th to the beginning of the 21st century. Visitors are acquainted with the history of informatics, computer technologies, and communication. The origin and development of computing in this country, as well as the emergence of informatics as a fundamental science, are the main topics of the exposition.
The paper provides an analytical review of promising research directions based on the talks by leading foreign and domestic experts in computer network security presented at the 6th International Conference «Mathematical Methods, Models and Architectures for Computer Networks Security» (MMM-ACNS-2012), held in St. Petersburg from 17 to 19 October 2012. World-known scientists, such as A. Stavrou, B. Livshits, L. Khan, and F. Martinelli, gave invited talks. The conference sections discussed topical issues related to intrusion prevention, detection, and response; anti-malware techniques; applied cryptography and security protocols; access control and information protection; security event and information management; security modeling and cloud security; and security policies.
The paper describes key points of implementing an algebraic Bayesian network knowledge pattern in the C++ programming language. The knowledge pattern is implemented as a class that handles and stores estimates for knowledge pattern elements. It also provides several methods for processing the knowledge pattern, such as consistency maintenance and a posteriori inference.
A formalism for the description of labelled transition systems is proposed which unifies the format of the systems' states, the format of computer language instructions represented by the labels of the systems, and the format and semantics of transition rules, and thus makes the development of operational semantics of computer languages more technological.
To monitor the state of an information system, it is necessary to constantly track and analyze data received from different security sensors. In the majority of cases this information is in textual format, so various visualization techniques are used for data analysis. The paper presents the results of a survey of modern techniques in security visualization.
The paper presents the basic components of a methodology for iterative attack modelling in large computer networks, which comprises a formal model, analysis algorithms for probabilistic attack graphs, and software tools. The formal model of the iterative attack modelling process involves process models of task definition, attack model building, model execution, and model result analysis. The probabilistic attack graph analysis algorithms provide calculation of security metrics and finding of attack sub-graphs associated with intruder action scripts. The software tools for attack model analysis in large computer networks provide static analysis and analysis of dynamic characteristics.
The relevance of the problem of protecting information and telecommunication systems stems from the increasing complexity of hardware and software, the high dynamics of their development, their distributed and heterogeneous structure, and many other factors. The analogy between evolution and natural selection in nature and in information and telecommunication systems, including security systems, is obvious. The paper suggests a conception of adaptive protection of information and telecommunication systems based on hybrid mechanisms integrating the paradigms of nervous and neural networks.
This paper shows the possibility of constructing decorrelating transformation bases, which can be used for digital image compression, by means of cellular automata dynamics. We introduce construction algorithms for decorrelating transformation bases generated from the evolving states of partitioning cellular automata, which are considered an extension of the classical cellular automaton model.
The authors suggest a new methodological approach for the optimal selection of an embedded Global Navigation Satellite System (GNSS) receiver for airborne navigation equipment (ANE). The analysis and synthesis of various methods of expert evaluation of qualitative and quantitative characteristics of GNSS receivers are carried out. It is shown that the use of different methods ultimately leads to identical results in terms of selecting the best alternative from a given set, despite the use of fundamentally different mathematical tools.
An analytic method for calculating the moments of a sojourn time distribution is proposed. A comparison with estimates from a simulation model and numerical calculations is presented.
The similarity of production processes in different enterprises makes it possible to develop a common production planning platform for them; however, enterprise-specific modules need to be developed for every type of enterprise. Cloud computing technology is proposed for automating the following processes: transfer of new versions of the platform, platform setup, platform maintenance, platform updating, and platform operation. This technology allows enterprise employees to get dynamic remote access to the services, computational resources, and applications. Mathematical models and methods for solving optimization tasks of production planning and management are implemented as a solver module. An architecture of the platform based on the following levels is suggested: database management, application servers, web server, and client software. A prototype based on the proposed architecture and solver is developed and described in the paper.
In this paper, common trends in the architectural design, technologies, properties, and drawbacks of indoor positioning systems based on communications supported by smartphones are analyzed. The main idea of such systems is that users can access them through their mobile devices, because the systems provide positioning functionality based on technologies such as Wi-Fi, Bluetooth, and GSM. For example, museums might not need to buy expensive audio guides, but can instead provide their visitors with appropriate software for their smartphones. The paper presents a comparative analysis of the currently most promising systems and solutions.
The paper analyzes attack modeling problems in large computer networks with the use of different models, methods, and tools. Well-known models, methods, and tools for attack modelling are examined in detail on the basis of the characteristics of large networks as information-security-related objects and objects of attack, and directions for further development are provided. The role of information security requirements in attack modeling iterations is shown. Examples of attack modeling problems associated with different types of NOT-factors are presented.
The paper deals with design features of the concept of information security for information and telecommunication systems of state authorities. The distinctive provisions of the concept describe the object of protection, threats to information security, means of ensuring information security, and management of information security assurance. As the main distinctive provisions, the conceptual model of ensuring information security and the conceptual model of the system of ensuring information security are selected. The conceptual model of ensuring information security is presented in functional form, which makes it possible to determine the dependence of performance indicators on the full set of conditions and factors influencing their values. The conceptual model of the system of ensuring information security is presented as an oriented graph. It is noted that the task of the system of ensuring information security is to cover each edge of the graph with the relevant packages and means of information security.
We consider the problem of estimating the rate of socially significant behavior in terms of probabilistic graphical models. Such a formal description of the problem allows applying powerful methods and well-developed algorithms of the theory of Bayesian belief networks. Existing software can be used to run computational simulations and apply the model to practical tasks. We describe a simple model based on incomplete data about the time intervals between behavior episodes and propose ways of its development.
In this paper the urgency of the problem of assessing the level of information protection against unauthorized access in computer networks is shown. The purpose of the paper is to develop a method for assessing the level of information protection against unauthorized access in computer networks on the basis of a security graph. The developed method increases the efficiency of information security management in computer networks due to a complex security metric and the application of a security graph which reflects the real structure of a computer network and its information security system.
In this paper a Bayesian model is considered for estimating a piecewise-constant density corresponding to a decomposition of the ternary range of possible values of the random quantity. The model is based on estimating the parameters of the Dirichlet distribution from nonnumeric, inaccurate, and incomplete information. The analysis is performed for the evaluation and prediction of the statistical characteristics of the CHF with respect to XDR. For comparison, the quality of the results for the same data was investigated using classical econometric methods: construction of ARIMA models and the exponential smoothing forecasting method.
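As a minimal illustration of the conjugate machinery underlying such a model (toy numbers of ours, not the paper's data): a Dirichlet prior over three bins, updated by observed counts, gives the posterior of a piecewise-constant density.

```python
# Illustrative sketch (hypothetical counts, not the paper's data): Bayesian
# update of a piecewise-constant density on three bins with a Dirichlet prior.
import numpy as np

alpha = np.array([1.0, 1.0, 1.0])      # Dirichlet prior over the three bins
counts = np.array([12, 30, 8])         # observed numbers of samples per bin

posterior = alpha + counts             # conjugate update: Dirichlet posterior
density = posterior / posterior.sum()  # posterior-mean bin probabilities
print(density)
```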
This paper proposes a method of constructing a modal regulator by transfer function for a closed-loop system in the presence of setpoint and disturbance inputs. The method is simple and allows the coefficients of the regulator transfer function to be expressed in terms of the coefficients of the desired polynomial of the closed-loop system. On the basis of this feature, an algorithm for optimizing these coefficients by the criterion of maximum robustness is presented.
One of the main problems in research on social engineering attacks is the development of algorithms for analyzing (assessing) the protection of information system users, estimated on the basis of computational complexity. According to preliminary estimates, the application of a probabilistic relational algorithm will essentially reduce the computational complexity of a program complex. This approach will also increase flexibility in estimating the criticality of documents available in the system and the chances of successful attacks, in describing the system of communications and accesses among the actual components of the complex «information system – personnel – critical documents», and between these components and the malefactor. Relational models will probably make it possible to use effective computational methods implemented in modern DBMSs for fast execution of SQL queries.
This work is dedicated to the problem of increasing diagnostics reliability for complex technical systems under uncertainty. A diagnostic technique for complex systems was developed on the basis of a posteriori inference in Bayesian belief networks, including synthesis of an optimal diagnostic strategy taking into account the dynamics of a priori information and various distribution laws of continuous diagnostic features.
Nowadays more and more bio-inspired approaches (based on a biological metaphor) for computer and network security systems are being proposed and advertised. Traditional computer-based systems and their functionality are often limited by various conditions. Due to frequent minor errors, these systems are prone to failure; they lack scalability and have a low ability to adapt to changing conditions and goals of functioning. As opposed to traditional computer-based systems, biological systems are often quite reliable: they have strong self-protection mechanisms and are highly scalable, adaptable, and capable of self-regeneration. These properties of biological systems can be used to construct technical systems, including information security systems. The paper considers different approaches to the protection of computer systems and networks which are based on a biological metaphor.
In the article, new approaches to the estimation of hearing quality with the use of mobile computers are considered. A new classification of hearing research methods is suggested, which makes it possible to identify opportunities for developing control system elements as part of computers. Models of research of the human acoustic analyzer are considered which allow classical hearing research techniques to be used in existing control systems of mobile computers. A modified technique of audiological research is presented, namely the basic tuning-fork tests (the experiments of Rinne, Weber, Gellé, and Federici) varying in time and qualifying characteristics (up to 50%).
The paper presents algorithmic complexity estimates for local a posteriori inference in algebraic Bayesian networks. We consider ways of implementing the inference for three types of evidence (deterministic, stochastic, and imprecise). When linear programming tasks have to be solved for the inference, the complexity estimates are given in the number of such tasks and the numbers of variables and constraints in each task. In other cases, complexity estimates are given in the number of arithmetic operations.
The aim of the paper is the construction of calibration relations for a class of coordinate non-polynomial splines connected with grid refinement. An embedding of spline spaces is established for arbitrary grid refinement. The reconstruction matrices are constructed for a grid on an open interval and a grid on a segment. A system of linear functionals biorthogonal to the splines is constructed. The decomposition matrices are constructed for a grid on an open interval and a grid on a segment.
This work analyzes the mechanisms that define the complexity of management in modern social systems (private and state enterprises, high-tech programs, structures of state regulation of economic processes). It is shown that the principal difficulty of modern management lies in managing the incompletely formalized technologies of activity of large social systems. A modern feature of management in complex social systems is the essential influence of market technologies that are not regulated by norms and institutions, as well as the essential influence of the natural limitations of human cognitive capacity for management. The influence of poorly regulated market technologies can be reduced by using network, organizational, collective, informational, and other advanced technologies of management support, and by using methods of integrating science. Reducing the influence of mental limitations requires the creation of new cognitive technologies of management which allow functions to be divided at the mental level.
In this work, various methods of computing Lyapunov quantities are discussed, and Matlab implementations of symbolic computation algorithms based on them are presented.
In this paper, we give an analytical survey of the peculiarities of the Russian sign language and calque signed (gesture) speech, including the state of the art of sign lexicons and grammatical constructions, as well as methods for formalizing sign lexicon items. In the course of the multidisciplinary research, a virtual 3D human avatar model has been adopted for Russian sign language synthesis, together with a multipurpose model of a multimodal audio-visual synthesizer aimed at transforming text into Russian auditory speech and calque signed speech.
The problem of designing a system component of an observed stochastic process, reflecting significant changes in a dynamic system, is considered. It is shown that the quality criteria for system component formation should be defined by the requirements of the hierarchically higher metasystem. This gives rise to a new estimation problem, leading to the necessity of constructing calculation schemes essentially different from known statistical process estimation algorithms.
The paper describes algorithms for algebraic Bayesian network reconciliation. We prove the correctness of these algorithms and provide computational complexity estimates.
The prototype of a program complex used to demonstrate the basic possibility of estimating the protection of information system personnel from social engineering attacks, on the basis of a generalized approach focused on the analysis of attack trees, is described. The representation of the information system and its personnel in this program complex is based on a hierarchy of information models, which consists of an information model of the user, an information model of the user group, an information model of the control area, an information model of the hardware and software complex, an information model of critical information objects (the system of documents), an information model of the information system itself, and the links between the corresponding objects. The list of technologies used during development, the reasons for using these technologies, and a brief substantiation of some technical solutions are given. An example of the operation of the program complex prototype is considered, both when editing information about a social engineering attack and when simulating a social engineering attack of the recompensation type against the personnel of the system.
The article proposes a protocol for network key formation over open communication channels with errors. The problem of forming a network key is formulated. The protocol is proposed to include three time phases. The first phase establishes crypto connections in independent groups of communication objects (CO). The second phase establishes crypto connections between independent groups of CO. The third phase selects the network key from the set of generated keys and transmits it over the network. The protocol of network key formation is discussed. A model of a channel connection with the procedures of this protocol is proposed. The parameters of the protocol are optimized, and its effectiveness is discussed.
A developed description of the information models of the components of the complex "information system – personnel" under the threat of social engineering attacks is presented in this paper. Information models of the user, user group, control areas, information objects (the system of documents), hardware and software, and the information system itself are considered. These information models form the basis for analyzing the protection of an information system under the threat of social engineering attacks. The hierarchy of these models makes it possible to describe the scene (context) in which a social engineering attack develops, to sketch possible attacks (attack trees), and, on the basis of the results obtained, to study possible approaches to estimating the degree of protection of the complex "information system – personnel" from social engineering attacks.
The article suggests a criterion for the optimization of information systems. It demonstrates the possibility of using the entropy index of the processed information in assessing complexity indicators. An example of the evaluation of an information system for speech signal processing is given, and its effectiveness is demonstrated.
A procedure for the construction and study of the dynamical properties of an aggregated composite system for an interval control system is presented. Results of the development of Lyapunov's method for objects with a class of parametric uncertainty are offered, aimed at obtaining a composite model for separate subsystems and their aggregation into a system.
Algebraic Bayesian networks (ABN) are probabilistic-logic graphical models of knowledge systems with uncertainty, and they give the advantage of working with interval probability estimates. A secondary structure, usually represented as a join graph, is essential for ABN operation. The article analyses the edges of various minimal join graph cliques in order to specify different clique types. In particular, it is proven that the vertex set of the class of cliques that are basic for minimal join graph set synthesis equals the set of endpoints of the specified edges, whose weight equals the clique weight.
A multiple-model description of the interaction between a ground-based control complex (GCC) and an orbital system (OrS) of navigation spacecraft (NS) is presented. A dynamic interpretation of operations and control processes is implemented. The proposed approach makes it possible to use fundamental results of modern control theory for new applied problems. In particular, a scheduling problem for GCC ground-based technical facilities was reduced to a boundary problem with the help of the local section method. Scheduling problems of the considered class are usually solved via methods of discrete programming, but when the dimensionality is high, the optimal solution is not guaranteed and heuristic algorithms are needed. This paper introduces an original approach to scheduling problems of high dimensionality based on models and methods of optimal control theory.
Algebraic Bayesian networks are probabilistic-logic graphical models of knowledge systems with uncertainty and can be applied to statistical data processing and machine learning. A secondary structure, usually represented as a join graph, is crucial for their operation. The article presents a classification of minimal join graph cliques by the number of their vertices and the number of special edges they contain. Eight different clique types are identified, and estimates of the number of components depending on them (feuds and sinews) are obtained and proven.
In this article the urgency of the problem of estimating information security against unauthorized access in automation systems is shown. The purpose of the article is to work out a model for a quantitative estimation of information security against unauthorized access which increases the efficiency of information security management in organizations. To solve the task of quantitative estimation, a complex metric is offered: the security coefficient of the automation system. On the basis of the given metric, a comparative analysis of standard automation systems of firms is carried out, and guidelines for raising their level of security are given.
The paper provides an analytical review of talks by leading foreign and domestic experts in computer network security presented at the International Conference «Mathematical Methods, Models and Architectures for Computer Networks Security» (MMM-ACNS-2010), held in St. Petersburg from 8 to 10 September 2010. World-known scientists, such as E. Debar, D. Golmann, G. Morrisett, B. Prenel, R. Sandhu, and A. Sabelfeld, gave invited talks. The conference sections discussed topical issues related to the modeling of security and covert channels; security policies and formal analysis of security properties; authentication, authorization, access control, and public key cryptography; intrusion detection; malicious software; security of multi-agent systems and software protection; adaptive information security; survivability of computer networks; and virtualization.
The current status and perspectives of an interdisciplinary knowledge domain including informatics, computer science, control theory, and IT applications are analyzed. Scientific-methodological and applied problems of IT integration with existing and future industrial and socio-economic structures are stated.
The advantages of representing compound dynamic systems in the form of a complex of interacting subsystem models are discussed. A holonic complex structure and an expanded situation-event formalism for the specification of hybrid processes are proposed. The formalism makes it possible to specify model processes taking into consideration the interaction of the processes with each other and with the environment. It is shown that the proposed formalism enables the simulation of system structure changes. Some problems of realizing the model complex are considered. A model complex realized in terms of the proposed formalism is presented as an illustrative example. The complex enables the simulation of automatic coordination of processes occurring in two systems for the automatic positioning of a roll on a plane.
The hardware-software complex «RiskDetektor» (ISA of the Russian Academy of Sciences) is described in the article. This complex realizes, in a computer–user dialogue mode, the basic procedures of transport safety maintenance defined by directive documents. The ideology of the complex was developed and published earlier and assumes that the system for managing risks of transport safety violations is constructed on the basis of the categorization of transport objects by an estimation of the possible damage from the realization of threats of terrorist influence.
The article is devoted to the analysis of structural reliability indexes based on the logical-probabilistic and fuzzy-possibilistic approaches for the simplest indecomposable P-networks with inverse elements.
The problem and an automated technology for coating articles of complex shape with the help of adaptive robotic systems are described; these systems maintain the defined parameters in order to achieve the necessary characteristics of the plasmatron torch on complex parts of convex or nonconvex shape. The proposed adaptive robotic systems can be applied to nanotechnology surface sealing of parts of complex shape without having to enter drawings and without precise alignment of the part on the stand, under interference and in the presence of obstacles.
The paper presents the results of a stage of research devoted to approaches to estimating the rate of risky behavior. The paper describes ways of forming this rate estimate on the basis of the maximum, minimum, and median intervals between behavior episodes. The solution is based on the formation and analysis of formulas for distribution (and joint distribution) functions corresponding to order statistics and distribution densities, as well as on the choice of values of their parameters. Several approaches to analyzing the quality of the obtained estimates are offered.
The paper proposes a way to estimate the ratio of risky behavior rates in two groups. These groups can be either the same group considered before and after a behavioral intervention, or two groups/two samples from different populations compared simultaneously. A similar estimate is proposed for the cumulative risk ratio (odds ratio). The risky behavior rate can be estimated either from data about several last episodes of the behavior or from data on extreme intervals between the episodes during a certain period of time. These two estimates may differ; a measure of their consistency is introduced.
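For orientation, the two classical point estimates mentioned above can be computed from 2x2 counts as follows; the data are hypothetical, and the paper's interval-based estimators are not reproduced here.

```python
# Illustrative sketch (hypothetical data, not from the paper): point estimates
# of the risk ratio and odds ratio for two groups from event/total counts.
def risk_ratio(events_a, n_a, events_b, n_b):
    """Ratio of event rates in group A versus group B."""
    return (events_a / n_a) / (events_b / n_b)

def odds_ratio(events_a, n_a, events_b, n_b):
    """Ratio of event odds in group A versus group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Example: 15 of 100 subjects after vs. 30 of 100 before an intervention.
print(risk_ratio(15, 100, 30, 100))  # 0.5
print(odds_ratio(15, 100, 30, 100))  # ~0.41
```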
The role and place of information and communication technologies are analyzed with regard to national security assurance in the environment of the forming information society.
The article deals with an instrumental approach to psychosomatic status estimation. This approach is based on the joint (combined) processing of polytypic biometric data, namely the results of pulse measurements, vibroimage registration, and psychological testing. The article is concerned with a minimal program-instrumental complex solution. A design of polytypic biometric databases is also presented.
In a complex business network, finding a supplier can be a very time-consuming task. In advanced supply networks like Build-to-Order supply chains, this task should be carried out under time constraints and under uncertainties both in suppliers and in orders. The technology of semantic service-oriented architectures is aimed at supporting such tasks, enabling the construction of self-organizing flexible supply networks. A novel approach to dynamic discovery and selection of network members based on competence profiles included in provided service descriptions is proposed. The approach is grounded in a method for service discovery with incomplete information using query expansion techniques. The approach is illustrated by an example from the automotive industry.
This article describes a method of external component integration into an instrumental system under conditions of limited ability to modify source code. The basic characteristics of integrated components are described, along with features of the integration process that require new methods of instrumental system unification. Different methods of information system integration are analysed. A method of integration based on providing access to internal handlers of the host system and on dynamic interface principles is described.
This paper evaluates existing approaches to security protocol verification and explains why it is impossible to thoroughly verify security protocols using only one of them. To solve this problem, a combined verification approach is suggested, based on assembling the strengths of different existing approaches and tools.
A multilevel model of an infocommunication system is reviewed from the point of view of the sociology of management. A way to assess social risks for the different levels of this model is suggested.
The purpose, structure, and usage of an applied program package for queuing problem calculation are described. Special emphasis is placed on the principles and some results of its testing.
The topical problems and the key tasks of technical-organizational complex systems management in crisis situations are formulated. The theory of technical-organizational complex systems management in crisis situations and the methodology of XXI century situational centers development are substantiated.
The paper presents the results of developing reliability optimization methods for structurally complex technical systems to select an optimal element structure and redundancy level within specified reliability and cost constraints. The optimization problem is an integer programming problem, and its scope is reduced by using the multivariate bisection method. The functionality of the method is shown on the example of a pump control and emergency shutdown system.
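To make the problem statement concrete, here is a brute-force sketch with hypothetical numbers; the paper's multivariate bisection method serves precisely to avoid this kind of exhaustive search over redundancy levels.

```python
# Minimal illustration (hypothetical reliabilities and costs): choosing
# per-element redundancy levels for a series system under a cost constraint.
from itertools import product

r = [0.90, 0.95, 0.85]     # element reliabilities (assumed)
c = [2.0, 3.0, 1.5]        # element costs (assumed)
budget = 15.0

best = None
for k in product(range(1, 4), repeat=3):      # redundancy level per element
    cost = sum(ki * ci for ki, ci in zip(k, c))
    if cost > budget:
        continue
    rel = 1.0
    for ki, ri in zip(k, r):                  # parallel redundancy per stage
        rel *= 1.0 - (1.0 - ri) ** ki
    if best is None or rel > best[0]:
        best = (rel, k, cost)
print(best)   # (best reliability, redundancy vector, total cost)
```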
The ways of development of data transmission systems and computing and control systems based on nanotechnological electronic components are considered. A conceptual scheme for determining the limits of perspective data transmission systems as a prototype of software-defined radio (SDR) with nanotechnology application is proposed.
Methods for calculating quantitative reliability indicators of special software at the stage of its design and development are offered. For modelling the computing process and calculating the quantitative indicators of its reliability, it is proposed to use the mathematical apparatus of semi-Markov processes. Modelling is performed in the computer mathematical system MathCAD.
The main process in the health protection system is the interaction of the doctor and the patient. A uniform approach to healthcare informatization based on modern Grid technology is offered, which consists in creating Web and Grid services providing information support for doctor–patient interaction and integration of the corresponding data. The sum of the distributed data formed as a result of this interaction is necessary and sufficient for solving the majority of healthcare information problems. The approach is based on the purposeful formation of individual network information resources for each citizen of Russia, as a potential client of the healthcare system, and for each licensed doctor. The basic structure of the personal network resources of the doctor and the patient is offered. The order of stage-by-stage introduction of personal network resources is considered. An estimate of the expenses for the first stage of introducing the information network is given. A mathematical model of healthcare informatization and a technique for estimating its results are offered.
A method for detecting the use of JPEG compression in computer image processing is proposed. The necessity of this method is caused by the increased complexity of further processing of images created with JPEG compression.
The possibility of web-service creation with the help of pure Prolog (without other programming languages) is reviewed. The main features of parsing and generating web documents with SWI-Prolog are described. The different models of HTTP servers and clients supported by SWI-Prolog are presented.
The problems of organizing distributed computations are considered. It is shown that the main reason for the arising difficulties is the absence of an adequate computational model. Dynamic automata networks (DAN) are proposed as such a model. Properties of DAN and possibilities of realizing multiprocessors with dynamic architecture based on this model are considered. A retrospective review of the development and realization of this concept, from recursive computers to multiprocessors with dynamic architecture, is given.
A computing procedure for technical analysis of the stock market based on fractals and the immunocomputing approach is considered. A gradient algorithm for the singular value decomposition of a multivariate interval matrix is offered and investigated in the first part of this article. Numerical examples for the chosen stocks are presented.
The article is devoted to APC (Advanced Process Control) applications in automated process control. The developed program complex “Matrix” is offered for elaborating forecasting models and, in turn, optimizing multiple process control according to the chosen criterion of efficiency and available restrictions.
This article proposes a dynamic model for evaluating the properties of additive mixtures with nonlinear interactions between components. On the basis of the model, improved algorithms are presented for planning the mixing process in flow lines when blending fuels with different characteristics, for optimal real-time control of mixing in the blending lines, and for controlling the filling of the fuel tank. The operability of the algorithms is tested on the optimization task of flow control for diesel fuel compounding on the flow.
The paper reviews the state of the art of promising research directions in the field of computer network security on the basis of the International Workshop «Mathematical models, methods and architectures for computer networks security» (MMM-ACNS-2005), which took place from September 25th till September 27th, 2005 in Saint Petersburg. General information on the workshop is presented, and the invited and sectional reports of leading scientists in the field of information security are described in such promising research directions as models, architectures and protocols for information security; authentication, authorization and access control; information flow analysis; covert channels; security policy and operating system security; vulnerability assessment; network forensics; and intrusion detection.
An approach to computer network security analysis for use at both the design and operation stages is suggested. This approach is based on generating a common attack graph and using qualitative security metrics. The graph represents possible scenarios of distributed attacks taking into account the network configuration, security policy, and the malefactor's location, knowledge level, and strategy. The general architecture of the proposed security analysis system, the main concepts of the common attack graph, the security metrics taxonomies used, metrics calculation rules, and the general security level evaluation procedure are considered. The suggested security metrics make it possible to evaluate the computer network security level at different levels of detail and taking different aspects into account. The implemented software prototype is described, and examples of using the prototype for express analysis of the computer network security level are considered.
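As a hedged illustration of the kind of metric such a system can compute, the sketch below scores a toy attack graph by its most probable attack path; the topology and per-step probabilities are invented, and the actual system's metrics taxonomy is far richer.

```python
# Illustrative sketch (hypothetical graph and scores, not the authors' tool):
# an attack graph with per-step success probabilities, scored by the most
# likely attack path from the malefactor's location to a critical host.
import math
import networkx as nx

g = nx.DiGraph()
# Edges are attack actions; 'p' is an assumed probability of success.
g.add_edge("internet", "web_srv", p=0.8)
g.add_edge("web_srv", "app_srv", p=0.6)
g.add_edge("app_srv", "db_srv", p=0.7)
g.add_edge("internet", "vpn_gw", p=0.3)
g.add_edge("vpn_gw", "db_srv", p=0.9)

# Convert probabilities to additive weights (-log p) so the shortest path
# corresponds to the most probable attack scenario.
for u, v, d in g.edges(data=True):
    d["w"] = -math.log(d["p"])

path = nx.shortest_path(g, "internet", "db_srv", weight="w")
prob = math.exp(-nx.shortest_path_length(g, "internet", "db_srv", weight="w"))
print(path, round(prob, 3))   # ['internet', 'web_srv', 'app_srv', 'db_srv'] 0.336
```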
We consider an approach to constructing a security policy verification system intended for the detection and resolution of conflicts in computer network security policy specifications. The architecture of the suggested security policy verification system is considered. Models of two verification modules are proposed. The first is based on proof theory, namely Event Calculus, and uses abductive reasoning. The second module uses the model checking technique. The current implementation of the security policy verification system is described.
Principles of designing automated decomposition tools for sequential software are considered on the basis of its parallel form, obtained from the information stream graph. A template (prototype) of an automated decomposition system for software written in C is developed, with the source text subject to the restrictions of structured programming.
Different approaches to the decomposition of measured upward radiation into the component reflected by the water object and the noise component generated by reflection from atmospheric layers are considered. The advantages and disadvantages of the considered approaches are analyzed from the point of view of their application to the retrieval of surface water quality from remote sensing observations.
This paper proposes a novel approach to computation of hydroacoustic fields based on the immunocomputing. We demonstrate that the approach essentially improves the computational performance and accuracy. We also propose to develop a special immunochip for on-line simulation, visualization, and recognition of physical fields of mobile objects.
We describe the collection and further analysis of experts' estimates regarding the results of an incomplete sentences test (9480 sentences, 5 experts' estimates for each), as well as the means used for automating these processes. We also present the most important statistical conclusions.
We consider the application of the variational approach to solving complex problems of statistical estimation of non-linear dynamic systems under the maximum likelihood criterion. We discuss questions of the adequacy of variational and direct estimates.
The task of optimal control of an active object's movement during its consecutive meetings with a system of mobile objects for special purposes is considered. A composite method of numerical solution and an example are suggested.
We present a method of comparative indicators, that is, a method for computing the linear convolution in comparison to a selected unit. We describe and consider a way of mutually expressing the results calculated for a comparison of two systems (groups of objects).
The problems of creating and using fault-tolerant computer systems are considered. The article presents different solutions to the problem of structure dynamics control for fault-tolerant computer systems functioning in high-availability and load-balancing modes.
The paper describes dynamic (non-stationary) models for global telecommunication systems with changeable structure and their generalizations. Optimization algorithms for dynamic, adaptive, and neural routing and principles for multiagent processing of information flows in integrated infotelecommunication networks are investigated.
One of the main features of modern complex technical objects (CTO) is the variability of their parameters and structures caused by objective and subjective factors at different phases of the CTO life cycle. The aim of this investigation is to develop principles, methods, and algorithms for solving the tasks of comprehensive planning of CTO (information system) modernization. These methodological and technical foundations are based on the theory of CTO structure dynamics control developed by the authors of this investigation.
The problems solved during the development of a program product project with the use of mathematical models are defined. Diagrams of technological and information interaction between the models are shown. The basic characteristics of the project are defined. A description of the models included in the complex (evaluation and tracking models) is given.
An analysis of existing methods for the performance evaluation of computing systems is given. The basic computer subsystems are considered, and characteristics and criteria for performance evaluation are chosen. A new performance evaluation method is offered, based on the analysis of characteristic values and the convolution of these values into an integrated parameter. Such characteristics of system behaviour as efficiency, stability, and load domination are considered.
Algorithms of priority assignment for real-time systems including simple and complex tasks are discussed. The algorithms increase the efficiency of RAM use.
An analysis of the compiler compilers LLgen and ACCENT is conducted on the basis of testing on example settings, in order to draw conclusions about their applicability. Tables of the characteristics of little-known compiler compilers are included, and their Internet addresses are listed.
Different approaches to video data processing, including video data compression, are reviewed. Some wireless network standards are considered. The Wi-Fi and Bluetooth technologies are compared.
The construction of an abstract computer memory automaton is presented, which realizes the functionality of a computer's memory; additive and multiplicative operations are defined. Some questions of applying the obtained formal results in the practice of distributed computing are discussed.
One type of automation system is considered: an automation system for the monitoring and control of complex technical objects in real time under conditions of structural degradation.
The methodological regulative components, such as the two-point-of-view definition of an organizational-technical system and the paradigm and principles of structure-and-purpose analysis of this class of systems, are discussed. The conception of structure-and-purpose analysis of such systems based on the methodological regulative components is defined. It allows us to improve the quality of systems analysis results.
We consider the application of the variational approach to solving complex problems of statistical estimation of non-linear dynamic systems under the maximum likelihood criterion. Questions of accounting for a priori information and regularization of the estimates are discussed.
The key aspects of the optimal modernization of mobile communication systems are analyzed. The problem is shown to be reducible to the basic problem of optimal planning of such systems.
A technological scheme of surface water remote sensing in the visible and near-infrared spectral regions under natural illumination is considered. The technology is based on the numerical solution of the radiative transfer equation in the atmosphere and water media and includes the formation of special databases for accelerating the water state retrieval procedure.
Computational models are considered for real-time software applications with compound tasks. The possibility of feasibility analysis of such applications is demonstrated.
At the moment, the theory, methods, and techniques concerning the application of mathematical models are widely used. Research in this field is very intensive, and the area of applications and the range of model classes are growing permanently. Nevertheless, such problems as multi-criteria estimation of model quality, analysis and arrangement of model classes, and justified selection of applied task-oriented models are not yet well investigated. The importance of the considered problem increases when the object of research is described not via a single model but via a multiple-model complex consisting of models of different classes or combined models (e.g., analytical-simulation, logical-algebraic, etc.). The aforementioned problems are the primary objects of the theory of quality control of mathematical models and multiple-model complexes. The article presents the methodological and technical basics of this theory.
An overview of the evolution of digital technologies is given. The difference between the paradigm of digital realization of analog models and digital technologies is noted. An algorithmic approach to audio and video signal processing is considered.
Malefactor deception mechanisms are promising mechanisms supplementing the existing mechanisms of information resource protection in computer networks. These mechanisms are intended to increase the security of target information systems by attracting malefactors to false information goals, deceiving them, identifying their actions, and disclosing them. Malefactor deception mechanisms are realized by means of the development and usage of deception systems (DS) or components, also named false information systems, simulators of information systems, traps, or honeypots. The paper defines the purpose, functions, and structure of DS, presents their classification, describes the offered approach to the development of perspective DS, offers schemes for the realization of disguised counteraction against network attacks and the architecture of the developed DS prototype, and describes the experiments carried out with the prototype.
The paper presents recent practice in applying grid technologies to the construction of a high-performance cluster. A brief description of the layered grid architecture is presented. The analysis of the experience of parallel program (software) development reveals the need for further progress in parallel computations. Parallel program structures and process interaction schemes are also presented in the paper. The need to create an efficient system of resource distribution control is demonstrated. The perspectives and trends of SPIIRAS cluster construction are outlined, towards integration with the clusters of other academic institutions.
Three scheduling algorithms are examined: static priority-driven scheduling, time-sliced scheduling, and deadline-driven scheduling. Their characteristics for scheduling tasks in real time are compared. A new, simpler proof is offered to show that the deadline-driven scheduling algorithm is capable of utilizing the entire processor resource without transient overloads.
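The full-utilization property referred to above is the classical Liu–Layland result for deadline-driven (EDF) scheduling of periodic tasks; a minimal sketch of the corresponding schedulability check, with a hypothetical task set, follows.

```python
# Minimal sketch (standard textbook result, not the paper's proof): the
# deadline-driven (EDF) schedulability test for periodic tasks is simply
# that total utilization does not exceed 1 (full processor utilization).
def edf_schedulable(tasks):
    """tasks: iterable of (computation_time, period) pairs."""
    return sum(c / t for c, t in tasks) <= 1.0

# Example task set: utilizations 0.25 + 0.4 + 0.3 = 0.95 <= 1, so schedulable.
print(edf_schedulable([(1, 4), (2, 5), (3, 10)]))  # True
```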
This paper proposes a new Plague Risk Index based on the Immunocomputing approach. An application of the index is provided by an example of monitoring of the natural plague focus in Kazakhstan.
The paper gives a state-of-the-art review of the bases for computer interpretation of applied formalized axiomatic theories, their advantages and restrictions when implemented in high-tech software for information and management systems, and the development of an algebraic model for proving the adequacy of a theory to its computer interpretation.
In this article, the principal characteristics and differences of partially computer-based courses, virtual courses, and online courses, and some of their interface peculiarities, are discussed. A general scheme of the learning process is given which reflects the principal interface elements. The relationship between a new education model, which is gaining acceptance in higher education institutions, and the structure of such computer-based, virtual, and online courses is discussed.
The proposed method provides an algorithm to calculate the minimal volume of resources necessary for a communication channel under given requirements on the quality of network traffic service. The method is based on information about the joint communication channels and estimates of the traffic parameters. If a part of the traffic is known in advance to be denied, this part should be eliminated from the network as early as possible. Although the method has been developed for calculating the parameters of reserved communication channels, it can be applied to planning the network as a whole.
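The abstract does not give the calculation itself; as a purely illustrative stand-in, the classical Erlang B recursion below sizes a channel pool for a blocking-probability requirement. The numbers and the choice of formula are assumptions, not the paper's method.

```python
# Illustration only: classical Erlang B sizing of the minimal number of
# channels meeting a blocking-probability requirement. The paper's own
# method is based on traffic parameter estimates and may differ.
def erlang_b(traffic, channels):
    """Blocking probability for offered traffic (Erlangs) and channel count."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)   # numerically stable recursion
    return b

def min_channels(traffic, target_blocking):
    n = 1
    while erlang_b(traffic, n) > target_blocking:
        n += 1
    return n

print(min_channels(10.0, 0.01))   # channels for 10 Erlangs at 1% blocking
```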
This paper presents the main ideas and directions of development of a new type of computing based on mathematical models of information processing by proteins and immune networks.
The paper presents a review of taxonomies of attacks on computer systems. The analysis of the following types of taxonomies is carried out: lists of attack terms; lists of attack categories; attack results categories; empirical lists of attack types; vulnerability matrices; action-based taxonomies; taxonomies based on attack signatures; security flaw or vulnerability taxonomies; and incident taxonomies.
The algebraic Bayesian network (ABN) approach is employed to build an effective model of reliability forecasting for hard-to-access nodes of complex-structure systems. ABN provides error monitoring when forecasting the behavior of the reliability function.
A brief description of a new technology for the detection of logical laws based on representations of local geometry is given. The features of its application to the search for complex acyclic patterns in sequences of numbers and symbols are described. Examples are given.
The problems of estimating the availability of hierarchical communication systems for inter-tier ring protection schemes are considered. The cases of unprotected, partially protected, and completely protected systems are analyzed.
New program-technical solutions using the actual achievements and possibilities of domestic science in the field of analytical systems are offered. The possibilities of creating effective strategic analytical systems of the "partner" class, having no analogues in the world, are discussed.
The problems of creating, using, and developing automation systems are considered. Special attention is given to a computer-aided system for monitoring technical object states in real time, taking into account the possibility of destruction of their structures.
Methods and means for organizing programs in a multiprocessor environment with dynamic architecture are considered. An object-oriented method is suggested for programming auto-transforming network-represented programs that reflect mainly the structure of the problem being solved rather than the properties of the computational environment. Means for supporting this approach at the hardware level and at the level of the operating system, and a method of graphical parallel program design in the network representation, are discussed.
The development of an effective strategy for the reduction of air pollution on the regional scale requires fast estimation of both the existing pollution level and that resulting from the different possible sets of measures affecting pollutant emissions from industrial sources. In order to accelerate the necessary calculations, the Eulerian-Lagrangian scheme for regional air pollution modelling and the corresponding computer code were adapted for multiprocessor systems. The code was tested on three systems with local memory and different software environments. The results demonstrated the effectiveness of high-performance computing in the field of environmental management.
Automatic devices with stack memory, traditionally serving as a formalism for representing syntactic analysis algorithms, can have a much wider application, in particular for formalizing rules for solving combinatorial tasks in various subject domains. The article discusses, on several examples, the features of the design and organization of such automata. An analogy between such automata and the backtracking mechanism of a Prolog system is also drawn.
Methods of structural synthesis and common principles of the invariant analysis of complicated nonlinear mathematical models are considered. The analytic form of model representation is multiparameter differential equations or dynamic systems with control. The concepts of formal models of polynomial type, formal integral manifolds, and differential complexes are introduced. Examples of algorithms are given. Models of polynomial and singular type, converted and controlled, and models of ecological minimum type are considered. Examples from the mathematical handbook are given.
In today’s world, the Internet of Things has become an integral part of our lives. The increasing number of intelligent devices and their pervasiveness has made it challenging for developers and system architects to plan and implement systems of Internet of Things and Industrial Internet of Things effectively. The primary objective of this work is to automate the design process of Industrial Internet of Things systems while optimizing the quality of service parameters, battery life, and cost. To achieve this goal, a general four-layer fog-computing model based on mathematical sets, constraints, and objective functions is introduced. This model takes into consideration the various parameters that affect the performance of the system, such as network latency, bandwidth, and power consumption. The Non-dominated Sorting Genetic Algorithm II is employed to find Pareto optimal solutions, while the Technique for Order of Preference by Similarity to Ideal Solution is used to identify compromise solutions on the Pareto front. The optimal solutions generated by this approach represent servers, communication links, and gateways whose information is stored in a database. These resources are chosen based on their ability to enhance the overall performance of the system. The proposed strategy follows a three-stage approach to minimize the dimensionality and reduce dependencies while exploring the search space. Additionally, the convergence of optimization algorithms is improved by using a biased initial population that exploits existing knowledge about how the solution should look. The algorithms used to generate this initial biased population are described in detail. To illustrate the effectiveness of this automated design strategy, an example of its application is presented.
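As a small illustration of the compromise-selection stage described above, the sketch below implements a standard TOPSIS ranking in Python; the decision matrix, weights, and criteria directions are hypothetical and not taken from the paper.

```python
# Minimal TOPSIS sketch (hypothetical decision matrix, not from the paper):
# ranking Pareto-front solutions by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j]=True if larger is better."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector normalization
    m = m * weights
    ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
    worst = np.where(benefit, m.min(axis=0), m.max(axis=0))
    d_pos = np.linalg.norm(m - ideal, axis=1)
    d_neg = np.linalg.norm(m - worst, axis=1)
    return d_neg / (d_pos + d_neg)                     # closeness in [0, 1]

# Criteria: latency (min), battery life (max), cost (min) for three designs.
scores = topsis(np.array([[20.0, 40.0, 5.0],
                          [35.0, 60.0, 4.0],
                          [25.0, 50.0, 6.0]]),
                weights=np.array([0.4, 0.4, 0.2]),
                benefit=np.array([False, True, False]))
print(scores.argmax())   # index of the compromise solution
```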
Hydrocephalus is a central nervous system disorder which most commonly affects infants and toddlers. It starts as an abnormal build-up of cerebrospinal fluid in the ventricular system of the brain. Hence, early diagnosis becomes vital; it may be performed by Computed Tomography (CT), one of the most effective methods for diagnosing hydrocephalus, where the enlarged ventricular system becomes apparent. However, most disease progression assessments rely on the radiologist's evaluation and physical measures, which are subjective, time-consuming, and inaccurate. This paper develops an automatic prediction utilizing the H-detect framework for enhanced, accurate hydrocephalus prediction. A pre-processing step is used to normalize the input image and remove unwanted noise, which helps extract valuable features easily. Feature extraction is done by segmenting the image based on edge detection using triangular fuzzy rules, so that the exact information on the nature of the CSF inside the brain is highlighted. These segmented images are saved and then given to the CatBoost algorithm. Categorical feature processing allows for quicker training, and, when necessary, the overfitting detector stops model training, so that hydrocephalus is predicted efficiently. The outcomes demonstrate that the new H-detect strategy outperforms traditional approaches.
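Only the final classification stage lends itself to a compact sketch; below is a hedged illustration of CatBoost training with the overfitting detector enabled, on synthetic stand-in features (the H-detect pre-processing and fuzzy segmentation are not reproduced).

```python
# Sketch of the classification stage only (synthetic data, assumed feature
# layout): CatBoost training with the overfitting detector, as the abstract
# describes. Not the authors' H-detect pipeline.
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # stand-in for segmented-image features
y = rng.integers(0, 2, size=200)        # 1 = hydrocephalus, 0 = healthy

model = CatBoostClassifier(
    iterations=500,
    od_type="Iter", od_wait=50,         # overfitting detector stops training
    verbose=False,
)
# eval_set lets the overfitting detector monitor the validation loss.
model.fit(X[:150], y[:150], eval_set=(X[150:], y[150:]))
print(model.predict(X[150:155]))
```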
Automatic syntactic analysis of a sentence is an important computational linguistics task. At present, there are no syntactic structure parsers for Russian that are publicly available and suitable for practical applications. Ground-up creation of such parsers requires building a treebank annotated according to a given formal grammar, which is quite a cumbersome task. However, since there are several syntactic dependency parsers for Russian, it seems reasonable to employ dependency parsing results for syntactic structure analysis. The article introduces an algorithm that constructs the constituency tree of a Russian sentence from its syntactic dependency tree. The formal grammar used by the algorithm is based on D.E. Rosenthal's classic reference. The algorithm was evaluated on 300 Russian-language sentences: 200 of them were selected from the aforementioned reference, and 100 from OpenCorpora, an open corpus of sentences extracted from Russian news and periodicals. During the evaluation, the sentences were passed to the syntactic dependency parsers from the Stanza, SpaCy, and Natasha packages, and the resulting dependency trees were processed by the proposed algorithm. The obtained constituency trees were compared with trees manually annotated by experts in linguistics. The best performance was achieved with the Stanza parser: the constituency parsing F1-score was 0.85, and the sentence parts tagging accuracy was 0.93, which would be sufficient for many practical applications, such as event extraction, information retrieval, and sentiment analysis.
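As a hedged illustration of the first step of this pipeline, the sketch below obtains a dependency tree for a Russian sentence with Stanza; the conversion to a constituency tree, which is the article's contribution, is not reproduced here.

```python
# Minimal sketch of the dependency-parsing step only, using Stanza.
import stanza

stanza.download("ru")                   # one-time model download
nlp = stanza.Pipeline("ru", processors="tokenize,pos,lemma,depparse")

doc = nlp("Мама мыла раму.")
for sent in doc.sentences:
    for word in sent.words:
        # word.head is 1-based; 0 denotes the root of the dependency tree.
        head = sent.words[word.head - 1].text if word.head > 0 else "ROOT"
        print(word.text, word.deprel, "<-", head)
```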
The article is devoted to the original mathematical models of combat operations developed in Russia at the beginning of the XX century. One of the first works outlining approaches to the mathematical modeling of military operations is the article by Y. Karpov, «Tactics of fortress artillery», published in 1906. It considered the task of defending a fortress from attacking enemy infantry chains. Based on the idea of the attackers overcoming the line of defense, mathematical relations were obtained linking the parameters of a shrapnel-charge shot with the movements of an infantryman. The task of using a machine gun for the defense of the fortress was considered similarly. After analyzing the obtained relations, Y. Karpov came to the conclusion that all means of defense of the fortress can be compared through the length of the line defended by each means. P. Nikitin developed Y. Karpov's ideas, considering a wide range of means of destruction. Based on the results of the research, he made recommendations on the distribution of forces and means in the defense of fortresses. M. Osipov in 1915 published vivid and original models of two-sided combat operations, a year earlier than the well-known Lanchester theory. Summing the losses of the fighting sides over infinitesimal intervals of time and then passing to the limit, he obtained linear and quadratic laws relating the ratio of the numbers of the fighting sides to their losses, and explored heterogeneous means of destruction. All this was verified against the practice of various battles. M. Osipov showed that the coefficients in the laws of losses depend on the training of personnel, terrain, the presence of fortifications, the moral and psychological state of the troops, etc. Based on the results of mathematical modeling, he was the first to substantiate a number of provisions of the art of war. He showed that, in general, neither the linear nor the quadratic law of losses corresponds to the practice of actual battles. For ease of use at that level of computing technology and to obtain a more reliable result, M. Osipov proposed using the exponent "three halves" (3/2) in the laws of losses, although he himself understood its approximate nature. Much attention is paid to the problem of authorship, the search for a prototype of the creator of the first two-sided model of combat operations, and the application of the theory to solving modern applied problems.
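For reference, the loss laws mentioned here take the following standard form in modern notation (our illustration, not Osipov's original notation):

```latex
% Quadratic (aimed-fire) law: each side's losses are proportional to the
% opposing side's current strength x(t), y(t):
\frac{dx}{dt} = -a\,y, \qquad \frac{dy}{dt} = -b\,x
\quad\Longrightarrow\quad b\,(x_0^2 - x^2) = a\,(y_0^2 - y^2).
% Linear (area-fire) law: losses are proportional to both strengths:
\frac{dx}{dt} = -a\,xy, \qquad \frac{dy}{dt} = -b\,xy
\quad\Longrightarrow\quad b\,(x_0 - x) = a\,(y_0 - y).
```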
Key management is the most difficult task of secure telecommunication systems using symmetric encryption, owing to the need for the preliminary and resource-intensive organization of secret channels for delivering keys to network correspondents. An alternative is provided by methods of generating keys over open communication channels. Information theory shows that these methods are feasible under the condition that the information rate of the correspondents' channel exceeds the rate of the intruder's interception channel. The search for methods that provide such an informational advantage for correspondents is therefore topical. The goal is to determine the information-theoretic conditions for the formation of a virtual network and interception channel for which the ratio of information rates is better for the correspondents than the ratio for the original network and interception channel. The paper proposes an information transfer model that includes a connectivity model and an information transfer method for asymptotic lengths of code words. The model includes three correspondents and is characterized by the introduction of an ideal broadcast channel in addition to a broadcast channel with errors. The model introduces a source of "noisy" information, which is transmitted over the channel with errors, while the transmission of code words, using the known method of random coding, is carried out over the error-free channel. For asymptotic lengths of code words, all actions of correspondents in processing and transmitting information in the model are reduced to the proposed method of transmitting information. The use of the method by the correspondents within the framework of the transmission model makes it possible to simultaneously form for them a new virtual broadcast channel with the same information rate as the original channel with errors, and for the intruder a new virtual broadcast interception channel with a rate lower than the information rate of the initial interception channel. The information-theoretic conditions for the deterioration of the interception channel are proved in a statement. The practical significance of the results lies in the possibility of using them to assess the information efficiency of open network key formation in the proposed information transfer model, as well as in extending well-known scientific results on open key agreement. The proposed transmission model can be useful for research on key management systems and on protecting information transmitted over open channels. Further research concerns the information-theoretic assessment of the network key throughput, i.e., the potential information-theoretic rate of network key formation.
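For context, a standard information-theoretic statement of this advantage condition (due to Maurer and to Ahlswede and Csiszár), given here in generic notation rather than the paper's own, reads:

```latex
% X, Y: observations of the two correspondents; Z: the intruder's observation.
% A positive secret-key rate over public discussion is achievable whenever
S(X;Y\,\|\,Z) \;\ge\; I(X;Y) - \min\bigl(I(X;Z),\; I(Y;Z)\bigr) \;>\; 0,
% i.e. the legitimate channel is more informative than the interception channel.
```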
On the Internet, "fake news" is a common phenomenon that frequently disturbs society because it contains intentionally false information. The issue has been actively researched using supervised learning for automatic fake news detection. Although accuracy is increasing, detection is still limited to identifying false information within channels on social platforms. This study aims to improve the reliability of fake news detection on social networking platforms by examining news from unknown domains. In particular, information on social networks in Vietnam is difficult to detect and prevent because everyone has equal rights to use the Internet for different purposes. These individuals have access to several social media platforms, and any user can post or spread news through online platforms. These platforms do not attempt to verify users or the content they post. As a result, some users try to spread fake news through these platforms to campaign against an individual, a society, an organization, or a political party. In this paper, we propose the analysis and design of a deep-learning model for fake news recognition (called AAFNDL). The work proceeds as follows: 1) we analyze existing techniques such as Bidirectional Encoder Representations from Transformers (BERT); 2) we build the model for evaluation; and 3) we apply modern techniques to the model, such as deep learning and classification techniques, to classify fake information. Experiments show that our method can improve on other methods by up to 8.72%.
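A hedged sketch of a BERT-based classification stage follows; the checkpoint is a generic multilingual BERT with a freshly initialized (untrained) classification head, shown only for pipeline shape, and AAFNDL's own layers and fine-tuned weights are not reproduced here.

```python
# Minimal BERT sequence-classification sketch with Hugging Face transformers;
# labels and input text are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=2)                      # 0 = real, 1 = fake (convention ours)

inputs = tok("An example news headline to score.",
             return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("P(fake) =", torch.softmax(logits, dim=-1)[0, 1].item())
```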
The article is devoted to the development of model-algorithmic support and software tools for automating the integration of Earth remote sensing data and other heterogeneous information resources in solving problems of monitoring and proactive management of territorial development. A distinctive feature of the problem statement is the inclusion, among the resources to be integrated, of tools for modeling the state of natural and technical objects located in the analyzed territory. The development is based on the justification of a technology for integrating heterogeneous information resources, which includes an algorithm for choosing the type of architecture for the created complex of automation tools, a method for describing the information process of integrating data and jointly processing them, an algorithm for determining the best configuration of information resources when solving thematic problems, as well as a set of software and technological solutions for the integration of remote sensing data with other necessary data and their joint use in modeling. As a result of the research and the application of the developed algorithms, it has been established that the most suitable architecture for systems integrating heterogeneous information resources is a service-oriented one. To describe the information integration process, it is proposed to use the Business Process Model and Notation (BPMN). The key component of the development, in terms of software and technological solutions for the integration of heterogeneous data, is the proposed scheme of interaction with data providers and consumers based on the creation of a data abstraction layer. The proposed solution makes it possible to bring heterogeneous data to a single format suitable for further processing by modeling tools. Testing on specific thematic tasks of monitoring and managing territorial development showed the feasibility of the proposed integration technology and the developed software tools, as well as a significant gain in the speed of solving thematic tasks.
A model of oligopoly with an arbitrary number of rational agents that are reflexive according to Cournot or Stackelberg, under conditions of incomplete information, is considered for the classical case of linear cost and demand functions. The problem of achieving equilibrium through the mathematical modeling of agents' decision-making processes is investigated. Work in this direction is relevant due to the importance of understanding the processes in real markets and of bringing theoretical models closer to them. In the framework of a dynamic model of reflexive collective behavior, each agent at each moment adjusts its output, making a step in the direction of the output that maximizes its profit under the expected choice of competitors. The permissible step value is set by a range. This article states and solves the problem of finding the ranges of agents' permissible steps, formulated as conditions that guarantee the convergence of the dynamics to equilibrium. The novelty of the study lies in the use of the norm of the error transition matrix from time t to time t+1 as a criterion of convergence of the dynamics. It is shown that the dynamics converge if, starting at some point in time, this norm is less than unity; failure to fulfill the criterion manifests itself especially in multidirectional choice, when some agents choose "big" steps towards their current goals while others, on the contrary, choose "small" steps. Violations of the criterion also become more pronounced as the market grows. General conditions on the ranges that ensure convergence of the dynamics for an arbitrary number of agents are established, and a method for constructing the maximum such ranges is proposed, which also constitutes a novelty of the study. The results of solving the above problems are presented for particular cases of oligopoly that are the most widespread in practice.
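In generic notation (ours, not the authors'), the convergence criterion described above has the familiar contraction form:

```latex
% e_t: deviation of the output vector from equilibrium at time t;
% Q_t: the error transition matrix from time t to time t+1.
e_{t+1} = Q_t\, e_t, \qquad
\|Q_t\| < 1 \ \ \text{for all } t \ge t_0
\quad\Longrightarrow\quad \|e_t\| \to 0,
% i.e. the collective dynamics converge to the equilibrium output profile.
```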
Ensemble learning algorithms such as bagging often generate unnecessarily large models, which consume extra computational resources and may degrade the generalization ability. Pruning can potentially reduce ensemble size as well as improve performance; however, researchers have previously focused more on pruning classifiers than regressors. This is because, in general, ensemble pruning is based on two metrics: diversity and accuracy. Many diversity metrics are known for problems dealing with a finite set of classes defined by discrete labels; therefore, most of the work on ensemble pruning is focused on such problems: classification, clustering, and feature selection. For the regression problem, it is much more difficult to introduce a diversity metric; in fact, the only such metric known to date is a correlation matrix based on regressor predictions. This study seeks to address this gap. First, we introduce the mathematical condition that allows checking whether the regression ensemble includes redundant estimators, i.e., estimators whose removal improves the ensemble performance. Developing this approach, we propose a new ambiguity-based pruning (AP) algorithm based on the error-ambiguity decomposition formulated for the regression problem. To check the quality of AP, we compare it with two methods that directly minimize the error by sequentially including and excluding regressors, as well as with the state-of-the-art Ordered Aggregation algorithm. Experimental studies confirm that the proposed approach reduces the size of the regression ensemble while simultaneously improving its performance, and surpasses all compared methods.
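The decomposition underlying the AP algorithm is the classical error-ambiguity decomposition of Krogh and Vedelsby; in standard notation, for a weighted ensemble with members $f_i$ and weights $w_i$ summing to one:

```latex
% Ensemble prediction: \hat f = \sum_i w_i f_i, target y.
E \;=\; \bar{E} - \bar{A}, \qquad
\bar{E} = \sum_i w_i\, \mathbb{E}\bigl[(f_i - y)^2\bigr], \qquad
\bar{A} = \sum_i w_i\, \mathbb{E}\bigl[(f_i - \hat{f}\,)^2\bigr].
% An estimator is redundant when dropping it reduces the average error term
% \bar{E} by more than it reduces the ambiguity (diversity) term \bar{A}.
```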
An effective economy requires prompt prevention of misconduct by legal entities. With the ever-increasing transaction rate, an important part of this work is finding market collusions based on statistics of electronic traces. We report a solution to this problem based on a quantum-theoretic approach to behavioral modeling. In particular, cognitive states of economic subjects are represented by complex-valued vectors in a space formed by the basis of decision alternatives, while decision probabilities are defined by projections of these states onto the corresponding directions. Coordination of multilateral behavior then corresponds to entanglement of the joint cognitive state, measured by standard metrics of quantum theory. A high score on these metrics indicates the likelihood of collusion between the considered subjects. The resulting method for collusion discovery was tested on open data on the participation of legal entities in public procurement between 2015 and 2020, available at the federal portal https://zakupki.gov.ru. Quantum models were built for about 80 thousand unique pairs and 10 million unique triples of agents in the obtained dataset. The reliability of collusion discovery was determined by comparison with open data of the Federal Antimonopoly Service available at https://br.fas.gov.ru. The achieved performance allows the discovery of about one-half of known pairwise collusions with a reliability of more than 50%, which is comparable with detection based on classical correlation and mutual information. For three-sided behavior, in contrast, the quantum model is practically the only available option, since classical measures are typically limited to the bilateral case; half of such collusions are detected with a reliability of 40%. The obtained results indicate the efficiency of the quantum-probabilistic approach to modeling economic behavior. The developed metrics can be used as informative features in analytic systems and machine learning algorithms for this field.
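As a minimal illustration of entanglement scoring for a pair of agents, assuming (purely for illustration) that the joint cognitive state is a two-qubit pure state, the standard concurrence can be computed directly from the amplitudes; the state vector below is made up.

```python
# Concurrence of a two-qubit pure state |psi> = a|00> + b|01> + c|10> + d|11>:
# C = 2|ad - bc|, with 0 = separable (no coordination) and 1 = maximal entanglement.
import numpy as np

psi = np.array([0.6, 0.1, 0.1, 0.787], dtype=complex)
psi /= np.linalg.norm(psi)             # state over joint alternatives 00,01,10,11
a, b, c, d = psi
concurrence = 2 * abs(a * d - b * c)
print("collusion score (concurrence):", round(concurrence, 3))
```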
The increasing storage density of modern NAND flash memory chips, achieved both by scaling down the cell size and by increasing the number of cell states used, leads to a decrease in data storage reliability, namely in error probability, endurance (number of P/E cycles) and retention time. Error correction codes are often used to improve the reliability of data storage in multilevel flash memory. The effectiveness of error correction codes is largely determined by the accuracy of the model that describes the basic processes associated with writing and reading data. The paper describes the main sources of disturbance for a flash cell that affect the threshold voltage of the cell in NAND flash memory, and presents an explicit form of the threshold voltage distribution. As an approximation of the obtained threshold voltage distribution, a Normal-Laplace mixture model is shown to be a good fit for multilevel flash memories subjected to a large number of rewriting cycles. For this model, a performance analysis of a concatenated coding scheme with an outer Reed-Solomon code and an inner multilevel code consisting of binary component codes is carried out. The analysis makes it possible to obtain tradeoffs between the error probability, storage density, and the number of P/E cycles. The resulting tradeoffs show that the considered concatenated coding schemes allow, at the cost of a very slight decrease in storage density, increasing the number of P/E cycles to 2–2.5 times the nominal endurance specification while maintaining the required bit error probability.
An increase in the number of cars outpaces the development of transport infrastructure, reducing the efficiency of cargo and passenger transportation in cities. Simulation of flow irregularity in time (the peak hour) shows the key role of the inter-car interval as a factor in overcoming the congestion that accumulates when average speed drops on highly loaded roads. To reduce the effective driver reaction time, which defines the minimum distance between cars, the influence of human factors must be minimized. Automation of the process (unmanned control) requires an effective exchange of navigation and route data between traffic participants. A summary of requirements for such an information exchange system establishes the priority of the suggested communication and navigation system (CNS) based on broadcast radio communication. Its application makes it possible to raise traffic safety and efficiency simultaneously. Increased predictability of neighboring drivers' actions ensures traffic safety, and the exchange of data with traffic control centers (TCC) enables centralized motion regulation. A distributed network of transceiver stations forms a local positioning system based on trilateration principles. Algorithms for verifying onboard positioning results and automatically resolving communication conflicts ensure high reliability of CNS functioning. Abandoning point-to-point communication principles allows the system to operate even at high car densities of up to several thousand per square kilometer. In combination with advanced traffic organization technologies (formation of a city highway grid and a "total green wave" mode), CNS and TCC are capable of raising the average speed in city conditions above 45 km/hour. The aggregate savings on last-mile transportation due to the suggested innovations are estimated at several percent of GDP, owing to fewer accidents and less congestion, even without accounting for social and ecological effects.
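To make the trilateration principle concrete, here is a minimal least-squares sketch of position estimation from ranges to fixed transceiver stations; the station layout, true position, and noise level are illustrative assumptions.

```python
# Linearized trilateration: subtract the first range equation from the others
# to obtain a linear system in (x, y), then solve it by least squares.
import numpy as np

stations = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
true_pos = np.array([120.0, 340.0])
ranges = np.linalg.norm(stations - true_pos, axis=1) + np.random.normal(0, 1.0, 4)

# 2*(x_i - x_0)*x + 2*(y_i - y_0)*y = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2
A = 2 * (stations[1:] - stations[0])
b = (ranges[0]**2 - ranges[1:]**2
     + np.sum(stations[1:]**2, axis=1) - np.sum(stations[0]**2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", pos)
```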
The article presents an analytical review of research in the field of affective computing. This research direction is a component of artificial intelligence, and it studies methods, algorithms and systems for analyzing human affective states during interactions with other people, computer systems or robots. In the field of data mining, affect is defined as the manifestation of psychological reactions to an exciting event, which can occur in both the short and the long term and can have different intensity. Affects in this field are divided into four types: affective emotions, basic emotions, sentiment and affective disorders. The manifestation of affective states is reflected in verbal data and non-verbal characteristics of behavior: acoustic and linguistic characteristics of speech, facial expressions, gestures and postures. The review provides a comparative analysis of the existing infoware for the automatic recognition of a person's affective states, using the examples of emotions, sentiment, aggression and depression. The few Russian-language affective databases are still significantly inferior in volume and quality to electronic resources in other world languages. Thus, there is a need to consider a wide range of additional approaches, methods and algorithms applicable under limited amounts of training and testing data, and to set the task of developing new approaches to data augmentation, model transfer learning and the adaptation of foreign-language resources. The article describes methods for analyzing unimodal visual, acoustic and linguistic information, as well as multimodal approaches to affective state recognition. A multimodal approach to automatic affective state analysis makes it possible to increase recognition accuracy compared to unimodal solutions. The review notes the trend in modern research of neural network methods gradually replacing classical deterministic methods, owing to better recognition quality and fast processing of large amounts of data. The advantage of multitask hierarchical approaches is the ability to extract new types of knowledge, including the influence, correlation and interaction of several affective states on each other, which potentially leads to improved recognition quality. The potential requirements for systems for affective state analysis and the main directions of further research are given.
The increasing flow of photo and video information transmitted through the channels of infocommunication systems and complexes stimulates the search for effective compression algorithms that can significantly reduce the volume of transmitted traffic while maintaining its quality. In the general case, compression algorithms are based on converting the correlated brightness values of the pixels of the image matrix into uncorrelated parameters, followed by encoding of the obtained transform coefficients. Since the main known decorrelating transforms are quasi-optimal, the task of finding transforms that take into account changes in the statistical characteristics of compressed video data remains relevant. These circumstances determined the direction of the study: the analysis of the decorrelating properties of the wavelet coefficients produced by a multi-scale image transform. The main result of the study is the establishment of the fact that the wavelet coefficients of the multi-scale transform have the structure of nested matrices, defined as submatrices; therefore, it is advisable to carry out the correlation analysis of the wavelet coefficients separately for the elements of each submatrix at each decomposition level. The main theoretical result is the proof that the kernel of each subsequent level of the multi-scale transform is a matrix consisting of the wavelet coefficients of the previous decomposition level. It is this fact that allows a conclusion about the dependence of the corresponding elements of neighboring levels. In addition, it has been found that there is a linear relationship between the wavelet coefficients within a local image area of 8×8 pixels. In this case, the maximum correlation of submatrix elements is directly determined by the form of their representation and is observed between neighboring elements located, respectively, in a row, a column or diagonally, which is confirmed by the scatter plots. The obtained results were confirmed by the analysis of samples from more than two hundred typical images. At the same time, it is substantiated that between the low-frequency wavelet coefficients of the upper decomposition level approximately the same dependences are preserved uniformly in all directions. The practical significance of the study is determined by the fact that all the obtained results confirm the presence of characteristic dependencies between the wavelet transform coefficients at different levels of image decomposition. This indicates the possibility of achieving higher video data compression ratios during encoding. The authors associate further research with the development of a mathematical model for adaptive arithmetic coding of video data and images that takes into account the correlation properties of the wavelet coefficients of a multi-scale transform.
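A minimal sketch of the kind of subband correlation analysis described above follows, using PyWavelets for the multi-scale transform; the test image, wavelet, and the choice of horizontally adjacent pairs are illustrative assumptions.

```python
# One-level 2D DWT, then correlation between horizontally adjacent
# coefficients in each subband (LL, LH, HL, HH).
import numpy as np
import pywt

img = np.random.rand(256, 256)                 # stand-in for a grayscale image
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")      # one decomposition level

def neighbor_corr(sub):
    """Correlation between horizontally adjacent wavelet coefficients."""
    return np.corrcoef(sub[:, :-1].ravel(), sub[:, 1:].ravel())[0, 1]

for name, sub in [("LL", cA), ("LH", cH), ("HL", cV), ("HH", cD)]:
    print(name, "horizontal neighbor correlation:", round(neighbor_corr(sub), 3))
```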
This article proposes algorithms for planning and controlling the movement of a mobile robot in a two-dimensional stationary environment with obstacles. The task is to reduce the length of the planned path, take into account the dynamic constraints of the robot and obtain a smooth trajectory. To take the dynamic constraints of the mobile robot into account, virtual obstacles are added to the map to cover the unfeasible sectors of movement. This way of accounting for dynamic constraints allows the use of map-oriented methods without increasing their complexity. An improved version of the rapidly exploring random tree algorithm (multi-parent nodes RRT, MPN-RRT) is proposed as a global planning algorithm. Multiple parent nodes decrease the length of the planned path compared with the original single-parent version of RRT. The shortest path on the constructed graph is found using the ant colony optimization algorithm. It is shown that the use of two parent nodes can reduce the average path length for an urban environment with low building density. To address the slow convergence of algorithms based on random search and path smoothing, the RRT algorithm is supplemented with a local optimization algorithm: the RRT algorithm searches for a global path, which is then smoothed and optimized by an iterative local algorithm. The lower-level control algorithms developed in this article automatically decrease the robot's velocity when approaching obstacles or turning. The overall efficiency of the developed algorithms is demonstrated by numerical simulation over a large number of experiments.
The article is devoted to the study of one of the current scenarios for the development of population processes in contemporary ecological systems. Biological invasions have become extremely common due to climate change, economic activities aimed at improving ecosystem productivity, and random events. The invader does not always smoothly occupy an ecological niche, as it does in logistic models; the dynamics of the situations we consider after the introduction of an alien species are extremely diverse. In some cases an outbreak of abundance develops rapidly, up to the point where the species begins to destroy its new range. The development of the situation in the course of an invasion depends on the superposition of biotic and abiotic factors. The dynamics of the invader's abundance is affected by favorable conditions and, to a greater extent, by the possibility of realizing its reproductive potential and by the resistance of the biotic environment. Counteraction develops with a delay and manifests itself when the invader reaches a significant abundance. In this work, a continuous model of the invasive process with a sharp transition to a state of population depression is developed. The stage of the population crisis ends with a transition to equilibrium, since the resistance in the model scenario depends on abundance adaptively and in a threshold manner. The problem of the computational description of a scenario with active but delayed environmental resistance is practically relevant for situations in which measures of artificial resistance to an undesirable invader are being developed. In the solution of our model, there is a mode of prolonged stable fluctuations after exiting the depression stage.
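One illustrative form of delayed, threshold-triggered environmental resistance (our toy construction, not the authors' exact equations) can be integrated with a simple Euler scheme:

```python
# Logistic growth with counteraction that switches on, with lag tau, once past
# abundance exceeds a threshold J; all parameter values are made up.
import numpy as np

r, K, tau, J, q = 1.2, 100.0, 3.0, 60.0, 0.9
dt, T = 0.01, 60.0
n = int(T / dt)
N = np.full(n, 5.0)                         # history buffer, N(t <= 0) = 5
lag = int(tau / dt)
for t in range(1, n):
    past = N[t - lag] if t >= lag else N[0]
    resistance = q * N[t-1] if past > J else 0.0   # delayed threshold response
    N[t] = N[t-1] + dt * (r * N[t-1] * (1 - N[t-1] / K) - resistance)
print("final abundance:", round(N[-1], 2))
```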
An oligopoly with an arbitrary number of Stackelberg leaders is considered under incomplete, asymmetric awareness of the agents and inadequacy of their predictions of competitors' actions. Models of the agents' individual decision-making processes are studied. The theory of reflexive games and the theory of collective behavior form the theoretical basis for the construction and analytical study of the process models; they complement each other in that reflexive games allow using collective behavior procedures together with the results of agents' reflections, leading to a Nash equilibrium. The dynamic decision-making process is considered as repeated static games over the range of agents' feasible responses to the expected actions of the environment, taking into account the current economic restrictions and competitiveness in each game. In each game, each reflexive agent calculates its current goal position and changes its state, taking steps towards the current goal position so as to obtain positive profit or minimize losses. The main result of this work is sufficient conditions for the convergence of the processes in discrete time for the case of linear agent costs and linear demand. New analytical expressions for the ranges of the agents' current steps are obtained that guarantee the convergence of the collective behavior models to a static Nash equilibrium; this allows each agent to maximize its profit, assuming common knowledge among the agents. The processes in which an agent chooses its best response are also analyzed; the latter may not yield converging trajectories. The case of duopoly is discussed in detail in comparison with modern results. The necessary mathematical lemmas, statements, and their proofs are presented.
The article presents the application of a statistical analysis algorithm for multi-temporal multispectral aerial photography data to identify areas of historical anthropogenic impact on the natural environment. The investigated site is located on the outskirts of the urban-type village of Znamenka (Znamensky District, Tambov Region) in a forest-steppe zone with typical chernozem soils, where arable lands were located in the second half of the 19th and early 20th centuries. Vegetation grown as a result of secondary succession in abandoned areas can serve as a sign for identifying traces of historical anthropogenic impact; such vegetation is distinguished from the surrounding natural environment by its type, age and growth density. Thus, the problem of detecting the boundaries of anthropogenic impact in multispectral images is reduced to the problem of vegetation classification. The initial data were the results of multi-temporal multispectral imaging in the green (Green), red (Red), red edge (RedEdge) and near-infrared (NIR) spectral ranges. The first stage of the algorithm is the calculation of Haralick texture features on the multispectral images; the second stage is the reduction of the number of features by principal component analysis; the third stage is the segmentation of the images based on the obtained features by the k-means method. The effectiveness of the proposed algorithm is shown by comparing the segmentation results with reference data from historical cartographic materials. The study of multi-temporal multispectral images makes it possible to characterize and take into account more fully the dynamics of phytomass growth in different periods of the growing season. Therefore, the obtained segmentation result reflects not only the configuration of areas of anthropogenically transformed natural environment, but also the features of the overgrowth of abandoned arable land.
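A hedged sketch of the three-stage pipeline (texture features, then PCA, then k-means) follows; the window size, GLCM distances and angles, the particular Haralick properties, and the number of clusters are illustrative assumptions, and scikit-image 0.19 or newer is assumed for the function names.

```python
# Stage 1: Haralick-style GLCM features per window; stage 2: PCA;
# stage 3: k-means segmentation of the windows.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

band = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in band
win, feats = 16, []
for i in range(0, 128, win):
    for j in range(0, 128, win):
        glcm = graycomatrix(band[i:i+win, j:j+win], distances=[1],
                            angles=[0, np.pi/2], levels=256, normed=True)
        feats.append([graycoprops(glcm, p).mean()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

X = PCA(n_components=2).fit_transform(np.array(feats))     # stage 2
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)    # stage 3
print(labels.reshape(8, 8))                                # window-level map
```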
The energy capacity of the batteries used as the main power source in mobile robotic devices determines the autonomous operation time of the robot. To plan the execution of tasks by a group of robotic tools in terms of time consumption, it is important to take into account the time during which the battery of each individual robot is charged. When using wireless power transfer, this time depends on the efficiency of the power transfer system, on the power of the transmitting part of the system, and on the level of charge required. In this paper, we propose a method for estimating the time of transfer of energy resources between two robots that takes these parameters into account. The proposed method includes the application of an algorithm for the final positioning of the robots, the estimation of linear offsets between them, the calculation of the transfer efficiency, and the determination of the battery charging time using the parameters obtained at the previous stages. The final positioning algorithm uses robot vision algorithms to search for fiducial markers and determine their spatial characteristics, ensuring the final positioning of the mobile robotic platforms. These characteristics are also used to determine the linear offsets between the robots, on which the efficiency of energy transfer depends. To determine the efficiency, the method uses a mathematical model of the energy characteristics of the wireless power transfer system together with the obtained linear offsets. At the last stage of the method, the charging time of the mobile robot's battery is calculated, taking into account the data from the previous stages. Applying the proposed method to simulate the positioning of robots over a certain set of points in the workspace reduces the time spent on charging the robot battery when using wireless power transfer. As a result of the simulation, it was determined that the transfer of energy resources between robots took place with an efficiency ranging from 58.11% to 68.22%, and of the 14 positioning points, 3 were identified with the shortest energy transfer time.
The problems of organizing medical care in the context of the COVID-19 pandemic, associated with the uncertainty and limited availability of various resources, led to the need to improve decision-making systems for the hospitalization of patients. Situational management can improve the decision-making process to better fit the current situation. At the same time, it becomes important to take into account the influence of psychological factors on the decisions made during hospitalization. The paper proposes the use of coalition games for situational management during the hospitalization of patients. The players and members of a coalition are hospitals, ambulance teams, patients and computed tomography centers. The goal of the game is to form a coalition of participants that provides the maximum benefit in terms of the time and cost of hospitalization at the moment of decision making. The general scheme of hospitalization, the main sources of information about the situation, and the formulation and formalization of the problem are considered. An experiment was carried out in which the formation of a coalition during hospitalization was tested on data obtained from analyzing the dynamics of the COVID-19 pandemic. Due to the small amount of data and the lack of approved models of the situation's development, some of the parameters in the calculation were estimated using heuristic models based on the analysis of information from open sources. The experiment result contains a set of coalitions that provide the maximum benefit under the specified constraints. At the same time, the calculation time of the coalition game allows using the proposed decision-making support model during hospitalization in the dispatch service of ambulance stations.
The article discusses a procedure for correcting the trajectory of a robotic platform (RTP) on a plane in order to reduce the probability of its defeat/detection in the field of a finite number of repeller sources, each of which is described by a mathematical model of some factor of counteraction to the RTP. This procedure is based, on the one hand, on the concept of a characteristic probability function of a system of repeller sources, which allows assessing the degree of influence of these sources on the moving RTP. From this concept follows the probability of successful trajectory completion, used here as the criterion for optimizing the target trajectory. On the other hand, the procedure is based on solving local optimization problems that make it possible to correct individual sections of the initial trajectory, taking into account the location of specific repeller sources with specified parameters in their vicinity. Each source is characterized by its potential, frequency of impact, radius of action, and field decay parameters. The trajectory is adjusted iteratively, taking the target value of the probability of passing into account. The main restriction on the variation of the original trajectory is the maximum allowable deviation of the changed trajectory from the original one. Without such a restriction the task may lose its meaning, since one could then select an area that covers all obstacles and sources and bypass it around the perimeter. Therefore, we search for a local extremum that corresponds to an acceptable curve in the sense of the specified restriction. The iterative procedure proposed in this paper makes it possible to search for the corresponding local maxima of the probability of RTP passage in the field of several randomly located and oriented sources, in some neighborhood of the initial trajectory. First, the problem of trajectory optimization is posed and solved under the condition of movement in the field of a single source with a scope in the form of a circular sector; then the result is extended to the case of several similar sources. The main problem of the study is the choice of the general form of the functional at each point of the initial curve, as well as of its adjustment coefficients. It is shown that the selection of these coefficients is an adaptive procedure whose input variables are characteristic geometric quantities describing the current trajectory in the source field. Standard median smoothing procedures are used to eliminate oscillations that arise from the locality of the proposed procedure. The simulation results show the high efficiency of the proposed procedure for correcting a previously planned trajectory.
In recent years, interest in automatic depression detection has grown within the medical and scientific-technical communities. Depression is one of the most widespread mental illnesses affecting human life. In this review we present and analyze the latest research devoted to depression detection. Basic notions related to the definition of depression are specified, and the review covers both unimodal and multimodal corpora containing recordings of informants diagnosed with depression and of control groups of non-depressed people. Theoretical and practical studies presenting automated systems for depression detection, both unimodal and multimodal, are reviewed. Some of the reviewed systems address the challenge of regression, predicting the degree of depression severity (non-depressed, mild, moderate and severe), while others solve a binary classification problem, predicting the presence of depression (whether a person is depressed or not). An original classification of methods for computing informative features for three communicative modalities (audio, video, and text information) is presented. New methods for depression detection in each modality and in all modalities combined are identified. The most popular methods for depression detection in the reviewed studies are neural networks. The survey has shown that the main features of depression are psychomotor retardation, which affects all communicative modalities, and a strong correlation with the affective dimensions of valence, activation and dominance; an inverse correlation between depression and aggression has also been observed. The discovered correlations confirm the interrelation of affective disorders and human emotional states. The trend observed in many reviewed papers is that combining modalities improves the results of depression detection systems.
A new fast method for pupil detection and real-time eye tracking is developed, based on the study of a boundary-step model of a grayscale image by the Laplacian-of-Gaussian operator and on finding a newly proposed descriptor of accumulated differences (a point identifier), which expresses a measure of the equidistance of each point from the boundaries of some relatively monotonic region (for example, the pupil of the eye). The operation of this descriptor is based on the assumption that the pupil in the frame is the most rounded monotonic region with a high brightness difference at the border, and that the pixels of the region have an intensity below a predetermined threshold (though the pupil need not be the darkest region in the image). Taking all the above characteristics of the pupil into account, the descriptor achieves high accuracy in detecting the pupil's center and size, in contrast to methods based on threshold image segmentation that assume the pupil to be the darkest area, morphological methods (recursive morphological erosion), correlation methods, or methods that investigate only the boundary image model (the Hough transform and its variations with two-dimensional and three-dimensional parameter spaces, the Starburst algorithm, Swirski, RANSAC, ElSe). The possibility of representing the pupil tracking problem as a multidimensional unconstrained optimization problem and solving it by the non-gradient Hooke-Jeeves method, with the function expressing the descriptor used as the objective function, is investigated. In this case, there is no need to calculate the descriptor for each point of the image (compiling a special accumulator function), which significantly speeds up the method. The proposed descriptor and method were analyzed, and a software package was developed in Python 3 (visualization) and C++ (tracking kernel) in the laboratory of the Physics and Mathematics Faculty of Vitus Bering Kamchatka State University, which illustrates the operation of the method and tracks the pupil in real time.
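For readers unfamiliar with the optimizer, here is a compact Hooke-Jeeves pattern search (the standard algorithm, with a toy objective of our own); in the paper's setting the objective would instead be the accumulated-differences descriptor evaluated at a candidate pupil center.

```python
# Simplified Hooke-Jeeves pattern search: exploratory coordinate moves,
# then a pattern move in the successful direction, shrinking the step on failure.
import numpy as np

def hooke_jeeves(f, x0, step=8.0, eps=0.5, alpha=0.5):
    x = np.asarray(x0, dtype=float)
    while step > eps:
        best = x.copy()
        for i in range(len(x)):                 # exploratory moves per coordinate
            for d in (step, -step):
                trial = best.copy(); trial[i] += d
                if f(trial) < f(best):
                    best = trial
        if f(best) < f(x):
            pattern = best + (best - x)         # pattern move past the improvement
            x = best if f(best) < f(pattern) else pattern
        else:
            step *= alpha                       # no improvement: shrink the step
    return x

f = lambda p: (p[0] - 63.0)**2 + (p[1] - 41.0)**2   # toy "descriptor" minimum
print(hooke_jeeves(f, [0.0, 0.0]))                  # converges near (63, 41)
```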
The review focuses on the most promising methods for classifying EEG signals for non-invasive BCIs and on theoretical approaches to the successful classification of EEG patterns. The paper provides an overview of articles using Riemannian geometry, deep learning methods and various options for preprocessing and "clustering" EEG signals, for example, common spatial patterns (CSP). Among other approaches, preprocessing of EEG signals using CSP is often used, both offline and online. The combination of CSP, linear discriminant analysis, a support vector machine and a neural network (BPNN) made it possible to achieve 91% accuracy for binary classification with exoskeleton control as feedback. There is very little work on the use of Riemannian geometry online, and the best accuracy achieved so far for a binary classification problem in the reviewed work is 69.3%. At the same time, in offline testing, the average percentage of correct classification in the considered articles is 77.5 ± 5.8% for approaches with CSP, 81.7 ± 4.7% for deep learning networks, and 90.2 ± 6.6% for Riemannian geometry. Due to nonlinear transformations, Riemannian geometry-based approaches and complex deep neural networks provide higher accuracy and extract useful information from raw EEG recordings better than the linear CSP transformation. However, in a real-time setup, not only accuracy is important but also a minimal time delay; therefore, approaches using the CSP transformation and Riemannian geometry with a time delay of less than 500 ms may prove advantageous in the future.
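A minimal CSP sketch follows, using the standard generalized-eigendecomposition formulation on synthetic data; channel counts, trial counts, and the number of retained filters are illustrative assumptions.

```python
# CSP spatial filters from the generalized eigendecomposition of the two
# class covariance matrices, followed by the usual log-variance features.
import numpy as np
from scipy.linalg import eigh

def class_cov(trials):
    """Average normalized spatial covariance over trials (n_trials, n_ch, n_t)."""
    covs = [X @ X.T / np.trace(X @ X.T) for X in trials]
    return np.mean(covs, axis=0)

rng = np.random.default_rng(0)
class1 = rng.standard_normal((30, 8, 250))    # 30 trials, 8 channels, 250 samples
class2 = rng.standard_normal((30, 8, 250))
C1, C2 = class_cov(class1), class_cov(class2)

vals, W = eigh(C1, C1 + C2)                   # generalized eigenproblem
filters = np.hstack([W[:, :2], W[:, -2:]])    # 2 most discriminative per class
features = np.log(np.var(filters.T @ class1[0], axis=1))  # per-trial features
print(features)
```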
A context-aware approach to intelligent decision support based on user digital traces is proposed. The concept of human digital life with regard to intelligent decision support is discussed; the aims of addressing this concept in diverse domains are clarified and approaches to modelling human digital life are identified. In the proposed approach, digital traces serve as a source of information for revealing user preferences and decision-making behaviour. Perspectives on decision support based on user digital traces are developed. The research outcomes are the specification of requirements for intelligent decision support based on user digital traces, and the principles, conceptual framework and information model of such support. The principles form the basis for the conceptual framework of intelligent decision support based on user digital traces. The components of the conceptual model are user profiles; a user digital life model that structures the information contained in the digital traces; group patterns that describe preferences and decision-making behaviour shared by a user group; and a decision maker ontology. The information model defines the information flows between the framework's components, identifies the tasks that must be solved to implement the framework, and offers techniques for doing so. The novelty of the research lies in applying the concept of human digital life to intelligent decision support and in the context-dependent ontological inference of the user's type as a decision-maker, which determines a group of users sharing their preferences and behaviours with the active user, in order to predict a recommended decision. The paper contributes to the areas of modelling human digital life and intelligent decision support.
Event-driven software systems, which the scientific literature assigns to the class of systems with complex behavior, are reactive systems: they react to the same input action in different ways depending on their state and history.
It is convenient to describe such systems using state-transition models expressed with special language tools, both graphical and textual. A methodology for the automated development of systems with complex behavior using the designed CIAO language (Cooperative Interaction of Automata Objects), which allows formally specifying the required behavior based on an informal description of the reactive system, is presented.
An informal description of a reactive system can be provided verbally in a natural language or in another way adopted in a specific domain. Then, according to the specification in the CIAO language, a software system of interacting automata in the C++ programming language is generated by a special tool.
The generated program implements behavior guaranteed to correspond to the given specification and the original informal description. CIAO provides both graphical and textual notation. The graphical notation is based on an extended notation of the state machine diagrams and component diagrams of the Unified Modeling Language (UML), which are well established in describing the behavior of event-driven systems.
The textual syntax of the CIAO language is described by a context-free grammar in regular form. The automatically generated C++ code allows the use of both library functions and any external functions written manually.
At the same time, an evident correspondence between the formal specification and the generated code is preserved, provided that the external functions conform to their specifications.
As an example, an original solution to D. Knuth's problem of a reactive elevator control system is proposed. The effectiveness of the proposed methodology is demonstrated by the fact that the automaton-converter generating the C++ code is itself a reactive system: it is specified in the CIAO language and implemented by bootstrapping. The proposed methodology is compared with other well-known formal methods for describing systems with complex behavior.
The paper considers the problems of developing recommendations in the area of fiscal and trade policy to counter economic sanctions, both at the level of individual countries subject to such sanctions and at the level of an economic union including such countries. The study is based on a developed dynamic multi-sectoral, multi-country computable general equilibrium model describing the functioning of the economies of nine regions of the planet, including the five countries of the Eurasian Economic Union (EAEU). The initial data of the model comprise constructed sets of consistent social accounting matrices (SAMs) for the historical and forecast periods, based on data from the Global Trade Analysis Project (GTAP) database, national input-output tables, international trade statistics and IMF data (including forecasts) for the regions' main macroeconomic indicators. Results were obtained on the impact on the macroeconomic and sectoral indicators of the EAEU countries and other regions of a hypothetical scenario assuming the imposition, from 2019, of additional economic sanctions against Russia by some regions. An approach was proposed to countering the sanctions policy based on parametric control theory, by setting and solving a number of dynamic optimization problems to determine the optimal values of the corresponding fiscal and trade policy instruments at the level of individual EAEU countries and of the EAEU as a whole.
The results of the model-based calculations were tested for practical applicability using three approaches, including an evaluation of the stability of the mappings from the values of the exogenous parameters of the calibrated model to the values of its endogenous variables. The results demonstrate greater efficiency, for each EAEU country, of a coordinated economic policy to counter sanctions, compared with implementing such a policy separately at the level of each country.
To date, a huge amount of data on the diversity of organisms has been accumulated, and databases help to store and use these data for scientific purposes. Several dozen databases for storing biodiversity data have been described in publications. Each has an original structure that correlates poorly with the structures of the other databases, which complicates data exchange and the formation of big biodiversity data arrays.
The cause of this situation is the lack of formal definitions of universal data components from which a database with any data on the diversity of organisms could be built. The analysis of publications and the author's experience show that such universal components are present in the characteristics of any organism: for example, an organism's taxonomic name and the location where it was found. There are six such components, and each answers one of six questions: what, where, when, who, where from and where to. What determines the name of an organism; where determines the location where it was found; when indicates the date of the finding; who enumerates the persons who found and analyzed the organism; where from refers to the publications from which data about the organism were extracted or in which they were published; where to shows in which biological collection the organism is deposited.
Each component corresponds to a separate database table. These tables are linked to the table with data about organisms (individuals) and are not linked with each other. Attributes of the links between the organism table and the component tables are stored in intermediate tables; they are used, for example, to store bibliographic facts, descriptions of collection items or geographical points. They also act as docking stations to which tables with any other information can be attached.
The creation of any database about the diversity of living organisms begins with the definition of the table of organism specimens. It must be used even if there are no explicit data on organisms; in that case, virtual organisms should be introduced and the other components linked to them by means of intermediate tables, to which other data are docked. The minimal structures of all the tables, the links between them and examples of database construction are described in the work.
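A hedged sketch of the six-component core follows; the table and column names are our illustration of the structure described, not the authors' exact schema.

```python
# Central organism table plus six component tables (what/where/when/who/
# where-from/where-to); components link only to organism via intermediate
# tables, which also carry the link attributes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE organism (id INTEGER PRIMARY KEY);            -- central table
CREATE TABLE what      (id INTEGER PRIMARY KEY, taxon_name TEXT);
CREATE TABLE place     (id INTEGER PRIMARY KEY, latitude REAL, longitude REAL);
CREATE TABLE event     (id INTEGER PRIMARY KEY, found_on DATE);
CREATE TABLE person    (id INTEGER PRIMARY KEY, full_name TEXT);
CREATE TABLE source    (id INTEGER PRIMARY KEY, citation TEXT);
CREATE TABLE store     (id INTEGER PRIMARY KEY, collection TEXT);
-- intermediate table: link attributes (e.g. a note or page number) live here,
-- and further tables can be docked to it
CREATE TABLE organism_what (organism_id INTEGER REFERENCES organism,
                            what_id     INTEGER REFERENCES what,
                            note        TEXT);
""")
print("schema created")
```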
A method for streamlining state partitioning procedures with two and three outcomes is considered. The terminology and methods of questionnaire theory are used, and the sequence of partitioning procedures itself is treated as a heterogeneous questionnaire with questions having two or three answers. This class of questionnaires is special and is defined by the authors as the class of binary-ternary questionnaires; it is the simplest class of heterogeneous questionnaires. Increasing the number of answers to a question can in practice give an advantage in the parameters of a questionnaire, including its effectiveness indicator, the average implementation cost. It is noted that the use of binary-ternary questionnaires in practice can reduce the average time for identifying events by a questionnaire, which is extremely important in applications with a time limit on event identification, for example, in critical application systems. A method for optimizing binary-ternary questionnaires is presented, based on the search for the most preferred question for each subset of identifiable events. The choice of preferred questions is based on establishing a comparison relation between them. The article describes all possible types of comparison relations between two questions with two answers, between two questions with three answers, and between a question with two answers and a question with three answers. An example of deriving a mathematical expression for a function that characterizes the preference of questions over each other is given, along with a generalized formula for choosing the most preferred question for arbitrary heterogeneous questionnaires. An algorithm for the question ordering method has been formed, which makes it possible to construct a binary-ternary questionnaire with the lowest implementation cost in polynomial time. An example of binary-ternary questionnaire optimization by the presented method is given.
The paper considers the problem of planning the movement of a mobile robot in a conflict environment, which is characterized by the presence of areas that impede the robot in completing its tasks. The main results on path planning in conflict environments are reviewed, with special attention to approaches based on risk functions and probabilistic methods. Conflict areas formed by point sources, which create in the general case asymmetric fields of a continuous type, are considered. A probabilistic description of such fields is proposed, examples of which are the probability of detection or defeat of a mobile robot. As a field description, the concept of the characteristic probability function of a source is introduced, which allows optimizing the movement of the robot in the conflict environment. The connection between the characteristic probability function of the source and the risk function, which can be used to formulate and solve simplified optimization problems, is demonstrated. An algorithm for mobile robot path planning that ensures a given probability of passing through the conflict environment is developed. An upper bound for the probability of passing the given environment under fixed boundary conditions is obtained. A procedure for optimizing the robot's path in the conflict environment is proposed, characterized by higher computational efficiency achieved by forgoing the search for an exact optimal solution in favor of a suboptimal one. The proposed algorithms are implemented in the form of a software simulator for a group of ground-based robots and are studied by numerical simulation methods.
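As a minimal illustration of how a probability of passing can be assembled from per-source probabilities (assuming, for illustration only, independent sources), consider:

```python
# Probability of passing a path through independent point sources; p_i stands
# for the characteristic probability function of source i evaluated along the
# path. The values here are made up.
import numpy as np

p_sources = np.array([0.15, 0.30, 0.05])   # per-source detection probabilities
p_pass = np.prod(1.0 - p_sources)          # independence assumption
print("P(pass) =", round(p_pass, 3))       # 0.85 * 0.70 * 0.95 ~ 0.565
```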
The problem of a priori control of the potential degeneration of continuous multichannel dynamic systems is considered in the paper. Degeneracy is a property describing the operability of a multichannel dynamic system, alongside the basic properties of stability, reliability and invariance to changing conditions. An assessment of the potential degeneration of a system is proposed for a given configuration, taking interconnections and a polynomial exogenous signal into account. The degeneration process of a multichannel dynamic system is the process of rank reduction of the system's linear operator; this statement is the basic concept of the degeneration factors approach. The algebraic properties of the matrix of the system's operator are considered, and this matrix is named the criterion matrix. The degeneration factor is calculated from the singular values of the criterion matrix, and the global degeneration factor is the condition number of the criterion matrix. In contrast to previous solutions, it is proposed to form the criterion matrix of the system from the resolvent of its state matrix. Deparameterization of the linear algebraic problem is achieved by an additive decomposition of the system's output vector over the derivatives of the exogenous signal, with the steady-state mode of the system considered. A procedure for the a priori estimation of the degeneration of continuous multichannel dynamic systems is proposed, and ways to achieve the required degeneration value of the criterion matrix using modal control methods are discussed. The paper is illustrated with examples.
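The global factor is thus computable directly from a singular value decomposition; a minimal sketch (with an illustrative, nearly degenerate matrix of our own) is:

```python
# Global degeneration factor as the condition number of the criterion matrix:
# the ratio of its largest to smallest singular value.
import numpy as np

A = np.array([[1.0, 0.9],
              [0.9, 0.82]])                  # stand-in criterion matrix
s = np.linalg.svd(A, compute_uv=False)       # singular values, descending
print("condition number:", s[0] / s[-1])     # large value = near-degenerate
print("matches np.linalg.cond:", np.isclose(s[0] / s[-1], np.linalg.cond(A)))
```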
An algorithm (divided into multiple modules) for generating images of full-text documents is presented. These images can be used to train, test, and evaluate models for Optical Character Recognition (OCR).
The algorithm is modular: individual parts can be changed and tweaked to generate the desired images. A method for obtaining background images of paper from already digitized documents is described; for this, a novel approach based on a Variational AutoEncoder (VAE) was used to train a generative model.
These backgrounds enable on-the-fly generation of background images similar to the training ones. The module for printing the text uses large text corpora, a font, and suitable positional and brightness character noise to obtain believable results (for natural-looking aged documents).
A few types of page layouts are supported. The system generates a detailed, structured annotation of the synthesized image. Tesseract OCR is used to compare the real-world images to the generated ones. The recognition rates are very similar, indicating the proper appearance of the synthetic images. Moreover, the errors made by the OCR system in both cases are very similar. On the generated images, a fully-convolutional encoder-decoder neural network architecture for semantic segmentation of individual characters was trained. With this architecture, a recognition accuracy of 99.28% on a test set of synthetic documents is reached.
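As an illustration of the segmentation component, a minimal fully-convolutional encoder-decoder could look as follows in PyTorch; the depth, channel counts and number of character classes are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' network): a fully-convolutional
# encoder-decoder for per-pixel character-class prediction.
import torch
import torch.nn as nn

class CharSegNet(nn.Module):
    def __init__(self, num_classes=80):  # hypothetical alphabet size + background
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):                  # x: (batch, 1, H, W) grayscale crops
        return self.decoder(self.encoder(x))  # (batch, num_classes, H, W) logits

# Training would minimize per-pixel cross-entropy against the synthetic
# annotations: loss = nn.CrossEntropyLoss()(model(images), pixel_labels)
```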
The widespread use of multi-user interfaces, owing to the multimodality of traditional interpersonal communication and the transition to a polymodal presentation of information and systems, has allowed the creation of new approaches to their implementation based on distributed terminal systems. An approach to the synthesis of topological structures of such systems, implemented in two stages, is proposed in the article. The first stage determines a minimum set of communication nodes and their locations based on the requirements for the availability of communication nodes for various categories of users and for the globality of the distributed terminal system. The second stage determines options for constructing communication nodes and connections between them which ensure the performance of audio monitoring functions for users of local information spaces while ensuring the continuity of a bridge for different categories of users. A model example is presented of the synthesis of a distributed terminal system for audio monitoring of two categories of users (adults and children) in a local information space (a home) with voice control subsystems of the "smart home". As part of its solution, at each stage of the synthesis the initial data are determined, a formal statement of the synthesis problem is given, and an algorithm for the solution and its results are presented. Thus the first-stage task is a linear integer programming problem, solved in the model example by the simplex method, while the solution of the second-stage problem is based on alternative-graph formalization and the branch-and-bound method. The obtained results clearly demonstrate the capabilities of the proposed scientific and methodological tools for the synthesis of the topological structure of distributed terminal systems and the prospects of their use in newly arising tasks of the technical implementation of new infocommunication technologies and services.
One of the approaches to organizing error-correcting coding for multilevel flash memory is based on a concatenated construction, in particular on multidimensional lattices for inner coding. A characteristic feature of such structures is the dominance of the outer decoder's complexity in the total decoder complexity. Therefore a concatenated construction with a low-complexity outer decoder may be attractive, since in practical applications decoder complexity is the crucial limitation on the use of error-correcting coding.
We consider a concatenated coding scheme for multilevel flash memory with Barnes-Wall lattice-based codes as the inner code and a Reed-Solomon code correcting up to 4…5 errors as the outer one.
Performance analysis is carried out for a model characterizing the basic physical features of a flash memory cell with non-uniform target voltage levels and a noise variance dependent on the recorded value (input-dependent additive Gaussian noise, ID-AGN). For this model we develop a modification of our approach to evaluating the error probability for the inner code. This modification uses the parallel structure of the inner code trellis, which significantly reduces the computational complexity of the performance estimation. We present numerical examples of achievable recording density for Reed-Solomon outer codes correcting up to four errors, for a wide range of retention times and numbers of write/read cycles.
Train interval control signaling systems currently in service on Russian railways use the electric track circuit as the main data channel between signals and locomotives. Code-modulated electric signals transferred through that channel frequently get corrupted, which leads to railway traffic delays.
Decoding of the electric signal received from a track circuit can be represented as an image classification problem, and thus the stability of the data channel could be significantly improved.
However, to build such a classifier based on some machine learning algorithm, one needs a large dataset. In this article, a simulation model to synthesize this dataset is proposed.
The structure of the computer model matches the main stages of electric code-modulated signal generation in a track circuit: the code signal generator, the rails, and the locomotive receiver.
Based on the code signal generator schematic and waveform diagrams, a generator algorithm was developed. At this stage, we modeled the timing of the electric code signals according to the specification, as well as their random deviations caused by various factors.
The analysis of the equivalent circuits of the rail line revealed that it has the properties of a low-pass filter. Accordingly, the rail line is modeled using a Butterworth digital filter with corresponding parameters. Additionally, at this stage, random noise during transmission was taken into account.
A similar technique is applied for modeling of a locomotive receiver which has a band-pass filter as the first signal processing block.
Thus, the proposed simulation model consists of a set of algorithms which run in series. By varying the parameters of the model, one can synthesize waveform diagrams of the electric code-modulated signal received by the locomotive equipment from a track circuit working in various modes and conditions.
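As an illustration of the rail-line stage, a low-pass Butterworth model can be sketched with SciPy as follows; the sampling rate, filter order, cutoff frequency and noise level are illustrative assumptions, not parameters from the paper.

```python
# A minimal sketch of the rail-line stage under the stated assumption that
# the line behaves as a low-pass filter.
import numpy as np
from scipy import signal

fs = 8000.0                      # sampling rate of the simulated signal, Hz
b, a = signal.butter(N=4, Wn=50.0, btype="lowpass", fs=fs)

def rail_line(code_signal, noise_std=0.01):
    """Pass the generated code signal through the low-pass model
    and add random transmission noise."""
    filtered = signal.lfilter(b, a, code_signal)
    return filtered + np.random.normal(0.0, noise_std, size=filtered.shape)
```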
An approach is proposed to assessing the quality of stationary Markov models without absorbing states on the basis of a measure of statistical stability: the notion is formulated and its properties are determined. It is shown that estimates of the statistical stability of models have been raised by different authors either as a methodological aspect of model quality or within the framework of other model properties. When solving practical simulation problems, for example those based on Markov models, there is a pronounced problem of ensuring the required sample size. On the basis of the introduced formulations, a constructive approach is proposed to solving the problems of sample size optimization and of analyzing the statistical volatility of a Markov model under emerging anomalies with restrictions on the accuracy of the results; it ensures the required reliability and excludes non-functional redundancy.
To analyze the type of transitions in the transition matrix, a measure of its divergence (normalized and centered) is introduced. This measure is not a complete description and is used as an illustrative characteristic of models with a certain property. The estimation of the divergence of transition matrices can be useful in the study of models with high sensitivity of detection of the studied properties of objects. The key stages of the approach associated with the study of quasi-homogeneous models are formulated.
Quantitative estimates of the statistical stability and statistical volatility of a model are proposed through the example of modeling a real technical object with failures, recovery and prevention. The effectiveness of the proposed approaches is shown in solving the problem of statistical stability analysis within the qualimetric analysis of quasi-homogeneous models of complex systems. On the basis of the proposed constructive approach, an operational decision-making tool is obtained for the parametric and functional adjustment of complex technical objects over long-term and short-term horizons.
An algorithm for the formation of quinary Gordon-Mills-Welch sequences (GMWS) with a period of N = 5^4 – 1 = 624 over a finite field with a double extension GF[(5^2)^2] is proposed. The algorithm is based on a matrix representation of a basic M-sequence (MS) with a primitive check polynomial h_MS(x) and the same period. The transition to non-binary sequences is driven by increased requirements for the information content of information transfer processes, the transmission speed over communication channels and the structural secrecy of transmitted messages. It is demonstrated that the check polynomial h_G(x) of the GMWS can be represented as a product of fourth-degree polynomial factors that are irreducible over the prime field GF(5). Relations between the roots of the polynomial h_MS(x) of the basic MS and the roots of the polynomials h_ci(x) are obtained. The entire list of GMWS with period N = 624 can be formed on the basis of the obtained relations. It is demonstrated that for each of the 48 primitive fourth-degree polynomials serving as check polynomials for basic MS, three GMWS with equivalent linear complexity (ELC) l_s = 12, 24, 40 can be formed; the total number of quinary GMWS with period N = 624 is 144. A device for the formation of a GMWS as a set of shift registers with linear feedbacks is presented. The mod 5 multipliers and adders in the registers are arranged in accordance with the coefficients of the irreducible polynomials h_ci(x). The symbols from the registers enter a mod 5 adder, at the output of which the GMWS is formed. Depending on the required ELC, the GMWS forming device consists of three, six or ten registers. The initial state of the shift register cells is determined by decimation of the symbols of the basic MS at decimation indices equal to the minimum exponents of the roots of the polynomials h_ci(x). A feature of determining the initial states of devices forming quinary GMWS, as compared to binary sequences, is the presence of cyclic shifts of the summed sequences by a multiple of N/(p–1). The obtained results make it possible to synthesize devices forming the complete list of 144 quinary GMWS with period N = 624 and different ELC. The results can also be used to construct other classes of pseudo-random sequences that admit an analytical representation in finite fields.
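As an illustration of one register of such a forming device, a linear-feedback shift register over GF(5) can be sketched as follows; the feedback coefficients and the initial state below are placeholders, since in the described device they are derived from the polynomials h_ci(x) and from decimations of the basic MS.

```python
# A minimal sketch of one register: a linear-feedback shift register
# over GF(5). Coefficients and initial state are placeholders.
def lfsr_gf5(coeffs, state, length):
    """Generate `length` symbols of a sequence over GF(5).
    coeffs: feedback coefficients (c1..c4); state: initial register cells."""
    out = []
    state = list(state)
    for _ in range(length):
        out.append(state[-1])                     # output the last cell
        fb = sum(c * s for c, s in zip(coeffs, state)) % 5
        state = [fb] + state[:-1]                 # shift and feed back
    return out

# The GMWS itself would be the symbol-wise sum mod 5 of several registers:
# gmws = [(a + b + c) % 5 for a, b, c in zip(seq1, seq2, seq3)]
```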
As is known, the problem of monitoring the parameters of the geomagnetic field and its variations is solved today mainly by a network of magnetic observatories and variation stations; however, a significant obstacle in processing and analyzing the data thus obtained, along with their spatial anisotropy, is the presence of gaps or of data inconsistent with the established format. The heterogeneity and anomalousness of the data exclude (or significantly complicate) the possibility of their automatic integration and the application of frequency analysis tools to them. Known solutions for the integration of heterogeneous geomagnetic data are mainly based on the consolidation model and only partially solve the problem. The resulting data sets, as a rule, do not meet the requirements of real-time information systems and may include outliers, while gaps in the time series of geomagnetic data are eliminated by excluding missing or anomalous values from the final sample, which can obviously lead both to the loss of relevant information and violation of the discretization step, and to heterogeneity of the time series. The paper proposes an approach to creating an integrated space of geomagnetic data based on a combination of the consolidation and federation models, including preliminary processing of the original time series with an optionally available procedure for their recovery and verification. The approach is oriented toward cloud computing technologies and hierarchical data formats that speed up the processing of large amounts of data, and, as a result, provides users with better and more homogeneous data.
Recently, Speech Emotion Recognition (SER) has become an important research topic of affective computing. It is a difficult problem, where some of the greatest challenges lie in the feature selection and representation tasks. A good feature representation should be able to reflect global trends as well as temporal structure of the signal, since emotions naturally evolve in time; it has become possible with the advent of Recurrent Neural Networks (RNN), which are actively used today for various sequence modeling tasks. This paper proposes a hybrid approach to feature representation, which combines traditionally engineered statistical features with Long Short-Term Memory (LSTM) sequence representation in order to take advantage of both short-term and long-term acoustic characteristics of the signal, therefore capturing not only the general trends but also temporal structure of the signal. The evaluation of the proposed method is done on three publicly available acted emotional speech corpora in three different languages, namely RUSLANA (Russian speech), BUEMODB (Turkish speech) and EMODB (German speech). Compared to the traditional approach, the results of our experiments show an absolute improvement of 2.3% and 2.8% for two out of three databases, and a comparative performance on the third. Therefore, provided enough training data, the proposed method proves effective in modelling emotional content of speech utterances.
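As an illustration of such a hybrid representation, a minimal PyTorch sketch is given below; the feature dimensions and the number of emotion classes are illustrative assumptions, not the authors' configuration. The last LSTM hidden state summarizes the temporal structure, while the statistical vector carries the utterance-level trends.

```python
# A minimal sketch of a hybrid feature representation for SER:
# frame-level features -> LSTM; concatenate the last hidden state with
# utterance-level statistical features; classify with a linear layer.
import torch
import torch.nn as nn

class HybridSER(nn.Module):
    def __init__(self, frame_dim=39, stat_dim=384, hidden=128, classes=7):
        super().__init__()
        self.lstm = nn.LSTM(frame_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden + stat_dim, classes)

    def forward(self, frames, stats):
        # frames: (batch, time, frame_dim); stats: (batch, stat_dim)
        _, (h, _) = self.lstm(frames)             # h: (1, batch, hidden)
        return self.head(torch.cat([h[-1], stats], dim=1))
```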
The security of two recently proposed symmetric homomorphic encryption schemes based on the residue number system is analyzed.
Both schemes have high computational efficiency, since the residue number system naturally allows parallelizing computations, so they could be good candidates for protecting data in clouds. However, to the best of our knowledge, there has been no security analysis of these encryption schemes.
It should be noted that the first cryptosystem under consideration has already been examined in the literature: a sketch of an adaptive chosen-plaintext attack was proposed, and an estimate of its success was given.
In this paper that attack is analyzed, and it is shown that in some cases it may work incorrectly. A more general known-plaintext attack algorithm is also presented. Theoretical estimates of the probability of recovering the key with it, as well as practical estimates of this probability obtained in experiments, are provided.
The security of the second cryptosystem has not been analyzed before, and we fill this gap for the known-plaintext attack. The dependence between the number of plaintext-ciphertext pairs required to recover the key and the parameters of the cryptosystem is analyzed. Some recommendations for increasing the security level are also provided.
The final conclusion of our analysis is that both cryptosystems are vulnerable to known-plaintext attack, and it may be dangerous to encrypt private data with them.
Finally, it should be noted that the key element of the proposed attacks is the algorithm for computing the greatest common divisor, so their computational complexity depends polynomially on the size of the input data.
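As a schematic illustration of that key element: in schemes of this kind the secret key typically involves a hidden modulus, and quantities derived from known plaintext-ciphertext pairs are (with high probability) multiples of it, so a GCD over several such values recovers a key candidate. The derivation of those values is scheme-specific and omitted; only the GCD core is shown.

```python
# Schematic illustration only: the scheme-specific step that derives
# multiples of the secret modulus m from (plaintext, ciphertext) pairs
# is omitted; the attack core is just an iterated GCD.
from math import gcd
from functools import reduce

def recover_modulus(multiples_of_m):
    """GCD over several values that are (with high probability) multiples
    of the secret modulus m; complexity is polynomial in the input size."""
    return reduce(gcd, multiples_of_m)
```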
Internet of Things networks are now applied in many areas of everyday life. A cornerstone issue for the further spread and use of these networks is security support. However, the features of these networks complicate the use of traditional means and systems of computer protection. One such feature is the need to analyze very large volumes of data, heterogeneous in nature, in real time and with minimal computational expense. Taking into account the computational capabilities of Internet of Things networks, an architecture is offered for a parallel big data processing system based on the data processing technology known as Complex Event Processing and the parallel computing platform Hadoop. Issues directly connected with the architecture of the system and with the implementation of its principal components are considered. These components are: the data collection component, the data storage component, the data normalization and analysis component, and the data visualization component. Interconnection between components is provided by means of the Hadoop Distributed File System, which is the basis of the distributed data storage. The data collection component organizes distributed data acquisition and storage in the data storage component. The data normalization and analysis component transforms data to a uniform format and processes them by means of correlation rules. The data visualization component presents data in a graphical form that is easier for the operator to perceive. Results of an experimental evaluation confirming the high performance of the system are discussed.
Naturalness is one of the most important aspects of synthesized speech, and state-of-the-art parametric speech synthesizers require training on large quantities of annotated speech data to be able to convey prosodic elements such as pitch accent and phrase boundary tone. The most frequently used framework for prosodic annotation of speech in American English is Tones and Break Indices – ToBI, which has also been adapted for use in a number of other languages. This paper presents certain deficiencies of ToBI when applied in synthesis of speech in American English, which are related to the absence of tags specifically intended to mark differences in the level of prosodic stress (emphasis) related to a particular sentence constituent. The research presented in the paper proposes the introduction of a set of tags intended for explicit modeling of the degree of prosodic stress. Namely, a certain sentence constituent can be particularly emphasized, when it is the intended focus of the utterance, or it can be de-emphasized, as is commonly the case with phrases reporting direct speech or with comment clauses. Through several listening tests it has been shown that learning such prosodic events from data has distinct advantages over approaches attempting to exploit the existing ToBI tags to convey the degree of emphasis in synthesized speech. Namely, speech synthesized by a neural network trained on data tagged for the level of prosodic stress appears more natural, and the listeners are more successful in locating the sentence constituent carrying prosodic stress.
A dynamic method is offered for searching for anthropogenic objects in the seabed using autonomous underwater vehicles. Unlike the static method, in which all geophone-equipped vehicles are buried and attached to the bottom simultaneously and surface together after the end of a search session, the offered method provides continuity of the search through the dynamic behavior of the group of vehicles. While the main part of the geophone-equipped vehicles listens to the reflected signal, the other part moves further along the route. In this way, continuity of seismic exploration in the preset area and a substantial reduction of the time needed for it are achieved. An algorithm is given for the coordinated behavior of the geophone-equipped vehicles and the submersible carrying the radiator. A mathematical model of the functioning of the "radiator — geophones" system is described. Experiments on determining the optimal parameters of seismic exploration for anthropogenic objects were carried out. The simulation results made it possible to evaluate the gain from using the offered method, to determine its optimal parameters and to develop recommendations on its use for searching for anthropogenic objects in the depths of the seabed.
Both a timely and adequate response to computer security incidents and the losses an organization suffers from computer attacks depend on the accuracy of situation recognition during cybersecurity monitoring. The paper is devoted to the enhancement of attack models in the form of attack graphs for cybersecurity monitoring tasks. A number of important issues related to the application of attack graphs, and their solutions, are considered: inaccuracies in the definition of the pre- and post-conditions of attack actions, the processing of attack graph cycles for the application of Bayesian inference to attack graph analysis, the mapping of security incidents onto an attack graph, and automatic countermeasure selection in case of a high security risk level. The paper demonstrates a software prototype of the security monitoring system component which was implemented earlier and modified according to the suggested enhancements. The results of experiments are described. The influence of the modifications on the cybersecurity monitoring results is shown in a case study.
Nature-inspired load balancing of tasks on virtual machines (VMs) has become an area of growing research interest. Honey Bee Behavior Based Load Balancing (HBB-LB) was introduced to balance the load with maximum throughput; it also balances the priorities of the tasks on the VMs to minimize their waiting time. However, HBB-LB considers only the VM load when balancing, which might not be sufficiently effective. This paper proposes an Improved Honey Bee Behavior Based Load Balancing (IHBB-LB) that takes into consideration additional QoS parameters of a VM, such as service response time, availability, reliability, cost and throughput, to enhance load balancing. Response time is vital in determining the instant activity of a VM, availability determines the available resources and the state of a VM (idle or active), and reliability determines the level of trust in a VM. Most importantly, the cost of utilizing a VM and its throughput (capability) are also essential in determining VM efficiency. However, the inclusion of multiple QoS parameters results in a multi-objective optimization problem. Since a number of QoS parameters are computed, fuzzification of the QoS values was performed using generated fuzzy rules, and the multi-objective optimization problem was thereby eliminated. Experiments were performed in terms of makespan, response time, degree of imbalance and the number of tasks migrated; the results indicate that IHBB-LB provides a better level of performance.
In this paper, we consider the problem of mutual reconstruction of face image pairs. We addressed this problem in our previous article, where the proposed solutions were discussed in connection with Heterogeneous Face Recognition and Cross-Modal Multimedia Retrieval problems. Those solutions are based on one-dimensional and two-dimensional Principal Component Analysis performed over two original face images followed by their projection on independent eigenspaces, estimation of a transformation matrix and mutual reconstruction of the face image by means of one-dimensional and two-dimensional Karhunen-Loève Transform.
In this article, we propose new approaches and solutions, which are based solely on the two-dimensional eigenspace projection methods, and two regression models — Multiple Linear Regression and Partial Least Squares regression.
We present experiments on the mutual reconstruction of face images in sketch/photo pairs, in pairs of face images with age-related changes, and in pairs of 2D/3D face images. To conduct the experiments, we selected two variants of the proposed approach: the first is based on two-dimensional Principal Component Analysis and Partial Least Squares regression, and the second on two-dimensional Partial Least Squares and Multiple Linear Regression. Both variants showed acceptable performance for practical applications involving the mutual reconstruction of face images. Furthermore, we consider a method to improve the quality of reconstructed face images in the case of mixed datasets. This method involves classification of the dataset by means of two-dimensional Linear Discriminant Analysis and fitting a separate regression model for each class.
In addition, we show that, generally, mutual reconstruction of face images is also achievable when the original images are not part of the training sets of face images.
In this paper a comprehensive architecture for emotional and affective processes in a virtual agent is presented. By fusing the video, audio and text emotions of users as affective sources to the system, the virtual agent can appraise the mood of clients. To emulate the influence of human hormones in the virtual agent, the proposed system employs an Artificial Endocrine System (AES) in the aspects of moods and biological needs, by controlling the concentration levels of the influential hormones. The agent's affective processor engages the AES, personality and mood modules to manage the internal state. The intelligent virtual agent interacts with clients according to its affective state.
The proposed system presents a complete platform to capture emotional channels through the network to analyze and process them in an affective engine in order to determine the emotional quality of the response.
Since the early 1990s, multi-agent technology has been evaluated as one of the most promising design and implementation technologies for industrial-scale distributed applications. However, practice has falsified these prognoses and expectations. The paper examines the current state of the art in industrial use of multi-agent technology. It analyzes the external and internal reasons preventing broad practical use of the technology and formulates the lessons learnt through this examination. Finally, the paper outlines the basic issues to be revised in order to practically realize the great potential of multi-agent technology. The paper also shows, by example, that multi-agent technology currently has no alternative for many novel and important applications, including the Internet of Things.
Modern hardware systems that process the video data stream apply, for color coding, the principle of constant brightness proposed during the development of the NTSC color coding system. This principle, like its implementation, is not free from drawbacks: loss of information on the clarity of the encoded color images, degradation of clarity in achromatic details and images as the color saturation increases, etc. In addition, the use of chroma subsampling formats in digital video processing systems, such as 4:2:2, 4:2:0 and 4:1:1, distorts the decoded video image.
An alternative approach to encoding a color video stream is to apply the principle of constant color luminance. The work describes coding according to this principle. A comparative analysis of images transformed according to the two principles is carried out. The advantage of applying the principle of constant color luminance in digital video coding systems is shown: it makes it possible to obtain a gain of more than 6 dB.
The implementation of the principle of constant color luminance for modern floating-point and integer hardware platforms is described. A comparative analysis of implementations of the principles of constant brightness and constant color luminance was carried out, showing the advantage of the principle of constant color luminance on some modern processors.
The application of the principle of constant color luminance in digital video encoding systems can help improve the quality of the recovered color-coded images.
Transformable space-based structures are delivered into orbit in a folded state, which poses the task of their reliable deployment. In this paper we propose to use an actuator in the form of electrical machines as the executive body. This type of actuator allows the deployment process to be controlled.
As a large-sized transformable structure we consider a space-based reflector. At present, the transfer of such structures from the folded to the operating state is carried out in stages. The paper considers the joint implementation of two stages: the rotation of the root unit of the spoke and the extension of its intermediate unit. Mathematical models for the rotational and translational motions are developed which take into account such factors as bending and contraction of the spoke. Modeling and analysis are carried out for different variants of the joint deployment of the reflector elements: using an engine for each component of the motion, and using centrifugal force for the extension of the spoke.
The application of an algorithm for correcting the control parameters is considered. One of the important advantages of the algorithm is the ability to carry out control in real time. It can be used to calculate the reference control in algorithms based on the two-channel principle.
Network description issues, from the standpoint of graph theory, occupy a special place in tasks connected with the analysis and synthesis of communication networks by stability indices. The traditional approach implies the formal representation of a telecommunication network as a non-oriented graph. Graph representation variants based on several sets of numbers, or a single number, instead of diagrams are considered. In some cases, such a description significantly simplifies the computation of stability and other indices. Thus, telecommunication network synthesis tasks can be solved not only by visual enumeration methods but also algorithmically. Examples of computing numeric network features for elementary variants are given. In the paper, we not only describe analytical instruments for declaring a network structure but also introduce relationships transforming such descriptions into one another.
An algorithm for forming a set of effective classification features is proposed, based on the truncated search concept and the use of information about individual classification indicators when selecting granules. Its computational efficiency is ensured by simple operations comparing the classification results of individual classes when choosing the most informative granule at the next iteration, and by parallel computing on graphics processing units.
Known truncated selection methods for forming sets of effective classification features are reviewed. The results of the search for informative features are discussed through the example of the cloud classification problem, solved using a probabilistic neural network and the texture information of MODIS satellite imagery. A description of the classifier used and of the statistical approach to describing image texture is given.
The most effective cloud classification features are determined by comparing the combinations of textural features obtained by truncated selection methods. Results are shown from studying the dynamics of the estimate of correctly classified clouds under various algorithms for searching for informative features. It is established that the method developed in this paper makes it possible to reduce the variance of the probability of correct classification of individual classes.
Electrogastroenterography is a promising method of examining the motor activity of the digestive system. It is based on the measurement and further processing of bioelectric signals. In recent years, progress in the development of electrophysiological diagnostic methods has been due to computer processing of the measured signals. This paper is devoted to aspects of organizing measurements in electrogastroenterography. In the paper, we present an introduction to the problem domain; analyze the information structure of a measured signal; review the diagnostic parameters obtained as a result of spectral analysis of electrogastroenterography signals; and discuss the tasks of automating diagnostics.
We propose a new method of sampling gastroenterograms. It takes into account the finite length of measurement sessions and the spectral properties of the signals. The method represents a signal as a finite sum of finite cardinal B-splines of integer degrees. A computer experiment was conducted to test the accuracy of signal reconstruction with measurement session parameters used in electrogastroenterography.
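As an illustration of the underlying representation (not the authors' sampling method itself), a finite-length signal can be approximated by a finite sum of B-splines with SciPy; the session length, test signal and knot spacing below are illustrative.

```python
# A minimal sketch: least-squares approximation of a finite-length signal
# by a finite sum of cubic B-splines.
import numpy as np
from scipy.interpolate import make_lsq_spline

t_sig = np.linspace(0.0, 60.0, 3000)          # 60 s measurement session
x = np.sin(2 * np.pi * 0.05 * t_sig)          # stand-in for a gastrogram

k = 3                                          # cubic splines
interior = np.linspace(2.0, 58.0, 40)          # interior knots
knots = np.r_[[t_sig[0]] * (k + 1), interior, [t_sig[-1]] * (k + 1)]

spline = make_lsq_spline(t_sig, x, knots, k=k)
reconstruction_error = np.max(np.abs(spline(t_sig) - x))
```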
The paper continues our research on security event correlation methods in Security Information and Event Management (SIEM) systems. In this part we consider correlation methods for information security events that can be applied at the separate correlation stages described in the previous paper. A classification of the considered correlation methods and an analysis of their advantages and disadvantages are provided. The effectiveness of using these methods at different stages of the correlation process is evaluated.
We propose a mathematical method based on the use of complex-valued functions, called status functions, attributed to the state of an object. The method is focused on the direct description of the mathematical model of the feedback channel of ergatic systems. Status functions are formed as an optimal, orthonormal basis of the system. Rules for working with status functions are introduced, and their interpretation is proposed. A method is proposed for forming the operator that converts signals given as status functions. Thus, the mathematical support for analyzing interaction in an integrated educational environment is improved on the basis of modelling competence portraits of the participants of the learning process, characterized by the use of status functions. This allows for a multicomponent assessment of competence in the form of complex-valued functions.
This paper discusses the problems of application and choice of cryptographic standards taking into account user requirements and preferences. User profiles are created by means of an ontology apparatus. On the basis of user profiles and document features, an appropriate set of documents is formed, whose elements are then ranked according to the degree of compliance with user requirements. Various filtration methods are used, such as collaborative filtering, content analysis and filtering, as well as hybrid methods combining both approaches. Thus, a recommender system for choosing cryptographic standards and algorithms is built. If there are several user selection criteria, it is reasonable to apply an integral index of an object's relevance to user preferences, defined as the weighted sum of the particular indices.
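For reference, the weighted-sum form of the integral index can be written as follows (the notation is ours, not the paper's): with r_i(d) the particular relevance index of object d by criterion i and w_i the weight of that criterion,

```latex
R(d) = \sum_{i=1}^{n} w_i \, r_i(d), \qquad \sum_{i=1}^{n} w_i = 1, \quad w_i \ge 0
```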
In the article, we propose an approach to creating a method of optimal complexity, through examples of economic and mathematical models that are most common at the micro- and macro-levels. This method minimizes the total error for a given duration of solving information and calculation tasks in model studies of organizational-economic systems. In addition, the approach allows us to substantiate the requirements for the accuracy of input information. It is shown that to ensure a sustainable level of simulation accuracy, the authority (the model's customer) needs to consider the real relations between the accuracy of the initial information, the structural accuracy of the model, the functional accuracy of the model and the accuracy of the numerical algorithms.
In this paper, criteria for the detection of groups of objects are suggested. These criteria are based on uncertain estimates of the objects' qualitative attributes. The tasks of homogeneous and heterogeneous detection of groups of objects are solved. In homogeneous groups, the values of identically named qualitative attributes are equal; in heterogeneous groups, the values of such attributes may differ, but they have to match a priori specified valid combinations. Group detection is based on the graph-theoretical approach. The decision about a pair of objects belonging to the same group is made using ternary logic, which allows the detection of both possible and reliable object groups.
Energy measurement of side electromagnetic radiation signals is an important task in addressing issues of electromagnetic compatibility and information security. In the video systems of computer equipment, parallel transmission lines are used, whose side electromagnetic radiations interact. The article presents the characteristics of circuit implementations of video systems and radiation models that take them into account. In addition, we present a model of the information signals of video interfaces which takes into account the compensatory properties of differential lines in a video system with a DVI interface.
Computer programs for scientific research with complex geometric models should be integrated with a CAD system. In the paper, we consider three approaches, based on data exchange in DXF (drawing exchange format); on COM technologies; and on the ObjectARX application programming interface (API). DXF data exchange with CAD is a simple and universal way available to most researchers, but it excludes interactive CAD control. COM technologies provide simple, reliable mechanisms of interactive CAD control from an external program, although they do not have the highest level of performance; combining them with DXF increases their performance. For problems with complex geometric models, the AutoCAD system provides the ObjectARX and .NET APIs, low-level technologies that ensure the maximum possible functionality and performance compared to the other technologies, but have some limitations. We present program listings that simplify the understanding of the considered technologies. Their performance analysis has been conducted, and recommendations for their use are given from the researcher's perspective.
The paper is devoted to the analysis of security event correlation methods in Security Information and Event Management (SIEM) systems. The correlation process is considered to be a multilevel hierarchy of stages. The goal of each stage consists in executing appropriate operations on security data being processed. Based on this analysis we outline each correlation stage and their interaction scheme.
The paper deals with a comparative analysis of the hydroacoustic modems currently available on the market. Theoretical calculations of the operating range of the developed hydroacoustic modem are given. The results of the experimental verification of the hydroacoustic modem model in the pool are presented.
An improvement of existing navigation algorithms for a generic polygonal linkage is presented. Our algorithm constructs a path between two arbitrary configurations of a polygonal linkage. This path contains at most eight steps.
To assess the efficiency of the onboard control complex of an Earth remote sensing spacecraft, it is proposed to use models of open queuing networks, in which the nodes are multi-channel non-Markov queuing systems. The proposed model allows one to take into account the costs of compressing and transmitting images when calculating the distribution of the residence time of a request in the network model.
The article offers a variant of applying provisions of set theory and the theory of hierarchical systems to the formal description of elements of open systems that implement interdependent multi-level routing processes. Real prototypes of such open systems can be infrastructure facilities in which material resources, energy or information are distributed using hierarchically nested flow control and/or routing functions.
A more detailed representation of Shannon's formula for determining the channel capacity is developed. In the modified form, it takes into account the parameters of the signal in both the time and frequency domains. One of the components introduced into the modified Shannon channel capacity relation is the uncertainty relation. A new method of calculating the uncertainty relation for signals is proposed and described. Examples of calculating the uncertainty relation for different classes of signals using existing methods and the proposed approach are presented.
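For reference, the classical starting point is the Shannon-Hartley capacity formula (the paper's modified relation is not reproduced in the abstract, so only the baseline is shown here):

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

where C is the capacity in bits per second, B the bandwidth in hertz, and S/N the signal-to-noise power ratio.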
Summary information is given on existing analog master slice array (MSA) and configurable structured array (CSA) integrated circuits (IC) of domestic and foreign production, oriented toward applications in sensor systems, including robots for various purposes and aircraft.
The parameters of the new MSA (AGAMC-2.1) and CSA (MH2XA010) integrated circuits are presented, as well as the prospects of designing, on their basis, radiation-resistant ICs for analog signal processing and for interfaces of sensors of different physical kinds.
The paper describes an approach to forming a harmonized model of requirements for a specific software development project. Such a model is intended to resolve the contradictions caused by different understandings of the definition of "program requirements", as well as to coordinate the various models of requirement types that underlie certain types of documents, specifications and requirements engineering methodologies. The most commonly used examples of requirements specifications were analyzed. We propose the theory of the field structure of parts of speech as a basis for requirement type classification and give a special definition of "requirement types". In addition, we propose a set of criteria for the identification of types and fields of requirements. Based on the set of criteria for a requirement instance, this approach allows one to identify the requirement type and then recommend adding the desired types to the requirements specification.
Zeros of control objects are a source of difficulties in the control system design process; in particular, this effect takes place in direct-drive servos with elastic coupling, whose finite-dimensional models have zeros. In the modern literature, these difficulties are associated with weakened controllability and observability of objects with zeros. This article shows that the controllability and observability properties cannot explain the difficulty of designing control systems for objects with zeros, because these properties are not invariant to the choice of basis. It is proposed to consider the completeness property of the object instead. The proximity of the object to singularity, i.e. to a loss of completeness, determines the difficulty of the control system design process. The article analyzes one of the methods of regularizing the control system synthesis procedure.
In this paper, we propose a formal model of individual and group behavior based on p-adic coordinate system. The model developed allows description and prediction of behavioral reactions of the personnel of critically important objects under conditions of external destructive informational impacts.
The article describes the main effort estimation models for software development. The most widely used software effort estimation model, the Constructive Cost Model (COCOMO), is discussed in detail. An approach to improving the accuracy of COCOMO estimates based on neural network approximation is proposed. The choice of a neural network with error back-propagation as the approximator is discussed. Numerical results of neural network training using COCOMO model parameters as input are given.
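For reference, the basic COCOMO form that such an approximator would refine is given below; a, b, c, d are tabulated constants depending on the project class (e.g. a = 2.4, b = 1.05 for organic projects in basic COCOMO):

```latex
E = a \cdot (\mathrm{KLOC})^{b} \ \text{person-months}, \qquad D = c \cdot E^{d} \ \text{months}
```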
In this paper, we review actual and prospective areas of use of high-speed video cameras. We discuss the possibility of applying high-speed cameras in the field of human-computer interaction to detect dynamic video information (including visual speech). We also describe the main tasks that can be solved with high-speed cameras, such as automatic lip-reading, eye blink detection, facial micro-expression recognition, etc. We identify potential challenges associated with the introduction of high-speed video cameras and analyze the current state of the research area, showing an urgent need for further scientific and technical developments. We propose some advanced applications and tasks in the human-computer interaction domain where high-speed video capture can be useful, such as audio-visual continuous speech recognition and automatic lip-reading. In further research, we will implement such a multimodal system for audio-visual Russian speech recognition using a microphone and a JAI Pulnix high-speed video camera.
The article deals with the application of systems of basic functions defined on finite argument intervals to the problem of obtaining discrete signal samples. These mathematical bases make it possible to justify the size of signal sample lattices in realistic situations where the spectra are infinite and are characterized by a certain degree of attenuation at high frequencies. For expressions with finite functions whose argument is not time, the concept of the Nyquist frequency loses its significance.
The article discusses a number of problems of space cybernetics connected with the optimal control of the informational interaction of a spacecraft with the surface of the Earth. A spacecraft is regarded as an informationally active mobile plant, i.e. as a complicated mobile system supplied with the devices necessary for informational interaction with the enclosing physical medium and with the corresponding necessary onboard resource. It is shown that these problems reduce to problems of optimal programmed control of a special differential dynamic system in a Hilbert space of states. To solve the specified problems, the paper uses L. S. Pontryagin's expanded maximum principle and the general Lagrange concept.
The paper describes a method of applying the laws of equilibrium thermodynamics to complex socio-economic systems, in particular for developing a methodology for calculating a social dissatisfaction indicator, and also presents two information systems proposed for calculating this indicator.
The problem of identifying different aspects of interacting objects of information and telecommunication networks (ITN) from the results of network traffic monitoring is analyzed. To identify the types of network objects and their interaction operations, a graph model of behavior is proposed; to disclose the anonymity relations of interacting objects, a predicate model of the states of ITN objects based on the relationships between their instances is offered.
This paper presents a model of socially important Internet resources. The model is designed to study the communicative and cognitive processes through which users interact with modern socially important Internet resources. Combining results obtained from cognitive theories of conformity in social psychology, studies of the perception and forgetting of information in physiology, and the basic laws of information theory has allowed us to develop a model that equally adequately describes the information-psychological interaction of participants in various types of social resources: forums, social networks, blogs and online media.
In this paper we propose an approach to the detection of a wide class of visual contaminants, based on computing visual perceptual hashes and forming a reference database of potentially dangerous media objects, for building an automated system that protects consumers of multimedia content from unwanted effects on their psyche and consciousness.
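As an illustration of the visual perceptual hashing step, one common variant (the "average hash") is sketched below; the paper does not specify which perceptual hash is used, so this is purely illustrative. Media objects whose hashes fall within a small Hamming distance of a reference-database entry would be flagged as potentially dangerous.

```python
# A minimal sketch of one common perceptual hash (the "average hash").
from PIL import Image

def average_hash(path, size=8):
    """64-bit hash: downscale to 8x8 grayscale, threshold by the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(h1, h2):
    """Small Hamming distance => perceptually similar images."""
    return bin(h1 ^ h2).count("1")
```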
This article considers approaches to the remote security analysis of information systems. A model of the remote security analysis process based on decision-making theory is proposed. Existing methods for solving the partially observable Markov decision process problem are reviewed.
The paper discusses principles of evaluating the effectiveness of a malefactor in critical infrastructures. A process model of a security breach of an "operating complex" is presented. Uncertainties in modeling the malefactor's process, and ways to overcome them, are investigated. A mathematical model of the aggregate effectiveness of the malefactor is developed which removes some limitations of existing probabilistic models of random phenomena in the field of information security. The model is called a stochastic super-indicator and is intended for the study of conflict situations in critical infrastructures.
A technique is considered for the formalization of fuzzy predicates together with crisp logical variables for the specification of fuzzy logic-dynamic situations and crisp logical commands. The technique is based on the representation of crisp and fuzzy logic variables by means of membership functions and on the use of fuzzy inference rules. Only those forms of presentation of fuzzy logic functions are used which are also suitable for presenting crisp logic functions. Examples show the possibility of using the considered technique for the computer implementation of hybrid processes.
The paper presents a method of constructing a sentiment classifier for two and three classes (positive and negative; positive, neutral and negative texts). It also presents the results of experiments showing the high accuracy of the proposed method on texts that do not belong to any pre-specified domain. The effectiveness of the presented method is confirmed by the results of experiments on the text collection of blogs from the ROMIP 2012 seminar. The following metrics were used for classifier evaluation: precision, recall, accuracy and F-measure. The F-measure of the proposed method for classification into two classes reaches 93%. In addition to the ROMIP 2012 blog collection, a collection of news items and a collection of short texts from social networks were used in the experiments.
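For reference, the evaluation metrics named above are defined in the standard way, with TP, FP and FN the counts of true positives, false positives and false negatives:

```latex
P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2PR}{P + R}
```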
N-tuple algebra is a mathematical system for formalizing n-ary relations. This algebra provides for modelling both data (graphs, n-ary relations, etc.) and knowledge (semantic networks, reasoning models, formulas of propositional and predicate calculi, production systems, ontologies and so on) by the same structures. These structures look like matrices and can easily be processed by parallel algorithms.
The article describes models and methods for the automated design of the operation processes of man-machine systems, based on the functional-structural theory of man-machine systems and the generalized structural method of Prof. A. I. Gubinsky. The basic concepts and definitions of the functional-structural theory are described. The following algorithms are presented: an algorithm for generating series-parallel connections of operations with additional constraints; an algorithm for generating alternative processes of man-machine systems based on the coincidence of the objectives of operations; and an algorithm for generating parametric alternatives based on a template. The basic concepts and definitions necessary for an algorithm generating process fragments with mandatory combinations of operations are given. The use of binding matrices of combinations of operations is proposed, in which the non-zero elements of the rows correspond only to the feasible combinations of standard ways of performing the relevant functional units in the alternatives. The concept of composition and the concept of incompatibility of pairs are introduced, on the basis of which the distribution of functions over compositions is performed. The integration of optimization models of the operation processes of man-machine systems with simulation is described, since the functional-structural theory is applicable only to processes without aftereffect and in the absence of dependent operations. It is suggested to remedy this limitation by integrating the design technology for man-machine system operation processes based on the functional-structural theory with simulation of those process areas in which the above requirements of the functional-structural theory are not fulfilled.
The paper describes the overall architecture of a system of intelligent information security services (SIISS) for use in critical infrastructures, as well as its constituent components. In the overall architecture of the SIISS, the event level, the data level and the application level are distinguished. Structural and functional models of the SIISS overall architecture are outlined to highlight the main functional mechanisms at the selected levels. As key components of the SIISS, described in more architectural detail, we consider the event correlation management module, the prognostic security analyzer, the component for modelling attacks and security system behavior, the decision support and reaction component, the visualization module, and the repository.
In this paper we investigate the possibility of applying a stable aggregate currency (a unit of account consisting of four simple currencies: euro, pound, dollar and yen) to analyze the dynamics of the exchange value of an aggregated commodity consisting of four simple commodities: silver, gold, platinum and palladium. It is shown that the variability of the exchange value of the "baskets", measured by the standard deviation of the corresponding multiplicative monetary indices from unity, is less than the variability of the exchange value of national currencies and some precious metals. The dynamics of the price of a stable aggregate of precious metals, measured in units of the stable aggregate of the four "hard" currencies, is also investigated.
The paper proposes an original algorithm for solving an applied graph theory problem: finding k maximum flows between two given vertices of a graph. The described approach is a combined application of the Ford-Fulkerson algorithm (in the Edmonds-Karp or Dinitz version) and an algorithm for the breadth-wise construction of a truncated state tree within a single optimizing cycle.
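As an illustration of the max-flow building block named above, a compact Edmonds-Karp implementation (BFS-based augmenting paths) is sketched below; the paper's extension to k maximum flows via a truncated state tree is not reproduced.

```python
# A minimal Edmonds-Karp sketch: repeatedly find the shortest augmenting
# path by BFS in the residual graph and push the bottleneck flow.
from collections import deque

def edmonds_karp(capacity, source, sink):
    """capacity: dict-of-dicts capacity[u][v] = c; returns the max flow value."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:                     # add reverse edges with 0 capacity
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent = {source: None}            # BFS for an augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        bottleneck, v = float("inf"), sink  # find the path bottleneck
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = sink                            # push flow along the path
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
```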
The paper considers the problem of constructing a level description of classes whose objects are characterized by the properties of their elements and the relations between them. The problems of recognition and analysis of such objects are NP-hard, but if the descriptions of classes contain sufficiently short and frequently occurring sub-formulas, it is possible to build a level description of classes that essentially decreases the exponent in the upper bounds on the number of steps of an algorithm solving the problem. Usually the extraction of these sub-formulas is left to the investigator's judgment. An approach to their automatic extraction is proposed in the paper.
The article considers a mathematical model of the informational interaction of a spacecraft with the surface of the Earth. The model is based on the author's concept of an active movable object: a complicated movable system intended for informational, energy or material interaction with an ambient physical environment or with other similar systems. It is shown that the corresponding model can be presented in the form of a Fredholm integral operator mapping the set of elements of a Hilbert space of controls (a class of admissible control actions) into a Hilbert space of information states. The properties of this operator and of the corresponding reachability sets in the space of information states are investigated. A simplified variant of the proposed mathematical model, for interaction with a discrete environment (isolated sources of information), is considered.
The need for efficient string processing algorithms arises in many practical problems. One of the most universal approaches is the use of suffix trees. However, this data structure has high memory requirements, which limits its area of application. In this article we consider a way to partially eliminate this disadvantage and give an example by solving the problem of the longest symmetric substring. The described method can also be used for other problems.
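For contrast with the suffix-tree approach, a simple quadratic-time baseline for the longest symmetric (palindromic) substring problem looks as follows; it is shown only to fix the problem statement, not as the paper's method.

```python
# A simple O(n^2) baseline: expand around every odd and even center.
def longest_palindrome(s):
    best = ""
    for center in range(len(s)):
        for lo, hi in ((center, center), (center, center + 1)):  # odd/even
            while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
                lo, hi = lo - 1, hi + 1
            cand = s[lo + 1:hi]              # the last expansion overshot by 1
            if len(cand) > len(best):
                best = cand
    return best

assert longest_palindrome("abacabad") == "abacaba"
```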
The statement of the communication system synthesis task under Bayesian criteria with various loss functions is considered. Under sufficiently general conditions, significant differences are obtained between the structure and parameters of communication systems synthesized under Bayesian criteria with a simple loss function and with an uncertainty function.
A methodology for extracting context labels from Internet dictionaries was developed. In accordance with this methodology, experts constructed a mapping table that establishes a correspondence between Russian Wiktionary context labels (385 labels) and English Wiktionary context labels (1001 labels). As a result, a composite system of context labels (1096 labels), which includes the labels of both dictionaries, was constructed. A parser extracting context labels from the Russian Wiktionary was developed. The parser can recognize and extract new context labels, abbreviations and comments placed before the definition in Wiktionary articles; one notable feature of the parser is the large number of context labels known in advance (385 context labels for the Russian Wiktionary). The database of the machine-readable Russian Wiktionary, including context labels, was generated by the parser. An evaluation of the numerical parameters of context labels in the Russian Wiktionary was performed. With the help of the developed computer program it was found that in the Russian Wiktionary (1) there are 133 000 definitions with context labels and comments, (2) one and a half thousand definitions are supplied with regional labels, and (3) the number of definitions with labels was calculated for each knowledge domain. This paper is an original contribution to computational lexicography, setting out for the first time an analysis of the numerical parameters of context labels in a large dictionary (500 000 entries).
Merits and demerits of direct and iterative methods for BD LAES are shown. The article offers a new «direct» method (algorithm) for the solution of BD LAES with varied parameters. It effectively uses a basic solution of the LAES and matrix sparseness information, and in tasks where BD LAES need to be solved repeatedly it allows a significant increase in the speed of computational algorithms due to a reduction in the number of computing operations, while lowering the random access memory requirements of computers.
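The abstract's idea of reusing a basic solution and sparsity information across repeated solves can be loosely illustrated (this is not the authors' algorithm) by SciPy's sparse LU factorization, computed once and reused for many right-hand sides; the matrix and data here are illustrative.

    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import splu

    # Sparse system matrix with a fixed sparsity pattern, factorized once.
    A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                             [1.0, 3.0, 1.0],
                             [0.0, 1.0, 2.0]]))
    lu = splu(A)  # the expensive step happens once ...

    # ... then each new right-hand side costs only forward/back substitution.
    for k in range(3):
        b = np.array([1.0, 2.0, 3.0]) * (k + 1)
        x = lu.solve(b)
        print(k, np.allclose(A @ x, b))  # True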
The paper considers issues of the energy efficiency of decentralized power complexes with superconducting equipment, achieved through the use of a protected intelligent dialogue automatic control system that adapts to the complex's modes of operation and external perturbations and conducts biometric access control of operators.
A design approach to the optimal structure of multifunctional wireless tactile measuring means with a self-contained power supply is considered, based on the search for shortest paths in a graph whose edges are weighted by a composite index that takes into account energy consumption, cost, and technical compatibility of the functional units. An example of increasing the accuracy of estimates of the measured parameters of objects with combined wireless measuring tools is provided.
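A minimal sketch of the kind of search described, a shortest path over a graph whose edges carry a composite index, is given below; the graph, the component values (energy, cost, compatibility penalty), and the weighting coefficients are all assumed for illustration.

    import heapq

    # Each edge carries (energy, cost, compatibility penalty); the composite
    # index is an assumed weighted sum with illustrative coefficients.
    W = (0.5, 0.3, 0.2)
    edges = {
        "sensor": [("adc_a", (2.0, 1.0, 0.0)), ("adc_b", (1.0, 3.0, 1.0))],
        "adc_a":  [("radio", (3.0, 2.0, 0.0))],
        "adc_b":  [("radio", (1.5, 2.5, 2.0))],
        "radio":  [],
    }

    def composite(vec):
        return sum(w * x for w, x in zip(W, vec))

    def shortest_path(src, dst):
        pq, seen = [(0.0, src, [src])], set()
        while pq:
            d, u, path = heapq.heappop(pq)
            if u == dst:
                return d, path
            if u in seen:
                continue
            seen.add(u)
            for v, vec in edges[u]:
                if v not in seen:
                    heapq.heappush(pq, (d + composite(vec), v, path + [v]))
        return float("inf"), []

    print(shortest_path("sensor", "radio"))  # -> (3.4, ['sensor', 'adc_a', 'radio'])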
In this paper, we describe some prospective directions for the use of service robots (robot assistants) in the high-tech domain of manned space exploration. We analyze conceptual approaches to the organization of an internal environment of service robots and an external work environment for the joint functioning of a human operator and a service robot.
To improve the validity of decisions made during the synthesis of a management-data-exchange network that minimizes the resource costs of the traffic capacity of transmission links of heterogeneous telecommunication systems, the features of traffic from sources of controlling information should be taken into account. The model of the process of multiplexing protocol data units in a management channel offered in the article takes into account the variability of the intensity of the flow of service messages from sources of controlling information. The use of a modified Engset formula makes it possible to provide a more rational distribution of the channel resource necessary for organizing the transmission of protocol data units.
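For background, the classical Engset loss (call-congestion) formula referenced by the abstract can be computed as below; the paper's modified version is not reproduced, and the numeric parameters are illustrative.

    from math import comb

    def engset_blocking(n_channels: int, n_sources: int, a: float) -> float:
        """Classical Engset call-congestion probability for a finite
        population of n_sources, n_channels servers, and offered load a
        per idle source (the paper's modification is not reproduced)."""
        num = comb(n_sources - 1, n_channels) * a ** n_channels
        den = sum(comb(n_sources - 1, i) * a ** i for i in range(n_channels + 1))
        return num / den

    # E.g., 5 channels, 20 signalling sources, load 0.2 per idle source.
    print(f"{engset_blocking(5, 20, 0.2):.4f}")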
A definition of the logic-dynamic situation used in specifying the operation of hybrid dynamic systems, including hybrid control systems, is presented. The influence of random effects on the behavior of systems is discussed. Methods of estimating the probabilities of occurrence of situations based on estimates of the statistical characteristics of random processes are considered. It is shown how to use these probability estimates for control and decision support. The results of experimental research on the methods under consideration are given.
The paper considers the transformation of the primary structure of a given algebraic Bayesian network with interval probability estimates into a primary structure that is acyclic and stochastically equivalent to the given one. It is shown that this transformation is only possible when the hypergraph corresponding to the resulting primary structure contains the hypergraph corresponding to the given one. We propose a method for constructing probability estimates of the resulting primary structure that make it stochastically equivalent to the given one.
It is recognized that incorporating context information into recommender systems is one of the most effective ways to increase their quality and predictive abilities. The paper surveys the primary methods of enhancing collaborative filtering systems by taking actual context information into account. The focus is mostly on different flavours of contextual prefiltering and matrix factorization approaches, which are the most popular and promising.
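A minimal sketch of contextual prefiltering combined with matrix factorization, under toy data and assumed hyperparameters: ratings are first filtered to the target context, and a plain SGD-trained factorization is fit on the remainder.

    import numpy as np

    rng = np.random.default_rng(0)
    # (user, item, context, rating) tuples; toy context-aware data.
    data = [(0, 0, "home", 5.0), (0, 1, "car", 2.0), (1, 0, "home", 4.0),
            (1, 2, "home", 1.0), (2, 1, "home", 3.0), (2, 2, "car", 5.0)]

    def mf_prefiltered(data, context, n_users=3, n_items=3, k=2,
                       lr=0.05, reg=0.02, epochs=200):
        """Contextual prefiltering: train plain matrix factorization only
        on the ratings whose context matches the target context."""
        P = 0.1 * rng.standard_normal((n_users, k))
        Q = 0.1 * rng.standard_normal((n_items, k))
        train = [(u, i, r) for u, i, c, r in data if c == context]
        for _ in range(epochs):
            for u, i, r in train:
                e = r - P[u] @ Q[i]
                P[u] += lr * (e * Q[i] - reg * P[u])
                Q[i] += lr * (e * P[u] - reg * Q[i])
        return P, Q

    P, Q = mf_prefiltered(data, "home")
    print("predicted rating (user 2, item 0 | home):", P[2] @ Q[0])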
The use of a transition-rule-based formalism for the specification of deterministic processes to implement random processes and estimate their characteristics is considered. A short description of the formalism and of methods of its use for the simulation of dynamical systems in the presence of random impacts is presented. Methods of realizing random processes with specified statistical properties, and methods of estimating the numerical characteristics and correlation functions of random ergodic processes, are discussed. Examples of random process implementations and the results of estimating their characteristics and correlation functions are presented.
The article considers the combined use of space and ground data for the analysis of oil pollution of water areas. The stages of data processing are shown, and the features of these stages in solving this task with space and ground data are analyzed.
A review of existing approaches to the qualification assessment of visualization systems for flight simulators is presented in the paper. A set of criteria for assessing the quality of cloud visualization, as well as procedures for their use, is reported.
A novel method for the procedural modeling of clouds is described in the paper. An analytical form of the cloud structure for many different types of clouds and cloud layers is proposed. The method has a practical realization and is applied in virtual reality systems of flight simulators.
In this paper we search for an adequate description of multifunctional infocommunication systems in terms of categories and functors.
The article discusses an alternative concept of computing processes. The individual execution of sequential programs by a complex computing unit is replaced with the collective activity of simple automata, each of which independently solves a small part of a complex challenge. Such an approach makes it possible to increase the performance and energy efficiency of computers many times over, in combination with high reliability and information security.
The current state of research in the field of computerized electrophysiological gastrointestinal tract (GIT) diagnostic methods is analyzed in the paper in terms of the cooperation of medical specialists and IT engineers. The hardware, software, and methodology of electrogastroenterography (EGEG) are reviewed. The features of the wavelet transform (WT) in the processing of non-stationary EGEG signals are considered. The research infrastructure built in recent years in the North-West region of Russia is presented. Prospects are offered: using telemedicine technologies in EGEG and developing an open Internet platform for accumulating and sharing experience between researchers in this field.
The paper considers the phenomena of lingual and social colonization within the framework of the infocommunication process. A description of mathematical, physical, and biological objects by the process of cognitive programming, which is defined not by the notion of information but by the notion of infology (the process of informational transformations), is proposed.
The article sets out methods proposed for solving the generalized eigenvalue and eigenvector problem for singular matrix pencils, which occurs in important applied problems in different branches of knowledge.
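For background only (not the methods of the article), standard solvers handle the regular generalized problem A v = λ B v but degrade on pencils with a rank-deficient B, which is what motivates specialized methods; a SciPy illustration:

    import numpy as np
    from scipy.linalg import eig

    # Generalized problem A v = lambda * B v with a rank-deficient B,
    # so one generalized eigenvalue is infinite; fully singular pencils
    # (det(A - lambda*B) identically zero) degrade even further.
    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])
    B = np.array([[1.0, 0.0],
                  [0.0, 0.0]])   # rank-deficient

    vals, vecs = eig(A, B)
    print(vals)  # one finite eigenvalue (2.0) and one inf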
A review of the 1st through 5th Russian Conferences on Simulation is presented from the points of view of the methods, languages, and systems of modeling used and their application areas. Technologies of the interaction of simulation modeling with other types of modeling (analytical, integrated, hybrid) are analyzed. The main tendencies in the development of these types of modeling are identified.
A method for estimating the time required for the processing of external events by distributed program applications is presented. It is shown that the feasibility of such applications may depend on the variation of task execution times and message delivery times. The influence of this variation shall be taken into account during feasibility checking for the task chains that participate in the reaction to specific external events.
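A minimal sketch of why the variation matters, under assumed per-task execution-time and message-delay intervals: the interval end-to-end bounds can cross a deadline that the midpoint values would satisfy.

    # Worst-case end-to-end reaction time of a task chain, assuming
    # execution times and message delays vary within [min, max] (ms).
    chain = [  # (task execution range, outgoing message delay range)
        ((2.0, 3.5), (1.0, 4.0)),
        ((5.0, 6.0), (0.5, 2.0)),
        ((1.0, 1.5), (0.0, 0.0)),   # last task sends no message
    ]
    deadline_ms = 15.0

    best = sum(e[0] + m[0] for e, m in chain)    # 9.5 ms
    worst = sum(e[1] + m[1] for e, m in chain)   # 17.0 ms: midpoint values
                                                 # would meet the deadline,
                                                 # the worst case does not.
    print(f"reaction time in [{best}, {worst}] ms; "
          f"feasible: {worst <= deadline_ms}")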
A quantitative analysis of the Russian lexicon was performed in the paper. The thesaurus Russian WordNet and two electronic dictionaries are under examination: the Russian Wiktionary and the English Wiktionary. The quantities of Russian words and their meanings (senses) are compared according to the parts of speech. The distribution of words for each part of speech, the quantity of monosemous and polysemous words, and the distribution of words by number of meanings were calculated and compared across these dictionaries. The analysis of the distribution of words by number of meanings revealed a problem: few or no ambiguous Russian words with more than four meanings are present in the English Wiktionary (in comparison with the Russian Wiktionary). The analysis shows that the average polysemy and the number and distribution of word senses follow similar patterns in both expert and collaborative resources, with relatively minor differences.
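The kind of counting involved can be sketched as follows, with a toy word-to-senses mapping standing in for the parsed dictionary data:

    from collections import Counter

    # Toy word -> senses mapping (a stand-in for parsed dictionary data).
    lexicon = {
        "go":    ["move", "leave", "function", "proceed", "fit"],
        "bank":  ["institution", "river side"],
        "run":   ["move fast", "operate", "flow", "manage"],
        "happy": ["feeling joy"],
    }

    n_senses = {w: len(s) for w, s in lexicon.items()}
    dist = Counter(n_senses.values())           # words by number of meanings
    mono = sum(1 for n in n_senses.values() if n == 1)
    poly = len(n_senses) - mono
    avg = sum(n_senses.values()) / len(n_senses)

    print("distribution:", dict(sorted(dist.items())))
    print(f"monosemous={mono}, polysemous={poly}, average polysemy={avg:.2f}")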
The cloud computing model should be adapted for high availability and security of cloud computational resources. In this paper, a taxonomy of access to data and services in cloud computing is investigated for the SaaS (Software as a Service) layer, on the basis of an ontological description of the taxonomy of data and services. The types of cloud computing and the necessity of standardization in this area are analyzed.
The paper considers the phenomena of the emergence of information redundancy in multimedia data, using images and video as examples. Examples of redundancy emergence and variation are given, and a method for redundancy estimation is proposed.
This paper discusses the problem of the personal adjustment of smart room devices and the forming of a user profile based on the processing of multichannel audio and video streams, which register the current situation and the behavior of meeting participants in the meeting room. Estimation of device usage preferences, user interfaces, participant roles, and participants' activity during a meeting allows us to automate the processes of smart room preparation as well as to manage multimedia presentation and recording devices during events. 212 records were made during several meetings in the smart room with the help of the developed system of audio and video speaker localization. The accumulated experimental data allowed us to estimate the places in the room from which the participants asked questions most of the time. The accuracy of camera pointing at a speaker in the presentation zone, as well as in the rows of seats, estimated by the participant's face size and its position in the frame during the whole recording, is approximately 90%.
The paper presents an analysis of a protection mechanism against infrastructure attacks based on the bio-inspired "nervous network system" approach. We propose to use network packet-level simulation to investigate the "nervous network system" protection mechanism. The paper presents the structure of the protection mechanism, the algorithms of its functioning, and the results of the experiments. Based on the experimental data, we analyze the effectiveness of the proposed protection mechanism.
The paper discusses problems of decision support for configuring flexible networked organizations. It is shown that one of the most promising forms of decision support in this domain is collaborative recommender systems. Such systems recommend solutions (related to products, services, technologies, tools, materials, and business models) based on user groups' requirements, their preferences, and their willingness to compromise and propose new ideas. Specific features of collaborative recommender systems are considered together with the major problems that need to be solved in order to increase the efficiency of such systems. Approaches to solving the above-mentioned problems are proposed.
In the course of the development of interactive dynamic web applications, it is necessary to take into account the data types used and their input/output means, as well as to provide the application with the capability to analyze the current conditions in which interaction with a user will be conducted and to adapt multimedia content correspondingly in order to improve the usability and naturalness of the man-machine dialogue. A survey of modern papers concerned with the automatic generation of web interfaces and the development of multimodal user web applications is presented in the paper. Approaches to the description, extraction, and processing of the context information required for fitting a web interface to the current conditions of usage during interaction with a user are considered.
A quantitative analysis of the English lexicon was performed in the paper. Three electronic dictionaries are under examination: the English Wiktionary, WordNet, and the Russian Wiktionary. The quantities of English words and their meanings (senses) are calculated. The distribution of words for each part of speech, the quantity of monosemous and polysemous words, and the distribution of words by number of meanings were calculated and compared across these dictionaries. The analysis shows that the average polysemy and the number and distribution of word senses follow similar patterns in both expert and collaborative resources, with relatively minor differences.
The paper is devoted to the proof of upper bounds on the number of steps of logic-objective algorithms for the recognition of a complicated image situated on a display screen. It is proved that the problem of separating and recognizing an etalon object in a complicated scene has a polynomial algorithm. The problem of separating and recognizing an object from a class whose description contains only the distinctive attributes of this class is also considered. To decrease the number of algorithm steps, a notion of "fuzzy image" is introduced. The problem of invariant (under rescaling) image recognition is considered.
The goal of this paper is to obtain a wavelet decomposition (wavelet refinement) of the chain of embedded spaces of splines for an arbitrary refinement of a nonuniform grid, to derive the corresponding decomposition and reconstruction formulas, and to construct wavelet decompositions and decomposition and reconstruction algorithms in the case of an infinite flow for a grid on an open interval and a finite flow for a grid on a segment.
Advantages of a physical approach to dynamic systems simulation are discussed. A great need for the approach in the modern state of system engineering is pointed out. In particular, such an approach may be useful for the simulation of control and self-organizing systems. A situation-event formalism for the specification of interacting hybrid processes is briefly stated, and some ways of its use for the implementation of physical simulation models are shown. The facilities of the considered methods are illustrated by examples of implementations of some simple dynamic system models.
Methods for measuring music similarity allow for implementations of completely automated content-based music recommendation systems (similar to Pandora, but without the manual work of expert musicologists). This paper presents a novel method of measuring music harmony similarity based on an original probabilistic graphical model. The model includes information about the current chord and mode; we introduce a hidden parameter, style, which governs the probability of using a certain chord within the context of a certain mode, and propose to measure the similarity as a distance between the parameter vectors of the probability distribution function for style. Similar to some methods for extracting chord progressions, our model includes neither rhythmic information nor dependencies between neighboring chords. We describe the implementation of our model in the Infer.NET system and show experimental results on generated data. The results of experiments with real-world data are negative, which indicates that simple bag-of-chords models are not suitable for the music similarity task.
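A crude stand-in for the proposed distance (not the paper's graphical model) is the distance between empirical bag-of-chords distributions; the chord sequences and vocabulary below are illustrative.

    import numpy as np

    def chord_distribution(chords, vocab):
        """Empirical bag-of-chords distribution over a fixed vocabulary."""
        counts = np.array([chords.count(c) for c in vocab], dtype=float)
        return counts / counts.sum()

    def harmony_distance(p, q):
        """Cosine distance between chord-usage distributions; a crude
        stand-in for a distance between style parameter vectors."""
        return 1.0 - (p @ q) / (np.linalg.norm(p) * np.linalg.norm(q))

    vocab = ["C", "Dm", "Em", "F", "G", "Am"]
    song_a = ["C", "G", "Am", "F", "C", "G", "F", "C"]
    song_b = ["C", "F", "G", "C", "Am", "Dm", "G", "C"]
    song_c = ["Em", "Am", "Em", "Dm", "Em", "Am", "Dm", "Em"]

    pa, pb, pc = (chord_distribution(s, vocab) for s in (song_a, song_b, song_c))
    print(harmony_distance(pa, pb))  # small: similar harmonic profile
    print(harmony_distance(pa, pc))  # larger: different profile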
The approach is based on the use of the known situation-event formalism for the specification of interacting hybrid processes. A short description of the formalism is set out. Some advantages of its use for the computer implementation of dynamic systems are pointed out, and its abilities for process coordination are discussed. Hybrid process coordination methods based on it and the peculiarities of their usage are reviewed. Examples of automatic coordination systems which illustrate the usage of some methods are given.
In this article we address the question of how to develop a procedure of functional and parametric analysis based on the calculation of the effectiveness of multifunctional complex operation during decision making; we offer correlations for the calculation and note the practicability of adding support devices to information complexes.
This paper presents the current state of the art in the field of automated musical harmony analysis. Research in this field can be motivated by the real-world problem of creating completely automated content-based music recommendation systems (similar to Pandora, but without the manual work of expert musicologists). The paper is mainly focused on probabilistic graphical models as one of the most promising approaches, although we also give background on alternative methods. We consider works that use Markov chain models, hidden Markov models, and multi-level graphical models. Along with models that capture only harmonic information (chord progressions, and in some cases also the key), we also list several models that combine harmonic structure with rhythmic or stream structure.
The survey, based on domestic and foreign literature as well as the results of projects commissioned by the Russian Foundation for Basic Research, is devoted to the analysis of the current state of research on the following problems: the integration and complex automation of the management processes of the main and auxiliary production, logistic, and service organizational-technical systems that support the life cycle (LC) of a specific product; the creation, use, and development of intelligent information technologies, including technologies of ubiquitous computing and communications and multimodal user interfaces; and the multi-criteria evaluation and analysis of the contribution of information technologies and systems to the mainstream enterprise (firm), including the selection and implementation of effective technologies of life-cycle product (LCP) management. The main feature of this review is that the consideration of all these problems is based on a new cybernetic paradigm of the XXI century associated with the concept of complexity control.
The article shows the distinctive features of reliability calculation and simulation for network-structure systems. A reliability optimization technique for network-structure systems based on a logical-probabilistic reliability optimization algorithm is provided. The article presents the results of solving reliability optimization tasks for network-structure systems. These results are compared with solutions obtained using other techniques.
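The underlying reliability quantity for a network-structure system can be illustrated by brute-force enumeration on a toy two-path network (the article's logical-probabilistic optimization technique itself is not reproduced):

    from itertools import product

    # s-t reliability of a small network: each edge works independently
    # with the given probability; sum the probabilities of the edge
    # subsets that keep s and t connected. Probabilities are illustrative.
    edges = {("s", "a"): 0.9, ("a", "t"): 0.9, ("s", "b"): 0.8, ("b", "t"): 0.95}

    def connected(up, s, t):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for (x, y) in up:
                v = y if x == u else x if y == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return t in seen

    def st_reliability(edges, s="s", t="t"):
        rel, es = 0.0, list(edges)
        for state in product([0, 1], repeat=len(es)):
            pr, up = 1.0, []
            for e, on in zip(es, state):
                pr *= edges[e] if on else 1 - edges[e]
                if on:
                    up.append(e)
            if connected(up, s, t):
                rel += pr
        return rel

    print(st_reliability(edges))  # 0.9544 for the two parallel two-edge paths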
A multiple-model approach to the description and investigation of control processes in space systems is presented to address the changeability of space-facility (SF) parameters and structures, which can be caused by objective (subjective) external (internal) reasons. The presented multiple-model complex, as compared with known analogues, has several advantages. It simplifies decision making in SF control system (CS) structure dynamics management, for it allows seeking alternatives in finite-dimensional spaces rather than in discrete ones. The complex permits reducing the dimensionality of the SF CS structure-functional synthesis problems to be solved in a real-time operation mode. This statement is exemplified by an analysis of the information-technological abilities and goal abilities of an SF CS.
Some artificial intelligence problems, including pattern recognition, medical diagnostics, and market analysis, are reduced to the proof of the satisfiability of predicate calculus formulas with a simple structure. Some algorithms solving such problems are considered, and upper bounds on the numbers of their steps are proved.
Piecewise continuous splines of Lagrange type are constructed. An embedding of spline spaces is established for an arbitrary refinement of grids. A system of linear functionals biorthogonal to the splines is constructed. Wavelet decompositions and decomposition and reconstruction algorithms are constructed in the case of an infinite flow for a grid on an open interval and a finite flow for a grid on a segment.
The article presents systematic results on the present level of development of the common logical-probabilistic method (CLPM) and of the theory and technology of automated structural and logical simulation (ASLS). The article identifies the main sections and areas for the further development of CLPM analysis systems, gives their brief informative description, and provides examples of problem solving.
This paper considers the phenomenon of identification in its development. Some mathematical models and approaches to identification are presented. The adequacy of Kolmogorov's programmed technology and the digital communication environment is shown.
For the multi-criteria evaluation of the basic topological structures and fractal structures of GRID systems and new-generation telecommunication networks, basic quality predicates have been formalized and proved. The results of a comparative analysis of fractal and multifractal architectures of distributed GRID systems by the criteria of reliability, cost, and bandwidth are described.
Image registration is one of the basic problems of computer vision. It arises in optical flow estimation, stereo vision, and tracking problems. One of the classical approaches, proposed by B. Lucas and T. Kanade, is based on the optimization of a cost function. In this article, an image registration algorithm based on the Lucas–Kanade approach (a random sampling algorithm) is proposed. It shows high performance.
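A minimal sketch of the classical Lucas–Kanade least-squares step for a pure translation follows; the article's random-sampling variant (which subsamples pixels) is not reproduced, and the synthetic image is illustrative.

    import numpy as np

    def lk_translation(I, J):
        """One Lucas-Kanade step: least-squares estimate of the translation
        d minimizing sum over pixels of (J(x) - I(x) + grad(I) . d)^2.
        (The article's random-sampling variant subsamples the pixels.)"""
        Iy, Ix = np.gradient(I)                          # image gradients
        It = J - I                                       # temporal difference
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # N x 2 system
        b = -It.ravel()
        d, *_ = np.linalg.lstsq(A, b, rcond=None)
        return d  # (dx, dy) estimate

    # Synthetic test: a smooth image shifted by one pixel along x.
    x = np.linspace(0, 2 * np.pi, 64)
    X, Y = np.meshgrid(x, x)
    I = np.sin(X) * np.cos(Y / 2)
    J = np.roll(I, 1, axis=1)    # shift right by one pixel (with wrap-around)
    print(lk_translation(I, J))  # ~ [1., 0.] up to discretization/boundary error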
An approach to the formation of complete and reliable primary health information is proposed. Annual data on the nation's health are analyzed, and the lack of other sources of information for the calculation of medical errors is pointed out. As a means of access to primary health care documents, a specialized information retrieval system implemented on immunocomputing is proposed. The system, providing complete, reliable, and accessible basic health information, together with the related software, is considered as the basis of a common information space for public health. A fundamental feature of this software is the use of open source code and a free license.