A nicely visualized summary of our recent research can be downloaded from: .
In Terziyan, Gryshko & Golovianko (2018), we announced a new emergent component of collective intelligence for Industry 4.0 systems: a digital cognitive clone of a human, together with the related cognitive-cloning technology (Pi-Mind). We also showed that such a component not only brings new opportunities for managing processes in cyber-physical systems but also introduces new vulnerabilities related to potential cognitive hacking.
In Terziyan, Golovianko & Gryshko (2018), we study the vulnerabilities typical of intelligent systems working in Industry 4.0, including data poisoning and data evasion attacks. We also formulate the major principles of digital immunity and the main objectives for further studies and developments within our NATO SPS project. The study of vulnerabilities related to the data used for training intelligent systems is continued in Terziyan & Nikulin (2019) and Terziyan & Nikulin (2021), where we study the geometry of data manifolds and methods to discover and analyze the voids within them. If the data is used to train intelligent systems, such voids correspond to potential vulnerabilities of these systems to adversarial attacks (poisoning, evasion). We discovered that filling these voids with labelled (adversarial) samples in a smart way works as a kind of digital vaccine for future protection.
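The void-filling idea above can be illustrated with a toy sketch (not the algorithm from the papers, which works on data manifolds of real dimensionality): scan the data space for regions far from any training sample, and fill them with labelled "vaccine" samples so that an attacker finds no unlabelled gaps near the decision boundary. The grid step, void radius, and nearest-neighbour labelling rule here are illustrative assumptions.

```python
import math

def nearest_label(point, data):
    """Label a point with the class of its nearest training sample."""
    return min(data, key=lambda s: math.dist(point, s[0]))[1]

def digital_vaccine(data, grid_step=0.25, void_radius=0.5):
    """Scan a grid over a 2D data space; grid points farther than
    `void_radius` from every training sample are treated as voids and
    filled with labelled 'vaccine' samples."""
    xs = [p[0][0] for p in data]
    ys = [p[0][1] for p in data]
    vaccine = []
    x = min(xs)
    while x <= max(xs):
        y = min(ys)
        while y <= max(ys):
            if all(math.dist((x, y), s[0]) > void_radius for s in data):
                vaccine.append(((x, y), nearest_label((x, y), data)))
            y += grid_step
        x += grid_step
    return vaccine

# Two small clusters with an unprotected void between them:
data = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
        ((3.0, 3.0), "B"), ((2.8, 2.9), "B")]
shots = digital_vaccine(data)  # labelled samples covering the void
```

Retraining on `data + shots` would then leave no large unlabelled region for adversarial samples to exploit.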
In Gavriushenko, Kaikova & Terziyan (2020), we study the specifics of systems whose processes are secured by collective (human + AI) intelligence. Because cognitive processes work differently in human and artificial minds, the cognitive vulnerabilities differ, and attack scenarios may have to be hybrid to succeed against both components. We argue (on an abstract level) that achieving a high level of security requires training both the human and the AI components of security systems together, as a collective intelligence, using hybrid training methods. In the extended version of this study (Terziyan, Gavriushenko, Girka, Gontarenko & Kaikova, 2021), we suggest specific architectures for training collective intelligence using adversarial learning. The architectures are able to generate sophisticated attacks (aka digital vaccines), which push the collective intelligence to learn by adaptation. We train the intelligence to compromise under adversarial attacks: on the one hand, keeping as much as possible of the individual features of the humans (the donors of the individual digital clones); on the other hand, training each group member's capability to find reasonable compromises when making responsible group decisions from individual expert opinions.
In Girka, Terziyan, Gavriushenko & Gontarenko (2021), we suggest a new algorithm for protecting sensitive data (against adversarial attacks) used for training and testing intelligent systems based on deep neural networks. It can serve as an important feature of the digital immune system and as a complementary alternative to the digital vaccination concept. The method is based on secure topological transformations of the data space that make potential adversarial attacks on the trained intelligent system infeasible.
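The general idea of training in a secretly transformed data space can be sketched as follows (a toy illustration only, not the published algorithm): every sample is passed through a seeded, invertible transformation of the feature space before training, so an attacker without the secret seed cannot craft meaningful perturbations against the deployed model. The rotation-plus-squashing construction below is an illustrative assumption.

```python
import math
import random

def make_secret_transform(dim, seed):
    """Build a secret invertible transformation of the feature space:
    a random rotation (Gram-Schmidt on a seeded Gaussian matrix)
    followed by a monotone nonlinearity. Only a holder of the seed can
    map raw samples into the space where the model is trained."""
    rng = random.Random(seed)
    m = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(dim)]
    basis = []
    for row in m:  # Gram-Schmidt orthonormalization -> rotation matrix
        v = row[:]
        for b in basis:
            dot = sum(x * y for x, y in zip(v, b))
            v = [x - dot * y for x, y in zip(v, b)]
        norm = math.sqrt(sum(x * x for x in v))
        basis.append([x / norm for x in v])

    def transform(sample):
        rotated = [sum(r * x for r, x in zip(row, sample)) for row in basis]
        return [math.tanh(z) for z in rotated]  # monotone squashing

    return transform

t = make_secret_transform(3, seed=42)
a = t([1.0, 2.0, 3.0])
b = t([1.0, 2.0, 3.0])  # same seed -> identical encoding
```

Because both the rotation and `tanh` are continuous bijections onto their images, neighbourhood structure (what the classifier needs) is preserved while the raw feature values are hidden.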
Adversarial attacks can cause immediate disruptions in a system, disabling some of its functional parts. Another brand-new technology, called Complementary Artificial Intelligence (CAI), is reported in Terziyan & Kaikova (2021). CAI is based on the so-called “coolabilities” (enhanced capabilities in disability conditions). We present several neural network architectures (controlling a cyber-physical process) that are resilient to various kinds of disabilities (e.g., under adversarial attacks) and capable of keeping the decision-making process ongoing even with seriously damaged sensor and actuator infrastructure.
A taxonomy of adversarial neural network architectures for developing the artificial digital immunity of intelligent systems is presented in Terziyan, Gryshko & Golovianko (2021). These architectures support digital vaccination as a proactive protection strategy. We suggest several innovative components for generative adversarial networks that can be reused in other domains, adding essential value to the adversarial learning field.
In Golovianko, Gryshko, Terziyan & Tuunanen (2021, a), we report on successful experiments done within our NATO SPS project: (a) modelling adversarial attacks on an intelligent system via corrupted camera images and (b) automatically generating a digital vaccine (special images) for retraining and protecting the system. We find the task of designing digital immunity analogous to the digital cloning of human decision-making. That's why the study of common tools for cloning and digital immunity development was continued in Golovianko, Gryshko, Terziyan & Tuunanen (2022), where we present a detailed description of the techniques used to protect critical systems and processes against adversarial attacks and natural disruptions. We use the concept of digital cognitive cloning as the major defence component and design the clone-training architectures to ensure the sustainability of critical processes both in usual settings and under sophisticated adversarial attacks. Enabling “digital immunity” for autonomous intelligent systems (such as digital clones) also means well-formed, reliable decision boundaries between critical decision options, avoiding various speculations within the vulnerable zones of the decision space. In Branytskyi, Golovianko, Gryshko, Malyk, Terziyan & Tuunanen (2021), we argue that the process of adversarial learning, which includes the selection and generation of adversarial samples, can serve both emergent objectives (digital clones and digital immunity). Such adversarial samples help build more accurate personalized decision boundaries (for digital cloning) and also play the role of a “digital vaccine” that protects vulnerable regions close to the decision boundaries (for digital immunity).
In Semenets, Terziyan, Gryshko & Golovianko (2021), we developed special analytics for sustainable collaborative decision-making, which (a) is based on explainable AI; (b) is capable of supporting collaborative (human + AI) decision-making; and (c) is resilient against “cognitive hacking” attacks on individual (human or AI) decision makers, thanks to special compromise decision-making techniques and the “transparent minds” of the decision makers (shared individual value systems). These analytics and their sustainability have been tested within a collective awareness platform (the Semantic Portal TRUST) on real collaborative decision processes (academic assessment and selection) involving multiple decision makers, including autonomous AI-driven ones. In Semenets, Gryshko, Golovianko, Shevchenko, Titova, Kaikova, Terziyan & Tiihonen (2021), we further study the impact of collective awareness on the development and sustainability of the academic mindset. The lessons learned from the TRUST portal's active use have been presented, and they once again showed that the minds of academic personnel in universities would be more secure and resilient against various cognitive manipulations if digitalized and engaged in various transparent cognitive processes at collective awareness platforms.
Our latest research broadens the scope of the sustainable and resilient models we suggest to industry. In Kumpulainen & Terziyan (2022), we study the potential of using (in addition to computational intelligence) strong AI (Artificial General Intelligence) in the context of smart manufacturing and Industry 4.0. In Terziyan & Vitko (2022), we show how to enable Explainable Artificial Intelligence when dealing with deep learning (black-box) models in the context of asset management, condition monitoring, industrial diagnostics, and predictive maintenance. In Branytskyi, Golovianko, Malyk & Terziyan (2022), we designed and experimentally tested modified and novel biologically inspired neural network architectures to increase the performance of AI models working within Industry 4.0 and smart manufacturing.
In Terziyan, Malyk, Golovianko & Branytskyi (2022), we suggested a new flexible architecture for convolutional neural networks based on parametrized and trainable generalized Lehmer and power means. We experimentally show that such architectures are substantially more robust against adversarial attacks than traditional ones. In Kaikova, Terziyan, Tiihonen, Golovianko, Gryshko & Titova (2022), we summarized and showed the connection between the IMMUNE project (digital immunity of AI systems against adversarial attacks) and the WARN project (collective immunity of citizens against hybrid threats), assuming that, on an abstract level, both projects' objectives can be reached via special “vaccination” (adversarial training for digital systems and for humans). In Terziyan, Malyk, Golovianko & Branytskyi (2023), we extend our previous study on data anonymization for privacy-preserving machine learning to image data. We show that our homeomorphic encryption algorithm can be applied at the feature level of image representation and enables both hiding the private information and achieving good classification performance on the encrypted data. In Terziyan, Bilokon & Gavriushenko (2024), we also developed a metric for evaluating the quality of homeomorphic encryption and similar algorithms, and made an experimental evaluation of the proposed metric. In Terziyan & Vitko (2023), we suggested a way to discover (and take into account in image classification and generation) hidden causal relationships among image features. We introduced a convolutional neural network architecture enhanced by a causality map (a special type of self-attention mechanism). Such an architecture, when applied within generative adversarial networks, can generate images that look realistic also from the causality point of view.
In Terziyan, Kaikova, Malyk & Branytskyi (2023), we suggested an uncertainty-driven technique to improve the performance of convolutional neural networks for image classification. The intuition behind our “decontextualize-and-extrapolate” approach is as follows: an image does not necessarily contain all the information needed for perfect classification; a trained network gives (for the entire image and with some uncertainty) a probability distribution over the possible classes; the same network can also give a similar probability distribution for a “part” of the image (i.e., with higher uncertainty); one can then discover the trend of the probability distribution as the uncertainty changes; and a better (refined) probability distribution can be computed from these two distributions by extrapolating them towards lower uncertainty.
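The extrapolation step can be sketched numerically. Assuming (as an illustration; the paper's exact rule may differ) that each class probability changes linearly along the uncertainty axis, the two predictions define a line that can be followed from the full-image point down to zero uncertainty:

```python
def extrapolate_distribution(p_full, u_full, p_part, u_part):
    """Given the class distribution p_full predicted from the whole image
    (uncertainty u_full) and p_part predicted from a part of it (higher
    uncertainty u_part > u_full), linearly extrapolate the trend of each
    class probability towards u = 0 and renormalize. The linear rule is
    an illustrative assumption."""
    assert u_part > u_full >= 0
    slope = [(f - q) / (u_part - u_full) for f, q in zip(p_full, p_part)]
    refined = [max(f + s * u_full, 0.0) for f, s in zip(p_full, slope)]
    total = sum(refined)
    return [r / total for r in refined]

# Whole image mildly favours class 0; the cropped part is less sure.
# Extrapolating the trend towards zero uncertainty sharpens the vote:
refined = extrapolate_distribution([0.6, 0.4], 0.2, [0.5, 0.5], 0.6)
```

Here the refined distribution moves further towards class 0 than the full-image prediction, because the trend from "part" to "whole" already pointed that way.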
Smart manufacturing is being shaped nowadays by two different paradigms: Industry 4.0 proclaims a transition to the digitalization and automation of processes, while the emerging Industry 5.0 emphasizes human centricity. This turn can be explained by the unprecedented challenges recently faced by societies, such as global climate change, pandemics, hybrid and conventional warfare, and refugee crises. Sustainable and resilient processes require bringing the human back into the loop of organizational decision-making. In Golovianko, Terziyan, Branytskyi & Malyk (2023), we argue that the most reasonable way to marry the two extremes of automation and value-based, human-driven processes is to create an Industry 4.0 + Industry 5.0 hybrid, which inherits the most valuable features of both: the efficiency of Industry 4.0 processes and the sustainability of Industry 5.0 decisions. Digital cognitive clones twinning human decision-making behavior are presented as an enabling technology for the future hybrid and as an accelerator of the convergence of the digital and human worlds.
One of our recent concerns has been the relationship between the AI and Information Systems (IS) domains. We were curious whether popular discoveries in IS (design science and the corresponding recommendations, research methods and the corresponding recommendations, etc.) really contribute to the major discoveries in AI and, if not, who these recommendations are meant for. In Shukla, Terziyan & Tiihonen (2023), we suggested analytics to approach and answer such and similar research questions using recent bibliography from Scopus and WoS. The currently published evidence tells us that AI depends on IS research much less than IS depends on AI research, and the discovered trend shows that this gap is widening with time …
In Terziyan, Kaikova, Golovianko & Vitko (2024), we suggested a dialogue schema in which ChatGPT is asked to answer the research questions from a target article and then to compare its own answers with those from the article. Finally, ChatGPT is asked to integrate both suggested solutions into a consistent story. In Terziyan & Vitko (2024), we study the concept of the so-called Taxonomy-Informed Neural Network, which combines data-driven training of neural networks with available ontological knowledge. We study different patterns of controlling neural network training with additional knowledge: complex taxonomies (class-subclass hierarchies), known instance-class relationships with multiple inheritance and potential for federated learning, and other logical constraints. In Kaikova & Terziyan (2024), we select two discrete modelling paradigms (Petri nets and cellular automata) and one continuous modelling paradigm (deep learning) to study their potential hybrids. We present four hybrids: Petri nets controlling deep neural networks and vice versa, and cellular automata controlling deep neural networks and vice versa. Such hybrids represent cases in which either (a) a discrete and explainable model controls a continuous black-box model, bringing certain explainability and robustness to the hybrid, or (b) a black-box continuous control is applied on top of a discrete and explainable model, bringing certain accuracy and efficiency to the hybrid. We argue that the flexibility of these and similar hybrids may increase the scope and quality of modelling and simulation tasks in smart manufacturing. In Terziyan, Terziian & Vitko (2024), we suggest several updates to cellular automata (particularly Conway's “Game of Life”) to make them a suitable tool for modelling resilience.
These include the “Life and Creation”, “War and Peace”, and hybrid “War and Creation” cellular automata, capable of addressing important components of resilience such as controllable creativity and adversarial interactions. We posit that combining these additional components with the known advantages of cellular automata, such as simplicity, emergent behavior, nonlinear dynamics, parallelism, and adaptability, makes them a powerful simulation tool for modelling a wide range of complex Industry 5.0 systems that involve humans in different roles, smart infrastructure, their complex interactions and logistics, safety, and resilience. In Terziyan & Tiihonen (2024), we propose an architecture and analytics for a generative adversarial network, called Cloning-GAN, which enables donor-clone knowledge transfer, including the transfer of the donor's individual biases. The architecture generates challenging samples to be labeled by the donor and then used as training data for the clone. Our approach considers several multicriteria requirements for the generated training data: being close to the potential decision boundary, distributed uniformly across the decision space, maximally confusing for the donor, and challenging for the clone. With this architecture, the clone can quickly learn the hidden cognitive skills and biases of the donor and reproduce them in further simulations. We also present various strategies for balancing the conflicting quality criteria for the generated data, allowing the architecture to find optimal parameter settings during training.
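The “Life and Creation” and “War and Peace” automata are described above only conceptually; all of them extend the same baseline substrate, a single step of which can be sketched as follows (standard Conway rules only, with no attempt to reproduce the published extensions):

```python
from collections import Counter

def life_step(alive):
    """One step of Conway's Game of Life on a set of live cells, the
    baseline automaton that the resilience-oriented variants extend.
    A live cell survives with 2-3 live neighbours; a dead cell with
    exactly 3 live neighbours is born."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

blinker = {(0, 0), (1, 0), (2, 0)}  # a horizontal line of three cells
flipped = life_step(blinker)        # oscillates to a vertical line
```

The published variants would add further state and transition rules on top of this step (e.g., competing populations or creation events), while keeping the same local-update structure.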
Currently we are working on topics related to "Knowledge-Informed Neural Networks", including their particular case, Physics-Informed Neural Networks (PINNs). These are quite promising and cool instruments, so we expect more discoveries to come soon …
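The core idea of a PINN is a composite loss: fit the observed data while also penalizing the residual of the governing equation at collocation points. A minimal numerical sketch (using a toy ODE and finite differences; a real PINN would use a neural network for u and automatic differentiation for the derivative):

```python
import math

def pinn_style_loss(u, data, collocation, h=1e-4):
    """Composite loss in the spirit of physics-informed neural networks:
    a data term that fits observations plus a physics term that
    penalizes the residual of the governing equation, here the toy ODE
    u' + u = 0, estimated with central finite differences."""
    data_loss = sum((u(x) - y) ** 2 for x, y in data) / len(data)

    def residual(x):
        du = (u(x + h) - u(x - h)) / (2 * h)  # approximate u'(x)
        return du + u(x)                       # u' + u should be 0

    physics_loss = sum(residual(x) ** 2 for x in collocation) / len(collocation)
    return data_loss + physics_loss

good = lambda x: math.exp(-x)  # exact solution of u' + u = 0
bad = lambda x: 1.0 - x        # violates the ODE away from x = 0
data = [(0.0, 1.0), (1.0, math.exp(-1.0))]
collocation = [0.0, 0.5, 1.0]
```

Training would minimize this loss over the parameters of u; the physics term is what lets the model generalize from sparse observations.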