IAGON – The Global Supercomputer

IAGON is a pioneering project to deliver decentralized supercomputing capabilities powered by Artificial Intelligence and Blockchain technology. The open-source platform is designed to intelligently manage and harness the storage capacity and processing power of multiple enabled computers over a decentralized Blockchain grid.

IAGON’s Artificial Intelligence Technology

At the core of IAGON's technology is an advanced autonomous Artificial Intelligence architecture known as the Alexandria Protocol. In simple terms, this is a Smart Control Plane Protocol that uses Artificial Intelligence (AI) and Blockchain to coordinate, optimize and control the resources of a decentralized Smart Global Computational Grid (SGCG). It is the main layer of IAGON's underlying technology and addresses the need for a decentralized resource-optimization mechanism that schedules, coordinates and allocates the storage and processing resources of the peer-to-peer (P2P), distributed-ledger (DLT) grid among IAGON's Blockchain-coordinated miner nodes.

Alexandria uses techniques for continuous analysis of performance parameters such as uptime, trustability, amount of work and longevity of miners, and for optimal resource allocation across P2P/DLT networks. Its parameters are defined in the Proof-of-Utilitarian algorithm, under which miner nodes must prove the usage of their spare resources (CPU cycles and storage) through meaningful computational work, going beyond Bitcoin's Proof-of-Work (PoW), where computational power and energy are expended without any real application.
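
As a rough illustration (IAGON's actual scoring is not public), the following Python sketch combines the four miner parameters named above into a single weighted "utilitarian" score; the field names and weights are assumptions, not the real algorithm:

```python
# Minimal sketch, not IAGON's actual Proof-of-Utilitarian: a weighted
# score over the miner parameters named above. Weights are hypothetical.
from dataclasses import dataclass

@dataclass
class MinerStats:
    uptime: float        # fraction of time online, 0..1
    trustability: float  # historical validation success rate, 0..1
    work_done: float     # normalised useful CPU cycles / storage served, 0..1
    longevity: float     # normalised time active on the grid, 0..1

def utilitarian_score(m: MinerStats,
                      weights=(0.3, 0.3, 0.3, 0.1)) -> float:
    """Combine the parameters into one score used to rank miners
    for rewards and task allocation."""
    w_up, w_tr, w_wk, w_lg = weights
    return (w_up * m.uptime + w_tr * m.trustability
            + w_wk * m.work_done + w_lg * m.longevity)

print(utilitarian_score(MinerStats(0.99, 0.95, 0.7, 0.4)))  # 0.832
```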

On a deeper level, the Alexandria Protocol can be described more technically as a hybrid of second-wave (probabilistic and statistical) AI with integrated components of third-wave (contextual adaptation) learning systems. Built upon core second-wave concepts, it employs artificial neural networks (ANNs) and statistical models such as Bayesian networks (which characterise the problem domain it is trying to solve) to learn and adapt to the different situations that arise within the smart computational grid.

Further enhancements adopt third-wave design concepts: the system combines its statistical models with Reinforcement Learning and Markov Decision Processes to extract relevant information from external data sources (e.g. data centres and CPU/GPU-rich devices). In doing so, it can reach a more complete understanding of the computational grid. Through this process it can self-educate, identifying common-sense rules, creating new dataset relationships and, potentially (though this is not explicitly stated), improving model functionality and developing abstract reasoning.

The precise details of the employed AI are not publicly known due to the proprietary nature of the technology under development; more importantly, for such a complex system they are impossible to predict before the AI has been trained on relevant data and the statistical models have been engineered. What can be understood is that it will fundamentally rest on three core components: Deep/Ensemble Learning (a Big Data processing framework), Self-Learning Computation (the adaptation of dynamic Artificial Neural Networks (ANNs) and predictive models), and Scalable Infrastructure.

Machine-learning paradigms such as Ensemble and Deep Learning are tasked within this framework with learning multi-level features and the complex relationships between inputs (i.e. user-uploaded files) and output performance metrics from the computational grid and its data miners (e.g. CPU/GPU compute power, idle processes, latency and storage statistics), finding patterns in the data and capturing the statistical structure of the unknown joint probability distribution over those observed variables. Ensemble Learning was adopted initially for its multiple-learner strategy and efficiency, as it can obtain better predictive performance than any of its constituent learning algorithms alone. In essence, this creates a holistic, high-level, distributed representation of grid performance that paves the way for further self-learning functionality and optimization.
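
To make the ensemble idea concrete, here is a minimal sketch using scikit-learn; the grid metrics, target scores and choice of base learners are illustrative assumptions, not IAGON's models:

```python
# Illustrative ensemble over hypothetical grid metrics: several base
# regressors are averaged, typically beating any single constituent model.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor,
                              GradientBoostingRegressor, VotingRegressor)
from sklearn.linear_model import LinearRegression

# Hypothetical features per miner: [cpu_load, free_storage_gb, latency_ms]
X = np.array([[0.2, 500, 30], [0.8, 100, 120], [0.5, 250, 60], [0.1, 900, 20]])
y = np.array([0.9, 0.3, 0.6, 0.95])  # observed performance score per miner

ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
    ("lr", LinearRegression()),
])
ensemble.fit(X, y)
print(ensemble.predict([[0.3, 400, 40]]))  # predicted performance score
```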

The Self-Learning Computation component (mentioned earlier) is not bound to a specific framework but rather to multiple elements, or "base-learner" algorithms, that interact dynamically with contextual models, giving the system the ability to learn, to change (depending on performance metrics), to adapt and to evolve over time in whatever ways best structure the models. This adaptive learning strategy improves the model through experience, becoming more sophisticated and reliable with each iteration. In this process, the AI layer is able to fine-tune the coordination, optimization and control of the computational grid.

Smart Global Computational Grid

IAGON's Smart Computing Grid is effectively the logistics layer that handles the complex nature of decentralised computations hosted by a network of computing clusters across both Cloud and Grid environments. The analogy of integrating solar power into an electrical grid is apt here, as the two share common characteristics: the grid connects multiple producers (the miners) to the user/customer; it fulfils the demand for the required processing; it transfers unused resources (i.e. CPU/GPU processing power and storage space) to customers in need; and it rewards the miners who provide processing power and storage space to the grid (via Proof-of-Utilitarian), requiring no effort from them when their infrastructure is not in local use.

Systems integration in this context is one of the biggest challenges during the genesis of any cloud-based distributed computing platform, let alone one that is likely to become one of the most complex hybrid combinations of blockchain and tangle. The key to optimizing the Smart Grid is the presence of an advanced and pioneering AI that draws upon a wide range of problem-solving techniques and methods in pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, and Markov decision processes, which together integrate to form the AI-Tracker system.

The AI-Tracker is the "brain" behind IAGON's Smart Grid: a complex of dynamic learning systems that continuously analyses past and current data streams reflecting the availability of miners' storage space and processing capacity. The AI-Tracker carries out a number of tasks: registering the encrypted file slices' identities, attributes and locations on the blockchain (i.e. a file ledger); optimally allocating encrypted file slices to miners' free storage space and computational tasks to miners with idle CPU/GPU capacity; identifying rogue nodes (e.g. clandestinely integrated back doors) that should be blocked and removed from the grid; and continuously streamlining the grid's attributes (e.g. measuring elapsed execution times to assess scalability and speed-up) to optimize its performance at all times.
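
A toy version of the slice-allocation task might look like the following; the field names, scoring rule and greedy strategy are assumptions for illustration, not the AI-Tracker's real logic:

```python
# Illustrative greedy allocator in the spirit of the AI-Tracker's slice
# placement: prefer trusted miners with free space, never use rogue nodes.
def allocate_slices(slices, miners):
    """`slices` is a list of (slice_id, size); `miners` a list of dicts
    with 'id', 'free_space' and 'trust' (rogue nodes have trust == 0)."""
    placement = {}
    for slice_id, size in slices:
        candidates = [m for m in miners
                      if m["free_space"] >= size and m["trust"] > 0]
        if not candidates:
            raise RuntimeError(f"no miner can host slice {slice_id}")
        best = max(candidates, key=lambda m: (m["trust"], m["free_space"]))
        best["free_space"] -= size
        placement[slice_id] = best["id"]
    return placement

miners = [{"id": "m1", "free_space": 100, "trust": 0.9},
          {"id": "m2", "free_space": 300, "trust": 0.7},
          {"id": "m3", "free_space": 500, "trust": 0.0}]  # flagged rogue
print(allocate_slices([("s1", 80), ("s2", 120)], miners))
# {'s1': 'm1', 's2': 'm2'} -- the rogue node m3 is never used
```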

IAGON's AI-powered Smart Grid creates simplicity out of complexity, managing decentralised storage and computational-task optimization through operational research. Built on an advanced AI with data-driven instruments, indicators and services, it is a powerful hybrid of AI and network management.

Platform Architecture

IAGON's open-source platform architecture is a layered collective of unique functionality consisting of a Server Layer, Artificial Intelligence / Machine Learning algorithms, a Blockchain Layer with Miner Nodes, and Encryption/Decryption Protocols. These interconnected capabilities provide a framework able both to offload computation efficiently onto multiple miner CPU/GPUs and to handle the execution of the learning algorithms on computing clusters hosted in both Cloud and Grid/HPC environments.

When a user uploads a file and processing request to IAGON's server, the machine-learning algorithm sends blocks of the corresponding data randomly to the miners for processing, in order to locate matching signatures. These blocks are then returned over the blockchain and validated, along with an output that the machine-learning algorithm uses to identify a node. The blocks are distributed evenly among miners using the Proof-of-Variance (PoV) algorithm, and a further crucial feature is that no miner stores any of the data on its local system. This implementation guarantees that data is processed anonymously, with each individual node effectively unknown to the others except through the machine-learning algorithm.
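
The details of Proof-of-Variance are not public; the sketch below simply shows the observable behaviour described above, namely a randomised but balanced assignment of data blocks to anonymous miner ids:

```python
# Hedged sketch of random-but-even block distribution (not the actual
# PoV algorithm): shuffle the miners, then deal blocks round-robin so
# every miner gets a near-equal share and cannot predict its blocks.
import random

def distribute_blocks(blocks, miner_ids, seed=None):
    rng = random.Random(seed)
    order = list(miner_ids)
    rng.shuffle(order)
    assignment = {m: [] for m in order}
    for i, block in enumerate(blocks):
        assignment[order[i % len(order)]].append(block)
    return assignment

print(distribute_blocks([f"blk{i}" for i in range(7)],
                        ["m1", "m2", "m3"], seed=42))
```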

Blockchain is an innovative addition to this mix that permits files to be broken down into block form for distribution among the nodes and their respective miners. In simple terms, the blockchain uses the SHA-256 cryptographic algorithm (a kind of 'signature' for the data file) and hashes (i.e. maps) each block together with its predecessor to create a link in a chain. When data is received back from an individual node, the output is matched against the hash of its corresponding block and validated against its header to determine whether the output data is valid. This provides a unique approach to distributed processing, adding a layer of integrity to the data being processed and a way to determine whether the output has been compromised. In the event that a miner has manipulated data, the returning block is rejected and the original block is sent to a different node to be reprocessed.
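
The chaining mechanism itself is standard and easy to demonstrate; this minimal sketch builds a SHA-256 hash chain and checks a returned block against its recorded header (the block structure is a simplification, not IAGON's format):

```python
# Minimal SHA-256 chain: each block's hash covers its data plus the
# previous block's hash, so tampering with any block breaks later links.
import hashlib

def block_hash(data: bytes, prev_hash: str) -> str:
    return hashlib.sha256(prev_hash.encode() + data).hexdigest()

def build_chain(blocks):
    chain, prev = [], "0" * 64  # genesis predecessor
    for data in blocks:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

chain = build_chain([b"slice-1", b"slice-2", b"slice-3"])
# Validate a block returned by a miner against its recorded header:
blk = chain[1]
assert block_hash(blk["data"], blk["prev"]) == blk["hash"]
```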

IAGON's encryption and decryption protocol ensures that data files (including their size and contents) are stored securely on the internal server and external platforms. Data is encrypted using cryptographic hash trees, fragmented using sharding techniques, copied for redundancy and distributed among several P2P miner nodes. The protocol also provides a conduit for decentralization, as any external API-enabled platform can simply integrate with IAGON and utilize its services. Thanks to this decentralisation functionality, a variety of secure SQL/NoSQL and Big Data database types (e.g. Massively Parallel Processing and Hadoop) and private/public blockchains (e.g. Ethereum, NEM and Hyperledger) can be applied.
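
As a sketch of the shard-and-commit idea described above (shard size, replication factor and tree construction are made-up parameters, not IAGON's protocol):

```python
# Illustrative sharding plus a Merkle (hash-tree) root committing to the
# shards; the root can serve as the file's tamper-evident identity.
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def shard(data: bytes, size: int = 4):
    return [data[i:i + size] for i in range(0, len(data), size)]

def merkle_root(leaves):
    """Pairwise-hash the shard hashes up to a single root commitment."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

shards = shard(b"user file contents")
replicas = {i: [s] * 2 for i, s in enumerate(shards)}  # 2x redundancy
print(merkle_root(shards))  # root recorded as the file's identity
```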

Machine Learning

IAGON's AI constantly learns from and assimilates the vast supply of performance metrics available from the Smart Computing Grid in order to create a highly organised, real-time data structure that can be optimised. To achieve real-time decision-making and optimization (i.e. optimal control), Reinforcement Learning together with Markov Decision Processes has also been adopted as part of the multi-model approach, as it is particularly well suited to dynamic, distributed computational environments.

Reinforcement learning differs from standard supervised learning in that correct input/output pairs need not be presented and sub-optimal actions need not be explicitly corrected. Instead the focus is on performance, which involves finding a balance between exploration (of new data structures) and exploitation (of current knowledge). Modelling this exploration-versus-exploitation trade-off as a Markov Decision Process (grounded in probability theory) yields a set of optimization actions that maximise performance and future rewards. In simple terms, this semi-supervised style of learning does not need to be trained solely on datasets labelled by a machine-learning engineer or data scientist to indicate which features matter for the problem at hand. Moreover, combining labelled and unlabelled data tends to improve the accuracy of the final model (reducing imposed human bias) while cutting the time and computational cost of building it.
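
The exploration/exploitation balance can be shown with the simplest possible device, an epsilon-greedy rule over per-miner value estimates (a bandit-style simplification of the full Markov Decision Process, with hypothetical names and rewards):

```python
# Epsilon-greedy sketch: mostly exploit the best-known miner, but with
# probability epsilon explore another one to keep estimates fresh.
import random

class EpsilonGreedyAllocator:
    def __init__(self, miners, epsilon=0.1):
        self.q = {m: 0.0 for m in miners}   # estimated reward per miner
        self.n = {m: 0 for m in miners}     # times each miner was chosen
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:      # explore new structure
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)      # exploit current knowledge

    def update(self, miner, reward):
        """Incremental mean update after observing task performance."""
        self.n[miner] += 1
        self.q[miner] += (reward - self.q[miner]) / self.n[miner]

alloc = EpsilonGreedyAllocator(["m1", "m2", "m3"])
for _ in range(100):
    m = alloc.choose()
    alloc.update(m, reward=random.random())  # stand-in performance signal
```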

Technically, IAGON's learning component provides a set of neural-network models alongside the possibility of creating deep architectures with configurable hyper-parameters. This approach lets the user focus on the specifics of the computational problem, tuning the neural networks (beyond the presets) according to requirements, while the framework takes care of the rest. The framework has some particularly innovative features: it can handle the pre-processing of unstructured Big Data and, at the same time, recognize and classify data of the kinds it has previously been trained on. Training, classification and pre-processing are all executed in a distributed fashion using the MapReduce paradigm (i.e. filtering and sorting), which fits naturally with a distributed storage layer, decoupling the partitioning of data over a set of distributed nodes from the processing logic. Finally, the system can leverage the underlying CPU/GPUs, supporting nodes with multiple graphics cards and automatically offloading and balancing computation across these devices.
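
A single-process stand-in for the MapReduce pattern looks like this (real deployments would run the map tasks on the nodes holding each data partition; the record fields are illustrative):

```python
# MapReduce sketch: map emits (key, value) pairs per partition, reduce
# aggregates per key across partitions -- here, mean latency of the grid.
from collections import defaultdict

def map_phase(partition):
    """Map: emit (feature, value) pairs from one node's raw records."""
    return ([("latency_sum", r["latency"]) for r in partition]
            + [("latency_cnt", 1) for _ in partition])

def reduce_phase(pairs):
    """Reduce: aggregate values per key across all partitions."""
    grouped = defaultdict(list)
    for k, v in pairs:
        grouped[k].append(v)
    return {k: sum(v) for k, v in grouped.items()}

partitions = [[{"latency": 30}, {"latency": 60}], [{"latency": 90}]]
pairs = [p for part in partitions for p in map_phase(part)]
totals = reduce_phase(pairs)
print(totals["latency_sum"] / totals["latency_cnt"])  # mean latency: 60.0
```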

Blockchain Protocol and Tangle

IAGON's advanced Blockchain Layer is a hybrid of Blockchain and Tangle technologies (optimized through AI), configured not only to serve as a secure platform for integrating with existing blockchains but also to utilize its data-mining feature to process data.

As a Blockchain interface layer, it allows data to be securely stored within open-source, public and even private blockchains used by enterprises (e.g. Hyperledger, Ethereum and NEM), and vice versa. Using machine-learning algorithms and encryption/decryption protocols, IAGON provides a secure method of storing data across platforms. Additionally, the strategic integration of Tangle helps resolve some of the issues associated with deploying Blockchain technology at large scale, including the difficulty of scaling the blockchain and of achieving consensus on the validity of blocks while new blocks continuously arrive. By applying Tangle technology, IAGON can offer an alternative solution for organizations with Big Data repositories, one that supports large-scale processing and storage-management tasks.

IAGON also leverages Blockchain technology to maximum effect by maintaining the honesty of nodes across the Smart Grid and the corresponding distributed data-mining algorithm. By design, Blockchain is inherently resistant to modification of its data: it is effectively an open, distributed ledger that records transactions between two parties efficiently and verifiably. Here, hashing each block with SHA-256 over its predecessor maintains a chained link to the ledger's historical state (in this case, the data). This allows IAGON both to manage a reward system that incentivizes miners on its platform to process data honestly and to guard against deliberate manipulation of the data output.

Using Blockchain, IAGON's machine-learning algorithm can quickly identify whether data output mined from a block is actually a valid part of that block. Within a simple Blockchain this is easily achieved by hashing the inputs together with the hash of the previous block. IAGON's ecosystem also creates multiple back-up copies of the Genesis block (the first block of the blockchain) for the data held internally within the private blockchain. The application of a Blockchain layer is a unique and particularly well-suited approach to sharing data across a decentralized network: data can be stored, processed and validated by a network of nodes, or stored and validated within an internal facility while the processing is outsourced to a decentralized network of nodes. Additionally, Blockchain allows consistency to be maintained throughout the entire data structure.
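
Tying the validation and reassignment behaviour together, a hedged sketch might look like this (the ledger layout and block structure are assumptions; it reuses the same hashing rule as the earlier chain example):

```python
# Validate a miner's returned output against the hash recorded when the
# block was dispatched; a mismatch means the block must be resent to
# another node, as described above.
import hashlib

def block_hash(data: bytes, prev_hash: str) -> str:
    return hashlib.sha256(prev_hash.encode() + data).hexdigest()

def validate_output(block, returned_data, ledger):
    """True iff the returned data still hashes to the on-chain record;
    False triggers rejecting the result and reassigning the block."""
    return block_hash(returned_data, block["prev"]) == ledger[block["id"]]

prev = "0" * 64
block = {"id": "blk7", "data": b"payload", "prev": prev}
ledger = {"blk7": block_hash(block["data"], prev)}

print(validate_output(block, b"payload", ledger))    # True  -> accept
print(validate_output(block, b"tampered", ledger))   # False -> reassign
```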

The adoption of hybrid Blockchain and Tangle technology, combining data-mining features for processing data with encryption, is a powerful synergy and delivery system with the capacity to support large-scale processing through the Miner Application and distributed storage-management tasks.
