The World of Computing on the Blockchain


Many consider 1971 the start of the digital age: the year the Intel 4004, the first commercially available microprocessor, was released. Even though an Apple iPhone 4 has almost three times the computing power of a 1985 supercomputer, the demand for computing power remains enormous, with many seeking to apply it to resource-hungry fields such as artificial intelligence and prediction modelling (1). The industry is ruled by large multinationals, with Google, Amazon, Alibaba, Salesforce, IBM, and Microsoft offering cloud computing that lets users pay on-demand for computational power without investing in the large infrastructure required to generate it.

This is a hundred-billion-dollar industry (2). But with centralization comes risk. Companies using these services are completely exposed to a single point of failure. They may also have privacy concerns, since the organizations offering the cloud computing services retain the most control and access.

If that single point of failure were to go down, it could result in billions in economic losses across a variety of industries. Since the introduction of the Bitcoin network around a decade ago, many have come to believe decentralized technology has the potential to disrupt some of the largest industries. In this post, we examine some of the largest players in the decentralized computing space and ask whether they can disrupt yet another largely centralized industry.

Decentralized Computing:

When it comes to decentralized computing, Ethereum has earned the first mention. With a vision of becoming a decentralized world computer, Ethereum started with an initial coin offering in 2014. The initial ICO price was 2000 Ether (the cryptocurrency used to operate on the Ethereum network) per bitcoin contributed (3). The ICO was met with scepticism in the cryptocurrency community, to say the least, as can be seen from the Bitcointalk graphic below. Today, Ether is the second largest cryptocurrency by market cap and is worth several hundred dollars.

The client-server model rules computing today, and Ethereum's world computer vision proposed a decentralized alternative. At its core, Ethereum is a Turing-complete virtual machine (VM).

So how does the process work when it comes to computing? The Ethereum blockchain is programmable and can carry out complex computations. Decentralized applications (DApps) can be developed on the blockchain, and smart contracts can be coded.

What is a smart contract? The term was coined by Nick Szabo in the 1990s and describes a piece of code that executes when certain criteria are met. You can think of a smart contract as an agent of whoever deployed it, acting exactly as coded. For example: once Jack submits details x, y, and z that meet requirements a, b, and c, release a payment of €50 from the deployer's account to Jack's account.
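
As a rough illustration, the conditional-payment logic above might look something like the following Python sketch. All names and checks here are hypothetical; a real contract would be deployed on-chain rather than run as an ordinary script.

```python
# Minimal sketch of the conditional-payment logic a smart contract encodes.
# Hypothetical illustration; a real contract runs on-chain, not as a script.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def transfer(self, other, amount):
        assert self.balance >= amount, "insufficient funds"
        self.balance -= amount
        other.balance += amount

def make_contract(checks, amount, payer, payee):
    """Return an 'agent' that releases payment only when every criterion is met."""
    def submit(details):
        # Requirements a, b, c are checks run against submitted details x, y, z.
        if all(check(details) for check in checks):
            payer.transfer(payee, amount)  # release the payment
            return "payment released"
        return "criteria not met"
    return submit

# Jack submits details; the EUR 50 moves only if all checks pass.
deployer, jack = Account(100), Account(0)
contract = make_contract([lambda d: d.get("x") == "valid"], 50, deployer, jack)
print(contract({"x": "valid"}))  # -> payment released
```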

The Ethereum virtual machine (EVM) is isolated from the rest of the blockchain and is used to execute the contracts within the network. There are two types of accounts on Ethereum, externally owned accounts which are essentially users, and contract accounts which represent smart contracts. Ether is the cryptocurrency of the network. When Ether is sent to a contract account, the code within the smart contract is executed. Gas is the execution fee for a transaction on the network and is priced in a small amount of Ether. The result of every transaction gets stored on the Ethereum blockchain.
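
For a sense of scale, the fee for a transaction is the gas it consumes multiplied by the gas price the sender offers. The numbers below are illustrative examples, not current network values:

```python
# Illustrative gas arithmetic; the numbers are examples, not live network values.
gas_used = 21_000     # gas consumed by a simple Ether transfer
gas_price_gwei = 20   # price offered per unit of gas, in gwei (10^-9 Ether)

fee_eth = gas_used * gas_price_gwei / 10**9
print(f"fee = {fee_eth} ETH")  # -> fee = 0.00042 ETH
```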

What is Ethereum

So what is the key difference between the Bitcoin and Ethereum networks when it comes to computing? It lies in the programming language. Let's say I want to programme a simple smart contract into the blockchain that says Jack pays Jill x amount if event y happens. Bitcoin contracts are written in Script, a deliberately limited stack-based language (higher-level languages such as Ivy compile down to it), which makes this type of logic extremely difficult to express and complex transactions nearly impossible to describe. Ethereum uses the Turing-complete programming language Solidity, in which the transaction described above takes just a few lines of code.

However, there is a problem at the core of computing on Ethereum. Although the vision is in place, decentralized technology is still in its early years, and decentralization involves trade-offs with safety and scalability. Ethereum developer Vlad Zamfir, whom many consider the lead behind Ethereum's Proof-of-Stake (PoS) Casper protocol, summarised the issue in a tweet posted in March 2017.

At the crux of the computing issue is the fact that Ethereum can currently only process around 15 transactions per second (tps), which cannot keep up with any significant demand for an application. Compare this to Visa's 45,000 tps and we get an idea of how minuscule this is (4). Any decentralized applications (DApps) built on top of Ethereum have limited computational power due to the limitations of the Ethereum VM.

Taking these throughput limitations into account, what if computing power itself could be decentralized? What if there were a global marketplace of computing power for anyone to access? Golem aims to create this marketplace, where you are monetarily incentivised to rent out your CPU and GPU power. The project is developed by Imapp, the team that also developed OmiseGo and which specialises in distributed computing.

Golem proclaims itself a distributed supercomputer, tapping into unused computing power across its decentralized network to speed up computing tasks. Golem's native token, GNT, is the cryptocurrency used on the network. If you have a computing task that needs to be completed and you use the Golem network, it will find the computers suitable to complete the task, and they will be rewarded in GNT.

Software developers can also build DApps on top of the Golem protocol. The different participants and their incentives to participate are illustrated in the table below.

Golem Ecosystem

Computational tasks are partitioned and spread across computers on the network, similar to data sharding, enabling faster computation. This matters because Golem is built on the Ethereum blockchain: if it simply inherited Ethereum's throughput limitations, the goal of fast computation would be defeated.
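
A hypothetical sketch of how such partitioning might look: a large job is split into independent subtasks, farmed out to workers in parallel, and the results are recombined. The function names are ours, not Golem's API:

```python
# Hypothetical sketch of task partitioning; not Golem's actual API.
from concurrent.futures import ProcessPoolExecutor

def render_frame(frame_number):
    """Stand-in for an expensive subtask, e.g. rendering one frame of a scene."""
    return f"frame-{frame_number}-rendered"

def run_partitioned(task_frames, max_providers=4):
    # Each frame is an independent subtask, so providers can work in parallel,
    # much as Golem spreads subtasks across nodes on its network.
    with ProcessPoolExecutor(max_workers=max_providers) as providers:
        results = list(providers.map(render_frame, task_frames))
    return results  # recombine the subtask results into the finished job

if __name__ == "__main__":
    print(run_partitioned(range(8)))
```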

The Golem network is also hardware agnostic, meaning computers don't require any special hardware to join. This enables a free market and aims to make renting computational power on the network more affordable than with tech giants such as Google and Amazon. Prices will be determined by market supply and demand.

The first application of Golem is CGI rendering with Blender. Computers renting out their computational power will be paid for completing rendering tasks. In the future, Golem plans to expand to more intensive computational tasks such as machine learning, natural language processing, and artificial intelligence.

Software developers can also rent out their software on the system. The whole ecosystem can be seen in the diagram below.

Golem Ecosystem

The Ethereum blockchain comes in as a payment settlement layer and for task initiation. Data is exchanged via a number of peer-to-peer protocols, but the contract accounts on Ethereum mentioned above are used to initiate tasks with smart contracts.

Golem does open the door to bad actors. When tasks are partitioned and spread across the network, how can we be sure that providers and software developers will execute them correctly? As with many distributed technologies, the solution is probabilistic: network participants are monetarily incentivised to act for the good of the network. These incentives are not implemented yet but are proposed to take the form of a reputation system in which GNT is staked, with bad actors losing the deposited GNT along with the ability to accept new tasks.
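
A minimal sketch of how such a deposit-and-slash reputation scheme could work, assuming staked GNT and some verifier that flags bad results. All names are hypothetical; Golem's eventual design may differ:

```python
# Hypothetical deposit-and-slash reputation scheme; not Golem's implementation.
class Provider:
    def __init__(self, name, stake_gnt):
        self.name = name
        self.stake_gnt = stake_gnt   # GNT deposited as collateral
        self.banned = False          # barred from accepting new tasks

def settle_task(provider, result_is_valid, reward_gnt, slash_fraction=1.0):
    """Reward honest work; slash the deposit and ban the node otherwise."""
    if provider.banned:
        raise ValueError(f"{provider.name} cannot accept new tasks")
    if result_is_valid:
        provider.stake_gnt += reward_gnt
    else:
        provider.stake_gnt -= provider.stake_gnt * slash_fraction
        provider.banned = True

node = Provider("worker-1", stake_gnt=1000)
settle_task(node, result_is_valid=False, reward_gnt=10)
print(node.stake_gnt, node.banned)  # -> 0.0 True
```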

Currently, the Golem network is still at a very early stage, with many of the proposed applications not yet implemented. The project has been ongoing for about two years and will need to develop a market of both requestors and providers to really deliver value and affordability. Golem is also exposed to a lot of risks. Any risks and scalability issues with the Ethereum blockchain are inherited by Golem, as GNT is currently an ERC20 token. The peer-to-peer networks Golem uses for data sharing are also a nascent technology, exposing Golem to whatever challenges they face as they mature. And one of the key benefits of decentralization, privacy, is not a feature of Golem's computational subtasks. To take on the centralized superpowers of the present digital landscape, every benefit of decentralization will need to be leveraged to attract users. Privacy is a feature Golem plans to address in the future.

Another decentralized cloud computing marketplace is iExec. iExec focuses on providing computational power for DApps, looking to leverage computation carried out off the Ethereum blockchain.

At the moment, DApps mostly use the Ethereum blockchain for ICO sales and simple transactions, but as DApps become more fully implemented, demand and computational requirements will increase.

While Golem implements a technology similar to data sharding to make its computational marketplace feasible, iExec uses off-chain computing, which takes computations off the main blockchain into a separate protocol to achieve higher throughput.

Currently, DApps on Ethereum are only capable of computational tasks that involve simple transactional processing. iExec looks to leverage off-chain computation to drastically increase these capabilities.
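
One common pattern for off-chain computation is sketched below under our own assumptions; this is a generic illustration, not iExec's protocol. The heavy work runs off-chain, and only a compact commitment to the result, such as its hash, is settled on-chain:

```python
# Generic off-chain computation pattern; an illustration, not iExec's protocol.
import hashlib

def heavy_computation(data):
    """Stand-in for work too expensive to run on-chain."""
    return sorted(data)

def commit(result):
    """Compact on-chain commitment: a hash instead of the full result."""
    return hashlib.sha256(repr(result).encode()).hexdigest()

# Off-chain: a provider does the heavy lifting...
result = heavy_computation([3, 1, 2])
# On-chain: ...only the small commitment is recorded for later verification.
on_chain_record = commit(result)

# A verifier re-running (or spot-checking) the task can compare hashes.
assert commit(heavy_computation([3, 1, 2])) == on_chain_record
print(on_chain_record[:16], "...")
```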

Similar to Golem, iExec also envisions a decentralized marketplace where free market economics will lead to affordable pricing lower than its centralized counterparts. A reputation system will also be in place to incentivise those supplying computational power to be reliable. iExec’s native token, RLC, will be the cryptocurrency for the ecosystem.

The ecosystem will focus on three areas. First, there will be a DApp store, with the Google Play Store and Apple's App Store being the comparable centralized versions. The second part of the technology is the marketplace for computational resources, and the third is a marketplace for data.

iExec will use a consensus mechanism it calls proof-of-contribution to determine who provided which computational resources and reward them accordingly. As an experimental consensus mechanism, this may represent one of iExec's biggest challenges.

Other challenges include attracting enough users for the marketplace to be affordable and low-cost, and executing off-chain computation securely. Like the consensus mechanism, off-chain computation is only in its beginning stages and its efficacy remains largely unproven.

With these decentralized computational marketplaces being built on top of the Ethereum blockchain, does Bitcoin have any role to play? Similar to off-chain computation, sidechains seek to overcome the limitations of the main blockchain by setting up a separate protocol that involves certain trade-offs. Built as a sidechain to Bitcoin, Rootstock is a Turing-complete VM for Bitcoin providing faster transactions and greater scalability.
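
Sidechains typically rest on a two-way peg: coins are locked on the main chain while an equivalent balance is unlocked on the sidechain, and the process reverses to move value back. Here is a simplified sketch of that bookkeeping, under our own assumptions rather than Rootstock's actual peg design:

```python
# Simplified two-way-peg bookkeeping; not Rootstock's actual peg design.
class TwoWayPeg:
    def __init__(self):
        self.locked_btc = 0.0        # BTC held on the main chain
        self.sidechain_supply = 0.0  # equivalent tokens live on the sidechain

    def peg_in(self, amount):
        """Lock BTC on the main chain, release equivalent sidechain tokens."""
        self.locked_btc += amount
        self.sidechain_supply += amount

    def peg_out(self, amount):
        """Burn sidechain tokens, unlock the corresponding BTC."""
        assert self.sidechain_supply >= amount, "nothing to redeem"
        self.sidechain_supply -= amount
        self.locked_btc -= amount

peg = TwoWayPeg()
peg.peg_in(1.5)   # 1.5 BTC locked -> 1.5 sidechain tokens usable in contracts
peg.peg_out(0.5)  # 0.5 tokens redeemed back into BTC
print(peg.locked_btc, peg.sidechain_supply)  # -> 1.0 1.0
```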

Like iExec and Golem, Rootstock is experimenting with a new and unproven technology in sidechains, so it carries the risks inherent in that. Many consider the Bitcoin network more secure than Ethereum, which can be seen as one advantage for Rootstock.

The Rootstock team's focus is clearly on security. While many projects publicise increased throughput, Rootstock has security as its number one priority. In terms of scalability, Rootstock aims to reach the level of PayPal, at around 100 tps. This level of scalability could transform the Bitcoin network from a payment system into one where smart contracts and DApps can be fully deployed.

Sidechains like the one applied by Rootstock are also being leveraged by Lisk. There, sidechains are used to make building DApps more attractive: each sidechain can have the dynamic, customisable features suited specifically to your DApp. Each sidechain is hosted on the main Lisk blockchain.

By utilising sidechain technology, DApps can access increased computing power and greater throughput. In the Lisk ecosystem, a sidechain can be thought of as your DApp's own personal blockchain, built to be perfectly optimised for your DApp and its requirements.

Lisk uses delegated PoS (dPoS) as its consensus mechanism, where token holders vote for block producers who are responsible for validating transactions. dPoS is a largely unproven consensus system, and there are many questions over its effective operation in Lisk, with some asking whether Lisk essentially picks and funds the candidates that become block producers.

One advantage of Lisk is that JavaScript can be used to program on it, which increases the number of developers who can build upon the platform. This is one attractive aspect when compared with Ethereum, where developers have to learn Solidity specifically to build on the EVM.

However, the Lisk technology has significant disadvantages. Firstly, it does not protect against non-deterministic behaviour, meaning an algorithm can produce different outputs on different runs for a given input. Ethereum apps, by contrast, have no possibility of generating non-deterministic behaviour.

Lisk also cannot prevent infinite loops, pieces of code that lack an exit and repeat continuously. It lacks the ability to measure total computation or memory consumption, and it cannot prevent memory growth either.
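
Gas metering is how Ethereum sidesteps exactly this problem: every step of execution costs gas, so a loop with no exit simply runs out of fuel and halts. Below is a toy interpreter-level sketch of the idea, our own illustration rather than the EVM's actual accounting:

```python
# Toy gas metering; illustrates the idea, not the EVM's actual cost accounting.
class OutOfGas(Exception):
    pass

def metered_run(steps, gas_limit, cost_per_step=1):
    """Execute steps one at a time, charging gas; halt when the tank is empty."""
    gas = gas_limit
    for step in steps:
        if gas < cost_per_step:
            raise OutOfGas("execution halted: gas exhausted")
        gas -= cost_per_step
        step()

def infinite_loop():
    while True:
        yield lambda: None  # an endless stream of do-nothing steps

try:
    metered_run(infinite_loop(), gas_limit=10_000)
except OutOfGas as err:
    print(err)  # the 'infinite' loop stops after 10,000 metered steps
```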

In a competitive environment where revolutionary teams are working morning to night on innovative solutions to the problems decentralized technology faces, Lisk's missing features make it look like it is taking a step backward instead of forward.

Computing with a permissioned system:

What about private blockchains and their role in computing? Some would argue that these are not blockchains at all, as a blockchain where nodes have to gain permission completely defeats the purpose of decentralization.

They may be better described as distributed ledger technology (DLT), a system where multiple nodes keep a copy of the ledger, as opposed to a single entity as in conventional data storage. Two DLT systems looking to make an impact on the computing space are IBM's Hyperledger Fabric (Fabric) and R3's Corda. Fabric aims to be a generic DLT on which smart contracts can be applied, whereas Corda has a specific focus on the financial services industry.

Frankfurt School Blockchain Center

Private blockchains have a completely different vision when it comes to computing applications. The system architecture is designed to prioritize throughput over decentralization.

How these systems are designed to reach consensus is at the essence of the decentralization question. Fabric has three types of nodes with different roles: clients, peers, and orderers. This is already vastly different from a public blockchain, where all nodes have equal tasks. Clients receive messages from the end user and create transactions; orderers act as the communication channel between clients and peers; and peers are responsible for maintaining the ledger. A subtype of peer known as an endorser is responsible for ensuring transactions are valid. The consensus algorithm applied is practical Byzantine fault tolerance (PBFT). Byzantine fault tolerance is the ability of a distributed computing system such as a DLT to function effectively and establish consensus despite some malicious participants.
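
The classic PBFT result is that a network of n nodes tolerates at most f Byzantine nodes when n ≥ 3f + 1, with quorums of 2f + 1 votes. The arithmetic, for concreteness:

```python
# Classic PBFT fault-tolerance arithmetic: n >= 3f + 1.
def max_faulty(n):
    """Largest number of Byzantine nodes an n-node PBFT network tolerates."""
    return (n - 1) // 3

def quorum(n):
    """Votes needed so any two quorums overlap in at least one honest node."""
    return 2 * max_faulty(n) + 1

for n in (4, 7, 10):
    print(f"n={n}: tolerates f={max_faulty(n)}, quorum={quorum(n)}")
# n=4: tolerates f=1, quorum=3
# n=7: tolerates f=2, quorum=5
# n=10: tolerates f=3, quorum=7
```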

Corda uses smart contracts in its consensus system: the contracts check transactions for validity, in the form of a signature, and for uniqueness (no two transactions can ever be exactly the same, as they will have different timestamps, amounts, senders, and so on). Within Fabric, smart contracts are known as chaincode and can be coded in Go or Java. Smart contracts on Corda can be coded in Kotlin or Java and can also contain legal prose, which roots the code in relevant legal language and makes it more applicable to the highly regulated environment of financial services.
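
A bare-bones illustration of those two checks, in our own hypothetical form rather than Corda's: validity as a signature check, and uniqueness as a hash over the transaction's distinguishing fields.

```python
# Hypothetical sketch of validity and uniqueness checks; not Corda's code.
import hashlib, hmac

SECRET = b"signer-key"  # stand-in for a real cryptographic signing key

def sign(payload):
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def tx_id(sender, amount, timestamp):
    """Differing senders, amounts, or timestamps give differing ids."""
    fields = f"{sender}|{amount}|{timestamp}".encode()
    return hashlib.sha256(fields).hexdigest()

seen = set()

def accept(sender, amount, timestamp, signature):
    payload = f"{sender}|{amount}|{timestamp}".encode()
    # Validity: the transaction must carry a correct signature.
    assert hmac.compare_digest(signature, sign(payload)), "invalid signature"
    # Uniqueness: an identical transaction can never be accepted twice.
    uid = tx_id(sender, amount, timestamp)
    assert uid not in seen, "duplicate transaction"
    seen.add(uid)
    return uid

payload = b"alice|50|2018-06-01T12:00"
print(accept("alice", 50, "2018-06-01T12:00", sign(payload))[:16])
```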

Fabric has designed its system architecture to be applicable across a number of industries, whereas Corda is focussed squarely on financial services. Similar to the way digital tokens can be created on the Ethereum network by deploying smart contracts following certain standards and rules, it is also possible to create a digital token on Fabric by deploying chaincode in a certain way; this is not possible on Corda. There has been speculation that Corda will be integrated with Fabric going forward. Corda has been noted for a more straightforward, seamless user experience thanks to its single-industry focus.

The computing case for Fabric and Corda essentially comes down to whether entities consider these DLTs attractive places to deploy their smart contracts. Smart contracts can be useful in many circumstances, and the scalability issues of the public Ethereum blockchain are circumvented by having permissioned nodes. Many would argue that this involves an unacceptable trade-off with security and completely defeats the purpose of decentralization. There are points to this argument, but Fabric and Corda have been adopted by customers, which shows there are cases where they provide value. As for security, if the systems are flawed, time will reveal their weaknesses.

Computing with a Consensus Mechanism System:

We have seen that different blockchains and DLTs are designed with varying focuses and values: some optimise for speed, while others are designed for maximum security and decentralization. Interoperability between blockchains looks to be an important area going forward, but a question remains: if individual blockchains have key scalability limitations, how will a network of blockchains function any better?

Cosmos is a decentralized network of parallel blockchains. The network is organized into hubs and zones: the zones plug into a central hub, and each zone maintains its own governance.

Cosmos Zones & Hubs

Cosmos is powered by Tendermint, middleware designed to provide the networking and consensus layers of a blockchain system. Like Fabric, Tendermint uses a BFT consensus algorithm. The simplified architecture of a blockchain system consists of a networking layer of peer-to-peer nodes, a consensus layer, and an application layer.

In terms of creating your own applications, Tendermint makes the process far easier and more attractive to developers. To create an Ethereum DApp, developers have to both learn Solidity and conform to the standards of the EVM. Tendermint Core enables developers to build their applications as any sort of deterministic state machine (i.e. they do not have to conform to the standards of the EVM) and run them on top of Tendermint Core, which handles both the networking and consensus layers. Tendermint Core interacts with your application through the application blockchain interface (ABCI).
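
To make the division of labour concrete, here is a toy deterministic state machine shaped after ABCI's core methods (CheckTx, DeliverTx, Commit). The counter logic and class names are our own illustration, not real Tendermint bindings:

```python
# Toy state machine shaped after ABCI's CheckTx/DeliverTx/Commit methods.
# Illustration only; a real app would use actual Tendermint/ABCI bindings.
import hashlib

class CounterApp:
    """Deterministic app: every honest node reaches the same state."""
    def __init__(self):
        self.committed = 0
        self.pending = 0

    def check_tx(self, tx):
        # Mempool admission: accept only the next expected integer.
        return tx == self.committed + self.pending + 1

    def deliver_tx(self, tx):
        # Applied in consensus order; same inputs -> same state on every node.
        if tx != self.committed + self.pending + 1:
            return False
        self.pending += 1
        return True

    def commit(self):
        # Finalize the block and return an app-state hash for the block header.
        self.committed += self.pending
        self.pending = 0
        return hashlib.sha256(str(self.committed).encode()).hexdigest()

app = CounterApp()
for tx in (1, 2, 3):
    assert app.check_tx(tx) and app.deliver_tx(tx)
print("state hash:", app.commit()[:16])
```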

Tendermint has evolved to a point where it can be used as a general consensus engine for the state machines of other blockchains. You can take state machines built on top of other consensus algorithms and place them on top of Tendermint Core. The EVM on top of Tendermint Core, known as Ethermint, is one example.

There are many advantages to Tendermint in terms of computing. It can be used for both private and public blockchains: a public chain would use PoS, while a private system would use permissioned nodes. It has instant finality, with nodes preferring consistency over availability, which means the chain will never fork. Because it separates the consensus and networking layers of the blockchain from the application layer, it enables developers to create DApps in any programming language. Due to this separation, it is important to note that the application must be deterministic, meaning all nodes calculate the exact same next state given a set of inputs or transactions.

The main difference between standard PoS and Tendermint's PoS is that there are a number of ways block producers' committed stakes can get slashed, further incentivising good behaviour and the effective operation of the network. Nodes that forge fake transactions or do not participate in building new blocks have their stakes slashed. This also addresses the nothing-at-stake problem, a key criticism of the standard PoS consensus mechanism: validators in traditional PoS are incentivised to build on all competing chains, as they are rewarded for validating and run no risk of losing anything by doing so.
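
A compact sketch of the double-signing case, the canonical nothing-at-stake offence: if evidence shows a validator signed two different blocks at the same height, its bonded stake is cut. This is our own simplification of how such evidence handling works, not Tendermint's code:

```python
# Simplified double-sign slashing; our illustration, not Tendermint's code.
bonded = {"validator-1": 5000}   # stake bonded by each validator
signatures = {}                  # (validator, height) -> block hash signed

def record_vote(validator, height, block_hash, slash_fraction=0.05):
    key = (validator, height)
    if key in signatures and signatures[key] != block_hash:
        # Evidence of equivocation: two different blocks at one height.
        penalty = bonded[validator] * slash_fraction
        bonded[validator] -= penalty
        return f"{validator} slashed {penalty} at height {height}"
    signatures[key] = block_hash
    return "vote recorded"

print(record_vote("validator-1", 10, "0xaaa"))  # honest vote
print(record_vote("validator-1", 10, "0xbbb"))  # conflicting vote -> slashed
print(bonded["validator-1"])                    # -> 4750.0
```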

The main drawback is the security of the network, with Tendermint only able to tolerate up to a third of validators being bad actors. Many blockchain professionals would not consider this a secure network. PoS is also still an early consensus mechanism and has yet to be proven at scale. However, the slashing of stakes is likely to improve it as a consensus algorithm, and similar incentives are planned for Ethereum's Casper protocol when it is implemented.

Decentralized Computing Going Forward:

Cloud computing is another industry dominated by the centralized technology superpowers. It is a hundred-billion-dollar industry providing valuable computing resources to companies, saving them large infrastructure investments. Decentralized cloud computing proposes a model whereby users can rent computing power from other users across the network.

To compete with the centralized superpowers, the innovators will need to find a way to fully leverage the benefits of decentralized technology. Affordability could be a strong draw for decentralized computing, but it will require mass adoption, which has not yet been achieved.

Computation will have to far surpass the limitations of the EVM. Achieving this has so far required trade-offs with security and decentralization. With the goal being to leverage the benefits of decentralization, the technology will need to keep being refined until these trade-offs become verifiably minuscule or non-existent.
