In the previous article we saw how blockchain technology impacts economic and social environments. We implicitly equated blockchain with the Nakamoto Consensus, but not all blockchains follow this model. In these last two articles we'll look at the problems of the Nakamoto Consensus and the main solutions, including those that revise its consensus rules. The first solutions we review are those adopted or proposed within the Nakamoto Consensus context.
We won't go deep into the technical details of the various solutions, but we'll leave readers the chance to explore them further.
NAKAMOTO CONSENSUS PROBLEMS
In the blockchain community, depending on who you talk to, you'll hear a different view of what the Nakamoto Consensus problems are. Still, we can identify the most debated ones, those with the broadest agreement: scalability, centralisation, environmental sustainability and fungibility.
Scalability is the capacity of a system to grow smoothly when needed. Time and again, a largely unknown web service has become popular overnight or in a few days and proved unable to sustain the traffic generated by its new users. The same applies to blockchain systems, which can become congested if incoming transactions exceed what the network can handle. At that point a blockchain reaches its limit and becomes less useful, ending up losing users and, with them, its economic value.
The problem of scalability imposed itself early in the blockchain community, starting with Bitcoin. In October 2010, almost two years after the launch of Bitcoin, Satoshi Nakamoto put a 1 MB limit on the size of a Bitcoin block. Recall that a block is a group of confirmed transactions.
Introduced as a defence against network congestion caused by flooding the network with oversized blocks (a DoS or DDoS attack vector), this limit, combined with Bitcoin's long-standing target of one confirmed block roughly every 10 minutes, caps the number of transactions per second that Bitcoin can process. Today that cap gives Bitcoin a theoretical maximum of about 7 transactions per second, which in practice translates to about 3 transactions per second. VisaNet, the system Visa uses to handle its transactions, can process up to 56,000 transactions per second. This problem affects all blockchains based on the Nakamoto Consensus, though some have implemented measures to reduce it.
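The throughput cap follows directly from the block size limit, the average transaction size and the block interval. A back-of-the-envelope sketch, where the 250-byte average transaction size is an assumption (real Bitcoin transactions vary widely):

```python
def max_tps(block_size_bytes: int, avg_tx_bytes: int, block_interval_s: int) -> float:
    """Theoretical transactions per second given the block limits."""
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_s

# Bitcoin: 1 MB blocks, one block roughly every 10 minutes (600 s).
print(round(max_tps(1_000_000, 250, 600), 1))  # → 6.7
```

With these assumptions the ceiling lands at roughly 7 transactions per second, matching the figure usually quoted for Bitcoin.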
The problem has been addressed both off-chain, meaning with solutions outside the blockchain, and on-chain, by directly changing the rules of the blockchain. The off-chain approach consists of having servers or nodes that first confirm transactions without registering them on the blockchain, and then register those transactions, or aggregates of them, later. This way the blockchain network avoids congestion and transactions get confirmed in a reasonable amount of time. On-chain solutions include reducing the time needed to confirm a block and increasing the block size. While off-chain approaches require what is called a second-level solution, on-chain ones are first-level solutions because they affect the protocol directly. Second-level solutions can then be layered on top of the latter to further increase scalability. An intermediate approach is to rely on special nodes, considered more "reliable", from inside the blockchain network.
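The off-chain idea can be illustrated with a toy two-party channel: many transfers are applied off-chain and only the final balances are settled on the blockchain. All names here are illustrative, not a real protocol:

```python
def settle_off_chain(deposits: dict, transfers: list) -> dict:
    """Apply (sender, receiver, amount) transfers off-chain and return
    the single set of final balances to register on-chain."""
    balances = dict(deposits)
    for sender, receiver, amount in transfers:
        assert balances[sender] >= amount, "insufficient channel balance"
        balances[sender] -= amount
        balances[receiver] += amount
    return balances

# Four off-chain payments, but only one on-chain settlement:
final = settle_off_chain({"alice": 10, "bob": 10},
                         [("alice", "bob", 3), ("bob", "alice", 1),
                          ("alice", "bob", 2), ("alice", "bob", 1)])
print(final)  # → {'alice': 5, 'bob': 15}
```

The blockchain only ever sees the opening deposits and the closing balances, so the per-payment load on the network drops to almost nothing.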
The Lightning Network, with a major implementation by Blockstream, is an example of an off-chain solution to low scalability. In this system special nodes validate and register transactions off-chain and then register aggregated transactions on the blockchain. Examples of on-chain solutions based on reduced block confirmation times are Litecoin and Ethereum, the former confirming a block every 2.5 minutes and the latter roughly every 12 seconds. Blockchains with a dynamic limit on block size include Bitcoin Unlimited, Ethereum itself and Monero.
Blockchain technology is used to design decentralised systems, but many factors can increase their centralisation over time. Proof-of-work mining, the process Bitcoin and others use to create currency and confirm blocks, becomes harder as use of the currency grows. The consequence is that fewer and fewer nodes can afford to remain operational.
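At its core, proof-of-work mining is a brute-force search: repeatedly hash the block data with a changing nonce until the result falls below a difficulty target. A minimal sketch (real Bitcoin uses double SHA-256 over an 80-byte header; this is a simplified illustration):

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 hash of (block_data + nonce) has at
    least `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"block with confirmed transactions", 16)
# On average this takes ~2**16 hash attempts; raising difficulty_bits
# by one doubles the expected work, which is why mining hardware races.
```

Raising the difficulty keeps the block interval roughly constant as more hash power joins, but it is also what prices small nodes out of mining, driving the centralisation trend described above.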
The main danger is that the security of the network gets compromised. In a proof-of-work blockchain, a miner or pool of miners controlling more than 50% of the computational power can successfully attack the system, for example by rewriting recent history and double-spending coins. This nearly happened with Bitcoin in 2014, when GHash.io, a UK-based mining pool, reached 50% of the total computational power and had to reduce its share in order to maintain trust in the system. Since then, computational power has been shifting to China, where electricity costs are low, creating a new centralisation threat, made worse by ambiguous Chinese policies on cryptocurrencies.
The centralisation trend affects all blockchains based on the Nakamoto Consensus, and thus on proof-of-work. Despite this, it's possible to mitigate its effects. The most commonly attempted countermeasure is to use mining algorithms that are bound not by raw processing power (cpu-hard) but by memory (memory-hard). While upgrading processors requires big economic investments, the same does not apply to commodity memory. Furthermore, ASICs, the specialised processors currently used for cpu-hard mining, can't operate above 3 MB of memory. This would make current mining hardware non-competitive and would also allow miners from countries with higher energy costs to compete.
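The memory-hard idea can be sketched with a toy function in the spirit of scrypt's ROMix: first fill a buffer with chained hashes, then perform data-dependent lookups over it, so the whole buffer must stay in fast memory for the computation to be efficient. Parameters and names here are illustrative, not a real mining algorithm:

```python
import hashlib

def memory_hard_hash(data: bytes, n_blocks: int = 1024) -> bytes:
    # Phase 1: fill memory with a hash chain.
    block = hashlib.sha256(data).digest()
    memory = []
    for _ in range(n_blocks):
        block = hashlib.sha256(block).digest()
        memory.append(block)
    # Phase 2: pseudo-random, data-dependent reads over the buffer.
    # The next index depends on the current state, so the reads cannot
    # be predicted and the buffer cannot be recomputed lazily for cheap.
    state = block
    for _ in range(n_blocks):
        index = int.from_bytes(state[:4], "big") % n_blocks
        state = hashlib.sha256(state + memory[index]).digest()
    return state
```

Because the bottleneck becomes memory bandwidth rather than hashing circuitry, a specialised chip gains far less over commodity hardware than it does with a purely cpu-hard function.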
The problem of environmental sustainability is directly connected to that of centralisation. Cpu-hard mining algorithms require a large amount of electricity, energy spent on work that is hardly useful in itself. In 2015 the network used more power for a single transaction than an average American household uses in a day, and forecasts suggested the network might consume as much energy as Denmark by 2020.
So, even if this is not perceived as a technical problem, it is still an image problem. The proposed solution is the same as for centralisation: the use of mining algorithms that are not cpu-hard.
Blockchain transparency is one of its strengths. It allows anyone to verify the history of transactions and any other data that gets registered. Despite this, it's a double-edged sword when this information is used to undermine the security of users. Even though blockchain identities are pseudonymous, an attacker can often uncover the real identity of a user who committed transactions, for example by searching online for the identity an address is associated with, or through other means of investigation, both online and offline.
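One well-known chain-analysis technique is the common-input-ownership heuristic: addresses that co-sign inputs of the same transaction are assumed to belong to one wallet, so pseudonymous addresses cluster together. A toy sketch using union-find (addresses and transactions here are made up for illustration):

```python
def cluster_addresses(transactions: list) -> list:
    """Each transaction is a list of input addresses presumed to share
    an owner; return the resulting clusters of linked addresses."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs in transactions:
        for addr in inputs:
            find(addr)                  # register every address
        for addr in inputs[1:]:
            union(inputs[0], addr)      # link co-spent inputs

    clusters = {}
    for a in parent:
        clusters.setdefault(find(a), set()).add(a)
    return list(clusters.values())

txs = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]
clusters = cluster_addresses(txs)
# → two clusters: {addr1, addr2, addr3} and {addr4}
```

Once a cluster is linked to one real-world identity, say through an exchange account, every address in the cluster is deanonymised at once, which is why transparency cuts both ways.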
Ideally, all the coins in a blockchain should have the same trade value. This property is called fungibility, and while cash has always ensured it, the current structure of the blockchain undermines it. Whether this is a real problem is a matter of debate, but greater fungibility would certainly broaden the use of blockchains among people subjected to rights violations and restrictions of individual freedom by governments or other non-democratic entities.
This problem is not easy to solve, but ways to mitigate it have existed for years and, more recently, ways to eliminate it. Fungibility can be guaranteed with off-chain solutions, such as mixers (tools that mix coins to make their owners untraceable) like TumbleBit or ZeroCoin, or with on-chain solutions that, thanks to new cryptographic techniques, guarantee anonymous but verifiable transactions, like Monero, ZCash and Mimblewimble.
In this first article on blockchain technology innovations we've seen the main problems that affect this technology and the main solutions used in blockchains based on the Nakamoto Consensus. In the next insight article we'll dig deeper into the innovative solutions proposed by those who see the root of the problem in the Nakamoto Consensus itself.
To become part of our tech community and get all news on innovation, technology and much more, subscribe to our newsletter!