The Many Faces of Blockchain Scalability

By JaiChai

Maybe you’ve heard the term “scalability” and that a lack of it is the cause of slow and expensive transactions?

Or maybe you think that “scalability” simply means the ability of a cryptocurrency or platform to be faster or to have cheaper transactions than another?

Even further, maybe you’ve been hearing about those platforms with “Super Scalability” — enough to change the way we’ll think about the internet?

For the most part, those assumptions are true, but…

Treating “scalability” (or the lack of it) as the sole cause of slower transactions is a common but misleading belief among the majority of cryptocurrency users.

It’s a gross over-generalization.

This broad-brush approach leaves out significant parts of the picture needed to understand what’s really happening.

Most of all, it wrongly implies that a “scalability” standard can be uniformly applied to all platforms.

This article will shed some light on those issues.

Don’t worry if you’re not an algorithm guru or blockchain coder extraordinaire.

Prior knowledge of complicated consensus mechanisms is not necessary.

And, as best I can, I will explain things in plain language, simple enough for almost anyone to understand.

So, with that in mind, let’s charge on.

The most common measure of scalability is the number of transactions per second (tps): raw speed, the amount of achievable throughput.

Bitcoin and Ethereum are tortoises compared to other platforms.

Several platforms, and even credit card networks, can process many more transactions per second, in some cases achieving thousands of tps.

For example, IBM’s Hyperledger Fabric handles an average of 700 tps; Ripple routinely does 1,500 tps, with a max capability of 10,000 tps; Hashgraph boasts 250,000 tps; Red Belly clocks in at 400,000 tps; and IOTA currently claims bragging rights at 500,000 tps (theoretical maximum).

Even PayPal averages 120 tps; while VISA averages 1,500–2,000 tps and supposedly has a max peak load capacity of 56,000 tps.
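
To make the tps yardstick concrete, here’s a quick back-of-the-envelope sketch in Python. The block size, average transaction size, and block interval below are rough, commonly cited 2017-era Bitcoin figures, not exact protocol constants:

```python
# Back-of-the-envelope throughput estimate (illustrative numbers only).
# tps = (transactions per block) / (seconds between blocks)

def estimated_tps(block_size_bytes: int, avg_tx_bytes: int, block_interval_s: float) -> float:
    """Rough upper bound on transactions per second for a simple blockchain."""
    txs_per_block = block_size_bytes / avg_tx_bytes
    return txs_per_block / block_interval_s

# Bitcoin, circa 2017: ~1 MB blocks, ~250-byte transactions, ~600 s block time.
print(f"Bitcoin-like chain: ~{estimated_tps(1_000_000, 250, 600):.0f} tps")
# Prints roughly 7 tps, which is why Bitcoin is called a tortoise above.
```

That single-digit result, set against VISA’s thousands of tps, is the comparison most “scalability” debates start from.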

Scalability and Number of Users

Some people measure scalability as the maximum number of users the platform can handle without degradation of network performance.

Just because humongous tps can be achieved DOES NOT mean that caliber of raw speed can be maintained ad infinitum.

A quoted tps figure is measured at some given number of users.

Consequently, if the number of users grows significantly past the platform’s maximum capacity, speed (tps) is lost roughly in proportion to the percentage increase beyond that threshold, the point where the platform starts to show stress and become unreliable.

In worst-case scenarios, a network without a mechanism for adapting its number of nodes will be rendered useless and left vulnerable to all sorts of customer data loss, corruption, and compromise.
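
Here’s a deliberately simple toy model of that idea in Python. The rated tps, the user threshold, and the straight-line degradation are my own illustration, not measurements from any real platform:

```python
# Toy model (an illustration, not real platform data): effective tps holds
# steady up to a user threshold, then degrades in proportion to how far
# past that threshold the load goes.

def effective_tps(rated_tps: float, users: int, max_users: int) -> float:
    if users <= max_users:
        return rated_tps
    overload = (users - max_users) / max_users  # fraction past the threshold
    return max(rated_tps * (1 - overload), 0.0)

for users in (10_000, 50_000, 75_000, 100_000):
    print(f"{users:>7} users -> {effective_tps(1_500, users, 50_000):,.0f} tps")
# 10,000 and 50,000 users enjoy the full 1,500 tps; 75,000 users (50%
# overload) drop to 750 tps; at 100,000 users the toy network stalls out.
```

Real networks degrade in messier ways, but the shape of the problem is the same: the advertised speed only holds up to a point.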

In other words, imagine you are meeting a friend at your local, neighborhood diner for a nice meal.

It’s fairly easy for the waitress and the cook to quickly serve you and your friend some delicious, home-style dishes.

But it’s quite a different story if a bunch of high school football players, cheerleaders, girlfriends and family supporters suddenly show up at the diner.

To make matters worse, the horde of new customers is still hyped from the game, impatient, and starving.

What usually happens?

The overburdened staff provides slow, erratic customer service, and most likely some pretty crappy food ends up on the plates.

Some believe scalability should be measured by the platform’s capacity to add nodes.

Since the number of nodes and the efficiency of finality verification are directly related, the ability to easily accommodate additional nodes can significantly affect platform efficiency.

Most platforms have a set limit on the number of trusted nodes they can effectively handle (or operate as validators) at any one time.

Others penalize lazy nodes and cannot replace them quickly.

If this condition persists, the only way to replenish the number of nodes is to either lower the requirements for operating as a valid node (a risky venture altogether) or hope the remaining nodes can take up the slack, all the while praying that customers will accept the degraded network performance in the interim.

In the earlier neighborhood diner scenario, if additional waitstaff and assistant cooks cannot easily be found, the customers have no choice but to suffer a long delay, or look for another restaurant that can handle the size of their party.
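
One concrete reason validator count is capped: in classic BFT-style consensus protocols (such as PBFT, which several permissioned platforms build on), validators exchange votes all-to-all, so per-round message traffic grows roughly with the square of the node count. Here’s a rough Python sketch; the exact message count varies by protocol, so treat the formula as an approximation:

```python
# Why permissioned networks cap their validator count: in PBFT-style
# consensus, every validator messages every other validator during the
# voting phases, so traffic grows quadratically with node count.

def bft_messages_per_round(n_validators: int) -> int:
    # Approximation: each of n validators broadcasts to the other n - 1
    # peers, in two all-to-all voting phases.
    return 2 * n_validators * (n_validators - 1)

for n in (4, 10, 50, 100):
    print(f"{n:>3} validators -> ~{bft_messages_per_round(n):,} messages per round")
# Going from 10 to 100 validators multiplies traffic by over 100x, which
# is why easily accommodating new nodes is its own scalability axis.
```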

With regards to scalability, context is crucial.

For example, in public blockchains like Bitcoin and Ethereum, anyone can function as a node and can also cease being a node at any time.

No prior authorization is needed to participate in the consensus process.

These networks are called “non-permissioned”.

By contrast, “permissioned” networks must authorize all nodes beforehand, so the network knows the identity of every node.

Non-permissioned networks are forced to add steps to their protocols to guard against the threats that all of their unknown, untrusted nodes represent (mainly Sybil attacks, where one operator spins up many imposter node identities to attempt double-spend thefts, cause multiple forks, and mount many other kinds of attacks on the network).

Since the nodes in “permissioned” networks were vetted and are known, all nodes are considered “trusted nodes,” and there is no need to perform those time-consuming extra steps.

This contributes greatly to tps speed. In fact, all the fast platforms mentioned earlier run on “permissioned” networks.

It’s yet to be seen if the fast platforms can maintain superior tps in a “non-permissioned” setting.
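
For a feel of what those extra steps cost, here’s a toy proof-of-work in Python, the kind of Sybil-resistance mechanism Bitcoin’s non-permissioned network relies on. The payload and difficulty are made up for illustration, not any real network’s parameters:

```python
# One of the "extra steps" a non-permissioned network adds: proof-of-work.
# Making each block expensive to produce blunts Sybil attacks, at the
# cost of time and electricity. Toy parameters, for illustration only.
import hashlib

def mine(data: bytes, difficulty: int) -> int:
    """Find a nonce so sha256(data + nonce) starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

print("found nonce:", mine(b"block payload", difficulty=4))
# Each extra leading zero makes mining ~16x harder. A permissioned network
# of pre-vetted, trusted nodes can skip this work entirely, which is a big
# part of the tps gap described above.
```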

As you can see, the term “Scalability” can mean different things to different people — and within different operating environments.

In one setting, “Scalability” is defined as raw speed — the number of transactions per second.

In some circles, “Scalability” is regarded as the number of users that can effectively be handled.

In yet another scenario, as in the case of Zilliqa, “Scalability” means the ability to quickly accommodate additional nodes.

So, if you happen to hear someone touting “Scalability” comparisons, you can simply ask that person:

“Are you sure you’re not comparing Christmas cookies to unicorns?”

By JaiChai

About the Author:

JaiChai has been in the Disruptive Technology, Computer Science and Cryptocurrency spaces for many years.

Originally published at https://steemit.com on December 20, 2017.
