There’s been a lot of discussion around Big Data lately: exponential growth, together with the ability to process staggering quantities of data, has pushed the computing and storage power of science and business to levels that, as we will see, can be managed but not fully comprehended.

We move easily between terabytes and gigahertz, and even the best-known IT company, Google, took its name from “googol”, a titanic number formed by a 1 followed by a hundred zeros.

However, is our mind really capable of grasping the vastness of giga, tera, peta, exa and the other incomprehensible measures that evoke the nigh-on-impossible riches of Scrooge McDuck? The answer is: definitely not. Our brain handles quantities up to the thousands reasonably well, but when the numbers exceed that limit, things get complicated.

Let’s test this.

Each of us has in their PC a hard drive with a storage capacity of at least a terabyte; that is 10^12 bytes, or 1,000,000,000,000 bytes written out.

Alright, but how big is it, really?

Let’s suppose we write each byte contained in our hard drive on a 1 cm long slip of paper. If we lay these slips end to end, one after the other, how far would we get?

Well, the result would be a paper “snake” about ten million kilometres long: enough to cover the distance between the Earth and the Moon roughly twenty-six (26) times.

Or it could wrap around the circumference of the Equator roughly 250 times.

If we counted one human for every byte, those 10^12 people could populate about 142 planets as crowded as ours.

If you’re incredibly bored and decide, strangely enough, to pass the time writing these bytes down on paper, one per second, you would need roughly as long as the human race has taken to emerge from its primitive state (around 32 thousand years). I doubt that, once you’d finished, you would also want to check whether you wrote them correctly, given that every existing drop of ink would not suffice to complete the task.
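These comparisons are simple arithmetic. Here is a quick back-of-envelope check in Python, assuming 1 TB = 10^12 bytes, one 1 cm slip per byte, and one byte written per second:

```python
# Back-of-envelope check of the terabyte comparisons above.
# Assumed reference values: Earth-Moon distance ~384,400 km,
# Equator circumference ~40,075 km.

BYTES = 10**12                   # one terabyte (decimal)
snake_km = BYTES * 0.01 / 1000   # 1 cm per byte -> kilometres (ten million km)

earth_moon_km = 384_400
equator_km = 40_075

moon_trips = snake_km / earth_moon_km   # ~26 one-way trips to the Moon
equator_wraps = snake_km / equator_km   # ~250 laps around the Equator

seconds_per_year = 365.25 * 24 * 3600
writing_years = BYTES / seconds_per_year  # one byte per second -> ~32,000 years

print(f"{snake_km:.0f} km, {moon_trips:.0f} Moon trips, "
      f"{equator_wraps:.0f} Equator wraps, {writing_years:.0f} years")
```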

Nevertheless, such a huge quantity of information fits comfortably inside one of your pockets, and talking about terabytes gives nobody vertigo today. Maybe that is because, protected by our inability to fully grasp the vastness of these numbers, we are not scared of them. It’s like seeing a handful of trees without ever perceiving the forest.


Let’s get back to our Big Data systems. By definition, these are systems that work with amounts of data too large to fit in a standard computing machine.

Therefore, more than a terabyte’s worth of information.

Let’s look, then, at two of the most famous systems: some estimates suggest that Google and YouTube manage, respectively, 15 exabytes and 75 petabytes of data. An exabyte is equivalent to 10^18 bytes, or 1,000,000,000,000,000,000 bytes written out.

Think about it: if Google’s bytes were grains of sand, there would be enough to fill the beaches of two planets as big as the Earth.

Or, to make another kind of comparison, Google holds around 4,500 different books for every human on Earth and, since the story of each individual can be told in far fewer books, we can deduce that Google is capable of hosting every kind of information, even the most irrelevant, about every human being born on this planet from the Big Bang until today.
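These figures rest on a few assumptions worth making explicit: a widely quoted estimate of about 7.5×10^18 grains of sand on Earth’s beaches, a world population of about 7.5 billion, and a plain-text book of roughly 450 kB. Under those assumptions:

```python
# Rough arithmetic behind the Google comparisons above.
# All inputs are estimates, not official figures.

GOOGLE_BYTES = 15 * 10**18           # ~15 exabytes (estimated)
GRAINS_ON_EARTHS_BEACHES = 7.5e18    # a widely quoted estimate
POPULATION = 7.5e9                   # assumed world population
BOOK_BYTES = 450_000                 # assumed size of a plain-text book

beach_earths = GOOGLE_BYTES / GRAINS_ON_EARTHS_BEACHES        # ~2 Earths' beaches
books_per_person = GOOGLE_BYTES / (POPULATION * BOOK_BYTES)   # ~4,400 books each

print(f"{beach_earths:.0f} Earths' worth of beaches, "
      f"{books_per_person:.0f} books per person")
```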

Scary, isn’t it?

The amount of information processed by YouTube is, without doubt, much smaller, but still staggering.

If you decided to watch every video on YouTube, you would need to spend some 5,700 years in front of a screen, considerably longer than the age of the Great Sphinx of Giza.

Should you want to convert every video and transfer it onto Super 8 film, you’d get a strip long enough to cover the distance between Mercury and Venus.
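The viewing-time figure follows from the estimated archive size and an assumed average bitrate (here ~3.3 Mbit/s, a guess for illustration, not YouTube’s actual encoding):

```python
# Sanity check on the YouTube viewing-time figure, under assumed parameters.

YOUTUBE_BYTES = 75 * 10**15   # ~75 petabytes (an estimate, not official)
BITRATE_BPS = 3.3e6           # assumed average bitrate, bits per second

seconds = YOUTUBE_BYTES * 8 / BITRATE_BPS
years = seconds / (365.25 * 24 * 3600)

print(f"~{years:.0f} years of continuous viewing")
```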


If we were to take an exabyte or a petabyte and compare it to a googol, the result would be so much smaller that we could easily approximate it to zero. Put another way, the Google archive amounts to roughly 1.5×10^-79 % of a googol: a decimal point followed by almost eighty zeros before the first meaningful digit.
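Python’s arbitrary-precision integers make it easy to verify this ratio exactly, using the estimated 15 EB figure from above (an assumption, not an official number):

```python
from fractions import Fraction

# Python integers have arbitrary precision, so a googol is easy to handle exactly.
googol = 10**100
assert len(str(googol)) == 101   # a 1 followed by 100 zeros

google_bytes = 15 * 10**18       # the estimated Google archive (an assumption)
ratio = Fraction(google_bytes, googol)

# ~1.5e-81 as a fraction, i.e. ~1.5e-79 as a percentage: effectively zero.
print(float(ratio))
```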

In turn, a googol is minuscule compared to a googolplex. A googolplex is the number written as a 1 followed by a googol of zeroes; in other words, 10 raised to a googol, or 10^(10^100).

To put it in perspective: a googol, however enormous it effectively is, can be written out in full on one or two lines of a squared notebook.

It is harder to imagine how many digits a googolplex has (a googol itself has 101 digits), but it suffices to know that they vastly exceed the number of elementary particles in the whole known universe, estimated at between 10^79 and 10^81.

The consequence is that, although it’s relatively simple to write down the digits of a googol in decimal notation, it would be impossible to do the same with a googolplex, even if every particle in the universe could be turned into ink and paper, or into magnetic storage.

Speculating on the possibility of saving the full list of a googolplex’s digits, even the most powerful computer today would need around 3×10^81 years to write them out. And the universe has “only” existed for 1.38×10^10 years.
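To see where a figure of that order comes from, here is the estimate made explicit, under the hypothetical assumption of a machine writing about 10^11 digits per second:

```python
# The googolplex-writing estimate above, made explicit.
# Assumption: a machine that outputs ~1e11 digits per second.

digits = 10**100                 # a googolplex has about 10**100 digits
rate = 1e11                      # assumed writing rate, digits per second
seconds_per_year = 365.25 * 24 * 3600

years_to_write = digits / rate / seconds_per_year
print(f"~{years_to_write:.1e} years")   # on the order of 3e81 years
```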

Even so, the googolplex is nothing compared to other extremely large numbers, such as the megiston and Graham’s number. To convey the idea, it suffices to say that a googolplex can still easily be expressed in exponential notation (as 10^10^100, for instance), whereas a number as big as Graham’s cannot be expressed in that system at all.
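To get a feel for why, here is a minimal sketch of Knuth’s up-arrow notation, the kind of tool needed to even write numbers like Graham’s. The recursion is faithful, but anything beyond the smallest arguments is hopelessly uncomputable in practice:

```python
def up(a: int, b: int, arrows: int) -> int:
    """a arrow^n b: one arrow is exponentiation; each extra arrow
    iterates the previous operation b-1 times."""
    if arrows == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up(a, result, arrows - 1)
    return result

print(up(3, 3, 1))  # 3^3 = 27
print(up(3, 3, 2))  # 3^(3^3) = 3^27 = 7625597484987
# up(3, 3, 3) is already a power tower of 7,625,597,484,987 threes:
# far beyond anything a computer could ever evaluate.
```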

But we can rest easy for now: unless we fall victim to a technological singularity, such numbers will not easily become part of our daily lives.