Data gravity refers to the concept that, at sufficient scale, a given mass of data attracts more data, which in turn pulls applications and workflows with it. The greater the mass of that data, the stronger its gravitational pull. The term ‘data gravity’ was coined by software engineer Dave McCrory, who described the gravitational pull that large masses of data seem to exert on IT systems, drawing an analogy to physics, in which objects with greater mass pull objects with less mass towards them.
To some extent, this concept has been at the centre of the model by which all digital infrastructure has developed over decades. Historically, data sets were monolithic: often created from one source, stored in one location, and processed in a linear manner. Data, and the infrastructure supporting it, grew for decades in centralised locations, often dictated by proximity to internet exchange connectivity.
Today, data creation is dynamic and omnipresent, growing at a scale that is hard to contemplate, and, perhaps most importantly, the data created is incredibly valuable. The scale and volume of data sets continue to grow exponentially, with two-thirds of the world’s data created in the last three years. However, the more data we create, the greater the mass, the greater the gravity, and the harder it becomes to escape the gravitational pull of that data set. This is now creating significant challenges.
The data gravity problem
The historic proximity requirements of the digital infrastructure industry, and the data gravity that has resulted, have created an over-concentration of digital infrastructure in metro locations such as London, New York, Dublin, Frankfurt, Ashburn and Amsterdam. Some of these locations can no longer support the current, let alone future, growth being drawn to them. As data gravitates to those concentrated areas, the physical infrastructure of land, buildings, water and, most importantly, power cannot keep up.
Current estimates suggest that data centers in Dublin will soon be consuming 20% of the city’s power, which is ten times the estimated global average and up from closer to 5% only seven years ago. This enormous growth has largely been driven by hyperscale cloud operators, such as Microsoft, Google and AWS, expanding their data center footprints in a location in which they have been established for decades. Worse still, the cloud operators artificially strengthen data gravity through the immensely high data-egress costs of their platforms. This is clearly unsustainable in more ways than one. Not only are the resources available for this type of infrastructure finite, but Ireland’s energy production relies heavily on a predominantly carbon-generating grid, meaning that its data center industry is far from green.
Availability of power and infrastructure is one factor; another is the economics of the digital infrastructure industry, particularly the data center industry, which have also changed significantly. Due to recent events, power prices in London and Frankfurt have risen dramatically, enormously increasing the cost of operating data centers. This has a knock-on effect on the economic viability of storing and processing data in these historically gravitationally dense locations. Does it still make sense for a German company to process all of its applications and data in a data center in Frankfurt if the price of doing so has risen fivefold in the last twelve months?
Overcoming data inertia
Data gravity creates challenges. As data sets grow and their gravitational pull increases, more concentration and pressure have been placed on the historically centralised digital infrastructure locations. Not only are those locations running at or beyond capacity, they are also not sustainable. Therein lies the Sustainable Data Gravity Paradox: the greater the data gravity of a location, the more pressure is placed on its finite resources under the current model. Something has to change, and it can.
Some data is latency sensitive, as in high-frequency trading. Some must reside in a specific country to comply with data privacy regulations. Other data must be hyper-connected, as is the case for content distribution. For these types of data, there is likely a strong need for it to be located, stored and processed in a specific location. Cost, efficiency and sustainability may, as a result, play a less important role in where that data sits.
However, what about data that is less latency sensitive, has no specific data privacy requirements, and does not need to be hyper-connected? Should that data also reside in resource-constrained, expensive, inefficient and less sustainable locations? One would hope the answer is ‘no’. The fact is, if your data or applications can sit in a cloud environment, they can probably sit anywhere, as you rarely have control over, or knowledge of, where your data resides in a cloud environment.
Data gravity can be strong, but what if the gravitational pull of lower-cost, more efficient, more sustainable digital infrastructure pulled harder than the gravitational pull of the data itself? Data is inherently mobile. Terrestrial, wireless and subsea networks transport massive, almost unimaginable, volumes of data around the world at ever-higher speeds and ever-lower costs. It is possible, and it is time, to break the Sustainable Data Gravity Paradox.
At Verne Global, our data center customers can be more efficient and save 75% of their data center costs by locating their digital infrastructure in our sustainable data center facilities in Iceland and Finland, compared with metro locations like London and Frankfurt. They can also reduce their carbon footprint by 98% by doing so. We help our customers take the journey towards digital infrastructure sustainability by enabling them to divide their data and applications between those that justify metro locations and those that can take advantage of more efficient, sustainable locations. It is time to solve the Sustainable Data Gravity Paradox, and we have a solution.