For decades, data centers have (metaphorically) lived and died by the reliability of power. No power, no data center, no work. First-generation thinking revolved around power protection: making sure electricity was always available, and paying extra in the form of an uninterruptible power supply (UPS) and redundant paths to electrical power. But high performance computing (HPC), and the acceptance of electrical interruptions for some types of applications, is leading to a reconsideration of the need for power protection.
Shifting away from batteries and standby generators to other - or no - alternatives is coming into vogue. For instance, Bloom Energy's spin is to make its natural gas fuel cell the primary source of data center power, with the power grid as the backup. Fuel cells with no large mechanics and few big moving parts, goes the argument, are more reliable than a diesel generator. And since power generation is local, anything that happens to the inbound grid and upstream utility is irrelevant, so long as there's no interruption in the natural gas supply.
But why bother with backup power at all? Do your applications need it in the larger scheme of things? Adding a generator and UPS to any data center costs money, even if you are purchasing data center capacity as a service. UPSes and generators have upfront capital costs that need to be amortized, and there are ongoing operational expenses to maintain and test them.
If the local electrical grid is reliable enough, you may be able to skip the UPS/generator combination entirely. Iceland and some other places around the globe have built highly redundant, highly robust grids to support heavy industry dependent on a steady flow of electricity. While not originally envisioned for today's world of data centers, a robust utility grid is exactly what a data center needs, with or without on-site power protection.
Which brings us to applications and data center infrastructure. HPC applications such as predictive modeling and simulation, along with cryptocurrency mining, are being run today on compute infrastructure without the dedicated power protection of a UPS and generator. Users have made a conscious trade between the cost and (sometimes dubious) benefits of power protection and the need to get as much computing power as possible per dollar for faster results. Cryptocurrency mining, for example, is a race to compute hashes as fast as possible until one falls below the network's target value. With proper design, if power is interrupted, the lost work becomes a matter of waiting for a reboot and resuming from the last logged recovery point.
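That "proper design" boils down to periodic checkpointing: the workload logs its progress to durable storage at intervals, and after a power cut it restarts from the last checkpoint rather than from scratch. A minimal sketch of the idea in Python is below; the file name, step counter, and checkpoint interval are illustrative assumptions, not any particular product's mechanism.

```python
import json
import os

CHECKPOINT = "progress.json"  # hypothetical checkpoint file


def load_checkpoint():
    # Resume from the last logged recovery point, if one exists.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["step"]
    return 0


def save_checkpoint(step):
    # Write to a temp file, then rename atomically, so a power cut
    # mid-write cannot leave a corrupted checkpoint behind.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step}, f)
    os.replace(tmp, CHECKPOINT)


def run(total_steps, checkpoint_every=100):
    start = load_checkpoint()  # 0 on first run, last checkpoint after a crash
    for step in range(start, total_steps):
        # ... one unit of work goes here ...
        if (step + 1) % checkpoint_every == 0:
            save_checkpoint(step + 1)
    save_checkpoint(total_steps)
    return total_steps
```

If power fails at step 250 of 1,000 with a checkpoint interval of 100, the rebooted job reloads step 200 and loses at most one interval of work; only that window, not the whole run, is at the mercy of the grid.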
Understandably, many applications and data centers can't live without power protection. Real-time transaction processing is just one example where systems cannot afford an interruption in service. But IT professionals are starting to rethink the old practice of demanding 100 percent uptime for everything in order to get more overall computing power per dollar.