Cloud computing has revolutionised the world of tech. Again. But for all the amazing services you can now buy, and stop buying, with a few clicks, let's not forget that fundamentally nothing has changed. Data is still made up of 1s and 0s, which encode numbers and text, which in turn encode other things: sales, customer feedback, images, sound and video.
Shortly after the millennium, I remember working on grocery retailer customer data using a 200MHz laptop. Imagine that now! Manipulating a multi-billion-row dataset with a tenth of a modern processor core and so little memory (64MB) that you can't even buy an SD card that small anymore. How was that even possible? In many ways, the world was simpler back then. The challenge was more direct: you had to understand how the data was stored on disk, and how it was read and manipulated.
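To make that concrete, here is a minimal sketch, in Python, of the kind of single-pass processing that made it possible: stream the file from disk one row at a time and keep only small running aggregates in memory. The file name and column names here are hypothetical, purely for illustration, not anything from the original project.

```python
import csv
from collections import defaultdict

def total_spend_per_customer(path):
    """Aggregate spend per customer by streaming the file row by row.

    Memory use grows with the number of distinct customers, not the
    number of rows, so even a multi-billion-row file can be processed
    on a machine with very little RAM.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # reads one row at a time from disk
            totals[row["customer_id"]] += float(row["spend"])
    return totals

if __name__ == "__main__":
    # "transactions.csv" with "customer_id" and "spend" columns is a
    # made-up example file for illustration only.
    for customer, total in total_spend_per_customer("transactions.csv").items():
        print(customer, total)
```

Because the data is read sequentially and never held in memory all at once, the dataset can be arbitrarily larger than the machine's RAM; the trade-off is that you must design the computation around what a single pass over the disk can give you.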
Roll forward to today. The bottleneck of data volumes and processing power has largely been removed. But have we lost something if we don't know what the cloud service is really doing at a low level? We might be able to look up how many nodes, VMs or cores our service is using, and how much memory, and even a little about how the data processing is done. But all too often we just stand up the service, let it do its thing, and pay the bill.
But herein lies a problem. Commercially, cloud service providers want to drive consumption. So apart from a marketing incentive to show that one service is better than a competitor's, is there enough incentive to drive compute efficiency? This is not a suggestion of overt market manipulation; rather, that we are all riding the exciting wave of cloud service adoption without perhaps keeping the potential pitfalls in mind. Because until quantum computing disrupts everything, fundamentally it is still only 1s and 0s being processed through logic gates, same as it was decades ago.