More IT Trends for 2025: Where Are We Going to Put All Those Servers??


As a continuation of our future-of-IT conversation focused on AI, I wanted to highlight a few interesting infrastructure trends we see heading into 2025.  (Disclaimer: I can’t promise not to mention AI 🙂)

CLOUD VS. ON-PREMISES

First off, did you know there’s currently a tug of war going on between “cloud” and on-premises systems?  In the age-old battle between opex (pay-as-you-go operating expenses) and capex (up-front capital investment), is capex making a stunning comeback??

During the height of the COVID-19 pandemic, many companies accelerated their rush to migrate their full server infrastructure to the cloud.  This was partly out of necessity (there were fewer on-site employees to manage servers), and partly a logical conclusion (if everyone is working from home, why do our servers need to be in any specific physical place?).

Well, fast forward a couple of years: cloud sprawl has set in, the cloud bills are piling up, and many organizations are moving back to “hybrid” strategies.  This is driven not only by cost – especially with the newer resource-heavy AI workloads – but also by privacy, compliance, and performance concerns.  Today, organizations are more likely to keep (or migrate) resource-intensive or highly sensitive systems in-house.  So this is definitely a consideration for any company developing a cloud strategy today.

(NOTE: when I say “in-house” I’m including “private cloud” configurations where you own the hardware and run it in a private data center.)
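To make the opex-vs-capex tension concrete, here is a minimal break-even sketch. All of the dollar figures are hypothetical assumptions chosen for illustration – they are not real cloud or hardware prices, and a real comparison would also factor in staffing, depreciation, and refresh cycles.

```python
# Back-of-envelope opex-vs-capex break-even sketch.
# All figures below are illustrative assumptions, not real pricing.

def breakeven_month(cloud_monthly, onprem_capex, onprem_monthly, horizon=60):
    """Return the first month where cumulative on-prem cost drops
    below cumulative cloud cost, or None within the horizon."""
    for month in range(1, horizon + 1):
        cloud_total = cloud_monthly * month
        onprem_total = onprem_capex + onprem_monthly * month
        if onprem_total < cloud_total:
            return month
    return None

# Hypothetical: $12k/mo in cloud spend vs. $200k of hardware
# plus $4k/mo to run it in a private data center.
print(breakeven_month(cloud_monthly=12_000,
                      onprem_capex=200_000,
                      onprem_monthly=4_000))  # → 26
```

Under these made-up numbers, owning the hardware pays for itself a little past the two-year mark – which is roughly why long-lived, steady workloads are the first candidates to come back in-house.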

DATA CENTERS ARE BOOMING

Speaking of data centers, good luck getting space in one!  If you need it, start planning and shopping now.  Data centers are not only making a comeback, they are running at all-time capacity and new ones are being built at an unprecedented rate.  Giant “hyperscale” data centers, which can occupy several million square feet and consume hundreds of megawatts of power, increasingly dominate the industry.

The main reasons for the current data center boom are predictable:

  1. The AI gold rush – yup, AI requires a lot of computing resources.  AI servers consume so much power and generate so much heat (up to five times that of “normal” servers) that rack density often has to be reduced, which means more physical space and far more power to run and cool.
  2. Cloud computing – the major cloud and hyperscale platform companies (AWS, Microsoft Azure, Google, Meta, etc.) are not going anywhere.  In fact, they are the main builders of hyperscale facilities.
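The rack-density point above is just arithmetic: each rack has a fixed power (and cooling) budget, so hotter servers mean fewer servers per rack and more racks overall. A quick sketch, using assumed per-server power draws rather than any vendor’s actual specs:

```python
# Illustrative rack-density arithmetic (assumed numbers, not vendor specs).
import math

def racks_needed(num_servers, server_kw, rack_budget_kw):
    """How many racks a fleet needs when each rack has a fixed power budget."""
    per_rack = int(rack_budget_kw // server_kw)  # servers that fit in one rack
    return math.ceil(num_servers / per_rack)

# Hypothetical 15 kW rack budget:
print(racks_needed(1_000, 0.5, 15))  # conventional ~0.5 kW servers → 34 racks
print(racks_needed(1_000, 2.5, 15))  # GPU servers at ~2.5 kW each → 167 racks
```

Same thousand servers, roughly five times the power per box, about five times the floor space – which is exactly why AI buildouts are eating data center capacity so quickly.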

ENVIRONMENTAL IMPACT AND SUSTAINABILITY

Though precise data is not available, the consensus seems to be that AI hasn’t yet surpassed cryptocurrency in global energy consumption, but at its current pace, this could happen by 2026 or 2027.  And of course AI’s energy requirements will only keep growing from there.  Sustainability is a major concern!

How will we mitigate AI’s huge thirst for power?

  • Use renewable energy – some hyperscale data centers are already powered 100% by renewable energy, such as two in Nevada run by data center provider Switch.  They achieve this by using a combination of dedicated solar power stations and partnerships with local and state utilities to purchase renewable energy.
  • Use AI itself to automate energy optimization – as with cybersecurity, AI is being called upon to combat a problem that AI itself created: while it consumes huge amounts of energy, AI systems are also used to anticipate system trends and optimize energy usage.
  • Improve hardware and cooling technologies – the major chip makers, like Nvidia and IBM, are actively working on new generation hardware that will use up to 75% less energy.  And switching from air cooling to liquid cooling technologies can reduce overall data center energy consumption by more than 10%.
  • Improve AI model efficiency – This has its pros and cons…
    • PROS: for starters, technologies like China’s DeepSeek LLM, which requires far fewer resources to train, will reduce the energy used per application. And “small language models” (SLMs) are increasingly replacing “large language models” (LLMs) for specialized AI applications – which, when you think about it, describes most AI applications.  Specialized SLMs require fewer resources to train and run than their more generic LLM counterparts.
    • CONS: on the flip side, making the models more efficient may just allow more processing to be squeezed into the same computing facilities (a classic instance of the Jevons paradox), so this might end up being a wash in terms of energy consumption.
  • Repurpose the heat – it’s possible to transfer “waste heat” from data centers directly into urban heating systems.  As you might imagine, this is especially useful in cold places, like Helsinki, Finland, where it’s already being done.
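One subtlety with the savings figures above: independent efficiency gains compound multiplicatively, not additively. A simplified sketch (treating each percentage as applying to the same baseline, which is itself an approximation – the chip savings apply to compute while the cooling savings apply to the whole facility):

```python
# Sketch: independent fractional savings compound multiplicatively.
# Percentages are the illustrative estimates quoted above, applied to
# a single baseline as a simplification.

def remaining_energy(baseline_kwh, *savings_fractions):
    """Apply each fractional saving in turn; return the energy left over."""
    energy = baseline_kwh
    for s in savings_fractions:
        energy *= (1 - s)
    return energy

# 75% less chip energy, then a 10% facility saving from liquid cooling:
print(remaining_energy(100.0, 0.75, 0.10))  # → 22.5
```

So a 75% saving stacked with a 10% saving leaves 22.5% of the original draw – a 77.5% total reduction, not 85%.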

And finally, this is the part where I let you know I can help. Just reach out if you need advice on infrastructure strategies — or any growth or IT org strategies — for your small company.