Equans Data Centers delivers mission-critical infrastructure for Europe’s hyperscale and colocation leaders. We build high-performance and reliable data centers with unmatched quality, speed, and future-proof scalability.
6 News
Aligned acquired, Meta to spend $600bn, Stargate launches first data center
17 Atoms for data
Are small modular reactors the answer for data centers’ power needs?
27 Colt romances hyperscalers in Paris
We visit Colt DCS to talk about its French expansion
30 A sticky business
Csquare CEO Spencer Mullee on why he came out of retirement
33 The Networking supplement
Gotta connect them all
49 A wonderful life
TV’s Kevin O’Leary, aka Mr Wonderful, on why he backs data center firm Bitzero, and why AI bubble fears are misplaced
53 The data center hearing (almost) no one showed up to
Visions of the past and future in Slough, UK
59 The Awards winners
Who won the DCD Awards, the best show of the year
65 Storage wars
Could the shortages facing the HDD market ultimately lead to the death of hard drives in the data center?
69 A brief history of time
How clocks keep data centers ticking
76 In perfect harmony: How Emerald AI is turning data centers into flexible grid assets
DCD speaks to Dr. Varun Sivaram, CEO of Emerald AI, on how it is using AI to redefine utilities’ relationship with data centers
80 Is the D2D trend bringing innovation to the L-Band by satellite?
Space weather could pose an existential threat to the satellite industry unless its impact can be better understood
84 The next era of silicon
NextSilicon joins the growing list of Nvidia competitors
88 MVNOs: A niche or a nuisance?
Mobile virtual network operators are shaking up the telecoms market
92 Fun with stamps
Satellites and philately
94 The woods of IT are dark and deep
CIO Tony Scott’s journey through the forest of Disney, Microsoft, and the US Government
98 Inflection point
#PeopleFirst
At Kirby, we truly value our People and invest in their development.
From the Editor
Data centers take the nuclear option
Nuclear is making a comeback. Thanks to the insatiable demand of data centers, and the challenges faced by the grid, the industry is willing to pay extra for the power it needs and is pinning its hopes on unproven technology.
The new Manhattan project
For our cover feature (p17), we talk to a number of small modular reactor (SMR) developers and the data center companies backing them.
From Radiant to Oklo to Rolls-Royce to Stellaria, we profile the different technologies and timelines that could solve the power bottleneck.
Plus, a look at the realistic challenges and doubts that mean we can't rely on nuclear just yet.
Colt's Paris push
We travel to France to understand why Colt is betting big on Paris. We tour the company's data center and see its plans for a hyperscale expansion across multiple new facilities (p27).
Just when I thought he was out
Spencer Mullee has been pulled back in to the data center market. The former DCI Data Centers executive is back, this time with Csquare. Built out of the ashes of Cyxtera and Evoque, the
Brookfield-backed business plans to expand across North America. With a collection of older facilities, the company is targeting traditional workloads - but also has its eye on lucrative AI deals (p30).
Mr. Full of 'Wonder'
Fresh from makeup, TV’s Kevin O’Leary tells us why the AI bubble isn't real, and why he's backing a crypto/AI data center business and a data center megavalley in Alberta, Canada (p49).
So much for anti-data center sentiment
We head to a data center hearing in Slough, and barely anyone turns up. While planning meetings are getting increasingly feisty, this UK town is at peace with the industry (p53).
Winner, winner, company dinner
Here's who won at the DCD Awards - congrats (p59)!
Storage wars
HDD shortages could lead to SSDs' moment (p65).
Time to read this
How keeping time is critical to data centers' operations (p69).
MrBeast's required reading
MVNOs are growing in popularity, but do they risk damaging the telcos on which they rely (p88)?
Plus:
A networking supplement, Disney's ex-CIO, NextSilicon's CEO, AI bubbles, L-band satellites, fun with stamps, and much more!
The number of kilowatt-hours of power the US got from nuclear power in 2023, around 19 percent of total electricity generation.
Publisher & Editor-in-Chief
Sebastian Moss
Managing Editor
Dan Swinhoe
Senior Editor
Matthew Gooding
Telecoms Editor
Paul 'Telco Dave' Lipscombe
Compute, Storage, & Networking Editor
Charlotte Trueman
Cloud & Hybrid Editor
Georgia Butler
Energy & Sustainability
Senior Reporter
Zachary Skidmore
Junior Reporter
Jason Ma
Head of Partner Content
Claire Fletcher
Partner Content Manager
Farah Johnson-May
Copywriter
Erika Chaffey
Designer
Katherina Bradshaw
Media Marketing
Stephen Scott
Group Commercial Director
Erica Baeta
Conference Director, Global
Rebecca Davison
Live Events
Gabriella Gillett-Perez
Tom Buckley
Audrey Pascual
Joshua Lloyd-Braiden
Channel Management
Team Lead
Alex Dickins
Channel Manager
Kat Sullivan
Emma Brookes
Zoe Turner
Tam Pledger
James Raddings
Director of Marketing Services
Nina Bernard
CEO
Dan Loosemore
Aligned Data Centers acquired for $40 billion
US data center firm Aligned was acquired in October, in a deal that smashed all previous records in the data center M&A space.
Macquarie Asset Management, which currently owns colo firm Aligned, is selling it to a group comprising the AI Infrastructure Partnership (AIP), MGX, and BlackRock-owned Global Infrastructure Partners (GIP).
The $40 billion deal is expected to close in the first half of 2026.
That valuation makes the deal by far the biggest ever acquisition of a data center company, beating the $16.6bn Blackstone and its partners paid for APAC operator AirTrunk in 2024. On that occasion, the seller was also Macquarie.
Texas-based Aligned has campuses in Chicago, Illinois; Dallas, Texas; Salt Lake City, Utah; Phoenix, Arizona; and Virginia.
The company has further sites in development in Maryland, Ohio, Illinois, and Virginia, as well as across Latin America. In total, its active and planned capacity is 5GW.
Macquarie first invested in Aligned in 2018, joining BlueMountain Capital Management. Mubadala, Patrizia, and CenterSquare had also previously invested in the firm.
Founded in 2006, BlackRock-owned GIP manages more than $100bn in client assets
across infrastructure equity and debt, with a focus on energy, transport, water and waste, and digital sectors.
The company’s digital investments include data center firm CyrusOne and Vodafone tower company Vantage Towers (both in partnership with KKR). The firm has also previously provided loans to data center firm Vantage (unaffiliated with the tower company). GIP recently invested in construction firm ACS’ new data center venture, which is developing a site in the US and Europe to lease to hyperscalers.
BlackRock also owns European data center operator Mainova Webhouse and recently invested in a new UK data center venture known as Gravity Edge.
MGX is an AI investment vehicle owned by Abu Dhabi’s sovereign wealth fund, Mubadala Investment Company. It is a partner in Stargate, OpenAI’s sprawling data center build-out that is looking to add 50GW of capacity to support the AI lab’s work.
Rather confusingly, AIP is a group of investors that also includes BlackRock and MGX. Founded in September last year, it is also backed by the likes of Microsoft and Kuwait’s sovereign wealth fund, the Kuwait Investment Authority.
Nvidia and Elon Musk’s xAI have also joined the AIP consortium.
NEWS IN BRIEF
Crown & ATC sue Dish
Tower firms Crown Castle and American Tower are suing mobile operator Dish. Dish owner Echostar has sold large chunks of spectrum to AT&T and SpaceX, and is looking to exit its tower lease contracts, claiming force majeure.
Microsoft plans $17.5bn AI infrastructure investment in India
Microsoft will spend $17.5bn in India over four years from 2026, across data centers, chips, and other AI infrastructure projects.
SoftBank reportedly in talks to buy DigitalBridge
Japanese investment and telco giant SoftBank is in talks to acquire DigitalBridge, the digital infra investor with stakes in AtlasEdge, DataBank, Switch, Takanock, Vantage, and Yondr.
858TB of data lost after South Korea data center fire
South Korea’s government may have permanently lost 858TB of information after a crucial hard drive was destroyed in a fire at a data center in Daejeon. A battery fire occurred at the National Information Resources Service (NIRS) data center, located in the city of Daejeon, on September 26.
Musk proposes using Teslas as mobile inferencing fleet
Elon Musk said idle Tesla cars could be used as a mobile inferencing fleet. On a quarterly earnings call, he said that if you have tens or hundreds of millions of cars in a fleet that had 1kW of inferencing capability, “that’s 100GW of inference distributed with power and cooling taken care of.”
Google to put chips in space
Google plans to launch its TPU AI chips into space.
The company will partner with Planet Labs on ‘Project Suncatcher,’ with an initial two satellites launching by early 2027 to explore the potential of larger-scale space data center clusters.
A paper laid out plans for a potential 81-satellite cluster but noted that significant technical and logistical hurdles still exist.
Meta to spend $600bn on US data centers by 2028
Meta claims it will spend $600 billion on digital infrastructure in the US over the next three years.
Data centers are “crucial” to helping the company reach its goal of “building superintelligence for everyone” and “helping America maintain its technological edge,” the company said in a November blog post.
“That’s why we’re investing in building industry-leading AI data centers right here in the US. We’re committing over $600 billion in the US by 2028 to support AI technology, infrastructure, and workforce expansion.”
Quite how Meta intends to fund this is unclear. The company posted a net income of $62.3bn in 2024, and the $600bn figure is more than double the amount the firm has made since it went public in 2012.
It is also more than OpenAI has said it intends to spend building out its Stargate network of data centers in the US and around the world.
Meta’s capex spend for 2025 is expected to be $72 billion, while CFO Susan Li told investors in October that this figure was likely to rise “significantly” in 2026, with the company set to “invest aggressively” in data centers.
This quarter also saw Meta form a joint venture with Blue Owl Capital to finance its $27 billion data center in Louisiana, dubbed Hyperion.
Under the arrangement, Blue Owl will own 80 percent of the gigawatt-scale data center site, with Meta committed to the site as a tenant for at least 16 years.
Meta is set to lease the entire campus once construction is complete. These lease agreements will have a four-year initial term with options to extend.
However, Meta also provided the joint venture with a guarantee for the first 16 years of operations, under which the social media firm would make a cash payment to the JV upon non-renewal or termination of a lease.
The social media giant contributed land and construction-in-progress assets for the campus development that were previously classified as held-for-sale.
Blue Owl Capital has made a cash contribution of approximately $7bn to the joint venture, and Meta received a one-time distribution from the joint venture of approximately $3bn. A portion of the capital raised by Blue Owl will be funded by debt issued to PIMCO and select other bond investors through a private securities offering.
Meta has around 30 data center campuses in operation or development globally. News of the 2GW Louisiana development – now known as Hyperion – first broke back in November. In December, the firm announced its intention to construct the mega campus on 2,250 acres between the municipalities of Rayville and Delhi, about 30 miles (48.2km) east of Monroe.
Renderings of the Hyperion campus suggest up to nine buildings totaling some four million sq ft (371,610 sqm) are planned.
Singapore open for 200MW of new DC capacity
Singapore is requesting proposals from operators to develop more data centers in the city-state.
“DC players that are seeking new data center capacity in Singapore are welcome to apply,” The Singapore Economic Development Board (EDB) and the Infocomm Media Development Authority (IMDA) announced in December. “At least 200MW of data center capacity will be made available, with potentially more through the adoption of new and innovative green energy pathways.”
The call aims to partner with industry to develop data centers that “strengthen Singapore’s international standing as a trusted hub for AI and data center investments,” while also enhancing its resilience and international connectivity. The project proposals should also make “significant contributions” to Singapore’s economic objectives and accelerate its use of green energy.
Facilities proposed in the applications should be “best-in-class” in terms of energy and IT efficiency, and at least 50 percent powered by ‘green energy’, including biomethane, low-carbon ammonia, low-carbon hydrogen, novel fuel cells with carbon capture and storage technology, or on-site solar.
The DC-CFA2 application window will close on 31 March 2026. More information can be found on the application page. Singapore has had an ongoing moratorium since 2019, but awarded 80MW of new capacity to four operators in 2023.
Fermi makes stock market debut, valued at almost $15bn based on plans for 11GW DC campus
Such is the demand for data center capacity that grand plans and some non-binding letters of intent are now enough to create a company with a valuation on par with Qantas Airways.
Fermi America, a company co-founded by former Texas governor and energy secretary Rick Perry, went public in October, raising billions of dollars.
Fermi is planning to develop a massive 11GW data center and energy campus near Amarillo, Texas, on land owned by the Texas Tech University System.
If realized, Project Matador will span 18 million sq ft (1.67 million sqm) and would utilize a combination of on-site energy sources, including natural gas, solar, wind energy, and nuclear energy.
The company made its Nasdaq debut, valued at almost $15 billion, raising $682.5 million as part of its initial public offering (IPO).
Before this, the company secured $100 million as part of a Series C equity funding round led by Macquarie Group, as well as a $250m senior loan facility, funded solely by Macquarie’s commodities and global markets business.
The project, initially announced in June, is still in its very early stages of development. According to the company, it has commenced geotechnical work and “expects” to deliver 1GW by the end of 2026.
Some 200MW of that will come from the Southwestern Public Service Company (a contract has yet to be signed), and natural gas power from Siemens Energy and GE turbines that the company has promised to buy.
Future power will come from more gas, solar, and up to four AP1000 Pressurized Water Reactors developed by Westinghouse. Mobile gensets and BESS systems, solar, grid connections, and more, are also promised by the end of 2026.
Despite the valuation, no customers or end-users have been announced. Fermi’s market cap at time of writing is around $9.5 billion, with its stock price almost halving since its debut.
US technology firm Palantir, best known for supplying technology to police forces and militaries around the world, is reportedly considering developing a data center at the site.
Fermi’s plans are ambitious, and will likely require billions more dollars in investment to succeed.
Whether Fermi can bring its planned nuclear capacity online in time and on budget, when such projects are notorious for the opposite, is also still an open question.
Established in 1996 and headquartered in Lubbock, the Texas Tech University System consists of five universities, 21,000 employees, and 64,000 students.
Cipher Mining secures AWS as a customer in Texas
Cryptomining data center firm Cipher Mining has secured Amazon Web Services (AWS) as a customer.
November saw the firm, which is pivoting to host AI and HPC data centers at its sites as well as cryptomining, secure a $5.5 billion, 15-year deal with Amazon.
The agreement will see Cipher deliver 300MW of capacity in 2026, including both air and liquid cooling to the racks.
The capacity will be delivered in two phases, expected to begin in July 2026 and complete in the fourth quarter of 2026, with rent commencing in August 2026.
Which of Cipher’s sites would host the capacity for AWS is unclear.
Cipher was spun off from Bitcoin mining hardware giant Bitfury in March 2021 and went public via a merger with blank-check company Good Works Acquisition Corp.
Cipher’s current online capacity totals 477MW, with another ~150MW due to come online by 2026. The company has some four sites live in Texas, with another seven in planning and development across the Lone Star State.
In September, it struck a $3bn, 168MW deal with AI cloud firm Fluidstack. The deal was backed by Google, which will take on the lease if the AI cloud startup goes bust. Google also took a stake in Cipher.
Google says TPU demand is outstripping supply
Demand for Google TPUs is so oversubscribed that the company is having to turn customers away.
Speaking on a panel in October, Google’s VP and GM of AI and infrastructure, Amin Vahdat, said that Google currently has seven generations of its TPU hardware in production and that its “seven and eight-year-old TPUs have 100 percent utilization.”
Google’s TPUs - or Tensor Processing Units - are chips optimized for AI workloads. The latest generation, dubbed Ironwood, was unveiled in April of this year.
“This tells me that demand is tremendous,” Vahdat said in response to a question about the demand signals currently driving companies’ capex spend cycle. “We’re early in the cycle… certainly relative to the demand we’re seeing.”
Known customers of Google’s TPUs include Apple, Anthropic, and Safe Superintelligence. OpenAI is reportedly close to adopting them in part in a rare move away from Nvidia GPUs.
Google used TPUs to train its latest Gemini 3 LLM.
Grundfos Data Center Solutions
Keep your cool
Efficient water solutions for effective data flow
Meet your efficiency and redundancy goals
Smart pumps offering up to IE5 efficiency, redundant solutions that meet up to 2N+1 redundancy – whether you’re planning a Tier 1 or Tier 4 data center, you can rely on Grundfos to keep your data center servers cool with a 75-year history of innovation and sustainability at the core of our corporate strategy.
Your benefits:
• High-efficiency cooling solutions saving water and energy
• Redundancy meeting up to Tier 4 requirements
• End-to-end partnership process
AWS hit by outage of Virginia cloud region
Amazon Web Services suffered a major outage in October.
Between 11:49pm PDT on October 19 and 2:24am PDT on October 20, AWS experienced increased error rates for AWS services in the US-East-1 Region in Northern Virginia.
As well as Amazon’s own operations, those affected included Perplexity, Snapchat, Fortnite, Airtable, Canva, Slack, Signal, PlayStation, Clash Royale, Brawl Stars, the Epic Games Store, and Ring cameras.
The cloud company said that the event was the result of DNS resolution issues for the regional DynamoDB service endpoints.
“The root cause of this issue was a latent race condition in the DynamoDB DNS management system that resulted in an incorrect empty DNS record for the service’s regional endpoint that the automation failed to repair,” AWS said.
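To make that failure class concrete, here is a toy, deterministic sketch of how an uncoordinated "apply" and "clean up" pair can leave an endpoint with an empty record set. The endpoint name, plan IDs, and addresses are illustrative assumptions, not AWS internals.

```python
# Toy replay of the failure class AWS described: a delayed automation worker
# applies an outdated DNS plan over a newer one, and a cleanup pass then
# deletes the outdated plan's records, leaving the endpoint empty with
# nothing left to repair it. Names and values are illustrative only.

ENDPOINT = "dynamodb.us-east-1.example"   # hypothetical regional endpoint
dns_table = {}

old_plan = {"id": 41, "ips": ["10.0.0.1"]}
new_plan = {"id": 42, "ips": ["10.0.0.2", "10.0.0.3"]}


def apply_plan(plan):
    """Enactor: write a plan's records with no check that a newer plan
    has already been applied - the missing guard is the bug."""
    dns_table[ENDPOINT] = {"plan_id": plan["id"], "ips": list(plan["ips"])}


def retire_plan(plan):
    """Cleaner: delete records belonging to a plan marked as stale."""
    entry = dns_table.get(ENDPOINT)
    if entry and entry["plan_id"] == plan["id"]:
        dns_table[ENDPOINT] = {"plan_id": None, "ips": []}  # empty record set


apply_plan(new_plan)    # healthy: endpoint resolves to the new plan's IPs
apply_plan(old_plan)    # a delayed worker clobbers it with the stale plan
retire_plan(old_plan)   # cleanup removes the stale records -> empty answer

print(dns_table[ENDPOINT])   # {'plan_id': None, 'ips': []} - resolution fails
```

A version check before applying or retiring a plan would block this interleaving; that is the generic mitigation for this class of race, rather than a description of AWS's specific fix.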
US-East-1 is one of AWS’ original cloud regions and the place from which Amazon hosts many of its worldwide services. When that region has issues, the effects are usually felt around the world, even if companies don’t have services in that region.
Similar outages in 2020 and 2021 caused major issues for companies globally, including Amazon subsidiaries IMDb and Ring, as well as online games including PlayerUnknown’s Battlegrounds.
Cloudflare suffers major outage after config error
Content Delivery Network provider Cloudflare suffered two outages in November and December that brought down many companies.
On 18 November 2025 at 11:20 UTC, Cloudflare’s network began experiencing significant failures to deliver core network traffic.
The firm said its issue was triggered by a “change to one of our database systems’ permissions,” which caused the database to output multiple entries into a “feature file” used by its Bot Management system.
“That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network,” the company said in a blog.
The software had a limit on the size of the feature file that was below its doubled size, which caused the software to fail.
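As a rough illustration of that failure mode, the sketch below shows a consumer with a hard-coded capacity being tripped by a generated file that silently doubles. The limit value and feature names are assumptions for illustration, not Cloudflare's actual code.

```python
# Minimal sketch: a machine-generated config file grows past a preallocated
# limit in the consumer, and the consumer treats the oversize input as fatal
# rather than rejecting it gracefully. Values are illustrative assumptions.

MAX_FEATURES = 200  # hard limit baked into the consumer (assumed value)


def load_feature_file(lines):
    """Parse a machine-generated feature file into memory."""
    features = []
    for line in lines:
        if len(features) >= MAX_FEATURES:
            # Treating this as fatal takes the proxy down; a safer design
            # falls back to the last known-good file and raises an alert.
            raise RuntimeError("feature file exceeds preallocated limit")
        features.append(line.strip())
    return features


good_file = [f"feature_{i}" for i in range(180)]   # within the limit
bad_file = good_file * 2                           # database change doubled it

load_feature_file(good_file)                       # loads fine
try:
    load_feature_file(bad_file)
except RuntimeError as exc:
    print("proxy would fail here:", exc)
```

Falling back to a last known-good file instead of failing hard is the kind of ingestion hardening Cloudflare describes below.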
The company said it initially thought the problem was caused by a “hyper-scale DDoS attack,” but once it realized the issue, Cloudflare rolled back to an earlier version of the file that fit within the size limit.
Cloudflare noted it was the company’s worst outage since 2019.
Work to prevent the issue happening again has started. Cloudflare said it would be hardening ingestion of Cloudflare-generated configuration files, enabling more global kill switches for features, eliminating the ability for core dumps or other error reports to overwhelm system resources, and reviewing failure modes for error conditions across all core proxy modules.
Around 20 percent of websites globally are thought to use Cloudflare services in some form. The issue took down a number of services, including ChatGPT, X, Bet365, Grindr, and League of Legends.
DCD and sister site SDxCentral were also brought offline.
“We are sorry for the impact to our customers and to the Internet in general. We know we let you down today,” Cloudflare said.
“We’ve architected our systems to be highly resilient to failure to ensure traffic will always continue to flow. When we’ve had outages in the past it’s always led to us building new, more resilient systems.”
Just two weeks later, however, Cloudflare suffered another issue, albeit smaller and shorter than the first one.
The incident, on December 5, caused a portion of Cloudflare’s network to experience significant failures. Thankfully, however, the outage only lasted around 25 minutes.
Dan’s Data Point
Q3 2025 saw a record 7.4GW of data center leasing, with more capacity leased by hyperscalers in the US in one quarter than in all of 2024. The TD Cowen report that figure comes from suggested leasing for the first three quarters of 2025 totaled 11.3GW.
CyrusOne outage hits CME
A chiller plant failure at a CyrusOne data center in Chicago, Illinois, resulted in a major outage.
The issue at the CHI1 data center meant that tenant CME Group was forced to halt futures trading for nearly 10 hours in late November.
The world’s biggest exchange operator experienced a multi-hour outage, which halted trading of stocks, bonds, commodities, and currencies.
“CyrusOne has restored stable and secure operations at its Chicago 1 (CHI1) data center in Aurora, Illinois,” the company said.
“To further enhance continuity, we have installed additional redundancy to the cooling systems.”
CyrusOne, which is owned by KKR & Co. and Global Infrastructure Partners, acquired the data center from CME in 2016 for $130 million. On-site staff and contractors at the facility reportedly failed to follow standard procedure for draining cooling towers ahead of freezing temperatures in the area.
TOMORROW’S DATA CENTRES TODAY
Smart automation, integrated safety, and scalable security for resilient energy-efficient data centres
As AI, cloud, and edge computing grow, so do the demands on your data centre’s resilience, efficiency, and security. Honeywell integrates building management, fire safety and physical/cyber security into one seamless system, offering:
Unified access control to mitigate evolving threats
Energy and emission control to help you meet EED and net-zero targets
Advanced fire detection and remote compliance tools
Redundancy and uptime to meet Tier 4 standards
Honeywell has deployed systems for top-tier hyperscale and colocation data centres on six continents. By leveraging advanced building intelligence, we can help you scale infrastructure and accelerate velocity at scale.
Elevate your data centre operations today
First Stargate data centers launch in Texas
OpenAI’s flagship Stargate data center campus in Abilene, Texas, is now live.
At the end of September, developer Crusoe said that the first two buildings on the planned 1.2GW site are now operational and being used by Oracle Cloud Infrastructure (OCI) for OpenAI. The two buildings span 980,000 sq ft (91,045 sqm) and support 200MW+ of IT.
Site construction began in June 2024, with the first Nvidia GB200 racks delivered the following June.
In March, Crusoe began work on the second phase of Abilene on the Lancium Clean Campus. The expansion will add six more buildings, bringing the total to approximately four million square feet and a total power capacity of 1.2GW. It is expected to be completed in mid-2026. Crusoe topped out the eighth and final building at the campus in November. It would appear the whole campus will
be occupied by Stargate, with Oracle’s Larry Ellison revealing at his firm’s AI World conference that Oracle was deploying 450,000 Nvidia chips for OpenAI.
Abilene is the first of multiple data center sites OpenAI is leasing under its $500 billion Stargate initiative.
In Michigan, OpenAI and Oracle are set to lease a 250-acre campus being developed by Related Digital. The Barn site will consist of three 550,000 sq ft (51,097 sqm) single-story buildings, with construction due to start next year.
Vantage is developing two sites for Stargate in Texas and Wisconsin, while Stack is developing a campus for the project in New Mexico.
Stargate was launched by OpenAI in January, in partnership with SoftBank, Oracle, and Abu Dhabi’s MGX. The company has since said it has some 7GW of capacity secured under the program.
Further afield, OpenAI has also announced Stargate data centers in Norway, the United Arab Emirates, Australia, and the UK. An Argentine data center has been proposed, and the company has signed an MoU with SK Telecom to develop an AI data center in the southwest region of South Korea. It has also discussed the possibility of a data center in Canada, and is reportedly in talks with Tata Consultancy Services (TCS) about a potential Stargate data center in India.
OpenAI is committed to spending about $1.4 trillion over the next eight years on data centers and cloud services, CEO Sam Altman has said. Meanwhile, HSBC analysts have said OpenAI needs to find $207 billion in new funding by 2030 to pay for its ambitious expansion plans.
OpenAI has reportedly signed a contract for cloud services from Oracle worth $300 billion over five years. It also has a cloud contract worth $250 billion to buy services from Microsoft, and has promised to shell out $38bn with AWS over a seven-year period. It also has a $6.5 billion contract with CoreWeave, as well as deals in place with Nvidia, AMD, and Broadcom.
Fulfilling all these contracts is likely to be a tall order. The company is predicted to see its users jump from 800 million to 3 billion by 2030, but its annual revenue is currently estimated to be around $12bn.
A recent report from Ed Zitron suggested OpenAI spent $5.02 billion on inference alone with Microsoft Azure in the first half of 2025.
US colo developer Digital Realty has signed a deal with Current Hydro LLC for 500GWh of hydropower from three projects on the Ohio River.
Under the terms of the agreement, Digital Realty has agreed to purchase the energy and associated environmental attributes from the 19.99MW New Cumberland (Hancock County, West Virginia), 19.99MW Pike Island (Belmont County, Ohio), and 28.5MW Robert C. Byrd (Mason County, West Virginia) hydroelectric projects.
The portfolio has a combined capacity of 68MW. It is expected to become operational in 2029, with the energy delivered into the PJM Interconnection market and directed to Digital Realty’s data center portfolio in Northern Virginia.
Current Hydro is a hydropower developer, owner, and operator
that focuses on non-powered dams and optimizing run-of-river systems to deliver hydropower. The company, based in Pittsburgh, Pennsylvania, currently has six projects in its pipeline, including three contracted to supply power to Digital Realty.
While hydropower is commonly used to power data centers in certain areas, such as the Nordics, few data centers have signed direct supply agreements with hydroelectric power providers.
Digital has previously signed a hydroelectric deal in Germany. Digital Edge (Philippines), Iron Mountain (US), and Aruba (Italy) are other examples.
Google became one of the first hyperscalers to contract energy supply from hydropower earlier this year, after signing a 3GW hydroelectric deal with Brookfield in the US.
AWS pays $700 million for DC-zoned land in Virginia
Amazon has reportedly acquired land in Bristow, Virginia, that has been zoned for data center use.
First reported by local press in November, Amazon Data Services has acquired the site of the planned Devlin Technology Park in Prince William County.
Sources say the company bought the 270-acre site from Stanley Martin Homes for $700 million in a deal that closed late last month.
Devlin Tech Park, along Devlin Road south of Interstate 66 and north of Linton Hall Road, has been zoned to allow for up to 3.5 million square feet of data center space and up to three substations.
The deal equates to around $3.7 million an acre, well above what is already a very high average in Virginia for data center-zoned land. Stanley Martin reportedly paid less than $60 million for the land.
Stanley Martin declined to comment to the publication.
The real estate developer started acquiring land that would become the tech park back in 2021, and first filed to rezone the site for data centers in 2022. After much opposition from locals and at least one long-running legal battle, the Prince William board of county supervisors granted the rezoning request in November 2023.
News that Amazon might be interested in taking over the Devlin site surfaced in September.
Amazon has a major and growing presence across Virginia. Loudoun County hosted Amazon Web Services’ (AWS) first data centers when the book company launched its first cloud facilities in 2006.
Today, the company is known to own or lease data centers around Haymarket, Manassas, Ashburn, Sterling, Chantilly, Warrenton, and McNair, to name a few, spanning across Loudoun, Fairfax, Prince William, and Fauquier Counties.
The company’s exact footprint isn’t known, but totals more than 50 data centers across the region, with dozens more in development. Greenpeace estimated the company had 1.7GW of capacity back in 2019, having more than doubled that figure since 2015. Amazon’s US-East Northern Virginia cloud region has been described as the largest single concentration of corporate data centers in the world.
As well as growing its existing footprint in Loudoun, Fairfax, and Prince William with new facilities, DCD has seen AWS active with plans for new projects in Fauquier, Culpeper, King George, Spotsylvania, Stafford, Louisa, Orange, and Caroline Counties.
Land in Northern Virginia viable for data center development has long been at a premium. But as local officials look to further stymie data center development in saturated areas, land prices are still rising.
In November, digital infrastructure-focused investment firm SDC Capital Partners paid $615.05 million for 97 acres of land in Leesburg, Virginia, zoned for data center development. The site was bought from local landowner and developer Chuck Kuhn’s JK Land Holdings.
Three arrested at data center hearing in Wisconsin
Three people were arrested in Port Washington, Wisconsin, after they interrupted a Tuesday meeting held by the local authority to discuss a potential data center project.
Although the meeting did not concern the proposal for the 902MW, so-called ‘Lighthouse’ data center, which will be developed by Vantage Data Centers as part of OpenAI and Oracle’s Stargate infrastructure project, the first thirty minutes of the meeting were taken up by public comments in opposition to the facility.
Video footage of the incident shows police arresting three women, among them Christine Le Jeune, a member of the nonprofit organization Great Lakes Neighbors United.
Le Jeune had made a speech criticizing the project and the Port Washington Common Council, but she was forced to conclude early after going over the three-minute time limit on public comments. This was met with applause and brief shouting, which subsided before the next speaker began her speech.
A police officer can then be seen speaking to Le Jeune and two others. More officers then gather around, and video footage eventually shows officers dragging all three individuals across the floor before handcuffing and escorting them out. Police told Fox that two of the women were arrested because they dropped to the ground and grabbed Le Jeune to prevent her from being taken out of the premises.
Earlier in the meeting, a member of the Council had asked members of the audience to “refrain from interrupting anyone during their public comment portion.” Otherwise, they would be removed. Police reportedly issued multiple noise warnings to the crowd.
Liquid Cooling Made Easy
Liquid cooling has become essential for high-performance accelerated computing. Air cooling was practical when chip densities were lower, but those densities have skyrocketed, placing ever more pressure on traditional air cooling until it has become increasingly unmanageable. New approaches to heat removal are therefore needed to avoid the risk of hot spots that lead to equipment failure and downtime.
Direct liquid cooling is not a product – it is an architecture supported by critical systems including coolant distribution units. Direct liquid cooling is also a bit more than the name implies, as it includes the air cooling and heat rejection units (e.g., chillers) that are already familiar and prevalent within data centers today.
Complementary Air Cooling
Air cooling complements liquid cooling and is needed to reject heat from the air-cooled components in the IT space.
This includes, but is not limited to, computer room air conditioners and air handlers, fan walls, traditional perimeter units, InRow units, and rear door heat exchangers.
Liquid Cooled Servers
Direct-to-chip is the preferred method today, where liquid coolant is pumped through a cold plate attached directly to the chip. Cold plates can also be attached to other hot components such as memory.
Immersion is another method where components are fully or partially immersed in liquid coolant.
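To give a sense of the flow rates involved, the back-of-envelope calculation below shows how much coolant a direct-to-chip loop has to move. The 100kW rack load and 10°C temperature rise are assumed, illustrative values rather than figures from this article.

```python
# Back-of-envelope sizing for a direct-to-chip loop: how much water flow is
# needed to carry away a given rack heat load at a chosen temperature rise.
# Load and temperature rise are illustrative assumptions.

HEAT_LOAD_W = 100_000        # rack heat load captured by cold plates (assumed)
DELTA_T_C = 10.0             # coolant temperature rise across the rack (assumed)
CP_WATER = 4186.0            # specific heat of water, J/(kg*K)
DENSITY_WATER = 997.0        # kg/m^3

mass_flow = HEAT_LOAD_W / (CP_WATER * DELTA_T_C)          # kg/s
volume_flow_lpm = mass_flow / DENSITY_WATER * 1000 * 60   # liters per minute

print(f"{mass_flow:.2f} kg/s, about {volume_flow_lpm:.0f} L/min of water")
# -> roughly 2.4 kg/s, around 144 L/min for a 100kW rack at a 10C rise
```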
Heat Rejection Units
Heat rejection units, including chillers, dry coolers, and cooling towers, are used to reject heat in the Technical Cooling System loop to the outdoors.
Coolant Distribution Unit (CDU)
The CDU isolates the Technical Cooling System loop from the rest of the cooling system and controls temperature, flow, pressure, fluid treatment, and heat exchange.
CDUs vary in type of heat exchange, capacity, and form factor (rack- vs. floor-mounted).
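The sketch below illustrates, in simplified form, the supervisory role a CDU plays: modulating flow on the facility side to hold the technical loop's supply temperature at a setpoint. The controller, setpoint, and gain are illustrative assumptions, not any vendor's implementation.

```python
# Illustrative sketch of a CDU's control role: hold the technical loop's
# supply temperature at setpoint by modulating the facility-water valve.
# A simple proportional controller stands in for real control logic.

SETPOINT_C = 30.0   # target coolant supply temperature to the cold plates (assumed)
GAIN = 0.08         # proportional gain (illustrative)


def cdu_step(supply_temp_c, valve_position):
    """One control step: open the facility-water valve further when the
    technical loop runs hot, close it when the loop runs cold."""
    error = supply_temp_c - SETPOINT_C
    return min(1.0, max(0.0, valve_position + GAIN * error))


valve = 0.5
for temp in [33.0, 32.1, 31.0, 30.2, 29.8]:   # loop cooling toward setpoint
    valve = cdu_step(temp, valve)
    print(f"supply={temp:.1f}C -> facility valve at {valve:.2f}")
```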
Navigate liquid cooling, chip-to-chiller: the 3 main elements in a liquid cooling ecosystem are:
• Heat capture within the server
• Heat exchange inside the data center
• A method of rejecting heat to the outdoors
We are on the precipice of a new nuclear age, driven not so much by individual governments as by the insatiable energy needs of data centers. Hyperscalers have been quick to snap up all available power from large-scale reactors, leaving many in the sector looking to a smaller solution to meet their nuclear ambitions. What began as Eisenhower’s vision of “atoms for peace” has evolved into a 21st-century race for “atoms for data,” as the energy needs of AI reshape nuclear’s role in the world.
Zachary Skidmore Senior Reporter, Energy and Sustainability
Are small modular reactors the answer for data centers' power needs?
A small modular solution
For data center developers grappling with unprecedented energy demand from artificial intelligence (AI), which the International Energy Agency projects will help double data centers’ electricity consumption to 945TWh by 2030, Small Modular Reactors (SMRs) are an attractive concept.
Described by Dr. Tim Gregory, nuclear scientist and author of Going Nuclear, as the “flat pack furniture of the nuclear
world,” SMRs are miniaturized nuclear fission reactors, with capacities ranging from 1MW to more than 450MW.
At present, more than 127 SMR designs worldwide are at various stages of development, offering a range of capacities and reactor types. Designs differ substantially, with the central split between Generation III reactors, based on miniaturized versions of current light-water reactor (LWR) designs, and Generation IV, which encompasses next-generation reactor concepts cooled using sodium, lead, molten salt, or gas.
Gen III designs benefit from proven reactor design and a robust fuel and component supply chain. In contrast, Gen IV designs employ advanced fuels and coolants, which can offer higher inherent safety features, greater thermal efficiency potential, and multi-purpose applications.
Gen III reactors, adapted from light-water reactor designs, cannot scale down to the smallest sizes seen in some Gen IV concepts. Most Gen III designs are at the higher end of the SMR power range, up to 470MW. In contrast, Gen IV SMRs use entirely novel designs, enabling scalability to very small sizes, even down to 1MW.
SMRs are not a new concept. The US first pioneered the technology in the 1950s to power the USS Nautilus, the world's first nuclear-powered submarine. What is new, Gregory explains, is the private sector’s modular, factory-built approach to the concept, which he says could dramatically cut costs, shorten construction timelines, and reduce political risk by allowing nuclear projects to fit within electoral cycles. In doing so, Gregory says, it could potentially “transform the industry.”
For data center developers, this makes
SMRs a potential godsend for their power and siting needs, offering a reliable, dispatchable, and flexible energy source, with energy density far exceeding that of other low-carbon alternatives. For example, SMRs can theoretically deliver about 50 times as much power per square foot as rooftop solar panels.
Despite still being years away from commercialization, the promise of SMRs has already led data center operators to flock to sign deals to secure future capacity. Hyperscalers including Google and AWS, as well as colocation providers such as Switch, Endeavour, and Data4, have all penned SMR deals, but no company has been as aggressive as Equinix.
Quiver of energy options
Equinix is the proverbial cross-bencher of the data center sector, operating both a retail colo-based business and a hyperscale xScale division. Its dual structure has created a uniquely diverse power demand profile, ranging from small single megawatt sites near urban areas to facilities drawing hundreds of megawatts across immense acreage.
As a result, the company faces a serious challenge: finding an energy source flexible enough to power its portfolio of more than 270 operational sites. The challenge has been exacerbated by prolonged grid-connection timelines across major markets, particularly in the US and Europe, and by demand growth that is now outpacing the power system’s ability to keep up.
To meet this daunting challenge, Equinix required what its SVP of Global Energy, Adrian Anderson, dubs “a quiver of energy arrows” comprising a range of
technologies tailored to each data center site and grid constraint.
The company has since filled this “quiver” with a mix of energy sources, from renewables to fuel cells. However, with traditional renewables unable to deliver firm baseload power and fuel cells still reliant on natural gas, Equinix has increasingly turned to SMRs as a possible panacea for its long-term energy needs.
Deal count
While most data center companies have partnered with a single SMR provider - Google with Kairos Power and AWS with X-energy - Equinix has signed agreements with no fewer than four different companies to date.
Its first foray into the wide world of SMRs came in April last year, when it announced a non-binding agreement with US SMR developer Oklo for up to 500MW of future capacity. It followed this up in August with a trio of agreements with Radiant Industries, ULC Energy, and Stellaria, which brought the company's total commitments to more than a gigawatt of future SMR capacity.
What made these deals notable is not only the sheer capacity involved but also the diversity of reactor types, reflecting Equinix’s need for varying power options. The agreements spanned from more conventional reactor types, such as Rolls-Royce’s 470MW SMR, which utilizes pressurized water reactor (PWR) technology, to Stellaria’s 250MW Gen IV fast-molten-salt reactor (FMSR) and Radiant’s 1MW microreactor design.
“Microreactors or small SMRs could work well for localized deployments, while larger SMRs could support our hyperscale campuses,” says Anderson. “It’s about matching the right technology to the right context.”
Another notable aspect of the agreements is their international scope, spanning US and European markets and multiple reactor types, highlighting that deployment strategies and development philosophies differ across regions.
According to Anderson, Equinix’s agreements were not just power purchases, but efforts to incubate the SMR sector. He argues this responsibility should not rest only with the hyperscalers, and says Equinix aims to show that colocation providers can also foster innovation and help democratize access to advanced, carbon-free power.
Anderson identifies three main levers that he says could have a tangible impact on supporting the technology's commercialization, namely: “Signaling the market through long-term offtake commitments, enabling financing with PPAs as bankable annuities, and fostering collaboration through targeted investments and partnerships.”
A closer look at each agreement, and at each SMR developer's deployment model, provides insight into how the company expects these technologies to evolve and how they may ultimately power the sector.
AI Era
Unlike traditional data centers running task-specific software, AI training clusters experience sharp, unpredictable spikes in consumption, necessitating a stable, energy-dense, always-available power source. For Equinix's xScale division, this is a clear and present need, exacerbated by constrained grids and the inability of intermittent renewables to meet these requirements.
This need led Equinix to Stellaria, a French SMR startup founded by Technip Energies and Schneider Electric in 2023 to deliver an SMR ideally suited to the AI data center market. Its partnership with Equinix was not only one of the largest deals in the space, representing a 500MW pre-Power Purchase Agreement (PPA), but also one of the first binding deals in a sector where non-binding but headline-capturing MoUs and LOIs have become commonplace. The companies followed this up in November, signing a pre-order agreement, with Equinix securing the first energy capacity reservation from Stellaria’s inaugural commercial reactor.
So what drew Equinix to Stellaria?
According to Stellaria CEO Nicolas Breyton, the relationship emerged through Stellaria’s industrial network, especially its ties with Schneider, Equinix’s primary power systems supplier. That introduction quickly evolved into a collaboration focused on developing a decarbonized, high-density baseload power solution capable of responding to the sharp fluctuations in GPU-driven workloads.
In response to the challenge, Stellaria designed a 250MW Generation IV FMSR, which Breyton claims is “perfectly suited for xScale AI data centers that demand continuous, high-density power.”
The Stellarium reactor not only matches the power scale and continuity needed by modern AI campuses but also addresses another challenge, namely that low-carbon power sources
often lack grid-stabilizing inertia. Most conventional power sources provide inertia via spinning turbines that slow down the rate of frequency change during a sudden supply or demand imbalance. Breyton claims Stellaria’s SMR can provide a “shock absorber” through the reactor’s thermal inertia.
“AI data centers don’t consume power like classic ones,” he says. “GPUs surge and fall — they need inertia, flexibility, and stability. That’s exactly what molten salt reactors can provide.”
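For readers who want the physics behind that claim, the textbook swing equation (a standard power-systems relation, not anything specific to Stellaria) shows why more stored inertia means a slower frequency excursion for the same power mismatch:

$$\frac{df}{dt} = \frac{f_0}{2\,H\,S_B}\,\bigl(P_m - P_e\bigr)$$

Here $f_0$ is the nominal grid frequency, $H$ the inertia constant in seconds, $S_B$ the rated power, and $P_m - P_e$ the mismatch between power supplied and power drawn; the larger $H$, the smaller the rate of frequency change when GPU-driven load suddenly surges or falls.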
Stellaria and Equinix are evaluating several deployment models, including behind-the-meter installations and utility-operated configurations. Each site is expected to use SMRs in redundant pairs, backed by gas turbines and grid interconnection, to achieve full N+1 availability. In this setup, Breyton argues that data centers can become power producers, as well as consumers, with the ability to inject excess power back into the grid when needed.
“Data centers will not only become a data hub, but also an energy hub thanks to the installation of the SMRs onsite,” contends Breyton.
This is especially important within the European market, where many grids are reaching a breaking point, stifling data center development. Through placing reliable, abundant nuclear power at the center of industrial campuses, Stellaria aims to enable clusters of data centers and other energy-intensive industries to form self-sufficient energy islands. And it is not the only company to be pursuing this idea.
Energy Islands
Nowhere in Europe faces the same level of power constraints as the Netherlands. Once the interconnectivity center of the world, the country’s position as a data center hub is slipping, with its grid stretched to near breaking point.
As a result, grid fees in the country have risen by up to 95x for large baseload users, with large-scale data centers unable to secure firm grid connections. For ULC Energy, a nuclear project developer and the exclusive Dutch vendor for the Rolls-Royce SMR, powering the data center sector in the Netherlands is increasingly taking on a behind-the-meter flavor.
“When you look at the grids in the European mainland, the Dutch grid is probably closest to getting really constrained,” ULC CEO Dirk Rabelink says. “We have a relatively small country, a very high dependency on natural gas, and our electricity system is undersized versus the energy activity that we have.”
Consequently, ULC is aiming to deploy its SMRs as part of “energy islands” that can serve as self-contained clean-power hubs, delivering dedicated electricity to a range of offtakers. Underwriting such a large project required an anchor offtaker willing to take up the bulk of the energy through a long-term supply agreement. This is where Equinix stepped into the fold, agreeing to a (non-binding) PPA for 250MW of capacity. The agreement has benefits for both parties, giving ULC a large, creditworthy offtaker to improve plant economics and attract investment, and Equinix future baseload power for one of its facilities.
Underpinning these energy islands will be the largest SMR planned for commercial deployment. Developed by Rolls-Royce SMR, the reactor is based on established PWR technology and is expected to exceed 470MWe. For ULC, the choice came down to proven technology.
“Rolls-Royce’s SMR stood out because it combined proven light-water technology, standard low-enriched uranium fuel, and a vendor with real manufacturing depth,” says Rabelink. “For us, it wasn’t enough to have a clever design - we needed a partner that could actually deliver a full power plant with a clear supply chain and modules sized for practical transport.”
Though the deal with Equinix is likely to see ULC supply power to the company behind-the-meter, the firm ultimately aims to make each energy island a hybrid power node, connected to the grid but capable of operating independently when required. That flexibility is central to its appeal for data centers, Anderson notes. “I really believe hybrid solutions are going to be essential. Staying connected to the grid allows us to sell power into it, draw power from it when needed, and ultimately enhance its reliability and capacity,” he asserts.
Although data centers are considered the ideal anchor offtaker for ULC, the company was careful to ensure it was not the sole offtaker. Instead, ULC hopes to power a range of other industries, such as hydrogen developers, small and medium enterprises, and potentially district heating systems. This is a model endorsed by Equinix, with Anderson
contending that the perfect configuration would position “the data center as the SMR’s anchor off-taker, while retaining the ability to deliver power to additional customers, the public grid, and nearby communities.”
Middle Ground
While Equinix’s deals in the European market have focused on larger SMR companies, its US agreements have reflected a smaller, more modularized approach.
Compared to Rolls-Royce’s 470MW behemoth, Santa Clara-based Oklo’s Powerhouse reactor is relatively small, boasting a capacity of 75MW. However, for the company, which has signed by far the most deals in the data center space of any SMR vendor and boasts a customer pipeline exceeding 14GW of mostly non-binding capacity, it is this size that, it says, makes it best suited to the needs of the data center market.
The modular nature of Oklo’s fast fission reactor is designed specifically to “mirror the way data centers grow hall by hall, in 100MW increments,” said Brian Gitt, the company’s SVP and head of business development. This can enable greater flexibility, Oklo says, allowing reactors to be colocated with major data centers or connected via the grid under a PPA. Operators will have flexibility to match supply with demand and reduce exposure to grid bottlenecks, a significant concern for Equinix.
“Equinix is uniquely positioned because of its massive distributed footprint,” Gitt asserts. “As AI agents begin to interact in real time, latency and performance will define competitiveness, and that means power and compute must move closer to where people live and work.”
Safety first
We all know the warning signs of radiation. For most, it evokes a sense of foreboding, a warning to tread lightly and not expose oneself to the potentially life-changing effects that could emanate from the product that bears its mark. However, for nuclear proponents, the fear of radiation is misplaced, worsened by several high-profile nuclear incidents - Chernobyl, Three Mile Island, and Fukushima - which have fed a view that nuclear energy is a dangerous and potentially life-threatening form of energy.
“The evidence is clear: nuclear power is one of the safest and most environmentally friendly sources of power that humanity has ever invented,” contends Tim Gregory. However, the perception persists, and while times are changing, SMR firms will have to make a concerted effort to get the message across. On this front, however, they have an advantage. Compared to traditional reactors, SMRs are expected to incorporate even stronger intrinsic safety mechanisms to improve resiliency. As a result, Gregory argues that “Fukushima-style steam or hydrogen explosions would be extremely unlikely.”
To ensure safety, SMR developers are incorporating physical and thermal design features that prevent overheating and meltdowns without active intervention. Subsequently,
most reactors meet a "walk-away safe" standard, in which containment and cooling function automatically, an essential requirement for obtaining clearance to site reactors in urban areas.
While nuclear power will always have detractors and fearmongers, the facts are clear: it is a much safer option than fossil-fuel systems, which have been deployed liberally in data centers for years. “I would feel far safer working in a building with a small modular reactor in the basement, with my office directly above it, than living near a coal-fired power station or Drax in Yorkshire,” says Gregory.
Its safety credentials are bolstered by the low-carbon nature of its energy production. Nuclear energy boasts one of the lowest carbon emissions per kilowatt-hour, second only to wind. While reactors produce waste, advancements in recycling and reuse have begun to reduce the burden on states to create large-scale installations for the safe storage of radioactive waste. Ultimately, for Gregory, much of the opposition towards nuclear is ideologically based, especially within the environmentalist movement, which remains staunchly anti-nuclear. This, Gregory contends, “discredits the environmental movement so much, because the data shows it’s orders of magnitude safer and better for the environment than fossil fuels.”
Therefore, according to Gitt, the deal with Equinix reflects the company’s practical needs, providing an option that could power both large hyperscale data centers and smaller-scale inference sites.
On the Edge
But what about sites even further out towards the Edge that require a smaller-scale, reliable system to power operations solely behind the meter? While hyperscale deployments get the headlines, there is a growing school of thought within the sector that small-scale data centers supporting inference workloads will come to dominate.
Many of these facilities are planned for urban locales, where grid connection can often take up to ten years, meaning the need for a dispatchable, baseload energy source has never been greater.
Unlike most SMR developers focused on multi-hundred-megawatt designs, Radiant Industries is targeting the opposite end of the spectrum. Its 1.2MWe and 3MWth helium-cooled Kaleidos micro-reactor is small enough to fit in a container and designed to be shipped directly to an end user for rapid installation.
“We’re not competing with largescale nuclear developers; we’re solving a different problem - bringing power to places where it’s not available, or not available fast enough,” says Mike Starrett, chief revenue officer at Radiant.
A key selling point for the data center sector is the reactor's potential for quick, easy deployment. As a fully factory-manufactured, containerized solution, Starrett claims it can go from truck-to-power in as little as 24-48 hours, with a
more realistic commercial deployment timeline of a few weeks. That timeline is faster even than natural gas, which has become the go-to option for operators seeking quick, dispatchable supply. The company plans to start construction at its first reactor factory in Oak Ridge, Tennessee, in 2026, aiming to scale up production to 50 reactors per year by 2028.
For Equinix’s retail business, which operates more than 200 data centers in and around urban areas, Radiant offered an attractive solution, leading the company to pre-order 20 of its microreactors.
“Microreactors, or micro SMRs, can be highly effective where smaller-scale deployment makes sense - providing bridging capacity or powering compact data centers,” explains Equinix’s Anderson.
Radiant’s deployment model focuses on behind-the-meter energy, meaning the reactors supply power directly to the data center or nearby facilities, rather than relying on constrained public grids. Starrett clarifies that this approach can serve both existing facilities looking to expand capacity and new builds.
The small size could also have significant logistical advantages for Equinix, allowing it to scale its edge data center portfolio more rapidly than with conventional energy sources, offering a completely different approach to the deals Equinix signed in Europe.
Bottlenecks and Barriers
The promise of SMRs is undeniable. Compact, factory-built, and capable of supplying dispatchable, carbon-free power to data centers and other industries, SMRs could truly transform the energy landscape for the sector. Yet, despite these advantages, commercial deployment remains highly uncertain. Current forecasts point to the 2030s at the earliest, with progress contingent on
>>Brian Smith, Idaho National Laboratory
the development of full supply chains, licensing frameworks, and financing mechanisms.
One of the most pressing bottlenecks is licensing and regulatory approval. Current US and global processes involve years of site assessment, environmental review, and multi-jurisdictional compliance. For SMRs, these processes could be even longer, with regulators forced to adapt rules originally designed for large reactors to smaller, untested designs. As a result, calls for regulatory reform and direct state backing have intensified to prevent the “new nuclear age” from stalling.
The US Department of Energy has responded aggressively to these calls by launching several pilot projects intended to streamline early deployments. The most notable of these is its Reactor Pilot Program, which seeks to establish a DOE-led pathway for advanced reactor demonstration to streamline commercial licensing. In May, the DOE selected 11 developers, including Radiant and Oklo, to build first-of-a-kind test reactors at Idaho National Laboratory (INL).
“We’ve got reactor developers piloting their first reactors at INL, not just experimenting but actually building them to demonstrate viability,” says Brian Smith, head of reactor development at INL. “This shows the investor community that advanced nuclear is real. These reactors will be up and running at INL this decade.”
The pilot aims to support at least three reactors achieving criticality by 2030, providing proof of operational viability for investors and stakeholders. “These are not paper designs. This is real metal being deployed on the ground in Idaho and elsewhere,” Smith insists.
Two companies have already broken ground. Oklo started construction in September, and Aalo Atomics, which is developing a sodium and air-cooled reactor, in August. Radiant has completed
its front-end engineering and expects installation and testing to begin in early 2026.
Despite DOE support, full commercial deployment will require approval from the Nuclear Regulatory Commission (NRC), which could add years of documentation and review. To date, NuScale Power remains the only SMR developer to have received NRC design approval, which it secured in 2023. Other projects, such as mPower and HoloGen, have faltered due to slow NRC processes and funding challenges. Oklo, too, encountered significant setbacks after its initial license application was rejected for insufficient design information.
Smith, however, stresses cautious optimism. “We have to remain grounded in truth and frankly, remain highly cognizant of where the challenges lie, and then actively work on how to get in front of those and tackle those challenges.”

“Microreactors or small SMRs could work well for localized deployments, while larger SMRs could support our hyperscale campuses. It’s about matching the right technology to the right context.”
>>Adrian Anderson, Equinix
Regulatory uncertainty is not confined to the US; it is even more pronounced in the European market, where SMR deployments face a patchwork of licensing processes due to the lack of a single EU nuclear authority.
As a result, each European country has its own nuclear regulator, meaning that even if an SMR is approved in one country, it is unlikely to be automatically fast-tracked in another. Divergent national attitudes toward nuclear muddy the waters even further, from restrictive Germany to pro-nuclear France and the Czech Republic, meaning we are likely to see staggered approvals across the continent.
The standardized nature of SMRs could hasten the process. SMRs use repeatable, factory-built designs, allowing regulators to reuse safety cases. In addition, established vendors such as Rolls-Royce SMR, which are building better-understood Gen III designs, could benefit from an expedited regulatory process.
"We’ve spent significant time with regulators in all our target countries. The feedback we receive is consistent: our reactor is ‘boring’, and in nuclear, boring is the highest compliment," asserts Harry Keeling, head of Business Development, Rolls-Royce SMR.
However, despite such confidence, Anderson remains grounded in his expectations. “The realist in me says I’ll wait and see - positive discussions are one thing, but aligning the various
regulators in practice is always far more complicated,” he claims.
Capital costs
Even if an SMR secures regulatory approval, another barrier emerges: financing. First-of-a-kind SMRs are projected to cost between €4,000 ($4,606) and €6,000 ($6,909) per kilowatt of capacity in Europe - a hefty upfront outlay for power that ultimately has to compete with average European electricity prices of €0.2872 ($0.33) per kWh (first half of 2025). Driving down the cost curve for the reactors will therefore be of utmost importance to support their deployment and uptake.
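As a rough, back-of-the-envelope illustration of what those per-kilowatt figures imply - using a hypothetical 40MW unit and deliberately ignoring financing costs, fuel, operations, and decommissioning - the arithmetic looks something like this:

```python
# Back-of-the-envelope sketch only, not a project model. The per-kW capex
# range is the FOAK estimate quoted above; the 40MW plant size, 40-year
# life, and 90 percent capacity factor are hypothetical assumptions, and
# financing, fuel, O&M, and decommissioning costs are ignored entirely.

CAPEX_PER_KW_EUR = (4_000, 6_000)   # first-of-a-kind SMR estimate, EUR/kW
PLANT_KW = 40_000                   # hypothetical 40MWe unit
LIFETIME_YEARS = 40                 # assumed operating life
CAPACITY_FACTOR = 0.90              # assumed availability

for capex_per_kw in CAPEX_PER_KW_EUR:
    total_capex = capex_per_kw * PLANT_KW
    lifetime_kwh = PLANT_KW * CAPACITY_FACTOR * 8_760 * LIFETIME_YEARS
    capital_per_kwh = total_capex / lifetime_kwh
    print(f"EUR {capex_per_kw}/kW -> total capex EUR {total_capex/1e6:.0f}m, "
          f"undiscounted capital component ~EUR {capital_per_kwh:.3f}/kWh")
```

On those deliberately generous assumptions, the capital component alone works out to only a few euro cents per kWh, which is why the headline comparison with the €0.2872/kWh retail price says little by itself; financing costs, utilization, and overruns are what actually decide whether the numbers work.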
The real challenge for developers is the capital required to move from the prototype to the commercialization stage, where risk is highest and traditional debt is scarce.
"No bank will take on technology risk. Project financing for SMRs requires proven, reliable technology with established off-take agreements to mitigate project cash flow risks," says Ivan Pavlovic, executive director, energy transition at investment bank Natixis
Stellaria CEO Breyton says this is akin to “crossing the desert.” He estimates his company will require €600 million ($690m) to move from prototype to commercial readiness. Breyton notes that this figure is comparable to “a simple highway connection,” yet far more complex to secure because it falls outside regular political and investment cycles.
To bridge the gap, Breyton argues that large public-sector programs are essential. Stellaria is currently
pushing for inclusion in Europe’s IPCEI framework, which unlocks significant state subsidies for strategic technologies. He stresses that nuclear innovation requires “a strong policy that will push the innovation for ten years” - an issue in countries with five-year election cycles, which often stymie governments from making grand long-term investments.
However, to support the sector, governments must step up, not only by co-funding projects but also by underwriting the wider supply chain. “Public-private partnerships are essential to share early-stage costs and risks. In fact, an insurance product to cover cost overruns could make a lot of sense,” argues Equinix’s Anderson.
At present, most projects are seeking public funding; however, on the private side, data centers are emerging as the sector's most significant financial backer. ULC’s Rabenik argues that the capital outlay required to support the growth of SMRs is minimal compared to the vast amounts splashed out on multi-gigawatt data centers. “The data center can pretty much afford this from the outset,” he says. “Our analysis actually shows that the investment in the data center outweighs the investment in the nuclear power plant, even before the semiconductor capex is considered.”
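Rabenik's point can be sanity-checked with equally rough numbers. The sketch below pairs the FOAK capex range quoted earlier with an assumed data center construction cost of $10m-12m per MW of IT load - a commonly cited industry range excluding servers and GPUs, and an assumption rather than a figure from this article:

```python
# Rough comparison only. SMR capex uses the EUR 4,000-6,000/kW FOAK range
# quoted above (the article's own USD conversions); the data center build
# cost of $10m-12m per MW excluding IT hardware is an assumption for
# illustration, not a figure reported in this article.

CAMPUS_MW = 100                        # hypothetical campus size
SMR_CAPEX_USD_PER_KW = (4_600, 6_900)  # approx USD equivalents of the EUR range
DC_CAPEX_USD_PER_MW = (10e6, 12e6)     # assumed shell + power + cooling cost

smr_low, smr_high = (c * CAMPUS_MW * 1_000 for c in SMR_CAPEX_USD_PER_KW)
dc_low, dc_high = (c * CAMPUS_MW for c in DC_CAPEX_USD_PER_MW)

print(f"Nuclear capex for {CAMPUS_MW}MW:     ${smr_low/1e9:.2f}bn - ${smr_high/1e9:.2f}bn")
print(f"Data center capex for {CAMPUS_MW}MW: ${dc_low/1e9:.2f}bn - ${dc_high/1e9:.2f}bn")
```

On those assumptions, the data center shell alone costs roughly twice the reactor before a single GPU is bought - directionally consistent with Rabenik's argument, though the real answer depends entirely on site-specific figures.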
Anderson contends that long-term PPAs from data centers are already reshaping the financing landscape for SMRs. “These contracts function almost like an annuity, allowing technology developers to secure debt or bank financing to fund their projects.”
However, the economics of nuclear power mean bigger is better, and SMRs are unlikely to see the benefits of miniaturization experienced by other technologies. Therefore, leveraging modular factory-built design will be key to achieving the cost predictability that investors and operators need.
Radiant’s Starrett likens it to “buying a catalogue of cabinets rather than renovating a bespoke kitchen”. By producing each reactor identically, he says SMRs avoid the overruns that have plagued larger nuclear projects.
As a result, optimists contend that, unlike conventional nuclear, where debt can account for half of the energy cost, taking advantage of SMRs' modularity could reduce on-site construction risks, streamline operations, and significantly
shorten project timelines, making nuclear power much more accessible.
Fueling up
Once the reactors are approved, financed, and built, developers will need to find some fuel. Availability and production of fuel are, by all accounts, the biggest bottleneck for SMRs' successful commercialization.
The reactors will require a whole lot of uranium, but, at present, around three-quarters of global stocks are sourced from three countries: Kazakhstan, Canada, and Australia. At the enrichment stage, the bottleneck tightens even further, with Russia still dominating capacity.
“In the early 1990s, the US was the world’s biggest exporter of nuclear fuel. Then Russia flooded the market with cheap enriched uranium - less than half the price - and our entire industry was wiped out,” says Christo Liebenberg, CTO at Lis Technologies, a firm that is taking a novel approach to uranium enrichment.
Since 2022, the US and EU have instituted a ban on importing enriched uranium from Russia, with a waiver process currently in place. As a result, there is now an inherent requirement for a localized nuclear fuel supply chain to power future SMRs.
To address this bottleneck, the US government has committed about $3.4 billion to uranium production. Several European countries are expanding production too, with France recently announcing a more than 30 percent capacity increase at its Georges Besse 2 uranium enrichment facility.
Smith says the “government can act as the offtaker” in the US market. He explains: “We’ll buy the enriched uranium and ensure it reaches the reactor vendors who need it. Industry just has to build the facilities.”
“In the end, we only need maybe half a dozen successful reactor designs globally. That's it,"
>>Tim Gregory, nuclear scientist

While efforts to reshore uranium enrichment are making headway, for the vast majority of advanced SMRs, the standard low-enriched uranium used in most Gen III designs won’t suffice. Instead, they require HALEU (High-Assay Low-Enriched Uranium), commonly enriched to between 15 and 20 percent, compared with around five percent for conventional reactor fuel. This higher energy density allows reactors to be smaller, simpler, and refueled less often - a significant advantage for data center developers, whose operations benefit from long fuel cycles and minimal downtime.
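To see why the higher assay matters, a quick fissile-content calculation helps. The sketch below simply compares the mass of uranium-235 in a conventional roughly five percent-enriched fuel load with HALEU at 19.75 percent, the upper bound commonly targeted by advanced designs; the one-tonne core load is a purely illustrative figure.

```python
# Illustrative arithmetic only: enrichment is the weight fraction of U-235
# in the uranium. The 1,000kg core load is a made-up example size; real
# core masses vary widely between designs.

CONVENTIONAL_ENRICHMENT = 0.05   # ~5% U-235, typical of standard LEU
HALEU_ENRICHMENT = 0.1975        # 19.75% U-235, upper end of HALEU
CORE_LOAD_KG = 1_000             # hypothetical core load of uranium

leu_fissile = CORE_LOAD_KG * CONVENTIONAL_ENRICHMENT
haleu_fissile = CORE_LOAD_KG * HALEU_ENRICHMENT

print(f"Standard LEU: {leu_fissile:.0f} kg of U-235 per {CORE_LOAD_KG} kg core")
print(f"HALEU:        {haleu_fissile:.0f} kg of U-235 per {CORE_LOAD_KG} kg core")
print(f"Ratio: ~{haleu_fissile / leu_fissile:.1f}x more fissile material per kilogram")
```

Roughly four times the fissile material per kilogram is what lets designers shrink the core and stretch refueling intervals, though the trade-off is the tighter fuel supply described below.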
But HALEU has one big problem: it hasn't yet been produced at scale. At present, there is no domestic commercial large-scale supplier in the US, with the DOE supplying limited amounts by downblending its highly enriched stockpile.
As a result, the cost to produce the fuel, even in limited test quantities, has skyrocketed from about $10,000/kg to $30,000/kg in the last five years, blowing apart many SMR business models that relied on cost reductions to ensure their products were affordable.

Another concern is the regulatory and security complexities that mass production of the fuel would entail, since enriched uranium above five percent is more tightly controlled internationally. Consequently, there are concerns that with advanced nuclear development could come clandestine attempts to enrich uranium to weapons-grade levels, potentially consigning fabrication of the fuel to nuclear-powered states.

The final concern is the current lack of clear market demand, which means private fuel cycle companies are hesitant to invest, creating a ‘chicken and egg’ problem that delays the deployment of advanced reactors and SMRs alike.

Despite this, there has been some progress in the supply chain, with companies pioneering laser enrichment technologies promising significant cost reductions. One of these is Lis Technologies, which, instead of relying on massive centrifuge cascades, uses a continuous-wave laser, which it claims is much more efficient than centrifugal methods.

“We can enrich fuel more efficiently, with fewer stages and a smaller footprint than traditional centrifuges,” claims Liebenberg. “That means producing HALEU at a fraction of today’s cost - potentially $10,000 to $14,000 per kilogram instead of $30,000.”

Though this sounds encouraging, SMR developers remain grounded in their outlook. “We don’t promise unrealistic roadmaps,” says Stellaria’s Breyton. “2027 SMRs are a lie - the fuel isn’t ready.”

This has led some to contend that, while fuel constraints are a bottleneck, they are a future problem, not a present one. As Kevin Kong, CEO of Everstar, puts it, fuel scarcity becomes a real issue only once reactors are being produced and deployed at scale. “Fuel is a problem when you have reactors,” he stresses. A milestone that could still be more than ten years away.

An unlimited supply
For a sector that depends so heavily on a constrained fuel supply chain, ensuring a steady and reliable flow of fuel is critical. Fortunately, nuclear waste offers a solution. The US has about 90,000 tons of stored spent nuclear fuel, while Europe follows with roughly 60,000 tons. This stockpile is growing by around 1,800 to 2,000 tons each year.

While some countries, like Finland, have committed to safely storing waste, others have seen it as an opportunity to ease pressure at the front end of the fuel cycle. SMR developers are at the forefront of this, with several designs capable of running on repurposed fuel.

In the US, Oklo is the most aggressive mover, announcing a $1.68bn recycling facility in Oak Ridge, Tennessee, to supply its Powerhouse reactor, which it says can run on a blend of downblended enriched uranium and recycled material.

In France, Stellaria takes the concept further. The company says its molten-salt fast reactor can run directly on spent nuclear fuel, effectively turning stranded waste into a high-value energy resource. To do so, the company is developing a new class of nuclear technology with its “Breed and Burn” reactor. Unlike its counterparts, this breed-and-burn technology does not require refueling and is designed to operate autonomously for more than four decades upon delivery.

CEO Nicolas Breyton describes it as a system where the reactor “isogenerates its own fissile energy during the entire operational lifetime of the data center,” breeding and burning its own fuel in a circular loop. In practical terms, Breyton says, this means “our customers won’t have to refuel their data center” and can “onboard the full amount of energy from the very beginning, securing supply and obtaining predictable energy prices.”

Unlike more conventional reactors that use solid fuel necessitating periodic refuelling, Stellaria’s reactor utilizes liquid fuel and fast neutrons, allowing it to “reutilize and split atoms that are not fissionable today with existing reactors,” says Breyton. This, he contends, unlocks the use of the energy still available in spent nuclear fuel, which is currently stored in pools, as well as massive amounts of natural uranium-238 already mined, which has the potential to free “more than 5,000 years of energy, just in Europe,” he claims.
Five years away from being five years away
Talk to anyone in the SMR sector, and it’s easy to get carried away by the technology's immense potential. But the persistent question that remains is when, or if, these reactors will become a commercial reality. The bottlenecks have led to divisions across the sector over realistic timelines, with some extremely bullish and others more pragmatic.
At the ambitious end sits Oklo. The company is targeting first power before the end of the decade, having secured a fuel allocation and a permitted site, and having already commenced construction of its test reactor. Unlike most competitors, it positions its first unit as fully commercial rather than a demonstration. Oklo’s aggressive timeline has faced criticism, with a recent Bloomberg report contending that its high valuation has been based on “hype” rather than fundamentals, especially given its initial NRC rejection back in 2022, with a former senior agency official reportedly calling Oklo "probably the worst applicant the NRC has ever had."
Radiant, too, has adopted an aggressive deployment timeline, planning to complete its first full-scale deployment in the US by 2026. Following this, it plans to ramp up factory manufacturing and make deliveries to customers with NRC-
authorized reactors by late 2027 or early 2028, claims Starrett.
For others, including the majority of the emerging Generation IV developers, the mid-2030s are seen as the most realistic horizon. Stellaria is among the more open and methodical of this group, laying out a clear roadmap that includes a safety demonstrator by 2030, complete system validation by 2032, and first deployments at data centers and industrial sites in 2035.
This aligns with the timelines of other Gen IV players such as Kairos, which, under agreements with Google and the Tennessee Valley Authority, also points to first commercial deployments around 2035.
National strategies reflect this uneven picture. The first Rolls-Royce units are slated for deployment in Wales, with construction expected to start next year, meaning ULC’s Dutch deployments will likely only begin work around 2035 at the very earliest.
For Equinix, pragmatism is the name of the game, with Anderson frank about the uncertainties that still surround the sector, accepting that not all reactor concepts will reach commercialization. As a result, Equinix only expects the first successful units from current partnerships to come online in the mid-2030s. The long wait, Anderson suggests, has less to do with the technology itself and more with all the moving parts around it.
“We want to create as many opportunities for success and collaboration as possible,” he says. “We don’t have a crystal ball, so we need to be pragmatic - not overextend ourselves, but also recognize that some projects will fail. That’s just part of innovation.”
What seems increasingly clear is
that consolidation across the sector is inevitable. With hundreds of designs currently in play, it's evident that only a handful are likely to emerge as commercially viable once real-world data separates engineering promise from economic reality.
“In the end, we only need maybe half a dozen successful reactor designs globally. That's it,” affirms Tim Gregory. This has led some in the industry to argue that a slower, steadier pace may ultimately serve SMRs better, as a mad rush to first criticality risks creating bottlenecks. For a true SMR-powered future to come to fruition, the products themselves will have to be near-perfect to assuage the concerns over safety, financial viability, and regulation that currently hang over the sector.
In addition, while the data center market has shown significant backing for the sector, it is unlikely to be the first beneficiary of its power output. The majority of reactors are likely to be operated by a third party - either the utility or the SMR developer itself - which means a longer timescale and a larger investment than for a data center. As a result, the provider is unlikely to forgo a connection to the grid, which ensures continuous demand, especially given the growing risk of a market correction within the AI space that could see many planned data center projects never come to market.
Despite the speculation and bottlenecks, there is room for hope, argues Gregory. “Will SMRs be commercialized? Absolutely yes. The only question is which designs,” he says. The bigger question is how long we will have to wait to see this nuclear-powered future become a reality.
Colt romances hyperscalers in Paris
DCD visits Colt DCS in France to talk about its ambitions in the country
Paris has long been one of the major European data center markets – the P in the region’s FLAPD crown. The city of love’s affair with data centers goes back to the 1990s and the opening of the first Telehouse facilities in a former department store.
And as many markets have struggled and slowed – Dublin and Amsterdam both have moratoriums and restrictions – developers are putting their collective foot on the gas in the major hubs still open for business.
In France, dozens of operators, large and small, operate in and around Paris. Data Center Map lists some 119 facilities
from 37 operators across the region, ranging from small Edge sites to major AI-centered hyperscale projects.
Already a well-established market, the capital city and the surrounding Ile de France region as a whole have been on a growth spree amid increased demand for data center capacity to serve AI, with France’s abundant nuclear power proving a powerful lure.
Colt Data Centre Services (Colt DCS), present in the land of the Coq Gaulois (Gallic Rooster, the national symbol of France) for some 25 years, is growing its footprint in France to cater to hyperscalers.
Dan Swinhoe Managing Editor
Colt confident in the Coq Gaulois
Originally known as City Of London Telecom and founded in 1992 with backing from Fidelity Investments, Colt Group has been fully owned by Fidelity since 2015.
The company has 13 operational data centers with an additional 19 in development across cities in the UK, Europe, and APAC. In Europe, the firm is mostly centered on the traditional FLAPD hubs, with Rotterdam and Berlin the exceptions.
In the age of the hyperscale customer, however, the veteran
company has been evolving to offer larger single-tenant facilities and trimming its portfolio of retail colocation properties, with France a good example of its strategy.
In November 2021, Colt sold a portfolio of 12 non-hyperscale facilities across Europe to AtlasEdge, the new joint venture between Liberty Global & DigitalBridge. The firm subsequently acquired 10 land parcels – including plots in London, Frankfurt, Paris, and Japan – that would see the capacity of its portfolio increase by more than 500MW.
One of those divested sites was its Paris North facility, leaving Paris South West (aka Paris1/PAR1) as its only data center in France.
“We've always followed our customers,” Jackson Lee, Colt DCS VP of corporate development, tells DCD. “Those sites we sold were small, retail sizes with a different customer base. We pivoted when we saw that there was a high degree of transition to the cloud. Our strategy is a cloud-first one, first and foremost.”
Located in the suburb of Les Ulis, the company’s one live campus in France offers 24MW. The site launched in 2001 in a former cold storage warehouse, with a second purpose-built building added for a hyperscale customer in 2023.
Having launched with just 600kW of colocation space, today the data center hosts 14 halls with space totaling 23.6MW across 9,000 sqm (96,875 sq ft). The air-cooled site is a mixture of chilled water, adiabatic, and free cooling.
Growth happened slowly, then all at once. After the 2001 launch, the second and third data halls, adding another 1,000 sqm (10,763 sq ft) and 1MW of retail colocation capacity, didn’t come online until 2013. The entire site was still only 4.4MW by 2018, by which point the first two hyperscale-focused data halls opened for business. Today, some 7,000 sqm (75,347 sq ft) and 21.2MW of the site is dedicated to hyperscale customers. Paris South West is an interesting mix of legacy and new-build hyperscale. But the rest of Colt’s footprint in France is set to be entirely new-build and largely geared towards hyperscalers.
Colt rebuilds in France
Colt broke ground on Colt Paris 2 in May 2025. The first of three data centers planned for a 12.5-acre site southwest of Paris in Villebon-sur-Yvette, the facility will offer 40MW at full build-out, with densities of 100kW per rack.
The move is part of a €2.3 billion ($2.58bn) investment in French infrastructure, which will see five data centers constructed by 2031, bringing Colt’s IT capacity in France to 170MW. At the groundbreaking event attended by DCD, the company noted that the site has been in planning since 2021.
Originally set to use air cooling, the site is set to offer liquid cooling at the behest of the unnamed anchor customer, leading to a major redesign. Colt declined to name Paris 2’s hyperscale customer, but DCD understands it is Microsoft.

“Hyperscalers want to build capacity that can serve traditional commercial cloud and AI workloads,”
>>Quy Nguyen, Colt DCS
Indeed, the new campus, at 18 Av. du Québec in Villebon-sur-Yvette, is based at a former Microsoft office. The cloud company had long exited the site, with another firm taking over for a time, but the so-called Papillon Building (aka Butterfly Building) will be demolished should Colt decide to spread its wings and develop the campus further.
Offering both hyperscale and colocation space, the site is set to host PAR2, PAR3, and PAR4, with PAR2 due live in 2028, and the others coming online in 2029. The site will be a mix of hyperscale and colocation buildings.
It will feature 1,975 sqm (21,258 sq ft) of rooftop solar, with waste heat being used to warm offices, as well as potentially powering an upcoming district heating network.
“I think there are some good things in Paris,” says Hedi Ollivier, director of development for the EMEA region at Colt DCS.
“You have got a strong supply in terms of energy thanks to the nuclear power, and it's low carbon. There's a lot of capacity left available, and I don't think we will suffer shortages like we've seen in Ireland or the Netherlands.”

“Our strategy is a cloud-first strategy, first and foremost,”
>>Jackson Lee, Colt DCS
Ollivier contends that the Paris market “is not as frightening as it seems,” despite worries voiced by many over France’s excessive bureaucracy. “You need to understand the administrative process and follow it properly,” he says.
Two other upcoming data centers, PAR5 and PAR6, at another new campus nearby in Les Ulis, are set to offer 45MW and 14.4MW from 2030 and 2031, respectively. This site will also have a mix of hyperscale and colocation buildings.
Quy Nguyen, Colt DCS chief sales officer, tells DCD that for now the hyperscalers are still being “cautious” and want flexibility from their data center providers, at least outside the US. “They want to build capacity that can serve traditional commercial cloud, but they also want to be able to host AI workloads, whether it's inference or different types of training models,” he says. “This hybrid cooling approach seems to be what people are doing right now, at least internationally.”
Colo companies prepare for AI
DC Byte’s research from Q1 2025 shows that there is currently data center capacity of 283MW under construction in France, with 1.8GW of early-stage projects. While the likes of Digital Realty, Equinix, Data4, CyrusOne, and Colt have been present in Paris for years serving hyperscalers, demand is growing amid the ongoing AI boom.
AWS, Oracle, Microsoft, and Google are all present around the French capital. All the large US hyperscalers are looking to grow in the region. At the same time, French and European cloud players are growing their footprints, while a number of US AI and AI cloud companies - the so-called neoclouds - are also targeting Europe, with the likes of TogetherAI, Fluidstack, and Nebius looking at deployments in France, as well as local players such as Scaleway and OVHcloud.

“There's a lot of opportunity in France, and I think there's still capacity for growth,”
>>Hedi Ollivier, Colt DCS
Colt’s Lee says the company is “entertaining” those hosting the neoclouds, but is being “cautious” about it. However, the company is confident that the risk of an AI or AI cloud firm they do lease to going bust would largely be negated by the location and flexibility of its facilities.
“Because we build our data centers to house both commercial cloud and AI, even if one of those customers does go bankrupt, we feel very confident that if they left the data center, we could bring in another customer,” Lee says. “We're not worried about that type of obsolescence, because we're building in locations that can serve both purposes.”
Nguyen says Colt is “agnostic” on whether US players or domestic operators represent the best opportunity for sovereign cloud or hosting capacity, but for now, it seems the US companies are still leading a lot of demand.
“As long as we pick sites that have good power,” he says, “whoever is going to be the nominated sovereign cloud hosting company, we'll work with them.”
Colt’s Lee acknowledges there’s “a lot of talk” in Europe about developing a homegrown hyperscaler cloud, but “we just haven't seen it just yet.
“It's about having the right ingredients for being flexible,” he adds. “But most of our customers right now are US-based.”
On whether Colt will change its
geographical approach to developing data centers and start chasing large-scale capacity outside the major metros into rural regions, Nguyen says the company’s focus is still to build in central locations where the availability zones are.
“The commercial cloud really hasn't gone away, and we think it's going to colocate itself with AI workloads, at least for now,” he says.
“We absolutely would look at some of those other markets,” adds Lee. “Our customers are constantly asking if this would be a target for us. Our priorities are obviously the Tier One markets, but Tier Two is certainly on our radar.”
While the industry has seen some major acquisitions in recent years, most of the large-scale data center purchases have been by investment firms, rather than tuck-in buys from the big players. Lee says Colt isn’t very interested in acquisitions unless it’s “very strategic.”
While the firm might consider an acquisition if it were to move into a new market, Nguyen says the company is more focused on “joint ventures where we think we need additional capability to accelerate” than on buying companies. Colt has signed JV deals with Mitsui in Japan and RMZ in India, and bringing outside investors into major hyperscale ventures is a common playbook amongst the likes of Equinix and Digital Realty.
Whether it's for traditional cloud or the new wave of AI-centric GPU providers, the data center industry's - and Colt’s - long love affair with Paris is yet to lose that romantic spark.
A sticky business
Csquare CEO Spencer Mullee on coming out of retirement to lead a new colo firm
Matthew Gooding Senior Editor

A few years ago, Csquare CEO Spencer Mullee thought he was heading for a long and happy retirement, but then his nearest and dearest sent him back to work.
In 2019, after a globe-trotting career as a data center executive, Mullee had sold his APAC-focused firm, DCI Data Centers, to Brookfield, and was ready to leave the Southern Hemisphere - and the world of digital infrastructure - behind and return to a quiet life in his native Florida.
But just when he thought he was out, they pulled him back in.
“I stayed in touch with Udhay Mathialagan, CEO of Brookfield’s global data center group, and he asked me to come back a couple of times,” he says, before adding, jokingly: “I initially said no, and that I was enjoying being retired, but then my family said, ‘no, you’re not enjoying being retired, go back to work and stop giving us projects’.

“So Udhay convinced me to return, and I’m very grateful that he did, because I’m having a great time.”

“We have 2,500 customers and 2,499 of them are interested in AI," >>Spencer Mullee
Today, Mullee has a big project of his own heading up Csquare, a Dallas-based company formerly known as Centersquare, which is part of Brookfield and provides colo services to enterprise customers. It has ambitious plans to expand across North America, having recently purchased 10 data centers in the US and Canada for a cool $1 billion.
Two become one
Centersquare was formed last year when Brookfield merged the assets of one of its data center businesses, Evoque, with data centers belonging to Cyxtera, a firm acquired by Brookfield in 2023 after it filed for Chapter 11 bankruptcy. The $775 million deal saw Brookfield buy seven data centers that Cyxtera leased in the US, and exit six data centers (three in the US, and three abroad) rented by the ailing firm.
Evoque, meanwhile, was formed by Brookfield in 2019 when the investment firm snapped up AT&T’s data center and colocation assets. Mullee was brought on board to head up the firm in 2023, then chosen to lead the combined organization after the merger. It changed its name to Csquare in December 2025.
“Cyxtera was a very attractive asset that we’d looked at previously and continued to be interested in,” he says. “The bankruptcy afforded us the opportunity to cherry-pick the most profitable centers and convert what was a completely leased portfolio. Today we own 70 percent of the data centers, whereas with Cyxtera they owned none of it.”
Reflecting on the integration of the companies, Mullee says: “We had to have a ‘fix it’ mentality, and it’s a process I’m quite comfortable with because I’ve done it before when I was at DCI, but it’s not something you can do alone, and I have an incredible C-Suite of great data center professionals working with me. They have done a phenomenal job in taking what was supposed to be a two-year integration and getting it done in 11 months, significantly under our acquisition budget.”
“Even through Cyxtera’s bankruptcy, their customer churn was very low,"
>>Spencer Mullee
Csquare is now a team of 600 people, and Mullee says: “Our hope was we’d end up with a third of the people coming from Evoque, a third from Cyxtera, and a third of the people who are fresh faces with new ideas, and it’s kind of worked out that way.
“We had very little staff turnover on the operations side, there’s been a bit of change in the administration team, but maybe less than we expected. We’re working really hard to keep people happy - I think the legacy companies weren’t always the best at communication, so we’re trying to overdo that now.”
Sticking power
While the name and scale of Csquare may have changed with the merger, Mullee says the profile of its clients has been pretty steady.
“Even through Cyxtera’s bankruptcy, their customer churn was very low,” he says.
“Data centers are a very sticky business. We still do significant enterprise business in that 1-5MW range - that hasn’t changed - but now there are the AI guys, and we do some of that business too.”
Most of Csquare’s data centers are liquid cooling-ready, Mullee says, and can offer rack densities of up to 128kW, meaning they are well set to cater for AI racks, though he claims his firm will not be chasing the hyperscale dollars that are flowing into the industry. “The nature of what was built by the telecoms company is that they were over-designed, over-specced, and built to last for many years,” he says.
“That’s given us the ability to do AI deals for the long-term. Like anyone, we like investment-grade customers, and AI still presents some challenges in that respect. What we tend to focus on with AI is where a bank, healthcare organization, or some other type of large enterprise is doing an AI lab and needs 1-2MW.”
Mullee says Csquare has “half a dozen” of these installations on the go, without naming the clients involved, and he is banking on some more stickiness as the AI market evolves. “These enterprises are doing their training where they will eventually end up doing their inference, because our data centers are all in urban locations close to their end users. It will make sense to do inference there,” he says.
The CEO remains focused on more traditional workloads, though, with much of the AI hype yet to translate into investment. “We have 2,500 customers and 2,499 of them are interested in AI - I was even thinking of changing my name to AI at one point,” he says. “But I’d say while there’s great interest, there’s not a significant portion of them moving in that direction yet. The neocloud business is kind of like the Wild West right now, and I think some are waiting to see what happens. It’s still really in its infancy, so some people aren’t sure how they’ll use it yet.”
Expansion plans
In October, Csquare announced it had purchased ten data centers across North
America, paying $1 billion from its own cash reserves. These included two data centers in Boston and Minneapolis that the company had been operating under long-term lease agreements, along with eight additional colocation facilities in Dallas, Tulsa, Nashville, Raleigh, Toronto, and Montreal.
Mullee says these sites are aligned with the company’s vision to grow near its existing customers. “It’s a proximity and readiness strategy,” he says. “We’re adding capacity in metros where our customers are already growing, and it’s funded with our own cash. We’re not really looking for headlines, we just want to grow in a smart and steady way.”
Having been involved in the digital infrastructure space for decades, Mullee says he did not foresee the current boom in interest, but expects it to continue for some time to come. “A lot of data center people like to talk about their crystal balls and what they saw coming, but I certainly didn’t foresee a lot of the things that have happened,” he says. “Data centers have become sexy, they’re making the headlines on CNBC every day, and I think that is here to stay. When you look at the rapid growth of data and data traffic, it’s hard to imagine what that will look like in another five years.”
And Mullee certainly intends to be on the front line to witness the next evolution, with no plans to return to retirement just yet. “I’m having the time of my life,” he says. “I love what I do, I love the people I work with, and we’re very busy - busy is always fun.”
The Networking Supplement
2G or not 2G:
> Operators switching off old technologies
AI enters the network
Into the Ether:
> Will Ethernet get the better of InfiniBand?
Finding an Edge:
> Can broadcast towers host other infrastructure?
Contents
36. Is the telecoms industry ready to wave 2G off into the sunset?
While most use 4G and 5G mobile services, it’s IoT devices that have prolonged 2G’s 30-year-plus stay
39. Why Ethernet is winning the battle for AI supercomputing
Battle-tested on the world's largest systems, HPE's Slingshot shows how 'Ethernet plus' beats proprietary interconnects
44. Broadcast’s infra Edge play
Why broadcast companies see the value in deploying data centers at TV and radio tower sites
46. AI is driving a networking evolution
Providers acknowledge the challenge presented by running complex systems
It doesn’t matter what sector is being discussed, be it telecoms, data centers, compute, etc., the conversations all seem to pivot around Artificial Intelligence (AI).
Indeed, these industries are pushing forward, looking ahead to build networks that are purpose-built to handle the demands of AI.
But before the future can be fully tapped into, there’s still the question of what to do with the past. In the next few years, 2G will become a thing of the past as carriers look to beef up their 4G and 5G networks instead, to better accommodate technologies such as AI.
French giant Orange plans to switch off its 2G network as soon as next year, but is the industry ready for this? Not everyone in France agrees that it is - in particular, the country’s elevator federation, which has concerns that some sectors will be left behind.
The long-running rivalry between Ethernet and InfiniBand interconnects is shifting in Ethernet's favor, particularly in high-performance AI computing.
While InfiniBand has traditionally dominated due to its low-latency performance and tight Nvidia integration, Ethernet-based solutions like HPE's Slingshot are proving their worth at the highest levels, powering six of the top 10 supercomputers on the latest TOP500 list, including the top three.
Hear from HPE’s Mike Vildibill on how industry initiatives like the Ultra Ethernet Consortium
are now working to standardize “Ethernet plus” capabilities to further position the protocol for continued growth in AI infrastructure.
Meanwhile, data centers popping up at the base of cell towers isn’t exactly a new play for the telecoms industry, but it’s not just telcos that are looking to maximize this opportunity.
Broadcast companies are also doing so, utilizing their infrastructure to deploy Edge data centers in an effort to diversify their product offerings for more monetary value. Some companies, such as Digita, have been spun out of historical broadcasters to explore such business use cases.
“Since these sites are used to broadcast video and radio, they are very well connected,” Riku Helander, senior vice president of telecom, Digita, tells DCD.
Beyond the towers, is there a rethink of the infrastructure from the data center out to the campus Edge? SDx Central’s Dan Meyer explores how AI demand is putting significant stress on the network connectivity infrastructure, and why the Edge plays a crucial role in this.
The demand for AI workloads, through the use of tools such as chatbots and Agentic AI, has providers working hard to keep up, investing heavily in doing so as they look to navigate the challenges around traffic patterns. But how much investment will it take to drive AI’s network evolution?
Is the telecoms industry ready to wave 2G off into the sunset?
While most use 4G and 5G mobile services, it’s IoT devices that have prolonged 2G’s 30-year-plus stay
Paul Lipscombe Telecoms Editor
As 3G mobile networks continue to gradually be phased out by network operators, telcos are freeing up vital resources in the form of spectrum to support their respective 4G and 5G networks.
But before 3G came 2G. It might not seem like it now, but 2G was an important transition for the mobile industry, as it marked the beginning of the digital era, moving away from the analog 1G networks that came before it.
2G heralded the beginning of the data era, and introduced SMS (Short Message Service) - something that is probably an afterthought for most people now, given the rise of instant messaging services such as WhatsApp and iMessage.
Finnish telco Radiolinja (now part of
Elisa Oyj) won the 2G race, launching the first commercial service in 1991.
While it was revolutionary at the time, 2G is now outdated and largely underused, some 30-plus years later.
An Opensignal report last year revealed that Luxembourg has the highest time spent on 2G networks at 4.5 percent, showing just how little the network is used. In the UK, the number is as low as 1.7 percent.
Time to let go of 2G
Operators are now firmly focused on the here-and-now technologies, such as 4G and 5G, given that these technologies are significantly better suited to meeting the data demands of customers.
Luciana Camargos, head of spectrum
at the GSMA, told DCD that it makes no sense for telcos to operate 2G alongside 3G, 4G, and 5G.
“The operators that have implemented 5G have four technologies on the go at the same time. That's an extremely costly way of keeping a network and extremely inefficient usage of spectrum, so the ability to shut down networks is very important,” says Camargos. “You have to upgrade. You can't hold really old services for that long.”
Telcos across the world understand this, she notes. Several have outlined switch-off dates for their respective 2G networks, usually with plenty of preparation time to migrate away the few who still use 2G.
“That discussion happens differently in different parts of the world, with regulators talking to operators to be able to enable
that. It needs to be something that is a conversation that happens on both parties, so the customers don't suffer. You have to plan for it,” adds Camargos.
Why has 3G gone first?
Despite launching a decade or so later than 2G, most telcos have focused their attention on phasing out 3G networks first.
Most telcos have either switched off their 3G network altogether or have plans to do so in the next year. Notably, in the US, all of the ‘Big 3’ carriers have bid farewell to the network, while in the UK, EE and Vodafone have retired 3G. Virgin Media O2 aims to do so next year. Three, recently acquired by Vodafone as part of a JV, also expects to finalize its 3G shutdown soon.
But while 3G focused more on pushing the boundaries of mobile data, enabling use cases such as Internet browsing and video streaming, it doesn’t offer anything that 4G or 5G can’t.
It’s because of this that 3G was easier to replace. As for 2G, it’s used for some early smart meters, Internet of Things, and M2M services, many of which still rely on 2G for support.

"You have to upgrade. You can't hold really old services for that long,"
>> Luciana Camargos, GSMA
Although a lot of telcos have switched off 3G, not many operators have done the same with 2G. Japan is a rare exception, having retired its 2G network back in 2012.
DCD covered the 3G shutdown more than three years ago. At that time, Paul Bullock, group chief product officer at Wireless Logic, noted that it was easier to retire 3G from a revenue and direct impact point of view, given the small number of 3G devices.
“3G was easier to do because there's no such thing as a 3G-only device, or almost no such thing. Even when 3G came out in the IoT world, nobody was buying 3G because 2G did the job and it was cheap,” says Bullock, speaking in 2025.
There are millions of devices out there that still rely on these 2G networks, such as telecare systems, security alarms, and emergency alert devices. Such devices will end up obsolete once 2G networks are phased out.
He explains that, for some telcos, it doesn’t make sense to switch off 2G services, as some of these devices can still generate revenues.
“When you have things that have a power supply and last a long time, like a smart meter, it's a much bigger deal, so personally, I think that has a lot to do with why the UK schedule for 2G retirement is around 2033,” he says, explaining that planning ahead of time is crucial to minimizing disruption.
“Nobody has a 2G phone, and so keeping this alive is really only just for IoT. And if these networks didn't have a lot of 2G connections in IoT, they would have shut down 2G a lot faster.”
Only when it becomes more expensive to run the network than the revenue being generated from 2G will telcos look to phase out the technology, he adds.
An MTN Consulting report estimates that telcos operating 2G/3G/4G/5G base stations consume 24 percent more energy by holding on to the 2G and 3G layers within those base stations.
“You have some operators that don't have a massive number of 2G devices on the network, so it's not an economic hit for them to turn it off, and some that do, so these prolong that revenue stream as long as possible,” says Bullock.
“Help, I’m stuck in the lift!”
The impact on IoT networks appears to be the industry's biggest fear around the 2G shutdown, with the consequences largely felt by other industries, not necessarily telecoms.
One notable example is in France, where the telecoms providers in the country have outlined ambitions to phase out the technology as soon as next year.
These plans have been met with some backlash, amid concerns that it’s too soon to pull the plug. One such group is the Fédération des Ascenseurs, the French elevator federation.
The group has warned that many emergency call systems in elevators will stop transmitting, with people potentially being stuck in these lifts without access to call for help. According to the group, the schedule for the switch-off is “too tight.”
“Approximately 230,000 elevators (2G) and nearly 60,000 (3G) require an alarm system update, out of a total of 650,000 elevators in France. If the 2G network is switched off and the telealarm system has not been updated, the connection to the emergency call center will no longer be established,” the group said to DCD in a statement.
In an interview with Le Figaro earlier this year, Alain Meslier, president of the federation, lamented the operators’ plans to end 2G next year, just four years after announcing the plans.
"Four years is very short. It was seven years in all other European countries,” he said. "The elevator company will not be held responsible if someone is trapped without a working emergency alarm system. We are simply the operators.”
As such, elevator operators in France are in a race against time to upgrade 2G systems to 4G, which the federation says is costly.
“Replacing gateways alone generally costs several hundred euros to upgrade from 2G to 4G, requiring around half a day of work,” the group says. “In some cases, a full telealarm replacement may be necessary if the existing system is not compatible with 4G or 5G technologies.”
The group is also concerned about the demands of this migration on its 17,000 to 20,000 elevator maintenance technicians, which it states “already handle substantial workloads.”
Across the Channel in the UK, the Digital Poverty Alliance (DPA) is also
concerned that the shutdown of legacy networks could leave some behind. The group estimates that 19 million UK adults experience one or more forms of digital poverty.
“As the UK moves toward retiring its 2G and 3G networks, we cannot allow progress to come at the cost of connection,” says Elizabeth Anderson, Digital Poverty Alliance CEO.
“The people most at risk are often those least able to upgrade – older adults, those on low incomes, and individuals who depend on telecare for safety. Unless action is taken now, the switch-off could disconnect exactly those who rely on these systems the most.”
What the telcos say
Understandably, the network carriers are looking ahead to the future, with 4G and 5G the main focus.
According to these telcos, the ability to retire legacy networks is a no-brainer, given the limited amount of data that travels over these networks.
Vodafone UK aims to retire 2G by 2030, saying it will do so gradually in phases.
The operator declined to speak with DCD when asked about its planned switch-off, but did state at the time of its announcement that “network operation and maintenance costs will decrease as legacy equipment required for 2G can be recycled, which may also make physical space available for new equipment at Vodafone’s mobile sites.”

In France, Orange is looking to kill off its 2G network even sooner, by the end of 2026. In contrast, Orange’s 3G network won’t go until 2028.

The carrier notes that its switch-off of 2G is necessary to ensure its overall network is “faster, more efficient, and safer.” According to the telco, it will also help reduce energy consumption.
“Modernizing our fixed and mobile networks is central to our strategy. With the planned phase-out of 2G in 2026 and 3G by the end of 2028, Orange will guide all customers to 4G and 5G,” says Bénédicte Javelot, EVP strategic projects & development at Orange.
Orange says it expects to fully shut down its 2G and 3G networks across all of its European markets by 2030.
Don’t forget about us
There’s an acceptance from all quarters that newer technologies are part and parcel of the evolution of mobile networks.
But that doesn’t stop the fear, voiced by the likes of the DPA, that some will be left behind.
“Without targeted support, millions risk being disconnected from essential services, communication, and opportunity,” the DPA notes in its report.
Nothing lasts forever, not even a given generation of mobile networks.

That said, don’t expect to see 4G networks phased out anytime soon - as Bullock jokes, that conversation will take place when he’s retired.
"As the UK moves toward retiring its 2G and 3G networks, we cannot allow progress to come at the cost of connection,"
>> Elizabeth Anderson, Digital Poverty Alliance
Why Ethernet is winning the battle for AI supercomputing
Battle-tested on the world's largest systems, HPE's Slingshot shows how 'Ethernet plus' beats proprietary interconnects
Ben Wodecki Senior Editor, SDxCentral
Twice a year, the Top500 project ranks the most powerful supercomputers in the world. This gauntlet for geeks pushes the boundaries of what a computer can actually do.
It’s easy to look at the upper echelons of the lists to see what brand of GPU is powering some of the most intensive computing workloads ever known to man. But there’s an often overlooked component that’s the real performance powerhouse behind these impressive feats of engineering.
The interconnectivity, which brings the various disparate GPUs together, is the real supercomputing marvel. Able to move workloads from one chip to another at ungodly speeds, all while ensuring the data being transmitted remains intact.
And while the Top500 list pits supercomputers against one another, there’s a second battle that’s been raging on: should clusters leverage Ethernet or InfiniBand?
It's a debate that's run for more than two decades, with each protocol wresting control from the other in constant peaks and troughs of dominance, akin to Formula One drivers jostling for podium places.
DCD sat down with Mike Vildibill, VP and GM for Slingshot high-performance networking at Hewlett Packard Enterprise (HPE), to work out how the interconnectivity landscape is shaping up today, and which protocol may be set to come out on top in the era of AI.
Ethernet versus InfiniBand: What's the difference?
Ethernet, the foundation of modern Internet Protocol (IP)-based networking, is arguably the best defined and most widely understood networking standard in use today.
Connected devices in a wired LAN (local area network) or WAN use Ethernet to communicate with one another. In the world of supercomputing, Ethernet does the same thing, allowing interconnected GPUs to transmit data to one another within a cluster (known as scale up), or entire clusters to communicate with each other from one facility to another (known as scale out).
InfiniBand, its rival interconnection protocol, does the same thing, though unlike Ethernet, it is a largely proprietary offering. While technically an open standard, Nvidia controls the ecosystem, and it is only really found in the GPU giant’s networking solutions. Nvidia acquired InfiniBand leader Mellanox in 2020 to further cement its dominance.

"We try to continue the legacy of what Cray did after being acquired by HPE,"
>> Mike Vildibill
InfiniBand users can therefore make use of all the offerings inside Nvidia’s CUDA software stack, which is deeply integrated with the protocol, giving supercomputer operators access to a mature ecosystem.
Therein lies the difference
Ethernet is open, meaning data center and supercomputer engineers are easily able to interconnect hundreds of GPUs and other hardware from multiple vendors, be it AMD, Intel, or even Nvidia (with its newfound Spectrum-X fabric, which is built on Ethernet in a departure from its norm).
Given its wider availability among vendors, Ethernet is familiar to the vast majority of engineers from across the industry.
InfiniBand, however, has historically offered superior performance for HPC workloads, particularly in ultra-low-latency scenarios. This stems from the native inclusion of Remote Direct Memory Access (RDMA) – with network adapters able to transfer data directly between the memory of different systems, effectively bypassing the CPU entirely and eliminating processing overheads that would otherwise add latency.
Ethernet finds a way
Given its high-level performance, alongside the industry clamor for Nvidia hardware, it’s hardly surprising that InfiniBand went on to dominate the AI network landscape. As recently as late 2023, the protocol held around an 80 percent share of the market.
But where InfiniBand once ruled supreme, slowly but surely, Ethernet has found a way. And to no greater success than in the world of supercomputing, where HPE’s Ethernet-based Slingshot interconnect is among the top contenders.
In the latest list, dated June 2025, six of the top 10 most powerful computers in the world use HPE’s Slingshot, including the top three: El Capitan, Frontier, and Aurora. That dominance extends to the top 30, with a total of 12 systems using Slingshot.

InfiniBand is the interconnect with the largest share on the list overall, with 189 of the Top500 using InfiniBand NDR200. But in terms of performance, Slingshot 11, HPE’s latest and greatest interconnect, held a 48.1 percent performance share, compared to just 28.8 percent for InfiniBand NDR200.
And Ethernet shows no sign of slowing, either, with recent research from Dell’Oro Group projecting Ethernet to dominate the data center-scale fabric space in the coming years, helping to drive nearly $80 billion in data center switch sales over the next half-decade as operators scramble for an open alternative to InfiniBand.
The latest system to take up what Mike Vildibill and the HPE team describe as “Ethernet plus” or “Ethernet with a twist” is Isambard-AI, the fastest supercomputer in the UK. Inaugurated in summer 2025, it shot straight to 11th on the Top500 in June, with Slingshot providing 25.6Tbps of bi-directional bandwidth across 64 ports, each capable of 200Gbps.
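Those headline figures hang together arithmetically – 64 ports at 200Gbps is 12.8Tbps in each direction, or 25.6Tbps counted bi-directionally – as the quick check below shows.

# Sanity check on the quoted Slingshot switch figures.
ports = 64
per_port_gbps = 200                                  # 200G per port, as quoted

one_direction_tbps = ports * per_port_gbps / 1000    # 12.8 Tbps
bidirectional_tbps = 2 * one_direction_tbps          # 25.6 Tbps

print(one_direction_tbps, bidirectional_tbps)        # 12.8 25.6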
The HPE VP explains that the trick to Slingshot’s success is getting the technology to “act like a proprietary interconnect on the inside and Ethernet on the edges.”
“We try to continue the legacy of what
Cray did after being acquired by HPE,” Vildibill says, referring to Cray Research, the supercomputing pioneer that HPE purchased in 2019.
Vildibill continues: “They felt they could implement an interconnect that was Ethernet compatible and compliant at the edges while still doing some of the highly specialized work inside the fabric, or the secret sauce. They pulled it off, and we want to continue that.”
Making strides
With Ethernet projected to dominate then, what lies in store for the future? Cooperation and scale, if industry movements are to be believed.
First up, there’s ESUN, the Open Compute Project’s new networking-focused working group set to explore Ethernet for AI scale-up. HPE joins the likes of AMD, Meta, and Microsoft in a project set to examine Ethernet-based network switches, with a view to potentially building open, standards-based Ethernet switching for AI workloads.
But most significant of all, perhaps, is the Ultra Ethernet Consortium (UEC), which is looking to take Ethernet networking to the next level. Its 1.0 specification brings that coveted RDMA support from InfiniBand to Ethernet, providing low-latency transport for high-throughput environments, while preserving the interoperability that has made Ethernet so successful. And it’s a project HPE is throwing its weight behind.
“The UEC is moving to define what is essentially Ethernet plus, as an open industry standard. The industry wants to build [and] do exactly what we've done with Slingshot,” Vildibill explains. “Not only do we support the industry move, we are a founding member of UEC. We've contributed a ton of intellectual property based on Slingshot, and we very much welcome an industry standardization on something that is closer to Slingshot than Ethernet is today.”
Though the specification is less than a year old, Vildibill tells DCD that as much as 70 percent of the UEC transport specification is derived from Slingshot.
“We’re not competing against UEC, we’re embracing it, because it furthers what we're trying to do as well, which is to take Ethernet everywhere.”
Forged at scale
What sets Slingshot apart isn't just its technical specifications; it's how those capabilities were battle-tested. Unlike most networking products that start small and scale up, HPE's approach was, as Vildibill admits, doing things the hard way.
“Cray developed a new interconnect, and their first deployments that they had designed for, which HPE delivered after the acquisition of Cray, were the world's largest systems, bigger than ever built before,” Vildibill says. “We were finding problems that no one had ever encountered.”
At extreme scale, even the smallest issues become critical, as the VP explains: “When you're running something on 100,000 nodes, and you have a failure rate of one in 10 million, they're going to hit that within about 10 seconds.”
This forced the team to eliminate edge cases and bugs that would be negligible at smaller scales but became showstoppers when amplified across hundreds of thousands of nodes.
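The arithmetic behind Vildibill’s one-in-10-million example needs only one extra assumption – a rate at which each node can trigger the rare event. The sketch below assumes ten fault-exposing operations per node per second, a figure chosen purely to show how quickly a 100,000-node machine exhausts its luck.

# Expected time to first occurrence of a rare event across a large cluster.
# The per-node operation rate is an assumption made for illustration only.
nodes = 100_000
event_probability = 1 / 10_000_000         # one in 10 million per operation
ops_per_node_per_second = 10               # assumed

cluster_ops_per_second = nodes * ops_per_node_per_second        # 1,000,000
expected_seconds_to_event = 1 / (event_probability * cluster_ops_per_second)

print(expected_seconds_to_event)           # 10.0 seconds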
The result of this extreme vetting? World-beating systems that make use of a product refined at the highest end first.
“Their reliability, resiliency, error rate, bug rate, are phenomenally low because these things, if we've got them not happening at scale, then they're very rarely, if ever, happening at small scale,” Vildibill concludes.
The transformative power of telco AI
The story of artificial intelligence, or AI, is one that spans decades and multiple industries. AI enables a myriad of potential applications that will transform the way we live and work. In the telecommunications industry, we are already seeing AI’s impact and a future where AI enables the automation, orchestration, and optimization of networks and services.
Why AI?
The ultimate goal of technological progress is to make life easier and safer. In the communications sector, AI’s predictive and generative powers enable users throughout a network ecosystem to streamline operations, reduce costs, minimize service issues, and improve the delivery of services to end customers.
AI ensures that as the network grows, it can adapt and expand to new consumer needs and business requirements. This will give rise to complex, demanding, and ultra-low-latency applications ranging from smart cities and autonomous driving to Industrial IoT and remote surgery.
VIAVI Solutions brings its long history of leadership in the communications
industry to accelerate its transformation. For 100 years, VIAVI’s suite of tools has operated within the network, improving its performance, resiliency, coverage, and reach. Many of these tools have progressively incorporated AI and machine learning (ML) techniques to elevate the accuracy and efficiency of network functions and take on new tasks.
Digital twin’s predictive power
VIAVI’s AI-powered “digital twin” technology enables new predictive capabilities that help fortify networks. By creating a virtual model of a network in the lab, service providers can more quickly and effectively manage network behaviors and disruptions. Because they have seen it before, they can take their learnings from the lab to the field.
These digital twins play a pivotal role in advancing network design, reliability, and performance. They are an invaluable tool for identifying network issues before they arise, and they enable the timely troubleshooting of operational challenges once the network is deployed and operational.
VIAVI digital twin technology ensures reliable connectivity during mass gatherings. Sporting events, celebrations, and other large-scale gatherings, for example, can easily overwhelm a network – potentially cutting off the communications needed to reunite families in a large crowd, provide medical services, or simply share experiences of the day online. VIAVI AI models help operators understand potential outcomes and equip them to optimize the network and respond to different scenarios.
Using its AI technology, VIAVI can also predict how natural disasters might disrupt communications and emergency services. Network operators can leverage these models to design more resilient networks, respond to issues, and keep lines connected during the disaster, as well as accelerate restoration of critical services afterwards. This ultimately helps mitigate the repercussions of natural disasters on businesses and helps preserve human lives.
VIAVI’s AI-based solutions can also help the transition to Open RAN technology, which is critical to securing the telecommunications supply chain. Rather than a controlled network stack from a
single vendor, Open RAN networks will consist of highly diverse, disaggregated hardware and software components from multiple vendors.
The inherent openness and flexibility introduced by Open RAN also bring complexities – from interoperability and integration challenges to the imperative to match the performance and resiliency of traditional single-vendor-based radio access networks. VIAVI’s digital twin technology emulates in the lab the intricate network traffic patterns and behaviors of complex 5G, O-RAN, and anticipated 6G networks. Network operators can virtually build an O-RAN network alongside a digital representation of their existing network to see how they will work together – all before the first component is added.
VIAVI has taken on certifying, benchmarking, optimizing, and verifying all interoperability use cases and interfaces associated with Open RAN technology. In addition, we develop and verify the RAN Intelligent Controller (RIC), a crucial element responsible for automatically and intelligently allocating resources within the network. The security testing, network automation, orchestration, and
optimization we perform often leverage advanced AI/ML techniques to enhance the efficiency, automation, and security of Open RAN systems.
RAN scenario generators and their critical role
The role of artificial intelligence within telecommunications is evolving quickly, shifting from speeding up discrete automation tasks to intelligent, context-aware decision-making in which it becomes a critical element of network operations. This change is particularly visible in the way that AI is being used in new and emerging 5G and 6G deployments, not least when it comes to MU-MIMO.
To date, networks have deployed AI as an add-on. It is being used to optimize and enhance existing systems and has enabled the dynamic allocation of network slices, the better management of resources, and the improved detection of both potential issues and security threats.
However, new 5G and 6G implementations built around “AI-native” architectures and MU-MIMO will shift AI from the periphery to the very heart of the network. This will enable autonomous operation across immensely complex, heterogeneous Service Management and Orchestration (SMO) Networks built on principles such as Open RAN and managed by programmable platforms such as the RAN Intelligent Controller (RIC).
This shift, however, presents a fundamental challenge: How do you ensure that AI is making the correct decisions for the network, especially when it needs to scale rapidly across heterogeneous, dynamic environments?
Training an AI (and ensuring its long-term viability) requires data. As well as being accurate and reliable, this data must be representative of a real-world, dynamic network rather than an ideal or a snapshot at a particular point in time. Using this data, the AI applications used to run and maintain the network should then be continuously tested and challenged to prevent drift and to ensure readiness for change and unforeseen scenarios.
While AI has massive potential, access to high-quality training data is a major challenge. That’s where the VIAVI AI RAN Scenario Generator can help. Even the smartest AI will struggle if the data is poor or sparse. With high-volume, high-accuracy RAN data, you can train models that are smarter, stronger, and more reliable. VIAVI combines real network data with synthetic emulation, creating realistic, diverse datasets that reflect true network conditions and user behaviors. This means AI models can be trained and validated faster, accelerating development and boosting performance in AI-driven RAN rollouts.
Only AI can truly test and certify other AI systems. VIAVI AI RSG adds intelligent, closed-loop validation, giving actionable insights through advanced analytics and machine learning.
A future shaped by telco AI
In the ever-evolving landscape of telecommunications, AI will play a pivotal role in shaping a more secure, efficient, and resilient network. Like many significant technological advancements that have come before it, AI necessitates a discussion of regulations and guidelines to safeguard its operation and address its direct and indirect consequences. But no two AI technologies are equivalent, and each carries different risks, impacts, and implications. Low-risk, high-value AI systems like those from VIAVI represent a new frontier in enhancing network security, resiliency, and efficiency. The regulatory landscape surrounding AI must find a balance between innovation, security, and public interests.
For more information, please visit www.viavisolutions.com/ai
Broadcast’s infra Edge play
Why broadcast companies see the value in deploying data centers at TV and radio tower sites
Paul Lipscombe
Telecoms Editor
It’s no secret that telcos have been looking for new avenues to better monetize their infrastructure assets. This has led some to either sell some of these assets, such as telecoms towers, or even spin them off into totally independent units. Others are following a similar pathway, with traditional broadcasters rethinking ownership of radio towers.
Seizing the opportunity
For many reasons, it makes sense that broadcasters are utilizing these assets more effectively.
As Riku Helander, senior vice president of telecom, Digita, explains to DCD, there’s instant power at these sites that makes them ideal for other opportunities.
“Equipment shelters and buildings on a broadcast site are significantly larger by area in comparison to telecom sites. Broadcast tower sites also have high-capacity grid connections, which telco sites lack,” says Helander, highlighting ready access to the grid as a key benefit.
Founded in 1999 as a spin-off from national broadcaster Yle, Digita provides digital infrastructure and services to Finland, where it claims to be the largest independent owner of telecommunications masts.
The company provides services for media companies, consumers, mobile operators, industry, infrastructure companies, and property owners.
Broadcast companies are now using these sites to deploy Edge data centers.
“Since these sites are used to broadcast video and radio, they are very well connected,” explains Helander.
“So with regards to data centers, since we're hosting part of the infrastructure for video and radio broadcasting, we thought it would be a natural extension to start looking into also providing this service to other parties as well,” he adds.
At present, DigitalBridge-backed Digita has only deployed two Edge data centers at its towers. The company operates 38 broadcasting towers across the country.
According to Helander, the country’s migration from analog to digital broadcasting has created an opportunity to deploy Edge colocation data centers at its broadcast towers.
“Television has migrated to digital broadcasting from analog, and here in Finland, we have now also transformed, both on the terrestrial and cable TV as well, to HD-only quality.”
Due to this, the power requirements of the broadcasting infrastructure have reduced significantly.
“In essence, the main broadcasting stations have been designed for a significantly higher power consumption than they are experiencing today. So if you look at the situation, you have excellent connectivity, availability of power with relative ease, and you also have the resilience of having the power backup systems available for the site. From that viewpoint, they are good locations for data center activity as well,” he says.
Monetizing these existing connections
Others have similar plans, including
České Radiokomunikace (CRA) in Czechia, Telecentras in Lithuania, HKBN in Hong Kong, and Rai in Italy. In the Netherlands, Cellnex has also deployed data centers at TV and radio tower sites.
Rai, which has five Edge data centers across the country, has confirmed to DCD that it identified the tower data center play as a way to differentiate its strategy.
“The idea was to go to the market with a new offering of Edge data centers and colocation facilities. And basically, this is what we are doing. The idea was to differentiate our tower broadcasting business with data centers,” a spokesperson told DCD last year.
The company added that the sites were connected via Rai Way’s fiber network, which spans more than 6,000km.
“Because we are building data centers on land that is already owned by Rai Way, they are already connected by our own fiber ring.”
Towering above the Baltics
The Latvia State Radio and Television Center (LVRTC) is another company that has deployed data centers at TV and radio tower sites.
Founded in 1924, state-owned LVRTC is Latvia’s main operator of the terrestrial radio and television broadcasting network.
However, the company also provides infrastructure and services for Internet, telecommunications, and data centers. It was in the last decade that LVRTC started using its infrastructure to deploy data centers at some of its tower sites.
“When I first joined the company in 2014, one of the first main tasks was to
write the business plan on how we can commercialize square meters in our towers and infrastructure units,” says Janis Delvins, head of data center business direction, LVRTC.
Like Digita, Delvins tells DCD it was a “natural fit” for the LVRTC to enter the data center game.
“We need to think about what to do with these available square meters, because we have connections between towers and between Internet service providers. It felt natural to go into the data center business.”
One of Riga’s most famous attractions, the Riga Radio and TV Tower on Zaķusala Island, is actually owned by LVRTC, and is the largest tower of its kind in the European Union. Despite being a key part of the Baltic city’s landscape, the tower also serves a bigger purpose.
“At the moment, this is the biggest colocation point in the Baltics,” Delvins points out. “90 percent of all Internet services produced in this region go through this building. At the ground level, there is [a] high-availability data center there as well.”
LVRTC claims to offer 99.75 percent service availability at the Tier II data center site.
Delvins adds that LVRTC wants to deploy more data centers in Latvia as it aims to “provide sovereign secure cloud for AI solutions.”
He adds that the amount of capacity LVRTC’s customers are demanding has shot up significantly in recent years, noting that five years ago an unnamed customer had a 3kW rack, while that same customer is using a 25kW one today, and is not even a hyperscaler.
“It’s important that we’re ready to provide the amount of capacity that our customers want, while being efficient.”
Towercos do the same
It’s not exactly a new play to utilize tower assets to host Edge data centers. Tower companies such as SBA Communications and American Tower have notably deployed multiple data centers close to towers.
American Tower opened an Edge facility in Raleigh, North Carolina, earlier this year, and has plans to build more of these sites, including a 4MW Edge data center in Indianapolis.
In an interview with DCD last year, Jake Rasweiler, SVP of innovation in the US tower division at American Tower, noted that the towerco was utilizing its towers
and easy access to fiber to deploy such sites. In Rasweiler’s words, where there’s fiber and power, there’s a play to build data centers.
“The fact that we have power, fiber, and land makes it easier to build a data center or extend the use of that land to a data center, so we think that's a natural way for us to extend the capabilities that we already have on our tower sites,” he said.
Separately, the company could acquire more towers, including a portfolio of broadcast towers from French company TDF Infrastructure, another business that provides radio and television transmission services.
TDF (Télédiffusion de France) has notably built data centers in repurposed buildings previously dedicated to broadcasting television, radio, and telecom signals.
Cloud RAN focus
When it comes to what is driving these data center deployments, a range of potential use cases come to mind.
One of these is mobile connectivity, with Digita’s Helander highlighting Cloud RAN as a potential use case at broadcast tower sites.
In short, Cloud RAN is a cellular network architecture that centralizes and virtualizes base station functions in a cloud-based data center.
“As part of the tower growth, mobile network operators (MNOs) are also piloting Cloud RAN solutions, which is a potential growth area for Digita as well,” he says. “Additionally, since Edge data center solutions are a key ingredient in the streaming delivery chain, the growth of streaming services supports the growth of our data center business.
“We are positioning ourselves for use cases such as people who want the lowest latency for streaming services, or who want interconnection services, so that they can then spread it out to the hyperscaler infrastructure over here.”
Helander says Digita is anticipating the wider rollout of Cloud RAN in the coming years as an opportunity for its broadcast stations. The technology is still fairly early in its commercial deployment.
Come one, come all?
Reports suggest that more companies could follow the trend in deploying Edge data centers to these tower sites, including BT in the UK.
While BT has been quiet on this publicly, the telco giant has a vast portfolio of towers and telephone exchanges across the country, much of which is being freed up as the company decommissions its legacy copper network.
Meanwhile, mobile carriers such as Verizon have also been vocal about tapping into opportunities at the Edge. Frontier, in the process of being acquired by Verizon for $20bn, offers what it calls ‘Edge Colocation’ at more than 2,500 locations, including its Central Offices.
By 2028, STL Partners estimates that there will be nearly 2,000 network Edge data centers globally. How many of these will be located at a broadcasting tower site remains to be seen. Stay tuned.
AI is driving a networking evolution
Providers acknowledge the challenge presented by running complex systems
Dan Meyer
Executive Editor, SDxCentral
Data centers are exploding with AI demand that is putting significant stress on the network connectivity infrastructure that ties them together. It’s a stress that network providers are racing to alleviate.
This stress is evident in the multitude of recent AI-related expansion and enhancement projects announced by firms operating up and down the networking stack. These moves are based on the need to increase the capacity of the networks running between AI-engorged centralized data centers and latency-minimizing Edge locations.
"Our current infrastructure is simply not built to go out and accommodate that level of traffic," >> Jeetu Patel, Cisco
Jeetu Patel, Cisco’s chief product officer, recently told attendees at an analyst conference that three AI-related traffic waves will stress current network architectures and force a complete rethink of infrastructure from the data center out to the campus Edge.
Patel explained that the first wave is coming from so-called “chatbots,” where “I ask a question, I get an answer back,” adding that these AI platforms “have very spikey traffic patterns,” where “the utilization spikes up and then comes right back down.”
Patel noted that this pattern is being handled competently by current network architectures.
The next AI-driven traffic wave is coming from agentic AI, which is set to amplify those chatbot peaks into a sustained higher level of activity.
“The traffic patterns start to get much more sustained and persistent over time,” Patel noted of this model. “And so our current infrastructure is simply not built to go out and accommodate that level of traffic pattern.”
The third pattern is what Patel called “physical AI, where you’re going to need some more Edge-based computing and Edge-based networking that’s only going to compound the requirements.”
Patel explained that those last two traffic patterns are driving enhanced infrastructure requirements “both in campus branch as well as in data centers,” with the latter also needing to adhere to power limitations.
“And so what you're starting to see is re-architecting of data centers to accommodate for this new additional volume of usage,” Patel explained.
“And you're also starting to see re-architecting of campus branch networks because everything from Wi-Fi to routing [and] switching needs to get rethought.”
Dell’Oro Group echoed this concern in a recent report, which forecast increased spending on campus Ethernet switch gear “as enterprises invest in higher capacity networks.”
“The wide use of AI agents is expected to put new requirements on the [LAN],” Dell’Oro Group research director Siân
Morgan wrote. “It is still too early to determine exactly how it will play out, but traffic patterns, volumes, and sensitivity to latency are likely to change – leaving room for product differentiation by campus switch vendors. Enterprises recognize the importance of a high-performance LAN, especially as they plan for the implementation of AI use cases.”
Stress on – or under – the street
This flood of need is forcing vendors to selectively attack the problem.
John Coster, manager for innovation, planning, and strategy at T-Mobile US, touched on this challenge during a panel discussion at the recent Yotta 2025 event in Las Vegas.
“Think about 125 miles (201 km) as a millisecond, and you’ve got like 30 miles (48 km) between towers, so think about 100 miles (161 km) from tower to tower from the fiber standpoint,” Coster said, factoring in the round trip for data traveling over that fiber network. “We shoot for 10 milliseconds [of latency] and there are people that say with [augmented reality], virtual reality, you want to see less than three [milliseconds].”
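Coster’s rule of thumb follows from the speed of light in fiber; the rough check below assumes light in glass travels at about two-thirds of its vacuum speed, or roughly 200km per millisecond, which is where the “125 miles as a millisecond” figure comes from.

# Propagation delay over fiber only, ignoring switching and processing time.
KM_PER_MS_IN_FIBER = 200        # assumes ~2/3 of the vacuum speed of light
MILES_PER_KM = 0.621

miles_per_ms = KM_PER_MS_IN_FIBER * MILES_PER_KM     # ~124 miles per millisecond

fiber_path_miles = 100          # tower-to-tower fiber path from the quote
round_trip_ms = 2 * fiber_path_miles / miles_per_ms

print(round(miles_per_ms), round(round_trip_ms, 1))
# ~124 miles per ms one way; ~1.6 ms of a 10 ms budget is gone on propagation
# alone, before any processing happens.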
Coster said T-Mobile US works with application developers, such as the companies behind autonomous vehicles, to work through this latency deficit by fine-tuning how much data can be processed within a vehicle and what data needs to be sent back and forth over a fiber network.
“There are so many unknowns right now, it's just evolving and everyone's kind of trying to feel their way along,” Coster said. “How much do we invest in edge AI to handle the kinds of traffic when we don't know who the client is? It's a little bit of an inventive process.”
Changing the network architecture for AI
Lumen Technologies CTO Dave Ward explained the depth and challenge of this network architecture quandary in an interview with DCD last year.
“There are so many capacity constraints associated with constructing AI: power, literally power from the grid, and then where you can place your data center, can you get the GPUs?” Ward says. “This network capacity is a very scarce resource, in particular, where the data centers or AI data centers are being built.”
Analyst firm LightCounting, for instance, predicts growing AI use will
result in a doubling of sales this year for Ethernet optical transceivers used in AI clusters.
“What this means for us as a connectivity partner is we fully plan on building an AI fabric between these locations and major data centers that are of the right power, size, and scale to host GPUs and the AI workloads and create a specialized connectivity fabric just for that purpose,” Ward says of this effort. “So much fiber and so many waves are required, and that has really become a different segment than some of the other cloud economic segments that are coming in, and we can construct just to that.”
Ward recently authored a Lumen white paper that called for a “reset of network capabilities,” contending that current systems are not fit to meet demands for the next generation of cloud.
That reset included the need to extend fiber and optical connectivity into areas where power exists and data centers are planned. This will require work with hyperscalers and enterprises to target expansion into tier-two markets.
“This explosion of rural data center operator clusters only further exacerbates current architecture problems,” the white paper reads. “Instead of backhauling to a major metro to find a carrier-neutral facility interconnect, new local interconnection may be much more efficient. Lumen has identified dozens of new data center clusters scattered across the US that will require fiber, wave, and IP services.”
Dark density and powerful programmability
The Lumen white paper also suggests that more “purpose-built connectivity” is needed to truly meet the demands of AI workloads. Instead of connecting everything equally, Ward and Lumen’s idea for purpose-driven networks would see more dark fiber networks employed to help connect facilities.
Distributed data centers are of growing interest to hyperscalers, with solutions like Nvidia’s Spectrum-XGS looking to turn multiple, disparate data centers into unified “AI super-factories,” to borrow the chipmaker’s parlance.
“While we expect that these factories will primarily communicate with one another through distributed and sharded training and reinforcement at an industrial scale, they will also interface with the already overburdened Cloud 1.0 architecture in major metropolitan areas to distribute inference and exchange data and models with partners,” Ward wrote.
In addition to dark fiber, Ward and the team at Lumen want networks supporting next-generation workloads to feature programmable underlays. This would allow operators to employ bandwidth from premises to the cloud and create fabrics as needed, with SD-WAN and secure access service edge (SASE) tunnels helping to keep workloads secure.
“Interconnected enterprises will most certainly transform with Cloud 2.0,” he added. “Both public and private internet networks must become faster – as well as more secure, distributed, and programmable – to meet new demands. … Most of the architectural foundations that defined Cloud 1.0 are obsolete and cannot support the requirements of the new cloud era.”
This need to stay ahead of demand is set to drive significant network investments over the next several years. That will include both the extension of those networks and investment in the technology needed to glean the most efficiency from those deployments, investments that will need to remain aligned with AI-fueled data center expansion.
Future-Proof Your Communication Networks with Quantum-Safe Technology
Quantum-safe communication is no longer a distant goal; it’s happening now.
A diverse ecosystem of technologies is driving this progress, including Quantum Key Distribution, Post-Quantum Cryptography, end-to-end hybrid QKD-PQC models, satellite-based cryptography and key management, and transitional architectures that combine classical and quantum-safe systems. These innovations are transforming our understanding of secure communications in light of quantum computing’s disruptive potential.
With decades of experience in systems, physics, lab validation, and network execution, VIAVI offers security frameworks and validation tools that enable organizations to move quantum algorithms and architectures from theoretical models and lab environments into secure, real-world deployments.
To learn how VIAVI can help your quantum-safe strategy, visit viavisolutions.com/qsafe.
A wonderful life
TV’s Kevin O’Leary, aka Mr Wonderful, on why he backs data center firm Bitzero, and why AI bubble fears are misplaced
Matthew Gooding
Senior Editor
Kevin O’Leary is a man who wears many hats. Not content with taking the world of TV by storm on hit US show Shark Tank, the businessman and investor now has his sights on silver screen stardom, playing alongside Timothée Chalamet in the upcoming movie Marty Supreme.
O’Leary assumes the role of wealthy businessman Milton Rockwell in the film, which sees Chalamet’s character embark on a self-destructive journey to table tennis glory, and has already garnered a slew of positive reviews ahead of its release over the holiday season.
Whether it will be the start of a long and glorious film career for O’Leary
remains to be seen (“the writers told me ‘we need a real asshole, and you’re it’,” he revealed to TMZ in a recent interview about how he landed the role), but away from the world of showbiz, the man known as Mr Wonderful has plenty of other interests, one of which is data centers.
O’Leary is an investor in Bitzero, the Canadian cryptominer now turning its hand to AI and High-Performance Computing (HPC), which recently went public in a bid to boost its expansion plans.
“The value of what Bitzero has has risen dramatically, and I think over time the market will recognize that,” O’Leary says.
A Bit of alright
Founded in 2021, Vancouver-based Bitzero operates data centers around the world, and claims to select locations based on their cool climate so as to cut the carbon footprint of its servers.
Its Norway 1 facility, in Namsskogan, Norway, offers 40MW capacity across 14,000 cryptomining rigs. Bitzero says this is currently being expanded to 110MW. It has also leased a 5MW site in nearby Røyrvik, known as Norway 2.
In Finland, Bitzero is operating out of a facility in Kokemäki, which offers 10MW. In December, it broke ground on a 100-acre expansion of the site, which
could eventually take its capacity up to 1GW.
Meanwhile, in North Dakota, US, the company purchased a former missile base, the Stanley R. Mickelsen Safeguard Complex at Nekoma, commonly known as “The Pyramid,” in 2022, and now runs a 2.5MW data center on the site, with 30MW of capacity “prepped for rapid deployment.” The 184-acre site could eventually offer up to 300MW.
CEO and president Mo Bakhashwain is a crypto consultant with a background in real estate, and says he started the company because he “wanted to connect the tangibility of real estate with the upside of crypto.”
At its Scandinavian sites, the firm benefits from the region’s plentiful low-carbon power, with both its Norway sites served predominantly by hydro power, and the Finnish facility taking advantage of a mix of hydro, nuclear, solar, and wind energy. The picture is less green in the US, where Bitzero says it will deploy a mixture of wind, natural gas, and grid power in North Dakota.
Nonetheless, the firm says it has the power secured and available to deliver large campuses in Europe, and that it can do this at a low price, so it’s no surprise it is looking to take advantage of the seemingly insatiable demand for AI-ready compute. The crypto-to-AI pivot (covered in-depth in DCD Magazine #55) is a well-trodden path at this point, but unlike erstwhile rivals CoreWeave and Iren, which have abandoned Bitcoin entirely in favor of AI, Bitzero intends to try and marry the two worlds.
“We see a really big opportunity in HPC,” Bakhashwain says. “We have a great engineering team, the same people who have worked with Microsoft and [neocloud] Nscale on their deployments in Norway, so we have the expertise to deliver what the industry needs, as well as the power and land.”
But, he says, his firm “is not going to neglect Bitcoin mining,” explaining: “We’re hoping to get the best of both worlds - the
long-term, investment grade, cash flows from HPC and AI, while having exposure to the speculative upside of Bitcoin. I don’t actually see [Bitcoin] as speculative, because if you study the network enough, you realize it usually evens out from a return-on-cash point of view.”
Many would agree with Bakhashwain’s initial assessment of the unpredictable nature of Bitcoin, but the fact remains big money can potentially be made from cryptocurrencies, and the combination with AI is likely to be an alluring one for investors. Bitzero has already garnered more than $100 million (“we’ve spent it on hard infrastructure,” Bakhashwain says) from private backers, including O’Leary, and listed on the Canadian Stock Exchange in December to try and boost its coffers.
Bakhashwain says Bitzero is targeting the hyperscalers and potential “sovereign AI deployments” as end users for its AI data centers, but is coy about how far discussions have progressed with future customers. He argues that the combination of AI and cryptomining can have an environmental benefit, with power dedicated to crypto rigs that can be redirected for use as backup power for AI servers in the event of an outage.
“Backup power is needed in any AI compute cluster or traditional data center,” he says. “This is a minimum of 30 percent, so why pay for 30 percent idle energy when you can cover the costs by using it to mine Bitcoin? I would call it an ESG play, because the mining helps not waste energy that is already reserved and paid for as redundancy.”
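To put that claim in rough numbers – the cluster size here is an assumption made for the sake of the example, not a Bitzero figure – 30 percent redundancy on a 100MW AI campus is a lot of contracted power sitting idle.

# Illustrative only: what "30 percent idle redundancy" could mean at scale.
cluster_mw = 100                 # assumed campus size, for illustration
redundancy_share = 0.30          # minimum redundancy cited by Bakhashwain

reserved_mw = cluster_mw * redundancy_share          # 30 MW held as backup
hours_per_year = 8_760
reserved_mwh_per_year = reserved_mw * hours_per_year

print(reserved_mw, reserved_mwh_per_year)
# 30 MW of reserved capacity, ~262,800 MWh a year that mining could soak up
# whenever it is not needed for failover.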
For his part, O’Leary initially invested in Bitzero as part of a wider interest in cryptocurrencies, but tells DCD his view on what the company can offer has changed in light of the AI boom and the impact this has had on the data center market.
“We have the expertise to deliver what the industry needs, as well as the power and land,”
>>Mo Bakhashwain, Bitzero
He says: “I don’t really consider it a Bitcoin miner anymore, I consider it a real estate power company. What it has is sub-six cents per kWh power, with land, permits, and water, which is something that’s incredibly hard to find anywhere in the world.”
Wonderland
O’Leary’s data center interests extend beyond Bitzero. His firm, O'Leary Ventures, has partnered with the Municipal District of Greenview to build an off-grid natural gas and geothermal project that will power an undisclosed AI data center in Alberta, Canada.
Dubbed “Wonder Valley,” it is hoped the project will provide an initial 1.4GW of power, rising to 7.5GW over a five-to-ten-year period.
But in other areas, he is skeptical that many of the large data center projects announced over recent months will come to fruition, despite the hefty returns investors who back digital infrastructure schemes can reap.
“50 percent or more of the data centers that have been announced won’t be built,”
>> Kevin O’Leary
“I’m involved in investing in many different jurisdictions around building data centers, and I’ve had a chance to meet with many state governors in the US and Canada,” O’Leary says. “I’ve been a real estate investor my whole life, and right now the returns are penciling out at anywhere from 14-20 percent on data centers. But the scarcity of power is causing a major problem.
“When you go and look at opportunities in the US, I would say 50 percent or more of the data centers that have been announced won’t be built because there is no power on the grid.”
“Data center is now a dirty word in every township and city in America,”
>>Kevin O’Leary
Local opposition to data centers, and the power infrastructure required to run them, is likely to hamper any efforts to improve this situation, O’Leary argues.
“Data center is now a dirty word in every township and city in America,” he says. “The reason is simple, because every time you propose a capex expansion on the grid, you’re talking about a 12-30 percent increase in electricity costs for the local hospital, library, or care home. The chance of that being approved is zero.”
Many developers are turning to natural gas as a way to access the power they need, but are running into difficulties sourcing turbines, with waiting lists for new machines running to several years.
Accessing stranded natural gas is also an expensive business, O’Leary says, even though it’s something America has in abundance. “You’re talking about $2 billion to get a gigawatt set up,” he says. “And that’s even if you can get a contract on a pipeline somewhere. That’s why many of these sites are collapsing under the burden of getting financed.”
Though some in the sector tout nuclear energy as the answer to this problem (see page 17), O’Leary does not expect to see new reactors popping up across the US. “Maybe that’ll work in 20 years if you can get a permit for it,” he says. “Find me one township in America that wants a nuclear facility, large or small, in its backyard? I love all this stuff, but I also live in reality.”
For O’Leary, part of this reality is that AI is here to stay. Recent months have been characterized by stories about the AI bubble being on the verge of bursting, with companies including GPU giant Nvidia and ChatGPT-maker OpenAI signing a series of what appear to be circular deals.
These agreements have caused concern in the financial markets, most notably from Michael Burry of 'The Big Short' fame, who says he is shorting Nvidia, claiming that the company's accounting of stock-based compensation is inaccurate and that the AI market is in a bubble. Nvidia denied these claims in a note sent to analysts.
O’Leary does not share Burry’s concerns. “There’s an error in the thinking that draws an analogy between what’s happening now and the early days of the Internet in the 1990s, when Pets.com and all that crap came and went,” he says. Amazon-backed online pet shop Pets.com was one of the highest-profile victims of the dotcom bubble, going into liquidation in November 2000, just nine months after an IPO that raised $82.5 million.
What’s different about AI is that “all 11 sectors of the economy are using it for margin enhancement and productivity gains,” O’Leary says. He continues: “I don’t care if you’re in financial services, real estate, consumer, or pharma - every single one of these industries has a use case and is implementing AI models now.
“They're not building AI themselves, they're leasing the tools from the four or five behemoths who are spending billions setting up the infrastructure to develop it.”
O’Leary adds: “Even in my small portfolio, which is focused on content generation, AI expenditure is up about 40 percent a quarter right now. The demand for this stuff is insatiable.”
The data center hearing (almost) no one showed up to
Visions of the past and the future in Slough
Jason Ma
Junior Reporter
Local land planning meetings about data centers are a febrile source of content for news publications looking to get a human angle on AI.
At these meetings, one can expect citizens of all ages, always impassioned and sometimes articulate, to deliver a speech about utility rates, the environment, NDAs, or some other common grievance relating to a new data center development. The rhetoric is captivating and unmistakably moral, with some going as far as to invoke God.
But on Wednesday, October 22, as a government-led inquiry into a data center project in Slough moved into its seventh day of proceedings, only two of the chairs in Slough Borough Council’s meeting room were occupied: one by DCD, and the other by an employee of JLL, a prominent real estate consultancy.
As the inquiry concluded on October 24 – which, in the words of a council representative, was supposed to contain the inquiry’s “juicy bits” – the number of people in the audience had dwindled to one.
The people, both literally and metaphorically, were nowhere to be found.
In the US, meetings concerning data center projects often require extra seating, and some hearings are rescheduled because a new venue is required to accommodate all interested parties. But such was the dearth of public interest that the council concluded that there was no point in even live-streaming the proceedings.
The commonly accepted narrative of data center opposition pits the aggrieved citizen against Big Tech and the venal officials who help grease its path. But in this instance, opposition to the project seemed only to come from the council itself.
Slough gives a glimpse of what the dynamics of data center development could look like if the world becomes used to data centers, just as it has become used to bridges, roads, or factories.
The inquiry
It follows that the inquiry for the data center project, which lasted eight working days from October 14 to October 24, was altogether unexceptional.
In December 2024, a company called Manor Farm Propco submitted an application for the construction of a 147MW data center, a Battery Energy Storage System (BESS), a substation,
offices, backup generators, and other related infrastructure.
Manor Farm Propco intended for this to occupy a 74-acre land parcel called Manor Farm. The parcel is made up of two parts – a northern piece, home to a former logistics hub, and a southern piece, the majority of which is green space – and it wraps around another parcel called Poyle Farm. The site is located between Slough’s town center and London’s Heathrow Airport in an area called Colnbrook.
Manor Farm Propco spent £70 million ($92m) buying the land from a company called the Airport Industrial Property Unit Trust. But both companies are actually subsidiaries of a Real Estate Investment Trust (REIT) called Tritax Big Box. Although Tritax deals primarily in the sale, purchase, lease, and management of warehouses and logistics facilities, it has since dipped its toes into data centers.
After the council received the application, it emailed Manor Farm Propco asking if it would be alright for the council to postpone its decision-making date from April 3 to April 30, giving it more time to consider the application in full. Manor Farm Propco did not respond, and a decision was not issued by April 3. Suspecting that the project was going to be denied, in May, Manor Farm Propco
filed an appeal asking the Planning Inspectorate, a national body responsible for dealing with planning appeals in the UK, to consider an inquiry process.
A month later, the Inspectorate said that it considered an inquiry to be suitable, meaning that the council and Tritax would have to present their case to an individual appointed by the Inspectorate – an inspector – who would decide whether the proposal could move forward.
In August, the then-Secretary of State for Housing, Communities and Local Government, Angela Rayner, “called in” the appeal, giving herself the final say in the project. Rayner has since resigned after admitting to underpaying stamp duty on one of her flats, so it will be her successor, Steve Reed, who will make the final decision based on a report written by the inspector.
This level of government involvement is unusual. Planning decisions are typically left in the hands of the council. But LSE’s Emeritus Professor of Economic Geography, Paul Cheshire, says that over the past ten years, the government has become more willing to intervene in projects that are deemed to be nationally important.
And in the eyes of the UK government, nothing seems to be more important than data centers.
When Rayner was in charge, the government was happy to intervene in data center proposals across the UK, “calling in” and approving three projects in Hertfordshire and Buckinghamshire despite protests from local councils and residents.
Support for data centers has also become a matter of national policy. The current UK Labour government, facing
down an electoral threat from the right in the form of Nigel Farage’s Reform Party and the threat of being left behind in the AI boom, is betting that data centers can help revitalize an economy beset by slow growth and deindustrialization.
In September 2024, the government designated data centers as Critical National Infrastructure, giving the sector access to more support in the case of an emergency.
In January of this year, the government announced that it would establish ‘AI growth zones’ that would provide data center operators with access to faster planning permission and power. At time of writing, four such zones have been announced.
In September, a state visit by US President Donald Trump brought with it a series of data center-related investments both new and repackaged, including a £5bn ($6.9bn) Google deal, a $678m BlackRock venture, a new Vantage facility, OpenAI's Stargate UK, a £1.5bn ($2bn) investment from CoreWeave, and a $15bn commitment from Microsoft.
Last month, Home Secretary Shabana Mahmood was set to approve legislation that would allow operators to request their data centers be considered a “nationally significant infrastructure project,” a designation traditionally reserved for utility and transport projects that would further speed up the planning process.
One sees in the UK a repeat of the story that is occurring across the world. AI training and inference workloads are much more energy-intensive than typical cloud and colo workloads, pushing data center operators beyond their typical stomping grounds and into communities with little experience or understanding of digital infrastructure.
Slough and data centers
In this way, Slough is at once a vision of the past and the future.
Slough has long been the home of the country’s cloud and colo workloads, but the construction frenzy soon to sweep the UK will be driven by data centers built specifically for AI usage.
At the same time, Slough’s attitude and approach to data centers is a glimpse into what the dynamics of data center development could look like in the next few decades. When compared with the more politically active counties in northern Virginia, Slough feels like an alternative future where data center opposition becomes technocratic, professional, and at times soporific, with that familiar, scrappy, grassroots flavor rinsed out and rendered undetectable.
The town is rather fond of data centers. Slough Borough Council itself proudly advertises that Slough is “Europe’s largest data center cluster,” openly touting the fact that its 32 data centers, located in the Slough Trading Estate, constitute the world’s second largest cluster behind Northern Virginia’s Data Center Alley. Operators were originally attracted to the Slough Trading Estate because of its plethora of fiber connections, its easy access to power, and its location next to London.
Segro, the company that operates the estate, has said that it is prepared to deliver 4.3 million sq ft (400,000 sqm) of additional data center accommodation over the next seven years. For context, in the past five years, the estate welcomed 14 data centers totaling 2 million sq ft (185,000 sqm) – a bit of back-of-a-handkerchief math suggests the estate stands ready to accommodate around 30 more data centers of the same size.
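For the curious, that handkerchief math runs as follows, using only the figures quoted above.

# Back-of-a-handkerchief check on the Slough Trading Estate figures.
planned_sq_ft = 4_300_000        # additional capacity Segro says it can deliver
recent_sq_ft = 2_000_000         # delivered across 14 data centers in five years
recent_count = 14

average_size_sq_ft = recent_sq_ft / recent_count          # ~143,000 sq ft each
implied_new_sites = planned_sq_ft / average_size_sq_ft    # ~30 of the same size

print(round(average_size_sq_ft), round(implied_new_sites))   # 142857 30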
The local planning authority has also approved a series of Simplified Planning Zones for the trading estate, meaning that data center developers looking to build there do not require further planning approval so long as they comply with less onerous supplementary requirements and stipulations.
One of Slough’s councillors, Dexter Smith, said that “the general attitude is that residents are relatively happy to see continued development, particularly given that most of these data centers
are on brownfield sites.”
Smith says: “They’d rather see something there than nothing, because as is the nature of industrialization, you get a new technology that comes along, and then it becomes established, and then other people come along and they do it more cheaply somewhere else, and your old technology becomes uncompetitive, and you’re left, you know, just with an empty shell.”
In keeping with this logic, at the start of December, Slough Borough Council approved an Equinix data center project located at a former factory operated by paint manufacturer AkzoNobel.
So why was a council historically welcoming of data centers rejecting this one?
Firstly, Manor Farm is located on the Green Belt, which is a British planning designation established in 1955 that, in the words of Professor Cheshire, has “almost entirely prevented any development.”
The designation was invented by planners to do three things: stop urban sprawl around big cities, to prevent neighboring towns from merging into one another, and to preserve the setting and special character of historic towns. On maps, it appears as a giant ring surrounding all of the UK’s largest metropolitan areas. Within the country, the largest by far is the Metropolitan Green Belt, which occupies 513,860 hectares around Greater London and other neighbouring counties.
However, the election of the current Labour government brought with it a new planning designation – grey belt – which refers to Green Belt land that can be developed on. To be considered grey belt, the land has to be a brownfield site that does not “strongly contribute” to any of the Green Belt’s purposes.
But Professor Cheshire says that the vagueness of the definition means that “it’s up to local authorities to decide for themselves whether any particular piece of land is Green Belt or grey belt.”
Tritax contends that the land should be considered grey belt, and that the national importance of data centers justifies the use of the land as a data center. The council disagrees, stating that Tritax could not demonstrate that the data center absolutely had to be built in Manor Farm.
There was the simple suggestion that Tritax could have considered another, more suitable piece of land. The council argued that Segro had made way for data centers on the Slough Trading Estate, and a data center in Manor Farm would require the construction of additional infrastructure, including wires and fiber cables, in order to connect it to existing power and fiber wires.
Tritax, for its part, is adamant that this land parcel be used for a data center because it has already secured a private wire connection with the nearby electrical substations in Iver and Laleham, enabling it to bring the data center online as soon as 2027.
Second, the Council argued that the land parcel is also located in the Strategic Gap – land that is designated as a gap to separate Heathrow from the rest of Slough – and the Colne Valley Regional Park, which is a series of parks and green spaces along the River Colne.
Third, the site could potentially be used as a location for freight forwarding as part of the expansion of Heathrow Airport, and the council went as far as to suggest that the facility would “deprive the expansion of essential freight facilities and/or saddle [Heathrow] with enormous unquantified compensation costs as a result of granting this permission.”
But by the time the hearing concluded, the government had not decided whether it would support Heathrow’s own plans for expansion or an alternative plan proposed by construction company Arora Group. The former included Manor Farm as part of its proposal, but the latter does not, and Tritax argued that the site’s inclusion as part of the expansion was not necessarily a foregone conclusion.
The government has since chosen Heathrow’s proposal, and this will likely factor into the Secretary of State’s eventual decision. Tritax has said that it has not yet received any formal notification regarding planning consent from the Secretary, and the Secretary’s decision is unlikely to arrive until 2026.
As the meeting drew to a close, heads were nodded, hands were shaken, and compliments were traded. Representatives of the council were happy to offer the opposing side some biscuits that someone had brought in.
Although the council and Tritax were opponents, the pair shared an understanding that more data centers were necessary. They squabbled about scope, definitions, and numbers – there was an extended digression over the methodologies used by both sides to estimate data center demand in the UK – but no one ever challenged the basic assumption that data centers needed to be built.
If AI is here to stay, it is reasonable to assume that data centers will become normal. Choosing where to build a data center will no longer be a moral question, but a technocratic one.
It certainly seemed normal to those who were sitting in that meeting room in Slough Borough Council. After all, they were just doing their jobs. Little could be found of the passion and acrimony that many have come to expect from a data center hearing. No one was fighting for re-election, and the proceedings contained little trace of ego or self.
It is possible that this is a vision of the future that is already past. Maybe the more intensive and speedy buildout of hyperscale facilities will create development dynamics that render Slough’s experience entirely irrelevant.
But it seemed clear that it was business, not politics, that had been done here.
Slough house
Manor Farm and the surrounding area are designed mainly for transport, making it difficult to get to the site on foot. The sidewalks leading up to Manor Farm are narrow and in some cases non-existent, requiring one to walk on grass and soil as cars, lorries, and planes zip past at a regular cadence.
Almost everything around you is moving or meant to be moving. Just across the road is the Poyle Industrial Estate, home to various courier and freight forwarding companies that service Heathrow Airport, and directly north of the land is the clunkily named Hilton London Heathrow Airport Terminal 5 hotel.
The houses that sit on the edge of the Poyle Industrial Estate are occupied by businesses, not people. One such property hosts two recruitment firms, and when asked if they knew anything about the possibility of a data center becoming their future neighbour, the receptionist simply shrugged and said: “I don’t know anything. We’re a recruitment agency.”
There are exceptions. There is a stretch of houses in an area called Colnbrook, which is further north of the Hilton hotel, and in the immediate vicinity of Manor Farm are two residential properties. One of them juts directly into the southern portion of Manor Farm’s greenfield land, and the other sits right at the entrance of the former logistics hub.
The latter seemed uninhabited. The stiff, plastic buttons on the buzzer have not been used in a while, and pressing
them only elicits the faintest of noises from inside the house.
But the door opens, and out walks Pete and Alan, aged 67 and 65. Alan is hunched over, his center of gravity bent forward as he leans his weight on a pair of grey crutches. Black tape is wrapped around the area just below the handholds.
Pete has been living there for 30 years, Alan has been there for six. They say they are the only ones still living in the house, and everyone else has since moved. When asked for their thoughts on the new data center project potentially coming to their home, I am greeted with a blank stare.
When asked if they want to comment, they say no. But they go on to say that their house is set to be demolished as part of the new project. They say they are waiting for the Council to issue an eviction notice after being told to leave in March by Tritax. Everyone else in the house has left, but they say that if they left, they’d just be making themselves homeless.
It seems obvious that these were the sort of people who should have been occupying the empty seats in the planning meeting. But when asked why they did not show up, they shrugged their shoulders and simply said:
“Well, we can’t do anything about it.”
It seems like few either saw or cared enough about the sign placed at the gate of Manor Farm, just a few steps away from Pete and Alan’s house. Printed and laminated, one line on the notice stands out:
“Members of the public may attend the Inquiry and, at the Inspector’s discretion, express their views.”
DCD Awards>2025 Winners
Following months of deliberations with an independent panel of expert judges, DCD Awards is proud to celebrate the industry’s best data center projects and most talented people.
With thanks to our headline sponsor Mercury Engineering.
Edge Data Center Project of the Year
Winner: BDX Data Centers and Nvidia
BDx created a national AI platform for Indonesia by combining high-density core training campuses with a network of 50+ Edge sites. CGK4 was delivered in 73 days and covers 98 percent of the population.
Asia Pacific Data Center Project of the Year
Winner: Bridge DC
Bridge's MY07 facility brought Malaysia’s first effluent-water cooling into live service, turning sewage into cooling-grade water on site. It cut the treatment footprint and enabled water recovery above 90 percent.
North American Data Center Project of the Year
Winner: Crusoe and AlfaTech
Crusoe’s Abilene site is a 1.2GW AI data center built for extreme density and rapid delivery. Using liquid cooling and repeatable modular design, the first phase became operational around 12 months after breaking ground, supported by an energy-first model that includes onsite generation.
Latin America Data Center Project of the Year
Winner: Elea Data Centers and Hyphen
Rio AI City is turning Rio’s Olympic Park into Latin America’s first AI-ready digital district. Phase one delivers 1.5GW of renewable-powered capacity, scalable to 3.2GW, with water-free cooling, and will support 10,000+ jobs.
Middle East & Africa Data Center Project of the Year
Winner: Pure Data Centres Group and Laing O'Rourke
The winners delivered AUH01 on Yas Island, bringing 10MW online across two phases. Phase one was completed in February 2025, with phase two following in July 2025, supported by prefabrication and digital-lean delivery, with targets including a WUE of 0.18.
European Data Center Project of the Year
Winner: Start Campus and Schneider Electric
Start Campus and Schneider Electric delivered SIN01 as the first step of a 1.2GW AI campus in Sines. It pioneers gigawatt-scale seawater cooling, achieves a WUE of 0, and is built to support rack densities above 130kW, setting a new benchmark for scale and sustainability in Europe.
Energy Impact Award
Winner: RE24 and Keppel Data Centres
Four Irish PPAs cover 100 percent of annual demand, reaching 79 percent hourly matching and delivering more than £1.5 million in tenant savings in under 12 months.
Environmental Impact Award
Winner: atNorth
atNorth’s ICE03 expansion shows how AI-ready capacity can scale responsibly, delivering a PUE below 1.2 with direct liquid cooling and sustainable construction. It also reuses waste heat to support a greenhouse, linking it to local sustainability and education.
Mission Critical Innovation Award
Winner: Ciena & Meta
Ciena and Meta rethought out-of-band management using passive optical network technology to simplify scale and reduce infrastructure overhead. The result is a resilient, automated approach that can free up more than 99 percent of the rack space used by legacy setups, while also reducing power and cabling.
Community Impact Award
Winner: Microsoft and United Way of Hyderabad
A comprehensive Community Development Program is helping four villages drive their own progress across education, health, livelihoods, and sustainability. In its first year, it has supported more than 22,400 people through learning centers, digital access, and local water restoration and tree planting.
Editor’s Choice Award
Winner: Digital Realty and Mercury Engineering
Digital Realty's €1bn+ investment in its Paris Digital Park not only delivers 76.8MW of IT capacity but also invests in urban regeneration through job creation and community-focused infrastructure development.
Young Mission Critical Engineer of the Year
Winner: Arwa Alali, Khazna Data Centers
In three years, Arwa Alali has contributed to more than 6GW of AI-ready data center delivery across seven countries. Her work includes major optimization wins, including a 35 percent space improvement on a 100MW project, a significant capacity uplift in Dubai, and a zero-downtime AI upgrade at a live site.
Data Center Construction Team of the Year
Winner: Princeton Digital Group
PDG's TY1 engineering team successfully delivered Phase 1 of the 96MW TY1 campus, one of Japan's most advanced AI-ready data centers, overcoming seismic, regulatory, and infrastructure complexities using local expertise and strong vendor collaboration.
Data Center Operations Team of the Year
Winner: Chindata Group
Chindata’s Lingqiu operations team manages more than 350MW and over 30,000 racks while maintaining 100% SLA performance. With AI-led operations and liquid cooling, they achieved a PUE of 1.18 and reduced cooling energy by 9.4 percent, saving roughly $1.7 million a year.
Outstanding Contribution to the Data Center Industry
Winner: Joe Kava
Joe helped quietly rewrite the rules of what a modern data center could be — long before the rest of the world caught up.
During his time at Google, where he served as vice president for data centers, he challenged long-held assumptions about cooling, power, efficiency and scale. He proved that hyperscale didn’t have to mean waste. That resilience didn’t have to mean excess. And that sustainability could sit at the very heart of global digital infrastructure.
For more than a decade, the influence of Joe's thinking has rippled far beyond a single company — shaping how operators design, build and run data centers across the world today.
Data Center Woman of the Year
Winner: Karen Petersburg
Karen Petersburg is showing what modern leadership looks like in data center delivery, pairing AI-ready campus execution with measurable ESG outcomes. She has pioneered sustainability tracking and built practical community programs, including workforce pathways and initiatives that deliver tangible local benefits.
Data Center Workforce Initiative of the Year
Winner: Scala Data Centers
Scala links data center growth to tangible workforce outcomes in the communities around its sites. In 2024, the initiative supported more than 17,000 indirect jobs, while the Re:Flow program recycled materials such as 6.3 tons of copper cable and reinvested the proceeds into vocational training.
Charlotte Trueman, Compute, Storage, and Networking Editor
Storage wars
Could the shortages facing the HDD market ultimately lead to the death of hard drives in the data center?
Twelve months ago, if you had asked anyone in the industry what the biggest limiting factor for future AI data center growth was, they would probably have said a lack of available GPUs. Reports of Nvidia NVL72 Blackwell racks overheating had started to swirl, and customer anxiety over shipping delays was reaching fever pitch.
Fast forward to November 2025, and the picture is rather different. Against all odds, the chip supply chain has remained impressively resilient, but concerns have instead turned to shortages of other components, namely enterprise-grade Hard Disk Drives (HDDs), which are currently facing lead
times of up to two years as manufacturers struggle to keep up with AI demand.
HDDs, which use magnetic storage to retain information, have long been popular with data center operators due to their ability to provide high capacity at scale while offering a lower cost per terabyte compared to Solid State Drives (SSDs), where data is kept in integrated circuits. However, as Matt Taylor, VP and GM of artificial intelligence (AI) at storage firm Pure Storage, explains, HDDs are now effectively sold out through 2027, and the hard drive manufacturers aren't building any more capacity. And even if they could - and right now they're not - it takes a couple
of years of investment to get more capacity online.
Taylor says the sheer scale of many of the AI data centers announced this year has taken the industry somewhat by surprise, and while companies have been smart enough to shore up their supply of GPUs to support these deployments, for many of them, “storage has kind of been an afterthought in the hierarchy of what they're prioritizing.”
“The world has been so focused on GPUs. But people are starting to realize that in order to go and deploy these large-scale, complex data centers, they need more than just GPUs,” he laughs.
“Against all odds, the chip supply chain has remained impressively resilient, but concerns have instead turned to shortages of other components, namely enterprise-grade Hard Disk Drives.”
“Now, that next order of the critical supply chain has become memory, storage, and data center infrastructure, and I just don't think people thought about it.”
In addition to the unprecedented number of planned data center deployments, Taylor says that long-term supply agreements signed before the AI data center build-out boom are also likely to have exacerbated the problem, with vendors across the storage industry now having to work with customers to plan out their needs over the next two years.
Customers have been somewhat blindsided by the shortage – as recently as two quarters ago, there was still plenty of supply, he says, describing the current situation as “scramble mode.”
In a separate conversation, hard drive maker Seagate’s Mohamad El-Batal, chief technologist in the company’s CTO Office, described the current situation as a “supply chain nightmare,” noting that many component supplies are currently on a 52-week allocation.
Taylor says that in recent months, Pure Storage has had a lot of customers ask the company to educate their supply chain teams on what's happening in the market, leading to a number of frank discussions about what Pure is seeing from a vendor perspective, the trade-offs involved in different timelines, and how the company can best map to their requirements.
However, he cautions that there are not a lot of easy solutions, and even careful planning doesn’t mean the problems facing the industry will suddenly disappear.
“The reality is we're going to be in this state for a while,” he says. “On the hard drive side of things, people need to realize that all the hard drive capacity is bought.
People have been saying: ‘Maybe the hyperscalers won’t order too much’ or ‘They're just trying to capture all the supply that's out there to be strategic,’ as has happened when there have been shortages before.
“But that’s not the situation here. [The hyperscalers] have all bought everything they can, and they still want more, but there is no more, and I don’t think we’re going to see an easing of the market any time next year.”
The rise of the all-flash data center
One way data center operators are looking to mitigate this problem is by transitioning to flash storage, specifically QLC (quad-level cell) storage.
This is a capacity-optimized NAND memory technology that can lower the total cost of ownership for read-centric workloads, providing increased capacity with the speed of all-flash. However, QLC storage is also less durable than other NAND cell types, such as TLC (triple-level cell) or SLC (single-level cell), offering around 1,000 program-erase cycles before it starts to break down and become unreadable. By comparison, SLC SSDs can withstand more than 100,000 program-erase cycles, on average.
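To put those endurance figures in perspective, a rough back-of-the-envelope calculation shows how cycle counts translate into drive lifetime under a sustained write load. The cycle counts below are the ballpark figures quoted above; the capacity, write amplification, and daily write volume are invented purely for illustration.

```python
# Rough NAND endurance estimate. P/E cycle counts are the ballpark figures
# quoted in the article; capacity, write amplification, and daily write
# volume are assumed example values, not vendor specifications.

def drive_lifetime_years(capacity_tb, pe_cycles, write_amplification, tb_written_per_day):
    """Years until wear-out under a constant write load."""
    tbw = capacity_tb * pe_cycles / write_amplification  # total terabytes written
    return tbw / tb_written_per_day / 365

for cell_type, cycles in [("QLC", 1_000), ("SLC", 100_000)]:
    years = drive_lifetime_years(capacity_tb=30, pe_cycles=cycles,
                                 write_amplification=3, tb_written_per_day=10)
    print(f"{cell_type}: ~{years:,.1f} years at 10TB written per day")
```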
Taylor says QLC technology is being touted as a hard drive replacement because, from a cost-per-terabyte perspective, it is the next best alternative. Earlier this year, Pure Storage announced it was partnering with Korean memory manufacturer SK Hynix to deliver QLC flash storage products to hyperscale data centers.
With QLC, “the density of the technology is going up significantly in terms of bits per cell, whereas TLC is not as much,” Taylor explains. “So that's the appeal – with QLC I can get to cost economics on a dollar per bit, or dollar per terabyte, that is hard to reach on TLC.”
While TLC does continue to be deployed in many leading-edge environments, Taylor says that, if managed correctly, QLC can also meet a lot of the needs of high-performance workloads. Furthermore, he says that in the long term, the total cost of ownership of a QLC-based environment – not simply acquisition of the technology, but also taking power costs, reliability, and uptime into account – is proving to be lower.
“The world has been so focused on GPUs… but people are starting to realize that in order to go and deploy these large-scale, complex data centers, they need more than just GPUs,”
>>Matt Taylor, Pure Storage
Some hyperscalers, such as Meta, have already been vocal about their desire to transition away from combination HDD and SSD deployments in favor of all-flash data centers, with the company releasing a blog post in March of this year titled ‘A case for QLC SSDs in the data center.’
In it, Meta said that while HDDs have been growing in density, they’ve failed to match that with performance. It went on to argue that the growing need for increased power efficiency has led to the development of innovative storage solutions, with QLC storage “forming a middle tier between HDDs and TLC SSDs - providing higher density, improved power efficiency, and better cost than existing TLC SSDs.”
“Given this increasing desire to transition from HDDs to QLC, will the industry ever go back to the traditional hard drive?”
Although the shortage of HDDs is helping to accelerate the transition to all-flash data centers, Taylor says it’s not something that can happen overnight, with customers still needing to source HDDs in the interim. Furthermore, it's been reported that companies rushing to shore up supplies of QLC storage to avoid similar shortages have left production capacity at some NAND manufacturers booked through to 2026.
Taylor says he expects to see more NAND availability in the second half of next year, but notes that, like with HDDs, there's way more demand than there is supply right now.
“HDD unavailability has accelerated that move [to all-flash] substantially, and the hyperscalers are buying it in tens or hundreds of exabytes a year – that shift from them is a major industry movement,” he says, noting that
simultaneously, all of the large AI labs are accelerating their deployments of these large-scale AI infrastructures, which are also predominantly all-flash.
For high-performance workloads, three tiers of storage are typically deployed: a caching layer that is all NVMe-based flash, a warm tier, which is SSD-based, and a cold, or archived tier, which has traditionally consisted of HDDs or tape, though flash-based options for cold storage are now becoming a reality too.
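A crude sketch of how such a tiering policy might look in code is below; the tier names, media, and age thresholds are invented for illustration rather than drawn from any particular operator's setup.

```python
# Illustrative three-tier placement policy. Tier boundaries are invented
# for the example and do not reflect any specific deployment.

TIERS = [
    ("hot",  "NVMe flash cache",            1),     # touched in the last day
    ("warm", "SSD",                         30),    # touched in the last month
    ("cold", "HDD / tape / archival flash", None),  # everything else
]

def place(days_since_last_access: int) -> str:
    """Return the tier a piece of data would land in under this toy policy."""
    for name, media, max_age_days in TIERS:
        if max_age_days is None or days_since_last_access <= max_age_days:
            return f"{name} ({media})"

print(place(0))    # hot (NVMe flash cache)
print(place(12))   # warm (SSD)
print(place(400))  # cold (HDD / tape / archival flash)
```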
“So, you've got the hyperscalers making this move, you've got the AI foundation model builders also doing this, and then you've got just a general buildout happening in the neocloud space, and they're pretty much all deploying SSDs as well,” Taylor says. “All three of these combined have created this huge onslaught of demand that is creating a challenge.”
In the long term, Taylor says this leaves the industry with an important question to answer: Given this increasing desire to transition from HDDs to QLC, will the industry ever go back to the traditional hard drive?
“If you can't get more hard drives, so you decide to make this transition, which has been slowly happening already, why would you go back?” he questions, noting that there is a lot of industry excitement about the potential opportunities this presents for drive manufacturers.
With hyperscalers buying into the narrative that all-flash is the way to go, Taylor notes that although there are historical reasons that have stopped them fully committing to the transition previously, the shortages in the HDD market are likely to act as a forcing function, fueling investment and R&D spend in NAND technology and leading to solutions that can offer higher density bits per cell, making the cost economics of QLC even better over time.
“I do think this is a trend, and it’ll be a huge opportunity for the flash industry to go and actually accelerate development,” he argues. “I also think the ability for the supply chain to flex is much easier on the flash side of things – yes, you can't easily build more fabs, but you can increase the wafer count, or increase the number of starts you have,” he says.
“It won’t happen overnight, but it’s certainly an easier task for the NAND market to achieve than it is for the hard drive industry.”
Engineered For Impact
As your end-to-end partner, Salas O’Brien delivers high-performance mission-critical projects through pre-engineered solutions and offsite prefabrication that reduce risk, limit rework, and accelerate results.
Pre-engineering & offsite prefabrication
Mechanical, electrical, plumbing, and fire protection
Architecture & interior design
Technology & telecom systems
Decarbonization & sustainability
Commissioning & QA/QC
Let’s talk about your next project.
The profundity and paradox of time
How time keeps modern society - and data centers - in one piece
There is a place in South East London where you can stand upon the edge of time.
Greenwich is, in many ways, the birthplace of timekeeping. Home to the Prime Meridian and giving its name to Greenwich Mean Time, one cannot explore the world of timekeeping without visiting the grassy hill, upon which sits the Royal Observatory and a vast collection of timekeeping pieces from throughout the centuries.
DCD visited the museum earlier this year, and while most of the clocks on display are from a different era, the
real-life implications of time-keeping continue to echo through its high-ceilinged rooms.
Time governs our daily lives on a human level, but it can easily be forgotten how it also rules modern-day technology, such as data centers, networks, and the Internet as a whole.
Without timekeeping, chaos would ensue.
The importance of accurate timekeeping for the Internet was neatly summed up to DCD by data center operator Telehouse’s senior buildings manager, Paul Sharp.
“Data is shipped around the world
in bytes and packets, sent over the network, and then, at the far end, put back together,” he explains. “I’ll use the analogy of the book. The Internet without timekeeping would be like removing the page numbers. You wouldn’t know if you were putting the pages back together in the right order.”
The concept of time itself is somewhat hard to pin down. We know it exists, but where is it? Where does it come from? Who decided what a “second” was? Who owns it?
The answer is, of course, complicated. But with time-keeping today a vastly different proposition from when we relied solely on the sun rising and setting each day to know the world had moved on, a brief history is necessary.

Georgia Butler, Cloud & Hybrid Editor
A (much simpler) brief history of time
You would be hard-pressed to find a person with a greater grasp on the history of timekeeping than David Rooney.
Author of About Time: A History of Civilization in Twelve Clocks, Rooney currently works as a curator at the Science Museum in London, having previously headed the timekeeping collection at the aforementioned Royal Observatory in Greenwich.
For Rooney, the history of timekeeping has many pivotal moments - but three he picks out when talking to DCD are the invention of the pendulum and the balance spring in the 1600s, and centuries later, the arrival of the first atomic clock in 1955.
“In the second half of the 17th century, there were two inventions that transformed clocks and watches from effectively inaccurate guides to the time, and transformed them into scientific instruments. That was the invention of the pendulum in 1656 and the invention of the balance spring in watches in 1675,” Rooney says.
“In both cases, the accuracy was transformed and turned both clocks and watches into scientific instruments, which, in the age of the scientific revolution and the enlightenment, transformed humankind. It was profoundly significant in human history.”
It is hard today to imagine a world where there was no strict concept of “time.” People would have looked
ahead and to the past, but with very little quantifiable data to shape their perspective.
Even with these devices and a greater degree of accuracy, time remained very different for different people. Towns and cities would operate in their own time zones, rather than what we have today, where standards cover large swathes of countries. Previously, time would also have been communicated by word of mouth, with Rooney writing in his book about the people who would take the time from Greenwich and sell it to businesses across the capital.
Getting through any significant portion of the history of time would require a thesis-length article, so instead, we shall skip ahead to 1955 and the invention of the atomic clock.
Measuring time
The invention of the atomic clock, a clock “more accurate than the rotating Earth itself,” was, according to Rooney, “a profound idea.” By 1967, humanity had officially adopted atomic time.
The atomic clock he mentions was the formative system that inspired the clocks we rely on today for timekeeping. But while it solved some problems, it created others.
With the clock being more accurate than the Earth itself, it often doesn’t match the minuscule fluctuations in the planet’s rotation. This is something that the laboratories around the world that are calculating the time must take into account, finding ways to make our solar time scale and that of the clocks align.
Our time is generated by the work of around 700 “high-end” atomic clocks in around 85 national laboratories around
the world. Those labs send regular data to the International Bureau of Weights and Measures (BIPM) in France, which then calculates Coordinated Universal Time (UTC).
“UTC is really a paper timescale. It’s a calculation, and each of the labs gets informed of their offset, so what the gap is to UTC, and then there is the option to either slowly reduce that offset, or maintain it and inform the nation,” explains Dr Leon Lobo, head of the National Timing Centre (NTC) of the National Physical Laboratory (NPL) in the UK - one of the aforementioned labs contributing to the system.
NPL operates a number of “clocks,” which are currently hydrogen maser-powered, as well as a cesium fountain - a type of atomic clock that works by using lasers to cool and "toss" cesium atoms upwards, measuring the rise and fall due to gravity - which is used to calculate the span of a second in a highly stable way.
These clocks are so sensitive that even raising them up a meter can impact their operation, due to minute changes in gravitational potential. “We manage the temperature, the humidity, and vibration, and isolated plants manage the airflow. The electrical interferences are basically blocked out,” Dr Lobo says.
“The invention of the atomic clock, a clock more accurate than the rotating Earth itself, I think, is a profound idea,”
>>David Rooney
While highly accurate, Dr Lobo notes that all clocks “experience drift” at varying rates. The 1955 atomic clock, for example, had a drift of a second over a 300-year period - in other words, after 300 years, the clock would become inaccurate by one second.
“Our cesium fountains today have a drift of a second over 158 million years, and the clocks we are developing now - which will be next-generation optical atomic clocks - will be stable for the lifetime of the universe - or around 14 billion years. It’s not that the clock itself will last that long, but the stability is capable of it,” he explains.
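Those figures can be turned into fractional frequency errors with a few lines of arithmetic, using only the drift rates quoted above.

```python
# Convert "loses one second every N years" into a fractional frequency error
# and the equivalent time lost per day. Drift figures are those quoted above.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

clocks = [
    ("1955 cesium clock",       300),
    ("modern cesium fountain",  158e6),
    ("next-gen optical clock",  14e9),
]

for name, years_per_second_lost in clocks:
    fractional_error = 1 / (years_per_second_lost * SECONDS_PER_YEAR)
    lost_per_day = fractional_error * 86_400
    print(f"{name}: fractional error ~{fractional_error:.1e}, "
          f"~{lost_per_day:.1e} seconds lost per day")
```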
One misconception noted by Dr. Lobo is the role of satellites in timekeeping. “The element that is not known is that satellite systems are actually dissemination methods for time, and not sources of it.”
Global navigation satellite systems (GNSS), including GPS, Russia’s GLONASS, China’s BDS, and the EU’s Galileo, are all linked to our concept of time. These systems take the “time” data from the global laboratories and transmit it to the BIPM in France, but they are also the way that many data centers and related digital infrastructure access time.
The satellites have atomic clocks on board, which are synchronized with those on Earth, and disseminate readings to the world en masse. While our mobile phones will get their “time” from an internal clock and the Internet, GNSS plays a major role in providing the source material.
While GNSS is the typical method
used, it has its downfalls. Elena Parsons, strategic business development manager at NPL, explains that relying on GNSS can be risky. “GNSS signals are incredibly weak, and it’s quite easy to interfere with them - what we would call jamming, or even spoofing, where someone could change the time being delivered.”
The consequences of this would be significant. Without accurate timekeeping, data center networks would fail to function, and any industry reliant on digital infrastructure - which, in 2025, is almost every industry - would take a hit.
According to a report by London Economics, the economic loss for the UK from a seven-day GNSS outage would be an estimated £7.6bn ($9.5bn), or about £1.4 billion ($1.8bn) for a 24-hour outage.
Because of this, one of NTC’s main goals over the years has been to establish greater reliability and redundancy for the UK’s timing service. It is doing this by diversifying how time is transmitted, using fiber optic cables, communication satellites, terrestrial broadcasts, and radio signalling, as well as GNSS.
A Telehouse data center campus in London has taken advantage of this, and is now home to one of NPL’s service nodes, which delivers “continuous, assured timing signals over dedicated fiber optic cables that are traceable to UTC (NPL) and independent of GNSS” to the data centers and their customers.
NPL’s Parsons notes that the data center sector has been pretty receptive to the Time Service thus far. “One good thing is that the data center sector, in particular, is very familiar with redundancy. They have it in power infrastructure systems, in networks, so talking about it in a timing reference is resonating with them.”
When you arrive at the Telehouse London campus, you are immediately greeted by a large clock delivering time down to the last millisecond, directly sent by NPL. Watching the numbers rapidly ticking by as you sit in the waiting room creates an odd sense of urgency, but also demonstrates how seriously Telehouse takes its relationship with NPL.
Keeping time in the data center

Telehouse’s Sharp explains to DCD how the “Time Service” works.
“Essentially, there is a master clock and a backup (or slave) clock,” he explains. “Those are synchronized to ensure the time is accurate. We then take dual redundant fibers from each and bring them onto the site.

“We haven’t got a ‘clock’ as such on site, but we have a repeater that captures that timing essence. We have one in Telehouse South and Telehouse North, and those are then linked across so customers can choose between them or use both to have a resilient, redundant feed.”

Time is also used for navigation purposes. A GNSS receiver knows where an individual is through trilateration. Time data will be sent by at least four different satellites to the receiver, which, by measuring the latency, can then establish the exact distance from each satellite, thus revealing the location.
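The arithmetic behind that trilateration is straightforward: each measured delay becomes a distance at the speed of light, and four or more such distances pin down the receiver's position and its own clock error. The timings below are invented for illustration.

```python
# Toy illustration of turning GNSS signal delays into distances.
# Transmit/receive times are invented; a real receiver solves four or more
# such measurements simultaneously for position and its own clock error.

C = 299_792_458  # speed of light, meters per second

observations = [  # (transmit time, receive time) in seconds, per satellite
    (0.0, 0.070012),
    (0.0, 0.072345),
    (0.0, 0.068990),
    (0.0, 0.074501),
]

for sat, (t_tx, t_rx) in enumerate(observations, start=1):
    pseudorange_km = C * (t_rx - t_tx) / 1_000
    print(f"Satellite {sat}: delay {t_rx - t_tx:.6f}s "
          f"-> apparent distance {pseudorange_km:,.0f} km")
```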
Despite the service being available at the data center, it isn’t necessarily adopted by every customer. “We have customers who take GPS or GNSS antennas and put them on the roof,” Sharp says. “For example, the cloud providers often prefer to take a cookie-cutter approach to their operations around the world, and will just replicate it across all sites, using GPS antenna and converting the signal to the UTC time stamp.”
According to Telehouse Europe’s VP of sales, Will Scott, accurate timing is
prioritized by a number of its customers - including financial services companies, with high-frequency traders needing “100 microseconds of accuracy on a time stamp to UTC, with NPL’s service accurate to one microsecond,” as well as streaming platforms and live broadcasters.
While Telehouse offers time as a service, it is up to the customers within its data center to use it wisely.
SiTime provides products specifically designed to help keep time accurate through MEMS (microelectromechanical systems) oscillators.
“We are the heartbeat of electronics. Literally any electrical device that runs today needs timing in there, and because digital electronic signals are all zeros and ones, to make sense of those, you need a baseline reference, which is a clock,”
SiTime’s EVP of marketing, Piyush Sevalia, tells DCD
SiTime’s solutions are based on silicon, rather than the traditional Quartz, which the company says makes them more resilient and stable.
According to Sevalia, synchronization in the data center is important on many levels. He offers the example of having a multi-data center campus all working on a single task - for example, AI training.
“If your AI cluster has, say, ten clusters in one data center, or in multiple data centers, they all need to be synchronized with each other when you're parallel processing the AI training tasks.”
Computer networks are synchronized using the IEEE 1588 standard - also known as the Precision Time Protocol. While data centers will use this to remain in sync across clusters or even across buildings, Sevalia notes that they will also
have “localized time sources so that the time at the data center is very accurate.”
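The arithmetic at the heart of PTP is compact enough to show directly. The follower exchanges timestamped messages with its leader and, assuming a symmetric network path, derives both its clock offset and the path delay; the timestamps below are invented for illustration.

```python
# Minimal sketch of the IEEE 1588 (PTP) offset/delay calculation.
# t1: leader sends Sync, t2: follower receives it,
# t3: follower sends Delay_Req, t4: leader receives it.
# Timestamps (in seconds) are invented for the example; real PTP adds
# message formats, hardware timestamping, and servo filtering on top.

t1, t2 = 100.000000, 100.000350   # Sync: leader -> follower
t3, t4 = 100.001000, 100.001250   # Delay_Req: follower -> leader

offset = ((t2 - t1) - (t4 - t3)) / 2   # follower clock minus leader clock
delay  = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay, assuming symmetry

print(f"offset: {offset * 1e6:.0f} microseconds, path delay: {delay * 1e6:.0f} microseconds")
# The follower would then steer its clock by -offset.
```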
Data centers haven’t always needed localized clocks, however. “Today's bandwidth requires that pretty much all data centers need highly accurate local clocks,” Sevalia says. “Ten years ago, that was not the case. Maybe a couple of them may have needed it. Maybe they could have gotten by just synchronizing with the universal time that NIST (the US’ National Institute of Standards and Technology) puts out. Maybe they could have gotten by with that.”
But as demand for lower latency has grown, data centers have been forced to adopt higher bandwidth, thus demanding a more accurate timekeeping system.
The importance of this was reiterated to DCD by Yiming Lei, a doctoral researcher at Germany’s Max Planck Institute for Informatics.
“In a traditional data center, there are lots of machines connected together and, for example, for the management of workloads or applications, a synchronized clock is needed,” he says. “If there are distributed databases running in a data center, and different machines processing different user requests, those will need to have a timestamp for each request to order the requests.” To return to our previous analogy, the book pages need to be numbered.
“Because requests are handled by different machines, this process needs to be relatively synchronized to have a global order. For example, if you used it for financial purposes, the accuracy of this content is really important because you need to decide who to sell stock to, for example.
“The element that is not known is that satellite systems are actually dissemination methods for time, and not sources of it,”
>>Dr. Leon Lobo
“But, as long as it is a distributed application - which most popular applications running in data centers are - a synchronized clock can be used to some extent, but there are different levels of accuracy,” he explains.
At a high level, the traditional way of doing this is by exchanging messages with time stamps. Lei says that 20 years ago, packets would often be delivered with a timestamp coming from the software or operating system. Today, a timestamping function has been added to the network interface card or hardware, making them far more accurate.
Time stamping and accuracy extend beyond the computing done in a data center, however, to the actual functioning of the facility.
Dr. Luke Durcan is the system AI commercial and IP leader at Schneider Electric, a power technology company serving many data centers around the world. In Dr. Durcan’s experience, accurate time keeping has proved to be key to figuring out operational problems in data centers on several occasions.
The company’s meters and UPSs have internal clocks, meaning that when “incidents” arise, they can be traced back to a particular moment.
An example offered by Dr. Durcan was a repeating failure of cooling equipment at a client's data centers. Every day, at the same time, “30 percent of the units would just stop working,” he says.
“We looked at the meters, and we could see a major power quality event was occurring at exactly the same time, so we were able to associate the power events with the cooling units, and because of that, we were able to go to the utility and see where the issue was.
“It turns out, there was another huge data center that was doing load testing at
the same time and shedding 20MW at a time, and it was causing huge disturbance to the network.”
While in this case, the time stamping was important to be able to compare to our human perception of time, in other cases, the time itself is less important than ensuring different devices are synchronized.
“Most of the communication on a data center network is on Modbus, which is a very common and fairly straightforward protocol,” Dr. Durcan says. “Depending on when the asset is polled, that is the time it communicates - so, say I poll both a UPS and a meter at the same time, when it’s recorded in the system, it’s synchronized, but it’s not relative to NTP or a clock.
“We use block polling or sequence polling. So, say “zero time” is the poll of the meter, then zero plus one is the UPS. Those two readings will be registered as zero and one, and then any other polls continue consecutively,” he explains. “Most of the data is relative to the infrastructure.”
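A rough sketch of that sequence-based ordering is below. The device list and the poll function are hypothetical stand-ins; a real system would read registers over Modbus, but the point is that readings are ordered by their position in the poll cycle rather than by a wall-clock timestamp.

```python
# Hypothetical sketch of block/sequence polling: readings carry a relative
# sequence index within each poll cycle instead of an absolute timestamp.

DEVICES = ["meter", "ups", "cooling_unit"]

def poll(device: str) -> float:
    """Stand-in for a real Modbus register read; returns a dummy value."""
    return 42.0

cycle = []
for seq, device in enumerate(DEVICES):   # seq 0 is "zero time" for the cycle
    cycle.append({"seq": seq, "device": device, "value": poll(device)})

# Readings are synchronized relative to one another within the cycle,
# but carry no NTP/UTC time of their own.
for reading in cycle:
    print(reading)
```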
Then, there is “subcycle data” or log data, which is interpreted through log files instead of Modbus.
“What’s important from a clock perspective is making sure that those subcycle events are synchronized, so if an event happens on the meter that the UPS log will correspond from a time scale perspective. That’s where it gets interesting. It’s not unusual to get multiple power quality events occurring at the same time, and then you do need a very high accuracy of synchronization to determine what’s a transient, a surge, or a sag. It’s bad regardless, but there are granularities of bad and good.”
“Leap seconds can play havoc with digital systems, because suddenly you are adding an additional second into time, and digital systems have to ensure they implement it properly or everything can fall apart,”
>>Dr. Leon Lobo
Taking the leap
With so much relying on time and synchronization, the hope is that, provided everything operates accurately and consistently, issues shouldn’t arise.
While “drift” is a known and identified issue, another obstacle can cause a glitch in time’s arrow.
Since adopting atomic clocks, one of the ways that scientists have made them match up to our solar day is through the concept of the “leap second.”
“Leap seconds can play havoc with digital systems, because suddenly you are adding an additional second into time, and digital systems have to ensure they implement it properly or everything can fall apart,” Dr. Lobo tells DCD
The leap second is more or less as it sounds. When the disparity between UTC and the rate the Earth is spinning becomes too great, the International Earth Rotation and Reference Systems Service (IERS) adds a second - either as the last second of December or of June.
In total, 27 leap seconds have been added since 1972. In 2022, however, at the General Conference on Weights and Measures (CGPM), governments globally voted in favor of ending the leap second by 2035.
The move to be rid of the leap second coincided with the rotation of the Earth speeding up, rather than slowing down, effectively meaning that instead of adding a second, a second may need to be deducted.
The most recent leap second - added at the end of 2016 - caused a Cloudflare outage. The 2012 leap second caused a major Facebook outage as the social media company’s Linux servers became overloaded trying to understand why they were transported back into the past.
Despite some failures occurring, the majority of tech companies find a way to handle the untimely shift.
“Some do it in advance of when we implement it, which is the last second of either June or December, depending on when it’s decided. Google, for instance, ‘smears’ it over every second of that day, for example,” says Dr. Lobo.
“But nobody operates in isolation, particularly financial markets. They are global and are always interacting with other organizations in order to trade and transfer data and the like, and if they don’t do it in the same way, you would have massive sync issues.”
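Google has described its approach as a linear smear spread over the 24 hours around the leap second, stretching every smeared second by 1/86,400 so the extra second accumulates gradually. A simplified sketch of that arithmetic (not Google's production implementation) looks like this:

```python
# Simplified 24-hour linear leap smear: rather than inserting a discrete
# 61st second, the extra second is spread evenly across the smear window.

SMEAR_WINDOW = 86_400  # seconds over which the leap second is applied

def smeared_offset(seconds_into_window: float) -> float:
    """Extra time (seconds) already applied at this point in the window."""
    clamped = min(max(seconds_into_window, 0.0), SMEAR_WINDOW)
    return clamped / SMEAR_WINDOW

for point in (0, 21_600, 43_200, 86_400):  # start, 6h, 12h, end of window
    print(f"{point / 3600:>4.0f}h in: +{smeared_offset(point) * 1000:.0f} ms applied")
```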
Conversations as to how the leap second will be removed are ongoing. One solution is to extend the threshold, says Dr. Lobo, so instead of a leap second, there is a leap minute, for example, that happens every hundred-odd years.
While not approaching time from a technical perspective, DCD asked David Rooney his thoughts on the latest shift.
“My view has always been it's the job of the coders to make it work, because what the leap second does is ensure that the time on the clocks of people around the world, the time in civil life, is connected to
the rotation of the Earth and the passage of the sun through the sky, which is how we as humans experience time,” he says. “Even though we've invented the atomic clock as humans, we're still animals, and as animals, we experience time by rotation of the earth and then by the seasons.”
Rooney adds: “The engineers are clever, and they made a system that they could make work, and I believe that engineers are still clever, and they could still make it work, and we could retain the system. The argument's lost, fair enough. It's not the end of the world.”
For now, while the leap second hasn’t reared its head in nine years, the solution to removing it for good is up for debate.
Optical time and optical data centers
One thing is clear: The next generation of both clocks and data centers are being explored, and both of them share the same name. Optical.
In 2023, Google began working on a project dubbed “Mission Apollo.” The search and cloud giant wanted to replace traditional network switches with optical circuit switches, using light, instead of electrons, to send information, and created its own switches to do just this.
In traditional network topologies, signals jump between optical and electrical states, but many surmise that keeping these signals in an optical state for as long as possible will lead to
efficiencies. After all, light travels at, well, the speed of light.
While Google is embracing the technology, known as photonic networking, this is by no means the standard approach, and one key change that comes with it is the need for even greater time and synchronization accuracy.
Max Planck’s Yiming Lei studied this phenomenon in his paper Nanosecond Precision Time Synchronization for Optical Data Center Networks.
Speaking to DCD about his research, Lei explains that an optical data center network, with less transferring between electrons and light, can “scale to the end of Moore’s law, and it’s much more energy efficient than traditional electrical switches.”
He continues: “The reason it requires time synchronization is related to how this network works. It doesn't check the packet; it just forwards based on the current connection. Optical switches change their internal topology over time and forward accordingly, which means the end point of this network needs to know the current configuration of optical switches, and thus they need to have a synchronized clock to what configuration it has at a particular point in time.
“It needs to be highly synchronized. The current, more advanced, designs of optical data center networks tend to reconfigure this optical switch to the nanosecond level, and that’s as far as we can go right now.”
Simply put, better and more accurate clocks are needed, and Lei’s research established that they could reach a 28-nanosecond sync accuracy with their implementation of Nanosecond Optical Synchronization.
SiTime, too, has seen some interest from data centers looking to use optical networking solutions.
Sevalia tells DCD that the company currently has “a bunch of customers who are using us in optical modules” for the reason that their technology has a “rejection of power supply noise and minimizing its impact on the timing signal that is much better than the quartz devices.”
The world of photonic or optical networks is garnering increasing attention, with the likes of Oriole Networks, Lightmatter, and Xscape Photonics heavily investing in
developing and expanding the technology. But for now, it remains somewhat of a niche approach.
With the technology needing more accurate clocks, the pursuit to push beyond cesium devices is also ongoing.
SiTime’s Sevalia tells DCD that timekeeping devices come in various forms that can be ranked by accuracy and stability: rubidium, cesium (the current timekeeping solution used), and optical - the clocks that NPL is currently developing, which use neutral strontium atoms held in an optical lattice potential.
It is these clocks that Dr. Lobo says have the potential to remain stable for the lifetime of the universe, and are being developed for various applications: quantum sensing, synchronization of high-speed networks, space science, and tests of fundamental physical theories.
It is when talking about this next generation of clocks that a gleam of unrestrained excitement emerges from Dr. Lobo.
“The reason why we are developing the next generation of clocks beyond cesium is, firstly, because we can, but also because they are demonstrating stability better than what is the primary standard at the moment,” Dr. Lobo says.
“We are also looking at redefining the second within the next decade, moving from a cesium hyperfine transition to an optical transition in strontium or ytterbium or a combination of different elements, and it's absolutely crucial because, from the point of view of all our use cases, most are already in the microsecond or nanosecond range, and there are many that are shifting beyond that as well.
“We will always need more stringent time, more precision, and breaking down events into shorter and shorter fractions of a second. In order to be able to do that, the highest-end clocks and the national metrology institutes need to be several orders of magnitude better.”
On a human level, introducing optical clocks and defining a new “second” will not have a material change on our everyday lives. For most people, we can choose to simply ignore it. But that ignorance of time is exactly what Dr. Lobo wants to change.
“Unfortunately, it is very much that invisible utility that supports everything, and no one really stops to consider where they get their time from, or what they would do if they lost it,” he says.
Capacity Range
250kW - 550kW
500MW+ Capacity Installed
Globally
Indirect Evaporative and Dry Air-Side Cooling
Facilitates ultra-efficient, high-temperature liquid cooling by removing the need for low-temperature chilled water for hybrid-cooled environments.
Rapidly increases speed to market by reducing installation cost and complexity. Removes requirement for chillers and chilled water pipework on site for air-cooled loads, simplifying the design, build, startup and commissioning processes. Complete package cooling system within a single unit.
By utilising an incredibly fine mist at peak ambient temperatures, the Zero has optimal thermal management, with a 90 percent reduction in water consumption and a WUE of just 0.001 litres per kWh in the UK.
The Zero’s built-in water storage ensures 24-hour redundancy in case of water failure, all with its own water treatment onboard with absolute biological filtration (0.2 microns) to eliminate legionella risk.
In perfect harmony: How Emerald AI is turning data centers into flexible grid assets
DCD speaks to Dr. Varun Sivaram, CEO of Emerald AI, on how it is using AI to redefine utilities' relationship with data centers
An orchestra thrives on harmony, every instrument in tune and in lockstep with the conductor's cues. However, even if one note drifts out of sync, an entire symphony can be ruined.
The electrical grid is very much the same. Synchronization is essential to ensure that every generator and power source delivers electricity in exact alignment with the system’s frequency, voltage, and phase angle. Failure to do so can trigger severe issues, including grid instability, equipment damage, and mechanical stress.
As energy demand for AI compute continues to skyrocket, conventional wisdom has placed these facilities at odds with the grid, a system never designed for their unique power profiles. The situation is worsened by the cumbersome pace of grid infrastructure expansion, which lags far behind the insatiable appetite for power emanating from the data center sector. In hotspots like Virginia, data centers are waiting as long as ten years just to secure a connection, while local ratepayers are saddled with the growing costs of building out new transmission and power infrastructure to meet soaring demand.
For Dr. Varun Sivaram, CEO of Nvidia-
backed startup Emerald AI, however, compute is the solution to this problem, and all it needs is the right conductor. Launched earlier this year following a $24 million seed round, Emerald AI has positioned itself to hold the baton in this new form of orchestra. The company believes AI data centers don’t need to be rigid, large loads on the grid, and instead can become flexible, grid-supporting assets, bringing clarity to this electronic symphony.
“Some workloads malfunctioned mid-run. Emerald Conductor adapted automatically, keeping the rest of the cluster stable. It showed the power of autonomous orchestration,”
>>Dr. Varun Sivaram, CEO, Emerald AI
A conductor on the grid
At the core of Emerald AI is its Emerald Conductor platform. Described by Sivaram as “an AI for AI,” the system orchestrates thousands of AI workloads across one or more data centers, dynamically adjusting operations to respond to grid conditions while ensuring the facility maintains performance.
The system achieves this through a closed-loop orchestration platform comprising an autonomous agent and a digital twin simulator. Designed to work independently and to rely on very simple inputs, the Conductor system tags jobs with priority or tolerance for slowing. It then dynamically manages compute demand, slowing specific processes, shifting workloads between locations, or adjusting chip clock frequencies, depending on performance requirements and grid signals.
“Emerald Conductor orchestrates AI factories, reduces grid stress, and maintains performance,” explains Sivaram. “We make the AI data center flexible. We accept grid signals, forecast, and orchestrate.”
Zachary Skidmore, Senior Reporter, Energy and Sustainability

Therefore, if a grid operator signals that it needs the power load to drop, the Conductor can modulate workloads and achieve the reduction precisely and instantly. Conversely, when grid operators offer incentives for frequency regulation or load shifting, the platform allows data centers to monetize their flexibility, providing a dual benefit to both the grid and the data centers themselves.
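Emerald has not published the Conductor's internals, but the underlying idea of flexibility-aware curtailment can be sketched in a few lines: jobs carry a flexibility tag, and when a grid signal requests a reduction, the most tolerant work is slowed first. Everything below - the tiers, power figures, and throttling mechanism - is invented for illustration, not Emerald AI's implementation.

```python
# Invented illustration of flexibility-tiered curtailment in response to a
# grid signal. Not Emerald AI's algorithm: job names, wattages, and the 50
# percent throttling cap are assumptions made for the example.

jobs = [
    {"name": "frontier-training", "power_kw": 400, "flex": "none"},
    {"name": "batch-finetune",    "power_kw": 250, "flex": "medium"},
    {"name": "offline-eval",      "power_kw": 150, "flex": "high"},
]
FLEX_ORDER = {"high": 0, "medium": 1, "none": 2}  # curtail most flexible first

def respond_to_grid_signal(jobs, required_reduction_kw):
    """Throttle the most flexible jobs until the requested reduction is met."""
    shed, actions = 0.0, []
    for job in sorted(jobs, key=lambda j: FLEX_ORDER[j["flex"]]):
        if shed >= required_reduction_kw or job["flex"] == "none":
            break
        cut = min(job["power_kw"] * 0.5, required_reduction_kw - shed)
        shed += cut
        actions.append(f"slow {job['name']} by {cut:.0f}kW")
    return shed, actions

shed, actions = respond_to_grid_signal(jobs, required_reduction_kw=200)
print(f"Shed {shed:.0f}kW: " + "; ".join(actions))
```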
Sivaram claims that what really sets Emerald AI apart from its competitors is its software-only approach. This, he argues, allows the Conductor to be completely hardware-agnostic and scalable across existing and future facilities, which “allows us to deploy across hundreds or even thousands of data centers without redesigning facilities from scratch,” he says. Notably, the system requires no access to sensitive model or training data, alleviating concerns about data protection, and can therefore be deployed across most, if not all, data centers.
Practice makes perfect
This all sounds very good in theory, but what about in practice? To prove its efficacy, Emerald AI has conducted two commercial-scale demonstrations in Phoenix, Arizona, and Chicago, Illinois.
In Phoenix, the company partnered with Oracle on a proof-of-concept demonstration. During the test, the
“We make the AI data center flexible. We accept grid signals, forecast, and orchestrate,”
>>Dr. Varun Sivaram, CEO, Emerald AI
Conductor was able to modulate real AI workloads, achieving 25 percent power reduction over three hours while maintaining workload performance. For Sivaram, the test represented “a clear, measurable proof point that these ideas work in the field - not just in the lab.”
In Chicago, the company raised the stakes. Unlike the Phoenix demonstration, where the team was aware of the workload profiles before the demo, in Chicago, the Conductor handled unknown, random workloads. This demonstration presented a significantly more challenging environment and, according to Sivaram, the platform proved resilient.
“We were pleasantly surprised at how robust the system was,” he recalls. “Some AI workloads malfunctioned mid-run. Emerald Conductor adapted automatically, ensuring stable power consumption that remained below the grid-defined power response target. It showcased the power of autonomous AI orchestration to gracefully maintain
workload performance and meet grid needs.”
The company intends to undertake several more demonstration projects in the US, including a geographic workload migration project later this year. It is also engaging with regional transmission operators, such as PJM, to explore a largescale rollout. In October, Nvidia announced it would deploy the Conductor platform at a data center it is building with Digital Realty in Manassas, Virginia. It was also reported that the GPU giant has invested in Emerald AI as part of an $18 million funding round.
Emerald’s next project, however, has an international flavor.
International ambitions
While the US remains Emerald AI’s primary market, the company is also looking overseas. In August, it announced a partnership with National Grid, the UK's electrical and gas system operator, and as this magazine goes to press a live trial is taking place to test Emerald’s system under UK conditions.
“This is our first step internationally. We want Emerald Conductor to become a global standard for grid-friendly AI data centers,” Sivaram says.
So why the UK? For Sivaram, it was down to three reasons. First was National Grid's forward-looking strategy, with the company openly aware of the possibilities the Conductor system could offer utilities to support the grid while maintaining a consistent connection between data centers.
A point made by Steve Smith, chief strategy and regulation officer at National Grid, at the time of the announcement: “As the UK’s digital
economy grows, unlocking new ways to flexibly manage energy use is essential for connecting more data centers to our network efficiently.”
The second reason was National Grid's transatlantic stature - as an American company active in both the UK and US markets - and its commitment to the technology. “They’ve invested in the program and agreed to a demo, which makes them the ideal partner for our first international launch,” says Sivaram.
The final, and most important, factor, notes Sivaram, was access to the NextGrid Alliance, a consortium of 150 utilities worldwide. By gaining access to such a robust partner network, the deal
“AI factories can be the best friend a grid has ever had. If orchestrated correctly, you get a more reliable, cleaner, and more affordable energy system while powering the AI revolution,”
>>Dr. Varun Sivaram, CEO, Emerald AI
could serve as a springboard for further international projects.
This aligns with the company’s broader partnership approach. Emerald AI has already leveraged Nvidia’s cloud partner network to test its technology across US data centers, laying the groundwork for broader deployment and continued global collaboration. Through the National Grid deal, Emerald AI hopes to exert the same leverage across the utility sector.
Net positive
For National Grid, the decision to partner with Emerald AI was based primarily on the potentially groundbreaking impact it could have on how utilities view data centers on the grid.
Therefore, for Sivaram, demonstrating the platform's capabilities to as many stakeholders as possible is crucial in redefining the relationship between utilities and data centers, towards a future in which utilities could begin competing to connect data centers to their networks.
“Flexible data centers become assets. Utilities may even compete to connect you first,” says Sivaram. Subsequently, we could see data centers that use systems such as the Emerald Conductor benefit from “advanced interconnection,” argues Sivaram, jumping ahead in the queue for grid access, cutting connection times by months or even years, acting in a similar
vein to battery energy storage systems (BESS), which earn priority access and lower costs for their role in stabilizing the grid.
In addition, with increased flexibility, the demand for massive build-outs of transmission infrastructure could abate, “making it possible to connect far more data centers with less immediate grid expansion,” contends Sivaram. He explains: “I’m not saying we won’t prudently upgrade networks - we will - but if data centers can be reliably flexible, we can avoid a lot of rushed, expensive reinforcement and reduce upward pressure on consumer electricity prices.”
A vision for the future
Looking ahead, Sivaram envisions two potential futures. In one, inflexible data centers overburden the grid, leading to blackouts, higher costs, and community resistance. In the other, data centers become active participants, stabilizing the grid, reducing costs, and driving economic development.
“AI factories can be the best friend a grid has ever had. If orchestrated correctly, you get a more reliable, cleaner, and more affordable energy system while powering the AI revolution,” Sivaram says.
However, with the company still in its infancy, we must wait to see whether it can, in fact, hit the right notes and become the power grid's perfect conductor.
A direct approach
Is the D2D trend bringing innovation to the L-Band by satellite?
While the satellite industry is by no means old, it has been around long enough to have transitioned through a few hardware shifts, driven by innovation and developing consumer demand.
Legacy geostationary satellites supporting L-Band communications are being eclipsed by a fascination with direct-to-device (D2D) low Earth orbit (LEO) constellations. But though these craft offer more throughput, the price tag for a whole fleet of orbiting satellites may not stack up with current demand. Direct-to-device describes a technology in which satellite service skips its typical relationship with a ground station
gateway or a mobile satellite terminal to connect directly to its destination, typically a smartphone, or similar piece of communication equipment, though the trend extends to connected cars, drones, and other advanced technologies that may end up anchoring its relevance in future years.
Starlink’s D2D fleet alone has leapt from more than 100 specialized satellites in 2024 to 650+ as of October, as SpaceX seizes on a trend it sees as decisive and mirrors moves from players such as AST SpaceMobile and Skylo. In August, EchoStar ordered a D2D constellation from MDA Space for $1.3 billion. Starlink’s up-and-coming peer Amazon Leo, formerly Project Kuiper, has been developing its own D2D technologies since at least 2024.
“While there are lots of assets customers are interested in tracking, the economics simply haven’t worked when the satellite terminal costs several thousand dollars,” Andy Kessler, vice president of enterprise and land mobile at Viasat, told DCD earlier this year. “If the satellite terminal now costs tens of dollars, that’s a different story.”
Kessler spoke of Viasat’s substantial legacy L-Band commercial business, which consists of a variety of
Laurence Russell Contributor
“Satellites later in their lifespan can also still provide significant capabilities,”
>>Kevin Cohen, Viasat
applications in global satellite phone coverage and IoT, as well as emerging opportunities in narrowband non-terrestrial networks. With the D2D trend removing the need for conventional satellite terminals, this sector may prove very relevant.
The L-Band describes the range of radio frequencies spanning 1-2 gigahertz, at the top end of the ultra-high frequency band, widely adopted for supplementary downlink via satellite for terrestrial mobile and fixed communications networks. Prominent operators in the space include Inmarsat, merged with Viasat in 2023, Iridium Communications, Ligado Networks, and Thuraya.
That demand reinforces the utility of L-Band satellites across the industry that have passed their prime but, given their operational age, may yet offer an outsized return serving low-bandwidth applications. “Satellites later in their lifespan can also still provide significant capabilities,” explains Kessler’s colleague Kevin Cohen, vice president of direct-to-device at Viasat. “Historically, we’ve been maximizing the use of our assets in space to reliably extend their design lifetime.”
The evolving role of L-Band
Recent regulatory developments around spectrum have sparked debates and differences between players. In November, the Telecom Regulatory Authority of India received conflicting messages from telcos and satellite providers seeking to inform a consultation on the use of the L- and S-Bands, and their necessity for the mobile-satellite service industry. SIA-India, a group representing satellite companies, stated that the frequencies were not up for auction anywhere in the world, and argued that considering doing so would affect the growth of the nation’s fledgling satellite sector.
The drama occurred in reaction to a request made by Reliance Jio, Bharti Airtel, and Vodafone Idea, which appealed that the L- and S-Bands should be considered for mobile spectrum planning and auctions, opening up 1,200MHz of spectrum for licensed mobile services. The request challenged a prior piece of legislation, the Telecommunications Act 2023, which concluded these bands should be assigned through administrative allocation, not bidding wars.
The GSM Association (GSMA) backed the regulator’s intention to award spectrum to operators in contiguous blocks, citing India’s population density, ballooning data requirements, and potential for 6G expansion.
The conversation around spectrum sales may have been influenced by EchoStar’s $17 billion sale of satellite spectrum to SpaceX in early September. The spectrum sold covers what is referred to as the H block, comprising frequencies between 1,915MHz and 1,920MHz, used widely for 4G and 5G mobile voice and data transmission.
"We're so pleased to be doing this transaction with EchoStar as it will advance our mission to end mobile dead zones around the world," Gwynne Shotwell, SpaceX president and chief operating officer, said in a statement at the time.
Shotwell recommitted to SpaceX’s plans to produce direct-to-cell satellites, enabling coverage for millions of customers worldwide. Who the deal ultimately benefits remains to be seen.
Rumors continue to circulate on a collaboration between Apple and Starlink, with Musk having made several attempts to lock the iPhone maker in as an anchor customer for the Starlink network. In late 2024, Apple announced plans to commit $1.5 billion to Globalstar to use its L-Band network to equip iPhones for emergency communications. Globalstar has since reportedly been exploring a sale to SpaceX, according to sources cited by Bloomberg.
The apparent enthusiasm to occupy this spectrum and the technologies that make use of it is substantiated in the ambition of its applications.
Rise of the Connected Cow
Viasat’s Kessler describes IoT for cattle herds as an outstanding example of L-Band’s utility.
“In livestock tracking, the entirety of data transmitted over the course of
a month might very well be less than a kilobyte. That becomes a really attractive way for a ranch to keep track of its assets in a location either exclusively or primarily outside cellular coverage.”
The question of which farmers have the liquidity to invest in satellite tracking for cows has invited scepticism in the past, though as the value of beef and other livestock produce ticks upwards, and tracking technologies simplify, such products make more financial sense. Livestock farmers in Central Asia, for example, have long made pastoral herding their way of life, but suffer from poor regional connectivity.
“There’s a growing reliance on satellite-enabled IoT devices for water management, crop and soil analysis and remote equipment inspections,” Cohen adds. “Our recent field research shows that satellite connectivity now powers half of farmers’ agricultural IoT projects.”
Cohen echoes the argument about falling cost barriers through the removal of the middleman in expensive satellite terminals, predicting it could invite a greater uptake of IoT sooner than we think.
Though for some, the cost pressures that have long defined the behaviours of farmers and other prospective customers can’t be discounted.
“Traditional GEO satellite infrastructure and communication protocols that deliver always-on real-time connectivity are not well-suited to IoT, as the infrastructure cost makes meeting the cost sensitivity of many IoT use cases, such as agri-tech, environmental sensing, water management, and remote infrastructure monitoring impossible,” says Nicola Russo, vice president of commercial operations at Myriota.
Myriota, which is based in Adelaide, South Australia, launched a narrowband IoT satellite with Viasat in March this year to support its Myriota HyperPulse service, which relies on the L-Band network Viasat provides.
Simplifying Emergency Communication
In October this year, Iridium partnered with Qualcomm to integrate data services into the Snapdragon Mission Tactical Radio, with the intention of bringing them to US government customers and their partners. Built on secure L-Band
Iridium satellites, the collaboration supports handheld and mounted radios, capable of reliable performance in areas of congested, compromised, or absent terrestrial networks. Their work in the field maintains L-Band’s reputation for delivering mission-critical communications.
Iridium did not immediately respond to DCD’s request for comment.
“Millions of consumer mobile devices have already accessed satellite SOS connectivity in North America in the last two years,” Cohen explains. “As industries become more and more reliant on real-time data from remote assets, we anticipate demand to rise, particularly given what was once perceived as a prohibitive, costly investment could be more cost-efficient. First responders operating in a disaster zone where cell towers are down can establish a reliable means of sending critical information through D2D services. The same goes [for the military] in areas where harsh environmental conditions or remote locations mean terrestrial networks are unavailable or compromised.”
Viasat acknowledged it was also
exploring the use of D2D technologies for government and defense applications, with push-to-talk (PTT) being an example of where D2D 5G technologies and solutions over satellite could enable greater scale and cost-efficiency for large-scale operations and missions.
“We see global demand here,” Myriota’s Russo agrees, “including [telemetry usecases for mission-critical customers in] autonomous monitoring of unattended infrastructure, asset tracking, and environmental monitoring.”
Cell power in the fast-developing world
Also in October, Viasat ran a D2D technology demonstration in Mexico, enabled by 3GPP standards-based non-terrestrial RAN and core infrastructure from its partner Skylo, demonstrating native SMS messages on an Android smartphone sent via its I-4 F3 satellite – a first for the country.
“We see growing opportunities for consumer mobile D2D services in areas that are currently underserved by terrestrial connectivity,” Cohen told
“There’s a growing reliance on satellite-enabled IoT devices for water management, crop and soil analysis,”
>>Kevin Cohen, Viasat
DCD. “Many regions around the world, including India, Asia, the Middle East, Africa, and across North America, don't have reliable cell coverage everywhere, particularly in suburban and rural areas. In fact, almost every country has at least some part of its landmass in which cellular networks are patchy or unavailable. This challenge can be compounded in areas prone to extreme weather events, which can damage or interrupt cellular networks. This leaves populations unconnected, impacting everything from emergency services to basic communication.”
Cohen points to India, the wider Asian continent, and Africa in particular as home to some of the fastest-growing economies in the world, with industries growing at a pace that generates heated demand for connected consumer and IoT devices.
“Satellite messaging offers a scalable and cost-effective solution to meet this demand, fostering digital inclusion without requiring the large upfront investment in terrestrial infrastructure that may be required,” he says.
Viasat is a founding member of the Mobile Satellite Services Association, a nonprofit seeking to further the progress of direct-to-device and IoT technologies, comprising communications companies across the North American, European, and MENA regions. These fast-developing regions are sites of strong demand for Myriota, too, Russo says.
“Due to the lack of reliable NB-IoT and LTE coverage, and the affordability of our offering when compared with traditional satellite alternatives, LATAM, Southeast Asia, and Africa are all regions of high growth and demand for Myriota, and we see this continuing over the next five to 10 years,” he says.
True commissioning isn’t delivered at completion, it’s established on day one.
It demands continuity, independence, and verification at every stage.
We exist to prove that every system performs exactly as intended, not just on paper, but under real conditions.
Our role isn’t to only manage the process - it’s to validate the outcome, and certify the result.
Discover how we protect performance at global-cxm.com
The next era of silicon?
Elad Raz, founder and CEO of Israeli chip startup
NextSilicon, on how his firm can take on Nvidia. Maybe
Like most chip startups, the inception point for NextSilicon was simply “can we run compute in a different way?”
For Elad Raz, the company’s founder and CEO, the answer was yes, and thus, in late 2017, his company was born. Raz is an engineer by trade, having previously founded a firm offering professional services in the high-performance computing (HPC), cryptography, and high-performance networking space. That organization was subsequently acquired by Mellanox, with Raz leaving before the networking company was itself acquired by Nvidia in 2020.
In the early days of NextSilicon, Raz says it was mainly him just looking at
old attempts at dataflow architecture and asking why it had failed to become a commercial success.
The concept was pioneered by researchers at MIT in the 1970s and early 1980s. Unlike traditional von Neumann or control flow architecture, which executes instructions sequentially, dataflow architecture executes instructions as soon as the required data becomes available, allowing independent operations to be executed in parallel.
With dataflow architecture, proponents say, there is also no shared memory bottleneck, as data is passed between processing nodes when operations are triggered. However, prior attempts to build content-addressable memory (CAM) – a type of memory used in certain high-speed search applications – large enough to hold all the program dependencies have failed.
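To make the idea concrete, here is a minimal, purely illustrative sketch of dataflow-style execution in Python: each operation fires as soon as its inputs exist, rather than in program order, so independent operations can run side by side. The graph, node names, and scheduler below are hypothetical examples, not a description of NextSilicon's hardware.

```python
# Toy dataflow executor: each node fires as soon as all of its inputs are
# available, rather than in a fixed program order. Purely illustrative.
from concurrent.futures import ThreadPoolExecutor

# Graph: node name -> (function, names of its inputs). "a" and "b" are initial values.
graph = {
    "x": (lambda a, b: a + b, ["a", "b"]),   # needs a and b
    "y": (lambda a: a * 2,    ["a"]),        # independent of x, so it can fire alongside it
    "z": (lambda x, y: x * y, ["x", "y"]),   # fires only once x and y exist
}

def run_dataflow(graph, initial):
    values = dict(initial)
    pending = dict(graph)
    with ThreadPoolExecutor() as pool:
        while pending:
            # Every node whose inputs are all available fires in this wave.
            ready = {n: spec for n, spec in pending.items()
                     if all(d in values for d in spec[1])}
            futures = {n: pool.submit(fn, *(values[d] for d in deps))
                       for n, (fn, deps) in ready.items()}
            for name, fut in futures.items():
                values[name] = fut.result()
                del pending[name]
    return values

print(run_dataflow(graph, {"a": 3, "b": 4}))  # x=7, y=6, z=42
```

In a real dataflow machine, this readiness tracking is done in hardware rather than by a software scheduler, which is where the CAM capacity problem described above historically came in.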
Raz believes one reason dataflow architecture has never taken off before is because of how the chips are programmed, and if that problem could be solved, the architecture would provide an obvious solution for powering compute-intensive workloads, such as HPC and AI.
“The one thing that I’ve seen with the dataflow architecture is that it’s super hard to program in the right way,” he explains. “That is why dataflow architecture failed, because you never know how the data flows within the algorithm.”
Charlotte Trueman Compute, Storage, and Networking Editor
Credit: NextSilicon
Israel-based NextSilicon was founded on Raz’s realization that, in parallel software, only a small portion of the code runs the majority of the time. “So I thought, what if we invent a processor core that gives love to every instruction the same way?” he says.
It should be noted that hardware that allows you to run operations in parallel is currently available. NextSilicon just thinks its offering is better. GPUs are hindered by the need for specialized programming languages in order to be used most effectively, the company argues, while fully-optimized ASICs designed specifically for individual use cases come with high price tags, long development cycles, and hardware offerings that are largely inflexible.
The solution Raz came up with was to synthesize the workload into something like an FPGA (field-programmable gate array), a type of circuit that can be programmed after manufacturing, and then essentially use trial and error – or, in this case, continuously learn from the mistakes made by the synthesizer and reconfigure the chip in response – to discover all the different computational kernels and optimize them in the way that is best for your data.
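The feedback loop Raz describes can be thought of as profile-guided reconfiguration: run the workload, see where the time actually goes, and re-map the hottest kernels onto the reconfigurable fabric before the next run. The toy sketch below illustrates that loop only; the function names, speed-up factor, and "fabric slots" are invented for the example and do not reflect NextSilicon's actual toolchain.

```python
# Hypothetical profile-guided reconfiguration loop, in the spirit of the approach
# described above. All names and numbers are invented for illustration.
def profile(workload, mapping):
    """Pretend to run the workload and return time spent per kernel (ms)."""
    # Kernels currently mapped to the reconfigurable fabric run ~5x faster in this toy model.
    return {k: t / 5 if k in mapping else t for k, t in workload.items()}

def reconfigure(timings, fabric_slots):
    """Pick the hottest kernels to map onto the fabric for the next run."""
    hottest = sorted(timings, key=timings.get, reverse=True)
    return set(hottest[:fabric_slots])

workload = {"stencil": 40.0, "fft": 25.0, "reduce": 5.0, "io": 2.0}  # baseline ms per kernel
mapping = set()                      # nothing accelerated on the first run
for step in range(3):                # run, learn from the profile, re-map, repeat
    timings = profile(workload, mapping)
    print(f"run {step}: total {sum(timings.values()):.1f}ms, accelerated {sorted(mapping)}")
    mapping = reconfigure(timings, fabric_slots=2)
```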
Making compute intelligent
NextSilicon has called the architecture its accelerators are based on ‘Intelligent Compute,’ describing it as “a novel and original computing architecture” that delivers increased performance-per-watt when compared to traditional GPUs and high-end CPUs, while simultaneously cutting power consumption and related costs.
The company’s Maverick-2 Intelligent Compute Accelerator (ICA), the chip that has to deliver on this bold claim, was
“That is why dataflow architecture failed, because you never know how the data flows within the algorithm,”
unveiled in October 2024 and is built on TSMC’s 5nm process technology. It is available in a single-die PCIe format with 96GB of HBM3e memory and a maximum power consumption of 300W; the dual-die Open Accelerator Module (OAM) version also uses 5nm process technology, but contains 192GB of HBM3e and has a maximum power consumption of 600W.
Designed for workflows that run in HPC and AI environments, the Maverick-2 ICA also supports popular programming languages and requires no code or software stack changes to deploy.
In October 2025, the development of a new RISC-V test chip, dubbed Arbel, was also revealed. While few details regarding that new chip have been released yet – following the initial announcement, NextSilicon said additional details on Arbel benchmarks will be released as progress is made beyond the test-chip phase – Raz says the company is “very proud” of its RISC-V core and that the chip will compete with Intel and AMD upon its release.
One of the challenges with designing custom chips is trying to keep up with the ferocious pace of change currently sweeping through the industry and
>>Elad Raz, NextSilicon
the growing demand for custom hardware. Earlier this year, an executive from Google said that, even moving at the speed of light, getting specialized architecture from concept to live production still takes two and a half years, even for the world’s best design and engineering teams. And that’s if you get everything right, which very few do.
Raz acknowledges this and says it is something NextSilicon has had to navigate, noting that the company started designing Maverick three years ago, meaning it needed to estimate where the competition would be almost half a decade into the future – something which he claims, thus far, it has managed to get right.
However, NextSilicon is not the only company currently claiming to be developing a chip that will change the infrastructure landscape. What sets it apart from the rest, Raz claims, is its combined software-hardware play.
“Look at every AI ASIC out there – Nvidia, AMD, Intel, the AI ASIC startups – all of them say: ‘Behold, here is a sophisticated chip! We have something in the hardware that is better, is analog, and has a sophisticated interconnect’,” he says. “Each one has done it, and all of
Elad Raz
Credit: NextSilicon
them are saying: ‘We are hardware gurus, we are going to build better hardware than the current leader,’ which, right now, is Nvidia.
“What we are saying is: ‘We have this software-hardware play, and it's an architecture from scratch,’ meaning that if one of those companies wants to go the NextSilicon way, they need to start from the very beginning. To gain configurability, you cannot take a processor core and reconfigure that, you cannot take a tensor core and reconfigure that. So, [what NextSilicon is doing] it's very different.”
The market’s continued desire for software innovation has also been beneficial to the company, given the somewhat daunting pace of change in the hardware market, largely driven by companies like Nvidia and AMD committing to a yearly release cycle for new products.
“No one knows what the next algorithmic approach might be – we already saw a transition from convolutional neural networks to transformer-based models, and we could wake up tomorrow, and there’s the next new thing,” he says. “I think that future-proof capability, so to speak, that we build the foundation around with our software stack, allows us to more easily adapt, and should give us a leg up with customers who want to try something new, but are already a bit nervous around what might come tomorrow from a software innovation perspective, which will always outpace the hardware innovation.”
Conquering HPC
Right now, NextSilicon is all in on the high-performance compute (HPC) business.
Although the decision was made eight
“I can show you the seed investment deck I used eight years ago, it says: Conquer HPC, then move to the commercial market with AI/machine learning. And that's exactly where we are now,”
>>Elad Raz, NextSilicon
years ago to go after the supercomputing industry, Raz says the company is still focusing its efforts on educating the market on its novel architecture approach, as it can be “super hard” for some people who are already accustomed to a particular processor code to understand what it is that NextSilicon is proposing.
“We are writing a lot of technical papers and hoping to explain why this is the future… why it's the only way to get 10x performance per watt on the power consumption.”
Scepticism towards its approach from some circles hasn’t stopped NextSilicon from securing a number of rather high-profile customers and investors – to date, the company has raised more than $303 million across four funding rounds.
In 2024, NextSilicon announced it had partnered with Sandia National Labs and Penguin Solutions to deliver an Advanced Architecture Prototype System, as part of the lab’s National Nuclear Security Administration (NNSA) platform strategy. At the time, Sandia said it was also planning to build a novel architecture for its Spectra supercomputer using Maverick-2 as part of its Vanguard-II program.
NextSilicon also counts US government departments among its customer base, in addition to a number of unnamed academic research institutions and financial services, energy, manufacturing, and life sciences organizations.
Raz labels those operating in the HPC space as “true early adopters,” adding
that the “big machines” the company is building with the US Department of War and various national laboratories are allowing it to learn a lot about its own scale-out story and how to develop its technology the right way.
“I can show you the seed investment deck I used eight years ago, it says: Conquer HPC, then move to the commercial market with AI/machine learning. And that's exactly where we are now,” he says.
After securing “big wins” in the HPC market, he is confident NextSilicon will be able to do the same when executing the next part of its business plan. The main difference is that for AI and machine learning workloads, you need a lower precision - FP4 or FP8 instead of FP64, for example - and Raz believes this will be the company’s next evolution.
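For a sense of the gap between those formats: FP64 spends 64 bits per value, while FP8 and FP4 spend eight and four, trading numerical range and accuracy for memory footprint and throughput. The sketch below uses the commonly cited E4M3 and E2M1 bit layouts as indicative figures; it is a generic illustration, not a statement about NextSilicon's designs.

```python
# Rough comparison of floating-point formats used in HPC versus AI inference.
# FP8/FP4 layouts follow common industry variants (E4M3, E2M1); values are approximate.
formats = {
    #  name:       (total bits, (sign, exponent, mantissa), approx. max value)
    "FP64":        (64, (1, 11, 52), 1.8e308),
    "FP8 (E4M3)":  (8,  (1, 4, 3),   448.0),
    "FP4 (E2M1)":  (4,  (1, 2, 1),   6.0),
}

for name, (bits, layout, max_val) in formats.items():
    per_gb = (1 << 30) * 8 // bits   # how many values fit in 1GB of memory
    print(f"{name:12} {bits:2} bits  s/e/m={layout}  max~{max_val:g}  values per GB={per_gb:,}")
```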
Looking ahead, Raz acknowledges that the company’s customers aren’t about to immediately build a state-of-the-art cloud-scale system using NextSilicon chips in the next three months, but says it is on a journey to get there.
“It's a journey that we are doing together with our partners, and big news is coming,” he says. “We’ve unrolled our chip, and now we are doing like a rolling thunder of news that goes deeper into our technology, which will allow customers to write papers and give testimonials about how they see that progression with NextSilicon.
“We are competing against giants that have been in the market for a long time, but we have something that is so novel and so unique that some of the customers experiment with us, they say: ‘Can I build a cluster that is 10 times faster in terms of performance and power consumption?’ To which we say: ‘Yes!’ It’s a journey, but this is how we are slowly getting there.”
Circlemiser is an AHRI-certified air-cooled chiller equipped with innovative cylindrical condensers, cascade-flooded evaporators and oil-free, magnetic-bearing Turbocor compressors. With up to 15% higher energy efficiency and a 10% smaller footprint than traditional oil-free chillers with V-Bank condensers, the Circlemiser series sets a new standard in data center cooling. Already installed in some of the world’s largest data centers, Circlemiser proudly stands as the most efficient air-cooled chiller on the market.
Significant PUE improvements
Increased cooling capacity
Space optimization
Lower maintenance costs
Higher reliability
Lower environmental impact
LEARN MORE
MVNOs: A niche or a nuisance?
Mobile virtual network operators are shaking up the telecoms market, but is this at the expense of the mobile operators?
US President Donald Trump, fintech app Revolut, and English football club Millwall do not appear to have much in common, but the trio are among those to have made the leap into the telecoms industry by setting up their own mobile virtual network operators, or MVNOs.
They join the numerous organizations that have launched MVNOs in markets around the world in recent years, a trend that looks set to disrupt the industry.
An MVNO doesn’t own or operate its own network infrastructure in the same way that a traditional mobile network operator (MNO) like Verizon or Vodafone does. MVNOs instead piggyback on those networks, buying access wholesale and offering their own mobile plans.
“MVNOs are particularly good at finding a segment of customers, and that can be a very large segment of customers and building a proposition that kind of fits around those customers,” says telecoms industry analyst James Gray, managing director of Graystone Ltd.
MVNO origins
Though MVNOs were initially a Scandinavian concept, the first to launch was Richard Branson’s Virgin Mobile, which hit the UK market in 1999.
“The UK and [telecoms regulator] Ofcom were smart,” says Allan Rasmussen, chief executive officer at consultancy MVNO Services. “But it all began in Scandinavian countries. There was a company in Scandinavia that wanted to launch this MVNO, but it was a new idea, and the regulators were not keen on allowing it.
“Meanwhile, Ofcom was actually looking into what was happening, and they green-lit the MVNO, so Branson swooped in and launched before anyone else.”
Fast-forward to 2025, and GSMA Intelligence estimates that the number of these networks sits at around 2,138 globally.
But the focus and strategy of these MVNOs have also shifted, according to Rasmussen, who has covered this market since the very beginning.
“What we saw in the early days of MVNO 1.0 was the discount model coming in, being cheaper, but giving the same offers as mobile network operators were doing,” he says. “But you can't do that for a long time, because it will kill you. So from here we saw a shift.”
Around 2010, the market transitioned into what Rasmussen describes as an MVNO with a purpose.
“They were targeting specific segments and lifestyles of people, and then upselling what they already had in their business,” he says. “From here, we started to see big brands and retailers coming in, and these guys are not interested in selling SIM cards and airtime.”
The wholesale opportunity
The trend of more MVNOs entering the market is no surprise, says Gray. He used to work for Vodafone, where he helped the telco to pursue MVNOs in the mid-2000s. He also later helped launch iD Mobile Ireland, another MVNO.
“The barriers of entry have dropped
Paul Lipscombe Telecoms editor
significantly,” Gray says. “When I was at Vodafone, the capex requirements were quite significant. You'd probably have to spend at least a million pounds to get your MVNO. But a lot of that's changed now.”
Gray didn’t specify costs, but it’s reported that MVNEs (mobile virtual network enablers), which provide the platforms MVNOs run on, can get an MVNO launched for between $100,000 and $400,000.
Gray explains that cloud-based billing systems and the broader shift to e-commerce over traditional retail have made it easier to become an MVNO. “You can just do it all online,” he says. “So that's dropped some of the barriers. I also think MNOs are more open to MVNOs, or have a strategy of bringing in virtual network aggregators, to have the ability to resell the network services.”
Celeb appeal
The mobile carrier market is relatively structured and traditional, in that the main carriers offer set plans or airtime; with MVNOs, it’s a bit different.
Earlier this year, President Trump launched Trump Mobile via MVNO Liberty Mobile Wireless (LMW). The service was launched as part of Trump’s plans to
release a smartphone in September, but the T1 phone has still not been released.
Beyond Trump, YouTuber MrBeast is said to be considering his own MVNO. He will have noted the success of actor Ryan Reynolds, who took a 25 percent investment in MVNO Mint Mobile in 2019. The company was acquired by T-Mobile for $1.3 billion in 2024.
Gray says that it’s likely more big names could enter this market, tapping into their fanbases in the process: “Some of these celebrities have incredible reach, and hundreds of millions of viewers, in the case of MrBeast,” he says.
YouTubers could offer their content via a telecoms plan, Gray says, in the same way customers can get a Netflix subscription via their contract with the main mobile carriers.
Indeed, Rasmussen says that the latest wave of celeb interest in becoming MVNOs is nothing new, pointing out that numerous football clubs in Brazil have their own.
And not everyone is convinced that the celeb factor is a successful model to aspire to.
Peter Adderton is CEO and founder of US-based MobileX, an MVNO that uses AI to determine the adequate data plan for customers.
Adderton knows a thing or two about MVNOs. He created Boost Mobile in 2000, back in Australia, before expanding to New Zealand, Canada, and the US. Boost would eventually be acquired by Sprint (since acquired by T-Mobile).
He points out that Mint existed several years before Reynolds got involved,
“MNOs invest huge amounts of money in their networks and spectrum, so MVNOs are a great way for them to absorb some
of these sunk costs,”
>> Alex Franks, Tesco Mobile
noting that the company had distribution and a base before the Deadpool star’s involvement.
“[Reynolds] paid money to buy in and then Mint used his personality and profile to build its brand,” he argues. “If it had been Reynolds Mobile, or Ryan's brand, or whatever, it would not have had the same success.”
“Every little helps”
Retailers have been big beneficiaries of the MVNO boom. Supermarket brands including the likes of Sainsbury’s Mobile, Asda Mobile, and Superdrug Mobile have all hit the UK market, but none have had the impact of Tesco Mobile. Launched in 2003, it has close to 6 million mobile customers, and according to Uswitch, is the fourth-largest mobile provider in the UK with a seven percent market share.
“Millions of people shop in Tesco every week, so it’s an easier upsell,” says Gray.
“There’s an opportunity for telcos to monetize their networks better and push into wholesale,”
>> Kester Mann, CCS Insight
It’s this loyalty that Tesco Mobile is tapping into, notably through its loyalty points membership Clubcard.
“MVNOs offer customers more choice in a market where we are seeing consolidation of the MNOs. Tesco Mobile has a broad appeal, is competitive on price, and offers additional benefits through Clubcard,” explains Alex Franks, group director for telecoms, Tesco Mobile.
Franks says Tesco Mobile aims to utilize its vast footprint to grow its customer base, both in the UK and other markets such as the Republic of Ireland, Czechia, and Slovakia.
“I think there continues to be opportunities for MVNOs even though the market is becoming increasingly crowded, especially with the likes of Revolut and Monzo joining the party,” he says. “We think we have plenty of opportunity to go after, we have well over 20 million Clubcard holders who shop regularly at Tesco, that's a huge amount of footfall coming to our over 500 phone shops.”
Carrier opportunity
For the MNOs themselves, MVNOs offer an opportunity to play to segments of the market that they don’t necessarily serve via their traditional means. Many are pushing their own low-cost sub-brands.
VodafoneThree, formed by the UK merger of Vodafone and Three earlier this year, confirmed it will continue to offer MVNO services as part of its plans to operate a multibrand mobile strategy, with MVNOs Voxi, Smarty, and Talkmobile complementing its main networks.
Graystone’s Gray says MNOs try to be “all things to all people,” and running their own MVNOs can help with this. Operators are “recognizing that they need specific propositions and branding to target a specific type of customer,” he says. “The MVNOs have always been very good at that, and have also been very good at running a lightweight organization.”
Rasmussen argues that having several low-value brands could end up “watering down their own brand,” but the trend looks set to continue, with reports suggesting that BT’s EE is set to introduce a budget MVNO.
Tesco’s Franks says: “MNOs invest huge amounts of money in their networks and spectrum, so MVNOs are a great way for them to absorb some of these sunk costs and also a great way for MNOs to protect their more 'premium' tariffs by
“I think you're going to see a massive car wreck of MVNOs in the next five years, because carriers are now looking to grow there,”
>> Peter Adderton, MobileX
having slightly more affordable MVNOs on their networks.”
Crowded field
With as many as 300 MVNOs operating in the US alone, Adderton believes the market may be reaching saturation point.
“When I started Boost, it was around 49 percent market penetration, which means 51 percent of Americans didn't have a line,” he says. “It was the same in Australia, where they didn't have one. The opportunity to grow that business expanded really quickly because half of the country didn't have any phone.”
With penetration now at around 120 percent, he fears “everyone is running in for the gold rush, but the problem is there's no gold left in the hills.”
As a result, “carriers are stealing off each other, and where they were quite happy for MVNOs to handle different segments such as the lower value
audience, that's all changed now,” he says.
Indeed, Adderton contends that, despite bullishness from some quarters, the MVNO market is “in the worst state it's ever been in.” This is because MVNOs are only actually able to grow at the “whim of the carrier.”
He continues: “If the carrier doesn't want you to grow or wants to slow you down, your negotiating power is zero. They can raise your prices, they can lower theirs. There are different handles that they can pull to make sure that you stay in your box.”
Popping off
Adderton is pretty gloomy when it comes to the market’s future.
“I think you're going to see a massive car wreck of MVNOs in the next five years, because carriers are now looking to grow there, where before they let us grab the coins out of the couch,” he says.
Rasmussen strikes a more positive note for MVNOs, saying there could be an opportunity to develop specialist data networks for IoT and other B2B use cases. Gray, meanwhile, says he expects some fiber providers to launch their own networks.
At the consumer end of the market, analyst firm CCS Insight predicts a pop star will launch their own MVNO by 2027, with names like Beyoncé and Taylor Swift tipped to break into this segment. For existing MVNOs, hitting the right note in an increasingly crowded field is unlikely to get any easier.
In 1964, the country of Liberia's post office released a series of stamps to celebrate communications in space, titled 'Peaceful Uses of Outer Space.'
The top stamp looks at Mariner II, a NASA project behind two major telecom achievements. Using the Deep Space Instrumentation Facility (now the Deep Space Network), the agency successfully tracked and communicated with the spacecraft as it passed Venus - managing to communicate across 53 million miles. NASA also used a 13kW, 85ft antenna to bounce radar signals off of Venus and to the Mariner.
The Relay stamp celebrates an early elliptical medium Earth orbit satellite developed by the now-defunct RCA Corporation. Relay 1 sent the first American television transmission across the Pacific Ocean - initially it was meant to be a pre-recorded message from President John F. Kennedy to Japan, but it instead carried news of his assassination.
Syncom, a series of active geosynchronous communication satellites, was developed by Hughes Aircraft Company. Syncom 1 was meant to be the first such satellite, but immediately suffered an electronics failure. The second was a success, used purely for testing, and managed to send the first two-way call between heads of government by satellite (Kennedy and Nigerian Prime Minister Abubakar Tafawa Balewa). Number 3 was ready for prime time - supporting Department of Defense communications in Vietnam - and was no longer purely for peaceful uses.
- Sebastian Moss, DCD Editor-in-Chief
Georgia Butler Cloud & Hybrid Editor
The woods of IT are dark and deep
Seasoned CIO Tony Scott’s journey through the forest of Disney, Microsoft, and the US Government
It seems unlikely that a career could pivot from the forest to the data center, but it can and, in the case of Tony Scott, can do so very successfully.
Scott is quite active online, sharing his thoughts regularly on IT issues and how CIOs should be looking at their businesses. It makes it quite easy to perceive his passion for the topic, not to mention his manifold experience over the years.
What is not immediately apparent from his online persona is his passion for the outdoors - and how his career took a pretty drastic pivot just as it was about to kick off.
Having worked on projects that took at-risk young people out of inner-city environments and into the natural world, Scott was pretty certain that he wanted to work in parks and recreation administration. But “midway through college, I realized that probably wasn’t going to be the final destination of my career,” he tells DCD, chuckling.
Since that pivot, Scott has held CIO roles at General Motors, The Walt Disney Company, Microsoft, VMware, and the US government - just to name a few of his career exploits.
While at the University of Illinois, Scott began rethinking the trajectory of not just
his career, but his life. According to him, his roommate at the time was something of an insomniac and would spend night after night rattling on about how much of a nightmare the Midwest was. Eventually, Scott was convinced to head to California, where he began working at a theme park.
This was sometime in “the late 1970s or early 1980s,” and at the time Scott still hadn’t made the move into IT. But, while at the theme park, the axis shifted.
“The IT guy at the park said to me one day, ‘Tony, there’s this new computer out called the Apple II that I think we could write software on, and that would be better at predicting attendance at the theme park,
and scheduling labor, and a bunch of other stuff.’ At the time, that was all done with a mainframe,” Scott says, adding that, at the time, he didn’t know a thing about software.
“The IT guy said: ‘I can teach you to write software, but you know the business better than anybody else.’ After about 12 weeks, we had come up with a program that was beating the mainframe, and that got me hooked.”
There was, of course, a period between this moment and the succession of CIO roles. Scott mentions a few startups that he kicked off, including one that could manufacture floppy disks “faster than any other technology” at that point.
“That was a great business until there was no need for floppy disks anymore,” he says.
Eventually, Scott ended up at The Walt Disney Company as the global CIO, holding the helm from 2005-2008. Naturally, a lot will have changed since he was there, and what he describes provides an interesting time capsule.
In the early 2000s, the entire Disney group was operating out of one data center located next to Walt Disney World in Orlando. “It no longer is, but that’s a whole different story,” Scott says. DCD was intrigued.
“We had a huge IT incident where lightning struck a flagpole outside the data center and fried a bunch of the electrical equipment in the data center,” Scott says, able to laugh about it 20 years on. “We had to failover to our backup data center. There was no operational disruption - but it was a huge event for the IT staff, and we made a decision right after to, well, invest in data centers outside one of the lightning capitals in the world - Orlando.”
These days, Disney has a pretty varied IT estate. In 2021, the company launched a data center (DC3) in The Woodlands, Texas, that centralized several facilities across New York, North Carolina, Las Vegas, and the Los Angeles area. At that time, the studio also had DC1 and 2 in Bristol, Connecticut. DCD has contacted the company to see if those data centers are still in use, and if that is now the company’s full IT infrastructure footprint.
Disney adopted a private cloud model in 2015, and in 2021, the company was using HPE technology for its internal cloud workloads and Amazon Web Services (AWS) for the Disney+ streaming service.
While serving the Magic Kingdom, one
of the major projects Scott had to deal with was the launch of on-demand streaming on ABC Television, a part of Disney.
Today, this would seem like a given, but at the time, Scott says, Disney saw it as “a big opportunity.” He explains: “ABC Television was one of the properties of Disney, and we were one of the first companies to start offering the streaming of our hit TV shows the day after. We had a show called Desperate Housewives that was super popular at the time, and it was a big deal that you could go on your computer and catch up on what you missed the night before.”
Of course, this necessitated a different approach to IT. Suddenly, a whole new business model had entered the game, in the form of the nascent cloud computing market.
“The Internet existed, but there was no real cloud,” Scott says. “At the time, it was just content distribution networks (CDNs), and all of this sort of thing had to be invented along the way. When you are a pioneer in a space like that, you start to realize all of the business issues as well as the technology issues that have to be solved for a thing to happen. A lot of the models that we have today were really developed in that 2005 time frame. It was a time of really intense change.”
While CDNs still play a key role in managing network traffic today, most streamers rely heavily on cloud computing providers. Disney+ is based on Amazon Web Services, as is Netflix, having moved in 2016.
In 2008, Scott departed for Microsoft, the first of many “poachings” through his career.
If you look up Tony Scott on LinkedIn, you immediately know what he stands for. He shares lengthy opinion posts at least weekly, and a recurring theme among them is a strong cloud-positive approach.
Explaining his philosophy, he says it comes down to “a few things.” One is a belief that “a constant refresh of core infrastructure and keeping things modern is of value, and gives you a leg up - as opposed to sweating assets until they break.”
Scott explains: “The cloud really gives you the opportunity to do that, because you can swap out pieces at a time and refresh them without having to do a complete lift and shift.” His time at Microsoft as CIO saw the firm move its internal systems to its own Azure platform.
“We made
a decision right after to, well, invest in data centers outside one of the lightning capitals in the world - Orlando,” >>Tony Scott
On the cost of cloud, Scott says: “Moore’s law is still pretty good. I can run workloads today and, assuming they stay the same, halve the cost in three years or so. Why wouldn’t you want to take advantage of that?”
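Taken at face value, that assumption compounds quickly. A back-of-the-envelope sketch, with a purely hypothetical starting spend rather than any real cloud price list:

```python
# If a fixed workload's unit cost halves every three years, the spend curve drops fast.
# The starting figure is hypothetical, not a quote from any provider's price list.
annual_cost = 100_000            # dollars per year today (assumed)
halving_period_years = 3
for year in (0, 3, 6, 9):
    cost = annual_cost * 0.5 ** (year / halving_period_years)
    print(f"year {year}: ${cost:,.0f}")
# year 0: $100,000 -> year 3: $50,000 -> year 6: $25,000 -> year 9: $12,500
```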
The cost-benefit is something often proffered by cloud providers, though, as with anything, it is heavily caveated. Cloud computing costs can quickly ramp up, and increasingly, we are seeing headlines of companies reducing their reliance on platforms like Azure. Zoom notably said in 2024 that it was looking to decrease its dependence on cloud computing services, and software firm 37signals has widely publicised its own exit from AWS, claiming to save millions in storage and compute costs.
Beyond costs, Scott sees further benefits to the cloud.
“Because of cloud architecture, you can build in reliability and redundancy in a way that’s harder to do now,” he argues. “You can do it in your own private cloud, but that fundamental architecture really lends itself much better than the classic older mainframe models we grew up with.”
Having experienced a redundancy-related “incident” at Disney, this is understandably a key concern for Scott.
“I get asked all the time what keeps me up at night. And my answer is always one of two things: One, human error. In 99 percent of cases, somebody fat-fingered something, or pushed the wrong button, or was really tired and had been working on something for 20 hours straight.
“The other is where there is a single point of failure that hasn’t been recognized. The flag pole thing, for example. We had just no idea, but had we known, we could have prevented it. The cloud enables you to ameliorate and guard against some of those things.”
While Scott toots the cloud horn, it’s important to remember that no system is perfect, and it is up to the enterprise to put redundancy measures in place even when workloads are running in the cloud. A string of high-profile outages in recent months has shown that a cloud service going down for even a few hours can have big implications.
Scott’s most recent CIO role perhaps represents the apotheosis of his career:
CIO of the federal government. He was the third-ever federal CIO, holding the role from 2015-2017, before stepping down with the entrance of the Trump administration.
Understandably, the federal CIO role is pretty unique. Rather than handling the IT of a single organization, you are instead dictating the IT approach of a nation, with the job falling under the Office of Management and Budget (OMB).
“It has two real responsibilities,” Scott explains. “One is the management authority for establishing what the rules are in the metrics for how the government will evaluate IT. For example, guidance around open source technology or cybersecurity requirements. That’s important because we were spending a billion dollars a year on just IT in the civilian side of the federal government, and then another two billion dollars in defense. It’s a lot of money, and you need some frameworks for it.”
The other side of the role is managing the budget. “Every year, Congress has to pass an appropriation and authorization - two separate votes for everything the government does in IT,” Scott says. “Over the years, that has become a very complex process.”
In times when Congress cannot agree to a budget before October 1 - or the beginning of the Federal Government’s fiscal year - the government falls into either a continuing resolution or even
Tony Scott
“I believe a constant refresh of core infrastructure and keeping things modern is of value, and gives you a leg up,”
>>Tony Scott
a shutdown. At the time of interviewing Scott, the US was in a government shutdown for this very reason. It was eventually resolved after 43 days, making it the longest in US history.
Scott says: “When Congress has done its job, then money is set aside for whatever they propose to do, and they come back to the Office of Management and Budget and the Federal CIO part of that office, and say, ‘Here's specifically what I'm going to propose to do with that money that's been authorized and appropriated. Do you agree?’ With the OMB’s blessing, the agency can go do that particular project. So it's a very powerful role.”
It is this responsibility that made the role so very special to Scott. With so many agencies reliant on not just the OMB, but him as an individual, he tells DCD that it was “certainly” the highlight of his career.
As he summarizes: “You are dealing with things every single day that matter. It can literally be life or death.”
With his CIO days behind him, Scott has shifted instead to a CEO position at Intrusion, a network security company. Instead of making the IT decisions, he is now helping other organizations to ensure they have secure IT networks and, of course, helping the masses on social media see the wood for the trees.
Don't say the 'b' word
Inflection point
Once discussed only by talking heads or quietly on the sidelines of industry events, AI bubble talk is now reaching a fever pitch.
Markets are taking note. Oracle’s share price has given up all of the gains from its $300 billion OpenAI cloud deal, and its bonds have taken a huge hit over debt fears. Better-funded hyperscalers aren't loading up on as much debt, but are still burning through cash reserves and taking on some debt to fund data center capex.
At the same time, actual take-up of AI tools has been muted. Most enterprises have launched prototype tests, but the results have been mixed at best.
We're now at an inflection point: Market appetite for speculative building is waning, while the number of new projects that can realistically be announced is shrinking. OpenAI cannot announce another trillion in spending, and hyperscalers have already got gigawatts of pipeline to fulfil.
That does not mean it's definitely a bubble, it just means that keeping the growth going is getting harder. AI companies like OpenAI and Anthropic have less and less time to actually show value in their machinations, while data center firms will have to work hard to hit the aggressive timelines they've promised.
2025 was the year of the data center party. 2026 is when we'll start to see if we'll regret the excess and exuberance, holding our heads and promising 'never again' until the next bubble, or if we'll manage to dance on and keep the music going.