CXO DX March 2026


THE CHALLENGES OF BUILDING AN INTELLIGENT ENTERPRISE

The industry is consolidating its approach away from deploying technology solutions as tools, towards redesigning how work, security, and decision-making come together in increasingly complex environments. With AI at the centre of this shift, the focus is moving beyond the initial phase of productivity gains towards execution by an AI-driven workforce.

From copilots to agents, from dashboards to decisions, enterprises are beginning to explore what it means for AI to not just assist, but actively participate in workflows. This raises new questions around accountability, governance, and the structure of work itself, particularly as organisations begin to manage both human and machine-led processes. We are fast-tracking towards an era where AI agents are likely to be colleagues, bringing associated challenges into the mix.

While this transition unfolds, data security and identity remain serious concerns that organisations must address right from the design and implementation stages. Governance assumes even greater significance when organisations operate with a mixed workforce of humans and AI agents working and collaborating together. The nature of these collaborations will vary across industries and organisations.

As identity becomes the primary attack surface and data environments grow more distributed, enterprises are being pushed to adopt models that prioritise continuous verification, unified visibility, and real-time control. The convergence of AI, data platforms, and security frameworks is no longer optional; it is becoming central to how organisations operate.

What is becoming increasingly evident is that technology alone will not define success. The organisations that move ahead will be those that align architecture with operating models, integrate intelligence into core workflows, and rethink how value is created and measured.

The enterprise is being restructured around intelligence, autonomy, and the ability to turn data into decisions at scale. This transition is also exposing new pressures such as CIO accountability tied to AI outcomes, highlighting that success will depend as much on alignment and execution as on technology itself.

RAMAN NARAYAN

Co-Founder & Editor in Chief narayan@leapmediallc.com Mob: +971-55-7802403

Ali Raza Designer

SAUMYADEEP HALDER

Co-Founder & MD saumyadeep@leapmediallc.com Mob: +971-54-4458401

Nihal Shetty Webmaster

MALLIKA REGO

Co-Founder & Director Client Solutions mallika@leapmediallc.com Mob: +971-50-2489676

OPERATIONALISING INTELLIGENT UTILITIES

Through its CONNECT platform and AI-infused digital twin capabilities, AVEVA is enabling the utilities industry's transition from fragmented operations to unified, intelligence-driven systems.

Sabir Saleem, CEO at MBUZZ, highlights how the company is expanding its focus across AI, datacentre, networking, security, and intelligent computing to align capabilities with real business requirements.

Ahmad Shakora, Group Vice President at Cloudera, explains how Cloudera is enabling organisations to bring AI to their data while maintaining governance, flexibility, and hybrid control.

Tommaso Stefano Tini, Head of Digital Twin Market Growth and Consulting at Omnix International, writes that digital twins are being seen as foundational capabilities that support better governance and performance across complex built environments.

Tolga Özdil, Regional Commercial Director, Middle East, Turkey & Africa (META) at ASUS, outlines how the company is aligning its portfolio around AI-ready devices, hybrid work demands, and secure, high-performance computing.

The clock is ticking for security operation centers in the Middle East, with relevance and resilience hanging in the balance, writes Ahmad Alshaer, Security Leader, Middle East & Africa, DXC Technology.

Siddhesh Nagaonkar, CISO at Direct Honest Safe International Exchange FZE, highlights how evolving threats and AI-driven attacks are reshaping how organisations approach identity security.

Mortada Ayad, VP – META at Delinea, examines why enterprises are managing AI agents with far less discipline than human employees, and the risks this creates.

Sameer Joshi, Digitalization Director, NADEC, explores how the rise of AI agents is redefining the nature of work and what it means for organisations preparing to manage a hybrid workforce of humans and machines.

Raif Abou Diab, General Manager –South Gulf & Sub-Saharan Africa at Nutanix, examines why cloud operations are struggling to keep pace and what organisations in the MEA region must rethink to regain control.

PURE STORAGE BECOMES EVERPURE, ANNOUNCES INTENT TO ACQUIRE 1TOUCH

Pure Storage has announced its new name: Everpure. The change reflects the company’s evolution from reshaping storage to defining the future of data management. The company also announced it has entered into a definitive agreement to acquire 1touch, an innovator in data intelligence and orchestration that provides a comprehensive, unified view of an enterprise’s information. With 1touch, Everpure furthers its commitment to data management innovation, making data secure, accessible, intelligent, and ready to perform.

“Everpure reflects the company we have become as we help enterprises unleash the full power of their data. It captures the power of our Enterprise Data Cloud architecture and the adaptability of Evergreen, reinforcing what has always set us apart as we redefine important markets. With 1touch, we are taking the next step in helping organizations not only gain control of their most valuable asset—data—but also understand, enhance, and contextualize that data for actionable intelligence,” said Charles Giancarlo, CEO of Everpure.

As AI becomes central to business operations, the modern enterprise has reached an inflection point. AI has exposed the weaknesses of current infrastructure, where siloed data, manual processes, and inflexible architectures cannot support the scale, speed, and intelligence demands of enterprise AI.

Powered by the Everpure Platform (formerly the Pure Storage Platform), Everpure’s Enterprise Data Cloud (EDC) architecture transforms storage into a unified, virtualized cloud of data, governed by an intelligent control plane. It manages datasets globally through policy, eliminating the friction of manual configurations and bringing unprecedented simplicity, agility, and efficiency to data management.

The acquisition of 1touch will extend Everpure’s data management capabilities by adding data discovery and semantic context to the Everpure Platform. By integrating storage with 1touch’s ability to discover, classify, contextualize, and enrich data across all datasets and any environment—from SaaS to the edge—Everpure will ensure enterprise data is inherently AI-ready at the source.

GENETEC SURVEY REVEALS SAUDI ORGANIZATIONS LEAD EMEA IN CLOUD ADOPTION AND SECURITY INVESTMENT

Forty-three percent of respondents reported increased physical security budgets in 2025

Genetec, a global leader in enterprise physical security software, released its Saudi Arabia findings from the 2026 State of Physical Security Report. Based on insights from more than 150 physical security professionals in Saudi Arabia, the findings show a market that is investing confidently in modern, connected security infrastructure, with strong momentum in cloud adoption and operational modernization.

The report reveals that Saudi Arabia has the highest proportion of cloud-based physical security systems in the EMEA region, with 13 percent of respondents stating they use cloud security systems, compared to the EMEA average of seven percent. This reflects a growing preference for flexible deployment models that support scalability, resilience and simplified system management.

Saudi Arabia also recorded the highest level of operating expenditure growth among EMEA markets surveyed. Forty-three percent of respondents reported increased physical security budgets in 2025, nearly double the EMEA average of 24 percent. Among those reporting an increase, 92 percent said budgets grew by more than 10 percent, with nearly two-thirds (64 percent) reporting growth of 11 to 25 percent, highlighting sustained investment in security as a strategic business function.

Unlike many markets across EMEA, outdated infrastructure is not seen as a major barrier in Saudi Arabia. Only 12 percent of respondents cited legacy security infrastructure as a top challenge, compared with 44 percent across EMEA overall, reflecting the Kingdom’s continued investment in new infrastructure, smart cities and large-scale development projects.

“Saudi organizations are moving quickly from traditional security deployments toward connected, flexible platforms that support broader operational requirements,” said Firas Jadalla, Regional Director for the Middle East and Africa at Genetec Inc. “The Saudi findings reinforce the Kingdom’s position as a fast-moving and future-focused security market, where organizations are prioritizing sustained investment that aligns with the Kingdom’s Vision 2030 digital transformation goals.”

VEEAM INTRODUCES AGENT COMMANDER TO CONFRONT AGENTIC AI RISK

The solution will detect AI risk, protect AI systems, and undo AI mistakes with precision, backed by deep data and AI risk intelligence.

Veeam Software announced Agent Commander, a unified solution to help organizations safely detect AI risk, protect AI systems, and undo AI mistakes, empowering them to proactively address AI-driven risks and securely scale AI agents everywhere. The first integration from Veeam’s successful acquisition of Securiti AI, Agent Commander combines the capabilities of both to give organizations visibility, control, and protection over their entire data and AI estate, with the ability to undo AI mistakes with precision and ease. Agent Commander will be available in a future release of the Securiti Data Command Center, bringing together the industry’s leading Data Resilience and Data Security capabilities.

“AI happens at machine speed, which means organizations must understand what data is being used, by what agent, and how, in real time. If an error occurs, organizations not only need to understand what data was impacted, but they also need the ability to undo any damage rapidly,” said Anand Eswaran, CEO of Veeam. “With Agent Commander, organizations know what data is powering AI, and it gives them the power to detect, protect, and, when necessary, undo AI actions with speed and precision. It represents the future of what’s expected from data security and data resilience, and it’s only possible with Veeam’s unified platform.”

The most critical gap in AI infrastructure today is trust. An agent is only as trustworthy as the data it can see, access, and act on. Yet enterprise controls remain fragmented with separate systems for protection, security, governance, and recovery, and none built to provide unified visibility, granular control, or precision response at the speed and scale AI now demands.

Agent Commander brings Veeam’s trusted data resilience together with Securiti AI’s Data Command Center. This unified platform gives organizations total visibility into their AI environment, detects hidden risks and Shadow AI, and provides comprehensive controls to protect data as it moves through AI systems.

CISCO UNVEILS AGENTICOPS INNOVATIONS TO SIMPLIFY AND SCALE AI-DRIVEN IT OPERATIONS

New AgenticOps capabilities in networking, security, and observability reimagine how to automate, scale, and simplify IT operations in the AI era.

Cisco has announced new AgenticOps innovations for the AI era. First launched last year, AgenticOps is an agent-first IT operating model for autonomous action with built-in oversight. New capabilities unveiled today across networking, security, and observability further transform how IT teams operate at scale.

“For teams responsible for operating and securing distributed networks and infrastructure, AgenticOps represents a profound and fundamental shift away from complexity,” said Jeetu Patel, President and Chief Product Officer, Cisco. “This is the true power of Cisco as a platform. By delivering agentic capabilities aligned to critical IT operations priorities, we’re combining Cisco’s unique cross-domain visibility, purpose-built models, and governance to supercharge teams.”

Cisco is extending agentic-driven operations across networking, security, and observability, delivering AgenticOps to support IT operations in cloud, on-premises, air-gapped industrial, enterprise, data center, and service provider environments.

New tools, skills, and platform enhancements across networking, security, and observability focus on operating networks at AI scale through intelligent, agentic execution. Capabilities include autonomous troubleshooting with end-to-end investigations, continuous optimization through context-aware recommendations, and trusted validation of network changes against live topology, configuration, and telemetry. Experience metrics consolidate network signals into a single actionable view, while agentic workflow creation enables deterministic automation within Cisco AI Assistant.

In data centers, AgenticOps enables early detection and intelligent event correlation to deliver prescriptive performance insights, while service providers benefit from Crosswork AI capabilities that identify and resolve complex multi-vendor issues more efficiently. Within Cisco Security Cloud Control, firewall operations are enhanced through proactive policy recommendations, improved efficiency in detecting issues such as elephant flows, and continuous compliance monitoring for PCI-DSS. At the same time, AI Agent Monitoring in Splunk Observability Cloud provides visibility into the performance, cost, quality, and behavior of LLM and agentic applications.


BPS APPOINTED AGGREGATOR FOR MSPS BY NUTANIX FOR MIDDLE EAST AND EGYPT

BPS aims to enable MSPs to scale faster, simplify onboarding, and deliver Nutanix solutions through flexible, consumption-based models

Nutanix has announced a strategic new go-to-market partnership with BPS, appointing the company as its first aggregator for Managed Service Provider (MSP) business across the Middle East and Egypt. The collaboration marks a significant milestone in Nutanix’s partner-first strategy and underscores its commitment to delivering customer-centric outcomes through a strong regional ecosystem.

Through this partnership, BPS aims to enable MSPs to scale faster, simplify onboarding, and deliver Nutanix solutions through flexible, consumption-based models. For customers, the collaboration translates into easier access to trusted local MSPs, consistent service quality, and solutions designed around evolving business needs.

Nutanix has launched a purpose-built MSP program designed to support both managed service providers and their customers over the long term. Through its partnership with BPS, Nutanix is positioned to deliver these benefits to MSPs more rapidly, enabling them to respond more effectively to customer needs. The agreement also supports Nutanix’s expansion of its MSP footprint across the Middle East and Africa, strengthening an ecosystem that already includes more than 400 MSPs globally.

Shaista Ahmed, Director – Channel & Ecosystem, Middle East & Africa at Nutanix said, “By leveraging BPS’s extensive reach and expertise, we are able to access and nurture the MSP market, creating new opportunities for growth. This collaboration allows us to deliver tailored solutions, accelerate adoption of our portfolio, and provide dedicated support to these partners. Together, Nutanix and BPS are not just expanding our footprint—we are enabling a more focused, impactful, and sustainable approach to serving this critical market segment.”

“Being appointed as Nutanix’s first MSP aggregator across the Middle East and Egypt is a significant milestone for BPS. This partnership allows us to bring Nutanix’s industry-leading cloud platform closer to regional service providers, enabling faster onboarding, simplified consumption models, and stronger go-to-market execution,” said Negib Abouhabib, General Manager, BPS.

ANKABUT AND DELL TECHNOLOGIES SIGN MOU TO EXPAND HIGH-PERFORMANCE DIGITAL LEARNING IN THE UAE

Dell will supply state-of-the-art client devices, workstations and infrastructure systems, supporting Ankabut’s mission

Ankabut, the UAE’s leading education cloud and network service provider, and Dell Technologies have signed a memorandum of understanding (MoU) to advance technological innovation within the country’s education sector.

The agreement, signed by Walid Yehia, Managing Director – South Gulf, Dell Technologies, and Tarek Jundi, CEO, Ankabut, underscores a shared commitment to redefine the learning experience in the UAE through cutting-edge technology and strategic collaboration.

Operating out of its state-of-the-art data centre at Khalifa University, Ankabut provides a wide array of services ranging from networking and virtualization to cloud, application services, security and managed support services.

Through this collaboration, Ankabut will leverage Dell’s GPU-as-a-service capabilities to empower academic institutions with accelerated computing for data-intensive research and advanced learning applications. Dell will also deliver high-performance computing systems and the latest client devices and workstations. These solutions will enable Ankabut to empower educational institutions across the UAE with secure, scalable and efficient technological infrastructures.

With this MoU, Ankabut aims to expand its capabilities as a Cloud Solution Provider, enabling more institutions to access high-performance computing and GPU resources that expand research frontiers, enhance teaching outcomes and foster innovation across the UAE’s education ecosystem.

Walid Yehia, Managing Director – South Gulf, Dell Technologies, said, “The UAE continues to set benchmarks for innovation in education through its commitment to technology-driven learning. Our collaboration with Ankabut aligns perfectly with this national vision, combining Dell’s advanced infrastructure solutions with Ankabut’s leadership in education networks to create a future-ready digital ecosystem for students, researchers and educators across the country.”

Tarek Jundi, CEO, Ankabut, said, “This collaboration marks a pivotal step in advancing digital learning in the UAE. By integrating Dell Technologies’ advanced computing solutions with Ankabut’s cloud and connectivity expertise, we are empowering institutions to reimagine how education is delivered and experienced. Together, we’re enabling a smarter, more connected academic community that supports the UAE’s ambitions for a knowledge-based economy.”

KHAZNA AWARDED TIER III DESIGN CERTIFICATION FOR AJMAN FACILITY

The facility is set to be the first certified AI data center with liquid cooling in the Middle East

Uptime Institute, the Global Digital Infrastructure Authority, announced that Khazna Data Centers, a global leader in hyperscale digital infrastructure, has achieved the Uptime Institute Tier III Certification of Design Documents (TCDD) award for its newest 100 MW AI-optimized data center, QAJ01 — set to be the first certified AI data center with liquid cooling in the Middle East and North Africa region.

This state-of-the-art development features 20 data halls, each delivering 5 MW of IT capacity, purpose-built to meet the demands of next-generation artificial intelligence (AI) workloads. The certification underscores Khazna’s commitment to designing world-class, resilient, and efficient data center infrastructure in alignment with the industry’s most rigorous global standards.

Located in Ajman, United Arab Emirates, the new facility has been designed with advanced liquid-cooling systems to support the high rack densities and thermal loads required by large-scale AI training and inference applications, while optimizing energy efficiency and maintaining operational resilience.

“Achieving Tier III certification for our Ajman facility reflects Khazna’s deep commitment to engineering excellence and operational resilience as we scale to meet the AI era. QAJ01 sets a new regional benchmark, combining high-density readiness, advanced liquid cooling, and globally certified design to support the next generation of compute. It is a strategic milestone in our mission to deliver future-ready infrastructure,” said Abdulmajeed Harmoodi, Chief Technology Officer, Khazna Data Centers.

“This Tier Certification marks an important advancement for the regional digital infrastructure ecosystem,” said Mustapha Louni, CBO, Uptime Institute. “Khazna’s AI-optimized facility integrates liquid cooling and high-density configurations while maintaining Tier III-level resilience. It demonstrates how data centers can evolve to meet the accelerating compute needs of AI without compromising reliability or efficiency.”

Uptime Institute’s Tier Certification of Design Documents (TCDD) is the first step in the Institute’s globally recognized Tier Certification process, validating that a facility’s design plans meet the requirements of its Tier Standard for Topology.

DXC LAUNCHES NEW PRACTICE TO HELP ACCELERATE AI ADOPTION

DXC proves AI at real enterprise scale through its own global deployment of Amazon Quick, supporting 115,000 employees across 70 countries.

DXC Technology, a leading enterprise technology and innovation partner, has announced the completion of DXC’s enterprise-wide deployment of Amazon Quick, the agentic AI-powered digital workspace, across its global workforce of 115,000 employees operating in 70 countries, and the launch of the DXC Amazon Quick Practice, a new business unit focused on helping customers worldwide operationalize AI at scale across multivendor enterprise ecosystems.

Drawing on the same experience, operating models, and governance frameworks used inside DXC, the company helps customers move AI from pilot programs into full-scale production with greater speed and confidence. As part of the rollout, DXC introduced an AI Advisor Agent that provides employees with a single access point for AI-related knowledge, tools, prototypes, and feedback, and is now used by more than 40,000 engineers. The rollout also includes role-based AI advisors, such as a Supply Chain Advisor that delivers fast, trusted operational guidance by connecting employees directly to validated knowledge, enabling teams to move faster with confidence.

“Deploying Amazon Quick across DXC’s global workforce gave us the opportunity to pressure-test at true enterprise scale. That experience now directly informs how we help our customers move beyond pilots and activate AI across their enterprises,” commented Russell Jukes, Chief Digital Information Officer, DXC.

DXC is launching the DXC Amazon Quick Practice to help enterprises deploy AI with greater speed, confidence, and control. Powered by more than 10,000 Amazon-certified professionals, with over 1,000 trained and certified across Amazon AI specializations and DXC’s enterprise AI delivery programs, the practice combines proven deployment methodologies, Amazon-native frameworks, and governance models validated within DXC’s own operations.

Cross-functional teams of AI architects, automation designers, and adoption leads partner with customers to identify high-impact use cases and rapidly deploy secure, pre-built AI capabilities spanning AI-powered research, advanced business intelligence, and agent-ready automation.

WSO2 LAUNCHES ‘FAIR PRICING FOR GOVERNMENTS’ INITIATIVE

New index-based pricing model aims to offer governments worldwide equitable access to world-class technology and software independence

WSO2, a leader in enterprise digital infrastructure technology, announced a new ‘Fair Pricing for Governments’ initiative, aimed at supporting public-sector organizations’ worldwide access to high-quality, ethical digital services at prices aligned with local economic realities.

The initiative is based on a simple principle: government technology fees should be proportional to the average citizen’s income to ensure fairness. By introducing a standardized, transparent pricing model for government customers, WSO2 seeks to eliminate arbitrage, reduce opportunities for corruption, and ensure that all eligible public-sector entities receive the best possible price.

“Our goal is to facilitate software independence and help governments worldwide access high-quality, ethical digital services,” said Dr. Sanjiva Weerawarana, Founder, CEO and Chief Product Officer at WSO2. “We believe that fair pricing must reflect the economic context in which governments operate. By aligning fees with national income levels and applying a consistent, transparent framework globally, we are taking a concrete step toward enabling more equitable digital modernization.”

The initiative builds on WSO2’s long-standing commitment to open-source principles, ethical technology practices, and inclusive modernization. By standardizing government pricing globally and anchoring it to internationally recognized economic indicators, WSO2 aims to remove financial barriers to adoption, strengthen digital sovereignty, and enable public-sector organizations to securely and sustainably transform critical digital services.

At the core of the initiative is an index-based pricing methodology that aligns WSO2’s government subscription fees with World Bank Country Income Classifications, using W3C government membership fees as a baseline for calculation. Under this model, public-sector entities can choose to receive standardized, non-negotiable discounts that reflect national income levels. For example, high-income countries will receive a 20% discount, upper-middle-income countries a 35% discount, lower-middle-income countries a 50% discount, and low-income countries a 62% discount.
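The tiered discounts described above can be sketched as a simple lookup. This is an illustrative example only: the function name and structure are assumptions, and just the four discount percentages come from the announcement.

```python
# Illustrative sketch of WSO2's index-based discount tiers as described
# in the announcement. Only the percentages are from the article; the
# function and any prices are hypothetical.
DISCOUNTS = {
    "high-income": 0.20,
    "upper-middle-income": 0.35,
    "lower-middle-income": 0.50,
    "low-income": 0.62,
}

def discounted_fee(list_price: float, income_class: str) -> float:
    """Apply the standardized discount for a World Bank income class."""
    return round(list_price * (1 - DISCOUNTS[income_class]), 2)

# For a hypothetical list price of 100,000, a low-income country
# would pay 38% of list:
print(discounted_fee(100_000, "low-income"))  # 38000.0
```

Because the discounts are standardized and non-negotiable, a flat table like this is the whole pricing logic: the only inputs are the baseline fee and the country's income classification.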

CHECK POINT UNVEILS AI SECURITY STRATEGY

The company introduces its four-pillar approach to securing the AI transformation of enterprises

Check Point Software Technologies introduced its four-pillar strategy designed to help organizations securely navigate the AI era, alongside three strategic acquisitions that reinforce its platform and demonstrate execution of this vision.

At the core of this strategy are four pillars that reflect how organizations operate today:

Hybrid Mesh Network Security protects distributed enterprises across hybrid cloud, data centers, branch networks, and internet environments through a unified, AI-powered architecture.

Workspace Security focuses on securing the modern digital workspace — including endpoints, browsers, email, SaaS applications, and collaboration platforms — where users increasingly interact with AI technologies.

Exposure Management provides comprehensive visibility into organizational attack surfaces, enabling risk prioritization based on business context rather than isolated alerts.

AI Security protects the full lifecycle of AI adoption, including employee usage, enterprise AI applications, and autonomous AI agents.

These capabilities are delivered through Check Point’s open platform approach, often described as an “open garden” model, designed to integrate seamlessly with existing security ecosystems while providing prevention-first protection across multi-vendor environments.

To reinforce this strategy, Check Point announced the acquisitions of Cyata, Cyclops, and Rotate.

Cyata has developed an AI agent identity management platform that enables organizations to discover active AI agents, map permissions, monitor behavior, and enforce automated security policies. Cyclops provides a Cyber Asset Attack Surface Management platform that consolidates data across environments to deliver comprehensive asset visibility and risk prioritization. Rotate, an all-in-one platform purpose-built for managed service providers, enhances centralized protection across distributed workforces and SaaS environments.

Roi Karo, Chief Strategy Officer at Check Point, said, “As AI reshapes how organizations operate and how threats evolve, security must be fundamentally rethought. Our four-pillar strategy provides a clear framework to secure networks, workspaces, exposure risks, and AI-driven environments as a unified platform. The acquisitions we are announcing today demonstrate how we are executing on this vision and helping customers securely navigate the AI transformation.”

CENSYS APPOINTS MERIAM ELOUAZZANI AS META VP

ElOuazzani will lead Censys’s expansion and position it as the region’s trusted internet intelligence partner, along with Rajaee Al-Dalgamouni and Ahmed Ehlayel

Censys has appointed Meriam ElOuazzani as its first dedicated Vice President for the Middle East, Turkey, and Africa (META) region. In her new role, Meriam will lead the company’s end-to-end regional growth strategy, including revenue expansion, partnerships and ecosystem building, as well as establishing the organization’s position as the default external attack surface intelligence layer for organizations across the region.

With over two decades of extensive experience in cybersecurity and enterprise technology, Meriam ElOuazzani has consistently built and scaled markets across the region, assembling the teams, channel ecosystems, and marketing blueprints required to do so. Her career trajectory reflects her strong regional leadership through her roles as Senior Regional Director at SentinelOne and, before that, multiple leadership roles at VMware across MENA. She has also previously led Regional Product Sales for Mobility across the Middle East at Cisco Systems. At Censys, Meriam will focus on expanding strategic partnerships across government and enterprises, including channel, MSSP, and hyperscaler alliances, to scale efficiently across diverse markets.

Meriam ElOuazzani, VP META, Censys said, “Over the past two decades in this region, I’ve witnessed firsthand how the right intelligence transforms the security operations entirely. Censys’s internet intelligence platform equips security teams with authoritative, real-time insight into exposure and adversary activity, replacing assumptions with actionable confidence. My mission is to establish Censys as a trusted partner across META, enabling the shift from reactive defense to proactive intelligence.”

Censys helps security teams identify exposures, monitor changes, and detect threats before they are exploited by continuously mapping internet-facing assets, services, and critical infrastructure. In the Middle East, Censys has already partnered with Rilian Technologies to bring its internet intelligence and ICS/OT capabilities to sovereign nations and critical infrastructure.

Censys has also appointed Rajaee Al-Dalgamouni as Regional Sales Director, META and Ahmed Ehlayel as Manager, Solutions Engineering, META to strengthen the regional team with Meriam.

AMIVIZ AND VERACODE PARTNER TO SECURE AI-DRIVEN SOFTWARE DEVELOPMENT ACROSS MEA

Customers will benefit from capabilities that streamline secure software development and strengthen application protection at scale

AmiViz, a leading cybersecurity and AI-focused value-added distributor, announced a strategic partnership with Veracode, a global leader in application risk management, to distribute Veracode’s platform and help organisations secure modern software at scale. The partnership will enable Veracode to expand its presence across the Middle East, East Africa, and Libya. The alliance between the trusted partners will leverage their complementary expertise to ensure customers receive the highest standards of software security.

AmiViz selected Veracode as a partner for its status as a pioneer in holistic application risk management, built on nearly two decades of proprietary data, expertise, and innovation. Veracode empowers development and security teams to collaborate seamlessly, enabling them to build, secure, and maintain software from code to cloud. Leveraging Veracode’s AI-powered remediation platform, organizations gain precise, actionable insights into exploitable risks, achieve real-time vulnerability remediation, and proactively reduce security debt at scale. This partnership underscores AmiViz’s commitment to integrating advanced security solutions that align with modern software development and operational needs.

“Application security has become a board-level priority as organizations embrace AI-driven development,” said Ilyas Mohammed, COO of AmiViz. “By partnering with Veracode, we are equipping our partners and customers with a proven platform that embeds security directly into development workflows, enabling faster innovation with reduced risk.”

The partner programme provides solutions and services to get partners up and running straight away, with minimal impact to their existing business.

“Our partnership with AmiViz empowers security leaders across the Middle East and Africa to find, fix, and govern application risk at scale using Veracode’s integrated software security solutions,” said Michael Steinmetz, Senior Vice President of EMEA & APAC at Veracode. “Together, we’ll deliver transformative technology innovations, an enhanced customer experience, and deep, technical expertise to help organisations strengthen their security posture.”


AI ACCOUNTABILITY RISES FOR UAE CIOS

98% of UAE CIOs say their professional reputation or career trajectory will be shaped by their success with AI, the highest globally

New global research from Dataiku, conducted by Harris Poll and titled The 7 Career-Making AI Decisions for CIOs in 2026, reveals that artificial intelligence is no longer just a business priority for CIOs; it is rapidly becoming their personal accountability test. Nowhere is the pressure more intense than in the UAE, where CIOs increasingly believe their careers, credibility, and organisational standing will be defined by how successfully they govern and deliver value from AI over the next 18 months.

Nearly all UAE CIOs (98%) say their professional reputation will be shaped by their success with AI, while 85% believe their role could be at risk if their organisation fails to deliver measurable business gains from AI in the next one to two years. This pressure is reinforced at the top, with 92% expecting CEO compensation to be directly linked to AI outcomes, signalling that accountability is cascading down from the boardroom.

This heightened scrutiny comes as UAE organisations surge ahead with AI adoption. Today, 65% of CIOs say AI agents are embedded in business-critical workflows, while reporting fewer day-to-day challenges with AI explainability than their global peers. Only 22% say they are frequently, or almost always, asked to justify AI outcomes they cannot fully explain (the lowest figure globally), suggesting a strong level of internal trust in AI-driven decision-making today.

However, the findings indicate that this confidence may mask growing exposure. The UAE ranks highest globally for concern that insufficient AI explainability could trigger a crisis that erodes customer trust or brand credibility, with nearly two-thirds (63%) saying this outcome is very likely or certain. At the same time, three-quarters of UAE CIOs say their organisation would face high financial distress if the “AI bubble” were to burst, underscoring just how mission-critical AI has become to enterprise success in the country.

The pressure on CIOs is further compounded by the rapid decentralised adoption of AI across the workforce. More than three-quarters (78%) say employees are creating AI agents and applications faster than IT teams can govern them, while only one in five report having complete oversight of all AI agents in use across the organisation. This dynamic leaves CIOs personally accountable for systems they may not fully control, increasing the importance of traceability, governance, and visibility.

Encouragingly, the research suggests UAE organisations are beginning to respond. Two-thirds (67%) of CIOs say their organisations always require human sign-off before AI systems take action in business-critical workflows, and the UAE ranks first globally for having formal, documented human-in-the-loop procedures. Meanwhile, 65% believe it is at least very likely, if not certain, that governments will introduce AI explainability requirements this year, reinforcing the belief that the next phase of AI advancement in the country will be defined less by experimentation and more by defensibility.

“CIOs are moving from experimentation into accountability faster than most organisations expected,” said Florian Douetteau, Co-founder and CEO of Dataiku. “The pressure is real, and the timeline is tight, but there is a path to success. It favours CIOs who act decisively now, building AI systems they can explain, govern, and stand behind before accountability is imposed rather than chosen.”

Despite the rising pressure, UAE CIOs remain cautiously optimistic. They are the most confident globally that their current AI strategies will remain valid over the next year, suggesting that while the stakes are high, many believe they are moving in the right direction, provided they can maintain control as AI adoption accelerates.

“For CIOs in the UAE, the conversation is shifting from ‘how fast can we deploy AI?’ to ‘how confidently can we stand behind it?’” said Sid Bhatia, Area Vice President & General Manager, Middle East, Turkey & Africa at Dataiku. “If 2024 was the year enterprises proved they could build with AI, and 2025 was the year they proved they could deploy it, then 2026 is the year they must prove they can govern, defend, and measure it, and do so at scale, under scrutiny, and with consequences attached. CIOs who focus on accountability and transparency now will be far better positioned to meet board expectations, regulatory scrutiny, and the realities of enterprise-wide AI adoption.”

ZOHO CORPORATION SURPASSES ONE MILLION CUSTOMERS GLOBALLY

MENA region becomes Zoho’s second fastest-growing market.

Zoho Corporation marked its 30th anniversary by announcing two major milestones: the company, comprising the Zoho, ManageEngine, Qntrl, and TrainerCentral brands, is now a trusted technology provider to more than one million paying customers and over 150 million users worldwide.

In 2025, Zoho Corporation recorded a 32% year-on-year increase in customers and a 20% rise in revenue. Zoho also revealed that the Middle East and North Africa is the company’s second fastest-growing market globally, with a 41% CAGR over the past five years. The UAE is among its top markets, recording customer growth at a CAGR of over 77% and revenue growth of 45% since 2021.

"Being bootstrapped, private, and built entirely in-house makes Zoho an outlier among competitors," said Sridhar Vembu, Co-founder and Chief Scientist, Zoho Corporation. "But vendors don't need our help, businesses do, which is why delivering customer value has, for 30 years, been Zoho Corporation's North Star. Before any innovation, strategy, or guiding principle becomes a product, pivot, or policy, it must first affirm the question, 'Will this help businesses?' We are incredibly grateful that companies around the world have responded so positively to our customer-first approach over the past three decades, and will continue to meet the evolving needs of businesses with powerful, scalable, and affordable solutions."

Since setting up its first regional office in Dubai, Zoho has strengthened its commitment through significant investments in local infrastructure, support, and product localisation as part of its ‘transnational localism’ strategy.

“Zoho’s expansion in the MENA region represents a significant milestone in our global journey. We are proud that our operations here, from our growing teams to strategic partnerships and local investments, have become a key driver of Zoho’s 30 years of success,” said Hyther Nizam, CEO of Zoho, Middle East and Africa (MEA). “Our commitment to the region goes beyond providing technology; it’s about empowering businesses, supporting digital transformation, and fostering innovation. The trust customers place in Zoho has been the foundation on which we continue to deliver scalable, secure, and locally relevant solutions that help organisations of all sizes thrive in an increasingly digital world.”

The company has committed AED 100 million to expand its presence in the UAE, including the setup of two data centres in Abu Dhabi and Dubai this year. Complementing this infrastructure, Zoho has invested nearly AED 80 million in strategic partnership initiatives in the UAE to drive digital transformation across the Emirates, collaborating closely with key government entities such as the Department of Economic Development (DED), International Free Zone Authority (IFZA), Dubai Culture, Shams Free Zone, Umm Al Quwain Chamber of Commerce and Industry and Ras Al Khaimah Economic Zone (RAKEZ).

Since expanding in the MENA region, Zoho has enabled over 10,000 businesses to adopt cloud technology through strategic partnerships with local governments and public- and private-sector organisations across many countries, including the UAE, KSA, Egypt, Bahrain, Qatar, Jordan, Morocco and Lebanon. These partnerships have been forged to support governments’ digitalisation agendas and incentivise businesses to adopt cloud technologies.

OPERATIONALISING INTELLIGENT UTILITIES

Through its CONNECT platform and AI-infused digital twin capabilities, AVEVA is enabling the utilities industry to transition from fragmented operations to unified, intelligence-driven systems.

There is a fundamental shift in progress within the utilities industry. As energy systems become more distributed, data-intensive, and sustainability-driven, the traditional approaches to managing infrastructure are increasingly falling short. A new operating model is emerging that is defined by real-time intelligence, continuous optimisation, and the ability to turn data into decisions at scale.

As Nayef Bou Chaaya, Vice President, Middle East, Africa, and Turkey at AVEVA, notes, “There is a growing urgency around sustainability targets, particularly in emissions tracking and reporting.”

Within this industry transition, AVEVA is positioning industrial intelligence as a foundational layer for utilities navigating this complexity. Through its CONNECT platform and AI-infused digital twin capabilities, the company is bringing together data, analytics, and operational workflows to help utilities move from fragmented, reactive environments towards more unified, decision-driven systems. As the Middle East advances its sustainability agenda, supported by initiatives such as the UAE’s Net Zero 2050 strategy, this shift towards more connected and adaptive operations is becoming increasingly relevant and urgent.

From visibility to operational intelligence

Across generation, transmission, and distribution networks, utilities already capture vast volumes of operational data, but much of it resides in disconnected systems. This fragmentation significantly limits organisations’ ability to translate data into timely and meaningful action. In complex environments, operators may have visibility into events as they occur, but they may not always have the context required to determine the next course of action or assess its urgency.

Digital tools have significantly improved visibility across operations, with dashboards, alerts, and monitoring systems now widely deployed. However, there is an increasing focus on moving beyond visibility towards making sense of data at scale.

AVEVA’s approach focuses on connecting data across IT, OT, and engineering environments into a unified operational layer. This reflects a broader emphasis on enabling more consistent, system-level understanding of operations, rather than isolated views tied to individual systems or functions.

Within this context, scaling digital capabilities beyond individual use cases appears to be an important factor in unlocking value. The shift from fragmented visibility towards more connected intelligence is increasingly shaping how utilities are evolving their operational models.

Digital twins: enabling real-time sustainability

At the core of this shift is the growing adoption of digital twins. “Digital twins are enabling utility organizations to transform sustainability tracking from periodic reporting into a real-time, data-driven exercise,” says Nayef. “AVEVA's digital twin solutions combine data, models, and analytics to enable predictive maintenance, optimizing generation, transmission and distribution for utility companies.”

Digital twins are evolving from static visualisation tools into continuously updated representations of physical assets and systems. By integrating operational data with engineering models and analytics, they provide ongoing visibility into performance and system behaviour.

In the case of utilities, this capability appears to be particularly relevant for sustainability and operational efficiency. The ability to monitor performance continuously allows organisations to identify inefficiencies, optimise asset usage, and align operations more closely with environmental targets.

“This helps in increasing the use of renewables and reducing the carbon footprint of existing facilities while strengthening regulatory compliance,” Nayef adds.

This suggests a broader shift in how sustainability is approached, which is moving from periodic reporting towards ongoing, operationally embedded tracking and optimisation.

Building a unified data foundation

While digital twins provide analytical depth, their effectiveness is closely tied to the availability of unified, high-quality data.

Utilities have traditionally operated across multiple domains, including IT, OT, and engineering systems. These domains often function independently, resulting in fragmented data environments that limit visibility and coordination.

“Power producers and utilities face the challenge of integrating renewables into generation portfolios and increasingly complex grids, alongside stringent sustainability mandates,” Nayef explains. “Our industrial intelligence platform, CONNECT, aggregates real-time data and offers powerful visualization and analysis capabilities, enabling utilities to optimize asset performance, reduce operating costs, and ensure reliability.”

CONNECT is designed to integrate data across these domains into a unified edge-to-cloud architecture. This enables more consistent access to near real-time information across sites, systems, and teams.

“Built for the cloud, CONNECT streamlines the flow of near-real-time data from diverse sources into a unified edge-to-cloud environment, giving teams flexible, secure access to high-quality data across sites, solutions, and trusted partners.”

This type of unified data foundation reflects a growing focus on enabling more coordinated and informed decision-making across the enterprise.

Connecting the value chain

Utility environments are increasingly described as interconnected systems that span generation, transmission, distribution, markets, and end users.

AVEVA’s broader perspective highlights the importance of connecting these elements into a coherent and adaptive system through technologies such as IoT, interoperable data platforms, digital twins, and analytics.

When these layers are connected, it becomes possible to create a shared view of operations across the organisation. This can support more consistent decision-making, improve coordination between functions, and enable faster responses to changing conditions.

The ability to connect operational data across the value chain also appears to play a role in improving efficiency and reliability. By enabling information to flow more freely between systems and stakeholders, utilities may be better positioned to manage complexity and respond to operational challenges.

Enabling operational resilience

Resilience continues to be a key focus area for utilities, particularly as operational environments become more dynamic.

Factors such as fluctuating demand, evolving infrastructure requirements, and changing risk landscapes are contributing to increased complexity. In this context, the ability to respond quickly and effectively to disruptions is becoming increasingly important. Integrated platforms and digital twins support this by combining real-time and historical data, enabling utilities to forecast demand, optimise resource allocation, and respond to operational changes more effectively.

Emerging approaches that combine edge-based decision-making with centralised analytics and modelling suggest a model where both local responsiveness and system-wide optimisation can be achieved. This reflects how operational intelligence is being applied across different layers of the energy system.

From pilots to platforms

Often, digital initiatives within utilities are implemented as targeted projects, addressing specific use cases or operational challenges. While these initiatives can deliver value, scaling them across the organisation can present challenges.

Fragmentation can arise when multiple tools and systems are deployed independently, limiting the ability to achieve system-level benefits.

A move towards platform-based approaches reflects an effort to address this fragmentation by providing a more integrated foundation for digital capabilities. This may involve aligning technology, processes, and governance structures to support scalability and consistency across the enterprise.

Sustainability as an operational priority

As utilities continue to pursue net-zero targets, sustainability is increasingly being approached as an operational priority.

Digital twins and unified data platforms enable continuous monitoring of emissions, energy usage, and system performance. This supports a shift from periodic reporting towards ongoing optimisation.

By integrating sustainability metrics into operational workflows, utilities may be able to align environmental objectives more closely with performance and efficiency goals.

Towards intelligent, connected utilities

The transformation underway in the utilities sector reflects a broader shift in how operations are structured and managed.

From fragmented systems to more unified platforms, and from reactive responses to predictive insights, the utilities industry is heading towards more intelligent and adaptive operating models. AVEVA’s approach of combining digital twins, unified data platforms, and AI-driven analytics, aligns with this shift, providing a framework for how utilities can connect data, systems, and decision-making.

As energy systems continue to evolve, the ability to translate data into actionable insight is emerging as a key capability. Operational intelligence is now viewed not as a nice-to-have layer, but as a core component of how next-generation utilities manage performance, resilience, and sustainability in parallel.


» CISO OUTLOOK

RETHINKING SECURITY IN AN IDENTITY-FIRST WORLD

Siddhesh Nagaonkar, CISO at Direct Honest Safe International Exchange FZE, highlights how evolving threats and AI-driven attacks are reshaping how organisations approach identity security.

Do you agree that identity has become the primary attack surface in today’s enterprise environments? What is driving this shift?

Absolutely. In the modern perimeterless enterprise, identity is the new firewall. The shift is driven by the rapid adoption of cloud-native architectures, remote work, and the proliferation of SaaS applications. When users, devices, and workloads are everywhere, traditional network boundaries vanish. Attackers have realized that it is far easier to log in using compromised credentials than to break in through hardened network defenses.

How are attacks such as MFA fatigue, credential phishing, and AI-driven deepfakes changing the nature of identity-based threats?

These attacks exploit the human element, the weakest link in the security chain. MFA fatigue turns a security control into a nuisance, tricking users into approving unauthorized access. AI-driven deepfakes and sophisticated phishing have reached a level of realism where traditional ‘spot the red flag’ training is no longer enough. We are moving from a world of technical hacking to one of psychological manipulation, where the scale and speed of attacks are now powered by AI.

Are organizations in the region adequately prepared to detect and respond to these evolving identity threats?

Preparation is a spectrum. While many organizations in the UAE and the broader region have implemented basic IAM and MFA, there is still a significant gap in Identity Threat Detection and Response (ITDR). Many are prepared for the known (standard logins) but struggle with the unknown, such as lateral movement after a credential compromise or session hijacking. We need to move beyond static access controls to continuous, behavior-based monitoring.

How significant is the challenge of identity sprawl across multicloud, SaaS, and partner ecosystems?

Identity sprawl is a silent killer of security posture. Managing thousands of identities across Azure, AWS, and dozens of SaaS platforms creates ‘dark identities’: unused or over-privileged accounts that are goldmines for attackers. Without a centralized identity plane, visibility is fragmented, making it nearly impossible to enforce a consistent security policy or perform effective audits.
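The audit logic behind flagging such ‘dark identities’ can be sketched in a few lines. This is an illustrative sketch only, not any vendor's tooling; the account fields (`last_login`, `is_admin`, `used_admin_rights`) and the 90-day dormancy threshold are assumptions for the example.

```python
# Hypothetical sketch: flag "dark identities" -- accounts unused for
# a long period, or holding admin rights they have never exercised.
from datetime import datetime, timedelta

def find_dark_identities(accounts, now=None, dormant_days=90):
    """Return names of accounts that are dormant or over-privileged."""
    now = now or datetime.now()
    flagged = []
    for acct in accounts:
        dormant = now - acct["last_login"] > timedelta(days=dormant_days)
        over_privileged = acct["is_admin"] and not acct["used_admin_rights"]
        if dormant or over_privileged:
            flagged.append(acct["name"])
    return flagged
```

In practice the same check would run against a centralized identity plane rather than a local list, which is exactly why fragmented visibility makes it hard to perform.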

What are the most common gaps you see in how organizations manage identity and access today?

The most prevalent gaps are over-privileged accounts (a lack of least privilege) and the neglect of non-human identities (service accounts, APIs, and bots). Additionally, many organizations fail to integrate identity into their broader SOC operations, treating IAM as an administrative task rather than a core security function.

What does an effective identity-first security architecture look like in practice?

It is built on the foundation of Zero Trust, meaning that security is enforced through continuous validation and controlled access. In practice, this involves a centralized identity provider (IdP) serving as a single source of truth for all identities, adaptive multi-factor authentication (MFA) that is triggered based on risk factors such as location, device health, and user behavior, and just-in-time (JIT) access that grants elevated privileges only when required and for a limited duration. This is further reinforced by continuous verification, where no session is assumed to be secure solely on the basis of the initial login, ensuring ongoing validation throughout the user’s interaction.
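The adaptive MFA element described above amounts to a risk-scoring decision at sign-in time. The sketch below is a minimal illustration of that pattern, not any IdP's actual policy engine; the signals, weights, and thresholds are assumptions chosen for the example.

```python
# Illustrative sketch of a risk-based step-up decision, of the kind
# an adaptive MFA policy engine might apply at each sign-in.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool      # device previously enrolled and healthy
    trusted_location: bool  # geolocation matches the user's usual region
    typical_hours: bool     # login time fits the user's normal pattern

def mfa_decision(ctx: LoginContext) -> str:
    """Return 'allow', 'step_up' (require an MFA challenge), or 'deny'."""
    risk = 0
    risk += 0 if ctx.known_device else 2
    risk += 0 if ctx.trusted_location else 2
    risk += 0 if ctx.typical_hours else 1
    if risk == 0:
        return "allow"      # low risk: seamless sign-in
    if risk <= 3:
        return "step_up"    # medium risk: adaptive MFA challenge
    return "deny"           # high risk: block and alert
```

A real deployment would draw these signals from device posture and threat-intelligence feeds, and continuous verification means the same evaluation repeats during the session, not only at login.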

How should organizations integrate identity with broader security frameworks such as endpoint, network, and cloud security?

Identity should be the connective tissue. For example, an endpoint security alert (e.g., malware detected) should automatically trigger a reauthentication requirement or a total lockout of that user’s identity across the network and cloud apps. This ‘XDR plus identity’ approach ensures that a compromise in one silo is instantly mitigated across the entire ecosystem.
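The glue between an endpoint alert and an identity-level response can be expressed as a simple mapping from alert severity to identity actions. This is a hypothetical sketch: the alert fields, identity-store structure, and action names are all illustrative, standing in for real EDR and IdP API calls.

```python
# Hypothetical glue logic: an endpoint detection triggers identity-level
# containment, so a compromise in one silo is mitigated ecosystem-wide.
def respond_to_endpoint_alert(alert: dict, identity_store: dict) -> list:
    """Map endpoint alert severity to identity actions for the affected user."""
    actions = []
    user = alert["user"]
    if alert["severity"] == "critical":           # e.g. active malware
        identity_store[user]["sessions"] = []     # revoke all live sessions
        identity_store[user]["locked"] = True     # lock the account outright
        actions += ["revoke_sessions", "lock_account"]
    elif alert["severity"] == "high":
        identity_store[user]["reauth_required"] = True
        actions.append("force_reauthentication")
    return actions
```

In production this logic would live in a SOAR playbook calling the EDR and IdP APIs, but the design choice is the same: the identity plane, not the endpoint, is where containment is enforced.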

How can AI be leveraged to strengthen identity security, and where does it introduce new risks?

AI is our greatest ally in behavioral analytics, detecting an ‘impossible travel’ scenario or a login at 3 AM that deviates from a user’s normal pattern. However, it introduces the risk of AI-powered social engineering and gives attackers the ability to automate credential stuffing at an unprecedented scale. We are in an AI arms race where our defensive models must be faster and smarter than the offensive ones.

ALIGNING TECHNOLOGY WITH OUTCOMES

As enterprises shift towards integrated, outcome-driven technology environments, MBUZZ is repositioning itself in the market. Sabir Saleem, CEO at MBUZZ, highlights how the company is expanding its focus across AI, datacentre, networking, security, and intelligent computing to align capabilities with real business requirements.

How is MBUZZ redefining its place in the market as a provider of complete, business-aligned technology solutions?

MBUZZ’s evolution has been both deliberate and market-led. Having built a strong foundation in telecommunications over more than a decade, the company has progressively expanded its capabilities in line with how technology demand has matured across the region. Today, that means going well beyond product availability into areas such as HPC, AI infrastructure, client compute, enterprise storage, cybersecurity, and smart city solutions. This transition did not happen by accident; it has been driven by a clear strategic direction, a committed team, and a deep understanding of where the market is moving.

What we have consistently observed is that businesses are no longer looking for isolated technologies. They are looking for complete, end-to-end solutions that can support performance, scalability, resilience, and long-term business value. MBUZZ has responded to this shift by strengthening its solution capabilities, aligning closely with market trends, and building dedicated business units around high-growth, high-impact technology domains. In parallel, we have formed strategic partnerships with leading global brands that complement this vision and enable us to deliver more relevant, workload-focused solutions into the market.

As a result, MBUZZ today operates as a business-aligned technology solutions provider. Our solutions are designed to serve a broad spectrum of users—from public sector and enterprise environments to private businesses, professionals, and even end consumers. Our role is to bring together the right mix of technologies, expertise, and ecosystem partnerships to help customers build infrastructure that is not only technically sound, but also aligned with the realities of modern digital enterprise. That is where we believe our value increasingly lies — in helping the market move forward with greater clarity, integration, and confidence.

As enterprises increasingly invest in integrated, outcome-driven technology environments, how is MBUZZ positioning itself as a solutions provider across the domains of AI, datacentre, networking, security, and advanced computing?

As enterprises increasingly invest in integrated, business-aligned technology environments, MBUZZ is positioning itself with a clear objective: to bridge the gap between business needs and the technologies required to address them effectively. Our role is not simply to offer access to products, but to create a structured pathway through which customers can identify, deploy, and scale the right solutions across AI, datacentre, networking, security, and advanced computing. That is enabled by a combination of strong regional channel reach, responsive customer support, certified integration capabilities, strategic global partnerships, and an ecosystem designed to make even geographically complex deployments more seamless and practical.

A key part of this approach is MBUZZ Labs, which reflects how seriously we take end-to-end solution building. Through MBUZZ Labs, we provide a high-value environment for advanced integration, certified customization, solution validation, and functional alignment—ensuring that technologies are not only selected correctly, but engineered to perform the way the business actually requires. From infrastructure tuning and workload-specific configuration to interoperability planning and solution-level optimization, MBUZZ Labs strengthens our ability to approach customers with greater technical depth and a more complete execution model. It is an important extension of our strategic approach, allowing us to move beyond supply and into true solution enablement.

This is ultimately what defines MBUZZ’s position in the market. We have built a portfolio and partner ecosystem that are closely aligned with the priorities of the region, and we support that with the operational structure, technical capability, and market understanding needed to turn those technologies into meaningful outcomes.

How is the partnership with NVIDIA advancing its strategic AI agenda in the region, and what impact is this creating for partners and customers investing in AI-led growth?

MBUZZ’s partnership with NVIDIA is central to how we are advancing our AI agenda in the region. NVIDIA today offers a full-stack AI ecosystem spanning accelerated computing, AI software, DGX platforms, AI factories, high-performance networking, professional visualization, RTX workstations, virtual GPU and virtual workstation technologies, cloud AI, and enterprise AI deployment frameworks. That breadth is important because customers are entering AI from very different points — from gaming and creator platforms, graphics and laptops, and professional workstations, to AI-ready data centers, hybrid AI workspaces, cloud-connected infrastructure, and large-scale enterprise AI environments.

What makes the partnership especially relevant for MBUZZ is that it allows us to support the AI journey across multiple layers. For production AI, NVIDIA AI Enterprise provides an end-to-end software platform with frameworks, NIM microservices, orchestration, and infrastructure management for scalable deployment. For organizations building AI factories and model development environments, the NVIDIA DGX platform delivers full-stack infrastructure optimized for enterprise AI workflows. At the operational layer, NVIDIA Run:ai adds GPU orchestration and resource optimization, helping enterprises improve utilization, manage AI workloads dynamically, and scale across hybrid environments more efficiently. At the user and workspace layer, NVIDIA RTX Virtual Workstation, vGPU software, AI Virtual Workstation, and RTX PRO workstations extend GPU acceleration into distributed environments for AI development, simulation, rendering, visualization, and hybrid VDI use cases.

For MBUZZ, this creates real impact for both partners and customers. It means we can engage across the full NVIDIA ecosystem — from software and tools, networking, embedded and edge systems, data center infrastructure, cloud-aligned AI, graphics and GPUs, laptops, gaming and creative platforms, and professional workstations — with solutions that are technically aligned and deployment-ready. In practical terms, it strengthens our ability to help customers move from AI interest to AI execution with architectures that are more scalable, better managed, and far more relevant to real business adoption across the region.

Having built a strong focus across AI, HPC, and enterprise storage, how is MBUZZ now approaching cybersecurity as an important extension of what it offers to the market?

MBUZZ views cybersecurity as a natural and necessary extension of its broader technology vision. As digital environments become more complex, security must sit within the larger infrastructure strategy rather than outside it. Through our partnership with Acronis, we deliver an all-in-one cybersecurity and data protection platform that brings backup, disaster recovery, endpoint protection, anti-malware, ransomware protection, Microsoft 365 protection, email security, compliance, patch management, and secure remote workload access together under a single, unified console for simplified management and enhanced protection.

"Our role is to bring together the right mix of technologies, expertise, and ecosystem partnerships to help customers build infrastructure that is not only technically sound, but also aligned with the realities of modern digital enterprise."

Through Cloudmon, we provide a unified observability platform that enables proactive monitoring of IT infrastructure across cloud and on-premises environments. Its capabilities include digital experience monitoring (DEM) for end-user performance, network monitoring with topology and path tracing, network traffic analysis (NTA), availability monitoring of applications and endpoints, server and virtualization monitoring, desktop monitoring, syslog and log analytics, and network configuration management (NCM). Cloudmon also offers centralized dashboards, intelligent alerting, auto-remediation, and deep visibility into application, network, and user experience metrics to improve service levels while reducing operational costs. For us, this is not just about adding another business unit — it is about strengthening our ability to deliver complete, resilient, and business-aligned solution environments for the region.

MBUZZ today operates across multiple technology domains and vendor ecosystems. How is the company bringing cybersecurity solutions together to create more cohesive, workload-focused solutions for partners and end customers?

MBUZZ is bringing cybersecurity solutions together by designing them around the customer’s actual workload rather than around isolated tools. Through Acronis, for example, we can unify backup, disaster recovery, cybersecurity, and endpoint management within a single operational framework, which is especially relevant for service providers, businesses, and distributed work environments.

This enables us to build more cohesive solutions tailored to specific use cases. For Microsoft 365 environments, this includes protecting Exchange Online, OneDrive, SharePoint, and Teams with centralized backup and recovery. Similarly, for Google Workspace environments, it extends to securing Gmail, Google Drive, Google Calendar, and Contacts, ensuring comprehensive data protection and seamless recovery across platforms.

For remote and distributed operations, Acronis Cyber Protect Connect adds secure remote access and workload management across Windows, macOS, and Linux. For resilience-led environments, hybrid and cloud disaster recovery can also be integrated into the same protection model.

What this means in practice is fewer security silos, better operational visibility, and a more workload-focused cybersecurity architecture. Instead of treating backup, endpoint protection, remote access, patching, and recovery as separate conversations, MBUZZ approaches them as part of one business continuity and cyber resilience strategy. That is how we are creating more practical, scalable, and business-aligned cybersecurity solutions for partners and end customers across the region.

As the regional technology landscape becomes more complex and opportunity-rich, how is MBUZZ defining its key role in helping customers navigate this shift through more complete and future-ready solutions?

MBUZZ is defining its leadership role by staying close to how technology is actually being adopted in the market. As customer environments become more complex, the need is no longer for isolated products, but for solution paths that are better aligned to workload, scale, integration, and long-term business value. Our approach is built around understanding those shifts early, structuring dedicated business focus around them, and translating them into practical offerings across AI, HPC, cybersecurity, enterprise storage, networking, client compute, and smart infrastructure.

What makes this relevant in real market terms is the way MBUZZ operates. We combine strong vendor alignment, regional channel reach, technical integration capability, and solution-focused execution to help customers move from requirement to deployment with greater clarity. Through strategic partnerships, certified integration support, and MBUZZ Labs, we are able to approach opportunities not just from a product standpoint, but from an architecture and functionality standpoint — validating how technologies fit together, how they perform in real use cases, and how they can be adapted to the needs of enterprises, public sector organizations, professionals, and consumers.

Our leadership role is really being shaped by this ability to connect market demand with workable technology outcomes. From AI-ready infrastructure, hybrid workspaces, cyber-resilient environments, to high-performance enterprise platforms, MBUZZ’s strategy is to bring together the right technologies, the right expertise, and the right ecosystem support in a way that is scalable, future-ready, and grounded in real business needs across the region.

"From infrastructure tuning, workload-specific configuration, interoperability planning, to solution-level optimization, MBUZZ Labs strengthens our ability to approach customers with greater technical depth and a more complete execution model."

FROM AI TOOLS TO AI WORKERS

Sameer Joshi, Digitalization Director, NADEC, explores how the rise of AI agents is redefining the nature of work and what it means for organisations preparing to manage a hybrid workforce of humans and machines.

For the past two years, companies have been asking one question: How can AI help our employees work faster? But a far more disruptive question is now emerging. What happens when AI stops assisting work — and starts doing the work itself? Because the next phase of enterprise AI is not just about tools. It’s about AI agents becoming part of the workforce.

The Shift Most Leaders Are Missing

Most organizations still think about AI in one of three ways:

• a productivity tool
• a chatbot interface
• a digital assistant

And in many cases, that’s exactly how AI is being deployed today. Copilots that summarize documents. Assistants that draft emails. Systems that help analyze data. But something more powerful is quietly emerging.

AI agents.

Systems that can plan tasks, interact with enterprise systems, execute workflows, and deliver outcomes with minimal human intervention. This is not just an improvement in software. It represents a fundamental shift in how work gets done inside organizations.

From Copilots to Agents

To understand the difference, it helps to compare two models.

The Copilot Model:

AI assists a human performing a task. The human remains the primary executor.

Examples include:

• writing assistance
• meeting summaries
• code suggestions
• data insights

In this model, AI augments human work.

The Agent Model

AI performs tasks on behalf of the human.

Examples already emerging include:

• AI agents resolving customer service tickets

• AI agents generating and updating reports

• AI agents monitoring supply chain risks

• AI agents coordinating workflows across systems

In this model, AI is no longer just augmenting work. It is executing work.

The Real Implication

Once AI begins executing tasks autonomously, the organizational question changes.

The challenge is no longer: “Where can we use AI?” The real question becomes: “Where will AI become part of the workforce?”

This shift has enormous implications. For the first time, organizations may have to manage two types of workers simultaneously:

• human employees
• AI agents

And the structures most companies rely on today were never designed for this reality.

The Questions Leaders Are Not Asking Yet

If AI agents become operational workers, entirely new questions emerge. Who assigns work to AI agents? Who supervises them? How do we measure their performance? Who is accountable when an AI agent makes a decision? And perhaps most importantly: How do humans and AI work together inside the same operating model?

These are not just technology questions. They are leadership and organizational design questions.

The Next Transformation

Every major technology shift changes how work is organized. The industrial revolution changed physical labor. The internet transformed communication and knowledge work. AI may be the first technology shift that changes who the workers actually are. Not just humans. But humans working alongside intelligent systems.

Closing Insight

The organizations that succeed in the AI era will not simply deploy better models. They will redesign how work is structured across humans and machines. In other words, they will learn how to manage an AI workforce.

ENABLING AI-READY BUSINESS COMPUTING

As enterprise and consumer expectations shift towards mobility, performance, and AI-driven productivity, ASUS is evolving its commercial laptop strategy in the region. Tolga Özdil, Regional Commercial Director, Middle East, Turkey & Africa (META) at ASUS, outlines how the company is aligning its portfolio around AI-ready devices, hybrid work demands, and secure, high-performance computing.

How is ASUS positioning its laptop portfolio in the Middle East amid evolving enterprise and consumer demands?

We are seeing a shift towards flexible work models and AI-driven productivity. With this in mind, ASUS is aligning its commercial portfolio to cater to professionals and businesses who require performance and power in one compact package. Here in the Middle East, where high-mobility roles are becoming the norm, we continue to support businesses with devices that are durable, long-lasting, and secure, and that include collaborative tools fit for the demanding workloads of distributed teams and modern workplaces. We are also positioning our portfolio as AI-ready PCs to make them future-ready as workers rely on AI tools for increased productivity.

The ExpertBook Ultra series targets premium business users. How do you see demand for this series among business professionals in the region picking up since launch?

We designed the ExpertBook Ultra to meet the needs of professionals who are looking for a device that is powerful enough to handle complex workflows but also portable enough to be brought anywhere. Because more businesses are now adopting hybrid work, demand for this category of notebooks has increased. There is an increased interest in the device from senior executives, consultants and the like, and this response has been very encouraging to see. Moreover, we also see the device as a great addition to any modern workplace.

Overall, how has ASUS’s commercial laptop business grown in the Middle East over the past year?

Our commercial business growth is driven by enterprises investing in devices to support AI workloads locally while keeping sensitive data secure. We are also supporting the public sector in rolling out its digital transformation initiatives, which require secure and scalable devices to support smart services and paperless operations. Another factor in our growth is the increasing refresh cycle for organizations upgrading to devices that support AI-driven platforms, enhanced security and modern collaboration tools. Our energy-efficient and responsibly designed devices have been a top choice for organizations looking to reduce their environmental impact.

The ExpertBook lineup emphasizes on-device AI capabilities with dedicated NPUs. How do you see enterprises in the region leveraging on-device AI to improve productivity while addressing concerns around data privacy and performance?

AI tools were initially cloud-based, which requires constant connectivity and raises concerns about security and privacy. Because of this, on-device AI is becoming a preferred choice for enterprises, and a practical advantage. Dedicated NPUs on laptops allow local AI processing, which reduces latency and bandwidth dependency. AI tasks like real-time transcription and automated workflows are more responsive, and data is processed locally, so there is less risk of data exposure, which is beneficial in industries like healthcare and BFSI.

Apart from AI, discuss very briefly some of the other performance and design enhancements that Asus has introduced in its business laptops in the past year or so.

Beyond on-device AI support, we have designed our ExpertBook line to be the ultimate business device. We use magnesium-aluminum alloys that make the device lightweight and durable. Plus, our devices meet military-grade standards, so they’re built to withstand daily wear and tear. Even with its compact form, our devices have an advanced cooling architecture that delivers consistent performance plus long battery life for all-day usage. Lastly, we’ve fitted our notebooks with a fingerprint reader, BIOS protection and firmware-level security that can detect malicious threats and automatically restore the firmware to the last trusted version.

ASUS Expert Connect, held last December, brought together end-users across sectors such as healthcare, BFSI, education, and enterprise. How important are such experiential events in moving conversations beyond device specifications to real business outcomes?

Events like ASUS Expert Connect allow us to engage with our customers directly and learn firsthand about how our products are helping them, and gather essential feedback that we can address in future products. It also provides an opportunity for us to talk with our partners and know what challenges they are facing and how they can be solved, leading to more informed decisions and forging stronger relationships. We do these events with the aim of giving our customers and partners a better understanding of the solutions that we offer and the results that they can expect.

How is ASUS tailoring its commercial device strategy to support national AI ambitions and industry-specific use cases in the region?

ASUS fully supports digital transformation initiatives, which is why we tailor our commercial offerings to fit the needs of the region’s long-term goals. Many countries in this region have national AI agendas, where they are embedding AI in the public sector, healthcare, education and more. We collaborate with our ecosystem partners to deliver AI-ready hardware and software solutions that are ready for their use cases instead of just for general productivity. It’s all about making sure that the technology we provide is future-ready and reliable and delivers real value.

"AI tasks like real-time transcription and automated workflows are more responsive, and data is processed locally, so there is less risk of data getting exposed, which is beneficial in industries like healthcare and BSFI."

BRINGING AI TO DATA, ANYWHERE

Ahmad Shakora, Group Vice President at Cloudera, discusses how Cloudera is enabling organisations to bring AI to their data while maintaining governance, flexibility, and hybrid control.

How would you describe the current state of AI adoption in the region?

AI adoption in the region is growing, and it’s growing at a faster pace than any other region in the world. Cloudera is right in the heart of it.

AI, at the end of the day, is powered by data. Where we are uniquely positioned is in our ability to run data anywhere and bring AI to your data, whether it’s on-premise, on the cloud, any cloud, or private cloud.

The acceleration of AI adoption is significant, and we’re in a unique place where it’s really taking off. We are positioned very well in the region to help customers adopt AI faster.

Is this growth concentrated in the UAE and Saudi Arabia, or are you seeing it across the region?

The UAE is obviously leading the way, followed by Saudi Arabia. But we are definitely seeing adoption across the entire region: Turkey, Qatar, and Africa, including South Africa.

So it’s not limited to just one or two markets; it’s happening broadly across the region.

Has agentic AI created additional momentum for adoption?

Absolutely. When you look at AI and where you eventually want to get to, it’s about creating agents that can help accelerate AI use and promote AI-driven outcomes faster.

Agentic AI and generative AI are both at the heart of this. Agentic AI is where you want to get to in order to drive these use cases and start extracting value from the AI solutions you are developing.

You mentioned bringing AI to data; how does Cloudera approach this?

It’s about bringing AI to data. That’s where we are uniquely positioned.

With some of the recent acquisitions and where we are heading, we are enabling customers to adopt AI without risking data sovereignty or compliance.

We allow customers to bring AI into their data without having to move the data around. This removes complexity, reduces latency, and accelerates how quickly they can adopt AI.

What are some of the key use cases you are seeing across customers?

We work across all major industries in the region. In telecom, we have most of the top customers. In BFSI, we are very strong; most of the top banks across the region are customers.

We are also growing very fast in the public sector and government.

Use cases include monetisation, risk mitigation, customer 360, churn analysis, next-best action, and revenue optimisation.

AI is at the heart of all of this, but data is the foundation. Customers are generating large volumes of data every day, and we play a key role in helping them derive business value from it.


How do you help customers manage governance while scaling AI deployments?

If you don’t have secure, governed data, then it means nothing.

This is one of our key strengths. With our shared data experience, we help customers master and secure their data no matter where it sits, on cloud, on-prem, or multi-cloud.

We provide a consistent and secure experience so customers can rely on trusted data to power their AI models.

How are you contributing to talent development in the region?

Talent development is always top of mind.

We are working with local institutions and our education ecosystem to drive these initiatives. We also work closely with our partner ecosystem, which is one of the strongest in the region.

Through our partners, we support customers and help develop young talent. This will continue to be a focus area as we grow our presence and market share.

How important are partnerships to your strategy in the region?

Partnerships are critical to our success. They help us scale and meet customer demand.

We work with global system integrators like Accenture and TCS, and we have strong partnerships with Dell, AWS, and NVIDIA.

At the same time, we also work with local partners in Saudi Arabia and the UAE. This gives us a combination of global scale and local presence, which is very important.

How strategic is the region for Cloudera, and what about your expansion in Saudi Arabia?

The region is one of the fastest-growing markets for Cloudera.

We announced the opening of our office and legal entity in Saudi Arabia, which reflects our commitment to the market and our customers there.

We are continuing to invest in the region as we accelerate growth. Having been part of this journey, I can say we are very well positioned to continue delivering strong momentum.

What is your key message to customers and partners at the event?

The key message is that we are just getting started.

We are uniquely positioned with our capabilities. We are the only provider that can truly offer hybrid capabilities, giving customers choice without vendor lock-in.

We allow customers to bring AI workloads to their data in a secure and governed way, ensuring trusted data and trusted AI models.

The region is seeing strong investments in AI, and we are very well positioned to accelerate on that—delivering greater value to customers and strengthening our partner ecosystem.

Who do you see as your competition?

When you look at the broader cloud space, you will see hyperscalers and cloud-native players.

But when you focus on hybrid capabilities, the competition narrows significantly.

I’m not sure there is anyone who can cover the entire data journey end-to-end, across on-prem, cloud, multi-cloud, and hybrid, while enabling AI in the way we do. That’s where we are uniquely positioned.

Our foundation is open source, which gives customers flexibility and choice. They can deploy anywhere and integrate with anyone. If a customer wants to use their own storage, we support that. It’s about giving customers openness and avoiding vendor lock-in as they accelerate their AI journey.

"With our shared data experience, we help customers master and secure their data no matter where it sits, on cloud, on-prem, or multi-cloud.We provide a consistent and secure experience so customers can rely on trusted data to power their AI models."

LLMS IN THE SOC: BEYOND BENCHMARK SCORES

As AI adoption accelerates across cybersecurity, enterprises are increasingly relying on benchmarks to evaluate model performance. However, these metrics often fail to reflect how AI systems behave in real-world security operations. Gabriel Bernadett-Shapiro, Distinguished AI Research Scientist at SentinelOne, examines the gap between benchmark scores and actual security outcomes.

For security leaders today, it is impossible to escape the flood of claims around AI. Every new model comes with colorful charts, long benchmark names, and bold numbers that promise state-of-the-art performance. On paper, it looks impressive, but in a live SOC, the reality is far more complicated.

Over the past few years, LLMs have been positioned as game changers for cyber defense. They are marketed as tools that will identify and patch vulnerabilities, write secure code, shorten investigations, and remove the friction from daily security processes. Their value proposition is to raise the cost for attackers and reduce it for defenders.

To support these claims, vendors rely heavily on benchmarks. They assume that if a model scores highly on a benchmark that is labelled “security”, then it must be ready to help real analysts in real SOCs. Independent research, such as the SECURE benchmark, highlights that many existing evaluations focus on general language abilities and fail to assess realistic security understanding and reasoning, underscoring the gap between benchmark scores and operational cybersecurity value. We don't have a shortage of benchmarks, but there is a mismatch between what they measure and what defenders actually need.

From neat exams to messy reality

The first generation of LLM benchmarks in 2023 focused on multiple-choice exams over clean text. These tests were useful in the early days. They provided simple, reproducible numbers that allowed researchers to compare models. Over time, two things happened. First, models grew more capable, and scores started to bunch at the top. Many benchmarks became “saturated”, with leading models all scoring close to perfect. Second, the tests themselves drifted further away from the day-to-day experience of a SOC analyst.

In response, more specialized cybersecurity benchmarks emerged. Some simulate realistic logs in a cloud tenant. Others transform malware sandbox reports or cyber threat intelligence (CTI) documents into multiple-choice questions. A few attempt to map model behavior to risk themes such as phishing, exploit generation, or insecure code suggestions. Taken together, these efforts are a step forward from generic language exams. However, they share an important limitation: they measure isolated tasks, not the continuous workflows that define real security operations.

Across the industry, several well-known benchmarks have tried to evaluate LLM performance for cybersecurity, including Microsoft’s ExCyTIn-Bench, Meta’s CyberSOCEval and CyberSecEval 3, and Rochester Institute’s CTIBench. Each offers valuable insights, yet all share the same constraint: they measure tasks, not the real, continuous workflows that define SOC and CTI operations.

Microsoft’s ExCyTIn-Bench simulates a realistic Azure-like environment and sees whether LLM agents can run multi-step investigations across logs. Despite the controlled setup, models struggled, with top scores below 40%, suggesting that even structured investigations are challenging.

Meta’s CyberSOCEval transforms malware sandbox logs and CTI reports into multiple-choice questions. While models perform above random baselines, they still miss most malware classifications and nearly half of threat-intel questions, showing that LLMs can surface signals but cannot reason like analysts.

CTIBench, designed for threat-intelligence workflows, assesses tasks such as mapping vulnerabilities, interpreting attacker techniques, and assigning severity. These are useful knowledge checks, but don’t reflect the extended investigations, evolving intelligence, and judgment calls that define real CTI work.

Together, these benchmarks have the same main issue. They reduce complex, multi-stage investigations into isolated exam-style questions with predefined answers, a structure that does not match the messy reality of live security operations.

Security is a workflow, not a question bank

In the SOC, work is not a clean PDF with a single question at the end. It is a queue of alerts from different tools, fragments of telemetry, chat messages between teams, and external intelligence that may or may not be relevant. Analysts must triage, correlate, pivot, escalate, and sometimes start again when a hypothesis fails. Most current benchmarks compress this reality into a static question-and-answer format. The model is given a carefully prepared slice of context and asked to pick the correct answer or generate a short response. There is no cost for missing a subtle but critical indicator. There is no mechanism for the model to decide when to stop, when to escalate, or when to challenge the premise of the question itself.

Some log-based benchmarks go further by placing the model in a constrained environment and asking it to issue queries over time. Even there, the fundamental unit of evaluation remains a question with a predefined ground truth answer. The model is not asked to run an incident to completion, to weigh conflicting signals, or to trade speed against certainty the way a human team must.

In other words, we are measuring how well models perform on security-themed tasks, not how much they improve the end-to-end security workflow. For defenders, that distinction matters. The goal is not to pass the exam but to reduce the overall business risk.

General reasoning is not analyst reasoning

Another pattern that appears across several evaluations is the gap between general reasoning and security-specific reasoning. Models that perform extremely well on coding and math benchmarks do not automatically excel at malware analysis, CTI interpretation, or multi-step investigations over heterogeneous logs. In multiple studies, reasoning models that think in more steps don't show the same uplift on security tasks as they do on pure math or programming challenges.

This tells us something important. LLMs can store a significant amount of security knowledge. They can recognize familiar patterns and restate context. But that does not mean they can think like an experienced analyst who has spent years learning which questions to ask, which signals to trust, and which alerts can safely be discarded. If organizations assume that high scores on generic reasoning benchmarks translate directly into analyst-level performance, they risk overestimating what the model can safely automate.

"LLMs are here to stay in cybersecurity. The question is not whether they will be used, but how intelligently and safely we deploy them."

The illusion of precise numbers

Benchmarks are attractive because they seem objective, but practically, many evaluations have gaps in basic statistical hygiene.

Results are often reported from a single run with one set of parameters, without confidence intervals or robustness checks. Contamination, where benchmark data overlaps with what the model saw during training, is rarely tested in a systematic way. Many benchmarks also rely on LLMs to generate questions or grade answers, frequently using a model from the same vendor that is being evaluated.

This creates a closed loop. If the judging model has a certain bias or blind spot, that bias is baked into the evaluation. If the prompts for the judge are public, it becomes easier to tune a model or a prompting strategy that performs well on the benchmark, without necessarily performing better in real incidents. Numbers that appear precise may be fragile under small changes in setup.

For governance decisions that affect customers, regulators, and critical infrastructure, that is not a strong foundation.
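The single-run problem is easy to make concrete. The following is a minimal sketch, using only the Python standard library and illustrative pass/fail scores (not any published benchmark’s data), of a percentile bootstrap confidence interval around a headline accuracy; the width of the interval is exactly what a single reported number hides.

```python
import random

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a mean score."""
    rng = random.Random(seed)
    n = len(scores)
    # Resample with replacement, record each resample's mean, then
    # read off the empirical (alpha/2, 1 - alpha/2) percentiles.
    means = sorted(
        sum(rng.choices(scores, k=n)) / n for _ in range(n_resamples)
    )
    return (
        means[int((alpha / 2) * n_resamples)],
        means[int((1 - alpha / 2) * n_resamples) - 1],
    )

# Illustrative: 21 of 30 benchmark items answered correctly in one run.
scores = [1] * 21 + [0] * 9          # headline accuracy: 0.70
low, high = bootstrap_ci(scores)
print(f"accuracy 0.70, 95% CI roughly [{low:.2f}, {high:.2f}]")
```

With only 30 items, the interval spans tens of percentage points, which is why two models separated by a few points on a leaderboard may not differ meaningfully.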

Single vendor sandboxes, multi-vendor realities

Real SOCs operate across hybrid environments and a patchwork of tools like on-prem infrastructure, cloud providers, identity platforms, EDR agents, ticketing systems, collaboration tools, wikis, and more. Telemetry is uneven, sensors are sometimes misconfigured, and documentation is incomplete.

Most benchmarks do not reflect this complexity. They focus on a single fictional tenant, very few logs, or a curated collection of reports. PDFs and JSON logs are flattened into text and turned into questions. Time and change are not included. New intelligence never arrives halfway through an investigation. Attackers never adapt to the model’s behavior. These simplifications are understandable in a research context, but they are not enough to understand how an LLM will behave as a copilot for live operations or as a component in an autonomous detection and response stack.

What defenders should really be asking

None of this means that LLMs have no place in security. They can add value as assistive tools, helping to summarize long reports, standardize narrative formats, generate candidate hypotheses, or draft initial responses with human supervision.

The issue is not whether we should use AI but whether our current benchmarks allow us to make informed decisions on how it should be used.

For that, defenders need evaluations that measure workflow-level outcomes, such as time to detect, time to contain, and mean time to remediate, not just answer accuracy on isolated questions. Evaluations should also reflect heterogeneous, noisy environments with incomplete and conflicting data, not only clean and fully labelled examples. They should assign real costs to missed signals, unnecessary data collection, and incorrect escalation decisions, so that trade-offs become visible. They should treat checking and validating as a necessary and sometimes optimal behavior, rather than forcing a confident answer every time. Lastly, they should use diverse, well-calibrated judges and incorporate human validation where the risk is high.
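The contrast with answer accuracy can be made concrete. Below is a minimal sketch, with hypothetical incident timestamps in minutes (the field names are illustrative, not any product’s schema), of how two of these workflow-level metrics roll up across incidents.

```python
from statistics import mean

# Hypothetical incidents: minutes from activity start to detection,
# and from activity start to final remediation. Illustrative only.
incidents = [
    {"detected": 45, "remediated": 180},
    {"detected": 10, "remediated": 60},
    {"detected": 120, "remediated": 600},
]

def mean_time_to_detect(incidents):
    """Average minutes from activity start to detection (MTTD)."""
    return mean(i["detected"] for i in incidents)

def mean_time_to_remediate(incidents):
    """Average minutes from detection to remediation (MTTR)."""
    return mean(i["remediated"] - i["detected"] for i in incidents)

print(f"MTTD: {mean_time_to_detect(incidents):.0f} min, "
      f"MTTR: {mean_time_to_remediate(incidents):.0f} min")
```

A model that scores well on exam-style questions but never moves numbers like these has not improved the workflow.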

As vendors, we also have a responsibility to align our claims with this reality. Benchmark charts are useful, but they should never be the only input to a deployment decision. Security teams should feel empowered to ask how a model was evaluated, what assumptions were made, and how those conditions compare to their own environment.

Raising the bar for AI in the SOC

LLMs are here to stay in cybersecurity. The question is not whether they will be used, but how intelligently and safely we deploy them.

Today’s benchmarks represent an important step beyond generic language testing. Yet they also reveal how far we still have to go: multi-hop investigations remain difficult, general reasoning skills do not automatically translate into security reasoning, and evaluation pipelines can introduce their own blind spots.

Until we design benchmarks that reflect the lived experience of defenders, high scores on security exams will remain a poor proxy for real-world uplift. Organizations across the META region and beyond should choose AI systems that are tested against operational outcomes, not just academic metrics, and should make sure that every claimed improvement can be traced back to tangible risk reduction. Only then will AI in the SOC move from promise to proof.

"We don't have a shortage of benchmarks, but there is a mismatch between what they measure and what defenders actually need."

WHY DIGITAL TWINS ARE NOW A BUSINESS IMPERATIVE IN THE GCC

Tommaso Stefano Tini, Head of Digital Twin Market Growth and Consulting at Omnix International, writes that digital twins are no longer viewed as optional technologies but as foundational capabilities that support better governance and performance across complex built environments.

Digital twins are moving from research and testing to strategic necessity, and the GCC is no exception. Regional transformation agendas such as Saudi Vision 2030 and UAE Centennial 2071 are reshaping expectations around how cities, infrastructure, and real estate assets are planned, delivered, and operated.

The GCC hosts some of the most ambitious development programs globally, including smart cities, transport corridors, energy infrastructure, and mixed-use destinations. While these projects are delivered at unprecedented scale, organizations often struggle to translate digital ambitions into operational outcomes. Data generated across planning, design, construction, and operations remains fragmented, reducing visibility into asset performance and limiting the ability to anticipate risks or optimize long-term outcomes.

Asset owners today face increasing pressure to deliver projects faster, operate assets more efficiently, and ensure long-term value while meeting higher standards for sustainability, resilience, and transparency. Some of the key challenges include managing highly complex, long life-cycle assets, with fragmented data across planning, design, construction and operations. There is also the aspect of significant capital investment which increases pressure to demonstrate ROI, cost efficiency and performance transparency.

Siloed organizational models separating delivery (AEC), operations and commercial functions are hindering progress. Other challenges include limited real-time visibility into asset performance, utilization and future scenarios, as well as growing regulatory and policy expectations, particularly around sustainability and urban performance.

In such scenarios, digital twins are no longer viewed as optional technologies but as foundational capabilities that support better governance and performance across complex built environments. The value of digital twins in the built environment lies not in technology adoption alone. The real shift in the region is from tool-driven implementations to outcome-driven strategies: approaches that connect physical assets and environments, digital representations including BIM, GIS and simulations, and operational, commercial and contextual data.

In practice, AEC activities provide a foundational data and delivery layer, enabling smart city outcomes by connecting assets at scale. Business value is thus realized through better-informed decisions, not technology deployment alone.

Without this shift, many initiatives remain limited to design visualization or isolated use cases, delivering limited business impact. When executed correctly, a digital twin enables organizations to understand, predict and optimize asset behaviour across development, construction, operation and long term portfolio planning.

When aligned with business objectives, the impact is tangible in the built environment. During planning and development, digital twins support scenario analysis for master planning, phasing, and capital allocation, reducing downstream risk. During construction and delivery, they improve coordination, progress tracking, and stakeholder collaboration. In operations and asset performance, predictive insights help reduce costs, improve safety and reliability, and extend asset life, while also supporting commercial activities such as leasing, space optimization, and tenant engagement.

At a portfolio level, digital twins provide decision-makers with visibility across assets, enabling benchmarking, redevelopment planning, and alignment with sustainability and policy objectives. For organizations operating in the GCC ecosystem, this capability is increasingly essential to meeting regulatory expectations and national development goals.

Breaking it down, the outcomes the built environment can experience from a business-driven digital twin strategy include clear prioritization of investments, faster time-to-value with reduced implementation risk, and improved decision making across the organization. A digital twin strategy helps create stronger alignment between business, technology, and operations, while laying a scalable foundation to support long-term smart city and asset strategies.

As the region continues to invest heavily in complex development programs, organizations that adopt a structured, business-first digital twin strategy will be better positioned to deliver resilient, high-performing assets and sustain long-term value. In the GCC, digital twins are no longer about visualizing assets—they are about governing them more intelligently.

MODERNIZING SOCS FOR RESILIENCE AND RELEVANCE

The clock is ticking for security operations centers in the Middle East, with relevance and resilience hanging in the balance, writes Ahmad Alshaer, Security Leader, Middle East & Africa, DXC Technology

A decade ago, dedicated Security Operations Centers (SOCs) were a rarity across the Middle East. Few organizations had the in-house capability or budget to monitor and respond to cyber incidents around the clock. That has changed dramatically. Rapid digital transformation, the growth of cloud-first strategies, and an expanding regional regulatory landscape have all accelerated cybersecurity maturity. Today, leading enterprises in sectors such as banking, energy, and government see SOCs as the nerve centers of their cyber defenses.

According to Gartner, end-user spending on information security in the Middle East and North Africa (MENA) is projected to reach US$4 billion in 2026. This reflects continued investment in resilience as the threat landscape grows more complex. Yet, for a paradigm that gained momentum in the region only recently, many business leaders may be surprised to learn that their SOCs are already falling behind. Facing constantly evolving threats, SOCs must now adapt quickly – or risk becoming outdated.

SOCs are under pressure

Organizations face rising pressure to modernize SOCs while managing shrinking budgets, a shortage of skilled talent, and a growing attack surface. Traditional models, built on siloed tools and reactive monitoring, struggle to keep up with increasingly advanced adversaries.

At the same time, SOC teams are overwhelmed by the sheer volume of low-value alerts, making it harder to spot genuine threats. With many breaches still detected by outsiders rather than the SOC itself, confidence in traditional approaches is eroding. These challenges create the perception that security programs are a necessary cost of doing business rather than a driver of resilience and trust.

To remain effective, SOCs must evolve from reactive operations to strategic enablers of business resilience, fully integrated with risk management and organizational priorities.

Evolving SOCs for today’s enterprise

For a business to succeed, data must be treated as a competitive differentiator. For SOCs, this means using analytics not just to report on past events, but to generate real-time insights that help organizations anticipate risks, respond faster and make smarter decisions.

When security data is turned into intelligence, it becomes an enabler of business resilience and informed decision-making for leaders, shareholders, and customers.

To do this, the SOC must shift to support an organization’s broader business objectives. It must be fully integrated with the enterprise risk management function and broader security operations. This ensures that threats, vulnerabilities and security insights are connected to business priorities, giving leaders a clear view of the organization’s overall risk posture.

Six strategies for the modern SOC

In the Middle East, cybersecurity budgets are rising but under closer scrutiny. Gartner’s forecasted 10% increase will be driven largely by government modernization and critical infrastructure protection initiatives. This means organizations must strike a careful balance, investing in modern SOC capabilities while optimizing for cost, automation, and measurable outcomes. Modernizing SOCs should be a gradual, evolutionary process. These six strategies provide a practical foundation for aligning security operations with business needs.

Align fragmented systems

SOCs often rely on disconnected tools and processes, which makes it hard to see the full security picture. By integrating systems and automating governance, risk and compliance processes, SOCs gain enterprise-wide visibility, prioritize threats more effectively, and generate metrics that connect security to business outcomes.

Many global organizations follow guidance from the National Institute of Standards and Technology (NIST), which emphasizes information classification, continuous monitoring, and system authorization. In the Middle East, organizations are increasingly aligning with frameworks such as the UAE Information Assurance Standards (IAS), Saudi Arabia’s National Cybersecurity Authority Essential Cybersecurity Controls (NCA ECC) framework, and Qatar’s National Information Assurance Policy (NIAP), all of which set clear expectations for monitoring, response, and reporting.

Gain situational awareness

You can’t protect what you don’t understand. SOCs need a current map of assets, data, processes and external partner networks. Combined with tailored threat intelligence, this context helps organizations prioritize defenses and address vulnerabilities.

True situational awareness also considers sociotechnical factors such as culture and regulations, which must be continuously refreshed as the organization and its threat landscape evolve.

Fuel intelligence-driven operations

Traditional threat indicators often leave organizations reacting to alerts without understanding attacker intent or the business impact. By combining technical data with business, regional, and industry insights, SOCs can transform information into actionable intelligence. Sharing that intelligence internally and externally improves defenses and helps shorten the life cycle of new attack techniques.

Indicators of compromise alone provide limited insight. SOCs must combine technical data with business, regional, and industry context to guide smarter investigations and responses. This means enriching alerts, triaging with automation, and curating use cases for IOC analysis and deployment.

Extending this intelligence outward, by sharing curated insights with trusted partners, further strengthens defenses and reduces attacker advantages.

Predict risks and proactively defend

Using intelligence to anticipate threats shifts the SOC from reactive defenses to active defenses. This means identifying adversaries, modeling potential risks and continuously testing systems to stay ahead of attacks.

Globally, many organizations follow NIST guidance as a framework for this proactive approach. In the Middle East, SOC teams can also draw on intelligence shared by regional authorities such as the UAE Cybersecurity Council, Saudi Arabia’s NCA, and the Organisation of Islamic Cooperation’s Computer Emergency Response Team (OIC-CERT), which regularly publish advisories and threat feeds relevant to local industries. Integrating these insights helps organizations stay ahead of region-specific attack campaigns and improve early warning capabilities.

Use human and AI synergies

AI should augment, not replace, human analysts. Automating routine detection and triage frees up SOC staff to focus on higher-value work. Modern SOCs can now operate at machine speed by using agentic AI technology that can detect, investigate and respond faster than traditional processes, eliminating bottlenecks caused by manual alert handling. These AI agents operate 24/7 without fatigue, continuously learning the environment, while humans focus on more complex investigations.

Deliver a future-proof architecture

Modern SOCs must be designed to evolve with the business. This means adopting modular, open architectures that integrate across IT, OT, IoT and cloud environments. Automation and orchestration standardize routine processes, while reusable playbooks ensure consistent, repeatable responses.

Following open standards and a hybrid operating model allows organizations to scale, adapt to new threats and incorporate future technologies without rebuilding the foundation. By building the SOC with flexibility, integration, and forward-looking principles, organizations ensure it remains resilient and relevant as both business and threat landscapes evolve.

The big picture

The SOC has always been at the heart of cyber defenses. But with expanding attack surfaces, tighter budgets and rising expectations, it can no longer operate as a reactive, technical silo. To stay relevant, it must become a proactive, intelligence-driven function that supports enterprise resilience and aligns with business priorities.

MANAGING AI AGENTS AT SCALE

As organisations accelerate the adoption of AI agents, governance and accountability frameworks are struggling to keep pace. Mortada Ayad, VP – META at Delinea, examines why enterprises are managing AI agents with far less discipline than human employees, and the risks this creates.

In the Middle East, organisations are scaling AI far faster than they are expanding their human workforce. PwC estimates that AI could contribute more than US$320 billion to regional economies by 2030, and much of that value is expected to come not from pilots, but from AI agents embedded directly into day-to-day operations. By contrast, human hiring across many sectors remains tightly controlled, heavily regulated, and deliberate.

That imbalance matters. When a new employee joins your organisation, a lot happens before they ever open their laptop. Contracts are signed. A manager is assigned. Access is approved. Roles are defined. Someone, somewhere, is accountable for what that person does and what they can see.

Now imagine hiring hundreds of new workers overnight. No contracts. No managers. No clear record of what they can access or what decisions they’re allowed to make, yet they’re operating inside your systems, moving data, and acting on your organisation’s behalf. That, in effect, is how many organisations are deploying AI agents today.

Across enterprises, AI has moved well beyond experimentation. Autonomous and semi-autonomous agents are being embedded into everyday workflows, helping draft documents, analyse data, trigger actions and, increasingly, make decisions. They’re often described as digital assistants, but in practice they behave more like an army of digital interns. They’re fast and capable, but still learning the boundaries of the organisation they’ve just joined. For the region, the agentic AI challenge, therefore, isn’t adoption. It’s management.

Amplified Risk

In most organisations, and especially those that operate in highly regulated, reputation-sensitive markets, the cost of an untraceable decision is magnified. When something goes wrong, boards aren’t asking whether AI was innovative. They’re asking who approved it, who owns it, and who is accountable. This is where a familiar governance gap quietly opens.

When AI stops behaving like software

Traditional identity and access models were built for two types of actors: humans and predictable machines. Humans log in, follow roles and report to managers. Machines run repetitive tasks with tightly scoped access. Agentic AI fits in neither category.

Some AI agents act on behalf of employees, using delegated access to draft emails, pull reports or interact with applications. Others operate independently with their own credentials, autonomously accessing systems and data to complete tasks. From a business perspective, both can trigger real outcomes, yet neither fits neatly into existing governance structures.

When access is shared, inherited or unclear, visibility disappears. And when visibility disappears, accountability soon follows.

What HR gets right that IT needs to borrow

HR would never allow anonymous employees. Every hire has a clearly defined lifecycle that covers when they join, move, and eventually leave. Along the way, their access is reviewed, their performance is monitored, and their role evolves, all under clear managerial oversight.

AI agents deserve the same discipline. Before an AI agent is allowed to operate, organisations need to know it exists. That sounds obvious, yet many enterprises are already running multiple agents and large language models without a clear inventory (an issue increasingly compounded by shadow AI). Discovery is critical: the digital equivalent of knowing who is on your payroll.

Onboarding comes next. Just as HR assigns an employee number, job title and manager, AI agents need unique identities, clearly defined ownership and explicit permissions. Without that, every action becomes harder to trace, explain or defend.

Roles matter, too. HR doesn’t give interns unrestricted access to sensitive systems, and neither should IT. AI agents should be granted only the privileges they need for the tasks they perform, and nothing more. That access should be reviewed regularly, based on what the agent actually uses, not what it might need “just in case”.

Finally, there’s offboarding. When an employee leaves, access is revoked. When an AI agent is retired, paused or abandoned, the same should happen. Otherwise, organisations are left with orphaned identities that remain active, powerful, and worryingly, forgotten. The uncomfortable truth is this: many organisations govern AI with far less rigour than they apply to junior staff.
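The discovery, onboarding, access review, and offboarding steps above can be sketched as a minimal agent identity registry. Every name and field here is illustrative, not a reference to any particular product; the point is simply that the HR-style lifecycle translates directly into code:

```python
# A minimal sketch of an AI-agent identity record mirroring the HR lifecycle:
# unique identity, accountable owner, explicit permissions, least-privilege
# review, and clean offboarding. All names and fields are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    agent_id: str                   # the "employee number" of the agent
    owner: str                      # accountable human, like a line manager
    permissions: set = field(default_factory=set)
    used_permissions: set = field(default_factory=set)
    active: bool = True

    def review_access(self):
        """Least-privilege review: keep only what the agent actually used."""
        self.permissions &= self.used_permissions

    def offboard(self):
        """Retire the agent: revoke everything so no orphaned identity remains."""
        self.permissions.clear()
        self.active = False

agent = AgentIdentity("invoice-bot-01", owner="finance-ops",
                      permissions={"read_invoices", "write_ledger", "read_hr"})
agent.used_permissions = {"read_invoices", "write_ledger"}
agent.review_access()
assert agent.permissions == {"read_invoices", "write_ledger"}  # unused "read_hr" revoked
agent.offboard()
assert not agent.active and not agent.permissions
```

Note that the review is driven by observed usage, not by what the agent "might need", which is exactly the discipline argued for above.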

The hidden risk of “AI acting as employees”

For employees, AI assistants are quickly becoming indispensable. They draft, summarise, analyse and automate at a pace no human can match. But when assistants act indistinguishably from their users, oversight becomes almost impossible. If an AI assistant accesses sensitive data on behalf of an employee, who did it — the employee or the agent? If it triggers a transaction or modifies a record, who is responsible?

IT has faced this problem before. Administrative accounts were separated from personal ones for clarity. By isolating identities, organisations reduced risk, improved traceability and limited exposure. The same logic applies to AI. When assistants and autonomous agents have their own clearly defined identities, organisations can distinguish human intent from machine execution and apply appropriate controls to both.

Identity is how organisations stay human at scale

Most leaders would never allow a new employee, no matter how capable, to wander the office unsupervised, rummaging through filing cabinets, sitting in on board meetings and signing documents. Yet that is effectively what happens when AI agents are deployed without identity, ownership and boundaries.

AI agents are joining the workforce whether organisations are ready or not. They already operate like interns, analysts and administrators. What’s missing is not intelligence, but supervision. To turn this into a true advantage, while mitigating the inherent risk, organisations must manage these AI workers like people, assigning them clear identities, defined roles and visible accountability.

"Most leaders would never allow a new employee, no matter how capable, to wander the office unsupervised, rummaging through filing cabinets, sitting in on board meetings and signing documents. Yet that is effectively what happens when AI agents are deployed without identity, ownership and boundaries."

RETHINKING CLOUD MANAGEMENT IN MEA

As hybrid and multi-cloud environments become the norm, many organisations are discovering that the real challenge lies not in infrastructure, but in how it is managed. Raif Abou Diab, General Manager – South Gulf & Sub-Saharan Africa at Nutanix, examines why cloud operations are struggling to keep pace and what organisations in the MEA region must rethink to regain control.

Most organisations assume they have a cloud problem. In reality, the challenge is far more specific and far more persistent: cloud management. Infrastructure did not suddenly become unmanageable because enterprises embraced hybrid or multi-cloud strategies. The real issue emerged when environments evolved faster than the tools responsible for operating them. Over time, platforms multiplied, workloads dispersed across locations and accountability became fragmented. The long-promised “single pane of glass” quietly turned into a collection of disconnected views.

What stands out is that control was not lost the moment organisations adopted hybrid or multi-cloud models. Control was lost when cloud management approaches stalled at partial, platform-specific visibility, despite the environment becoming increasingly interconnected.

Seeing Everything Still Isn’t the Same as Being in Control

Most cloud management tools provide extensive visibility—dashboards, alerts, metrics and visualisations that explain what is happening across environments. That information is valuable, but only to a point.

Operations teams quickly learn that visibility without context creates noise. Alerts arrive from multiple platforms, each offering a fragment of the truth. The real challenge is rarely identifying what is happening, but understanding what action to take and how quickly it needs to happen.

This is where the concept of a “single pane of glass” often breaks down. In practice, it frequently means a unified view limited to a single vendor’s environment. As soon as workloads extend beyond that boundary or span locations, teams revert to switching between tools to piece together the full picture.

A cloud management platform should do more than observe. It should respond when pressure builds, take action when thresholds are reached and give operations teams the time they need to resolve issues properly.

Why Day-to-Day Operations Break Down Under Pressure

Infrastructure teams do not usually struggle with design—they struggle with time. They are expected to maintain stability, manage growth, control costs, enable new digital initiatives and respond to incidents, often all at once. In the Middle East and Africa, this pressure is amplified by rapid digital transformation, regional expansion and the introduction of data-intensive workloads, including early AI initiatives, into already busy environments.

When issues occur, they tend to surface at the worst possible moment—during critical business processes that cannot be paused.

Payroll is a straightforward example. It runs periodically, consumes significant resources in a short window and must complete successfully. If it fails, the impact is immediate and highly visible across the organisation. In those moments, teams are not interested in architectural diagrams or long-term capacity trends. They need the platform to adapt, absorb the spike and give them breathing space to address the root cause.

When a management platform can automatically respond as resources come under pressure, the situation changes dramatically. The issue still exists, but the urgency and panic disappear—and that alone has a tangible operational impact.
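The automatic response described above amounts to a simple control loop. The sketch below is illustrative only; the thresholds, step sizes, and ceiling are hypothetical values, and a real platform would add cooldowns, scale-down logic, and approval policies:

```python
# Illustrative threshold-based auto-response: when utilization crosses a
# limit, add capacity to absorb the spike and buy the team time to address
# the root cause. All numbers here are hypothetical.

def respond_to_pressure(cpu_util, allocated_vcpus, threshold=0.85, step=2, ceiling=32):
    """Return the new vCPU allocation after one evaluation cycle."""
    if cpu_util >= threshold and allocated_vcpus < ceiling:
        # Scale up in small steps, never past the governance ceiling.
        return min(allocated_vcpus + step, ceiling)
    return allocated_vcpus

# A payroll run pushes utilization to 95%: the platform scales up on its
# own instead of paging an engineer mid-run.
assert respond_to_pressure(0.95, 8) == 10
assert respond_to_pressure(0.40, 8) == 8    # no action under normal load
assert respond_to_pressure(0.95, 32) == 32  # ceiling respected
```

The ceiling matters as much as the step: automatic response without a hard limit would simply convert an operational problem into a cost problem.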

Building Guardrails That Scale with the Environment

Governance is another area that is often misunderstood. It is still framed as restriction or approval, when effective governance is really about consistency. As cloud estates become more distributed, relying on manual enforcement becomes unsustainable. Rules drift, exceptions accumulate and outcomes vary depending on who is involved and how stretched teams are at that moment.

Embedding governance into the management layer removes that variability. It ensures workloads are deployed, scaled and managed in line with agreed standards wherever they run, while enabling safe self-service without sacrificing control.

This becomes even more important as automation is introduced. Automation is rarely immediate or effortless. It requires upfront design, scripting and planning, which can feel challenging when teams are already under pressure. What is often overlooked is that automation is not a recurring burden. Once implemented, it delivers returns quietly over time. Most organisations already automate informally through scripts and scheduled tasks. Formalising those efforts is less about ambition and more about ensuring actions are consistent, auditable and secure.
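Embedding governance into the management layer, as described above, is often implemented as policy-as-code: every deployment request is checked against the same rules wherever it runs. The sketch below is a toy illustration; the rule names, fields, and region list are invented for the example:

```python
# A sketch of governance as code: deployment requests are evaluated against
# a fixed rule set, so outcomes no longer vary with who is involved or how
# stretched the team is. Rules, fields, and regions are hypothetical.

POLICIES = [
    ("encryption required", lambda w: w.get("encrypted", False)),
    ("owner tag required", lambda w: bool(w.get("owner"))),
    ("approved regions only", lambda w: w.get("region") in {"me-central-1", "af-south-1"}),
]

def check_workload(workload):
    """Return the list of violated policy names (empty means compliant)."""
    return [name for name, rule in POLICIES if not rule(workload)]

request = {"name": "hr-app", "encrypted": True, "region": "eu-west-1"}
assert check_workload(request) == ["owner tag required", "approved regions only"]

compliant = {"name": "hr-app", "encrypted": True,
             "owner": "platform-team", "region": "me-central-1"}
assert check_workload(compliant) == []
```

Because the check is a pure function of the request, the same rules can gate self-service portals, CI pipelines, and manual changes alike, which is what removes the variability.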

From Cost Visibility to Confident Decision-Making

Cost visibility has also evolved rapidly. As FinOps practices mature, organisations have become far more aware of consumption, but awareness is sometimes mistaken for restriction. In reality, understanding cost is about enabling better decisions, not halting progress.

The analogy is familiar. Reviewing a bank statement does not mean stopping spending altogether—it provides clarity on where money goes. That awareness supports better trade-offs. The same principle applies to infrastructure. When teams understand what workloads consume and what that consumption costs the business, discussions become more constructive. Overprovisioning can be challenged, resources can be right-sized with confidence and growth can be planned deliberately rather than guessed.
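A right-sizing check in this spirit can be expressed very simply. The workload data, field names, and headroom factor below are hypothetical, included only to show how cost awareness turns into a concrete, challengeable list rather than a vague concern:

```python
# Hypothetical right-sizing check in the spirit of the bank-statement
# analogy: flag workloads provisioned well beyond their observed peak,
# so overprovisioning can be challenged with data.

def rightsizing_candidates(workloads, headroom=1.3):
    """Return names of workloads provisioned at more than `headroom` x peak usage."""
    return [w["name"] for w in workloads
            if w["provisioned_gb"] > w["peak_used_gb"] * headroom]

workloads = [
    {"name": "erp-prod", "provisioned_gb": 64, "peak_used_gb": 58},    # healthy
    {"name": "reporting", "provisioned_gb": 128, "peak_used_gb": 40},  # 3x over peak
]
assert rightsizing_candidates(workloads) == ["reporting"]
```

The headroom factor encodes the trade-off directly: raising it tolerates more slack, lowering it tightens spend, and either way the discussion happens over numbers rather than instinct.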

Why Hybrid Isn’t a Phase, but the Default State

One of the clearest lessons from organisations managing complex estates successfully is that hybrid is no longer a transitional phase—it is the operating model. Early public cloud enthusiasm has given way to more balanced considerations around cost predictability, data sovereignty and resilience, while on-premises environments continue to evolve rather than disappear.

Success in this model does not come from forcing everything into a single environment. It comes from managing different environments consistently. Platforms that treat each location as a separate problem tend to introduce friction. Those that recognise them as variations of the same operational challenge reduce it.

This is where cloud management must refocus: away from labels and architectural debates, and toward outcomes such as stability, predictability and the ability to respond calmly when things do not go to plan.

Putting Cloud Management Back Where It Belongs

Cloud management lost direction when it became more focused on describing environments than running them. Organisations that succeed use platforms that fade into the background—quietly enforcing governance, supporting automation and enabling better decisions under pressure.

What I consistently hear from customers across the MEA region is that while technology matters, outcomes matter more. When organisations focus on the benefits derived from management capabilities—and invest the time to implement them properly—they regain control without slowing the business. As infrastructure continues to become more distributed, that balance is what ultimately defines effective cloud management.

"Cloud management lost direction when it became more focused on describing environments than running them. Organisations that succeed use platforms that fade into the background—quietly enforcing governance, supporting automation and enabling better decisions under pressure."

PHILIPS EVNIA 200HZ SPEED GAMING MONITOR

MMD Singapore, the manufacturer of Philips displays, announced the regional launch of its latest competitive gaming monitors, the 24M2N3200FQ and 27M2N3200FQ, designed to deliver championship-level performance and immersive visuals to the passionate gaming community across the Middle East. These 24-inch and 27-inch Fast IPS monitors combine blistering 200Hz speed with cutting-edge image clarity technologies, offering gamers the critical edge needed for victory.

The Middle East's gaming scene is renowned for its intensity and competitive spirit. The Philips Evnia gaming monitor meets this demand head-on with its ultra-fast 200Hz refresh rate and a near-instant 0.3ms (Smart MBR) response time, effectively eliminating motion blur and ghosting. This ensures every panning shot in an FPS and every high-speed turn in a racing game is rendered with stunning sharpness, giving players a seamless and lag-free advantage.

Beyond raw speed, the monitor features Stark ShadowBoost, a proprietary technology that illuminates dark scenes in games without overexposing bright areas, ensuring enemies lurking in shadows are clearly visible. The Smart Crosshair feature dynamically changes color based on the background for maximum visibility, enhancing targeting accuracy.

For a truly captivating visual experience, the monitor supports HDR10 content, delivering a wider range of colours, superior contrast, and more lifelike images. Gamers can further personalize their experience through the Evnia Precision Center software, which offers intuitive controls to fine-tune settings for different game genres or create custom profiles.

Designed with players’ well-being in mind, the monitor incorporates LowBlue Mode and Flicker-Free technology to reduce eye strain during marathon gaming sessions. Its sustainable design, featuring a chassis made with 85% post-consumer recycled plastic, aligns with a forward-thinking ethos.

Highlights:

• Fast IPS panel with 200Hz refresh rate and 0.3ms response time delivers smooth, blur-free gameplay.

• Features like Stark ShadowBoost and Smart Crosshair improve clarity in dark scenes and targeting accuracy.

• HDR10 support and advanced image technologies provide richer colours, contrast, and detail.

• Includes LowBlue Mode, Flicker-Free technology, and eco-friendly build using 85% recycled materials.

SEAGATE MOZAIC 4+

Seagate Technology announced that its next-generation Mozaic 4+ platform, the industry’s only heat-assisted magnetic recording (HAMR)–based storage platform deployed at scale, is now qualified and in production with two leading hyperscale cloud providers. Supporting capacities up to 44TB, these qualifications reflect production-scale deployments in hyperscale environments.

With additional customer qualifications under way, Seagate is delivering on its roadmap to scale from today’s 4+TB per disk toward a future 10TB per disk – enabling hard drive capacities of up to 100TB. The platform incorporates a next-generation suspension architecture and an enhanced system-on-a-chip that enables precise recording at higher densities while maintaining enterprise-class reliability. Each platform generation allows continued gains in capacity without requiring disruptive architectural shifts.

With a majority of the world’s largest cloud storage providers already qualified on Seagate’s Mozaic platform, this milestone underscores the platform’s critical role in data center infrastructure.

Seagate’s custom-designed and manufactured laser technology reflects years of investment in nanophotonic engineering of critical components used in HAMR recording. This vertically integrated, in-house innovation strengthens both design and control over yield, reliability and supply chain resilience, all of which are essential as unprecedented growth in data pushes storage demand beyond historical levels. Vertical integration also shortens qualification timelines and supports predictable manufacturing economics.

Highlights:

• The incremental increases in per-disk capacity delivered by Mozaic 4+ enable high-capacity, cost-efficient storage that scales without increasing infrastructure footprint or energy consumption – strengthening the economic foundation of AI at scale.

• The platform advances capacity per rack and per watt, improving data center efficiency, lowering total cost of ownership and enabling organizations to preserve and reactivate data over time, sustainably.

• In a one-exabyte deployment, Mozaic improves infrastructure efficiency by approximately 47 percent compared to standard 30TB deployments, reducing required data center footprint by about 100 square feet and lowering annual energy consumption by roughly 0.8 million kilowatt-hours. At AI scale, these efficiencies compound into meaningful economic advantage.

D-LINK DGS-1250 SERIES GIGABIT SMART MANAGED SWITCHES

D-Link Corporation announced the launch of its DGS-1250 Series Gigabit Smart Managed Switches. Developed to address the growing demand for greater network visibility and control without the cost and complexity of fully managed solutions, the DGS-1250 Series empowers organizations to scale their networks with confidence while maintaining operational simplicity.

Positioned between unmanaged switches and advanced Layer 3 platforms, the DGS-1250 Series delivers a balanced combination of Gigabit performance, enhanced security, and intuitive management. It is designed for businesses seeking to modernize their network infrastructure with smarter capabilities while keeping deployment and ongoing operations efficient and straightforward.

The DGS-1250 Series supports both PoE and non-PoE models, enabling flexible deployment across a wide range of environments. It is well suited for powering wireless access points, IP phones, and IP surveillance systems, helping organizations build secure and high-performance networks without unnecessary complexity or overhead.

To accommodate diverse deployment requirements, the series offers a flexible lineup including the DGS-1250-28X, DGS-1250-28XP, DGS-1250-52X, and DGS-1250-52XP. Each model features four 10G SFP+ fiber uplink ports, supporting high-speed backbone connectivity and long-distance transmission. This scalability makes the DGS-1250 Series an ideal choice for small to medium-sized businesses, branch offices, and campus networks looking to future-proof their infrastructure.

Highlights:

• Protect critical data and network resources with enterprise-grade security features such as 802.1X port-based authentication, Access Control Lists (ACLs), and Storm Control.

• Advanced QoS ensures prioritization of latency-sensitive traffic such as voice and video, while Loopback Detection and IEEE 802.3az Energy Efficient Ethernet help maintain network stability and reduce power consumption.

• An intuitive web-based GUI enables easy configuration of advanced Layer 2 features, including VLANs, Link Aggregation (LACP), QoS, and IGMP Snooping.

• Comprehensive monitoring tools—including port mirroring (SPAN), SNMP, and detailed web-based traffic statistics—provide deep visibility into network performance and health.

• Features such as DHCP Auto Surveillance ensure continuous connectivity for IP cameras and other critical security devices, even during DHCP server disruptions.
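Storm Control, mentioned among the security features, works by capping how many broadcast or multicast frames a port will accept per interval, so a misbehaving device cannot flood the network. A conceptual sketch of the idea (the windowing and threshold here are illustrative, not D-Link's actual implementation):

```python
class StormControl:
    """Conceptual per-port storm control: drop broadcast frames once
    the packet budget for the current one-second window is exhausted.
    Illustrative sketch only -- not D-Link's implementation."""

    def __init__(self, pps_limit):
        self.pps_limit = pps_limit  # allowed packets per second
        self.window = None          # current one-second window
        self.count = 0              # packets seen in this window

    def allow(self, timestamp):
        window = int(timestamp)
        if window != self.window:   # new second: reset the budget
            self.window, self.count = window, 0
        self.count += 1
        return self.count <= self.pps_limit

port = StormControl(pps_limit=3)
results = [port.allow(t) for t in (0.1, 0.2, 0.3, 0.4, 1.1)]
print(results)  # first three pass, the fourth is dropped, new window passes
```

Real switch ASICs enforce this in hardware per port and traffic class; the sketch only shows the thresholding logic.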

MORE COUNTRIES WILL BE LOCKED INTO REGION-SPECIFIC AI PLATFORMS BY 2027

Geopolitical, Regulatory, and Security Pressures Spur Governments to Boost Investment in Independent AI Infrastructure

By 2027, 35% of countries will be locked into region-specific AI platforms built on proprietary contextual data, up from 5% today, according to Gartner, Inc., a business and technology insights company.

“Countries with digital sovereignty goals are increasing investment in domestic AI stacks as they look for alternatives to the closed U.S. model, including computing power, data centers, infrastructure and models aligned with local laws, culture and region,” said Gaurav Gupta, VP Analyst at Gartner. “Trust and cultural fit are emerging as key criteria. Decision makers are prioritizing AI platforms that align with local values, regulatory frameworks, and user expectations over those with the largest training datasets.”

Localized models deliver more contextual value; regional LLMs outperform global models in applications such as education, legal compliance, and public services, especially in non-English languages.

Nations Will Need to Invest 1% of GDP in AI Sovereignty by 2029

With non-Western customers shifting alignment over concerns of excessive Western influence, AI sovereignty will lead to reduced collaboration and duplicated effort. Because of this, Gartner predicts that nations establishing a sovereign AI stack will need to spend at least 1% of their GDP on AI infrastructure by 2029.

AI sovereignty refers to the ability of a nation or organization to independently control how AI is developed, deployed, and used related to its geographical boundaries.

Regulatory pressure, geopolitics, cloud localization, national AI missions, corporate risks and national security concerns are driving governments and corporations to accelerate investments in sovereign AI. A fear of falling behind in the technological AI race will also push nations and companies to innovate rapidly and invest in an attempt to achieve self-sufficiency in all aspects of the AI stack.

“Data centers and AI factory infrastructure form the critical backbone of the AI stack that enables AI sovereignty,” said Gupta. “As a result, data centers and AI factory infrastructure will see explosive build-up and investment going forward, propelling a few companies that control the AI stack to achieve double-digit, trillion-dollar valuations.”

Because of this, CIOs must:

• Design model-agnostic workflows using orchestration layers that enable switching between LLMs across regions and vendors.

• Ensure AI governance, data residency, and model tuning practices can meet country-specific legal, cultural, and linguistic requirements.

• Establish relationships with national cloud providers, regional LLM vendors, and sovereign AI stack leaders in priority markets and build a vetted list of partners.

• Monitor AI legislation, data sovereignty rules, and emerging standards that may affect where and how they can deploy AI models and process users' data.
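The first recommendation above, a model-agnostic orchestration layer, can be sketched as a small registry that routes requests to interchangeable regional backends. A minimal illustration (the backend names, response format, and region keys are all assumptions, not any vendor's API):

```python
from typing import Callable, Dict

# Hypothetical provider callables -- stand-ins for real regional
# LLM clients that would wrap vendor SDKs or HTTP APIs.
def us_model(prompt: str) -> str:
    return f"[us-model] {prompt}"

def eu_sovereign_model(prompt: str) -> str:
    return f"[eu-model] {prompt}"

class ModelRouter:
    """Minimal orchestration layer: route each request to a regional
    LLM backend so workloads can switch vendors without code changes."""

    def __init__(self):
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, region: str, backend: Callable[[str], str]):
        self._backends[region] = backend

    def complete(self, region: str, prompt: str) -> str:
        if region not in self._backends:
            raise KeyError(f"no backend registered for region {region!r}")
        return self._backends[region](prompt)

router = ModelRouter()
router.register("us", us_model)
router.register("eu", eu_sovereign_model)
print(router.complete("eu", "Summarise the data-residency policy."))
```

Keeping the routing decision behind one interface is what lets governance teams swap in a sovereign regional model as residency rules change, without touching application code.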

