
AI April 2026 Edition



AFRICA’S AI MOMENT

The race to scale

INSIDE: BUILD VERSUS BUY | WHY ADAPTABILITY BEATS TALENT | WHERE ROBOTICS MEETS THE REAL ECONOMY | THE NEW ARMS RACE | and more

AI IN ACTION

African banks are scaling AI, tackling legacy systems, workforce readiness and cyber-risk.

SMARTER WATER

AI helps utilities unlock data, predict demand, optimise operations and strengthen water resilience.

16 THE NEW AI ARMS RACE

Chips, data centres and compute power are shaping the future of artificial intelligence globally.

22 BUILT HERE

Control over data, infrastructure and compute will determine Africa’s long-term AI independence.

23 BUILD VERSUS BUY

Owning data and governance while renting tools defines modern enterprise AI strategy.

26 CAN AFRICA RUN AI AT SCALE?

Infrastructure, energy and rising compute costs challenge large-scale AI adoption across Africa.

28 AFRICA’S AI ENERGY CHALLENGE

Power constraints and rising demand raise questions about sustainable AI growth across the continent.

34 MAKING IT PAY

AI success depends on cost control, integration and governance to deliver measurable business value.

35 AI’S EXPENSIVE BACKBONE

Demand for GPUs, networks and data centres is driving unprecedented AI deployment costs.

43 GETTING AI RIGHT

Skills, structure and strategy are key to unlocking sustainable AI value in organisations.

44 WHERE ROBOTICS MEETS THE REAL ECONOMY

Robotics creates value where systems, economics and environments are ready for adoption.

48 AUTOMATING THE BEATING HEART

AI-driven workflows enhance employee experience, improving efficiency across core business operations.

50 WOULD YOU LIKE TO SPEAK?

AI tools support customer service teams, improving efficiency and overall customer engagement.

52 FROM DATA TO DECISIONS

Turning data into measurable outcomes remains uneven, impacting revenue, cost and risk.

56 SCALING ENTERPRISE AI IN AFRICA

Organisations are moving from adoption to execution, focusing on practical frameworks to scale AI successfully.

60 AI AT WORK

AI success depends on data quality, systems integration, skills, regulation and risk tolerance.

64 AI TURNING HEADS

Four industries show how AI delivers competitive advantage, efficiency gains and growth opportunities.

70 WHEN ATTACKERS USE AI

Cybercriminals exploit AI tools, accelerating phishing, deepfakes and increasingly sophisticated attacks.

71 THE BACKBONE BEHIND BOTS

Chips, connectivity and energy systems power AI growth and South Africa’s evolving digital future.

76 WHY ADAPTABILITY BEATS TALENT

Adaptability, collaboration and continuous learning are essential as AI reshapes work and future skills.

Picasso Headline,

A proud division of Arena Holdings (Pty) Ltd
Hill on Empire, 16 Empire Road (cnr Hillside Road), Parktown, Johannesburg, 2193
PO Box 12500, Mill Street, Cape Town, 8010
www.businessmediamags.co.za

EDITORIAL

Editor: Brendon Petersen

Content Manager: Raina Julies rainaj@picasso.co.za

Contributors: Tiana Cline, Trevor Crighton, Clifford de Wit, Lynn Grala, Trevor Kana, Itumeleng Mogaki, Busani Moyo, Semone Peacock, Anthony Sharpe, Rodney Weidemann

Copy Editor: Brenda Bryden

Content Co-ordinator: Natasha Maneveldt

DESIGN

Head of Design: Jayne Macé-Ferguson

Senior Designer: Mfundo Archie Ndzo

Project Designer: Annie Fraser

DIGITAL

Online Editor: Stacey Visser vissers@businessmediamags.co.za

SALES

Project Manager: Tarin-Lee Watts wattst@arena.africa | +27 87 379 7119 | +27 79 504 7729

PRODUCTION

Production Editor: Shamiela Brenner

Advertising Co-ordinator: Shamiela Brenner

Subscriptions and Distribution: Fatima Dramat fatimad@picasso.co.za

Printer: CTP Printers, Cape Town

MANAGEMENT

Management Accountant: Deidre Musha

Business Manager: Lodewyk van der Walt

General Manager, Magazines: Jocelyne Bayer

COPYRIGHT: Picasso Headline.

DEBUNKING THE MYTHS AROUND AI

Every technology moment arrives with a certain amount of theatre.

Artificial intelligence (AI) has had more than most. For the past few years, the conversation has been full of bold forecasts about transformation and disruption. Spend enough time around the technology sector, and it can start to feel as if AI has already remade the world. It hasn’t. At least not yet.


This issue aims to step away from the spectacle and spend some time on the practical side of the story. What actually happens when organisations try to use AI in their day-to-day operations? What does it look like once the demos end and the work begins?

The answers are rarely glamorous.

Deploying AI in the real economy quickly becomes a conversation about infrastructure, power availability and the cost of compute. Questions about where data sits, how systems are governed and who inside the organisation is responsible for them begin to matter more than the model itself. Many companies discover that building a prototype is relatively straightforward. Running it reliably at scale is something else entirely.

Across this issue, we look at the ecosystem that sits behind modern AI. The global race for chips and specialised hardware. The sectors where automation is beginning to produce measurable returns. The operational realities that determine whether ambitious projects survive contact with everyday business environments.

There’s also a question that sits close to home.

Where Africa ultimately sits in the emerging intelligence economy will depend on decisions being made now about infrastructure, skills and localisation. The story of AI here will be shaped less by grand predictions and far more by the practical work of building the systems that make it possible.

Happy reading.

Brendon Petersen

AI IN ACTION

Insights from KPMG’S 2025 GLOBAL CEO OUTLOOK show that African financial institutions are deploying AI at scale, tackling legacy systems, workforce readiness and cyber-risk in 2026

African financial services are entering 2026 at a turning point. CEOs across banking, capital markets and insurance are no longer experimenting with technology – AI, cybersecurity and regulatory resilience are shaping real strategies for growth and transformation.

Despite geopolitical uncertainty and economic volatility, leaders are showing a clear appetite to modernise operations, integrate advanced technologies and strengthen risk management. According to KPMG’s 2025 Global CEO Outlook, these shifts are already influencing investment priorities and operational focus across the continent’s financial institutions.

Here are the key insights from the report:

• AI is no longer a pilot project, but a strategic lever.

• Legacy systems, workforce-readiness and cyber-risk remain major barriers.

• Investment is increasingly focused on measurable transformation rather than experimentation.

INSURANCE: TECHNOLOGY AND SUSTAINABILITY DRIVE CONFIDENCE

Insurance CEOs are entering 2026 with growing optimism. Globally, 82 per cent of insurance CEOs are confident in their company’s growth, up from 74 per cent in 2024, reflecting stronger earnings across health, life and specialty lines, including cyber and business interruption coverage.

AI adoption is accelerating across underwriting, onboarding, claims processing and cyberdefence. Globally, 67 per cent of CEOs expect returns from AI investments within 1–3 years, compared to 21 per cent last year, while two-thirds plan to allocate 10–20 per cent of budgets to AI initiatives. These figures underline that insurers are treating AI as a core operational tool rather than a speculative experiment.

Workforce transformation remains a critical challenge. Seventy-seven per cent of global insurance CEOs cite AI workforce-readiness and upskilling as a top constraint, while 83 per cent report that AI is reshaping training and development, and 79 per cent say it is changing the skills required for entry-level roles. For insurers, the ability to equip staff with the right skills is increasingly central to capturing AI’s potential.

Sustainability and environmental, social and governance (ESG) compliance are also shaping strategy. Over half (55 per cent) of global insurance CEOs identify ESG reporting and compliance as their primary ESG priority. In Africa, where regulations often follow European trends, insurers must navigate both global expectations and local frameworks, making ESG a non-negotiable part of strategic planning.

Cybersecurity is another top concern. Eighty-three per cent of insurance CEOs name cybercrime as the biggest barrier to growth, with digital risk resilience emerging as the leading area for risk mitigation investment.

83% of CEOs say the biggest barrier to organisational growth is cybercrime and cyberinsecurity.

Mark Danckwerts, head of insurance, KPMG One Africa, said: “Insurance leaders across Africa are navigating a complex operating environment, but they are doing so from a position of growing confidence. AI presents enormous opportunity to improve efficiency, risk assessment and customer engagement. However, sustainable success will depend on responsible adoption, workforce-readiness and strong cyber-resilience. Insurers that balance innovation with trust will be best placed to outperform.”

77% agree that a top constraint on growth is AI workforce-readiness and upskilling.

Inorganic growth is also on the rise, with Africa’s insurance sector showing some of the highest levels of high-impact mergers and acquisitions globally – a trend that highlights insurers’ willingness to combine scale with technology-driven efficiency.

BANKING AND CAPITAL MARKETS: AI AS A STRATEGIC IMPERATIVE

For African banks, AI is no longer a theoretical discussion; it is the backbone of strategic reinvention.

“Technology, in particular AI, presents a huge opportunity, but also a challenge in terms of where to prioritise, how to achieve a measurable return on investment (ROI), and how to ensure responsible and safe adoption to maintain trust,” said Pierre Fourie, KPMG One Africa head of financial services.

“Banks need to modernise legacy IT, cope with rising financial crime risk, made more difficult by sophisticated scams using AI, address new competitive threats from fintechs and nimble cloud-native banks, and comply with complex and changing regulations.”

AI functions both as an enabler and a risk amplifier. It can enhance customer engagement and deepen understanding of client needs, yet banks must avoid depersonalising interactions and maintain the human touch. At the same time, AI strengthens detection of bad actors, while increasing the complexity of the cyberthreat landscape.

Investment in AI is growing rapidly:

• 70 per cent of banking CEOs plan to spend 10–20 per cent of their budgets on AI in the next 12 months.

• 69 per cent expect ROI from AI within 1–3 years, up from 13 per cent last year.

• 78 per cent cite workforce-readiness or upskilling as a potential risk if not addressed.

The top five factors threatening banking prosperity highlight the operational challenges:

• 86 per cent – cybercrime and cyberinsecurity.

• 78 per cent – AI workforce-readiness.

• 77 per cent – integration of AI into business processes.

• 75 per cent – competition for AI talent.

• 75 per cent – cost of technology infrastructure.

Fourie added: “For African banks, AI is not a theoretical discussion; it is a strategic imperative. The ability to integrate AI into core processes, manage cyber-risk and build the right talent base will determine competitive advantage. At the same time, banks must modernise legacy systems and manage infrastructure costs, all while protecting trust in an increasingly digital ecosystem.”

Strategic M&A (mergers and acquisitions) continues to be a growth lever. With 25 per cent of banking CEOs citing “strategic differentiation” as the primary driver of AI adoption, investment is increasingly linked to long-term competitive positioning rather than efficiency, reinforcing that AI strategy is now inseparable from business strategy.

A PAN-AFRICAN MOMENT FOR TRANSFORMATION

Across insurance and banking, a clear picture emerges: AI is moving from experimentation to industrial-scale deployment. Confidence is backed by disciplined transformation – measured investment in AI, prioritisation of cybersecurity, attention to ESG compliance and strategic M&A.

56% say ethical challenges are the biggest obstacle to AI implementation.

For African financial institutions, the opportunity lies in balancing innovation with resilience, and growth with governance. Those that succeed will integrate AI into core operations, modernise infrastructure, equip the workforce and manage risk – demonstrating that AI is no longer a buzzword, but a critical tool for shaping the future of finance on the continent.

Mark Danckwerts
Pierre Fourie

SMARTER WATER

Insights and analysis from XYLEM SOUTH AFRICA’S WATER TECHNOLOGY TRENDS 2025 REPORT reveal how AI is helping water utilities unlock data, predict demand, optimise operations and boost resilience

Water utilities are under growing pressure. Ageing infrastructure, climate variability and rising demand mean that managers need smarter, more adaptive ways to operate. While digital monitoring and analytics have already improved efficiency, artificial intelligence (AI) is now taking water management to the next level. By identifying patterns in large datasets, AI enables predictive insights and supports better decision-making.

Adoption is on the rise. Around 15 per cent of large water utilities worldwide currently use AI, a figure expected to reach 30 per cent by 2026 and 75 per cent by 2035, according to Xylem South Africa’s Water Technology Trends 2025 Report. This growth reflects the sector’s recognition that AI can turn data into actionable insights.

UNLOCKING THE POTENTIAL OF UTILITY DATA

Experts see huge potential in AI-enabled water management. Digital systems are already delivering measurable results. For example, Yorkshire Water Services in the United Kingdom, using Xylem Vue digital services, reduced visible leaks by 57 per cent while cutting annual distribution main repairs by 30 per cent.

Similar capabilities are expanding into industrial water and wastewater operations. Predictive monitoring and process optimisation help improve compliance, reliability and resource efficiency, showing the hidden capacity at every water management site.

HOW AI IS APPLIED IN WATER SYSTEMS

Water utilities are using AI and smart data in several ways:

1. Real-time process adjustment

Water treatment systems must maintain consistency even as flows change constantly. AI allows operators to define scenarios that automatically adjust operations, such as reagent dosing and treatment line control, using data from water management applications and business intelligence systems.

2. Predictive demand and optimisation

Predictive maintenance systems use AI-driven models and performance data, often integrated with digital twins, to anticipate equipment needs. AI also helps water managers forecast demand and optimise energy consumption by adjusting operations according to predicted peaks.

3. Advanced metering infrastructure

Smart meters have improved distribution efficiency, but advanced metering infrastructure (AMI) takes this further. AMI performs remote readings and integrates information into AI systems, providing near real-time monitoring and feedback for operations.

4. Decision support systems

Decision support systems use AI to analyse large datasets from hydrological and meteorological stations, expert knowledge and local inputs. These tools support planning and management at real-time, medium- and long-term levels, modelling different scenarios such as water body behaviour and consumption patterns.

OVERCOMING DEPLOYMENT CHALLENGES

Despite clear benefits, deploying AI in water management is not always straightforward. Success depends on data quality, integration with existing infrastructure and organisational readiness. Deployment can become complex, which is why leading water technology companies develop and maintain extensive software platforms designed specifically for utility challenges.

Chetan Mistry, strategy and marketing manager at Xylem South Africa, WSS, explains: “Water distribution and treatment sites produce far more data than they use. But that data gets neglected because of capacity. It would take an enormous amount of time to organise and study the data for patterns and insights. Digital and AI systems are solving those problems. Digital systems record and share accurate and reliable data, which AI systems use to rapidly produce planning information, automation and other improvements.”

“Companies like Xylem invest substantially in developing water management platforms that are secure, simple to deploy and ensure the data remain with the utility,” says Mistry. “They create interactive and customisable dashboards and reports, which authorised staff and contractors can access on-site through smart devices and computers.”

The real advantage lies not just in new features, but in making existing data useful.

“Data that does nothing only takes up space. But data made useful through cloud-based management software opens additional dimensions for planning and predictive actions such as maintenance.”
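The pattern-spotting utilities rely on here – comparing each new meter reading against recent history and flagging what doesn’t fit – comes from far richer models in practice, but the core idea can be sketched in a few lines. Everything below (the readings, the window size, the threshold) is invented purely for illustration:

```python
# Illustrative sketch only: flag a suspicious flow reading (a possible
# burst or leak) by comparing it with the trailing-window statistics.
# Real utility platforms use far more sophisticated models.
from statistics import mean, stdev

# Synthetic hourly flow readings (kilolitres) from one district meter
readings = [52, 48, 50, 55, 49, 51, 53, 47, 50, 54, 96, 51]

window = 6  # hours of history used as the baseline

def flag_anomalies(series, window, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations above the trailing-window mean."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma and (series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

print(flag_anomalies(readings, window))  # [10] - the 96 kL spike
```

The same trailing-window statistics, run the other way, give a crude demand forecast for the next hour – which is why the report treats monitoring, forecasting and optimisation as one data problem rather than three.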

AI’S MISSING LINK: FOCUS

While most organisations are investing in AI, only a few are experiencing immediate transformational impact. The challenge isn’t the technology itself – it’s knowing the right processes and focus areas, writes TERTIUS ZITZKE, Group CEO of 4Sight Holdings

For more than a century, the Pareto Principle, commonly known as the 80/20 rule, has held true across industries: roughly 80 per cent of outcomes are driven by 20 per cent of activities. In the era of artificial intelligence, this principle is more relevant than ever. The golden rule of growth will always be 20 per cent effort to keep and maintain a customer and 80 per cent effort to gain a new one.

AI does not create value by automating everything. It creates value by amplifying the critical few decisions, actions and processes that matter most. This insight sits at the heart of how forward-looking organisations are now approaching AI-driven business automation.
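The 80/20 claim is easy to test against any customer ledger: sort customers by revenue and see what share the top fifth contributes. A toy calculation, with per-customer figures invented for illustration:

```python
# Toy illustration of 80/20 (Pareto) revenue concentration.
# The per-customer revenue figures are invented for the example.
revenues = [820, 640, 95, 70, 60, 55, 40, 35, 30, 25]

top_n = max(1, len(revenues) // 5)            # the "vital" 20 per cent
top = sorted(revenues, reverse=True)[:top_n]  # biggest customers first
share = sum(top) / sum(revenues)

print(f"Top 20% of customers -> {share:.0%} of revenue")
```

Here two of ten customers carry roughly 78 per cent of revenue – close to the classic Pareto shape, and exactly the kind of concentration that tells a leadership team where applied intelligence will pay off first.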

AI HAS A FOCUS PROBLEM – THE 80/20 RULE EXPLAINS WHY

Artificial intelligence is no longer a novelty in business. It is also no longer scarce. What is scarce is clarity.

Most organisations today are experimenting with AI across HR, sales, marketing, operations, finance and innovation, yet many leaders quietly admit that the results feel incremental rather than transformational. The issue is not the capability of AI. The issue is how leaders decide where intelligence should be applied. This is where an old idea becomes newly relevant.

FROM “AI EVERYWHERE” TO “AI WHERE IT MATTERS”

Many early AI initiatives failed because they focused on breadth instead of leverage:

• Automating low-impact tasks.

• Deploying tools without strategic alignment.

• Chasing innovation without measurable business outcomes.

The next phase of AI adoption is different. Leading organisations are using AI to identify the vital 20 per cent inside each business function, and then systematically applying intelligence to those areas first.

At 4Sight, this approach is structured through the Seven Stages of AI for Business, a maturity model that moves organisations from basic automation to autonomous, self-reinforcing intelligence.

APPLYING THE 80/20 PRINCIPLE ACROSS CORE BUSINESS FUNCTIONS

IN PARTNERSHIP WITH 4SIGHT HOLDINGS

Let’s look at applying the 80/20 principle to the five pillars of business transformation: people, growth, operations, finance and innovation.


1. Human resources: from administration to talent advantage

In people/HR, the highest value is not created by administration. It is created by hiring quality talent, retaining top performers and developing leadership. AI enables faster, fairer talent screening, predictive insights into attrition risk and data-driven workforce planning. The result is an HR function that moves beyond process management to measurable talent return on investment.

4Sight Stage | HR focus (20 per cent) | Automation/intelligence impact
1–2 | Payroll, leave, compliance | Removes low-value admin work
3 | Talent screening, CV matching | Improves recruiter throughput
4 | Performance insights | Better promotion and reward decisions
5 | Attrition prediction | Retains top 2 per cent performers
6 | Autonomous workforce planning | Dynamic role and skills allocation
7 | Org design innovation | Continuous talent model evolution

Outcome: HR shifts from a cost centre to a talent return on investment engine.

2. Growth: sales: focusing on the revenue that actually matters

In most organisations, a small portion of customers generate most of the revenue, and a minority of opportunities deliver the majority of margin. AI allows sales teams to prioritise high-probability opportunities, predict churn before it happens and guide sales actions in real-time. This shifts sales from activity volume to revenue precision.

4Sight Stage | Sales focus (20 per cent) | Automation/intelligence impact
1–2 | CRM updates, reporting | Frees selling time
3 | Lead scoring | Focus on high-conversion
4 | Next-best-action guidance | Improves win rates
5 | Deal and churn prediction | Protects revenue concentration
6 | Autonomous pipeline management | Self-optimising sales execution
7 | New revenue model creation | AI-enabled offerings and pricing

Outcome: sales effort concentrates where the best margin exists.

Tertius Zitzke
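The lead-scoring stage in the sales table can be pictured with a minimal, hypothetical sketch: rank open opportunities by estimated win probability so sellers work the vital few first. The feature names, weights and leads below are all invented; a real system would learn its weights from historical CRM outcomes rather than hard-code them.

```python
# Hypothetical lead-scoring sketch: rank opportunities by estimated
# win probability. Features, weights and leads are invented; a real
# model would be trained on historical CRM data.
import math

WEIGHTS = {"engagement": 1.8, "budget_fit": 1.2, "decision_maker": 0.9}
BIAS = -2.0

def win_probability(lead):
    # Weighted sum of features passed through a logistic link
    z = BIAS + sum(WEIGHTS[f] * lead[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

leads = [
    {"name": "A", "engagement": 0.9, "budget_fit": 1.0, "decision_maker": 1},
    {"name": "B", "engagement": 0.3, "budget_fit": 0.5, "decision_maker": 0},
    {"name": "C", "engagement": 0.7, "budget_fit": 0.2, "decision_maker": 1},
]

ranked = sorted(leads, key=win_probability, reverse=True)
for lead in ranked:
    print(lead["name"], round(win_probability(lead), 2))
```

The ranking, not the individual probabilities, is what changes behaviour: sellers stop working the pipeline top to bottom and start working it best to worst.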

Marketing: precision over spend

Marketing has long been governed by the 80/20 rule: only a few channels drive most conversions, and a small number of messages create real engagement. With AI, organisations can identify which audiences convert before campaigns launch, allocate spend dynamically to high-performing channels and continuously optimise content and messaging. The outcome is higher impact with lower waste.

4Sight Stage | Marketing focus (20 per cent) | Automation/intelligence impact
1–2 | Campaign execution | Faster, cheaper delivery
3 | Content optimisation | Higher engagement rates
4 | Attribution intelligence | Budget shifts to winning channels
5 | Predictive segmentation | Anticipates demand
6 | Autonomous spend allocation | Continuous ROI optimisation
7 | Market creation | AI-generated products and narratives

Outcome: less spend, higher conversion density.

3. Operations: eliminating bottlenecks, not just costs

Operational inefficiency rarely comes from everywhere. It comes from a small number of bottlenecks, exceptions and failure points. AI makes it possible to predict disruptions before they occur, automatically rebalance workloads and create self-healing operational processes. Operations evolve from cost efficiency to resilience and scalability.

4Sight Stage | Operations focus (20 per cent) | Automation/intelligence impact
1–2 | RPA for repetitive steps | Cost and error reduction
3 | Process visibility | Root-cause clarity
4 | Decision support | Faster throughput
5 | Failure prediction | Prevents disruption
6 | Autonomous orchestration | Self-healing operations
7 | Operational innovation | New operating models

Outcome: operations move from efficiency to resilience and scale.

4. Finance: from retrospective control to predictive insight – hindsight to 4Sight

Traditional finance looks backwards. High-performing finance functions look forward. AI enables predictive cash-flow forecasting, early detection of financial risk and scenario-based decision modelling. Finance becomes a strategic partner to the business, not just a reporting function.

4Sight Stage | Finance focus (20 per cent) | Automation/intelligence impact
1–2 | Transaction processing | Faster close cycles
3 | Variance analysis | Early anomaly detection
4 | Scenario modelling | Better executive decisions
5 | Forecasting and risk prediction | Capital protection
6 | Autonomous controls | Continuous compliance
7 | Strategic finance innovation | AI-driven business models

Outcome: finance becomes predictive and strategic, not retrospective.

5. Innovation: making breakthrough repeatable

Innovation is often treated as accidental. In reality, most future value comes from a small number of insights and experiments. AI helps organisations detect emerging patterns and trends, prioritise the right ideas early and scale successful innovation faster. This turns innovation from chance into a system.

4Sight Stage | Innovation focus (20 per cent) | Automation/intelligence impact
1–2 | Idea intake automation | Faster experimentation
3 | Pattern detection | Better idea quality
4 | Portfolio decisioning | Smarter bets
5 | Trend prediction | Early-mover advantage
6 | Autonomous experimentation | Rapid scaling
7 | Continuous reinvention | Innovation as a system

THE 4SIGHT SEVEN STAGES OF AI: A PRACTICAL MATURITY PATH

The real power of AI is unlocked progressively:

1. Task automation – removing manual effort.

2. Process automation – streamlining workflows.

3. Assisted intelligence – supporting human decisions.

4. Augmented decision-making – improving judgement quality.

5. Predictive intelligence – anticipating outcomes.

6. Autonomous intelligence – self-managing systems.

7. Innovative intelligence – continuous reinvention.

Each stage builds on the previous one, and each stage increases the organisation’s ability to apply AI to the 20 per cent that drives 80 per cent of results.

A NEW LEADERSHIP IMPERATIVE

THE FUTURE OF BUSINESS AUTOMATION IS NOT ABOUT REPLACING PEOPLE. IT IS ABOUT AMPLIFYING HUMAN IMPACT WHERE IT COUNTS.

AI is no longer an IT conversation; it is a leadership discipline. The organisations that will outperform in the next decade are not those with the most AI tools, but those with the clearest focus on where value is truly created, which decisions matter most, and how intelligence should be applied responsibly. The future of business automation is not about replacing people. It is about amplifying human impact where it counts.

4Sight enables organisations to design, deploy and scale AI-driven business automation through structured maturity models, data intelligence and responsible AI adoption.

AT THE FRONTIER OF HUMAN-CENTRED AI TRANSFORMATION

4SIGHT HOLDINGS enables human-centred AI transformation, uniting data, automation and enterprise systems to deliver intelligent, scalable solutions that enhance performance, decision-making and sustainable business growth.

AI transformation is most powerful when it elevates people, not replaces them. This principle underpins 4Sight’s approach to intelligent enterprise solutions, where data, automation and systems are seamlessly integrated. The result is scalable, outcome-driven transformation that enhances decision-making, performance and long-term business resilience.

4Sight lists on the JSE Main Board.

At 4Sight, AI is not a workforce replacement strategy; it is a people investment strategy. The group’s philosophy is grounded in the belief that sustainable AI transformation is achieved by elevating people, not removing them. By automating repetitive, low-value tasks and embedding AI as an intelligent assistant within everyday workflows, 4Sight enables employees to focus on innovation and decision-making. This people-first approach ensures that AI adoption drives productivity and resilience while strengthening skills, accountability and human expertise across the organisation.

DIFFERENTIATING THROUGH STRATEGY

4Sight is unique in its ability to operate across not only information technology (IT) environments, but also across mission-critical operational technology (OT), the digital layer that runs, monitors and optimises physical industrial operations in real-time. Crucially, it connects these environments back into the business environment (BE), where strategy, governance, people, financial performance and decision-making reside, ensuring that insights generated on the operational edge translate directly into measurable business outcomes.

The business is structured across business clusters that cover:

• Operational technologies – mission-critical industrial systems.

• Information technologies – core business systems such as finance, operations and HR.

• Business environment – data and AI, automated intelligence and solutions for knowledge workers.

• Channel partners – scale distribution channel and ecosystem of partners, including independent software vendors, Sage and Microsoft partners.

Traditionally, these domains operate as silos, resulting in fragmented decision-making and limited visibility. 4Sight’s breadth of experience across all these technology pillars enables it to drive a connected strategy, unifying data, automation and operations across business functions. This convergence underpins many of the group’s most impactful solutions, particularly in asset-intensive and data-rich environments.

One transformation framework, applied across industries.

INDUSTRY-AGNOSTIC, OUTCOME-DRIVEN

While 4Sight operates across a wide range of sectors, its approach is industry-agnostic but outcome-driven. The same transformation principles apply whether the challenge is optimising production schedules, modernising finance functions, enhancing customer experience or improving workforce productivity.

Key focus areas include:

• Intelligent automation of core business processes.

• AI-enabled finance, HR and operational platforms.

• Data-driven decision-making across executive and operational levels.

• Secure, hybrid-cloud architectures that support scale and compliance.

• AI governance frameworks aligned with organisational risk profiles.

DELIVERING AUTOMATED INTELLIGENCE SOLUTIONS

4Sight’s focus on automated intelligence is delivered through a structured set of solution areas spanning the full enterprise, recognising that meaningful AI impact only occurs when technology, data and people work together.

Across the Business Environment, 4Sight enables organisations to modernise how work gets done through modern digital enterprise, intelligent automation, data and AI enablement and software and application development, ensuring AI is embedded into day-to-day decision-making rather than operating in isolation.

Within Information Technologies, the group applies automated intelligence to core business systems, such as enterprise resource planning (ERP), corporate resource planning, human capital and customer relationship management (CRM), helping organisations move from transactional processing to insight-driven operations.

In Operational Technologies, automated intelligence is applied to industrial environments through optimisation, automation and simulation, supporting safer, more efficient and more predictable operations.

AT 4SIGHT, AI IS NOT A WORKFORCE REPLACEMENT STRATEGY; IT IS A PEOPLE INVESTMENT STRATEGY. THE GROUP’S PHILOSOPHY IS GROUNDED IN THE BELIEF THAT SUSTAINABLE AI TRANSFORMATION IS ACHIEVED BY ELEVATING PEOPLE, NOT REMOVING THEM.

This is complemented by 4Sight’s Channel Partner (CP) ecosystem, which allows these AI-enabled solutions to be scaled across industries and geographies through leading global vendors and independent software providers. Together, these clusters form an integrated automated intelligence capability – one that connects operational reality with enterprise data and human expertise to drive sustained, measurable business outcomes.

DATA AND INSIGHTS: FROM NO SIGHT TO FRONTIER

Many organisations operate with limited visibility, siloed data, manual processes and reactive decision-making. Others have invested in systems and dashboards but struggle to convert information into insight or insight into action.

4Sight frames this progression as a journey:

• No sight: zero digital visibility, fragmented systems and manual execution.

• Hindsight: decisions based on historical data and reporting.

• Insight: near real-time visibility and predictive analytics.

• Foresight: continuous, forward-looking decision-making.

• 4AI: automated intelligence, where AI-driven systems not only recommend actions, but also implement decisions and execute within governed parameters.

• 4frontier: the next-generation, AI-transformed organisation – driven by a culture that embraces AI to unlock productivity and innovation through intelligent agents, automation and data insights.

This evolution is not theoretical. It is grounded in decades of operational experience across industries, such as mining, manufacturing, energy, finance, telecommunications and the public sector.

4SIGHT’S GO-TO-MARKET: SCALE, REACH AND CAPABILITY

4Sight operates at scale, with:

• Over 4 500 customers globally.

• Presence across 70-plus countries.

• 440 permanent employees.

• A partner ecosystem of more than 1 000 registered partners.

• Relationships with leading global technology vendors and independent software vendors.

DIGITAL AI TRANSFORMATION AS ORGANISATIONAL DNA

True transformation occurs when AI, automation and data are embedded into the operational DNA of the business – spanning strategy, governance, people, processes and technology. This requires more than software. It requires deep domain expertise, change management and a disciplined focus on value creation.

4Sight’s DNA-based transformation model focuses on:

• Foundational controls and governance to ensure trust, compliance and resilience.

• Data enablement across IT and OT environments.

• Process automation to remove repetitive, manual work.

• Predictive and prescriptive intelligence to support decision-making.

• Human-centric adoption, ensuring people understand, trust and use AI responsibly.

Through a structured, experience-led approach, 4Sight helps businesses:

• Move beyond isolated AI pilots to embedded intelligence across the enterprise.

• Maintain control, transparency and accountability as automation scales.

• Translate innovation into measurable operational and strategic value.

The result is not just AI adoption, but a transformation in how they operate, decide and compete – becoming frontier organisations where people lead, and intelligence is seamlessly automated across the business.

DELIVERING AGAINST THE 4SIGHT DNA

Across its 14 specialist divisions, 4Sight delivers in direct alignment with the core pillars of its DNA:

• People are enabled through robust HR and human capital solutions that support workforce management, skills development and organisational resilience.

• Sales and marketing is empowered through CRM platforms and a strong CP ecosystem that drives engagement, growth and long-term value creation.

• Operations are transformed through OT focused on asset automation, optimisation and simulation, enabling safer, more efficient and data-driven environments.

• Finance is strengthened through enterprise-grade ERP solutions that enhance visibility, control and governance.

Innovation runs across every division, with AI enablement embedded throughout the group to augment decision-making, accelerate outcomes and ensure technology serves people, performance and sustainable growth.

4SIGHT’S AI MATURITY MODEL

4Sight builds frontier firms – next-generation, AI-transformed organisations, driven by a culture that embraces AI to unlock productivity and innovation through intelligent agents, automation and data insights. 4Sight has developed the Seven Stages of AI for People in Business, recognising that the adoption of AI in organisations is a journey and an evolution, from individual use, through team and process enablement, to enterprise-wide orchestration. Importantly, it is a journey that remains human-led, governed and aligned with real business outcomes.

This framework reflects how organisations actually adopt AI in practice – often unevenly, sometimes cautiously, but always under pressure to deliver measurable value while managing risk.

WHY A STAGED APPROACH TO AI MATTERS

The Seven Stages model provides a common language for leadership teams to answer critical questions:

• Where are we today, really?

• What does “progress” look like in practical terms?

• How do we balance governance and innovation?

• How do people remain central as AI capability increases?

Crucially, organisations do not need to start at Stage 1, nor do they need to progress at the same speed across all functions. Finance may be at a more advanced stage than HR; operations may move faster than compliance. The value of the model lies in its flexibility and clarity.

4Sight channel partners across 70-plus countries.
Digital AI transformation embedded into the DNA of the enterprise.

Stage 1: experiment: simple prompts and tasks

At the first stage, organisations begin experimenting with AI at an individual level. AI is used to support employees with simple, low-risk tasks, such as drafting content, summarising information, translating text or generating basic insights. The focus is on building awareness and curiosity while demonstrating immediate productivity gains. Employees start to use AI as a simple copilot or personal assistant, experiencing productivity gains in existing tasks, but without changing core processes or decision-making structures.

Stage 2: learn: build confidence, set guardrails

In the second stage, AI adoption expands beyond individuals into teams and departments. AI begins acting as a digital colleague, supporting routine knowledge work while operating within defined guardrails.

Organisations focus on establishing acceptable-use policies, data controls and governance frameworks. Employees gain confidence in using AI responsibly, and leadership ensures that AI use aligns with organisational standards, compliance requirements and ethical considerations.

Stage 3: innovate: pilot real-use cases

At this stage, organisations move from experimentation to intentional innovation, initiating pilots across a variety of business processes and functional areas. The emphasis shifts to testing tangible use cases that improve quality, reduce errors and enhance decision-making. AI begins integrating with business systems, supporting users at the point of work while maintaining human oversight.

Stage 4: compete: achieve productivity, quality and scale

Stage four marks a turning point where AI begins executing supervised tasks. Automation is introduced to handle repetitive, rules-based activities, such as data processing, transaction handling and operational workflows.

Organisations invest in solutions that provide material cost savings or introduce innovation or business process transformation. AI operates under clear accountability, with controls, human approval mechanisms and audit trails in place. This enables organisations to scale productivity, improve consistency and reduce operational bottlenecks, creating a measurable competitive advantage.
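What “controls, human approval mechanisms and audit trails” look like varies by organisation. The sketch below is one minimal, hypothetical shape: actions above a risk threshold need a named approver, and every attempt, allowed or blocked, is logged.

```python
# Sketch of supervised automation: actions above a risk threshold need
# a human approver, and every attempt (allowed or blocked) is logged.

audit_trail = []

def execute(action, risk, approved_by=None):
    """Run an automated action, requiring human sign-off above a threshold."""
    needs_approval = risk >= 0.5
    allowed = (not needs_approval) or (approved_by is not None)
    audit_trail.append({
        "action": action,
        "risk": risk,
        "approved_by": approved_by,
        "executed": allowed,
    })
    return allowed

execute("reconcile invoices", risk=0.1)                        # runs unattended
execute("release payment batch", risk=0.8)                     # blocked: no approver
execute("release payment batch", risk=0.8, approved_by="cfo")  # runs with sign-off
print(len(audit_trail))  # all three attempts are on the audit trail
```

The threshold, approver roles and log destination are all invented here; the pattern itself – gate, then log, regardless of outcome – is what gives automation clear accountability.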

Stage 5: institutionalise: AI default in routine work

In the fifth stage, AI becomes embedded as the default way of working for routine processes. Unattended agents operate independently within defined service levels, handing control back to humans only when exceptions or anomalies arise.

AI is no longer an add-on; it is institutionalised into day-to-day operations. Organisations see significant gains in efficiency, reliability and speed, while governance and performance monitoring ensure continued control and accountability.

Stage 6: mature: governed, measured, responsible AI

At maturity, AI co-ordinates end-to-end processes across functions, supported by strong governance and measurement frameworks. AI decisions are transparent, auditable and aligned with regulatory and ethical standards.

Organisations actively measure AI impact on performance, risk and outcomes. Human-in-the-loop oversight remains central, ensuring AI augments expertise and supports better decisions rather than replacing accountability. Governance reaches a stage of pervasive maturity.

Stage 7: fully integrated: embedded end-to-end

In the final stage, AI, people and systems operate as a single, integrated enterprise intelligence layer. AI orchestrates workflows across departments, technologies and environments – from information systems to operational and industrial platforms. This is the frontier firm: human-led but AI-operated. Intelligence scales across the organisation as seamlessly as cloud infrastructure, enabling resilience, adaptability and continuous transformation in a rapidly changing world.

GOVERNANCE, TRUST AND THE HUMAN ROLE

Across all seven stages, one principle remains constant: AI must serve and augment people, not replace them. Governance, ethics and accountability are not add-ons; they are foundational. As AI capability increases, so too must transparency, oversight and organisational discipline.

The Seven Stages model ensures that progress is intentional, not accidental.

Seven Stages of AI for People in Business: a structured, governed journey from foundational digitalisation to automated intelligence.

THE NEW AI ARMS RACE: CHIPS AND THE BATTLE FOR COMPUTE

Artificial intelligence may look like a software revolution, but the real competition is unfolding in chips, data centres and computing power. The companies and countries controlling AI infrastructure may ultimately shape the technology’s future, writes BRENDON PETERSEN

Artificial intelligence (AI) is often framed as a software revolution.

However, beneath the algorithms, chatbots and automation lies something far more physical. The real AI race is increasingly about hardware: specialised chips, massive data centres and the energy required to run them. In other words, the future of AI may depend less on who writes the best models and more on who controls the infrastructure that powers them.

At the centre of this shift is a simple reality. AI runs on hardware, and the scale of that hardware is growing rapidly.

THE COST AND DEMANDS OF TRAINING AND INFERENCE

Two workloads dominate the AI landscape: training and inference. Training involves building and refining models using enormous datasets and compute clusters. Inference happens once that training is complete. It is the process of running trained models repeatedly with new inputs.
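The split shows up in miniature in almost any machine-learning code: an expensive, one-off training pass over all the data, then many cheap inference calls against the resulting model. A toy nearest-centroid classifier – nothing like a frontier model, but the same two workloads:

```python
# Toy illustration of the two workloads. Training scans the whole
# labelled dataset once (expensive, episodic); inference reuses the
# trained model cheaply, once per incoming query.

def train(samples):
    """Compute one mean ('centroid') per label from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in samples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def infer(model, value):
    """Return the label whose centroid is closest to the new input."""
    return min(model, key=lambda label: abs(model[label] - value))

model = train([(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")])
print(infer(model, 1.4))  # → low
print(infer(model, 7.5))  # → high
```

At frontier scale the asymmetry is the same, only magnified: training happens on clusters of thousands of accelerators, while every user query triggers another inference call.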

Training large frontier models has become extraordinarily expensive. Systems such as Google’s Gemini, OpenAI’s ChatGPT and Anthropic’s Claude require vast computing clusters running thousands of specialised chips – putting capability of this kind in the hands of only a handful of global technology companies.

Training is expensive and episodic, while inference happens continuously. Every AI query, recommendation or automated decision triggers another inference workload. That demand is placing unprecedented pressure on the global supply of computing hardware.

“Training frontier models is becoming increasingly sophisticated and expensive,” says Clifford de Wit, chief innovation officer at Accelera Digital Group.

“For most companies, the opportunity lies not in building these giant models from scratch, but in building specialised models or solutions.”

Hyperscale cloud providers, such as Microsoft, Google and Amazon, are purchasing enormous quantities of graphics processing units (GPUs), central processing units (CPUs) and storage to support AI services.

HARDWARE AND INFRASTRUCTURE NEEDS DRIVE COMPETITIVENESS

The scale of AI infrastructure continues to grow rapidly.

At the Mobile World Congress this year, Huawei unveiled its Atlas 950 SuperPoD system, capable of linking up to 8 192 processors into a single computing cluster designed for AI training and inference workloads.

Huawei executives say infrastructure like this will underpin the AI economy. “Cloud computing is becoming the public power grid for the AI era,” said Tim Tao, president of Huawei Cloud Solution Sales.

Systems like these show how the AI race is increasingly defined by the ability to deploy massive computing clusters rather than simply writing better algorithms.

Clifford de Wit
Tim Tao

The race for AI hardware is also becoming geopolitical. The United States and its allies have introduced export restrictions on advanced semiconductor technologies in an effort to limit China’s access to the most powerful AI chips. In response, Chinese technology companies have accelerated efforts to develop their own computing infrastructure and semiconductor capabilities. As AI becomes a strategic technology, access to chips and compute capacity is increasingly being treated as a matter of national competitiveness.

Specialised chips such as GPUs have become the workhorses of AI computing because they can perform the parallel mathematical operations required for machine learning.

Demand for these chips has surged as AI workloads expand.

The semiconductor supply chain has already faced disruption in recent years. The pandemic exposed the fragility of chip manufacturing and logistics networks, and AI demand is adding new pressure.

Industry executives say hyperscalers are purchasing large portions of the world’s supply of advanced processors and memory. Prices for GPUs and high-performance RAM have risen accordingly.

Yet this does not necessarily mean organisations cannot run AI workloads.

“There is still capacity available,” says de Wit. “The large hyperscalers and regional cloud providers continue to expand infrastructure. The challenge comes when organisations try to procure their own hardware.”

The rise of small language models illustrates this shift. Models such as Gemini Nano, Microsoft’s Phi-3 Mini and Meta’s Llama variants are compact enough to run directly on mobile devices.

In practice, the economics of AI favour shared infrastructure. Instead of building their own clusters, most businesses consume AI capabilities through cloud platforms.

At the same time, the industry is exploring ways to reduce reliance on massive models.

Smaller and more specialised AI models are gaining traction because they require far less compute while still delivering strong performance.

Many models are now optimised for different levels of complexity. Some are designed for rapid responses, while others handle deeper reasoning tasks that require more computing power.
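One common pattern behind this is a router that sends simple queries to a small, cheap model and escalates harder ones. The sketch below uses invented stand-in functions for the two models and a deliberately crude complexity heuristic; production routers typically use learned classifiers or confidence scores instead.

```python
# Sketch of model routing: a cheap small model handles simple prompts,
# and a larger model is called only when a crude heuristic flags the
# prompt as complex. Both "models" here are stand-in functions.

def small_model(prompt):
    return "small-model answer to: " + prompt

def large_model(prompt):
    return "large-model answer to: " + prompt

def route(prompt):
    # Crude heuristic: long prompts or reasoning keywords go to the big model.
    hard = len(prompt.split()) > 30 or any(
        word in prompt.lower() for word in ("prove", "analyse", "step by step")
    )
    return large_model(prompt) if hard else small_model(prompt)

print(route("Translate 'hello' to isiZulu"))            # stays on the small model
print(route("Analyse this contract clause by clause"))  # escalates to the large model
```

The economic point is that most everyday queries never need the expensive model, which is exactly why compact models are attractive on constrained infrastructure.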

South Africa has its own example. Lelapa AI’s InkubaLM model was designed for African languages and is significantly smaller than traditional large language models while outperforming them in languages such as isiZulu.

This shift is enabling AI to move directly onto consumer devices rather than relying entirely on cloud infrastructure. Samsung is betting heavily on on-device AI as the next phase of the industry. The company says it aims to reach 800 million Galaxy devices globally, reflecting how rapidly AI capabilities are spreading across smartphones and other connected devices.

Yet supply chains still shape what is possible.

“Global component availability can change, and manufacturers have to be able to adapt to those supply realities,” says Justin Hume, vice president of Mobile eXperience at Samsung South Africa.

Hume says decisions around which chipsets power Galaxy devices can shift depending on global component availability.

Behind the scenes, those dynamics extend beyond chips themselves. The production of advanced semiconductors depends on complex global supply chains and specialised minerals used in modern chip manufacturing.

Control over these resources is becoming increasingly entangled with geopolitics.

INCREASED ENERGY CONSUMPTION

Meanwhile, the energy required to run AI infrastructure is emerging as a strategic issue.

Large data centres consume enormous amounts of electricity and require sophisticated cooling systems.

AI training can involve thousands of accelerators running continuously for days or weeks, while inference workloads add ongoing energy demand as services scale globally.
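The scale involved is easy to underestimate. A back-of-envelope estimate, with every input an assumption chosen for illustration rather than a figure for any real model:

```python
# Back-of-envelope energy estimate for a hypothetical training run.
# All three inputs are illustrative assumptions, not measured figures.

accelerators = 1_000   # GPUs in the cluster
watts_each = 700       # assumed draw per accelerator, in watts
days = 14              # assumed duration of the run

hours = days * 24
energy_kwh = accelerators * watts_each * hours / 1_000  # watt-hours → kWh
print(f"{energy_kwh:,.0f} kWh")  # 235,200 kWh for this hypothetical run
```

Even at these modest assumed numbers the run consumes hundreds of megawatt-hours, before counting cooling overhead or the continuous inference load that follows deployment.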

Research into AI infrastructure highlights the environmental implications of these systems. Training large models requires significant electricity, while inference workloads add ongoing energy demand as services expand.

For the Global South, this raises important questions.

Much of the world’s AI infrastructure is concentrated in North America, Europe and parts of Asia, where hyperscale data centres and semiconductor industries are already established.

Demand for AI services is growing rapidly across Africa, but infrastructure limitations remain a challenge. Power grid reliability, data centre capacity and connectivity will shape how quickly AI adoption accelerates.

De Wit believes demand will continue to rise. “AI is moving from experimentation to real business value,” he says.

If current projections hold, the future of AI will depend not only on better algorithms, but also on the ability to scale the physical systems behind them.

Chips, compute and infrastructure are becoming the strategic assets of the AI era.

Justin Hume

AI WITHOUT LIMITS

How ALTRON ARROW is enabling the next wave of intelligent innovation

Artificial intelligence is no longer a future ambition; it is a present-day competitive advantage. Across South Africa and the broader African continent, organisations are increasingly recognising AI as a catalyst for economic growth, operational efficiency and digital transformation. Yet, despite this momentum, one critical challenge continues to slow adoption: access to the right infrastructure.

AI is fundamentally a compute problem. The ability to train, fine-tune and deploy AI models depends heavily on high-performance computing environments, particularly graphics processing unit (GPU)-accelerated infrastructure. Without the right foundation, even the most promising AI strategies fail to scale beyond proof-of-concept.

This is where Altron Arrow plays a pivotal role.

As a leading distributor of enterprise technology solutions, Altron Arrow enables organisations across Africa to unlock the full potential of AI by providing access to world-class infrastructure, deep technical expertise and strategic partnerships.

ALTRON ARROW ENABLES ORGANISATIONS ACROSS AFRICA TO UNLOCK THE FULL POTENTIAL OF AI BY PROVIDING ACCESS TO WORLD-CLASS INFRASTRUCTURE, DEEP TECHNICAL EXPERTISE AND STRATEGIC PARTNERSHIPS.

At the heart of this capability lies a strong collaboration with ASUS, delivering cutting-edge GPU solutions and next-generation AI platforms tailored for both enterprise-scale deployments and emerging AI innovators.

THE INFRASTRUCTURE GAP IN AFRICA’S AI JOURNEY

Africa’s AI opportunity is immense. In critical industries such as financial services, telecommunications, mining, logistics and public sector transformation, the potential use cases are vast. However, the reality is that many organisations face significant barriers when it comes to infrastructure readiness.

Traditional IT environments are not designed to handle the demands of modern AI workloads, and in many cases, organisations are forced to rely on cloud-based AI services. While cloud offers flexibility, it also introduces challenges around cost, data sovereignty, latency and long-term scalability, especially in regions where connectivity and regulatory considerations are critical.

Altron Arrow addresses this challenge by bringing AI infrastructure to organisations across the African continent, enabling on-premise, hybrid and edge AI environments that are both scalable and cost-effective.

BUILDING THE AI BACKBONE WITH ASUS GPU INFRASTRUCTURE

At the core of Altron Arrow’s AI offering is its GPU business, powered by ASUS.

ASUS has emerged as a global leader in high-performance computing infrastructure, delivering solutions that span enterprise-grade GPU servers to compact AI development platforms.

Through this partnership, Altron Arrow provides organisations with access to advanced GPU-accelerated systems designed to support a wide range of AI workloads, from model training and inference to data processing and simulation.

These solutions are not just about raw performance; they are about enabling a full AI life cycle. ASUS GPU infrastructure provides the scalability and reliability required to move from experimentation or development to production.

For enterprises, this means the ability to build private AI environments that offer greater control over data, improved security and predictable cost structures. For industries, such as finance, healthcare and government, where data sensitivity is paramount, this capability is critical.

BY ENABLING LOCAL AI DEVELOPMENT, THE GX10 REDUCES DEPENDENCE ON CLOUD INFRASTRUCTURE, LOWERS COSTS AND ADDRESSES DATA SOVEREIGNTY CONCERNS.

Moreover, GPU infrastructure enables organisations to reduce reliance on external compute resources, accelerating time-to-value and ensuring AI initiatives can scale sustainably.

INTRODUCING THE ASUS ASCENT GX10: DEMOCRATISING AI DEVELOPMENT

While enterprise infrastructure is essential, the AI journey does not begin at scale. It begins with experimentation, prototyping and innovation.

Recognising this, Altron Arrow, in collaboration with ASUS, is bringing to market the ASUS Ascent GX10, a groundbreaking desktop AI supercomputer designed to empower developers, researchers and organisations at the start of their AI journey.

The ASUS Ascent GX10 represents a significant shift in how AI capabilities are accessed. Traditionally, developing AI models required access to large, expensive data centre infrastructure. The GX10 changes this paradigm by delivering petaflop-scale AI performance in a compact, desktop form factor.

Powered by the NVIDIA GB10 Grace Blackwell Superchip, the GX10 integrates a high-performance central processing unit (CPU) and GPU into a unified architecture, enabling seamless data processing and model execution. With up to 128GB of unified memory and the ability to handle models with hundreds of billions of parameters, it provides developers with the tools needed to build and test advanced AI applications locally. This is particularly significant for the African market.
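The “hundreds of billions of parameters” figure is, at root, a memory budget. A rough sanity check, assuming 4-bit quantised weights and ignoring activation and key-value-cache overhead (so real limits are lower):

```python
# Rough capacity check: how many parameters fit in 128GB of unified
# memory at 4-bit quantisation. Ignores activations and runtime
# overhead, so practical limits are lower.

memory_bytes = 128 * 10**9   # 128GB of unified memory
bytes_per_param = 0.5        # 4-bit quantisation: half a byte per weight

params = memory_bytes / bytes_per_param
print(f"{params / 1e9:.0f}B parameters")  # 256B, i.e. hundreds of billions
```

At 16-bit precision the same memory holds roughly a quarter of that, which is why aggressive quantisation is central to running large models on desktop-class hardware.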

By enabling local AI development, the GX10 reduces dependence on cloud infrastructure, lowers costs and addresses data sovereignty concerns. Developers can prototype, fine-tune and validate models on their desks before scaling to larger environments, whether on-premise or in the cloud.

In addition, the GX10 supports a full AI software stack, including popular frameworks such as PyTorch and TensorFlow, making it accessible to both experienced data scientists and emerging AI talent.

FROM EDGE TO ENTERPRISE: A SCALABLE AI ECOSYSTEM

One of the key strengths of Altron Arrow’s AI offering is its ability to support organisations across every stage of the AI maturity curve.

For start-ups and developers, solutions like the GX10 provide an accessible entry point into AI development. These platforms enable rapid experimentation and innovation without the need for significant upfront investment.

As organisations mature, they can scale into more powerful GPU-accelerated servers and clusters, leveraging ASUS infrastructure to support production workloads, large-scale model training and real-time inference.

The GX10 itself is designed with scalability in mind. It allows clustering of two units for more compute power to handle larger models and more complex workloads, providing a pathway from desktop experimentation to distributed AI environments.

This seamless progression, from edge to enterprise, is critical in ensuring AI initiatives do not stall after initial success. Instead, organisations can evolve their infrastructure in line with their growing needs, supported by Altron Arrow’s expertise and partner ecosystem.

THROUGH ITS PARTNERSHIP WITH ASUS AND THE FOCUS ON GPU-ACCELERATED INFRASTRUCTURE, ALTRON ARROW IS PLAYING A CRITICAL ROLE IN SHAPING AFRICA’S AI FUTURE.

ENABLING INDUSTRY TRANSFORMATION ACROSS AFRICA

The impact of AI infrastructure extends far beyond technology; it drives real-world transformation across industries.

In the financial sector, AI-powered analytics enable better risk assessment, fraud detection and personalised customer experiences. In mining and manufacturing, computer vision and predictive maintenance improve safety and operational efficiency. In logistics and ports, AI enables real-time optimisation of supply chains, reducing costs and improving throughput.

For public sector organisations, AI has the potential to enhance service delivery, improve decision-making and drive inclusive growth.

However, none of these outcomes are possible without the right infrastructure.

By providing access to GPU-accelerated computing and AI development platforms, Altron Arrow is enabling organisations across Africa to move from concept to impact, turning AI from a strategic ambition into a practical reality.

BEYOND TECHNOLOGY: A PARTNER FOR THE AI JOURNEY

What sets Altron Arrow Enterprise Computing Solutions (ECS) apart is not just its technology portfolio, but its approach to partnership.

AI adoption is not a one-size-fits-all journey. Each organisation has unique requirements, challenges and objectives. Altron Arrow works closely with customers to understand their specific use cases, design tailored infrastructure solutions and provide ongoing support throughout the AI life cycle.

This includes:

• Assessing AI-readiness and infrastructure requirements.

• Designing and deploying GPU-accelerated environments.

• Enabling hybrid and edge AI architectures.

• Supporting developers with tools and platforms like the GX10.

• Scaling solutions from pilot to production.

By combining technical expertise with a deep understanding of the African market, Altron Arrow ensures organisations are not only equipped with the right technology, but also positioned for long-term success.

SHAPING THE FUTURE OF AI IN AFRICA

As AI continues to evolve, the importance of infrastructure will only increase. The organisations that succeed will be those that can access, deploy and scale compute resources effectively.

Through its partnership with ASUS and the focus on GPU-accelerated infrastructure, Altron Arrow is playing a critical role in shaping Africa’s AI future.

ALTRON ARROW WORKS CLOSELY WITH CUSTOMERS TO UNDERSTAND THEIR SPECIFIC USE CASES, DESIGN TAILORED INFRASTRUCTURE SOLUTIONS AND PROVIDE ONGOING SUPPORT.

From enabling developers with powerful desktop AI systems like the GX10 to delivering enterprise-grade GPU infrastructure, Altron Arrow ECS is bridging the gap between ambition and execution.

In doing so, it not only supports individual organisations, but also contributes to the broader development of an AI-driven economy across South Africa and the continent.

The future of AI in Africa is not just about algorithms; it is about access.

With Altron Arrow, that access is becoming a reality.

START YOUR AI JOURNEY WITH ALTRON ARROW

AI innovation begins with the right technology partner.

At Altron Arrow, we work with customers across industries to enable AI infrastructure designed for modern workloads and intelligent edge environments.

Explore our AI solutions:

ASUS AI GPUs

https://arrow.altron.com/asus-ai-gpus

ASUS GX10

https://arrow.altron.com/asus-gx-10

Speak to Altron Arrow to discuss how AI can power your next generation of innovation.

BUILT HERE

As Africa adopts AI built and hosted abroad, experts warn that control over data, infrastructure and compute will determine the continent’s digital independence.

Africa is largely consuming AI that was built elsewhere, runs on infrastructure it does not control and is governed by priorities that are not its own. For most organisations on the continent, that is not a deliberate choice. It is simply what happened when cloud adoption moved faster than the conversation about where workloads should actually sit.

“Sovereignty is not just about data location. It is about operational authority across the AI life cycle,” says Asif Valley, national technology officer at Microsoft. In other words, how data is handled, where models run and how security is enforced. Valley describes it as a continuum: global cloud is suited to frontier-scale training and rapid innovation, while local or sovereign environments are suited to regulated data, latency-sensitive workloads and anything requiring accountability within jurisdiction.

“Every time data moves across borders, it incurs cost,” explains Valley. Egress charges can accumulate quickly, so for organisations running AI at scale, these costs shift the economics of where workloads belong. For real-time use cases, such as fraud detection or customer engagement, routing data to a distant cloud and back also introduces latency that directly affects outcomes. This is one of the reasons that in sectors like financial services and healthcare, AI pilots routinely stall in production. “It is because the compliance process was never built for offshore infrastructure,” says Valley. “Moving into a sovereign environment resolves that. It allows organisations to put these things into production much faster.”
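A rough back-of-envelope model makes the economics Valley describes concrete. Every rate, volume and distance below is an illustrative assumption, not actual cloud pricing or a measurement from any provider:

```python
# Back-of-envelope comparison of offshore vs in-country AI inference.
# All figures are illustrative assumptions, not real cloud prices.

def monthly_egress_cost(gb_moved: float, rate_per_gb: float) -> float:
    """Cost of moving data out of a cloud region each month."""
    return gb_moved * rate_per_gb

def round_trip_latency_ms(distance_km: float, processing_ms: float) -> float:
    """Fibre propagation floor (~5 microseconds per km, each way) plus processing time."""
    return 2 * distance_km * 0.005 + processing_ms

# A hypothetical fraud-detection service scoring 50 TB of events per month.
offshore = monthly_egress_cost(gb_moved=50_000, rate_per_gb=0.09)  # assumed $/GB egress
local = monthly_egress_cost(gb_moved=50_000, rate_per_gb=0.0)      # data never leaves country

print(f"Offshore egress: ${offshore:,.0f}/month vs local: ${local:,.0f}/month")
print(f"~9 000 km offshore round trip: ~{round_trip_latency_ms(9000, 20):.0f} ms")
print(f"In-country round trip: ~{round_trip_latency_ms(50, 20):.0f} ms")
```

The point of the sketch is the shape of the numbers, not their precision: egress charges scale linearly with data volume, while distance imposes a physical latency floor that no optimisation removes — both of which push real-time workloads toward in-country infrastructure.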

PORTABILITY, PERFORMANCE AND POLICY

There’s also the fact that without local adaptation to African languages, datasets and context, the intelligence organisations deploy is less accurate and less relevant to the people it serves. “The tweaking and fine-tuning is definitely what you need to be doing locally,” says Valley. His advice here is portability: workloads should be architected to shift between global and sovereign environments as requirements change. Where data lives shapes what AI can do with it, which is why data residency should be at the centre of both performance and compliance conversations.

Still, most African organisations have not yet fully understood what they hand over when workloads go offshore. “This is not just ignorance,” says Professor Stella Bvuma, director of the School of Consumer Intelligence and Information Systems at the University of Johannesburg. “It is a symptom of systemic challenges.”

Cost-driven cloud adoption, limited regulatory maturity and a persistent digital divide have made dependency on foreign cloud infrastructure the default. Although the Protection of Personal Information Act and the 2024 National Data and Cloud Policy are shifting the conversation in both finance and healthcare, Professor Bvuma believes awareness remains uneven across the broader business landscape. Small and medium businesses in particular lag behind, she says, facing both cost barriers and uneven regulatory enforcement.

According to Valley, building local AI capacity comes down to four fundamentals: power, AI-ready data centres, connectivity and skills. Africa holds a very small share of AI-grade compute globally, and that compute is being absorbed by large vendors at scale.

The continent does train smaller models, particularly for local languages, says Bruce Bassett, Wits AI chair in science, but the regional compute capacity at the scale needed to train and host advanced models remains limited.

“Unfortunately, I don’t see that commitment,” says Bassett, who puts the minimum funding requirement at R5-billion, from government, industry, academia, or a combination of all three. AI is now embedded in credit approvals, hiring, insurance pricing, logistics and public services. “You cannot meaningfully govern intelligence that runs on infrastructure you do not control,” adds Kutlwano Ngwarati, head of AI and intelligent automation at Exxaro Resources. “When the models, compute and data pipelines sit outside your jurisdiction, accountability becomes aspirational rather than enforceable.”

Asif Valley
Professor Stella Bvuma

BUILD VERSUS BUY: WHEN OFF-THE-SHELF AI ISN’T ENOUGH

In the agentic AI era, the old “build versus buy” debate is outdated. South African corporates must strategically own proprietary data, governance and skills while renting commoditised AI components, writes BUSANI MOYO

In the rapidly evolving AI landscape, organisations grapple with a deceptively simple choice: build custom AI systems or buy off-the-shelf solutions. As Andre Strauss, CEO of CohesionX, argues, this binary framing is outdated. AI is no longer monolithic, but a modular stack of components – models, tools, data pipelines, guardrails and orchestration engines. The real question is: which layers to own, rent or commoditise? For South African corporates facing talent shortages and Protection of Personal Information Act (POPIA) compliance, the answer often lies in strategic ownership. Insights from Strauss and Agile Bridge illuminate when custom builds make sense, their pitfalls and the cost trade-offs.

PRIMARY FACTORS DRIVING CUSTOM AI BUILDS

Organisations opt for in-house development when off-the-shelf tools fall short in terms of differentiation. Strauss emphasises the need to own proprietary layers, such as data structuring, governance and evaluation logic, in systems like retrieval-augmented generation (RAG) assistants.

“Very little of a RAG system is inherently differentiating,” he notes; the IP resides in unique data boundaries and workflows. Agile Bridge’s Deon Koegelenberg echoes this, pinpointing four triggers:

• Competitive edges from proprietary datasets and domain knowledge unavailable to generic models.

• When the AI is the product, like a bespoke fraud-detection engine.

• Needs for tight integration with legacy systems, superior performance and customisation.

• “Service road versus main road” thinking, where proprietary tech creates a defensible moat.

In agentic AI, where systems reason and act via application programming interfaces (APIs) – a set of rules and protocols that allow different software applications to communicate and exchange data – custom development presents significant hurdles. Strauss warns it’s “not a project, but a permanent capability”, requiring model evolution, security hardening and audit trails amid South Africa’s talent scarcity, infrastructure limits and budget pressures.

When it comes to the challenges, Koegelenberg highlights acute talent gaps – machine learning engineers, data engineers and domain experts are rare – and a prolonged time-to-market, with 90 per cent of effort on data quality, infrastructure, integration and monitoring, not model-building. Risks amplify: technical lock-in, model obsolescence, compliance breaches under POPIA and operational overload. AI failures often stem not from models, but from poor adoption and governance, says Strauss. He adds that building monolithic systems invites “engineering ego”, diverting focus from core strengths.

DETERMINING COST-EFFECTIVENESS

Cost decisions hinge on life-cycle economics, not upfront spend. Strauss advocates “rent first”: commoditise embedding models, vector databases and foundation models through vendors to slash risk and burden. He emphasises the importance of building selectively when vendor pricing becomes “structurally irrational at scale” or when logic forms a moat, such as proprietary scoring agents.

Based on these insights, organisations should focus on these questions when determining whether to rent or build:

• Internal skills: do you have the talent for full ownership long term?

• Total cost of ownership (TCO): factor in the 90 per cent of effort that goes to data, set-up and monitoring, not just the model.

• Scalability: does custom beat vendors at scale?

• Strategic fit: is there a unique IP edge?

Koegelenberg stresses the need to evaluate whether off-the-shelf suffices in the long term. The 2026 rule, according to Strauss, is to rent commodities, own differentiation and orchestrate the rest. Winning firms build component libraries, control data/governance and integrate fast, avoiding unnecessary builds. In the agentic era, success favours modular portfolios over ego-driven monoliths. South African leaders who master layer ownership will thrive, turning AI constraints into agile advantages.
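One way those four questions might be weighed is as a simple scorecard. The snippet below is an illustrative sketch only – the scoring rule, question wording and example answers are assumptions for demonstration, not a methodology attributed to Strauss or Koegelenberg:

```python
# Illustrative rent-vs-build scorecard for a single AI component.
# Questions mirror the four factors above; weights and answers are hypothetical.

QUESTIONS = {
    "internal_skills": "Do we have talent to own this long term?",
    "tco_favours_build": "Does life-cycle TCO (data, set-up, monitoring) favour building?",
    "beats_vendor_at_scale": "Does custom beat vendor pricing at scale?",
    "unique_ip_edge": "Does this layer hold differentiating IP?",
}

def recommend(answers: dict[str, bool]) -> str:
    """Rent commodities, own differentiation: build only on a strong 'yes' majority."""
    score = sum(answers[q] for q in QUESTIONS)
    return "build" if score >= 3 else "rent"

# Example: a RAG assistant where only the data/governance layer is differentiating.
rag_pipeline = {
    "internal_skills": True,
    "tco_favours_build": False,
    "beats_vendor_at_scale": False,
    "unique_ip_edge": True,
}
print(recommend(rag_pipeline))  # → rent
```

The design choice reflects the article’s “rent first” rule: the default answer is rent, and only a clear majority of build signals tips the decision the other way.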

www.linkedin.com/in/andrestrauss www.linkedin.com/in/deon-koegelenberg-37335a4

Andre Strauss
Deon Koegelenberg

MEET ALEX

Your next executive hire isn’t human

Executive capacity without executive headcount. By THE AI SHOP

There’s a problem every growing company experiences. You need a chief financial officer (CFO) who can tell you which customers are bleeding the business. A chief operating officer (COO) who sees the bottleneck before it shuts down the production line. A head of legal who can flag that regulatory change before it costs you. A chief risk officer who isn’t just ticking boxes. However, your leadership team is already stretched thin, or those roles simply don’t exist on your organogram.

You’re not alone. Most South African companies operate without a full C-suite. Not because they don’t need one, but because they can’t afford one. The result? Strategic blind spots. Decisions made on gut feel. Board packs assembled in a panic at midnight. Critical questions that never get asked because there’s nobody whose job it is to ask them.

ENTER ALEX

Alex isn’t a chatbot. It’s an entire leadership team – one that works around the clock, knows your business intimately and costs a fraction of a single executive salary.

Alex can manifest as any member of the C-suite – CFO, COO, chief of staff, chief human resources officer (CHRO), head of legal, chief risk officer, chief technical officer, chief marketing officer (CMO) – or as a board-level advisor helping directors navigate complex documentation and governance responsibilities. Each role brings the analytical depth and strategic judgement you’d expect from a seasoned executive in that function.

What makes Alex fundamentally different from generic AI tools is what’s happening underneath. Alex connects to the knowledge sources you choose to give it: board packs and governance documents for a director, enterprise resource planning (ERP) data and operational systems for an executive, or both for a CEO who straddles the two worlds. From these, Alex builds a continuously updated context graph that maps the people, roles, decisions, relationships and history that matter to your role. Alex doesn’t just answer questions. Alex understands your company – at the level relevant to you.

And your data stays yours. Your information is completely isolated – no cross-contamination, no shared environments. Alex never trains on your data. Everything is hosted in a secure cloud environment, fully compliant with the Protection of Personal Information Act. The intelligence Alex builds about your business serves your business alone.

Powered by recent research from MIT, Alex’s analytical capabilities, particularly as CFO, are now on a par with a listed-company analyst’s, delivering investment-grade financial analysis while openly flagging information gaps. This isn’t AI that pretends to know everything. It’s AI that thinks like an executive.

NOT JUST ANOTHER CHATBOT

Business meetings aren’t quiz shows, so you don’t need an AI that regurgitates facts. You need one that synthesises, challenges and advises. Here’s what sets Alex apart.

Alex’s executives think together. This is the breakthrough. When you ask Alex’s CMO about growth strategy, the CMO doesn’t answer in isolation. It responds with awareness that the CFO has flagged cash constraints, the head of legal has noted a pending regulatory change, and the CHRO has observed the sales team is understaffed. That’s because Alex’s executives build a shared, living picture of your business – each insight validated and refined across every relevant perspective, deepening with every interaction. You experience a coherent leadership team that’s been conferring behind the scenes. No other AI tool works this way.

Alex adapts to the user. A board director gets governance-focused insight drawn from strategy documents and board materials. An operations manager gets actionable detail from ERP and operational data. The user decides what context Alex works with, and Alex calibrates its depth accordingly.

Alex teaches, not just tells. A built-in Socratic mode helps users build genuine understanding and retention of complex material, not just skim a summary and move on.

Alex watches the horizon. Competitors, regulations, market shifts – Alex monitors the external environment so you’re never blindsided by changes outside your four walls.

IN PRACTICE: TIME CLOTHING

Time Clothing is a mid-sized South African manufacturer navigating the complexities of a highly competitive, margin-sensitive industry. Like many businesses of its size, the company generates rich operational data through its ERP system, but turning that data into strategic insight has traditionally required specialist skills the team doesn’t always have on hand.

“ALEX IS LIKE HAVING A CONSULTANT ON DIFFERENT LEVELS. OUR EXECUTIVES GET STRUCTURED ANSWERS ON DEMAND. MOREOVER, INSTEAD OF KNOWLEDGE CLUSTERED IN A FEW HEADS, WE NOW HAVE A MUCH MORE DEMOCRATIC, YET CONTROLLED ACCESS TO KNOWLEDGE. MIDDLE MANAGEMENT CAN TAKE SWIFT DECISIONS WHERE PREVIOUSLY IT WOULD HAVE NEEDED ANOTHER MEETING WITH THEIR LINE MANAGER.”
– PHILIPP DREYER, TIME CLOTHING

With Alex connected to its ERP data, Time Clothing’s leadership can now ask the questions that matter: “Which customers are uneconomical to the factory?” “Where are the margin leaks across our product lines?” “What does the order pattern of the last 12 months tell us about where to focus next?” These are exco-grade queries – answered in seconds, with the nuance of someone who knows the business intimately.

TRY ALEX. FREE FOR 14 DAYS

Alex is available in two tiers, plus enterprise deployment. Every plan includes the full AI executive platform; the only difference is how much of Alex you use.

Alex Essential gives you the complete AI executive platform – board preparation, strategic analysis, operational oversight, risk identification – with standard monthly capacity.

Alex Executive is for power users and leadership teams who rely on Alex daily, with extended capacity and priority support.

Enterprise Deployment brings Alex deep into your organisation with custom integrations, unlimited users and dedicated support.

Start your 14-day free trial at www.smart-alex.ai. No executive salary required.

CAN AFRICA RUN AI AT SCALE?

To fully leverage AI, companies need to consider aspects such as how infrastructure, energy, sovereignty and escalating compute costs impact a large-scale AI roll-out, writes RODNEY WEIDEMANN

However, there remains a chasm between the proof-of-concept (PoC) projects that have highlighted AI’s benefits and the practical issues that impact AI roll-out at a larger scale.

Dr Ntsako Baloyi, data and AI lead for Accenture, notes that moving from pilot to production-grade AI requires several foundational elements. The first is strong AI governance, meaning organisations must put in place a robust ethical AI framework that addresses security, safety, privacy, sustainability, explainability, bias and model drift.

“It also requires the right infrastructure, whether on-premise, cloud-based or hybrid, to support scalable and secure deployment. Equally important are the skills needed to design, build, test, productionise and maintain AI solutions effectively,” he says.

“Then, from a practical perspective, organisations must be clear on which use cases to prioritise and understand the implications of those choices across skills, infrastructure, governance and operating models.”

A STRATEGIC CHALLENGE

A pilot usually operates in a controlled environment with limited data and a dedicated team, and scaling potentially strips away that comfort, suggests Christiaan Nel, AI Africa leader, PwC South Africa. Scaling AI is therefore an organisational and strategic challenge, not just a technology one.

“There are five aspects to such a roll-out that form a tightly coupled system, meaning that a decision in one area will impact the others. First, there is the infrastructure component: AI requires access to data centres with specialised processors, high-speed networking and high-performance storage. In Africa, enterprise-grade data centres are concentrated in a few markets, forcing others to choose between costly local builds or routing workloads internationally – each with trade-offs like latency, cost and sovereignty,” he continues.

“Second, there is energy and water. AI infrastructure is energy-intensive, and cooling systems consume significant water. Since many African countries face unreliable grids and water scarcity, this creates a problem in certain regions.”

Third, Nel points to data residency, noting that Africa’s regulatory landscape increasingly requires data to remain in-country, potentially forcing costly multiregion architectures and limiting the datasets available for model training.

“Then there is compliance – beyond mere data protection, there are also sector-specific regulations, emerging AI-specific legislation and ethical considerations around bias, all of which impose additional requirements that influence infrastructure choices and costs.

“Finally, compute costs escalate dramatically at scale, as specialised hardware, cloud usage, data storage and technical skills all need to be factored in.”

Dr Ntsako Baloyi

Dr Jania Okwechime, partner: Africa AI and data leader at Deloitte, adds that, while anyone can use Copilot or ChatGPT on their own and experience business bene ts, doing the same at scale requires far more robust infrastructure.

“Running a PoC is far different to extending this across employees, departments and an entire enterprise. For example, at scale, you need to be able to ensure a quality user experience to gain traction and obtain the desired business value, meaning you also have to architect for the customer experience.

“Technical guardrails are also critical. These guardrails are vital to ensure that AI agents utilise accurate and verified data, otherwise your outputs will be compromised and thus, useless.”

Moreover, explains Dr Okwechime, the power and water consumption required by AI at scale is another challenge. It takes a lot of resources to run this type of compute, so this definitely impacts an organisation’s ability to scale its AI solutions properly.

COSTING, DESIGN AND SKILLS

Scaling up obviously impacts costs significantly, explains Ryan Boyes, governance, risk and compliance officer at Galix. There are a lot of elements that come into the costing issue, too, he says, so organisations need to think carefully about how and why various issues may impact the roll-out, and how these will affect costs.

“Ultimately, success will be driven by a clear understanding of what your business specifically wants to leverage the AI for. Also, it needs to provide a sensible use case and should be undertaken slowly, otherwise it is easy to bite off more than you can chew.”

It is also imperative to implement such a project with effective user buy-in – without which, it will not be used efficiently by staff, or at least not used properly.

Simone Larsson, head of enterprise AI for EMEA at Lenovo, points out that three pillars form the strategic tripod that supports any successful, long-term AI initiative.

“First, a ‘cloud-smart’ strategy, often hybrid or multicloud, is key to long-term viability. It allows an organisation to keep sensitive data on-premise to satisfy data sovereignty rules, run stable workloads on cost-effective private infrastructure and burst to the public cloud for massive-scale training,” she states.

“AI skills are at a premium, so this is another cost factor to consider – businesses need to ensure their people are competently skilled, and that they retain these skills in-house,” he says.

“Workload design, essentially architecting AI workloads for efficiency, is crucial. Using cloud-native principles and optimising the models themselves is key – an efficient design directly improves performance and reduces operational expenditure, making the entire AI initiative more viable.”

Governance, she adds, is the ultimate guarantor of long-term viability. A strong framework that manages the entire AI life cycle, from data acquisition to model retirement, is what mitigates the most signi cant risks.

“This prevents the deployment of biased, unethical models that create reputational and legal crises. Without governance, an AI strategy is high-risk and disconnected. With it, you have a strategic and sustainable one.”

AFRICA’S DILEMMA

Christopher Geerdts, MD at BMI-T, indicates that finding a balance between the constraints – such as water and power – and the enormous value that AI offers is the key dilemma for South Africa, the SADC region and Africa as a whole.

Without its own AI infrastructure, South Africa becomes more dependent on international AI systems, but within an increasingly polarised global world order, he states. On the other hand, SA already cannot meet the food, power and water needs of a signi cant portion of its population.

FAST FACT

Most organisations are still in the piloting phase: Nearly two-thirds of organisations state they have not yet begun scaling AI across the enterprise.

Source: https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

“Diverting investment to AI infrastructure will most likely take away from meeting the basic needs of people. This is why I think smaller-scale facilities, optimised for edge AI and aimed at meeting local challenges, will likely prevail over large, general-purpose AI centres.

“South Africa could, potentially, also leapfrog early AI deployments in terms of both the standards and the technologies used. Ultimately, we cannot afford AI, but we also cannot afford to be excluded – we must, therefore, make wise choices and not just copy other countries if we wish to leverage this technology effectively to the benefit of the continent,” he concludes.

Follow: Ryan Boyes www.linkedin.com/in/ryan-boyes-37b09028 Jania Okwechime linkedin.com/in/janiaaghajanian Accenture www.linkedin.com/company/accenture-in-south-africa Simone Larsson www.linkedin.com/in/csimonelarsson Christopher Geerdts www.linkedin.com/in/christophergeerdts

Dr Jania Okwechime
Ryan Boyes
Simone Larsson

AFRICA’S AI ENERGY CHALLENGE

AI data centres leverage far more power than conventional ones.

RODNEY WEIDEMANN asks, given the energy constraints Africa faces, if this means AI is out of reach for the continent

Artificial intelligence (AI) data centres are fundamentally different because of their extreme power density, cooling intensity and workload profiles.

Unlike traditional enterprise facilities, AI environments are built around graphics processing unit clusters that draw very high power per rack, creating concentrated thermal loads that require advanced cooling.

Professor Rennie Naidoo, research director and professor of information systems at Wits Business School, notes that, as a result, these facilities operate less like conventional IT environments and more like energy-intensive industrial infrastructure.

“Electrical engineering, grid integration and thermal management sit at the centre of design decisions. AI workloads grow exponentially, meaning that compute requirements escalate rapidly, driving significantly higher energy consumption,” he explains.

Mark Walker, director at T4i, says the power requirements for AI data centres are significant in terms of both capacity and reliability.

South Africa's power constraints and water scarcity do present serious challenges, suggests Wojtek Piorko, MD for Africa at Vertiv, but they also create an opportunity to accelerate the adoption of more efficient and environmentally conscious infrastructure.

“Technologies such as liquid cooling and closed-loop cooling systems can cut water consumption while improving thermal efficiency. In addition, integrating alternative energy sources, battery energy storage solutions and intelligent power management can enhance resilience and reduce dependence on the grid,” he says.

Saša Slankamenac, architect in the office of the CTO and practice lead: AI at Dariel Group, adds that Africa cannot afford wasteful builds in water-stressed regions on constrained grids – the environmental and social licence to operate will depend on this.

HOMEGROWN ANSWERS

“Africa equally cannot afford total dependency on offshore compute, due in part to higher costs, but also to latency issues, data-sovereignty constraints and falling behind in innovation. One thing that may work is sparser network distribution, with many nodes hosting distinct, fit-for-purpose models in smaller, distributed inference hubs that are closer to the demand,” says Slankamenac.

“In this way, demand could be minimised and incremented gradually over time, allowing the electricity suppliers to absorb said demand, keep up with it and increase capacity in tandem.

Naturally, policy would also need to align to reward efficient compute rather than merely more compute.”

Professor Naidoo cautions that it is about more than just the environmental costs.

The question is whether Africa can afford to ignore the governance and capacity constraints that determine whether those costs will translate into growth or yet another cycle of mismanagement.

“The critical issue is whether Africa can do so in a way that strengthens national systems, while building a culture of maintenance, technical excellence and accountability. If approached strategically, AI infrastructure could accelerate both renewable investment and digital capability. If mismanaged, it will simply amplify scarcity,” he indicates.

Meanwhile, Simone Larsson, head of enterprise AI, EMEA at Lenovo, suggests we are seeing a pattern of African businesses “innovating from constraint”, which holds lessons for the entire world.

“Instead of replicating the power-hungry, centralised infrastructure of the West, they are leapfrogging to more resilient and distributed architectures. There is a strong focus on edge AI, to reduce reliance on intermittent connectivity and expensive bandwidth, and a mobile-first approach that is inherently suited to the continent's technology landscape,” she notes.


“In sectors like fintech, AI-powered platforms are providing credit to millions of unbanked individuals by building novel risk models. In healthcare, AI is being used for remote diagnostics over mobile phones. In agriculture, it’s helping smallholder farmers optimise yields based on satellite imagery. The common thread is a pragmatic focus on solving high-impact, local problems with technology that is designed for the reality of the local environment,” concludes Larsson.

Saša Slankamenac
Wojtek Piorko
Prof Rennie Naidoo

DECISIONS REPRICED

WORK THAT MOVES FASTER, SMARTER, SAFER

Adopting AI doesn’t mean getting rid of existing personnel and systems; it means working with them to augment and improve operational efficiency and reduce costs, explains VAXOWAVE

A few years ago, AI felt like a distant sci-fi roommate: mysterious, slightly creepy, and likely to raid your fridge. Today, it’s more like that quietly brilliant colleague who spots broken things you didn’t even know were broken, fixes them, and then asks for access to your shared drive. Most business conversations about AI still begin in the wrong place: “What can AI do?” That’s a vendor question. A far better one is: “What decisions do we make every day that are expensive, slow, inconsistent or quietly wrong?”

AI doesn’t just automate tasks. It compresses the time between insight and action.

AI IS A MIRROR, NOT A MAGICIAN

When companies pilot AI and end up disappointed, it’s because AI has an annoying habit of exposing uncomfortable realities:

• Your data is scattered and rife with conflicting definitions and viewpoints.

• Your processes rely on tribal knowledge rather than clear design.

• Your approvals were built to manage risk, but they stall progress.

AI amplifies whatever you feed it. Feed it mess, and it accelerates chaos. Feed it discipline, and it becomes a powerful multiplier.

DECISION VELOCITY AND ADAPTABILITY

Speed without control is chaos. Control without speed is bureaucracy. AI offers a third path: velocity with guardrails. Even more powerful is adaptability. Call it “Executive Darwinism”, but survival favours the adaptable. AI lowers the cost of learning so you can:

• Test ideas faster.

• Model scenarios earlier.

• Spot patterns sooner.

• Recover from mistakes more quickly.

THE TWO-TEAM EXAMPLE

Imagine two teams starting the same project on day one:

• Team A: manual reports, alignment meetings, spreadsheet tracking, layered approvals.

• Team B: automated insights, rapid drafts, instant scenario modelling, searchable knowledge, frictionless workflows.

After 90 days, Team B is more adaptable. Experimentation costs less. Learning loops tighten. Mistakes become cheaper. Output feels like magic, but it’s simply superior system design.

True AI transformation rarely arrives via one grand launch. It spreads through pockets of excellence that create cultural tension. Smart leadership treats that tension as a signal, not a threat. Organisations need to ask themselves the difficult questions, such as:

• What outcomes are we optimising for: profit, speed, customer trust, risk reduction?

• Which workflows are decision-heavy and repeatable?

• What data is reliable enough to trust?

• Where must we embed human accountability?

• How do we measure impact beyond “we deployed it”?

• What must never be automated?

An organisation’s accountability should never be delegated to an algorithm simply because it’s convenient.

AI DOESN’T REPLACE PEOPLE. IT REPLACES DRIFT

Organisations fail due to slow decisions, bloated processes, unclear ownership and information that arrives too late. AI is their chance to reclaim momentum, not by turning everyone into prompt engineers, but by redesigning work, so people spend less time chasing updates and more time making sharp decisions and adapting when the world shifts. So, if you’re asking whether AI is worth it, don’t start with what it can do. Start with what it can eliminate: the rework, the waiting, the guessing, the copy-pasting, the meetings that should have been a dashboard.

WHERE VAXOWAVE FITS

Vaxowave guides organisations from AI curiosity to real advantage. We focus on the unglamorous foundations that make it stick: decision design, operating-model shifts, robust guardrails, data-readiness and measurable outcomes.

Not AI for AI’s sake, but AI that drives performance, reduces risk and builds organisations that move fast and adapt faster, because in the age of AI, the true differentiator isn’t raw intelligence; it’s intentional adaptability.

Interested in learning more about our services as a leading new-age technology company? Contact us at info@vaxowave.com

FROM PILOT TO PRODUCTION: WHY AI NEEDS SITE RELIABILITY ENGINEERING

AI is moving fast. Many organisations can launch pilots that impress in meetings, but the real challenge is making AI dependable in day-to-day operations – where demand is unpredictable, environments are noisy and information constantly changes.

There is a meeting that plays out in boardrooms worldwide with striking regularity. An AI Proof of Concept is demonstrated.

The outputs are impressive. Executives nod. Budget is allocated. Weeks later, the deployment team discovers that what worked beautifully in a controlled environment behaves rather differently when exposed to real customers, live data, shifting policies and the unpredictable rhythms of business operations.

This is not a technology failure. It is an operational one. And it is the defining challenge of enterprise AI in 2026.

The question leaders are now confronting is not whether AI can do something impressive. It is whether AI can be trusted, consistently, safely and economically, when it is no longer a pilot but a permanent part of the way the business functions.

THE QUESTIONS THAT ACTUALLY MATTER

When Vaxowave works with senior technology and operations leaders, the conversation rarely begins with models or architectures. It begins with a far more familiar set of concerns.

Will the system continue working when usage spikes? Will it stay accurate when policies, product details or regulatory requirements change? Will sensitive data remain within appropriate boundaries? Can we explain and defend decisions when challenged by regulators or customers? Will costs remain predictable as adoption grows across different business units? Who is accountable when something goes wrong?

These are not AI questions. These are reliability questions. They are the same questions that organisations have always asked about critical services, such as payments, customer portals, trading systems and claims platforms. The fact that AI now demands the same rigour should not come as a surprise. It should come as a signal. This is why the next phase of AI needs a reliability discipline and why Vaxowave is framing that discipline as AI SRE (artificial intelligence site reliability engineering).

AI SRE means applying the site reliability engineering discipline to AI. SRE is how modern organisations keep important services available, fast enough to be useful, safe to operate and continuously improving. Pioneered at scale by technology companies managing infrastructure on which millions depended, SRE offered a set of operating principles that translated engineering complexity into business accountability.

The core ideas are accessible: define what good performance looks like in terms the business can measure, monitor whether you are meeting it, respond with discipline when you are not and improve the system so the same failures become progressively less likely. It is a framework built on evidence, ownership and continuous learning.

Vaxowave’s AI SRE brings that same operating discipline to AI so it can move beyond pilots into dependable services that leaders can scale with confidence.

For many organisations, the challenge is not choosing an AI model or tool. It is building an AI service that works within real enterprise conditions. Clear data boundaries, identity and access, audit requirements, operational support and controlled change. These factors determine whether AI becomes business as usual or remains an isolated experiment.

Kume Luvhani and Peter Rix, co-founders of Vaxowave.

WHY AI NEEDS A RELIABILITY DISCIPLINE

Traditional software is rules-based and generally behaves consistently. AI services behave more like living systems. They respond to context, and that context never stops changing. Information, user behaviour, demand and risk change as new use cases and data sources are introduced. Quality can drift quietly, costs can scale quickly, and confidence in AI as a model or tool is extraordinarily fragile. It is built slowly through consistent performance, but it can be lost in a single visible incident, especially when AI is exposed to customers or used in high-stakes decisions.

There is also the compounding effect of adoption velocity. Once employees discover AI is useful, demand spreads quickly across departments, workflows and channels. Organisations that address reliability only after AI has proliferated pay a steep price: pausing roll-outs, redesigning workflows that teams already depend on, retrofitting governance into live services and managing the reputational fallout of inconsistent performance in the field.

The lesson from two decades of enterprise software is clear: designing reliability from the beginning is always substantially cheaper than engineering it in after scale has been achieved.

THE AI PRODUCTION GAP AND WHAT CAUSES IT

The gap between a successful pilot and a scalable service is where many AI initiatives slow down. It usually comes down to operational readiness.

If boundaries are unclear, risk friction increases. If leaders cannot answer simple questions about what data the AI can access, what it is allowed to do and how outputs are reviewed, risk teams will understandably ask for more assurance before the organisation scales adoption. If quality is inconsistent, trust drops. Teams may be excited by a strong demo, then disappointed if the AI produces uneven results across real business scenarios.

AI SRE KEEPS INNOVATION MOVING WHILE PROTECTING THE ORGANISATION FROM INSTABILITY.

If ownership is unclear, accountability is unclear. If an AI system is treated as a project rather than a service, day-to-day responsibility can be fragmented and response can be slow when issues arise.

If costs are not measured and managed, they can surprise the organisation. AI costs often scale with usage. If there are no clear measures and guardrails, spend can rise quietly, then trigger a sudden reaction that stalls progress.

If incident-readiness is weak, recovery will be slow. Every service has incidents. The difference is whether the organisation can detect issues early, respond consistently and improve so the same issue is less likely to happen again.

Vaxowave sees these patterns repeatedly in the field, especially in regulated environments where the tolerance for risk surprises is low. AI SRE is a practical response. It provides an operating model for AI that supports scale without losing control.

WHAT AI SRE MEANS IN PLAIN ENGLISH

AI SRE is best understood as five outcomes that business leaders care about.

First, AI is available when people need it. If AI is embedded into operations or customer channels, it must work during peak periods, not only in quiet test conditions. AI SRE designs the service to handle demand spikes and dependency issues without collapsing, with sensible fallbacks so users can still complete work when parts of the system degrade.
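As a minimal sketch of that fallback behaviour (the function names `primary_model` and `cached_answer` are invented stand-ins, not anything from Vaxowave):

```python
# Illustrative graceful-degradation pattern: try the primary AI path,
# then fall back so users can still complete work when it degrades.
def primary_model(query: str) -> str:
    # Stand-in for a call to the main AI service; here it always fails
    # to demonstrate the fallback path.
    raise TimeoutError("model endpoint saturated")

def cached_answer(query: str) -> str:
    # Stand-in for a cheaper fallback: a cached or templated response.
    return f"[fallback] Your request has been logged: {query!r}"

def answer(query: str) -> str:
    try:
        return primary_model(query)
    except (TimeoutError, ConnectionError):
        # Degrade gracefully instead of failing the whole workflow.
        return cached_answer(query)

print(answer("check my leave balance"))
```

The point is not the three lines of `try/except`; it is that the fallback path is designed in advance, so a degraded dependency produces a reduced answer rather than a failed workflow.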

Second, AI is consistent enough to rely on. Reliable AI is not perfect. It is dependable for defined tasks. AI SRE clarifies what the AI is meant to do, what it must not do and the conditions under which it performs well, then tests against realistic scenarios so users get fewer surprises.

Third, AI stays within safety and privacy boundaries. AI often touches sensitive information. AI SRE treats boundaries as a design requirement, so the AI only accesses approved information and handles confidential content appropriately. When the organisation needs to justify an output, it can show what it was based on.

AI SRE MAKES GOVERNANCE TANGIBLE BY FOCUSING ON EVIDENCE, TRACEABILITY AND ACCOUNTABILITY, WHICH SHIFTS GOVERNANCE FROM ABSTRACT DEBATE TO PRACTICAL CONFIDENCE.

Fourth, AI costs are predictable and tied to value. AI SRE introduces business measures such as cost per document processed, cost per report generated, cost per case triaged, or cost per customer interaction resolved. This turns AI cost into a steerable metric, so leaders can set guardrails and scale the use cases that deliver the most value.
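The unit-economics idea can be made concrete with a few lines of arithmetic. This is an illustrative sketch; the spend, volume and guardrail figures are invented, not drawn from any Vaxowave engagement:

```python
# Illustrative unit-economics guardrail: cost per outcome, with an
# alert threshold leaders can steer against. All numbers are made up.
def cost_per_outcome(total_spend: float, outcomes: int) -> float:
    if outcomes == 0:
        return float("inf")  # spend with no outcomes is the worst signal
    return total_spend / outcomes

MONTHLY_SPEND = 42_000.0      # rand spent on the AI service this month
DOCUMENTS_PROCESSED = 14_000  # outcomes delivered in the same period
GUARDRAIL = 5.0               # maximum acceptable cost per document

unit_cost = cost_per_outcome(MONTHLY_SPEND, DOCUMENTS_PROCESSED)
print(f"Cost per document: R{unit_cost:.2f}")
if unit_cost > GUARDRAIL:
    print("Guardrail breached: review usage before scaling further.")
```

Once cost is expressed per outcome rather than per invoice, comparing use cases and setting guardrails becomes a routine management exercise.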

Fifth, when things go wrong, the organisation recovers quickly and improves. Reliability is not the absence of incidents. It is fast recovery and continuous learning, with clear ownership and a rhythm of improvement that strengthens the service over time.

WHY SRE IS THE RIGHT FOUNDATION FOR AI

SRE became mainstream because digital services became too important to run informally. Organisations needed operational discipline that balanced speed and stability.

SRE introduced a simple set of ideas that executives can relate to. First, you define what good performance looks like. Not perfection, but a practical target. Second, you measure whether you are meeting it. Third, you decide how to respond when you are not. Fourth, you improve the system, not just to fix the immediate problem, but to support a never-again mindset where root causes are addressed and repeat incidents become less likely.
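These four steps map directly onto the standard SRE error-budget device. The sketch below is illustrative; the target and request counts are invented numbers:

```python
# Illustrative SRE error budget: define a target, measure against it,
# and let the remaining budget drive the respond/improve decision.
SLO_TARGET = 0.99          # step 1: define "good" (99% acceptable answers)

total_requests = 10_000    # step 2: measure what actually happened
failed_requests = 140

allowed_failures = total_requests * (1 - SLO_TARGET)   # ~100 permitted
budget_remaining = allowed_failures - failed_requests  # negative: overspent

if budget_remaining < 0:   # step 3: respond with discipline
    print("Error budget exhausted: pause roll-outs, prioritise reliability.")
else:                      # step 4: keep improving while shipping
    print(f"Within budget ({budget_remaining:.0f} failures to spare).")
```

The error budget is what keeps the speed/stability trade-off honest: as long as budget remains, teams ship; once it is spent, reliability work takes priority.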

Applied to AI, these ideas are powerful because AI is often adopted in fast-moving environments with high expectations and limited tolerance for risk surprises. AI SRE keeps innovation moving while protecting the organisation from instability.

HOW AI SRE DIFFERS FROM TRADITIONAL AI DELIVERY

Many AI projects begin by proving usefulness. Building the model or assistant, integrating it into a workflow and showing that it can produce valuable outputs. That is an important first step, but on its own it rarely delivers sustained business value. Value comes when the capability is trusted enough to be used consistently, embedded into real processes and scaled without creating new risk, cost volatility or operational burden.

AI SRE builds on early success with an additional emphasis: AI is not only something you deliver; it is something you operate as a service. That means reliability includes the day-to-day experience for users and the dependability of outputs over time, not only whether the system is available. When AI is treated as a service, confidence grows, adoption sticks, and the organisation can capture value repeatedly rather than in isolated moments of impact.

AI SRE begins by defining clear service expectations in business terms. What reliability is required, what outcomes matter, what risks are acceptable, what costs must stay predictable and what data boundaries and review requirements apply. It then designs for operability from day one, with monitoring, usage controls and governance evidence built in, rather than added after roll-out. This matters because AI behaviour can shift as models, prompts, sources and workflows evolve. A dependable AI service therefore needs controlled change and testing that reflects real usage, with guardrails that keep the service within approved boundaries.

AI SRE only works when it is grounded in platform reality. AI has to fit into the way the organisation already runs services, with enterprise identity and access, clear data governance, operational monitoring and security and compliance constraints that are enforced in practice, not only documented. Vaxowave works at the intersection that determines whether AI becomes a dependable business service or an isolated experiment. We bring together security, data, AI and operational excellence to help organisations scale AI with confidence and control.

TYPICAL LEADERSHIP CONCERNS AND HOW AI SRE ANSWERS THEM

Leaders sometimes worry that AI will embarrass the organisation in front of customers. AI SRE reduces this risk by setting clear boundaries, applying realistic testing and introducing review mechanisms, where needed, with a clear plan for fast recovery if issues occur.

They worry that risk and compliance will block progress. AI SRE makes governance tangible by focusing on evidence, traceability and accountability, which shifts governance from abstract debate to practical confidence.

Leaders worry that costs will get out of control. AI SRE measures unit economics and introduces guardrails so leaders can see the cost per outcome and steer investment toward the highest-value use cases.

They worry they cannot support another complex platform. AI SRE designs for operability, so ownership is clear, monitoring is meaningful and incident handling is predictable.

AI SRE GIVES LEADERS A WAY TO MOVE FAST WITHOUT LOSING CONTROL AND TO SCALE AI WHILE PROTECTING TRUST, COST PREDICTABILITY AND OPERATIONAL STABILITY.

VAXOWAVE’S APPROACH TO AI SRE

Vaxowave helps organisations implement AI SRE in a way that matches business reality. We define the AI service in business terms, align boundaries and controls to keep the service within governance expectations, establish observability and incident-readiness to detect and handle issues quickly and implement unit economics to keep cost and value visible as adoption grows.

For organisations looking to move from pilots to dependable services, Vaxowave offers an AI SRE Readiness Review. It assesses reliability under real demand, boundaries and controls, cost guardrails and operational ownership, then provides a prioritised plan to stabilise and scale one or two high-value AI use cases with confidence.

WHY THIS MATTERS NOW

Organisations that win with AI will not be the ones with the most pilots. They will be the ones who can run AI as a dependable service. AI is becoming part of daily work. That makes reliability a leadership requirement, not a technical preference. AI SRE gives leaders a way to move fast without losing control and to scale AI while protecting trust, cost predictability and operational stability.

www.vaxowave.com info@vaxowave.com

THE REAL COST OF AI

AI is not a once-off investment. It brings ongoing costs, new risks and hard questions about value. Business leaders say the difference between waste and return comes down to cost control, integration and governance, writes ITUMELENG MOGAKI

For Jameel Khan, co-founder of UnconventionalCA, the biggest hidden cost is not the technology itself but poor

platforms before defining the exact job to be done. Running multiple pilots without clear ownership or baseline metrics makes it hard to prove value. If you can’t prove what worked, you can’t scale it.” Khan adds that measurable outcomes are essential. “Tie AI directly to revenue, cost reduction, productivity or risk reduction. If it’s not embedded into daily workflows, it becomes a demo nobody uses.”

His recommendation is simple: start with one high-impact use case, track results carefully, consolidate tools and assign clear cost owners. “When you solve one defined commercial problem and standardise the new workflow, AI shifts from a cost to something that funds itself.”

Dumisani Moyo
Jameel Khan

AI’S EXPENSIVE BACKBONE

Exploding demand for graphics processing units, high-performance networks and energy-intensive data centres is turning artificial intelligence into one of the most expensive computing workloads businesses have ever deployed, writes BRENDON PETERSEN

Artificial intelligence may be getting smarter, but it is also becoming far more expensive to run. Training and deploying modern AI models now demands vast amounts of computing power, specialised hardware and supporting infrastructure. From graphics processing units (GPUs) to networking bandwidth, the cost stack behind AI has expanded rapidly. For companies racing to adopt the technology, the question is no longer only what AI can do, but what it costs to operate at scale.

WHERE THE MONEY IS SPENT

The scale of investment reflects how infrastructure-heavy the AI boom has become. Major technology companies and cloud providers are committing hundreds of billions of dollars to AI infrastructure, including specialised chips, new data centres and high-capacity networks. The spending highlights how the economics of artificial intelligence increasingly depend on physical computing infrastructure rather than software alone.

At the centre of the cost surge is compute. Advanced AI models rely heavily on high-performance GPUs, which have become the engine of modern machine learning. Demand for these chips has surged globally, pushing prices higher and stretching supply chains. Large training runs can require thousands of GPUs operating simultaneously for weeks, turning a single model training cycle into a multimillion-rand infrastructure exercise.

Teraco, one of Africa’s largest data centre providers, says resilient digital infrastructure must be designed with high-capacity power delivery, advanced cooling systems and strong connectivity to support modern digital workloads.

As artificial intelligence workloads grow, those infrastructure requirements are becoming more demanding. Data centres must support dense computing environments while maintaining reliable power supply, cooling capacity and high-speed interconnection between thousands of servers operating simultaneously.

Networking is another cost layer that many organisations underestimate. As AI workloads move from experimentation to production, massive volumes of data must travel between servers, storage systems and cloud environments.

“Many organisations underestimate the networking and bandwidth costs associated with deploying AI systems at scale,” says Smangele Nkosi, country leader for Cisco Southern Africa. The company’s South Africa AI Readiness Index shows that only 15 per cent of organisations are considered mature in terms of network resilience.

As AI adoption accelerates, networking itself is becoming a material cost driver rather than simply a supporting layer.

“Networks that were not built with AI in mind can quickly become bottlenecks under AI workload surges,” Nkosi says, noting that many organisations must modernise their infrastructure before scaling AI deployments.

Even after models are trained, the spending does not stop. Running AI in real-world applications requires inference infrastructure that can respond to millions of queries. Each interaction consumes compute cycles, memory and electricity, meaning operational costs continue long after development ends. For companies in South Africa and across Africa, these pressures can be even sharper. Much of the advanced infrastructure used to run AI is hosted outside the continent. That can introduce higher latency, additional data transfer costs and questions about data sovereignty.

As a result, AI budgeting is changing. Early experimentation often focused on software and model capabilities. Now, chief information officers must plan for sustained infrastructure spending that spans chips, networking, energy and facilities.

Yet even with efficiency gains, most analysts expect the overall cost of running AI systems to continue climbing as demand for generative AI services expands.

However, hardware is only part of the equation. Memory requirements have increased sharply as models grow larger and datasets expand. The facilities that host these systems are also evolving.

Smangele Nkosi
The economics of AI depend heavily on data centres that are fully equipped to support dense computing requirements.

THE AI FRONT DOOR IS OPEN

MUHAMMED OMAR, country manager – Africa at ServiceNow, explains how EmployeeWorks turns intent into action

Most enterprise AI investments have delivered one thing reliably: more tools to manage. Employees toggle between a chatbot for IT, a portal for HR, a form for finance, and somehow, the work still waits. ServiceNow EmployeeWorks was built to end that cycle. Launched in late February 2026, it represents something the industry has been promising for years but rarely delivered: AI that doesn’t just answer questions; it finishes the job.

ONE FRONT DOOR FOR THE ENTIRE WORKFORCE

EmployeeWorks combines Moveworks’ conversational AI and enterprise search – trusted by over five million employees globally – with ServiceNow’s unified portal and autonomous workflow engine. The result is a single AI front door accessible from wherever employees already work: Microsoft Teams, Slack or any browser. An employee types a request in plain language: update my benefits, check my paid-time-off balance, provision software access. EmployeeWorks interprets the intent, co-ordinates the necessary actions across IT, HR, finance, facilities and procurement systems and delivers the outcome. No ticket. No wait. No second system to navigate.

THE GAP IT CLOSES

The proliferation of point AI solutions over the last two years has fragmented the employee experience. Workers bounce between systems, re-explain context to each tool and still end up doing the integration work themselves. EmployeeWorks addresses this by acting as an orchestration layer – it ingests the request, assigns tasks to the right AI agents behind the scenes, manages multisystem handoffs and returns a completed outcome.

THE DISTINCTION THAT SETS EMPLOYEEWORKS APART IS DECEPTIVELY SIMPLE: IT COMPLETES TASKS RATHER THAN DESCRIBING HOW TO COMPLETE THEM.

Critically, it does this within your organisation’s existing governance framework. Approvals, role-based permissions, audit trails and policy enforcement all remain intact. AI executing work is meaningless – or dangerous – without governance executing alongside it.
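The orchestration-with-governance pattern described here can be sketched generically. This is not ServiceNow code; the intent names, policy table and role labels are all invented to illustrate routing a request to an action only through an explicit permission check:

```python
# Generic sketch: route a plain-language intent to an action, gated by
# a governance check. All names and policies are illustrative.
POLICIES = {"update_benefits": {"hr_user"}, "provision_software": {"it_admin"}}

def classify_intent(text: str) -> str:
    # Stand-in for the conversational AI's intent classifier.
    return "update_benefits" if "benefits" in text else "provision_software"

def execute(intent: str, roles: set[str]) -> str:
    allowed = POLICIES.get(intent, set())
    if not (roles & allowed):
        return f"denied: {intent} requires one of {sorted(allowed)}"
    # A real system would call the downstream HR/IT workflow here and
    # record an audit-trail entry before returning the outcome.
    return f"completed: {intent}"

print(execute(classify_intent("update my benefits"), {"hr_user"}))
print(execute(classify_intent("give me admin tools"), {"hr_user"}))
```

The design point is that the permission check sits in front of every action, so an agent cannot execute anything the requester’s role does not allow, and every decision leaves a trace.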

EXECUTION, NOT SUMMARISATION

The distinction that sets EmployeeWorks apart is deceptively simple: it completes tasks rather than describing how to complete them. Enterprise search tells you where the answer lives. A chatbot tells you what to do next. EmployeeWorks does the work.

Organisations like CVS Health, Siemens Healthineers and the City of Raleigh are among the early adopters. The signal from these deployments is consistent: when AI solves real problems elegantly, adoption follows organically. It stops being a target on a roadmap and becomes a measurable result.

PART OF A LARGER PLATFORM BET

EmployeeWorks doesn’t stand alone. It connects to the ServiceNow AI Control Tower – the central command centre for managing all AI activity, agents and specialists on the Now Platform. Every action is traceable, every agent is governed, and every workflow remains connected to business rules rather than operating as a black box.

EMPLOYEEWORKS INTERPRETS THE INTENT, CO-ORDINATES THE NECESSARY ACTIONS ACROSS IT, HR, FINANCE, FACILITIES AND PROCUREMENT SYSTEMS AND DELIVERS THE OUTCOME.

Alongside EmployeeWorks, ServiceNow launched Autonomous Workforce – AI specialists that own entire job functions end-to-end rather than individual tasks. The first, a Level 1 Service Desk AI Specialist, arrives in general availability in Q2 2026. The direction is clear: from a conversational front door today to an autonomous enterprise workforce tomorrow.

The next competitive advantage won’t come from having the most AI tools. It will come from having AI that actually finishes the work – with the governance, integration and employee experience to prove it. ServiceNow EmployeeWorks is that proposition, now generally available.

Muhammed Omar

A BLUEPRINT FOR AGENTIC BUSINESS

AI that thinks. Workflows that act

AI without workflows is just expensive advice. AI inside workflows is autonomous enterprise execution, and that’s where, with its blueprint for agentic business, SERVICENOW provides a solution

Artificial intelligence (AI) got brilliant fast. In three years, it went from novelty to necessity – reasoning, writing, analysing and coding at levels that redefine what’s possible. Every enterprise leader is asking the same question: how do we put this to work? Even the most powerful AI still can’t resolve a cross-system payroll discrepancy on its own. It can’t provision an employee across five systems with the right approvals. It can’t investigate a compliance exception spanning three vendors and two jurisdictions while producing an audit trail that holds up in front of regulators.

Intelligence has outpaced execution. The models can think. The open question is what makes them act safely and at scale, with the context and governance that enterprises require.

IT’S ALL ABOUT WORKFLOW

The answer is workflows. Not simple automation or task-level scripts. Enterprise workflows, the kind built on decades of business process capital, security architecture, regulatory compliance and cross-system orchestration. The kind tested, hardened and refined through thousands of real-world deployments.

SERVICENOW IS BUILDING THE SYSTEM THAT MAKES EVERY AI SMARTER THE MOMENT IT CONNECTS TO YOUR BUSINESS.

AI without workflows is just expensive advice. AI inside workflows is autonomous enterprise execution. That’s the shift happening right now. And ServiceNow was built for it. The average enterprise runs 367-plus applications across the employee experience alone, each with its own data model, security perimeter and governance logic. That fragmentation has been accumulating for decades. AI didn’t create it, but it made it impossible to ignore because AI can only execute as well as the infrastructure beneath it allows. That’s why, despite record investment, enterprise AI maturity actually declined 20 per cent year over year.

Vendors bolted AI onto existing applications as sidecars, producing shallow intelligence layered on disconnected processes. The models could reason. They couldn’t execute across systems with governance, context and accountability. The foundations weren’t ready; ServiceNow’s were.

When AI needed a platform that could supply identity, governance, context and execution infrastructure, the platform was already there.

THE INDUSTRY IS SOLVING THE WRONG PROBLEM

There’s an arms race underway, measured in benchmarks, parameter counts and context windows.

Every quarter, a new model claims the top spot. Every quarter, the gap between first and fifth gets smaller.

The cost of intelligence has dropped by an order of magnitude in three years. The benchmarks are converging. The price gaps are closing. Intelligence is already cheap, and getting cheaper by the month.

That’s not a problem. It’s a signal. The value is shifting. When intelligence is abundant, the scarce resource is everything surrounding it: the enterprise context that grounds AI in reality, the governance that makes it safe, and the execution infrastructure that turns insight into action. Durable competitive advantage lives in who can apply intelligence where work actually happens.

When you look at the competitive landscape through that lens, the gaps become clear.

Stand-alone large language models (LLMs) represent a genuine leap in reasoning, but they’re general-purpose intelligence engines. They can suggest actions, but can’t orchestrate workflows across systems. They have no persistent memory, no built-in governance, no connection to the systems of record where work gets done. Even leading model providers increasingly acknowledge that the real breakthrough comes from the context and integration wrapped around the model. Intelligence without action.

EVERY AUTONOMOUS WORKFLOW SERVICENOW DELIVERS IS DESIGNED TO GIVE PEOPLE BACK THEIR TIME, THEIR JUDGEMENT AND THEIR FOCUS. THE TECHNOLOGY GETS SMARTER. THE WORK GETS MORE HUMAN.

Vibe coding has lowered the barrier to building software in remarkable ways. It’s fast, real and genuinely impressive for prototyping. However, enterprise value doesn’t come from prototypes. It comes from decades of accumulated business process capital: the approvals, service level agreements (SLAs), exception handling, cross-functional co-ordination and regulatory controls refined through real operations. Vibe coding makes the first 20 per cent easy. The remaining 80 per cent – hardening, integrating, governing and maintaining – is where the real cost lives. Speed without depth.

Data platforms play a critical role in organising and modelling enterprise data. They help organisations understand what they have. However, understanding and acting are different things. Insights generated on a data platform still need to be executed through another system. They power intelligence but don’t sit inside the flow of work where incidents get resolved, approvals get routed and service requests get fulfilled. Insight without execution.

Agent frameworks and digital workers have evolved from chat assistants to task-level agents. However, they operate at the task layer, not the enterprise execution layer. Early agentic frameworks have proven that AI can act autonomously, and simultaneously demonstrated why ungoverned execution creates real risk. Even their own creators advise limiting access to sensitive files and restricting internet activity to trusted environments. Powerful tools with real limitations. Capability without control.

Every one of these approaches solves a real problem. None of them solves the whole problem.

Who can apply AI where work actually happens, safely, at scale, and with context?

That’s a platform question. And it has a platform answer.

THE INSIGHT THAT CHANGES EVERYTHING

Here’s an analogy that clarifies the entire competitive landscape.

A GPS helps an individual optimise a route. Powerful, personalised, useful, but localised. It has no awareness of what’s happening in the rest of the system.

Air traffic control co-ordinates thousands of moving parts simultaneously, enforcing safety constraints, routing across teams and systems and maintaining real-time operational awareness across the entire airspace. One assists an individual; the other manages the system. That’s the line between what everyone else is building and what ServiceNow already operates. AI that assists individuals versus AI that runs enterprises.

The distinction matters for a reason that goes beyond technology. When autonomous workflows handle the routine, people gain the autonomy to do the work for which they signed up. The real promise of enterprise AI is putting it to work for people, so they can focus on judgement, creativity and the decisions that actually move the business.

WHY THIS ADVANTAGE COMPOUNDS

There’s a fundamental asymmetry at the heart of the AI landscape that most people miss. Frontier AI models can be built, improved and commoditised within training cycles measured in months. Enterprise operational context – the workflows, integrations, data relationships and institutional knowledge required to apply AI reliably – compounds over years. You can’t accelerate that with compute alone.

ServiceNow already sits at the operational core of the world’s largest enterprises. 80B-plus workflows. 6.5T transactions annually, growing at ~25 per cent year over year. Eighty-five per cent of the Fortune 500. Ninety-eight per cent renewal rate for 20-plus consecutive quarters. Twenty years of embedded workflow intelligence across IT, customer relationship management (CRM), employee experience, security, finance, and beyond. That’s not a commercial metric. That’s deep operational embedding. The platform where work already happens is the platform best positioned to make that work autonomous.

THE COUNTERINTUITIVE TRUTH

Here’s what most people miss when they think about enterprise AI: AI agents need the platform more than humans do. Humans have intuition about boundaries. They know not to look at a colleague’s compensation data. They know to get approvals before making payroll changes. They watch deadlines, read context and exercise judgement. A powerful AI agent can do far more than any individual, but that ampli es risk unless the platform supplies the guardrails.

The more capable the agent becomes, the more it depends on identity resolution, entitlements, workflow constraints, integration governance, audit evidence and change management. Without those foundations, a brilliant agent becomes a brilliant liability.

This leads to a critical economic insight: as intelligence commoditises, enterprises won’t pay durable premiums for features that any model can reproduce. They’ll pay for the AI Control Tower, the governed platform that knows who you are, what you’re allowed to do and how to execute across systems with audit-grade proof.

•Governance OS: permissions, workflow enforcement, audit trails. Any agent must run through it.

•Domain agents on-platform: the model isn’t what’s special. The platform’s operational data and execution context are.

•Data flywheel: every action, exception and resolution becomes compounding operational history that makes automation smarter and governance stronger over time.
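The first bullet, a governance layer that every agent action must pass through, can be sketched in a few lines. This is an illustrative toy under invented names (`Guardrail`, the `hr_agent` identity and its actions are hypothetical), not ServiceNow's implementation: the check happens before the action runs, and every attempt, allowed or not, lands in the audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Guardrail:
    """Toy governance layer: agents act only through scoped permissions,
    and every attempt is recorded for audit."""
    entitlements: dict                      # identity -> set of permitted actions
    audit_log: list = field(default_factory=list)

    def execute(self, identity: str, action: str, task) -> str:
        allowed = action in self.entitlements.get(identity, set())
        self.audit_log.append({
            "who": identity,
            "action": action,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        if not allowed:
            return "BLOCKED: escalate to a human approver"
        return task()  # the agent's work runs only inside the guardrail

gate = Guardrail(entitlements={"hr_agent": {"read_leave_balance"}})
print(gate.execute("hr_agent", "read_leave_balance", lambda: "14 days"))
print(gate.execute("hr_agent", "change_payroll", lambda: "updated"))
print(len(gate.audit_log), "entries in the audit trail")
```

The point of the sketch is the ordering: the permission check and the audit write sit below the agent, so a more capable agent cannot bypass them.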

Here’s what makes this position unique: ServiceNow gets better every time the models get better.

Every breakthrough in reasoning, speed or capability from any model provider flows directly through the platform, grounded in your enterprise context and governance. Other companies are racing to build the smartest AI. ServiceNow is building the system that makes every AI smarter the moment it connects to your business. We don’t compete with the models; we make them work.

OPEN BY DESIGN

The AI Control Tower only works if it’s genuinely neutral. An orchestration layer locked to one hyper-scaler or one model provider is just another silo. ServiceNow is architected for openness at every layer.

•Any data: Workflow Data Fabric connects to 450-plus systems and federates live queries across your entire data estate without replication or lock-in.

•Any model: bring NVIDIA, OpenAI, Google, Anthropic or ServiceNow’s own models, all grounded in your enterprise context and governance.

•Any workflow: autonomous execution across IT, CRM, employee experience, risk and security, and custom applications built on the platform.

•Any cloud: deploy on-premise, private cloud or public cloud across any hyper-scaler, with configurable data residency and sovereignty controls.

Every other player in the ecosystem optimises for their own layer. Hyper-scalers want to consolidate workloads. Data platforms want to centralise data. Model providers want to standardise on their application programming interfaces (APIs). ServiceNow optimises for the enterprise, orchestrating across all of them. That neutrality is what makes the AI Control Tower the trusted centre.
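At bottom, "any model" is an adapter pattern: the orchestration layer talks to one interface, each provider is wrapped behind it, and governed enterprise context is injected before any call. A minimal sketch, with the caveat that `ModelProvider`, `EchoModel` and `ground` are invented illustrations, not real vendor or ServiceNow APIs:

```python
from abc import ABC, abstractmethod

class ModelProvider(ABC):
    """One interface for every model; the platform, not the model, owns context."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoModel(ModelProvider):
    # Stand-in for any vendor model (cloud-hosted or on-premise).
    def complete(self, prompt: str) -> str:
        return f"[model saw {len(prompt)} chars]"

def ground(prompt: str, context: dict) -> str:
    """Prepend governed enterprise context so any provider answers in-context."""
    facts = "; ".join(f"{k}={v}" for k, v in sorted(context.items()))
    return f"Context: {facts}\nQuestion: {prompt}"

def ask(model: ModelProvider, prompt: str, context: dict) -> str:
    # Swapping providers changes one constructor call, never the grounding step.
    return model.complete(ground(prompt, context))

print(ask(EchoModel(), "Where are my shares?", {"entity": "ZA", "plan": "v3"}))
```

Because grounding lives outside the provider class, a better model slots in without touching the context or governance path, which is the neutrality argument in code form.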

WHAT THIS LOOKS LIKE IN PRACTICE

Architecture is only as convincing as the outcomes it produces. Here’s a scenario that brings it to life, not because it’s dramatic, but because it’s exactly the kind of cross-system, regulated, high-stakes problem that every enterprise faces and that no one else can solve end-to-end.

An employee checks their brokerage account on restricted stock unit (RSU) vesting day. The share count looks wrong. Not wildly off, just enough to trigger concern. A simple question: “Where are my shares?”

An LLM can explain RSU vesting logic, walk through withholding calculations and pull policy summaries. Helpful, but it can’t answer the actual question. It doesn’t know the employee’s entity, tax jurisdiction, equity plan version or which systems were involved. It can explain. It cannot resolve.

A desktop agent might notice something suspicious, like a withholding election that appears to have changed, but it can’t verify why because it lacks access to the enterprise systems of record. So it guesses. It blasts multiple teams in parallel. It starts drafting messages containing personal stock transaction details. Noise, risk and embarrassment, fast. It can act. It cannot govern.

ServiceNow resolves the employee’s identity across HR, payroll, equity administration, tax and brokerage. Structured, relational data connected by relationships that change over time and are full of edge cases. The platform enforces permissions below the model. The system cannot show or do what it’s not allowed to. It launches a governed workflow spanning multiple vendors, investigates root cause, validates corrections and co-ordinates across systems with proper approvals and segregation of duties. And, it produces the evidence: who initiated, what changed, who approved, SLA compliance – a complete audit trail. The root cause turned out to be a cross-entity transfer that changed the employee’s tax profile and created a temporary mismatch. The AI could explain how vesting works. The platform could actually fix it, safely, with approvals and a complete trace of what changed where.

The more complex the work, the more the enterprise needs the AI Control Tower.

THE ARCHITECTURE: SENSE, DECIDE, ACT, GOVERN

Four interconnected capabilities power everything described above. Together, they form the AI Control Tower, the platform that enables autonomous workflows at enterprise scale.

Sense. Most LLMs are trained on the internet. ServiceNow gives AI your enterprise context. Workflow Data Fabric connects to 450-plus systems, such as SAP and Salesforce, contextualising data in real-time. Zero-Copy connectors eliminate duplication and brittle pipelines. The knowledge graph maps it all to your business context and configuration management database (CMDB), giving AI agents a complete navigational view of your organisation. That extends to every connected asset: internet of things, operational technology, cloud infrastructure and medical devices. Continuous discovery that knows what exists, how it’s connected and what it means to the business.

Decide. AI models need to reason with business accountability, not probabilistic guesswork. ServiceNow grounds any model provider in your enterprise context, rules and knowledge. The result: decisions that are aligned with your policies, predictable in their behaviour and auditable from end to end. Think of it as AI alignment for the enterprise. Researchers work to align models with human interests. ServiceNow aligns them with your business interests.

WHEN AI NEEDED A PLATFORM THAT COULD SUPPLY IDENTITY, GOVERNANCE, CONTEXT AND EXECUTION INFRASTRUCTURE, THE PLATFORM WAS ALREADY THERE.

Act. This is where most AI stops, and ServiceNow keeps going. Agent Orchestrator, Agent Studio and Agentic Playbooks execute work end-to-end, from autonomous IT resolution to updating CRM records based on customer signals. When an out-of-the-box workflow doesn’t exist, App Engine and Build Agent let teams build new ones with AI, inside the guardrails of enterprise security and governance. The difference between AI that recommends and AI that gets it done.

Govern. AI workflows require guardrails at the moment of action, not after the fact. AI Control Tower ensures every AI system, asset and identity is compliant, secure and aligned with your strategy. This is where ServiceNow’s 20-year history in enterprise IT becomes an AI advantage, governing every model and every agent across the organisation.

These four capabilities are already delivering results across five enterprise domains. In IT, ServiceNow is shifting organisations from reactive firefighting to autonomous resolution. AI handles routine support, incident triage and remediation so experts focus on strategic work. AstraZeneca has reclaimed 30 000-plus hours annually. Adobe resolves outages 25 per cent faster. The best incident is the one that never happens.

In CRM, the model is changing from digital filing cabinet to revenue engine. AI agents sell, serve and support across front, middle and back office on a single platform. Pure Storage has seen seven times faster case resolution. Bell has automated 90 per cent of dispatch-related tasks. Customers get faster answers. Sales reps spend more time selling.

For the employee experience, the friction of the patchwork enterprise is disappearing. Procurement onboards suppliers in a click. HR resolves over 70 per cent of inquiries automatically. Siemens resolves 210 000 tickets autonomously every month.

Eaton doubled workflow capacity without adding headcount. The daily grind of navigating 367 applications is being replaced by AI that gets work done.

For builders, ServiceNow is turning app development from a months-long IT project into a days-long creative act. Low-code and AI-assisted development inside the guardrails of platform security, governance and compliance. Every app inherits enterprise-grade trust from day one. No shadow IT. No ungoverned code.

And in risk and security, the equation is flipping from reactive to predictive. As AI-powered apps and agents multiply, security teams need the same speed the rest of the business is gaining. Honeywell achieved 75 per cent faster compliance attestation. Avalara saves 800 hours per month. Governance isn’t slowing the business down; it’s keeping pace with it.

“ServiceNow is destined to be one of the best platforms, the operating system of enterprise AI agents,” says Jensen Huang, CEO, NVIDIA.

WHERE WE’RE GOING

The AI landscape is moving fast. ServiceNow’s advantage is that we’re not starting from scratch. We’re building from a position of deep operational embedding, compounding data and proven enterprise trust.

However, the next phase requires more than extending what we have. It requires becoming something new. ServiceNow is becoming an AI-native enterprise: AI embedded into every product, every feature and every interaction with the platform. Not AI bolted on. AI built in. That means AI-native engineering practices, AI-native pricing and packaging, multimodal user experiences and a modernised tech stack that delivers innovation to customers faster than ever. That shift is already underway. And, it’s accelerating.

Autonomous workflows are expanding into every corner of the business: finance, legal, procurement, supply chain. Every new domain adds enterprise context, strengthens the execution advantage and creates new value for customers already on the platform. The enterprise data advantage is compounding. Contextual AI grounding through CMDB, context graph and workflow history improves with every execution. Bringing the right data, at the right time, to every agent and workflow and providing business context to AI decisioning, not just raw information.

Governance is becoming the deciding factor. Gartner predicts 40 per cent of agentic AI ventures will fail by 2027 due to governance challenges. The platforms that apply AI safely will win. The governance challenge is also expanding beyond traditional IT. Every AI coworker, every connected device, every machine-to-machine interaction needs an auditable identity chain and scoped permissions. ServiceNow is investing to meet this moment, building an AI Control Tower that governs not just what AI does, but who it acts as and what it can reach.

CREATING THE AI FRONT DOOR FOR THE ENTERPRISE

The 367-plus application problem doesn’t just create architectural complexity. It creates a daily experience problem for every employee. ServiceNow is building a single conversational AI interface that collapses that complexity into one front door. One place to ask questions, take action and resolve issues across IT, HR, finance, and beyond, grounded in enterprise context, permissions and workflows. The patchwork enterprise disappears for the people who work inside it.

Accelerating time to value. Simpler products, faster deployments, and more ways for customers to see results quickly. The more teams across an organisation that use ServiceNow, the stronger the platform becomes for everyone.

Through all of this, one principle holds: autonomous doesn’t mean unattended. It means intelligently delegating routine work so people can do the work that actually matters. Every autonomous workflow ServiceNow delivers is designed to give people back their time, their judgement and their focus. The technology gets smarter. The work gets more human. That’s what it means to put AI to work for people.

THE BOTTOM LINE

Intelligence is rapidly becoming a commodity accessible to anyone through an API. The differentiator is who can apply intelligence at the precise moment it matters, grounded in the right context, embedded in the flow of work and capable of turning insight into action.

ServiceNow’s advantage is AI embedded inside the operational fabric of the enterprise, grounded in real workflows, assets, relationships and history. We don’t layer intelligence on top of work. We apply it inside the execution layer where work actually happens. Competitors can access frontier models. They cannot easily replicate our operational context, our execution position or the institutional knowledge that compounds with every workflow we run. The more capable AI agents become, the more they need a platform that supplies identity resolution, entitlements, workflow constraints, integration governance, audit evidence and change management.

Intelligence will keep getting cheaper. Trusted execution will keep becoming more valuable.

Every enterprise will have access to brilliant AI. The ones that win will be the ones that put it to work, safely and at scale, inside the workflows where their business actually runs.

That’s the AI Control Tower for business reinvention. That’s where ServiceNow lives.

https://www.servicenow.com/

HOW BUSINESSES CAN GET AI RIGHT: SKILLS, STRUCTURE AND STRATEGY

AI is no longer just a technical tool; it’s a business opportunity. Experts say organisations need the right skills, data and structures to make AI work, while also bringing employees along for the journey, writes ITUMELENG MOGAKI

Many companies are excited about AI, but struggle to translate that enthusiasm into real business results.

Hiring the wrong people, focusing on the technology instead of the business or lacking data-ready systems often leads to failed projects.

Experts say success comes from a combination of human understanding, technical skills and organisational strategy.

THE SKILLS THAT MATTER MOST

Daniel Neville, co-founder of Cerebral Circuit, says organisations must combine technical and human-focused skills. “Critical to success is systems thinking at a whole-system level,” he explains. “Companies that understand this can better choose AI projects and implement them effectively.”

Neville also says emotional intelligence in a company is just as important. “Understanding employees helps involve them in building AI solutions. Coding is no longer the main challenge; it’s making sure staff contribute and shape products with their insights.”

On the technical side, Neville highlights prompt architecture and logic mapping. “This means translating complex business rules into a format that AI can follow. It requires understanding both the business and how to structure it technically. You need to view the business as a set of functions, not just as separate departments.”

“AI CAPABILITY SHOULD BE PART OF EVERY TEAM. CROSS-FUNCTIONAL TEAMS WITH BOTH TECHNICAL AND BUSINESS KNOWLEDGE SHOULD OPERATE WITH AUTONOMY.” – DANIEL NEVILLE

Many companies focus too heavily on hiring AI specialists, says Neville. “You don’t need people to build models from scratch. You need people who can integrate AI into existing systems and understand the product and journey. Specialists without context often create tools that look impressive, but don’t deliver.”

Neville adds: “AI capability should be part of every team. Cross-functional teams with both technical and business knowledge should operate with autonomy. This way, AI tools are built where the work happens, not in isolation.”

Skipping employee involvement or ignoring data-readiness can derail projects. Mosola-Mnjama adds that organisational maturity is also key. “Some companies don’t fully understand AI or how it fits their projects. They need the right skills and the experience to know what AI can and cannot do.”

UPSKILLING VERSUS HIRING

Mosola-Mnjama says hiring specialists still has a role. “Upskilling keeps the company culture and knowledge. New hires bring the technical expertise needed to close gaps. A mix works best.” Without the right capabilities, she warns, a company “risks discrediting itself in the market”.

Often, though, upskilling existing staff is more effective than hiring entirely new AI teams. Neville concludes: “Your staff already know the business and your clients. It’s faster to teach them to use AI tools than to bring in new hires who need to learn the industry from scratch. The goal is to make employees AI operators who amplify their work, not replace them.”

Follow: Daniel Neville www.linkedin.com/in/danneville Lebo Mosola-Mnjama www.linkedin.com/in/lebomosola

Daniel Neville
Lebo Mosola-Mnjama

WHERE ROBOTICS MEETS THE REAL ECONOMY

Robotics is reshaping operations across sectors, but only where the environment, economics and systems are genuinely ready for it. By TIANA CLINE

Physical automation is arriving in South African organisations, but not in the way the headlines suggest. It is appearing selectively, where specific operational conditions make it viable, and the economics of getting there are more demanding than most deployment conversations acknowledge.

“People often have an outdated view of what is possible,” says Professor Benjamin Rosman, who leads the Robotics, Autonomous Intelligence and Learning Laboratory at the University of the Witwatersrand. “The cost of experimenting is much lower than it used to be,” he adds.

THE READINESS PROBLEM

Ekow Duker, founder of the AI Shop, works with organisations at the point where automation ambition meets operational reality. What he finds, consistently, is that the technology is rarely the hard part.

“It’s easy to build a demo,” he says. “But as soon as you think of integrating into an actual system, the gremlins pop up. It becomes an order of magnitude more difficult.”

Part of what makes integration difficult is that it surfaces problems the organisation already had. “Workflows don’t necessarily talk to each other,” says Duker. “You bring in automation and it’s another system that needs to connect to existing ones. It’s a problem that predates the technology, but it can end up being blamed on it.”

“YOU NEED PEOPLE WHO UNDERSTAND ROBOTICS, PEOPLE WHO UNDERSTAND ELECTRONICS, AND PEOPLE WHO UNDERSTAND THE ENVIRONMENT THE ROBOT IS GOING INTO.”
– PROFESSOR BENJAMIN ROSMAN

That gap between expectation and readiness runs deeper than software. It runs into skills. “You need people who understand robotics, people who understand electronics, and people who understand the environment the robot is going into,” says Professor Rosman. That threshold has come down as hardware has become more affordable and platforms more general-purpose, but the requirement has not gone away. In warehouses, factories and power plants, he says, the strongest case for mobile robotics is continuous monitoring, the kind of early detection that prevents a small problem from shutting down an entire operation. “A robot that can wander around day and night, detecting whether something might break before it does, that looks a lot like an insurance policy.”

WHEN THE ENVIRONMENT DECIDES

In mining, the case for automation is less a strategic choice than an environmental one. Heat, gas, high-voltage equipment and unsupported rock faces create conditions where the risk of human exposure is difficult to justify.

Ekow Duker
Professor Benjamin Rosman

Dwyka Mining Services, the company that brought Boston Dynamics’ Spot to South Africa, has deployed ground robotics across the continent, into hazardous pockets and confined spaces where cameras and acoustic sensors detect gas or compressed air leaks before they become a crisis. “Management buys into clear objectives and deliverables as measures of success from operations,” says CEO Jamie van Schoor.

The value, he says, comes from solving more than one problem per mission. Van Schoor also sees a clear difference in how adoption plays out across regions. “Regions such as the USA, EU and Middle East are optimising for operational efficiency,” he says. “Africa, with a skills gap and large unemployment to answer, is optimising for operational participation.”

Ground robotics in South Africa is not following the same adoption curve as elsewhere, and the economics cannot be

WHEN AUTOMATION BECOMES INFRASTRUCTURE

BMW’s IT Hub in Rosslyn, Pretoria, is a different kind of deployment. Automation here is not a response to hazard or a cautious pilot. It is infrastructure, built into production lines running continuously across 14 plants worldwide, and it cannot afford to stop. “These systems cannot fail. If the line stops, we stop producing,” says Jurie Kritzinger, delivery manager for internet of things and edge enabling at the Hub.

The Hub, which employs around 2 600 technical staff and contributes over R4 billion annually to the South African economy, builds and operates systems for BMW globally. ATS, an autonomous transport system developed at the Hub, co-ordinates the movement of parts across BMW plant floors, managing routing, timing and traffic between autonomous vehicles, sensors and human workers. Any unexpected behaviour triggers an alert immediately.

AI QX, the Hub’s visual quality inspection platform, captures images at every station on the assembly line and checks them against what should be present. “We take pictures at every station and run them against our AI platform to check if what we expect is actually there,” says Bright Themba, delivery manager at the Hub. “The product is paying for itself.”

Addressing the skills and cost barriers that slow adoption elsewhere, the Hub has also developed a smart, hardware-agnostic robotics platform that allows plant engineers to train robots themselves, without needing to be robotics specialists. “Before you can automate any process, you need to understand the process,” says Themba.

“That’s where the engineers who understand the process come in. From the IT side, we take that and put it into an automated system.” Both ATS and AI QX are updated weekly, tested and rolled out into plants already running at full capacity, with multiregion failover built in as standard. “We practise failovers. We test recovery. We design so that we can operate from anywhere.”
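The station-by-station check Themba describes, a captured frame compared against what should be present, can be sketched with plain arrays. This is purely illustrative (numpy stand-in images and invented region names, not BMW's AI QX pipeline): each expected part gets a window in the frame, and a large pixel difference against the reference flags it for review.

```python
import numpy as np

def missing_parts(expected: np.ndarray, captured: np.ndarray,
                  regions: dict, tolerance: float = 10.0) -> list:
    """Flag regions whose captured pixels differ from the reference frame.

    regions maps a part name to a (row_slice, col_slice) window;
    a large mean absolute difference suggests the part is absent or wrong.
    """
    flagged = []
    for part, (rows, cols) in regions.items():
        diff = np.abs(expected[rows, cols].astype(float)
                      - captured[rows, cols].astype(float)).mean()
        if diff > tolerance:
            flagged.append(part)
    return flagged

reference = np.full((64, 64), 200, dtype=np.uint8)   # frame with all parts present
frame = reference.copy()
frame[10:20, 10:20] = 0                              # simulate a missing bolt
checks = {"bolt": (slice(10, 20), slice(10, 20)),
          "clip": (slice(40, 50), slice(40, 50))}
print(missing_parts(reference, frame, checks))       # only the bolt region is flagged
```

Real inspection systems use learned models rather than pixel differences, but the contract is the same: every station produces an expected-versus-actual verdict that can stop the line before a defect travels downstream.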

Duker. “You have to stabilise

questions about accountability and control become harder to avoid. Who is responsible

even where the technology and the process are both ready, country-specific conditions can slow deployment. “From a pure technology point of view, there is nothing we cannot implement here,” Themba says. “But we are highly unionised, and for some use cases it will take time to productionalise because of the hurdles we have to jump.”

Automation is becoming part of the operational fabric of South African organisations, but selectively and on its own terms. The gap between what robotics can do and what organisations are ready for remains wide. “The question is not whether a robot can do something,” concludes Professor Rosman, “it is whether the environment is ready for it.”

Bright Themba
Jamie van Schoor
Jurie Kritzinger

AUTOMATING THE BEATING HEART OF A BUSINESS

Rather than replacing humans, AI-led workflow systems put employees first and meet their workplace needs, writes TREVOR CRIGHTON

In the face of so much data and the speed at which it needs to be processed and acted upon, any modern enterprise must have a way to co-ordinate the myriad moving parts across departments to ensure smooth operations.

Modern workflow platforms connect and co-ordinate operations across departments, linking them and offering improvements in everything from communication to efficiency.

THE ULTIMATE AIM OF ANY AI-LED WORKFLOW SYSTEM SHOULD BE TO IMPROVE CONSISTENCY, VISIBILITY AND OPERATIONAL EFFICIENCY FROM THE EMPLOYEE’S PERSPECTIVE.

MEETING MODERN ORGANISATIONAL NEEDS

“We look at how our products can reimagine the working experience for employees to make organisations more efficient,” says Muhammed Omar, country manager – Africa at ServiceNow. “We want to look at the employee journey holistically and use our technology to solve for the challenges they face along the way.”

The onboarding of employees is a good example of a process that would benefit from workflow automation. “Onboarding is not a single-silo function – it encompasses HR for contracts, IT to provision equipment, facilities to manage access control, finance needs to be informed so the person can be added to the payroll and there’s a legal element as well when it comes to their employment contracts,” explains Omar. “Orchestrating all those processes in a seamless, automated, parallel way, with full visibility, is the crux of what automation can bring.”
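Omar's point about orchestrating department steps "in a seamless, automated, parallel way" is, structurally, a fan-out with a single view of the results. A toy sketch using asyncio, where the department names and durations are invented for illustration:

```python
import asyncio

async def department_task(name: str, seconds: float) -> tuple:
    """Stand-in for one department's onboarding step (HR, IT, facilities...)."""
    await asyncio.sleep(seconds)       # simulate the department doing its work
    return name, "done"

async def onboard(employee: str) -> dict:
    steps = [
        department_task("HR: contract", 0.03),
        department_task("IT: laptop and accounts", 0.02),
        department_task("Facilities: access card", 0.01),
        department_task("Finance: payroll", 0.02),
    ]
    # Run every department in parallel; gather returns results in step order,
    # giving the orchestrator full visibility of each outcome.
    results = await asyncio.gather(*steps)
    return {employee: dict(results)}

status = asyncio.run(onboard("new.hire"))
print(status)
```

The total wall-clock time is that of the slowest step rather than the sum of all of them, which is the practical payoff of running the silos in parallel instead of sequentially.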

Andrew Bourne, regional head at Zoho South Africa, also believes that the process of integrating AI workflow platforms into an organisation’s operations should be process-led, not software-led. “Technology is an enabler, not the starting point. Organisations should begin by clearly identifying their issues: What inefficiencies are they resolving? Where are bottlenecks occurring? What level of visibility or control is missing? Once there is clarity on process gaps and desired outcomes, our platforms can be configured to align with those redesigned processes.”

FOCUS ON THE EMPLOYEE

Different stakeholders across organisations will have different goals, but the ultimate aim of any AI-led workflow system should be to improve consistency, visibility and operational efficiency from the employee’s perspective. “The key thing is providing the necessary visibility in how information is delivered to employees in an automated way, with the least amount of human intervention to reroute work to function areas to service them more effectively,” says Omar. “Organisations need to consider every employee life-cycle event and consider how to orchestrate every process in simpler ways.”

Bourne says that instead of relying on manual emails, spreadsheets or informal approvals, automated systems ensure that processes follow predefined rules every time. “Consistency improves because tasks are routed based on logic, not individual discretion, visibility improves through real-time dashboards and audit trails, and leadership gains a holistic view of operations across departments, while managers can track bottlenecks and performance metrics in real-time,” he says.

IMPROVING, NOT OUTSOURCING

Bourne explains that the integration of a workflow management system is often the most technically sensitive aspect of digital transformation. “Many organisations continue to operate legacy systems that were not designed for modern interoperability. These systems may have limited application programming interface (API) support, inconsistent data structures or outdated infrastructure.

“The level of complexity depends on the organisation’s starting point. Some cloud systems can be integrated relatively quickly using prebuilt connectors and APIs. In other cases, especially where customised legacy environments exist, a phased and hybrid approach is more appropriate.” He warns that integration is not purely technical; it requires process redesign, data governance planning and effective change management to ensure alignment across departments.

“When approached strategically, integration becomes a structured evolution rather than a disruptive overhaul. By prioritising high-impact workflows, maintaining data integrity and aligning stakeholders early, organisations can modernise their infrastructure while preserving operational continuity.”
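Bourne's "tasks routed based on logic, not individual discretion" is, in its simplest form, an ordered rule table evaluated the same way every time. A minimal sketch, where the rule fields, thresholds and queue names are invented for illustration:

```python
def route(ticket: dict, rules: list) -> str:
    """Return the first queue whose predicate matches; the order of the
    rule table, not an individual's judgement, decides the outcome."""
    for predicate, queue in rules:
        if predicate(ticket):
            return queue
    return "manual-triage"   # explicit fallback keeps the process visible

RULES = [
    (lambda t: t["amount"] > 100_000, "senior-approvals"),
    (lambda t: t["category"] == "access", "it-security"),
    (lambda t: t["category"] == "leave", "hr-services"),
]

print(route({"category": "access", "amount": 0}, RULES))       # it-security
print(route({"category": "travel", "amount": 250_000}, RULES)) # senior-approvals
```

Because the rules are data rather than ad hoc decisions, every routing outcome is reproducible and auditable, which is exactly the consistency and visibility gain described above.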

Muhammed Omar
Andrew Bourne

AI AT THE HEART OF AFRICA’S ECONOMIC RISE

AI offers help and hope for medium-size organisations with limited resources and capacity, writes DR PETER CROW and THE AI SHOP

Artificial intelligence is hot, and businesses are scrambling to take advantage of it. Large, well-resourced enterprises are dedicating teams and resources to explore how to use AI to secure a competitive advantage and streamline operations. Smaller firms and start-ups, unencumbered by legacy systems, are experimenting rapidly, too. In between, all is not as rosy. Medium-size organisations – the engine room of South Africa’s economy – are doing it tough, and many are at risk of falling behind because they lack the resources to explore the possibilities and make great investment decisions.

The reason is not ignorance of AI’s potential, nor a lack of desire, nor even access to appropriate technology. It is a shortage of leadership capacity to make great decisions in a timely manner – in essence, deciding where to allocate limited resources for maximum effect.

DECISION-MAKING IN AN AGE OF COMPLEXITY

In complex environments, decision quality and timeliness matter, a lot. And boardrooms are no exception. High-quality decisions depend on many things. Directors need high-quality information, time to read deeply, consider options, test assumptions, think critically and form views. Those who do are more likely to make good decisions.

If boards are to be effective, they need information presented in a timely manner and suf cient time and space to consider options carefully before deciding how to proceed.

SYSTEMS ARE EVOLVING RAPIDLY, REACHING THE POINT WHERE THEY ARE BECOMING VIABLE TO AUGMENT LEADERSHIP JUDGEMENT. REAL INSIGHT DEPENDS ON DEEP KNOWLEDGE AND CAREFUL ANALYSIS: ABOUT THE WAY THE MARKET IS MOVING, PROBLEMS AND OPPORTUNITIES.

However, this is a real challenge, especially in medium-size firms. Leaders who are busy running operations, serving customers and managing cash flow often don’t have time to prepare solid proposals, much less think carefully about the longer-term future of the company. In many cases, they sense problems, but lack the time to think critically and analyse well. The downstream effects are plain to see: arguments lack rigour and evidence-based proposals are flimsy. Strategic thinking is compressed into evenings, flights, or hurried calls during a school run – if it happens at all.

Yet decisions still need to be made. Some leaders use dashboards to expedite decision-making, but these provide only a superficial view. Real insight depends on deep knowledge and careful analysis: about the way the market is moving, problems and opportunities, how work is actually done today, how people behave in various situations and how decisions play out in context. A map may be useful, but it is not the terrain.

Dr Peter Crow is an accredited director, chair and governance specialist with a doctorate in corporate governance and strategy. He has served on the boards of private and public enterprise-controlled companies, family businesses and social enterprises, and advises boards, chairs and governments across five continents, including Africa. He is a Chartered Member of the Institute of Directors and the creator of the strategic governance framework for effective board practice.

AI AS A CIRCUIT-BREAKER?

The emergence of AI offers great hope for executives and boards under pressure. Systems are evolving rapidly, reaching the point where they are becoming viable to augment leadership judgement (make sense of complex data, stress-test assumptions and expose risks and opportunities quickly). If these opportunities are taken, higher quality and more informed decisions are likely to follow, leaving leaders and boards time to focus on what matters – driving outcomes. And what’s not to like about that?

Dr Peter Crow

WOULD YOU LIKE TO SPEAK TO AN AGENT?

Smart organisations are using AI and automation tools to serve their customer service representatives, not replace them. By ANTHONY SHARPE

Advanced digital technologies have been reshaping the customer service industry for some time now. Who hasn’t sat on the phone with an AI agent, eyeballs rolling like a slot machine in frustration while waiting to talk to a real human being?

The picture is far bigger than that, however. AI and automation systems are rewiring both the back and front end of customer service, impacting response times, staffing models and workflows.

Asokan Moodley, head of generative AI and industry advisory for NTT DATA Middle East and Africa, asserts that automation is fundamentally compressing response times by removing friction at the front of the interaction. “AI-driven intent recognition and autonomous resolution mean customers no longer wait in queues for routine requests, so many issues are resolved instantly, across voice and digital channels, at any time of day. When a human agent is required, AI now arrives first, summarising context, verifying identity, retrieving information and proposing next-best actions before the agent joins the conversation. This has materially reduced average handling times and repeat calls.”

“CONTACT CENTRES ARE SHIFTING AWAY FROM LARGE POOLS OF GENERALIST AGENTS TOWARDS SMALLER, MORE SKILLED TEAMS.”
– ASOKAN MOODLEY

Staffing models are evolving accordingly, says Moodley. “Contact centres are shifting away from large pools of generalist agents towards smaller, more skilled teams supported by AI copilots. Human agents are increasingly focused on complex problem-solving, high-value customer moments and regulatory-sensitive interactions, while AI absorbs the high-volume, repetitive workload. The result is not fewer people but different roles: more specialists, fewer system navigators.”

Moodley adds that escalation workflows have also become more structured and deliberate. “Instead of customers being bounced between departments, AI enforces clear escalation thresholds based on risk, sentiment, transaction …”

ASSISTANTS, NOT REPLACEMENTS

This may seem a little at odds with the average person’s anecdotal experience of “technologically enhanced” customer service, but it illuminates a key point: AI is best used to assist people, not replace them.

The organisations getting this balance right treat AI as a governed colleague, not an unchecked replacement, says Moodley. “The first safeguard is clarity: defining precisely what AI is allowed to do autonomously, under what conditions, and when it must defer to a human. These boundaries are enforced through confidence thresholds, risk scoring and real-time monitoring, ensuring customers are never trapped in automation when empathy or judgement is required.”

Moodley adds that human oversight remains central, but it shifts from manual execution to supervision and exception handling. “AI continuously performs tasks and surfaces decisions, while humans retain authority over outcomes, escalations and customer recovery. Importantly, well-designed systems always include visible fallbacks; customers can reach a person quickly, and agents can override AI recommendations when context demands it.”

CULTURE SHIFT

To make this work, customer experience consultant Julia Ahlfeldt says there must be a step change in how business leaders think about AI. “For too long, people have thought about it as something that replaces people. That’s not what it should be. It’s really about humans and machines working together in harmony. It’s also not about there being a human in the loop, but a human being at the helm.”

Ahlfeldt says that AI has long been sold to businesses as something that will just bring huge efficiency gains and cost savings, but many of them aren’t necessarily ready to adopt and use it optimally. “The technology can do almost anything we want it to, assuming you feed it clean data. The real challenge is getting the right skills, mindset and governance in place.”

She adds that in a world where people are increasingly unsure what’s real and what’s fake, we are going to place an increasing premium on authenticity, human connection and empathy. “Organisations that continue to deliver that will be the ones that stand out.”

Asokan Moodley
Julia Ahlfeldt

THE AI COMPUTE CHALLENGE: INFRASTRUCTURE AVAILABILITY

Artificial intelligence has quickly moved from experimentation to enterprise strategy. Across industries, organisations are moving beyond proofs of concept and beginning to integrate AI into core business processes, writes AKHONA NKALITSHANE, business development manager: enterprise computing solutions at Altron Arrow

From predictive analytics and cybersecurity to intelligent automation and customer engagement, enterprise adoption is accelerating. Yet, a critical question is emerging for technology leaders: “Is the infrastructure required to support AI workloads readily available to organisations?”

The challenge is no longer only about algorithms or data science capability, but about the compute resources needed to run modern AI workloads at scale.

THE SHIFT FROM AI EXPERIMENTATION TO AI INFRASTRUCTURE

Early AI experimentation could often run on shared cloud resources or small graphics processing unit (GPU) environments. However, as organisations begin to deploy production-grade models, infrastructure requirements change drastically.

Training and inference workloads require large volumes of accelerated compute, specialised GPU architectures and memory bandwidth capable of processing enormous datasets. As this demand drives a global surge in GPU adoption across enterprise IT environments, infrastructure availability has become a growing constraint.

The global surge in AI development has created significant pressure on GPU supply and high-performance memory, affecting how quickly organisations can scale their AI initiatives. This issue is particularly visible in emerging markets.

AFRICA’S AI OPPORTUNITY AND INFRASTRUCTURE GAP

Africa is increasingly participating in the global AI ecosystem. Universities, start-ups, fintech companies and enterprises are actively developing AI-driven solutions for sectors such as financial services, healthcare, agriculture and cybersecurity.

However, one of the most significant barriers to broader adoption remains access to specialised AI infrastructure.

Local organisations often face challenges such as latency, cost constraints, regulatory considerations and limited access to high-performance compute resources.

There is a growing need for locally accessible AI infrastructure ecosystems that enable organisations, research institutions and solution providers to build and deploy AI models closer to where the data resides.

This is where the role of infrastructure partners becomes increasingly important.

ENABLING AI DEVELOPMENT THROUGH INFRASTRUCTURE ECOSYSTEMS

As AI adoption grows, there is an emergence of specialised infrastructure ecosystems designed to support the entire lifecycle of AI development. Distributors and technology providers are increasingly acting as value-chain links between global hardware innovation and local enterprise deployment. Companies like Altron Arrow play a pivotal role in this ecosystem by enabling access to advanced compute platforms that might otherwise be difficult for organisations and channel partners to procure and integrate from global manufacturers.

By bridging global technology supply chains with regional demand, infrastructure distributors help address three key challenges facing AI adoption in markets such as South Africa:

1. Access to specialised GPU compute and AI infrastructure.

2. Availability of high-performance AI platforms.

3. Support for local AI development ecosystems.

This model is becoming increasingly important as the demand for accelerated compute continues to grow.

THE GROWING IMPORTANCE OF GPU-ACCELERATED PLATFORMS

Modern AI workloads rely heavily on GPU acceleration. Unlike traditional central processing unit-centric computing, AI training and inference require massively parallel processing capabilities capable of handling matrix operations across large datasets.

Technology providers such as ASUS have expanded their portfolio of AI infrastructure platforms to support these requirements, ranging from GPU servers and edge AI systems to specialised compute platforms designed for machine learning workloads.

As AI models continue to increase in complexity, infrastructure that supports high-bandwidth memory, GPU parallelisation and scalable compute clusters becomes a foundational requirement.

THE MEMORY CHALLENGE BEHIND THE AI BOOM

Another constraint shaping the future of AI infrastructure is memory.

Training large models requires enormous volumes of high-bandwidth memory. Yet the global semiconductor ecosystem is currently experiencing a memory allocation imbalance, where much of the supply is prioritised for hyperscale cloud providers and other regions. For emerging markets, this can slow AI deployment locally.

Organisations are rethinking how they architect AI infrastructure, balancing cloud resources with on-premises and edge AI environments that provide greater control over compute availability and performance.

For many enterprises, the journey into AI increasingly involves building the right infrastructure foundations that allow AI initiatives to scale from experimentation to operational capability.

To succeed, organisations must treat infrastructure as a strategic enabler of innovation rather than a background IT function.

Akhona Nkalitshane

FROM DATA TO DECISIONS

Organisations have accumulated vast amounts of data, but translating that data into measurable commercial outcomes remains uneven – with an impact on revenue, cost and risk, writes TREVOR CRIGHTON

In the ‘big data’ world, where the volume of information organisations crave is now readily available, a new challenge has arisen – what to do with it all. Gigabytes of data are useless unless they can be structured and analysed to deliver the insights businesses require to meet their internal needs and those of their customers.

IN THE WEEDS

Bonisa Applied Insights CEO, Jacobus Eksteen, says organisations find themselves in trouble when the sheer volume of data creates a trust deficit rather than clarity. “Employees spend a lot of time doing manual work, working with spreadsheets, and they ‘saw harder’ instead of ‘sharpening their saws’. Mistakes from this manual work often lead to firefighting, redoing work and frustration,” he says. “The indicators for this are complexity overload – when overly complex tools and cryptic algorithms create a distance between data and the people who need to use it, when you have a vast hum of servers but struggle to turn that information into meaningful action, when a lack of historical performance data blocks you from evaluating new opportunities and when you feel forced into a ‘take it or leave it’ mindset with external suppliers that doesn’t account for your specific operational needs.”

Sean van Schalkwyk, software director at Daisy Business Solutions, says a common challenge is the sheer volume of unstructured data that sits scattered across emails, shared drives, paper archives and disconnected systems. “The result is delayed decisions, rising operational costs, compliance exposure and lost revenue opportunities,” he explains.

PREDICTIVE MODELS FOR FASTER DECISIONS

To solve this challenge, organisations are turning to predictive modelling systems to transform complex data challenges into clear, actionable strategies. Predictive modelling is a statistical technique and branch of artificial intelligence that uses historical data, machine learning and data mining to forecast future events.

“PREDICTIVE MODELLING LEADS TO FASTER DECISIONS VIA AUTOMATION AND BETTER DECISIONS VIA BETTER MODELS, SPECIFICALLY IN RISK ASSESSMENT, OPERATIONAL OPTIMISATION AND ALTERNATIVE DATA INTEGRATION.” – JACOBUS EKSTEEN

Eksteen explains that predictive modelling leads to faster decisions via automation and better decisions via better models, specifically in risk assessment, operational optimisation and alternative data integration. “Our Bonisa AI uses a … can then be used to build the business case to solve the problem.”

Van Schalkwyk cites a real-world example: a leading South African food retail and quick service restaurant chain reduced invoice processing from 5–7 days to under 24 hours, cut manual capture by 80 per cent, and achieved 99.7 per cent accuracy. “That directly accelerated cash flow (revenue), slashed labour costs and reduced compliance risk through complete audit trails, which built organisational confidence to expand the same platform across divisions, gradually converting millions of data points into daily operational decisions that move the needle on revenue, cost and risk.”

Follow: Jacobus Eksteen www.linkedin.com/in/jacobuseksteen Sean van Schalkwyk www.linkedin.com/in/sean-van-schalkwyk

Jacobus Eksteen
Sean van Schalkwyk

TURNING AI INTO ROI: THE RECIPE FOR CX SUCCESS

DION MILLSON, head of AI strategy at Connect, writes that artificial intelligence should form part of a comprehensive business strategy

Adoption of artificial intelligence (AI) in contact centres is accelerating as executives look to keep pace with competitors and rising expectations. Yet reports that up to 95 per cent of AI projects fail reveal a deeper issue: many organisations deploy AI before defining the business problems they’re trying to solve. AI has become the default response to customer experience (CX) challenges, rather than the outcome of a thoughtful strategy.

Rising pilot failure rates, concerns about an AI bubble and a widening gap between massive investment in AI and actual returns are signals that AI, while transformative, is not a universal solution.

For a genuine return on investment (ROI) and to deliver sustainable impact, AI must be implemented as part of a comprehensive business strategy, rather than as a stand-alone technology pilot.

A strategic approach begins with clarity: What are the business priorities? Which customer outcomes matter most? Then a CX roadmap can be built that aligns these priorities with the right technologies and processes. AI should be selected only once the business problem is defined, ensuring the tools complement existing systems and deliver measurable value.

EXCEPTIONAL CX

The old paradigm – improving CX means increasing cost – is changing. The right use of technology will dramatically improve customer experiences, while delivering real cost savings. AI excels at processing vast amounts of data, identifying patterns, automating routine tasks and providing real-time insights. This is great for tools like AI Agent Assist, AI Quality Management and Conversational AI. However, AI cannot fix broken processes, compensate for poor organisational culture or replace the need for human judgement in complex situations. It won’t magically align disconnected systems or overcome resistance to change.

Designing your CX strategy is like crafting a recipe for the perfect cake – the right combination of ingredients mixed in the right sequence for the perfect outcome.

However, each customer journey requires many different ingredients from across the business to come together in the right way at the right time. Implementing technology requires careful consideration of each ingredient, such as:

• Foundational ingredients: the base technologies used with CX, such as contact centre systems, CRM, CSM, WEM and data tools.

• People ingredients: a customer journey touches many different stakeholders across an organisation, with different opinions, from business to IT to risk, often with different representatives for each different system and for each different channel within a system.

• Optimisation ingredients: the different AI and automation technologies that sit on top of the foundation ingredients must be brought together seamlessly to improve CX.

• The chef: the right service provider with the skills, experience and access to all the necessary ingredients to bake your CX cake. A considered customer journey seamlessly orchestrates the different technologies with the company’s human elements.

Many organisations assume they can plug a retrieval-augmented generation (RAG) model into their data and instantly produce a high-performing bot. In reality, this often leads to shallow, FAQ-style exchanges that fail to support real customer journeys.

Typically, customer journeys start as customer-led, with the customer explaining a need and the agent looking to understand what’s required.

From there, the journey needs to become agent-led – taking the customer through a process that fulfils their need. This requires understanding, which begins with context.

FOR A GENUINE RETURN ON INVESTMENT AND TO DELIVER SUSTAINABLE IMPACT, AI MUST BE IMPLEMENTED AS PART OF A COMPREHENSIVE BUSINESS STRATEGY.

Within each step, it is vital to understand which ingredient is most important for solving the customer query at that specific moment.

Channels are no longer just access points; they are ingredients in the customer journey. Voice AI offers immediacy, text channels like WhatsApp allow asynchronous exchange and richer data capture, and human agents provide empathy and judgement.

Omnichannel ensures customers can engage on their preferred channel, but optichannel goes further: it selects the best channel for each step of the journey. For example, a voice agent may remain in conversation while a WhatsApp bot collects location data in parallel. Designing these experiences requires creativity and, critically, the seamless transfer of context across channels.

Follow: Connect www.linkedin.com/company/connectmanagedservices/

Dion Millson

MODELLING YOUR AI INVESTMENT

How to avoid the hidden tax on your contact centre, by SALLY HODGIN, principal AI consultant at Connect, and TIM NORTH, group vice president of strategy at Connect

We never expected to find ourselves discussing tax strategy in the context of artificial intelligence (AI). However, when Nicolas Bustamante coined the term “LLM Context Tax”, it really resonated. As contact centre AI deployments scale, the hidden cost of pushing every interaction through frontier-scale large language models (LLMs) is becoming increasingly visible, in compute spend, latency, sustainability targets, reliability, data security and ultimately cost to serve.

LLMs are undeniably powerful. The question is not whether they work. It’s whether they are being applied proportionately. In high-volume, structured environments like the contact centre, AI model architecture is no longer a technical nuance; it is a commercial strategy.

UNDERSTANDING LANGUAGE MODELS IN BUSINESS TERMS

A language model is an AI model trained to understand and generate human language by learning patterns across large volumes of text data.

Models differ primarily by scale and specialisation.

• Large language models (LLMs) typically exceed 20 billion parameters.

• Small language models (SLMs) operate in the low billions.

• Micro language models (MLMs) are under 100 million parameters and are trained for highly specific tasks, such as intent detection or entity extraction.

A parameter is simply a learned weight inside the model. More parameters generally mean greater expressive capacity and deeper reasoning ability, but also greater computational demand.

Cost is influenced by two factors: model size and usage. Every interaction is broken into “tokens”, fragments of words and numbers. The more context you provide, the more tokens the model must process. Larger models make each of those tokens more computationally expensive. At scale, that combination translates directly into infrastructure cost and energy consumption.
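The cost mechanics can be made concrete with a toy calculation. The per-1,000-token rates below are invented for illustration and do not reflect any vendor’s actual pricing; the point is how model tier and volume multiply together:

```python
# Hypothetical per-1,000-token rates for three model tiers.
# These numbers are illustrative only, not real vendor pricing.
RATES = {"mlm": 0.0001, "slm": 0.001, "llm": 0.01}

def interaction_cost(tokens: int, rate_per_1k: float) -> float:
    """Cost of one interaction: tokens processed times the per-token rate."""
    return tokens / 1000 * rate_per_1k

# A contact centre handling 1 million interactions at ~800 tokens each
for tier, rate in RATES.items():
    monthly = 1_000_000 * interaction_cost(800, rate)
    print(f"{tier}: ${monthly:,.2f}/month")
```

Under these assumed rates, the same workload costs 100 times more on the large-model tier than on the micromodel tier, which is the commercial intuition behind the “context tax”.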

THE OPERATIONAL IMPACT ON CONTACT CENTRE PERFORMANCE

Contact centre leaders are accountable for a consistent set of outcomes:

• Cost to serve.

• Average handling time (AHT).

• First contact resolution (FCR).

• Containment rate.

• Customer advocacy (customer satisfaction score/net promoter score).

• Compliance and operational risk.

• Environmental, social and governance (ESG) and sustainability performance.

AI model architecture has a direct and measurable influence on each of these outcomes. The size of the model selected, the way it is deployed, the amount of context it processes and the degree to which it is specialised for the task all shape operational performance in different ways.

The discussion should not centre on whether AI is capable, which has largely been proven. The more relevant question is whether the chosen architecture improves core contact centre metrics in proportion to the cost and complexity it introduces. When automation scales, marginal inefficiencies in precision, latency, energy consumption or governance discipline compound quickly.

The impact typically emerges across a small number of primary drivers, each of which maps directly to measurable business performance.

1. Precision: accuracy drives containment and FCR

Most contact centre interactions are structured: identity verification, balance enquiries, appointment changes, policy updates. These are deterministic workflows that require speed and precision rather than open-ended reasoning.

Consider identification and verification (ID&V): large conversational models can perform ID&V, but they are not optimised for atomic precision. If one digit is misinterpreted in a 10-digit account number, the system reprompts. Repetition increases AHT. Escalation reduces containment and FCR drops.

Task-specific micromodels are built for this workload. In benchmark testing, specialist intent and entity models routinely achieve F1 scores in the mid-90s.

F1 is a combined measure of precision and recall. Scores above 90 per cent indicate strong production reliability. Connect’s intent models, powered by Elerian AI, operate at approximately 96 per cent F1 with inference times measured in milliseconds.

Translated commercially: fewer re-asks, fewer escalations, higher containment and more predictable FCR. At scale, marginal improvements in precision compound significantly.
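F1 is simply the harmonic mean of precision and recall, so it can be computed in one line. The 97/95 figures below are hypothetical, chosen only to show how a mid-90s score arises; they are not Connect’s or Elerian AI’s benchmark numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# A hypothetical intent model with 97% precision and 95% recall
print(round(f1_score(0.97, 0.95), 3))  # prints 0.96
```

Because the harmonic mean punishes imbalance, a model cannot reach a mid-90s F1 by being strong on only one of the two measures.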

IN HIGH-VOLUME, STRUCTURED ENVIRONMENTS LIKE THE CONTACT CENTRE, AI MODEL ARCHITECTURE IS NO LONGER A TECHNICAL NUANCE; IT IS A COMMERCIAL STRATEGY.

Sally Hodgin

AT SCALE, ORCHESTRATION IS NOT A TECHNICAL PREFERENCE. IT IS THE MECHANISM THROUGH WHICH AI MATURITY TRANSLATES INTO SUSTAINABLE COMMERCIAL RETURN.

2. Latency: speed influences AHT and customer experience

Latency is often underestimated; even small delays in voice automation disrupt natural turn-taking. Additional seconds during authentication or workflow confirmation increase AHT.

Higher AHT directly increases cost to serve. Smaller models process requests faster because they require less compute per token. In structured journeys, that speed advantage translates directly into shorter interactions and more efficient automation.

In commercial terms, shaving seconds off high-volume journeys is equivalent to adding headcount capacity, without increasing headcount.

3. Energy intensity: compute drives cost and ESG performance

Energy benchmarks illustrate the scale of difference:

• MLMs: ~0.1–0.4 watt-hours per million tokens.

• SLMs: ~1–4 watt-hours.

• LLMs: often 10–100+ watt-hours.

In commercial terms, using a frontier LLM for a high-volume, low-complexity journey can consume 10 to 50 times more energy than a task-specific alternative.
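Taking midpoints of the watt-hour ranges quoted above, a rough monthly energy estimate for a hypothetical contact centre might look like this. The interaction volume, token counts and the 55 Wh midpoint for LLMs are our own illustrative assumptions:

```python
# Midpoints of the watt-hour-per-million-token ranges quoted above
# (the LLM midpoint of 55 Wh is an assumption within "10-100+")
ENERGY_WH_PER_M_TOKENS = {"mlm": 0.25, "slm": 2.5, "llm": 55.0}

def monthly_energy_kwh(tier: str, interactions: int, tokens_each: int) -> float:
    """Total monthly inference energy for one model tier, in kWh."""
    total_tokens = interactions * tokens_each
    return total_tokens / 1_000_000 * ENERGY_WH_PER_M_TOKENS[tier] / 1000

# A hypothetical centre: 5 million interactions/month at ~800 tokens each
for tier in ENERGY_WH_PER_M_TOKENS:
    print(tier, monthly_energy_kwh(tier, 5_000_000, 800), "kWh")
```

Under these assumptions the LLM tier consumes 220 kWh a month against 10 kWh for a small model and 1 kWh for a micromodel, which is the compounding effect the “hidden tax” describes.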

In contact centre environments, processing high volumes of interactions per month, that difference impacts:

• Cloud infrastructure spend.

• Cost to serve.

• Margin performance.

• ESG and sustainability commitments.

• Scalability and cost predictability.

At scale, one element of the hidden tax on LLM usage is the cumulative compute overhead of applying frontier-scale models to workloads that do not require frontier-scale intelligence.

4. Governance and deployment: risk influences compliance and brand

Frontier LLMs are typically accessed via shared cloud application programming interfaces (APIs). While often secure, this model introduces external dependency and limits infrastructure control.

Smaller SLMs and micromodels can often be deployed within a client’s own environment: on-premises, private cloud or a tightly controlled virtual private cloud. As they require significantly less computational resource, they can operate inside regulated infrastructure.

For financial services, utilities and public sector organisations, this supports:

• Data sovereignty.

• Reduced external data exposure.

• Stronger auditability.

• Alignment with regulatory frameworks.

Governance directly influences compliance risk and brand protection, so it should not be an afterthought when designing your AI solutions.

5. Orchestration: capital efficiency drives return on investment

The most advanced organisations are carefully orchestrating their use of AI models. Rather than relying on a single model to perform every task, mature AI strategies distribute workloads across model tiers according to task complexity:

• MLMs: handle atomic, precision-based tasks, such as intent classification, entity capture, validation and ID&V. These models are optimised for speed, determinism and cost efficiency.

• SLMs: manage workflow orchestration, routing logic, summarisation and structured journey management. They co-ordinate the interaction, determine next best action and ensure procedural consistency.

• LLMs: invoked selectively when deeper reasoning, contextual interpretation or emotional nuance is required.

From the customer’s perspective, the experience remains seamless. From the organisation’s perspective, computational intensity aligns with task complexity. Premium AI cost is incurred only where premium reasoning materially improves outcomes. This is where capital efficiency emerges. Rather than paying frontier-scale compute for every interaction, the organisation pays for depth of reasoning only when depth of reasoning is required.
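The tiered orchestration described above can be sketched as a simple router that sends each task to the cheapest capable model tier. The task names and mapping are illustrative assumptions, not Connect’s or Elerian AI’s actual implementation:

```python
# Illustrative tiered router; task names and tier assignments are
# assumptions for this sketch, not a vendor's implementation.
ATOMIC = {"intent", "entity_capture", "validation", "id_verification"}
STRUCTURED = {"routing", "summarisation", "next_best_action"}

def route(task: str) -> str:
    """Send each task to the cheapest model tier that can handle it."""
    if task in ATOMIC:
        return "mlm"  # precision tasks: micro language model
    if task in STRUCTURED:
        return "slm"  # workflow orchestration: small language model
    return "llm"      # open-ended reasoning: escalate to a large model

print(route("id_verification"))    # mlm
print(route("summarisation"))      # slm
print(route("emotive_complaint"))  # llm
```

In production, routing decisions would also weigh confidence scores and risk thresholds rather than a static lookup, but the economics are the same: the expensive tier is reached only by exception.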

Connect’s AI models, powered by Elerian AI, are designed with this principle in mind. Delivered via secure APIs into existing contact centre as a service (CCaaS) and customer relationship management (CRM) platforms, they enable organisations to optimise intent accuracy, workflow performance and containment without replacing core customer experience investments. The objective is disciplined optimisation of the AI layer within the existing ecosystem, not wholesale platform disruption.

At scale, orchestration is not a technical preference. It is the mechanism through which AI maturity translates into sustainable commercial return.

THE BOTTOM LINE

In the contact centre, intelligence should be measured by business outcomes, not parameter count.

• Precision improves FCR and containment.

• Latency influences AHT and cost to serve.

• Energy intensity affects margin and ESG performance.

• Governance protects compliance and brand.

• Orchestration ensures capital efficiency.

The hidden tax of LLMs in your contact centre captures the commercial risk of ignoring these dynamics. Strategic “tax avoidance” in this context is not about limiting ambition; it is about applying the appropriate level of intelligence to each workload so that customer experience, cost discipline and operational resilience improve together.

A PRAGMATIC FRAMEWORK FOR SCALING ENTERPRISE AI IN AFRICA

Artificial intelligence has entered a decisive new phase, with organisations across the globe shifting from asking whether to adopt it to grappling with how to operationalise it. By CLIFFORD DE WIT, managing director and chief innovation officer of Accelera Digital Group

Experiments have given way to production systems, and the conversation has moved from general-purpose large language models (LLMs) to specialised agents embedded in workflows. Yet for African enterprises, this transition is shaped by a unique set of constraints, including data residency requirements, energy limitations, infrastructure gaps and a talent market that cannot compete with global AI salaries. To navigate these contradictions, C-suite leaders need a strategic framework grounded in pragmatism rather than hype. AI must be treated as a business transformation, not an IT project, with success depending on making deliberate trade-offs that balance ambition with operational reality.

START WITH THE PROBLEM, NOT THE TECHNOLOGY

The first principle is deceptively simple, namely clarity of purpose. AI is not a silver bullet, but a piece of technology, and like any technology, it succeeds only when deployed to address a clearly defined business problem. If leaders are not crystal clear about the issue they are solving, they will be disappointed with the results.

This clarity is inseparable from return on investment (ROI) because when the problem is well-defined, the ROI becomes measurable. Organisations can track whether the solution reduces fraud, accelerates claims processing, improves customer engagement or cuts operational costs. Without this, AI becomes an expensive experiment rather than a value-generating capability.

In a region where budgets are constrained and energy costs are rising, this focus on ROI is not optional, but foundational.

BUILD THE DATA FOUNDATIONS BEFORE BUILDING THE AI

The second pillar is data-readiness. Many organisations aspire to build sophisticated agents, only to discover they lack the data needed for a human to perform the task, let alone an AI system. You cannot automate, predict or reason over data you do not have.

ANY MATURE AI FRAMEWORK MUST INCLUDE A FINOPS LAYER CAPABLE OF NOT ONLY TRACKING INFERENCE COSTS, BUT ALSO ENERGY CONSUMPTION AS ENVIRONMENTAL, SOCIAL AND GOVERNANCE REPORTING RECOGNISES COMPUTING AS A SIGNIFICANT SOURCE OF INDIRECT CARBON EMISSIONS.

Two questions are of crucial importance: “Do you have the data?” and “Is the data in good order?”

Across the continent, many enterprises are now experiencing the consequences of years of underinvestment in data estates. Without strong data foundations, AI simply cannot scale. African enterprises also face an additional layer of complexity around where their data is stored, how it is processed and which jurisdictions it touches. Regulation is a moving target as policymakers race to catch up with innovators, meaning that organisations must embed residency, governance and compliance considerations into their AI frameworks from the outset.

AI MUST BE TREATED AS A BUSINESS TRANSFORMATION, NOT AN IT PROJECT, WITH SUCCESS DEPENDING ON MAKING DELIBERATE TRADE-OFFS THAT BALANCE AMBITION WITH OPERATIONAL REALITY.

As AI systems become capable of extracting granular insights from multiple sources, permissions must become equally granular. It is no longer enough to protect a spreadsheet; organisations must protect each piece of data within it. This shift toward atomic permissions – understanding where data comes from, where it goes and who can access or change it – is becoming critical for privacy, security and regulatory compliance.
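A minimal sketch of what field-level (“atomic”) permissions might look like in code. The roles and fields are hypothetical, and a production system would use a policy engine with data lineage rather than a hard-coded mapping:

```python
# Hypothetical role-to-field mapping; a real system would use a
# policy engine with lineage tracking, not a static dictionary.
PERMISSIONS = {
    "analyst": {"region", "revenue"},
    "hr_partner": {"region", "revenue", "salary"},
}

def redact(record: dict, role: str) -> dict:
    """Return only the fields this role is permitted to see."""
    allowed = PERMISSIONS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

row = {"region": "Gauteng", "revenue": 1_200_000, "salary": 850_000}
print(redact(row, "analyst"))  # salary is stripped for analysts
```

The point of the sketch is the granularity: access is decided per field inside the record, not per spreadsheet, which is exactly the shift the paragraph above describes.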

EMBRACE ARCHITECTURAL PRAGMATISM

For years, the global AI narrative has been dominated by scale – bigger models, more parameters and more graphics processing units (GPUs). However, this approach is increasingly unsustainable – large models are expensive to run, energy-intensive and often unnecessary for enterprise use cases. This is where architectural pragmatism becomes essential, as the right model for the right job is rarely the biggest model available. Using a trillion-parameter model to solve a workflow automation problem is equivalent to bringing a nuclear bomb to a gunfight.

Small language models (SLMs) and domain-specific models offer a more sustainable path. They require significantly less GPU and central processing unit power, are cheaper to run and can be fine-tuned to local dialects, regulatory environments and industry-specific workflows. For fraud detection, claims processing or customer service, these models are not just sufficient; they are optimal.

At the same time, portability is becoming a strategic advantage. When organisations are not locked into a single hyperscaler’s model, they gain the flexibility to run workloads in local data centres to meet residency laws, in the cloud for elasticity and across multiple providers for resilience.

Recent outages in major cloud regions have highlighted the risks of deep lock-in. Portability gives organisations options, and on a continent where infrastructure can be unpredictable, optionality is a competitive advantage.

EMBED FINOPS INTO THE AI OPERATING MODEL

As AI shifts from capital expenditure to operating expenditure, financial operations (FinOps) becomes essential. When compute is consumed on demand, costs can escalate rapidly without real-time monitoring. Any mature AI framework must include a FinOps layer capable of not only tracking inference costs, but also energy consumption, as environmental, social and governance reporting recognises computing as a significant source of indirect carbon emissions.

This is not merely a budgeting exercise, but a strategic capability that determines whether AI remains viable at scale. Without FinOps, organisations risk building systems they cannot afford to operate.
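A FinOps layer can start as simply as metering every inference call against a budget and an energy factor. The sketch below is a minimal illustration of the idea; the per-token price and energy figure are invented placeholders, and real values would come from a provider’s price sheet and an organisation’s ESG methodology.

```python
# Minimal sketch of a FinOps metering layer for AI workloads.
# PRICE_PER_1K_TOKENS and KWH_PER_1K_TOKENS are invented placeholders.
class InferenceMeter:
    PRICE_PER_1K_TOKENS = 0.002   # placeholder USD rate per 1 000 tokens
    KWH_PER_1K_TOKENS = 0.0003    # placeholder energy factor per 1 000 tokens

    def __init__(self, monthly_budget_usd):
        self.monthly_budget_usd = monthly_budget_usd
        self.tokens = 0

    def record(self, tokens_used):
        """Meter one inference call; warn when spend nears the budget."""
        self.tokens += tokens_used
        if self.cost_usd() > 0.8 * self.monthly_budget_usd:
            print("WARNING: 80% of monthly AI budget consumed")

    def cost_usd(self):
        return self.tokens / 1000 * self.PRICE_PER_1K_TOKENS

    def energy_kwh(self):
        return self.tokens / 1000 * self.KWH_PER_1K_TOKENS

meter = InferenceMeter(monthly_budget_usd=100.0)
meter.record(tokens_used=500_000)
print(f"cost=${meter.cost_usd():.2f}, energy={meter.energy_kwh():.2f} kWh")
```

Even a crude meter like this makes the shift from capital to operating expenditure visible per workload, which is the precondition for the cost discipline the article describes.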

DEVELOP LOCAL TALENT

While Africa cannot compete with Silicon Valley salaries for elite AI researchers, it does not need to. The real opportunity lies in cultivating AI practitioners – domain experts who understand how to apply AI to specific organisational problems. These are lawyers, bankers, engineers and operations leaders who can translate business needs into AI-enabled workflows. This approach allows enterprises to leverage global research while building local capability where it matters most: contextual understanding. This is the pragmatic path to enterprise adoption.

AI adoption is as much a human transformation as a technical one. Organisations must consider change management, workforce readiness and the impact on productivity. Crucially, they must resist the temptation to replace people with AI prematurely.

There are numerous examples of organisations that reduced their workforce, expecting AI to fill the gap, only to face operational failures when the benefits did not materialise. The most effective model today is humans plus AI, not humans replaced by AI. Augmentation, not substitution, is the sustainable path.

MOVE FROM THEORY TO VALUE

Finally, organisations must avoid overengineering. AI at scale is iterative, and enterprises that succeed are those that solve specific problems, deliver clear ROI, avoid regulatory pitfalls, bank the value, learn and pivot.

AI is transformative, and transformative technologies evolve quickly. Leaders must be prepared to adapt, refine and continuously improve their AI systems.

Scaling AI in Africa requires a shift in mindset. It is not about chasing the biggest models or the most impressive demos. It is about disciplined execution, architectural pragmatism and strategic investment in data, governance, talent and financial operations.

AI is no longer magic, but rather a business multiplier. Organisations that treat it as such, grounded in local realities and global economics, will be the ones that turn experimentation into enterprise-wide value.

Follow: Clifford de Wit www.linkedin.com/in/clifford-de-wit-5a3a932

Clifford de Wit

AI AGENTS: ADDING A SMARTER LAYER TO WORK

AI agents are reshaping how work is organised and should be welcomed as an additional layer that streamlines and takes ownership of complete workflows, writes STUBBER

In the late 1800s, factories ran on steam engines. A single giant engine in the basement powered the entire building through line shafts and belts running across the ceiling. Every machine connected to this central power source. When electric motors were first introduced, most factories replaced the steam engine with an electric one, but kept the same layout. The result? Almost no productivity improvement.

It took decades before factory owners realised the real advantage of electricity: each machine could have its own motor.

Once factories reorganised around distributed power, everything changed. Production lines became flexible. Machines could be placed where they were most efficient. Workflows became modular and output skyrocketed, although economic historians estimate that it took 30 years before full adoption and the productivity benefits were realised.

Every major technological wave follows the same pattern. At first, companies bolt the new technology onto old systems. However, the real breakthroughs happen when organisations redesign themselves around the new capability.

Electricity reshaped factories. Containers reshaped global trade. Spreadsheets reshaped decision-making. And now, AI agents are beginning to reshape how work itself is organised.

A FRAMEWORK FOR EXECUTIVES

With all the noise around AI – and the growing sense of “AI fatigue” – many executives are asking the same question: Where do we actually start?

A helpful way to think about this technological wave is to break it into a few clear shifts.

1. Legacy AI versus large language models (LLMs)

For many years, AI meant highly specialised models built for very specific tasks: one model for optical character recognition, another for sentiment analysis, another for fraud detection.

Building these systems required large volumes of structured data, specialised expertise and significant computational resources.

Legacy AI was primarily about understanding data: identifying patterns, classifying information and generating predictions.

This form of AI remains extremely valuable today and often plays an important role alongside modern AI systems, providing structured insights and analytics that can feed into more advanced automation.

2. LLMs

LLMs represent a fundamentally different approach. Instead of building a different model for every task, LLMs provide a form of general-purpose intelligence that can perform a wide variety of cognitive work.

Trained on vast amounts of language and knowledge, these models reason about problems far more flexibly. This creates a powerful new interface for business systems: language.

Instead of writing complex software or training specialised models, businesses can increasingly describe the task in natural language and allow the system to determine how to complete it.

LLMs also tolerate messy environments remarkably well. They do not require perfectly structured datasets or rigid schemas. Much like a human employee, they can interpret incomplete information, infer missing context and adapt to the realities of how work actually happens inside organisations.

The barrier to entry has dropped dramatically.

3. Human-augmenting AI versus independent AI agents

Most early applications of LLMs have focused on assisting humans. Tools like ChatGPT, Claude and Copilot help employees write, analyse, research and summarise information more quickly. These systems augment human capability and can deliver meaningful productivity gains.

However, this is only the first stage of the transformation.

Using AI purely as an assistant is somewhat like replacing a steam engine in a factory with a single electric motor – the underlying organisational structure remains largely unchanged.

The deeper shift comes with independent AI agents. AI agents do not simply assist humans; they carry out work autonomously.

They can interact directly with customers, staff and internal systems to complete tasks from start to finish.

Examples include agents that operate across revenue, service and operational workflows:

• AI sales executive: conducts needs analysis, generates quotations, verifies information, completes onboarding and closes the sale.

• AI support agent: investigates a customer issue across internal systems, diagnoses the root cause and resolves the problem.

• AI claims agent: captures a claim and co-ordinates assessors, repair contractors, and internal teams to resolve the case end-to-end.

This distinction is important.

A system that answers frequently asked questions is not an AI agent. A true agent resolves the problem end-to-end without human intervention.

The strategic opportunity for organisations lies in identifying where these autonomous agents can take ownership of complete workflows rather than simply assisting with individual tasks.
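The difference between an assistant and an agent can be sketched as a loop that keeps choosing and invoking tools until the workflow reaches a terminal state, rather than answering one question and stopping. Everything below is invented for illustration – the tool names, the support-ticket structure and the fixed next-step rule, which in a real agent would be an LLM deciding the next action from the current state.

```python
# Illustrative sketch of an autonomous agent loop for a support workflow.
# Tool names and the task structure are invented for illustration.
def diagnose(task):
    task["root_cause"] = "faulty_router"
    return task

def schedule_technician(task):
    task["visit"] = "booked"
    return task

def close_ticket(task):
    task["status"] = "resolved"
    return task

def next_tool(task):
    """Fixed lookup standing in for an LLM choosing the next action."""
    if "root_cause" not in task:
        return diagnose
    if "visit" not in task:
        return schedule_technician
    if task.get("status") != "resolved":
        return close_ticket
    return None  # terminal state: workflow complete

def run_agent(task, max_steps=10):
    """Keep invoking tools until the workflow is done or steps run out."""
    for _ in range(max_steps):
        tool = next_tool(task)
        if tool is None:
            break
        task = tool(task)
    return task

ticket = run_agent({"issue": "no connectivity"})
print(ticket["status"])  # prints: resolved
```

The structural point is the loop: an assistant returns an answer and stops, while an agent iterates over tools until the ticket is actually closed.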

THE THREE STAGES OF AI ADOPTION

1. Insight: legacy AI that analyses data.

2. Assistance: LLM tools that help employees work faster.

3. Autonomy: AI agents that execute work independently.

THE FIRST STEPS

For executives, the key is to approach this wave of technology in a structured but pragmatic way.

1. Go deep, not wide

One of the most common mistakes organisations make with new technologies is spreading efforts too thin.

Instead of launching dozens of small AI initiatives across the business, it is far more effective to choose one or two meaningful workflows and go deep. Focus on areas where autonomous agents can own a complete process, typically customer-facing functions such as sales or support.

When an AI agent can handle an entire workflow end-to-end, the efficiency gains become clear very quickly.

2. Allocate a budget for experimentation

In his book, The Innovator’s Dilemma, Harvard professor Clayton Christensen explains why large organisations often miss disruptive technologies.

HOW STUBBER CAN HELP ORGANISATIONS

Platforms are emerging to help organisations deploy these capabilities quickly. One such platform is Stubber.

AI AGENT STARTER PACKAGES

To make experimentation easy, Stubber offers starter packages designed specifically for organisations exploring agentic AI. These packages include:

• Prepaid AI credits that cover usage across the full stack – large language models, databases, vector stores and communication systems.

• Implementation hours with Stubber’s solution engineering team to design and deploy real use cases.

• No long-term commitment, allowing organisations to explore opportunities with minimal risk.

The platform integrates the full AI agent stack – from models to messaging channels – meaning businesses can begin deploying agents without needing to onboard multiple vendors or assemble complex infrastructure. This allows teams to focus on what matters most: identifying the workflows where autonomous agents can create the greatest impact.

Established companies rely heavily on forecasting, return on investment (ROI) models and detailed analytics before committing resources. However, with emerging technologies, those numbers simply do not exist yet.

When a new wave arrives, there is no reliable data, which means traditional decision frameworks fail.

The way successful organisations navigate this is simple: allocate a defined budget for experimentation. Instead of demanding perfect forecasts, leadership sets aside a small pool of capital to explore new capabilities. The goal is not immediate ROI, but learning.

Once successful use cases emerge, investment can scale quickly.

In other words: experiment first, optimise later.

3. Start with a focused AI agent deployment

The most practical way to begin is with a targeted implementation where an AI agent can take ownership of a clearly defined workflow. Typical starting points include:

• Sales qualification and quoting.

• Customer onboarding.

• Technical support resolution.

• Service scheduling and follow-ups.

These processes are often repetitive, structured and high-volume, making them ideal for autonomous agents.

In early deployments, AI agents are already handling complete support interactions: diagnosing network issues, scheduling technicians and resolving customer problems without human intervention.

EMBRACING REAL TRANSFORMATION

Every major technological wave follows a familiar pattern. At first, organisations experiment cautiously. The technology is used in small ways – often bolted onto existing processes. The impact is incremental. However, over time, some companies begin to reorganise themselves around the new capability. They rethink workflows. They redesign roles. They allow the technology to take ownership of entire pieces of work. That’s when the real transformation happens.

The question for executives is not whether AI will affect their industry. It is whether their organisation will treat AI as a tool that assists employees or as a new operational layer that can execute work independently.

Those that experiment early and redesign their workflows around autonomous agents will unlock extraordinary efficiency gains. Those that wait for perfect certainty may discover that the wave has already passed them by.

AI AT WORK

The race to develop, deploy and utilise AI across almost every sector shows no signs of slowing. However, AI is not a uniform capability, and successful deployment depends on a host of factors, including data quality, legacy systems, skills, regulations and risk tolerance.

Many enterprise generative AI pilots have failed. However, according to Kume Luvhani, executive director of Vaxowave, the failure of AI deployments is rarely due to the technology itself. In her experience, deployments stall for the same reasons: missing integration, with no application programming interfaces and no real-time eventing, and, finally, missing operational foundations – no model monitoring, no lifecycle management and no security controls fit for purpose.

Some sectors are simply better suited to gain from AI than others. Luvhani says the highest traction and clearest return on investment have come from financial services, retail and fast-moving consumer goods (FMCG), telecoms and parts of mining and energy. She consistently sees the same pattern across all of them: “Success shows up fastest where you have high-volume, high-frequency decisions, clear unit economics, available data and the organisational authority to actually change processes based on model outputs.”

In financial services, fraud detection, credit risk, collections optimisation, anti-money laundering screening triage and document processing automation have all delivered measurable returns, says Luvhani. “The economics are straightforward, and the data, when cleaned up, is available. Retail and FMCG have seen real value in demand forecasting, promotional pricing optimisation and stock loss analytics. Telecoms have mature deployments in churn prediction, network optimisation and customer operations. In mining and energy, predictive maintenance and safety analytics are maturing as data quality improves.”

EFFICIENCY VERSUS GROWTH

Wolson, general manager of Infrastructure Solutions Group at Lenovo Africa, says the organisations that succeed are those that stop treating AI as a science project and start treating it as an industrial process. “The common denominator is purpose-built infrastructure combined with financial discipline. That discipline also extends to how you finance and scale the platform, in many cases adopting consumption-based approaches so you can grow responsibly without overcommitting.”

AI has been touted as a potential growth driver for many industries, but in 2026, the pendulum is swinging heavily towards efficiency, says Wolson. “Macro-pressures, from economic volatility to power constraints, are forcing chief information officers to prioritise technology that delivers measurable outcomes quickly. As ‘proof-of-concept fatigue’ sets in, IT leaders are looking for ‘efficiency AI’ solutions that streamline administrative tasks, optimise staffing or reduce energy consumption in data centres.”

What’s crucial to understand, says Wolson, is that this efficiency is the fuel for future growth. “For example, in healthcare, using AI to streamline administrative tasks and enhance operational efficiency frees up human capital to focus on patient care and research. So, while efficiency is the immediate priority and is delivering return on investment quickly, it is also laying the foundation for the ‘growth AI’ that will define the latter half of the decade.”

Kume Luvhani

POWERING A CONNECTED AI ECOSYSTEM

SAMSUNG is redefining mobile privacy and Galaxy AI connectivity with the new Galaxy S26 Ultra

In 2026, Samsung Electronics is demonstrating how Galaxy AI has evolved into a truly agentic companion, delivering deeply personalised, intuitive and seamlessly connected experiences across the Galaxy ecosystem.

At the centre of this evolution is the Galaxy S26 series, Samsung’s third-generation AI phone. Built on performance and engineered for ease, the Galaxy S26 series uses Galaxy AI to simplify everyday tasks by understanding intent, anticipating needs and taking proactive action on behalf of its users.

Building on decades of display innovation, the Galaxy S26 Ultra introduces Privacy Display, a Samsung-first, built-in technology that protects personal information at the pixel level. The Privacy Display feature controls the screen’s viewing range to limit peripheral vision. Caution is advised when exposing sensitive information, as some information may still be visible to others depending on the viewing environment, such as angle or brightness.

In a landscape where mobile phones are the primary gateway to banking, business and healthcare, Privacy Display ensures that users’ digital lives remain strictly personal.

THE GALAXY S26 ULTRA INTRODUCES PRIVACY DISPLAY, A SAMSUNG-FIRST, BUILT-IN TECHNOLOGY THAT PROTECTS PERSONAL INFORMATION AT THE PIXEL LEVEL.

THE GALAXY S26 ULTRA SERVES AS THE CENTREPIECE OF AN EXPANSIVE GALAXY AI ECOSYSTEM AND THE DEVICE’S AI CAPABILITIES ENHANCE PRODUCTIVITY ACROSS THE S26 SERIES.

A FUNDAMENTAL SHIFT IN DISPLAY TECHNOLOGY

Privacy Display represents a sophisticated evolution in mobile hardware. By utilising Galaxy AI to control light dispersion at a pixel level, the screen remains bright, crisp and comfortable for the user, while instantly limiting visibility from side viewing angles.

Unlike traditional stick-on privacy filters that dim screens and degrade clarity, Samsung’s integrated solution preserves full viewing quality and colour accuracy. The protection works seamlessly in both portrait and landscape modes, adapting naturally to the user’s movement throughout the day.

BUILT FOR SOUTH AFRICA’S EVERYDAY MOMENTS

From commuting on the Gautrain in Johannesburg to working remotely in a Cape Town café, South Africans are constantly connected in shared spaces.

Privacy Display is purpose-built for these environments, creating a personal digital bubble in the most public settings.

Whether making an EFT on a packed bus in Sandton or checking sensitive work emails at OR Tambo, Privacy Display works in tandem with Samsung’s software intelligence to give users complete control:

• Automatic activation: triggers protection instantly when the device detects a PIN or password entry.

• App-specific shielding: automatically activates when opening sensitive applications such as banking, email or messaging.

• Adjustable privacy levels: users can choose between partial screen privacy to shield notification pop-ups and maximum privacy protection for enhanced side-view shielding.

BY COMBINING WORLD-FIRST HARDWARE INNOVATION WITH THE PROACTIVE POWER OF GALAXY AI, SAMSUNG IS ONCE AGAIN REDEFINING THE FLAGSHIP EXPERIENCE, ENSURING THAT PRIVACY TRAVELS WITH THOSE ON THE MOVE.

REDEFINING THE FLAGSHIP EXPERIENCE

Privacy Display reinforces Samsung’s commitment to safeguarding personal information at every layer. By combining world-first hardware innovation with the proactive power of Galaxy AI, Samsung is once again redefining the flagship experience, ensuring that privacy travels with those on the move.

GALAXY S26 ULTRA: PERFORMANCE and CREATIVE HIGHLIGHTS

• Snapdragon® 8 Elite Gen 5 for Galaxy: a customised mobile chipset delivering best-in-class performance and power efficiency tailored specifically for the S26 Ultra.

• Next-gen thermal management: a redesigned vapour chamber supports stable, high-performance output during intensive AI processing and gaming.

• Industry-leading camera system: a unified experience for capture, editing and sharing. Wider apertures and enhanced sensors make this Galaxy’s brightest camera yet, delivering richer detail in low light and at high zoom.

• Enhanced nightography and super steady video: improved noise reduction for sharper low-light footage and a new horizontal lock option for cinematic stability.

• Voice-activated photo assist: users can describe changes in their own words, such as turning a day scene into night or restoring missing parts of an object, using natural voice interactions.

• Creative studio: a new suite that allows users to transform images into various artistic styles and turn ideas into personalised stickers and wallpapers.

SEAMLESS INTEGRATION WITH OTHER DEVICES

The Galaxy S26 Ultra serves as the centrepiece of an expansive Galaxy AI ecosystem, and the device’s AI capabilities enhance productivity across the S26 series.

Furthermore, integrated sensors allow the device to sync seamlessly with the Galaxy Buds4 Pro, Galaxy Ring and Galaxy Watch series, providing a holistic, AI-driven view of wellbeing. This interconnectedness ensures that Privacy Display is just one layer of a smart, safe and seamless modern lifestyle.

BUILT ON PERFORMANCE AND ENGINEERED FOR EASE, THE GALAXY S26 SERIES USES GALAXY AI TO SIMPLIFY EVERYDAY TASKS BY UNDERSTANDING INTENT, ANTICIPATING NEEDS AND TAKING PROACTIVE ACTION ON BEHALF OF ITS USERS.

AI TURNING HEADS

FOUR INDUSTRIES REVEAL HOW TECHNOLOGY IS GIVING THEM THE COMPETITIVE EDGE

AI is everywhere – but some businesses aren’t just keeping up. They’re using it to predict, personalise and perfect customer experiences, writes LYNN GRALA

Some chief information officers (CIOs) stress that AI shouldn’t be adopted merely as a tick-box exercise; it must be strategic, driving revenue while enhancing customer experience. Four leading businesses share how they use AI to transform one-size-fits-all services into hyper-personalised, predictive experiences that boost engagement and satisfaction. They all agree that a strong data foundation is key to uncovering trends, patterns and insights that anticipate customer needs. Equipping employees with the right skills to leverage AI effectively is equally vital.

Here’s what these industry experts reveal about harnessing AI to elevate customer satisfaction.

AI DELIVERS A SEAMLESS SHOPPING EXPERIENCE

Today, you’ve probably already seen or heard a couple of Checkers Sixty60 motorbikes whizzing past, the drivers dressed in their signature teal-green uniform. Winning various top-tier innovation awards, Sixty60 was the first 60-minute grocery delivery service and the Shoprite Group’s most revolutionary digital offering to the South African public, enabling them to order from the Sixty60 app and track their order from the store to their door.

“From how products are sourced and stocked to how orders are delivered and how customers are assisted, AI plays a practical role across our business,” says Minnaar Pieters, head of AI transformation.

“FROM HOW PRODUCTS ARE SOURCED AND STOCKED TO HOW ORDERS ARE DELIVERED AND HOW CUSTOMERS ARE ASSISTED, AI PLAYS A PRACTICAL ROLE ACROSS OUR BUSINESS.”

AI also enables faster responses to shifts in customer behaviour by identifying trends in purchasing patterns almost immediately, allowing the group to adjust stock allocations, introduce relevant promotions or adapt pricing strategies to meet changing demand.

“All of these systems are fully integrated; they do not operate in silos. Customers never see the engineering behind them; instead, they experience it through the seamless execution we deliver in our stores and on digital platforms like Sixty60,” explains Pieters.

Across the group, AI-driven systems analyse vast volumes of sales data, historical trends and external factors, enabling the group to ensure the availability of the right products in the right quantities at the right times across its supermarket network. For example, the group has implemented an end-to-end machine-learning system for ultra-fresh products, such as fruit, vegetables and meat.
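In heavily simplified form, forecasting of this kind can be illustrated as averaging recent weekday demand to project the next week for a perishable line. This is not the retailer’s actual system; the method is a deliberately basic seasonal average and the sales figures are invented.

```python
# Heavily simplified illustration of demand forecasting for a perishable
# product: each weekday's sales are averaged over recent weeks and
# projected forward. The figures below are invented.
def forecast_next_week(daily_sales, weeks_history=2):
    """Average each weekday over the last `weeks_history` weeks."""
    assert len(daily_sales) >= 7 * weeks_history
    recent = daily_sales[-7 * weeks_history:]
    forecast = []
    for day in range(7):  # Monday..Sunday
        same_day = [recent[week * 7 + day] for week in range(weeks_history)]
        forecast.append(sum(same_day) / weeks_history)
    return forecast

# Two weeks of invented unit sales, Monday..Sunday, showing the
# weekend peak typical of grocery demand.
history = [120, 100, 110, 130, 160, 220, 180,
           124, 104, 112, 134, 168, 228, 184]
print(forecast_next_week(history))
```

A production system would add external factors such as promotions, weather and price, exactly the inputs the article mentions, but the core idea of learning a repeating demand pattern from history is the same.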

Smart trolleys, called the Xpress Trolley, are currently being trialled at two Checkers supermarkets in the Western Cape – Checkers Hyper Brackenfell and Checkers Constantia. The trial focuses on testing customer response and overall viability, while exploring how technology can streamline processes, enhance efficiency and improve the customer experience.

The Xpress Trolley allows shoppers to scan-and-bag items as they go, track a live running total and pay directly from the trolley without needing to stand in a queue or bag products at a traditional till point. Once they’ve finished shopping, they head to the dedicated checkout lane and pay directly from the trolley using the bank card saved on their Checkers Sixty60 profile, take their printed till slip and exit via the checkout gate.

AI FOR A PREDICTABLE AND ENJOYABLE EATING EXPERIENCE

Navigating the restaurant industry is certainly not for the faint-hearted; it is known to be one of the toughest industries, with high failure rates, thin profit margins, intense competition and high staff turnover and overhead costs. And many customers walk in with tight budgets and very high expectations – if they’re not happy, it may be the last you see of them.

“We’ve focused on improving our customer experience through technology across the dining experience and positioning our data as a strategic asset – working through the challenges of data silos and data quality. One recent example in digital adoption has been the introduction of AI for our teams; it’s a simple tool but has profound and immediate benefits,” says Spur Corporation’s CIO, Paul Casarin.

Another simple yet practical way to help identify areas needing improvement and trends is by using data analytics on the insights from thousands of customer reviews about dining experiences. The company also utilises data to improve its supply chain and operational efficiency, manage food inflation and maintain margins, as well as predictive analytics to assist in streamlining orders.

Spur also prioritises child safety in its play areas. Using a combination of security tags, wristbands and dedicated attendants to monitor children, together with Sensormatic technology, Spur ensures children remain in the safe zones.

With Spur Corporation making recent headlines announcing a profit increase of 13 per cent across its 10 brands, and its plans to open 42 new restaurants in South Africa and 14 internationally this year, it is clear the group has gone the extra mile to understand exactly what its customers want, and is leveraging technology and AI efficiently to drive consumer satisfaction across its brands.

Ranking among the top five apps in the South African app stores in the Food and Drink category, the Spur Family app offers online ordering, together with its unique award-winning loyalty programme, which integrates comprehensive loyalty, online ordering and tailored, kid-centric rewards into one platform. Its key differentiators include the ability to earn cashback in points on every meal and the Secret Tribe birthday rewards for kids.

“Any digital transformation effort is grounded in the company’s purpose and direction of leading for the greater good,” says Casarin. “It’s not a one-time event, but an ongoing process to propel the business forward through innovation. AI and technology are powerful enablers, but they only deliver sustainable advantage when combined with clear strategy, responsible use and a learning and adaptive culture.”

AI THAT HELPS IN AN EMERGENCY

The Automobile Association (AA) is proof that it’s never too late to evolve and relook at your strategy through the lens of technology to drive and achieve business objectives – which, in its case, can literally involve life-or-death situations.

USING AI, THE MEMBER’S NUMBER IS IDENTIFIED, AND THEIR PROFILE, DETAILS AND LIVE LOCATION APPEAR. IT ALSO SHOWS THE MEMBER IN REAL-TIME HOW FAR AWAY A RESPONDER IS, WHICH REDUCES THEIR STRESS.

Paul Casarin

For almost a century, the AA has been providing 24/7 emergency roadside assistance, vehicle-related legal advice, towing and, more recently, medical rescue and armed response. It realised that how Gen Zs want to deal with emergencies is vastly different from that of their grandparents a few decades ago.

The AA’s recent strategic brand revamp has placed digital innovation at the core of how it can serve its members more efficiently and effectively. It has invested significantly in automation and digitisation of key processes across the organisation, as well as launching the My AA app.

“When you are dealing with an emergency, you know the member is stranded or in a very stressful situation, so technology needs to focus on reliability and speed of response,” explains the AA’s CIO Phila Msizazwe. “In the middle of an emergency, a member doesn’t want to be on a long call with the helpdesk.”

The My AA app is a simple mobile app that members can use to access the panic button within seconds of opening the app. The question ‘Are you safe?’ will pop up. If the answer is no, an emergency response is activated. Using AI, the member’s number is identified, and their profile, details and live location appear. The app also shows the member in real-time how far away a responder is, which reduces their stress.

Before the AA started applying AI to improve its dispatch intelligence and enhance contact centres, it rst ensured it had a strong data foundation.

Msizazwe believes IT delivers the most value when it’s deeply aligned with the core of the organisation: “Start with your biggest problem and move from that point. Innovation for us isn’t about chasing trends; it’s about solving real problems in practical ways.”

AI FOR GLOBALLY COMPETITIVE ACCOUNTANTS

With chartered accountants playing such a critical and strategic role in the financial arena and broader economy, the South African Institute of Chartered Accountants (SAICA) has been hard at work tailoring technology to become insights-driven to meet members’ needs and requirements more efficiently and intelligently.

“INSTEAD OF INTRODUCING UNNECESSARY COMPLEXITY THROUGH NEW TECHNOLOGY PLATFORMS, SAICA IS ENHANCING CURRENT SOLUTIONS WITH AI CAPABILITIES. THIS APPROACH ENSURES MEMBER CONTRIBUTIONS ARE USED RESPONSIBLY WHILE DELIVERING TANGIBLE IMPROVEMENTS TO SERVICES.” – YOSHEEN PADAYACHEE

SAICA represents more than 62 000 members globally, including chartered accountants working across business, government, academia, entrepreneurship and the public sector. These professionals play a vital role in the economy as nancial leaders, governance specialists and trusted advisors. Their needs typically include continuous professional development (CPD), access to trusted technical guidance, career and leadership development opportunities, global professional credibility and a strong professional community.

“The priority is to make existing systems AI-native,” says SAICA’s incoming CIO, Yosheen Padayachee. “Instead of introducing unnecessary complexity through new technology platforms, SAICA is enhancing current solutions with AI capabilities. This approach ensures member contributions are used responsibly while delivering tangible improvements to services.”

For technical knowledge and guidance, AI-enabled search tools allow members to quickly locate relevant IFRS, auditing or tax guidance across extensive technical repositories, significantly improving efficiency while maintaining reliance on trusted, authoritative sources.

On SAICA’s digital platforms, members can track and manage CPD requirements, access technical publications and standards guidance, register for professional events and engage with professional communities.

“By integrating data across platforms, the institute gains valuable insights into member behaviour, engagement patterns and emerging professional skills. This allows SAICA to anticipate member needs rather than simply react to them,” explains Padayachee. She adds: “These insights also enable SAICA to design services that help keep members globally competitive and future-ready.”

AI and analytics help SAICA identify emerging professional skills and industry trends, allowing training programmes and professional initiatives to remain aligned with the evolving needs of the economy. From a professional development perspective, AI tools enable members to focus on developing the skills most relevant to them and the future of the profession.

For students and trainees, AI-driven learning analytics can identify learning gaps and support improved exam preparation, strengthening the pipeline of future chartered accountants.

“Throughout these initiatives, SAICA emphasises responsible AI, ensuring technology enhances professional judgement rather than replacing it,” Padayachee says.

As the experts advise, a good starting point for any business is to understand exactly what problem or outcome it is trying to solve or achieve. Then, with the strategic and responsible adoption of AI and technology, it can begin to capitalise on the wealth of its customer data to move customer satisfaction to the next level.

Follow: Paul Casarin www.linkedin.com/in/paulcasarin Spur Corporation www.linkedin.com/company/spur-group www.instagram.com/spursteakranches/?hl=en SAICA www.linkedin.com/company/saicaza www.instagram.com/saicaza/?hl=en https://web.facebook.com/OfficialSAICA

The Automobile Association www.linkedin.com/company/automobile-association-of-south-africa www.instagram.com/aasouthafrica/# www.facebook.com/AASouthAfrica Shoprite Holdings (Checkers) www.linkedin.com/company/checkers-sixty60 www.instagram.com/checkers_sa/?hl=en https://web.facebook.com/CheckersSixty60App

Yosheen Padayachee
Phila Msizazwe

THE VALUE OF AI DIFFUSION

AI is becoming core to enterprise intelligence and demands resilience over spectacle, governance over hype and operational precision, writes SASIDHAR PARVATHANENI, acting Chief Sales and Solutions Officer, BCX

When a core system fails, it means regulatory exposure, reputational damage, revenue loss and eroded public trust, and it can disrupt services or compromise safety.

In South Africa, technology failure is a strategic risk. That reality is reshaping how artificial intelligence is adopted. While global markets often treat AI as disruption or experimentation, South Africa’s operating environment demands something more deliberate. Here, AI is becoming an operational discipline. For ICT leaders, the issue is whether it can strengthen stability, governance and measurable performance in volatile conditions.

FROM EXPERIMENTATION TO ENTERPRISE DESIGN

The early wave of generative AI adoption was defined by exploration: chat interfaces, copilots and proofs of concept. It built familiarity and presented both opportunity and risk.

The next phase is different. It is about AI diffusion, moving from isolated use cases to embedded, enterprise-wide capability.

Diffusion is where value is won: intelligence embedded securely into workflows, data and security, governed through clear accountability, and repeated across functions until it becomes part of the operating model.

Boards are now demanding more: measurable value, service reliability, governance, integration into the operating model and accountability when systems fail.

In South Africa, those requirements carry additional weight. Energy instability affects uptime. The public sector faces fiscal and audit scrutiny. Financial services face rising compliance pressure. Capital-intensive industries need predictive reliability.

The challenge is no longer access to AI, but integration. Many organisations have pockets of innovation yet struggle to translate pilots into sustained capability. Value is realised when intelligence is embedded into core processes, decision frameworks and customer experiences, not when it exists in silos.

INTERNAL DISCIPLINE AS PROOF

AI maturity is proven in internal operating change, not external messaging.

Before AI can be positioned as an enterprise capability, it must demonstrate a measurable impact inside the organisation deploying it. Internal discipline precedes external credibility.

Across BCX’s service and infrastructure environments, AI is being embedded with a focus on performance integrity, governance and accountability. AI-driven incident classification and pattern detection are improving response times and enabling predictive intervention, shifting operations from reactive escalation to proactive stability.

Generative tools are streamlining documentation and improving knowledge consistency, delivering not just faster work, but also standardisation, traceability and reduced dependency on siloed expertise. Modelling also supports infrastructure risk assessments, including the impact of energy disruptions, enabling anticipatory decision-making.

In regulated environments, anomaly detection strengthens fraud visibility and oversight responsiveness by surfacing patterns at scale. The principle is consistent: intelligence is integrated into connected systems with clear accountability and measurable performance.

ENTERPRISE APPLICATION IN PRACTICE

This internal maturity translates into enterprise impact and shapes how organisations like BCX enable their clients.

In telecommunications, modelling assesses the impact of load shedding on network performance, enabling prioritised interventions that protect uptime and revenue continuity. In financial services, predictive fraud detection and consolidated risk modelling improve oversight while reducing investigative lag. In healthcare regulation and contact environments, conversational and classification systems improve access and turnaround times while maintaining audit visibility.

These are production environments under regulatory and performance constraints. The common thread is integration into governed operating models.

GOVERNANCE AS STRATEGIC INFRASTRUCTURE

No AI strategy in South Africa can ignore governance. Organisations operate under the Protection of Personal Information Act, sector-specific regulation and increasing scrutiny. Explainability is becoming a requirement.

For ICT leaders, governance enables scale. Without it, AI sprawl and shadow deployments emerge, fragmentation increases and trust erodes.

THE LEADERSHIP DECISION

Global AI narratives celebrate speed and novelty. South Africa’s context demands resilience over spectacle, governance over hype and operational precision.

The decision is clear: embed AI into operating models or bolt it onto existing systems. Design for measurable performance or experiment for visibility.

Over the next three to five years, organisations that treat AI as governed infrastructure will strengthen service credibility, protect capital and enhance competitiveness. Those that prioritise hype risk fragmentation and exposure.

Artificial intelligence is becoming core to enterprise intelligence. The technology is available. The risk is misalignment.

In South Africa’s operating environment, AI will not be judged by what it promises, but by what it sustains.

Sasidhar Parvathaneni

AI ISN’T REPLACING WORKERS – IT’S SUPERCHARGING THEM

Most executives are asking the wrong question about AI. It’s often framed as humans versus machines – but the real disruption lies elsewhere, writes COHESIONX

The more important question is this: what happens when every knowledge worker is supported by a growing team of specialised artificial intelligence (AI) agents? That future is nearer than most think. Not because businesses are handing over control to autonomous software, but because the economics of knowledge work are shifting. Organisations are riddled with expensive human effort spent on low-leverage tasks: searching for information, summarising documents, triaging requests, reconciling data, routing work and following up. These are repetitive, rules-based and highly interruptive tasks.

AI AGENTS MATTER

This is where AI agents matter. Their value isn’t that they think like executives. It’s that they take structured work off people’s desks. One agent retrieves knowledge. Another drafts a response. Another validates data or routes a request. Humans remain accountable for judgement, exceptions and relationships, but routine co-ordination shifts into software.

The next wave of AI won’t be one generic assistant in a chat window. It will be a portfolio of role-based agents working quietly in the background. Just as spreadsheets multiplied finance teams and enterprise systems multiplied operations, agents will multiply the effective capacity of knowledge workers. The highest performers won’t be working alone with a prompt box; they’ll be directing a trusted bank of agents daily.

WORKFORCE AMPLIFICATION

That’s the real prize: workforce amplification. This reframing matters for adoption strategy. Treating agents as a labour-replacement story provokes resistance and drives the wrong automation. Treating them as productivity infrastructure makes the path clear. Start where skilled people are losing time on repetitive preparation, handoffs and administrative drag: customer service, legal review, operations, internal support, analytics and HR administration.

Businesses don’t need to jump straight to autonomous decision-making. The smarter route is staged adoption:

1. Assistants and copilots: observe, suggest and draft under human review.

2. Narrow agent use cases: repeatable tasks, clear inputs, checkable outputs.

3. Greater autonomy: only once trust, accuracy and governance are established.

This is how agent adoption becomes operational rather than experimental and how organisations avoid mistaking a good demo for a usable operating model.

Governance isn’t a technical footnote. It’s the difference between a scalable capability and a risky toy. Agents need a clear remit: approved actions, defined data access, escalation rules and audit trails. They must be treated less like magic and more like digital workers: accountable, measurable and governed.
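That remit can be made concrete with a short sketch. The following is a minimal, hypothetical Python illustration, not any specific agent framework; the class and action names are invented. An allowlist defines the agent's approved actions, anything outside it escalates to a human, and every attempt lands in an audit trail.

```python
from datetime import datetime, timezone

class GovernedAgent:
    """Illustrative guardrail: an agent with an explicit remit and an audit trail."""

    def __init__(self, name, approved_actions):
        self.name = name
        self.approved_actions = set(approved_actions)  # the agent's approved remit
        self.audit_log = []                            # every attempt is recorded

    def act(self, action, payload):
        entry = {"agent": self.name, "action": action,
                 "time": datetime.now(timezone.utc).isoformat()}
        if action not in self.approved_actions:
            entry["outcome"] = "escalated"  # outside remit: route to a human
            self.audit_log.append(entry)
            return {"status": "escalated",
                    "reason": f"'{action}' is outside this agent's remit"}
        entry["outcome"] = "executed"
        self.audit_log.append(entry)
        return {"status": "ok", "result": f"{action} handled for {payload}"}

# A triage agent may route requests and draft replies, but never approve payments.
triage = GovernedAgent("triage-bot", {"route_request", "draft_reply"})
print(triage.act("route_request", "ticket-123")["status"])   # ok
print(triage.act("approve_payment", "invoice-9")["status"])  # escalated
```

The point of the sketch is the shape, not the code: the boundary between "executed" and "escalated" is declared up front, and the audit log captures both outcomes for later review.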

Firms that move early won’t necessarily have fewer people, but they’ll get more output from the same people: resolving issues faster, scaling expertise, shortening cycle times and freeing senior staff from routine drag. In a low-growth, high-cost environment, that matters enormously.

Productivity gains don’t have to come from hiring better people or demanding more from existing teams. They can come from giving those teams better leverage.

That’s when agents stop being a technology story and become a business story.

Start writing yours: www.cohesionx.co.za

WHEN ATTACKERS USE THE SAME TOOLS YOU DO

As small and medium enterprises embrace AI to boost productivity, cybercriminals exploit the same tools – accelerating phishing, deepfakes and attacks – forcing businesses to prioritise security alongside innovation. By TREVOR

Artificial intelligence has rapidly shifted from a futuristic concept to a daily business tool. For organisations, particularly small and medium enterprises (SMEs), AI is no longer a luxury reserved for large corporates. It is embedded in laptops, cloud platforms and productivity apps, helping smaller firms move faster, work smarter and compete in an increasingly digital-first economy.

Across South Africa, SMEs are deploying AI to automate administrative tasks, streamline reporting and accelerate decision-making. Sudesh Pillay, executive head of iStore Business, observes that the shift is “remarkably practical”. He explains that SMEs are using AI to solve long-standing resource constraints, particularly in areas such as data entry, invoice reconciliation and financial modelling. “Tasks that once consumed disproportionate time are now handled with speed and accuracy,” he says. In one example, he noted that a brokerage firm they work with reduced the time required to build financial models from days to minutes.

Pillay explains that the democratisation of AI has changed what is possible for smaller firms. “This shift busts the myth that high-tier AI performance is only for large enterprises,” he says. With powerful, AI-ready devices now accessible to SMEs, businesses can process complex datasets and run advanced models locally. “The real advantage is that SMEs can now compete on capability rather than just budget,” he adds.

A CHANGING CYBERTHREAT LANDSCAPE

However, the same tools driving productivity are also empowering cybercriminals.

Seshni Moodley, director of cybersecurity at NTT DATA Middle East and Africa, warns that AI has fundamentally altered the cyberthreat landscape. “The first major shift is scale,” she explains. Generative AI enables criminals to produce persuasive phishing emails, fabricated identities and deepfakes with minimal effort. “The second is speed. Attack chains no longer unfold over days, but in minutes,” she says, noting that automated reconnaissance and lateral movement now occur at machine speed.

For SMEs, the risk is not limited to sophisticated AI deployments. Moodley cautions that “risk rises even when AI is basic”. Governance often lags behind innovation, creating blind spots around data exposure and access controls. AI tools frequently connect to email, cloud storage and customer relationship systems, widening potential attack pathways. “Attackers now go after identities and applications that hold the keys to sensitive data,” she adds.

The line between legitimate innovation and malicious use is becoming increasingly blurred. Pillay acknowledges that “the same AI tools that help an SME draft client proposals can be used to generate convincing phishing emails or deepfake content”. Without dedicated security teams, smaller organisations face growing exposure.

He emphasises the importance of architectural security, including hardware-level protections and prioritising on-device processing to reduce reliance on external infrastructure. Moodley highlights additional concerns, including attacks on AI systems themselves. Criminals can tamper with data inputs or manipulate prompts, causing AI tools to generate inaccurate or misleading outputs. “AI makes attacks faster, more believable and able to improve themselves,” she says. “Organisations must respond at machine speed.”

Pillay advises businesses to focus on fundamentals. “Start with a real problem,” he says, urging SMEs to identify operational bottlenecks before adopting AI tools. He recommends prioritising secure platforms, investing in AI literacy and ensuring “a human remains in the loop”, particularly in regulated industries.

From a cybersecurity perspective, Moodley suggests beginning with identity protection. Stronger authentication for finance and administrative roles, strict approval processes for payment changes and regular access reviews can significantly reduce exposure. She stresses the importance of visibility. “Bring shadow AI into the light,” she advises, encouraging SMEs to inventory all AI tools in use and to implement baseline controls, such as logging and human oversight, for sensitive outputs.
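The "logging plus human oversight" baseline Moodley describes can be sketched simply. This is a hypothetical Python illustration only; the marker list and function name are invented for the example, not any vendor's control. Every AI output is logged, and anything matching a sensitive pattern is held for human sign-off rather than released.

```python
# Invented markers for illustration; a real deployment would use proper
# data-loss-prevention patterns tuned to the business.
SENSITIVE_MARKERS = ("id number", "account number", "password")

def review_gate(ai_output, audit_log):
    """Record every AI output; hold anything that looks sensitive for human sign-off."""
    held = any(marker in ai_output.lower() for marker in SENSITIVE_MARKERS)
    audit_log.append({"output": ai_output, "held_for_review": held})
    return "HELD_FOR_REVIEW" if held else ai_output

log = []
print(review_gate("Draft reply: thanks for your enquiry.", log))       # passes through
print(review_gate("The customer's ID number is attached.", log))       # held
```

Even a gate this crude delivers the two things she asks for: nothing leaves without a log entry, and a human sees the sensitive cases before a customer does.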

AI is not inherently a threat, but an amplifier. For SMEs, it amplifies productivity and competitiveness. For cybercriminals, it amplifies deception and speed. In this dual-use era, resilience will depend on organisations embedding security into their AI journey from the outset.

Sudesh Pillay
Seshni Moodley

THE BACKBONE BEHIND BOTS

As AI adoption accelerates, BUSANI MOYO examines the chips, data centres, connectivity and energy systems enabling intelligent technologies, and what they mean for South Africa’s digital future

Artificial intelligence may appear weightless – algorithms generating text, images and insights in seconds – but behind every bot lies a dense physical infrastructure of chips, fibre, data centres and power systems.

Two industry leaders shaping this backbone are Dr Mmaki Jantjies, group executive: innovation and transformation at Telkom, and Akhona Nkalitshane, business development manager: enterprise computing solutions at Altron Arrow. They offer complementary perspectives on what it takes to scale AI, from silicon to skills.

“At Telkom, our role as South Africa’s digital backbone is central to this journey,” says Dr Jantjies. She argues that AI growth is inseparable from foundational infrastructure. “Ultimately, scaling AI in South Africa will require a holistic approach that integrates compute, data, connectivity, sustainability and skills.” Without that integration, she warns, adoption will remain uneven.

“Graphics processing units are the cornerstone of running AI systems because they are excellent at parallel matrix operations,” she explains. GPUs operate alongside multicore central processing units (CPUs) that co-ordinate workloads and manage data flows. For certain applications, particularly inferencing, tensor processing units are also deployed.
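The parallelism behind that point can be illustrated with a toy sketch (Python/NumPy, purely illustrative). Most of a model's compute is dense matrix multiplication: millions of independent multiply-adds, which is exactly the workload GPUs and TPUs spread across thousands of cores.

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((32, 512))     # 32 inputs with 512 features each
weights = rng.standard_normal((512, 256))  # parameters of one dense layer

# One forward step is roughly 32 x 512 x 256 (about 4.2 million) multiply-adds,
# each independent of the others: the shape of work accelerators parallelise.
activations = np.maximum(batch @ weights, 0.0)  # matrix multiply plus ReLU
print(activations.shape)
```

On a CPU this runs serially or across a few cores; on a GPU the same operation is fanned out across thousands, which is why training and inferencing gravitate to that silicon.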

Memory and storage have become strategic constraints.

“The explosive rise of generative AI has increased demand for high-bandwidth memory,” Nkalitshane says. “Enterprise-grade flash storage is as critical as GPU.” Without high-speed interconnects and low-latency networking, processors can sit idle waiting for data. “Performance is measured by how efficiently data can move across the system.”

DATA CENTRE ARCHITECTURE IS CHANGING

On the hardware front, the requirements are exacting. “AI models rely on highly optimised hardware capable of running massive parallel compute resources with speed and memory bandwidth,” says Nkalitshane. At the core are graphics processing units (GPUs), which excel at the parallel mathematical operations required to train large models.

These demands are reshaping data centre architecture. “We now see data centres transitioning from traditional air cooling to advanced liquid cooling,” Nkalitshane notes. Liquid cooling supports high-density racks and improves energy efficiency, a critical factor given AI’s power intensity.

Modern facilities are also designed for flexibility, able to scale up when workloads spike and scale down when demand eases. Increasingly, hyperscale centres are complemented by edge computing, reducing latency by processing data closer to its source.

For Dr Jantjies, connectivity remains the foundation. “In South Africa, there are several infrastructure challenges that influence the pace of AI adoption,” she says, pointing to uneven broadband access, high connectivity costs, limited hyperscale data centre capacity in certain regions, power instability and a shortage of specialised digital skills. “These structural barriers can slow innovation and risk widening the digital divide if they are not addressed in a co-ordinated and sustainable way.”

Cloud computing is helping to bridge some of these gaps. “Cloud has turned AI from something only large corporations could afford into something smaller businesses can access,” says Nkalitshane. Through AI-as-a-Service and GPU-as-a-Service models, companies can scale compute usage according to demand, avoiding heavy upfront capital expenditure.

Energy stability, however, remains pivotal. AI systems are power-intensive, and resilience must be built into deployment strategies. While grid performance has improved, organisations continue to factor backup power into their investments, adding cost and complexity.

Import dependence compounds the challenge. As Nkalitshane observes, when global demand for advanced processors and memory surges, “Africa tends to be lower on the priority list when global manufacturers allocate AI hardware.”

Yet progress is visible. “Investments in fibre networks, 5G, mobile broadband and digital public infrastructure are steadily improving connectivity and expanding access,” Dr Jantjies says. Through Openserve’s fibre expansion, mobile network evolution and enterprise cloud capabilities, Telkom is working to enable AI-driven growth across sectors.

“Sustainable AI adoption depends not only on infrastructure, but on people, partnerships and a strong digital ecosystem,” Dr Jantjies emphasises.

The bots may capture attention, but it is the backbone beneath them, such as silicon, storage, connectivity and skills, that will determine whether South Africa can fully participate in the AI economy.

Follow: Akhona Nkalitshane www.linkedin.com/in/akhona-nkalitshane-5835b255

Dr Mmaki Jantjies www.linkedin.com/in/mmaki-jantjies-phd-2b389b116

Akhona Nkalitshane

ENTERPRISE AI WILL NOT BE WON IN THE PILOT PHASE

To deliver meaningful value, AI must be fully integrated into an organisation’s operations, writes RETRO RABBIT / SMARTEK21

Across South Africa, artificial intelligence has moved well beyond the hype cycle. The focus has shifted to the more critical challenge of making AI work effectively in real-world environments. This is where many initiatives continue to fall well short.

While a growing number of organisations have experimented with AI, reports show that far fewer have successfully embedded it into their core operations in a way that delivers measurable value, aligns with governance requirements and scales beyond pilot programmes. This marks the true divide in enterprise AI today. It is no longer between those exploring AI and those ignoring it, but between those still testing possibilities and those converting AI into operational capability.

The differentiator is not ambition, but execution.

INTEGRATED, NOT ISOLATED

Enterprise AI rarely fails due to limitations in the technology itself. More often, failure stems from a disconnect between implementation and business realities. Too frequently, AI solutions are introduced as isolated innovation projects rather than integrated operational capabilities. While they may perform well in controlled environments, solutions that are not embedded into workflows, aligned with security and compliance frameworks and designed for user adoption seldom progress beyond experimentation.

To deliver meaningful value, AI must be embedded into the way organisations already operate. This begins with clearly defined business challenges:

• Where are teams losing time to manual processes?

• Which work ows are slowed by repetitive human intervention?

• Which key controls rely on inconsistent manual reviews?

These are the areas where AI can transition from concept to tangible impact.

Equally important is delivery discipline. In enterprise environments, AI must operate within established frameworks of architecture, governance, risk management and change control. This is particularly critical in regulated sectors, where trust and accountability are as important as performance. Retro Rabbit developed its Smartboxx AI platform within this context for enterprise-ready use.

THE NEXT PHASE OF AI ADOPTION WILL THEREFORE NOT BE LED BY THOSE WHO EXPERIMENTED THE MOST, BUT BY THOSE WHO HAVE LEARNED HOW TO IMPLEMENT RESPONSIBLY AND SCALE WITH INTENT.

Rather than positioning AI as a stand-alone capability, Smartboxx enables organisations to embed intelligence directly into existing systems and workflows. The focus is on practical deployment, which reduces operational friction, improves turnaround times, strengthens oversight and enhances decision-making across the enterprise.

The above approach is already delivering measurable outcomes for organisations that are partnering with Retro Rabbit. One major South African insurer reported savings of over R41-million in the 2025 financial year alone, achieved through AI initiatives enabled by Smartboxx. In other cases, Smartboxx-supported deployments have enabled 100 per cent interaction monitoring in contact centres, reduced onboarding times from 24 hours to just 10 minutes and delivered efficiency improvements of up to 80 per cent per team member in targeted use cases.

These results highlight that the value of enterprise AI lies not in the model itself, but in how effectively it is integrated into day-to-day operations.

For South African organisations, this must be an immediate business priority. Companies are facing increasing pressure to improve ef ciencies, modernise service delivery and remain competitive under complex regulatory and operational constraints. The next phase of AI adoption will therefore not be led by those who experimented the most, but by those who have learned how to implement responsibly and scale with intent.

Ultimately, enterprise AI will not be defined by performance in a pilot environment. It will be defined by its ability to deliver trusted, measurable value where it matters most, within the core of the business environment.

For more information: info@retrorabbit.co.za www.retrorabbit.co.za

The Retro Rabbit team showcases Smartboxx, its AI Accelerator for Enterprise, at the AI Expo Africa Conference, 2025.

Artificial intelligence has shifted from an experimental tool to an economic force. The most persistent myth parents and executives still believe is that talent guarantees success. It does not – not anymore.

The signal is clear: the jobs our children will one day hold don’t exist yet. The skills that earned top marks previously are no longer reliable predictors of long-term success.

Insights from the World Economic Forum’s Future of Jobs Report 2025 show that nearly half of core workplace skills are expected to change by 2030, with analytical thinking, resilience and flexibility ranking above technical proficiency as the most critical capabilities for the future workforce.

Meanwhile, a 2025 LinkedIn Economic Graph global workplace study highlights that hiring increasingly prioritises adaptability, learning agility and problem-solving over traditional credentials, reflecting the rapid integration of AI across industries.

RETHINKING EDUCATION

For decades, education systems have rewarded accuracy, compliance and memory. Students who could reproduce information quickly and correctly rose to the top. Standardised testing became the proxy for intelligence. Talent was defined narrowly and celebrated loudly.

AI now performs many of those functions better than humans. It remembers perfectly. It processes instantly. It generates content endlessly. If education continues to reward what machines can already do, we are preparing children for redundancy.

WHY ADAPTABILITY BEATS TALENT

In a world where AI reshapes work, children must learn to adapt, experiment and collaborate with technology, writes SEMONE PEACOCK, director at Logiscool Ruimsig

Adaptability is different. It is the ability to confront something unfamiliar without freezing. It is cognitive flexibility. It is the willingness to test, fail, adjust and try again. It is problem-solving under uncertainty. Unlike raw academic talent, it cannot be automated easily.

Talent is impressive, adaptability is durable, and in an AI-driven economy, durability wins.

THE CREDIBILITY CRISIS

Traditional education faces a credibility gap. Schools still organise learning around fixed answers and predictable outcomes. Students graduate having mastered content that AI can already replicate, but without the mental agility to pivot when circumstances change. Parents often respond by doubling down on marks – more tutoring, more exam preparation, more pressure – which may be the wrong priority. A child who learns how to approach unfamiliar problems with curiosity instead of fear has a longer shelf life than a child who scores distinctions.

A teenager who understands how to learn independently will outpace one who relies on structured instruction. A young adult who can collaborate with AI tools rather than compete against them will thrive where others stall. The shift required is as much philosophical as practical. Education must move from rewarding certainty to rewarding exploration.

BEYOND TECHNICAL ABILITY

Established in 2014, Logiscool is rooted in empowerment, growth and community. The organisation creates safe, supportive environments where children build confidence, develop essential digital skills and explore their creativity. Programmes go beyond technical ability to support academic achievement, personal growth and long-term success, preparing learners for school, future careers and everyday life in an increasingly digital world.

REDEFINING ACADEMIC RIGOUR

This does not mean abandoning academic rigour. It means redefining it for the AI age. Rigour should involve wrestling with complex questions, building solutions from scratch, analysing failures and iterating improvements. Technology must be experienced as a tool for creation, not passive consumption. Students must design, test and refine rather than memorise and repeat.

Parents, educators and policymakers must embrace a mindset that prizes adaptability, curiosity and collaboration with AI above narrow definitions of talent. Children who can navigate ambiguity, experiment confidently and adjust quickly will outlast and outperform those who rely solely on memory or exam scores.
