
How to adopt AI without abandoning your principles

by Tom Hewitson, Chief AI Officer at General Purpose

AI offers genuine productivity gains, but it raises legitimate questions about environmental impact, job displacement, and who actually benefits.

The answer is deliberate adoption: clear principles that protect your values whilst capturing value from the technology.

Three principles that work in practice

Get everyone using AI in their daily work first.

When everyone in your organisation is actually using AI day-to-day, you’re far more likely to spot biased outputs, problematic patterns, or unintended consequences.

You’re also building the skills and confidence people need to raise concerns about ethical usage, because hands-on experience is what lets them contribute meaningfully to those decisions. Someone who’s used AI for three months can engage in the debate. Someone who’s never touched it cannot.

This requires proper ongoing training for everyone, not just the technical team. Not a two-hour introduction. Real upskilling that gives people confidence and capability to use these tools thoughtfully.

Be transparent about what AI does.

Once people are using AI, be clear about how it works and what decisions it’s making in your business.

When something goes wrong (and eventually, something will), prior transparency is the difference between a manageable incident and a trust crisis that takes months to repair. AI tools should be assisting human judgement, not replacing it, especially for high-stakes decisions: dismissals, customer complaints, contract terms.

The most significant ethical failures in AI happen when organisations remove human oversight to save costs, then discover too late what their systems were actually doing.

Understand your impact

Businesses that adopt AI ethically can show what responsible practice actually looks like, creating pressure on others to follow.

You’ve probably heard claims that using ChatGPT is like boiling a kettle or buying a latte. Nonsense.

A ChatGPT query emits about 2–4 grams of CO2. A latte emits around 840 grams. And you would need well over 100,000 AI queries to match one passenger’s emissions on a transatlantic flight from London to New York.
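For a rough sense of scale, here’s a back-of-envelope sketch in Python using the figures above. The flight figure of roughly 600 kg of CO2 per passenger for a one-way London to New York trip is a commonly cited ballpark assumed here for illustration, not a number from this article.

```python
# Rough CO2 comparison using the article's per-query and per-latte figures.
# The per-passenger flight figure (~600 kg, one-way London-New York, economy)
# is an assumed ballpark for illustration, not a figure from the article.

QUERY_G_CO2 = (2, 4)        # grams of CO2 per ChatGPT query (low, high)
LATTE_G_CO2 = 840           # grams of CO2 per latte
FLIGHT_G_CO2 = 600_000      # assumed grams of CO2 per transatlantic passenger

for per_query in QUERY_G_CO2:
    print(f"At {per_query} g per query:")
    print(f"  queries per latte:  {LATTE_G_CO2 / per_query:,.0f}")
    print(f"  queries per flight: {FLIGHT_G_CO2 / per_query:,.0f}")
```

On those numbers, one latte is worth a few hundred queries and one flight is worth roughly 150,000–300,000, which is where “well over 100,000” comes from.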

Make informed trade-offs rather than dismissing the issue entirely or treating every query as a crisis.

The AI industry is consolidating fast. Training sophisticated models requires hundreds of millions in capital and access to scarce technical expertise. The number of organisations capable of building foundational AI systems is small and shrinking. We’re looking at a natural monopoly situation.

For mid-sized businesses, this creates a problem. If you don’t develop internal capability now, you’re going to be buying expensive tools from monopolistic providers later, with no influence over how they work or what values they embed.

Organisations that establish ethical AI practices today will have leverage tomorrow. They’ll be able to demand transparency, fair pricing, decent terms from providers. They’ll have the expertise to know when they’re being sold something that doesn’t actually serve their interests.

Collective action matters. Individual ethical choices, whilst important, aren’t going to prevent monopolisation. That requires coordinated pressure to curb monopolistic abuse without killing innovation.

But that coordination only works if enough organisations develop the internal expertise to understand what they should be demanding in the first place.

What to do on Monday morning

Give everyone in your organisation access to AI tools. Provide proper training. Create space for people to experiment and learn. Encourage them to share what works and what doesn’t.

This is how you build the institutional knowledge to make good ethical decisions about AI.

Once people have real experience with the technology, then you can have meaningful conversations about where to deploy it more formally, what guardrails you need, and what your ethical principles should actually look like in practice.

The goal is to develop the institutional muscle to make thoughtful choices as you scale.

The organisations building this capacity now will shape how AI gets used in their sector.
