AEC November / December 25



Building Information Modelling (BIM) technology for Architecture, Engineering and Construction

Twinmotion: a new era

Reshaping the architect-friendly viz tool

The rise of the data lake

Freedom from files: AEC firms seek data ownership

NXT BLD 2026

Visionary event returns 13-14 May

Bentley frames AI future

An ecosystem of specialised agents

editorial

MANAGING EDITOR

GREG CORKE greg@x3dmedia.com

CONSULTING EDITOR

MARTYN DAY martyn@x3dmedia.com

CONSULTING EDITOR

STEPHEN HOLMES stephen@x3dmedia.com

advertising

GROUP MEDIA DIRECTOR

TONY BAKSH tony@x3dmedia.com

ADVERTISING MANAGER

STEVE KING steve@x3dmedia.com

U.S. SALES & MARKETING DIRECTOR

DENISE GREAVES denise@x3dmedia.com

subscriptions

MANAGER

ALAN CLEVELAND alan@x3dmedia.com

accounts

CHARLOTTE TAIBI charlotte@x3dmedia.com

FINANCIAL CONTROLLER

SAMANTHA TODESCATO-RUTLAND sam@chalfen.com

AEC Magazine is available FREE to qualifying individuals. To ensure you receive your regular copy please register online at www.aecmag.com

about

AEC Magazine is published bi-monthly by X3DMedia Ltd 19 Leyden Street London, E1 7LE UK

T. +44 (0)20 3355 7310

F. +44 (0)20 3355 7319

© 2025 X3DMedia Ltd

All rights reserved. Reproduction in whole or part without prior permission from the publisher is prohibited. All trademarks acknowledged. Opinions expressed in articles are those of the author and not of X3DMedia. X3DMedia cannot accept responsibility for errors in articles or advertisements within the magazine.

Register your details to ensure you get a regular copy register.aecmag.com

Industry news 6

Aecom acquires AI start-up for $390m, Trimble brings collaboration into SketchUp, KREOD to bring ‘aerospace-grade precision’ to AECO, plus lots more

Trimble builds AI strategy on agentic platform 14

Agentic AI to be embedded across Trimble’s technology stack and within the workflows of customers and partners

AI in AEC news 16

City of Raleigh using AI to gain insight into traffic, Tektome extracts knowledge from past projects, Viktor to simplify coding for automation, plus lots more

A peek into Rayon’s AI future 18

We preview some of the promising new AI tools for the “Figma of 2D CAD”

AI Spotlight Directory 18

AEC Magazine’s new directory will help you find the AI tools aimed at your particular pain point or specific function within the AEC concept-to-build process

Cover story: views from the data lake 20

When transitioning away from proprietary files, some AEC firms are seeking out cloud-based databases that promise genuine ownership of data

SpeckleCon 2025 26

We report from Speckle’s annual meet-up, where the open-source data platform promises AEC users a chance to break free from proprietary lock-ins

NXT BLD 2026 30

Make a diary date for 13-14 May 2026, when we will be holding the tenth edition of NXT BLD, incorporating NXT DEV

Bentley Systems shapes its AI future 32

From civil site design to construction planning, Bentley is embedding AI across more of its tools, while giving firms full control over outcomes and data

Event report: Graphisoft Ignite 2025 38

Company executives and customers shared Graphisoft’s latest technology developments and showcased some impressive project work

Twinmotion: a new chapter for arch viz 42

Epic Games is reshaping its arch viz tool, aligning it more closely with Unreal Engine while enhancing geometry, lighting, materials, and interactivity

28 days later: end-user licence agreements 46

Last month’s cover story on the trend in EULA metastasis certainly provoked a wide range of responses. Martyn Day provides an update

Introducing Gaussian Splats for AEC 48

Reality capture and surveying have undergone considerable technological changes in recent years. Now there’s a new kid on the block, but what does it mean for AEC workflows?

Aecom acquires AI start-up Consigli for US $390m

Global infrastructure engineering giant Aecom has acquired Norwegian AI startup Consigli, in a deal worth 4 billion NOK (approx US $390m), according to Norwegian business newspaper Dagens Næringsliv (DN). Prior to the acquisition, Aecom and Consigli had worked together on several projects. Consigli founder and CEO Janne Aas-Jakobsen will take up the role of head of AI engineering at Aecom.

The acquisition marks a notable shift in the AEC software landscape, where startups are typically snapped up by established software vendors rather than their customers. It suggests that Aecom views Consigli’s technology — and its engineering talent — as a strategic differentiator.

Consigli brands itself as “The Autonomous Engineer”, an AI agent for space analysis, unit optimisation, automated MEP loadings, level 3 modelling, report generation, plant room optimisation, tender documents de-risking, O&M docs and more. The company claims its technology gives reductions in engineering time of up to 90%.

It is not yet clear if Consigli will remain a commercial entity or will be reserved solely for Aecom’s internal use.

For Aecom, the deal addresses a growing battleground: the convergence of construction services and embedded predictive software. With project margins under pressure and clients demanding real-time, data-driven insights, owning a targeted AI layer would offer a clear differentiator.

From Consigli’s perspective, being part of a global infrastructure services business opens access to large-scale projects and the global engineering-delivery pipeline. And, if DN’s reported figure of $390 million is correct, it’s the exit many start-ups in this space can only dream of.

The acquisition has raised eyebrows across the AEC industry. Software vendors weighing potential startup targets may now feel added pressure, and losing one to a customer could spark a frenzied wave of negotiations. Meanwhile, other AECO firms may feel compelled to behave more like tech companies — or risk being pushed into commodity territory. Consigli was approached for comment but did not respond by the time of publication.

■ www.consigli.ai

Autodesk targets estimators with new tool

Autodesk has launched Autodesk Estimate, a new cloud-based estimating solution that connects 2D and 3D takeoffs to costs, materials, and labour calculations.

Part of Autodesk Construction Cloud, Autodesk Estimate aims to help contractors produce more accurate estimates and proposals, eliminating manual merges, spreadsheet juggling, and switching between disconnected tools.

■ www.construction.autodesk.com

Allplan unveils 2026 products

Allplan’s new 2026 product lineup — developed for architects, engineers, detailers, fabricators, and construction professionals — puts a strong emphasis on automation, collaboration, and sustainability.

To expand its sustainable design capabilities, the new release features an integration with Preoptima Concept, a third-party tool for whole-life carbon assessments (WLCAs).

Meanwhile, with GeoPackage DataExchange, urban planners and designers can integrate GIS data. According to Allplan, providing access to accurate site context supports better decisions on zoning, infrastructure, and environmental impact.

Elsewhere, a new AI Assistant is said to improve planning efficiency and decision making by providing guidance on Allplan workflows, AEC standards, and best practices, while offering smart suggestions for tasks, including coding support.

■ www.allplan.com

Vektor.io to bring visibility to rail project

ERB Rail JV PS, the joint venture leading the construction of the Rail Baltica mainline in Latvia, one of the largest infrastructure projects in Europe, has chosen Vektor.io as its platform for managing and visualising infrastructure data.

ERB will use the platform to bring together 2D plans, BIM models, GIS data, and other reference materials spread across many different formats and systems, directly in the browser, accessible both in the office and on site.

■ www.vektor.io

On stage at NXT BLD 2025: Consigli founder and CEO Janne Aas-Jakobsen will now be Head of AI Engineering at Aecom

ROUND UP

Water systems

CivilSense ROI Calculator is a new digital tool designed to help municipalities and utilities make informed decisions by revealing hidden water losses within the supply network and demonstrating the financial benefits of building sustainable, resilient water systems ■ www.oldcastleinfrastructure.com

QA/QC standards

A new report by PlanRadar suggests inconsistent QA/QC standards are eroding margins, driving rework, and fuelling disputes across construction projects. According to the study, companies rank QA/QC among their top priorities, yet 77% still report inconsistent documentation that varies across projects, sites and trades ■ www.planradar.com

Ability BuildingPro

ABB has launched ABB Ability BuildingPro, a platform designed to connect, manage, and optimise building operations. Acting as a ‘central intelligence hub’, it unifies data from building systems to improve performance, reduce energy use, and enhance occupant experience ■ www.abb.com

Net zero school

Arup has designed a Net Zero Carbon in operation (NZCio) Welsh school using performance modelling technology from climate tech firm, IES. The software will help reduce carbon emissions at the Mynydd Isa Campus by over 100 tonnes per year ■ www.iesve.com

Content control

Avail has launched new add-ons for AutoCAD and Civil 3D that extend Avail’s content management solution directly into the AutoCAD and Civil 3D environments, helping firms organise, visualise, and reuse their CAD content, including block libraries ■ www.getavail.com/autocad

Facilities platform

Schneider Electric has launched EcoStruxure Foresight Operation, an ‘open and scalable, intelligent platform powered by AI’ designed to unify energy, power and building systems in facilities such as data centres, hospitals and pharmaceutical campuses ■ www.se.com

Trimble brings collaboration directly into SketchUp

Trimble has built a new suite of collaboration tools directly into the heart of SketchUp for Desktop, alongside improvements to documentation, site context, and viz.

The 3D modelling software now includes private sharing control, in-app commenting, and real-time viewing, allowing designers to collect feedback from clients and stakeholders without leaving the SketchUp environment. Designers can securely share models with stakeholders, controlling who can view and comment.

Feedback is attached directly to 3D geometry, ensuring comments are linked to the right part of the model.

“Great designs are shaped by conversation, iteration and shared insight,” said Sandra Winstead, senior director of product management, architecture and design at Trimble. “Rather than jumping between email threads or third-party tools to hold conversations, collaborate and make design decisions, we’ve built collaboration directly into SketchUp.”

■ www.sketchup.com

HP connects physical and digital

HP has enhanced its construction management platform, HP Build Workspace, introducing mobile-enabled scanning and AI-powered vectorisation directly from HP DesignJet multifunction printers.

HP AI Vectorization enables the conversion of raster images into ‘clean, editable’ vector drawings suitable for CAD applications. The multi-layered AI Vectorisation engine, which is trained on real architectural and construction plans, also includes object recognition, so it can identify architectural elements such as doors, windows, text, and dashes.

Currently, the processing is done in the cloud, but next year users will also be able to run jobs locally on HP Z Workstations, such as the HP Z2 Mini. Meanwhile, through HP Click Solutions integration, AEC professionals can also ‘seamlessly print’ documents from HP Build Workspace directly to HP DesignJet printers.

HP has also launched a new large format printer, the HP DesignJet T870 (pictured left), which is billed as a compact, versatile 24-inch device that combines high-quality output with sustainable design.

■ www.hp.com

Chaos Vantage 3 launches for real time viz

Chaos has released Chaos Vantage 3, a major update to its visualisation platform that allows AEC professionals to explore arch viz scenes in real time, complete with real-time ray tracing.

Headline features include support for Gaussian splats, enabling users to place their projects directly into lifelike environments, USD and MaterialX for asset exchange across varied pipelines, and access to the Chaos AI Material Generator, to give AEC users precise control over the look of a scene.

With Vantage 3, AEC users can now make the most of Gaussian splats that are part of their V-Ray Scene files. Gaussian splatting allows the real world to be captured as detailed 3D data, quickly turning photos or scans of objects, streets or entire neighbourhoods into editable 3D scenes. Architects and designers can then place their projects directly into lifelike environments, creating an immediate sense of scale and context.

■ www.chaos.com/vantage

OpenSpace invests in tracking firm

OpenSpace has acquired Disperse, the company behind the construction progress tracking technology that powers OpenSpace Progress Tracking, which helps construction teams identify productivity issues earlier in projects.

The system combines 360° jobsite imagery, computer vision, and “expert human verification” to provide an objective, trusted, and detailed view of what has been built — and what hasn’t.

“Disperse was built to give construction teams a trustworthy, objective picture of progress,” said Olli Liukkaala, CEO of Disperse. “By joining OpenSpace, we can deliver that clarity at unprecedented speed and scale — and bring even more value to GCs, owners, and specialty contractors on projects of every size.”

OpenSpace Progress Tracking is part of OpenSpace’s Visual Intelligence Platform.

■ www.openspace.ai

Nemetschek partners with Takenaka

Nemetschek Group – the AECO software developer whose brands include Graphisoft, Vectorworks, Allplan and Bluebeam – and Takenaka, one of Japan’s largest construction companies, have signed a Memorandum of Understanding (MoU) to advance digital transformation and AI-driven solutions in construction.

The MoU initiates a strategic partnership to develop and pilot AI-assisted, cloud-based, and open digital platforms that streamline and enhance collaborative workflows across planning, design, construction, and operation processes.

Key areas outlined within the agreement include a commitment to best practice exchange through regular knowledge-sharing sessions, methodologies, and operational insights.

Nemetschek and Takenaka will also focus on joint AI and digital platform innovation, working together to identify, prioritise, and develop cloud-based digital and AI solutions for the AECO sector.

Secure data sharing and validation form another cornerstone of the agreement, with governance models and technical safeguards established to enable data-driven transformation. Both parties also reaffirmed their commitment to data protection and compliance.

“This partnership with Takenaka, a true leader with deep expertise in the construction industry, is a pivotal step,” said Marc Nezet, chief strategy officer at the Nemetschek Group. “By combining their extensive, practical know-how with our advanced digital and AI capabilities, we are co-creating a more efficient, sustainable, and data-driven future for the entire AEC/O industry.”

■ www.nemetschek.com

Scia Engineer 2026 launches

Scia has released Scia Engineer 2026, a major update to its multi-material structural analysis and design software.

Headline enhancements include new tools for designing structures subject to mobile loads, full compliance with 2nd generation Eurocodes, and automated wind load generation in compliance with the latest US design code ASCE 7-22.

Other new features include “accurate and economical” design of structures subjected to significant torsion and new tools to help eliminate vibrations due to human activity.

■ www.scia.net/scia-engineer

Chaos V-Ray to support AMD GPUs for rendering

Chaos V-Ray will soon support AMD GPUs, so users of the photorealistic rendering software can choose from a wider range of graphics hardware, including the AMD Radeon Pro W7000 series and the AMD Ryzen AI Max Pro processor, which has an integrated Radeon GPU.

Until now, V-Ray’s GPU renderer has been limited to Nvidia RTX GPUs via the CUDA platform, while its CPU renderer has long worked with processors from both Intel and AMD.

Chaos plans to roll out the changes publicly in every edition of V-Ray, including those for 3ds Max, SketchUp, Revit, Rhino, Maya, and Blender.

At Autodesk University 2025, both Dell and HP showcased V-Ray GPU running on AMD GPUs – Dell on a desktop workstation with a discrete AMD Radeon Pro W7600 GPU and HP on an HP ZBook Ultra G1a with the new AMD Ryzen AI Max+ 395 processor, where up to 96 GB of the 128 GB unified memory can be allocated as VRAM.

“[With the AMD Ryzen AI Max+ 395] you can load massive scenes without having to worry so much about memory limitations,” said Vladimir Koylazov, head of innovation, Chaos. “We have a massive USD scene that we use for testing, and it was really nice to see it actually being rendered on an AMD [processor]. It wouldn’t be possible on [most] discrete GPUs, because they don’t normally have that much memory.”

■ www.amd.com ■ www.chaos.com

Architects promised construction control

ProjectFiles from Part3 is a new construction drawing and documentation management system for architects, designed to help ensure the right drawings are always accessible on site, in real time, to everyone who needs them.

According to the company, unlike other tools that were built for contractors and retrofitted for everyone else, ProjectFiles was designed specifically for architects.

ProjectFiles is a key element of Part3’s broader construction administration platform, and also connects drawings to the day-to-day management of submittals, RFIs, change documents,

instructions, and field reports.

Automatic version tracking helps ensure the entire team is working from the most up-to-date drawings and documents. According to Part3, it’s designed to overcome problems such as walking onto site and finding contractors working from outdated drawings, or wasting time hunting through folders trying to find the current structural set before an RFI deadline.

The software also features AI-assisted drawing detection, where files are automatically tagged with the correct drawing numbers, titles, and disciplines.

■ www.part3.io

Jacobs unveils flood modelling platform

Jacobs has introduced Flood Platform, a cloud-hosted hub for flood modelling that is designed to support the planning and delivery of critical flood infrastructure programs.

Flood Platform is designed to help firms overcome the challenge of managing and interpreting large volumes of data, especially as flooding and extreme weather events become more frequent and severe.

The subscription-based offering, built on Microsoft Azure technology, standardises how users manage, view and analyse flood-related data, and acts as a central location for data, simulations and collaboration.

■ www.floodplatform.com

Amulet Hotkey boosts 1:1 workstations

Amulet Hotkey has updated its CoreStation HX2000 datacentre remote workstation with a new Intel Core Ultra 9 285H processor option, delivering higher clock speeds and built-in NPU AI acceleration.

The CoreStation HX2000 is built around a 5U rack mounted enclosure that can accommodate up to 12 single-width workstation nodes that can be removed, replaced, or upgraded.

Each node is accessed by a single user over a 1:1 connection and can be configured with a choice of discrete Nvidia RTX MXM pro laptop GPUs.

Amulet Hotkey is also developing a new CoreStation HX3000, which will feature full Intel Core desktop processors and low-profile Nvidia RTX and Intel Arc Pro desktop GPUs.

■ www.amulethotkey.com

KREOD to bring ‘aerospace-grade precision’ to AECO

London-based KREOD is planning to bring “aerospace-grade precision” to the built environment with its new KREODx platform, which has launched in beta.

The software harnesses Parasolid from Siemens Digital Industries Software, a geometric modelling kernel that is typically found inside mechanical CAD tools such as Dassault Systèmes Solidworks, Siemens Solid Edge, and Siemens NX.

The software combines Design for Manufacture and Assembly (DfMA) principles with a building-centric approach to Product Lifecycle Management (PLM) — a process commonly used in manufacturing to manage a product’s data, design, and development throughout its entire lifecycle.

KREODx is said to be powered by “Intelligent Automation” with parametric design and engineering workflows that “eliminate errors and accelerate delivery”.

The software offers full support for Bill of Materials (BoM) to deliver what the company describes as a single source of truth for costs, materials, and procurement, giving transparency from model to assembly.

According to the company, KREODx is also aligned with the circular economy, extending building lifespans, reducing waste, and enabling re-use and adaptability over time.

■ www.kreodx.com

Cintoo launches ArcGIS integration

Cintoo has released Esri Experience Builder Widget for ArcGIS, enabling users to stream and interact with high-res, mesh-based laser scan data directly within the Esri ArcGIS environment.

The Esri Experience Builder Widget brings immersive 360-degree panoramic views and virtual inspection capabilities to ArcGIS. According to Cintoo, this is particularly beneficial for indoor and brownfield projects where conventional GIS tools often struggle to handle dense or detailed spatial data.

The integration converts terrestrial and mobile LiDAR data into lightweight 3D meshes. According to Cintoo, this maintains full point cloud fidelity while enabling teams to ‘effortlessly work’ with what were previously large, complex datasets.

Users can analyse scan data, compare it to BIM and CAD models, and manage asset tags without leaving the ArcGIS environment. Use cases include managing a large-scale facility, or overseeing construction progress.

■ www.cintoo.com

Sisk boosts capture with DroneDeploy

Sisk, one of Ireland’s largest construction and engineering companies, is using DroneDeploy’s reality capture platform to enable faster inspections, higher data accuracy and real-time visibility across 20+ projects in Ireland and the UK.

Sisk’s geospatial engineering program covers capture with aerial drones and 360 cameras. DroneDeploy is used across flagship developments including Dublin’s Glass Bottle project and the Kex Gill road realignment scheme in Yorkshire, helping teams capture, analyse and share high-resolution site data for progress tracking, design verification and stakeholder communication.

■ www.dronedeploy.com

Esri powers digital twins for NHS estates

Manchester University NHS Foundation Trust (MFT) has gone live with a digital twin of six hospitals as part of its strategy to create a smart estate. Replacing disparate systems and paper-based processes, the digital twin visualises floors, rooms and spaces with associated data and is already being used to understand space optimisation and support the management of RAAC and asbestos. Future plans include adding indoor navigation, patient contact tracing and real-time asset tracking.

Created using Esri UK’s GIS platform, which includes indoor mapping, spatial analysis, navigation and asset tracking, the digital twin went live in October.

■ www.esri.com

Fast-track digital innovation in your organisation with a free Remap Hackathon.

Our Hackathons give your team space to explore ideas, test new workflows, and prototype solutions — all in a single, energising day. We’ve run them with…

And yes… we run them for free!

Preparation

We help you gather ideas and workflow challenges from your team, then shortlist the top opportunities to focus on during the Hackathon.

The Day

A guided innovation sprint with briefing, collaborative hacking and optional CPD, helping your team learn, experiment and prototype new digital solutions.

Outcomes

You gain a practice-wide ideas list plus refined concepts or prototypes that address your most valuable workflow challenges.

Why run a hackathon with us?

1. Innovation Discovery: Structured idea-gathering to identify opportunities that genuinely matter to your team.

2. Collaborative Problem-Solving: Guided development sessions where multidisciplinary staff work together to explore and prototype solutions.

3. Technology Insight Sessions: Optional CPDs during lunch covering topics such as computational design, automation, data workflows, or emerging tools.

4. Prototyped Solutions: End-of-day presentations to the organisation sharing the outputs — from sketches and workflows to functioning prototypes or solved pain points.

We are a London-based digital transformation and software development consultancy helping built-environment organisations progress their digital ambitions through practical, collaborative innovation events.

Trimble builds AI strategy around foundational agentic AI platform

As industry giants continue to flesh out their AI strategies, Trimble’s Dimensions conference in Las Vegas was an opportunity for the company to relay its own plans for integrating the technology with its various products and services

Where is the construction software market headed, and what role is AI likely to play in it? These are the two questions that Trimble executives were keen to address at the company’s annual Dimensions conference, held in Las Vegas in November 2025.

While many vendors seemingly feel obliged to label every product announcement as ‘AI-driven’, Trimble’s message suggests a more considered strategy, based on embedding agentic AI across its technology stack and within the workflows of customers and partners.

At the heart of this strategy is a new agentic AI platform, Trimble Agent Studio, designed to provide a standardised foundation for building agents, complete with services for their deployment, security and management.

Trimble positions Trimble Agent Studio as both an internal development layer and an extensible system on which customers and partners can build. The company arguably has little choice but to take this route. After all, construction is a fragmented market, so any AI push that sees the technology embedded directly within individual products would simply carry that fragmentation forwards. In other words, AI needs to be a foundational technology, not a bolt-on.

A platform-first approach

Trimble’s response is its Agent Studio, which abstracts the plumbing to enable multi-agent workflows and cross-tool automation. Trimble Agent Studio is currently in pilot testing at a handful of select customers.

Executives at Trimble say this platform-first approach is necessary because the number of AI use cases emerging in construction is outpacing any vendor’s ability to build bespoke features. The theory is that an agentic layer lets Trimble embed intelligence into onboarding, object generation, data search, field reporting and asset maintenance more consistently, rather than replicating engineering work across SketchUp, Tekla, ProjectSight, Viewpoint and Trimble Connect.

In practice, this is also an attempt to reinforce the company’s wider ‘Connect and Scale’ strategy, an ongoing push to break down internal data silos and move towards a more coherent ecosystem.

The practical expression of this is seen in the AI features funnelling through Trimble Labs, the company’s early-access programme. SketchUp already has AI rendering available, with object generation and an assistant planned for late 2025. Tekla has rolled out user and developer assistants and an AI-based cloud drawing capability. ProjectSight offers help agents, auto-submittals and AI title block extraction in production, with daily report agents in testing. Similar assistants are on the way for Trimble Connect and Unity Maintain.

If the pace of this rollout seems brisk, a long list of phrases such as “expected in Q4 2025” and “available in early 2026” signals that this will be a multi-year transition rather than a ‘big bang’ delivery of a fully realised AI ecosystem.

New dimensions

Perhaps the most grounded Dimensions announcement was that relating to ProjectSight 360 Capture, a new workflow that brings 360-degree imagery directly into Trimble’s project management and collaboration environment.

The idea is simple enough: the user walks a site equipped with a 360-degree camera, an AI algorithm maps the path automatically, and the imagery is stitched back into drawings to form a living record of as-built conditions. The system aligns images, filters out faces for privacy, links images to tasks and workflows, and allows comparisons of progress over time.

This move feels overdue. The market has been moving towards integrated reality capture for years, with various point solutions offering site-walk automation, issue documentation and progress tracking. Trimble’s advantage lies not in the novelty but in its ability to tie the captures directly into broader workflows via ProjectSight and Trimble Connect, and by extension into ERP systems such as Spectrum and Vista. Issue identification and resolution, from BCF topics to change orders, becomes part of a single ecosystem rather than a patchwork of uploads and shared links. This is precisely the type of connective tissue the industry has historically struggled to build. Trimble’s vision makes this look more tangible.

The company also took the opportunity to announce Trimble Financials at the Dimensions event. This targets small contractors who lack the skills, time and staff to manage full ERP (enterprise resource planning) systems. Instead, this new subscription tool handles job costing, AP/AR visibility, cashflow dashboards and proposal generation, backed by an AI assistant capable of answering basic financial questions.

Taken together, the announcements relating to Trimble Agent Studio, ProjectSight 360 Capture and Trimble Financials all point to a company that is knitting together what has previously been a broad and somewhat disjointed portfolio into something more unified, using AI for both connective logic and marketing narrative.

Whether an agentic platform can genuinely standardise workflows across the full range of Trimble products remains to be seen – but Dimensions 2025 delivered a coherent signal of intent that Trimble’s AI will be grounded in delivering real workflows.

■ www.trimble.com

Harness the Power of AI

HP Z workstations, accelerated by NVIDIA RTX PRO™ GPUs, are purpose-built to accelerate complex AI and design workflows, from generative design to diffusion modeling.

AI NEWS BRIEFS

NavLive AI

NavLive has chosen the Nvidia Jetson Orin system-on-module as the core of its latest handheld construction scanner. The scanner integrates five sensors, including three cameras, LiDAR, and an IMU, all streaming data simultaneously to support SLAM and computer vision tasks

■ www.navlive.ai

Corona gets AI boost

Corona 14, the latest version of the photorealistic architectural rendering engine for 3ds Max and Cinema 4D, supports AI material generation, AI image enhancement, Gaussian Splats, procedural material generation, and new environmental effects

■ www.chaos.com

Howie investment

Howie, a Vienna-based startup developing AI-driven data management and analytics tools for the AEC sector, has received a six-figure investment from Dar Ventures. The funding will support Howie’s work on a platform that centralises and analyses project information

■ www.howie.systems

Struck secures €2M

Amsterdam-based startup Struck has raised €2 million in seed funding to further develop its AI platform that gives architects, developers, and municipalities clearer insight into regulations and, in the future, enables automated design-compliance checks

■ www.struck.eu

Track3D raises $10m

Construction monitoring software startup Track3D has raised $10m in Series A funding. The company’s platform uses advanced AI models to transform reality data into ‘actionable insights’, including progress tracking, work-in-place validation, and performance analytics tied directly to drawings and quantities

■ www.track3d.ai

AI CONTENT HUB

For the latest news, features, interviews and opinions relating to Artificial Intelligence (AI) in AEC, check out our new AI hub

■ www.aecmag.com/ai

Tektome extracts knowledge from past projects

Tektome has unveiled KnowledgeBuilder, an AI-powered platform designed to automatically organise massive volumes of siloed project documents to help AEC teams make smarter decisions while avoiding repeated mistakes.

The software, already piloted by Takenaka Corporation, one of Japan’s largest construction firms, analyses and extracts key content from scattered files — including drawings, reports, photos, and handwritten notes on marked-up PDFs — and consolidates the data into a central, structured, and “instantly searchable” knowledge base.

The platform enables architects and engineers to ask questions in plain language and quickly see how similar issues were handled in the past, eliminating the need to “reinvent the wheel.”

According to the company, even non-IT staff can configure what to pull from drawings, proposals, photos or meeting minutes without coding or complex setup.

KnowledgeBuilder works across PDFs, CAD files (DWG & DXF), scanned images, handwritten markups, and more. BIM file support (RVT & IFC) is coming soon.

■ www.tektome.com

Viktor to simplify coding for automation

Viktor, a specialist in AI for engineering automation, has launched App Builder, a new tool designed to enable engineers without coding experience to automate tasks and build tools ‘in minutes’.

App Builder is designed to accelerate workflows and improve consistency and quality by eliminating the need to retype data or copy and paste between Excel and design tools, while ensuring transparency, governance, and compliance.

Automation tasks include design checks, selection workflows, and calculations through a ‘user-friendly’ interface that can be shared and reused across projects.

App Builder can also integrate with AEC tools such as Autodesk Construction Cloud (ACC), Plaxis, Rhino, and Revit.

■ www.viktor.ai

A.Engineer to drive engineering calcs

A.Engineer is a new agentic AI platform for structural and civil engineers, designed to automate calculations and reporting. The software helps accelerate workflows and improve consistency, and allows engineers to spend less time on manual tasks and more time on creative design, instructions, and quality assurance.

■ www.a.engineer

City of Raleigh using AI to gain insight into traffic

Technology from Esri, Nvidia, and Microsoft is being used in a pilot project for the City of Raleigh to better understand traffic flows and impacts. The “Raleigh In Motion” digital twin project uses AI to analyse massive volumes of data captured by real-time cameras.

The traffic monitoring software allows city officials to monitor current and historic traffic flows through key intersections in the city. Nvidia AI technology processes real-time video from hundreds of cameras around the city to identify vehicles, bikes, and pedestrians.

These feeds are then mapped in Esri ArcGIS where city officials can analyse which intersections are congested based on time of day and day of week. Indicators on the map turn from green to yellow to red as congestion builds up in intersections. The dashboard can even flag current events like stalled vehicles in an intersection that are impacting traffic.

The digital twin can help Raleigh identify dangerous intersections, reduce congestion, provide safer roadways, better respond to incidents and intelligently plan how to remediate problematic junctions.

■ www.esri.com ■ www.nvidia.com

Bluebeam Max to boost Revu with AI

Bluebeam Max is a new subscription plan, due to launch in 2026, that is designed to bring AI features to the PDF-based collaboration and markup tool Revu.

A new Claude integration will bring natural-language AI prompts directly into Revu, allowing users to automate tasks and turn markup data into ‘actionable insights’. AI-Review and AI-Match are new intelligent tools that will help uncover design issues early, detect scope gaps, and compare drawings.

■ www.bluebeam.com

Construction ERP platform embraces AI

CMiC has launched Nexus, a construction ERP (Enterprise Resource Planning) platform powered by more than 25 AI agents.

Nexus is said to redefine how construction teams interact with data, automate workflows, and make decisions — enabling them to focus on high-value strategic work instead of repetitive processes.

“Our AI-powered features offer users advanced data visualisation capabilities, business intelligence tools, and the ability to leverage natural language to optimise key business functions,” said Gord Rawlins, president & CEO, CMiC.

■ www.cmicglobal.uk

mbue launches AI-powered submittals

Construction technology company mbue has launched a new platform designed to automatically generate “fully compliant” product data submittals, helping subcontractors reduce weeks of manual work to just minutes.

mbue Submittals uses a proprietary AI model and computer vision to analyse drawings and specifications — including electrical schedules — and automatically extracts the required product information. The platform can generate submittals for light fixtures, switchgear, panels, devices, and other materials, streamlining a traditionally time-consuming and error-prone process.

“Submittals have traditionally been a painful, labour-intensive process that slows projects down,” said Garrett White, VP, Texas at Weifield Group. “With mbue, we’re seeing submittals completed in a fraction of the time, with greater accuracy and confidence. This is a huge step forward for the industry and directly helps us increase the amount of volume each PM can deliver. We can do more with the same team, which directly contributes to our company’s revenue growth.”

■ www.mbue.ai/submittals

Maket teases new release

Maket is gearing up for the launch of Maket 2.0, its next-generation AI platform for architects.

Building on its original floorplan generator, new AI agents will focus on technical compliance, performing a range of tasks such as zoning verification, HVAC planning and material calculations.

■ www.maket.ai

A peek into Rayon’s AI future

Among the many next-gen CAD tools, Rayon stands out as the only one not targeting Revit. Instead, it’s squaring off against AutoCAD in the world of drawings. Recently, the company has been showing off some forthcoming AI tools that look pretty promising, writes Martyn Day

While other companies promised ‘Figma for BIM’, Rayon is delivering ‘Figma for 2D CAD’. This Paris-based start-up was founded in late 2021 by Bastien Dolla and Stanislas Chaillou and sits in the category of ‘lightweight but capable’ design tools. Rayon’s goal is to modernise bread-and-butter 2D workflows.

To date, the company has raised almost €6 million in funding. Its €4 million seed round back in 2023 may have been relatively modest by AEC start-up standards, but its product has seen serious development velocity over the intervening period. Some argue that for many design professionals, day-to-day architectural and interiors work simply doesn’t require full-fat BIM or the overhead of legacy desktop CAD. This is where Rayon is said to fit.

Rayon’s product is a browser-native drafting and space-planning environment, designed for quick turnarounds and easy sharing, rather than for offering users encyclopaedic feature depth. That said, users can import DWG, DXF or PDF files, sketch out walls and zones, place objects, annotate and publish drawings.

What distinguishes Rayon is its exceptional user interface, its huge library of content and the collaborative layer wrapped around it. Multiple users can work on the same model, libraries and styles are shared across teams, and stakeholders can view or comment without wrestling with installs, versions or licensing. It’s designed as a modern collaborative productivity tool, more agile than the self-contained, file-centric, desktop-based CAD system.

Rayon deliberately positions itself a few weight classes below rivals such as Autodesk’s AutoCAD or AutoCAD LT, but is firmly focused on sharpening its AEC-relevant feature set. In time, it could become a contender.

AutoCAD remains the industry’s Swiss Army knife: customisable, scriptable and tightly wired into enterprise workflows. But the next generation might be tempted by fresher user interfaces. Rayon focuses on the sizeable cohort of architects, interior designers and space-planners whose work is largely 2D, who need clean drawings rather than a full modelling environment, and who value fast onboarding and frictionless sharing over specialist depth.

Where Rayon is stronger than the incumbents is in its immediacy. The barrier to entry is low, collaboration is native rather than bolted on, and it handles the everyday tasks that small and mid-size studios repeatedly perform. Whether that’s enough to shift entrenched AutoCAD habits is a separate question, but Rayon at least offers an alternative shaped around contemporary workflows, rather than decades-old assumptions about how drawings should be produced and exchanged.

AI ahead

The professional and educational background of Rayon co-founder Stanislas Chaillou leaves little doubt that AI will be an important part of the company’s future technology offering. The former architect and ex-employee of Autodesk previously attended Harvard University’s Graduate School of Design and the Swiss Federal Institute of Technology (EPFL), where he focused on the intersection of geometry, generative algorithms and design workflows. His book, Artificial Intelligence and Architecture: From Research to Practice (www.tinyurl.com/AI-arch-book), provides a detailed timeline of AI developments from the technology’s early history to current architectural applications. Along the way, it exposes both the vast potential and the limitations of AI/ML in design.

Rayon’s other co-founder Bastien Dolla, meanwhile, takes an active role in defining how ML features get integrated into design workflows. The company has been signalling on LinkedIn and Instagram that it has some interesting AI features coming down the line and they do indeed look very cool.

Rayon’s AI boost will give it the capability to act as a workflow-embedded companion to architects and space planners, adding rendering and model-view automation together with asset generation. It will also be able to handle natural language input. For example, a user might type in the following description: ‘Open plan office, 10m by 15m, glass partition, 1.2m high.’ The system will then produce a block or arrangement automatically. If robust, this kind of capability could lower the friction of asset reuse and speed up the otherwise tedious process of library searching, sizing and insertion. The software also has a cool AI capability of generating axonometric views automatically from 2D layouts.
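Rayon hasn’t shown how its text-to-block generation works under the hood, but the core idea (turning a free-text description into a typed parameter set that a CAD engine can then instantiate) can be sketched in a few lines of Python. The code below is purely illustrative: every name in it is invented, and a production system would more likely use an LLM constrained to a JSON schema than regular expressions.

```python
import re
from dataclasses import dataclass

@dataclass
class BlockSpec:
    """Structured parameters extracted from a natural-language prompt."""
    kind: str
    width_m: float | None = None
    depth_m: float | None = None
    height_m: float | None = None

def parse_prompt(prompt: str) -> BlockSpec:
    # Hypothetical parser. The output contract is the point: a typed spec
    # that downstream geometry code can turn into a block or arrangement.
    p = prompt.lower()
    kind = "office" if "office" in p else "room"
    dims = re.search(r"(\d+(?:\.\d+)?)\s*m\s*(?:by|x)\s*(\d+(?:\.\d+)?)\s*m", p)
    height = re.search(r"(\d+(?:\.\d+)?)\s*m\s+high", p)
    return BlockSpec(
        kind=kind,
        width_m=float(dims.group(1)) if dims else None,
        depth_m=float(dims.group(2)) if dims else None,
        height_m=float(height.group(1)) if height else None,
    )

print(parse_prompt("Open plan office, 10m by 15m, glass partition, 1.2m high"))
# BlockSpec(kind='office', width_m=10.0, depth_m=15.0, height_m=1.2)
```

However the parsing is done, the hard part is what the sketch glosses over: mapping the spec onto a firm’s own content library with sensible defaults for everything the prompt leaves unsaid.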

Another point of focus is collaborative, multi-user editing, which will be baked into the AI layer. Rayon’s existing multi-player canvas and library sharing will get a smarter overlay, with the software’s AI panel providing suggestions and auto-snapping design elements. It could perform version-history rewinds in response to semantic changes; for example, ‘You placed a room, convert to zone and annotate.’ This coherence across drafting, annotation and sharing looks amazing.

Rayon has also shown off visual-to-block/image-to-object conversion. This involves using an image or 360-degree capture to auto-extract walls, doors or furniture and turn them into native elements. For smaller practices working on refurbishments or adapting existing plans, this kind of feature represents a practical shortcut. It doesn’t compete with full BIM reality-capture workflows, but it rides comfortably in the gap between desktop CAD and immersive scan-to-model technologies.

From mundane to magic

Rayon’s planned AI features would seem to have much to offer: the reduction of mundane labour, a leaner drawing workflow, some extra magic sprinkles. The strategic bet that the company’s management is making is that if you can make 80% of drawing-related work faster and easier for small and medium-sized practices, there is a potentially huge niche to be conquered and one that the larger vendors tend to ignore.

The real issue here will be how well Rayon executes on its AI plans. How flexible will the text-to-block generation be? How well will it handle edge cases? Will the AI panel remain an optional helper, or will it become a locked-in workflow? All this remains to be seen, but if Rayon gets it right, this start-up may well shift the way that mid-market AEC firms think about CAD.

■ www.rayon.design

AEC Magazine’s AI Spotlight Directory

At this stage in the hype cycle, there seems to be a new AI app aimed at AEC users every week. AEC Magazine’s new directory will help you find the AI tools aimed at your particular pain point or a specific function within the AEC concept-to-build process

AI is penetrating the AEC sector at a pace that is both energising and slightly disorientating. Despite the marketing bluster, AI in AEC is still in its formative phase, defined less by a single design market and more by a set of overlapping experiments, architectural ideas and emerging business models.

But the level of AI ambition now being directed at the built environment is unmistakably rising. From early-stage design exploration to construction sequencing and claims analysis, AI and the automation it brings is beginning to reshape expectations of what digital tools should deliver. The real problem is finding the right AI tool for the job.

To help bring some clarity, AEC Magazine has launched the AI Spotlight Directory (aidirectory.aecmag.com), a curated and continuously updated view of the expanding AI ecosystem across architecture, engineering, construction and operations.

The directory already tracks more than 100 companies, products and features, spanning start-ups experimenting with novel generative approaches, mid-stage firms building targeted automation tools, and major vendors integrating AI into their BIM platforms and cloud environments.

What stands out most is the breadth of approaches. Some applications are genuinely generative, offering geometric options, massing studies or space-planning schemes. Others deploy machine learning for tasks such as classification, prediction, optimisation or schedule risk analysis. Many established tools now include AI-powered features that accelerate documentation or automate repetitive modelling tasks. And as with any early technology wave, there are also prototypes that feel more exploratory, solutions searching for real-world traction or awaiting more mature AI foundations.

This diversity raises an important point. Our task is not to prematurely anoint ‘winners’, but to understand the spectrum of activity, the differing levels of maturity, and the specific workflow niches that each tool targets. As we saw in BIM’s early development, the industry needs clarity if it is to separate genuine capability from hype.

The entries in the directory reveal a market that is anything but uniform. Generative design systems sit alongside automated documentation platforms. AI-enabled construction-site analysis tools share space with model interrogation assistants. New BIM 2.0 vendors are embedding AI deeply into their data models and user experiences, while incumbent BIM suppliers are layering machine learning and natural-language interfaces onto existing product lines. The variety reflects an industry trying to determine where AI delivers genuine advantage, balancing speed, accuracy, quality and consistency, and where it risks becoming an overapplied marketing label.

Each directory listing provides concise but structured insight: what the organisation does, where its AI capability sits within the product, which stage of the workflow it addresses, and how mature the offering currently appears.

The intention is to go beyond branding and allow readers to compare propositions, follow market shifts and track where innovation is clustering. It also highlights broader patterns, such as the rapid emergence of AI in documentation and planning, the growing interest in site-based intelligence, and the significant investment now flowing into data-centric ‘BIM 2.0’ platforms.

Above all, we hope you use our AI Spotlight Directory as a living resource. As the market evolves, as firms pivot, and as new ideas emerge from both start-ups and established vendors, we will continue to expand and refine the coverage.

AI may well represent the next major platform shift for the AEC industry and as such deserves a clear, authoritative map for buyers. This directory represents AEC Magazine’s commitment to building that buyer’s guide for the entire community.

■ www.aidirectory.aecmag.com

Cover story

Views from the data lake

AEC firms are starting to recognise the value they could extract from their data, if only it weren’t scattered across countless systems, formats and project contributors. The answer for many is a shift away from proprietary files in favour of cloud-based databases that promise genuine ownership of data and better control over it

Executive summary for normal humans

To many in the AEC industry, data lakes and lake houses sound very much like approaches about which only a CIO or software engineer would care. But the truth is far simpler: this is about finally getting control of your own project information, and stopping the madness of exporting, duplicating and re-formatting the same models over and over again.

Think of a BIM file as a shipping container. Everything you need is technically inside it, but you can only open it from one end and moving it around is slow and clumsy. If you need one box from the back, you still have to haul the entire container to take out its contents.

A lake house is the opposite. It behaves like a warehouse in which every object inside a project – every wall, room, door, parameter, schedule item, photo, scan or RFI – sits neatly on a shelf. It is indexed, searchable and instantly accessible. Nothing has to be unwrapped, exported or repackaged. Every tool, whether internal or external, sees the same live information at the same time. This immediately solves three of the most familiar pain points in BIM delivery.

The first is speed: in a file-based world, clashes are found tomorrow, or Friday, or at the coordination meeting. In a lake house, clashes appear while someone is still modelling the duct.

Checking rules, validating properties, or running energy assessments can happen continuously, not in slow cycles defined by exports.

Second comes ownership: right now, firms only ‘own’ their data in theory. In practice, it sits inside proprietary formats and cloud silos, to which access is metered, limited or simply not available. A lake house flips that. The data sits in open formats you can control yourself. Vendors don’t get to decide what you can do with your own project information.

The third issue is AI deployment: every firm wants to apply AI to its project history, but almost no firm can, because the data is so scattered. A lake house finally puts all data in one well-governed venue, so that AI tools can use it. The future will be AI agents working on your project information, doing cost prediction, design optimisation, risk profiling and automated documentation – all the things that many professionals in the AEC industry would love to see.

This shift isn’t about technology for its own sake. It’s about reducing rework, stopping duplication, ending lock-in, improving quality and giving firms back control of the knowledge they already produce. For an industry built on coordination, clarity and shared understanding, it sounds like a transformation we need.
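To make the warehouse analogy concrete: once every element is a row in an open, indexed table, fetching ‘one box from the back’ is a single query rather than a file export. Here is a minimal sketch using DuckDB (a real in-process query engine); the table and its fields are invented for illustration.

```python
import duckdb

con = duckdb.connect()
# Hypothetical open-format project table: one row per BIM element,
# rather than one opaque file per model.
con.execute("""
    CREATE TABLE elements (
        element_id      VARCHAR,
        category        VARCHAR,
        level           VARCHAR,
        fire_rating_min INTEGER
    )
""")
con.execute("""
    INSERT INTO elements VALUES
        ('door-001', 'Door', 'L02', 30),
        ('wall-113', 'Wall', 'L02', 60),
        ('door-007', 'Door', 'L03', 60)
""")
# 'One box from the back': fetch exactly the objects needed, without
# opening, exporting or repackaging an entire model file.
rows = con.execute("""
    SELECT element_id, level
    FROM elements
    WHERE category = 'Door' AND fire_rating_min >= 60
""").fetchall()
print(rows)  # [('door-007', 'L03')]
```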

As companies in the AEC industry digitise, it is increasingly recognised that their most valuable asset is not to be found in drawings, models, or even the tools used to produce them. Their most valuable asset is the data buried inside every project: data that captures geometry, relationships, parameters, costs, clashes, RFIs, site records and more.

This data represents the accumulation of knowledge that a firm may have spent decades developing and supporting. Yet much of it is scattered across different formats and systems, often locked behind proprietary file structures and metered cloud APIs.

This familiar situation is accompanied by an uncomfortable truth: employees at these firms can open their files, they can view their models, but the firm does not meaningfully own or control the data within them. If employees want to analyse it, run automation on it, or train an AI model using it, they must purchase additional subscriptions from their software provider, or even additional applications. As the industry chases meaningful digital transformation, this dependency has increasingly become a strategic liability.

But there is an alternative. AEC firms can benefit by shifting from files to their own in-house, cloud-based databases, gaining long-overdue control of their data and possibly freeing themselves from proprietary bottlenecks.

The answer may lie in data lakes and data lake houses, which offer an open data architecture where project data can live as governed, queryable, interoperable information, rather than isolated blobs inside Revit RVTs, AutoCAD DWGs or proprietary cloud databases. This is the landscape into which the industry is now moving.

RIP files

For thirty years, the AEC industry has revolved around files: RVTs, DWGs, IFCs, 3DMs, PDFs, COBie spreadsheets and thousands of others, all acting as containers for design intelligence. In the desktop era, this was entirely logical. Everything had to be saved, packaged, versioned and emailed. Files were the only workable abstraction.

But the reality of modern project delivery is vastly different from that of the 1990s. Today’s projects involve thousands of files, generated by hundreds of tools, frequently being used by people working across multiple firms. The result is a digital environment that is fragmented, brittle and slow, one in which coordination headaches, data duplication and time delays are not bugs in the system but features of the file-based architecture itself.

Meanwhile, cloud platforms, APIs and, increasingly, artificial intelligence (AI) are becoming the defining technologies of modern workflows – and these technologies do not want files. They want structured, granular, persistent data that is streamable, queryable, validated and can be used by multiple systems simultaneously.

A monolithic RVT file cannot support real-time analysis, multimodal AI or firm-wide automation. It was never designed to perform that way.

In fact, firms are realising that their true competitive asset isn’t a file at all, but the data inside it, with its representations of objects, parameters, schemas and historical decisions. A file is simply a container that slows everything down, and because most file formats are proprietary, the data ends up trapped inside someone else’s business model.

Few voices have been more consistent, or more technically grounded, on this point than Greg Schleusner, principal and director of design technology at HOK, who also represents the Innovation Design Consortium (IDC). For years, Schleusner has argued that the AEC industry must stop treating BIM as a file-based activity and start treating it as data infrastructure.

At NXT BLD, Schleusner laid out the problem plainly (tinyurl.com/schleusner-NXT). Revit knows everything about an object the moment it is drawn, he told attendees, but that intelligence is locked inside a file until someone performs an export. That could be hours, days or weeks too late. As he put it: “It’s never been an issue getting the metadata out from Revit. It’s always just been the geometry that’s been the slow part.”

Schleusner began his presentation by analysing how the media & entertainment industry has solved similar problems. Pixar’s USD format became the standard for exchanging complex geometric and scene information across tools. Its depth far exceeds IFC, but it has no concept of BIM data. To tackle this issue, the Alliance for OpenUSD, whose members include Nvidia and Autodesk, aims to extend USD into AEC, but this work is still in progress and remains far from replacing BIM data requirements.

The more radical step in Schleusner’s research is his call to stop thinking about BIM models as files at all. Instead of exporting federated models, or waiting for ‘Friday BIM drops’, he proposes streaming every BIM object – every wall, window, beam, annotation – into an open database the moment it is authored. Each element would have its own identity, lifecycle and change history and would be immediately available for clash detection, rule validation, energy checks or analytics while the model is still being authored. This vision is now driving a broader conversation about how AEC firms should manage their project data, and why data lakes and lake houses are becoming unavoidable.

Data lakes 101

To understand why the ‘data lake house’ has become such an important issue in AEC, it’s worth stepping back and looking at how other industries have navigated the data problem. The first major attempt to manage organisational data at scale arrived in the late 1980s with the data warehouse. Warehouses were designed for one job: to pull clean, structured information out of operational databases through a strict ETL (extract, transform and load) pipeline and then serve it up as predefined reports.

The data warehouse did this well, but only within narrow boundaries. Data warehouses were expensive, rigid and completely unprepared for the tidal wave of semi-structured and unstructured content that would later define the digital world, including images, logs, documents, telemetry, and later, multimodal information.

By the early 2000s, the so-called Big Data era hit. Organisations in every sector began generating vast amounts of high-velocity, highly varied data that had no obvious schema, and traditional warehouses were overwhelmed. Their rigid structure was a poor fit for unpredictable information.

The tech industry response was the data lake, an architectural about-face: instead of structuring data before storage, firms dumped all data – structured, semi-structured, unstructured – into a cheap cloud object store such as Amazon’s S3, and only transformed it later as needed. This ELT approach (extract, load, transform) offered enormous flexibility and scale, giving rise to a marketing narrative that data lakes were the future.

But early data lakes developed severe problems. Without governance or schema control, they quickly became data swamps – vast, murky repositories in which inconsistent, duplicated and unvalidated data accumulated in an uncontrolled manner. Querying could be painfully slow. Trust in data deteriorated. And critically, data lakes lacked transactional integrity: one system could be reading data as another was rewriting it, resulting in broken or inconsistent results.

In short, data lakes solved storage issues but broke reliability, governance and performance – three qualities that AEC firms need more than most.

The data lake house emerged as the solution to this tension. It combines the low-cost, infinitely scalable storage of a data lake with the structure, reliability and transactional control of a warehouse. It does this by adding a relational-style metadata and indexing layer directly over open-format files stored in cloud object storage.

This hybrid design is the critical step that can turn a loose collection of files into something that behaves like a ‘proper’ database. With this metadata layer in place, a lake house can guarantee ACID (atomicity, consistency, isolation, durability) transactions, meaning that multiple systems can read and write simultaneously. It enforces schema, so project data follows well-defined structures. It maintains full audit trails, so lineage and accountability are preserved. And it allows BI tools, analysis engines and AI models to run directly on the live dataset, rather than duplicating extracts and creating conflicting versions.
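To make this tangible, here is a minimal sketch of those guarantees in Python, using the open-source PyIceberg library with a local catalog. The catalog settings, table name and schema are illustrative assumptions, not an AEC standard.

# A minimal sketch of lake house guarantees with PyIceberg
# (pip install "pyiceberg[sql-sqlite]" pyarrow). All names are invented.
import pyarrow as pa
from pyiceberg.catalog import load_catalog

# A local SQLite-backed catalog stands in for a cloud metastore
catalog = load_catalog(
    "local",
    type="sql",
    uri="sqlite:////tmp/catalog.db",
    warehouse="file:///tmp/warehouse",
)
catalog.create_namespace("project")

# Schema enforcement: every write must conform to this structure
elements = pa.table({
    "guid": ["a1", "b2"],
    "category": ["Walls", "Doors"],
    "fire_rating": ["60min", None],
})
table = catalog.create_table("project.elements", schema=elements.schema)

# Each append is committed atomically as a new snapshot (the ACID guarantee)
table.append(elements)

# Snapshots preserve lineage, so any query can target a consistent,
# point-in-time view of the project data (the audit trail)
for entry in table.history():
    print(entry.snapshot_id, entry.timestamp_ms)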

For AEC, this is not just convenient. It is foundational. AEC project data is inherently multimodal and includes solid models, meshes, drawings, schedules, reports, energy data, specifications, RFIs, documents, photos and point clouds, as well as the metadata that ties them together. A single Revit file can contain thousands of elements with their own parameters, relationships and behaviours. Trying to run AI, automation or cross-disciplinary analysis on this information using file-based workflows is like trying to do real-time navigation with a paper map that’s updated once a week.

In short, the lake house shifts the paradigm. A project dataset sits in one place, in open formats, behaving like a continually updated, queryable database. The file no longer defines the project; the data does.

At the base of most lake houses sits Apache Parquet, a columnar storage format that has become the industry standard. Parquet stores data by column rather than by row. That may sound minor, but it is transformative for analytical workloads. Most queries only need a few columns, so engines can read exactly what they need and skip everything else, reducing I/O dramatically. This is crucial in AEC, where models can contain thousands of parameters, but only a handful are needed for any given check.

Parquet’s openness is equally important. It avoids the proprietary format trap that has hamstrung AEC for decades. Once your project data is in Parquet, any tool in the open ecosystem, from Python and Rust libraries to cloud engines like Databricks, Snowflake or open-source query engines, can read it natively. You no longer need to negotiate with vendors or wait for APIs to mature. The data is yours.

If Parquet provides the necessary storage, then Apache Iceberg provides the intelligence. Iceberg is an open table format originally developed at Netflix to bring reliability, versioning and high performance to massive data lakes. It adds a metadata layer that tracks the state of a table using snapshots. Each snapshot refers to a manifest list, which itself points to a set of manifest files acting as indexes for the underlying Parquet data.

A manifest is effectively a catalogue: it lists which Parquet files belong to a table, how they are partitioned, what columns they contain and how the dataset has changed over time. Rather than scanning thousands of files to answer a query, Iceberg reads the manifests and instantly understands the structure.

This is an elegant solution with significant consequences. It delivers consistent views, because every query targets a specific snapshot, ensuring results remain complete and coherent even as new data is being written. It provides true transactional safety, where any change is either fully committed as a new snapshot or not committed at all. And it supports genuine schema evolution, allowing columns to be added, removed or renamed without having to rewrite terabytes of Parquet files, an essential capability for long-lived AEC datasets. Together, Parquet and Iceberg deliver the reliable, unified project database that the AEC industry has never had before.

Netflix’s lake house solution

Some years ago, Netflix faced almost exactly the same problem the AEC industry struggles with today: mountains of fragmented, multimodal data, scattered across incompatible systems. Its crisis wasn’t content creation. It was scale, complexity and chaos – the same forces reshaping digital AEC.

Netflix’s media estate spanned petabytes of wildly different content: video, audio, images, subtitles, descriptive text, logs, user-generated tags and, increasingly, AI-generated embeddings. Each data type lived in its own silo. Data scientists spent more time hunting and cleaning data than training models. Collaboration slowed. Infrastructure costs soared. No one had a unified view of the organisation’s own assets.

To solve this, Netflix built a ‘media data lake’, using an open multimodal lake house architecture centred on the Lance format. Lance allowed Netflix to store every data type in one system, with relational-style metadata, versioning and schema evolution layered directly over cloud object storage. Critically, Lance enabled zero-copy data evolution — teams could add new AI features, such as embeddings or captions, without rewriting the underlying petabytes of source video.

The parallel with AEC is obvious. AEC’s data is just as varied and, just as with pre-lake house Netflix, this information is locked inside single-purpose tools. Every application exports its own version of the truth. Every team maintains its own copy. Every AI initiative begins with cleaning up someone else’s chaos.

The Innovation Design Consortium’s pursuit of “single export, multi-use” is effectively the AEC version of Netflix’s journey. By extracting project data into open Parquet tables and managing them via Iceberg, firms gain a single source of truth that supports transactional integrity, schema governance and reliable engineering workflows.

Suddenly, AI pipelines, energy-report parsers, internal search tools and firm-wide assistants become possible — because the data is unified, structured and finally under the firm’s control.

■ www.tinyurl.com/lance-netflix

Big AEC benefits

‘‘ Moving from siloed, proprietary files to an open, unified, AI-ready lake house is the clearest path to future-proofing an AEC firm ’’

Once AEC project data sits inside an open lake house – structured, governed and queryable – the benefits begin to accumulate quickly. The most immediate shift is that data finally becomes decoupled from the authoring tools that produced it. Instead of each BIM package jealously guarding its own silo of geometry and metadata, the project information lives in a neutral space where any tool can read it.

This single change unlocks capabilities that, until now, have been aspirational rather than practical. Firms can build their own QA systems that directly interrogate geometric and metadata standards across every project model, regardless of whether the source was Revit, Tekla Structures, Archicad or anything else. There is no export step, no format translation, no broken parameters. The data is just there, in a clean schema, ready to be queried.

With proper schema enforcement, elements, parameters, cost codes and property sets follow firm-wide standards. That finally makes automated reporting reliable, rather than a brittle sequence of half-working scripts – something the industry has talked about for a decade.
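As a sketch of what such a QA query could look like in practice, the following uses DuckDB to interrogate Parquet extracts directly. The file path and column names are invented for illustration.

# A hypothetical firm-wide QA check, run directly on open Parquet extracts
# with DuckDB (pip install duckdb pandas). Paths and columns are invented.
import duckdb

# Columnar storage means only the referenced columns are read from disk,
# even if each model export carries thousands of parameters
missing = duckdb.sql("""
    SELECT project_id, guid
    FROM read_parquet('warehouse/elements/*.parquet')
    WHERE category = 'Walls' AND fire_rating IS NULL
""").to_df()

print(f"{len(missing)} walls missing a fire rating, across every project")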

Once a dataset becomes trustworthy, AI and machine learning models become dramatically more effective. Instead of scraping data from a handful of projects, a firm can train predictive systems on its entire project history. Models can forecast costs from early design parameters, identify risky design patterns from past change orders, predict schedule risks or optimise building layouts for energy performance. These capabilities are not theoretical. They are exactly the sort of AI workloads that other industries have been running for years, but which have been hamstrung in AEC due to poor data foundations.

The same architecture also enables genuinely federated collaboration. Instead of exchanging bloated files, firms can give project partners secure, query-level access to precisely the objects or datasets they require, all drawn from a live single source of truth. A clash engine could read from the same table as a cost tool, which could read from the same table as an AI model or an internal search engine. That interoperability is the essence of BIM 2.0: a move away from document exchange and toward continuous data exchange.

In short, the lake house doesn’t just solve a technical problem. It opens a strategic opportunity: for firms to build their own intellectual property, automation tools and data-driven insights on top of a foundation that is finally theirs.

Automation penalties

As the industry accelerates toward automation-heavy workflows, the commercial incentives for large software vendors are beginning to shift in uncomfortable ways. Automation reduces the number of manual, named-user licences — the traditional revenue backbone of the design software business. And history suggests vendors rarely accept declining per-seat income without looking for compensatory levers elsewhere.

In today’s tokenised subscription world, that compensation mechanism may lead to higher token prices, steeper token consumption rates for automated processes, and an overall rebalancing designed to recover revenue lost to more efficient, machine-driven workflows. In effect, the more automation delivers value to practices, the more vendors will seek to recapture that value through metered usage.

This is precisely why the conversation around data lakes and lake house architectures matters so much. Owning your data is no longer a philosophical stance, it’s a strategic defence. If automation becomes a toll road, then firms need control of the highway. By centralising and owning their data, and by running automation on their own playing field rather than someone else’s, practices can decouple innovation from vendor metering and protect themselves from being priced out of the very efficiencies automation is meant to provide.

Data extraction

The next challenge is practical: how does the industry transition from thousands of RVTs, DWGs, IFCs and other formats to an environment in which project data lives as granular, structured, queryable information?

For past projects, there is no shortcut. Firms must extract their model archives – geometry, metadata, relationships – and convert them into open formats that the lake house can govern. This is labour-intensive but unavoidable if firms want their historical data to fuel analytics and AI.

But the real transformation begins with live projects. At HOK, Schleusner has no interest in continuing to export files forever. He is designing a future in which BIM data streams from authoring tools as it is created and directly into the lake. Instead of waiting days or weeks for federated models or periodic exports, the goal is a steady flow of BIM objects, with each wall, room, door, system and annotation arriving in the lake house in real time.

This turns the lake house from an archive into a live, evolving representation of the project. Real-time clash detection stops being a dream and becomes standard practice. Rules-based validation can run continuously instead of catching issues once a week. Analytics and AI can operate on the dataset as it changes, not after the fact.
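No authoring tool exposes such a hook today, but a streaming pipeline of this kind might look something like the sketch below. The on_element_changed callback, the delta shape and the validation rule are all hypothetical assumptions.

# Purely illustrative: a hypothetical pipeline for streaming BIM object
# deltas into a lake with continuous validation. Nothing here is a real
# authoring-tool API.
import datetime
import os
import pyarrow as pa
import pyarrow.parquet as pq

SCHEMA = pa.schema([
    ("guid", pa.string()), ("version", pa.int64()),
    ("category", pa.string()), ("fire_rating", pa.string()),
    ("received_at", pa.string()),
])

def validate(delta: dict) -> list[str]:
    # Rules run continuously, per object, instead of once a week
    if delta["category"] == "Door" and delta.get("fire_rating") is None:
        return [f"Door {delta['guid']} has no fire rating"]
    return []

def on_element_changed(delta: dict) -> None:
    # Hypothetical callback fired by the authoring tool for each edited object
    delta["received_at"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    for issue in validate(delta):
        print("LIVE QA:", issue)
    # Land the delta in the lake as a small Parquet fragment; a real
    # pipeline would batch these and commit them via an Iceberg table
    os.makedirs("lake/deltas", exist_ok=True)
    pq.write_table(
        pa.Table.from_pylist([delta], schema=SCHEMA),
        f"lake/deltas/{delta['guid']}-{delta['version']}.parquet",
    )

# A simulated edit arriving from the model
on_element_changed({"guid": "d-102", "version": 7,
                    "category": "Door", "fire_rating": None})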

But there are practical barriers to all this. The first stumbling block for the live streaming of BIM objects – inevitably – is Revit. When Schleusner approached Autodesk to ask whether Revit could stream objects as they are created, as opposed to files being saved, the answer was an unambiguous ‘No’. Revit’s underlying architecture was simply not built for this. The geometry engine and much of the historic core remain predominantly single-threaded, making real-time serialisation and broadcast of object-level changes impractical without significant performance penalties. In other words, the model cannot currently emit deltas as they occur.

Yet several vendors have found ways to work around these constraints. Rhino.Inside can interrogate Revit geometry dynamically. Motif claims to capture element-level deltas as they change. Speckle has developed its own incremental update mechanism capable of extracting small, structured updates, rather than monolithic payloads. Christopher Diggins, founder of Ara 3D and a contributor to the BIM Open Schema effort (www.tinyurl.com/d-revit), has also demonstrated experimental object streaming from Revit and recently released a free Parquet exporter for Revit 2025.

Parquet and Iceberg

Parquet and Iceberg form the backbone of the AEC data lake house. Why?

Apache Parquet has emerged as the AEC industry’s open file format of choice, storing data by column rather than by row – a simple shift that transforms performance. Most analysis only needs data from a handful of columns, so engines can read exactly what they need and skip everything else. When working with AEC data – thousands of BIM parameters, millions of elements – this reduction in I/O is essential.

Apache Iceberg, originally developed at Netflix, brings database-like intelligence to raw Parquet files. It adds a metadata layer that tracks tables using snapshots, each one referring to a set of manifest files. A manifest is essentially an index: a lightweight catalogue listing exactly which Parquet files belong to the table, what they contain, and how the dataset is partitioned.

Instead of scanning thousands of files to understand the table, Iceberg simply reads the manifests. This gives the lake house the qualities AEC desperately needs: consistent views, transactional safety, version control and the ability to evolve schemas without rewriting terabytes of data.

A step in the right direction for Autodesk is its granular data access in Autodesk Construction Cloud (ACC), which generates separate feeds for geometry and metadata – but only after the file is uploaded and processed. This is useful for downstream analysis, but it is not true real-time streaming.

Meanwhile, Graphisoft is in the early stages of developing its own data lake infrastructure to support the many Nemetschek brands and their schemas. It’s a trend that is now pervasive among the core AEC software suppliers.

As Schleusner puts it: “We don’t want to do what we currently have to do, which is every design or analysis tool running its own export. That’s just dumb.”

What the industry needs, he argues, is single export with multi-use. Data should be extracted once into an open, authoritative environment from which every tool can read and act. By putting BIM data into a shared platform, every tool, internal or external, can consume that data dynamically, without half the industry re-serialising or rewriting half the model every time they need to run a calculation, test an option, or update a view.

His experiments have ranged from SAT and STL to IFC and mesh formats, but none have provided the fidelity and openness he needs. His preference today is for open BREP – rich, precise, and free from proprietary constraints.

This is where the next piece of the puzzle appears: the Innovation Design Consortium (IDC). This group is emerging as the most significant collective data initiative the AEC sector has seen. It is a public-benefit corporation, comprising many of the largest AEC firms, primarily drawn from the American Institute of Architects (AIA) Large Firm Roundtable. These firms are pooling resources to solve shared data problems that no individual firm could tackle alone.

Schleusner joined the IDC’s executive committee nearly three years ago and brought with him a clear, technically grounded vision: to create a vendor-neutral foundation for project data that enables the streaming, storage and governance of BIM objects in an open, queryable architecture.

He is candid about why the industry hasn’t done this before: “The reason this has not been done or thought of being done today is because there’s no open schema that can actually hold drawing information, solid model representation and mesh representation.”

In his research, the closest fit he has found so far is Bentley Systems’ iModel, a schema and database wrapper that can store BIM geometry, metadata, drawings and meshes while supporting incremental updates. Crucially, iModel is now open source. That gives the IDC something the industry has never had: an adaptable, extensible schema that can act as the wrapper for all the lovely data.

There are caveats, of course. Solid models in iModel use Siemens’ Parasolid kernel, which is still proprietary. Some translation challenges remain. But as a starting point for an industry-wide intermediary layer, it is far further along than anything Autodesk, Trimble or Graphisoft have offered, although Bentley will still need to do some re-engineering.

The IDC is no longer in its prototype phase. It is actively building real tools for its member firms: extraction utilities, schema definitions, lake house integrations, and proof-of-concept pipelines that show Revit, Archicad, Tekla Structures and other tools publishing into a shared, vendor-neutral space.

The goal is not another file format. It is a live representation of BIM objects that can feed clash engines, QA systems, cost tools, search engines and AI pipelines without rewriting half the model each time.

The IDC also plans to support AI directly. “We’re going to start hosting an open-source chat interface that can connect to IDC, provide data and individual firms’ data, keeping them firewalled,” says Schleusner.

Another technology layer, LanceDB, is also being evaluated. This is emerging as one of the most compelling formats for a modern AEC lake house, because it solves a fundamental problem into which the industry is running headlong: multimodal data at scale.

BIM models are only one part of a project’s digital footprint. The real world adds drawings, specifications, RFIs, emails, photos, drone footage, point clouds, and increasingly, AI-generated embeddings. Traditional columnar formats like Parquet handle tabular data well, but struggle when you need to store and version media, vectors and other non-tabular assets in the same unified system. Lance was designed for this exact world. It brings the performance of a high-speed analytics engine, supports zero-copy data evolution, and treats images, video and embeddings as first-class citizens. Netflix built it because its data is inherently multimodal – but so is that of AEC firms. A lake house built on Lance can finally treat all project information – geometry, documents and media – as one coherent, queryable dataset.
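For a flavour of what this looks like in code, the sketch below uses LanceDB, the open-source database built on the Lance format. The table, fields and toy vectors are invented for illustration.

# A minimal multimodal sketch with LanceDB (pip install lancedb).
# Table name, fields and embeddings are illustrative assumptions.
import lancedb

db = lancedb.connect("./project_lake")

# Documents, media references and embeddings sit side by side in one table
assets = db.create_table("assets", data=[
    {"id": "rfi-001", "kind": "rfi", "text": "Clarify slab edge detail",
     "vector": [0.1, 0.9, 0.3]},
    {"id": "photo-884", "kind": "site_photo", "text": "Level 3 slab pour",
     "vector": [0.2, 0.8, 0.4]},
])

# Vector search across every modality at once, e.g. for an AI assistant
hits = assets.search([0.15, 0.85, 0.35]).limit(2).to_list()
print([hit["id"] for hit in hits])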

This is the first genuine attempt to build a shared AEC data infrastructure, one that is not driven by a vendor, but instead by firms who actually produce the work.

Another uncomfortable truth is that even if the industry succeeds in building a lake house for BIM data, model geometry and parameters alone are not enough to power meaningful, holistic AI across project data. A model on its own misses the context that lives in issues, approvals, RFIs, change orders, design intent and all those emails.

Worse, the data that does exist is scattered across owners, architects, engineers, specialists, CDEs and contractors. No single party holds the entire picture.

As Virginia Senf at Speckle explains: “It may be the large general contractors and top-tier multidisciplinary consultants who are best positioned to assemble project datasets, because owners are now demanding outcomes, not drawings.

“AECOM’s recent shift toward consultancy reflects this. But even if you gather everything, historical models are often inconsistent or simply wrong, and a lot of legacy BIM data is unsuitable for analytics or AI at all.”

The way forward

The shift to a data lake house isn’t an IT upgrade. It is a re-platforming of the AEC business model. Firms have spent decades selling hours, yet the real value they have generated – the patterns, insights, decisions and accumulated knowledge encoded in their project data – remains locked inside proprietary files and vendor ecosystems.

A lake house finally gives firms a way to monetise what they actually know. Data stops being a dormant archive and becomes a living asset that can predict outcomes, guide design intelligence, improve bids and reduce risk.

What makes this moment significant is that the architecture is now proven. Open formats such as Parquet and Iceberg have stabilised. Cloud object storage is cheap and mature. Tools capable of extracting BIM data into open schemas exist. And, crucially, the first coordinated industry effort, the IDC, is bringing firms together to build a shared, vendor-neutral foundation for the next decade of digital practice.

Moving from siloed, proprietary files to an open, unified, AI-ready lake house is the clearest path to future-proofing an AEC firm – especially when software companies are increasingly looking to toll ‘automated’ seat licences, which are used to batch process workflows using APIs to access the data.

The lake house replaces brittle integrations and repetitive exports with a single source of truth that every tool, and every emerging multimodal AI system, can build upon – and your practice will own its own data.

If BIM 1.0 was about authoring tools, BIM 2.0 is about the data itself — structured, queryable and controlled by the firms who produce it.

The IDC architecture is currently in development and will be adopted by its members when it’s ready. Wider distribution is currently being considered. For now, a number of very large firms have their own experiments internally and are deploying resources to build their own lakehouse stacks, mainly to own their own IP, run their own applications and experiment with creating bespoke AI tools and agents.

AEC Magazine will continue to explore this topic from a number of different angles in 2026 and it will certainly be a hot topic at NXT BLD in London, 13-14 May (www.nxtbld.com).

To join in this development work, membership is available through the IDC. ■ www.idc-aec.com


SpeckleCon 2025 Event report

Speckle’s open-source data platform promises AEC users a chance to break free from proprietary lock-ins and acts as the connective tissue between multiple industry design and engineering tools. Martyn Day reports from the company’s annual meet-up

It was good timing for AEC Magazine that Speckle held its annual customer conference – SpeckleCon – at exactly the same time as we were deep-diving on data, data lakes and data lake houses (see cover story on page 20). Executives at Speckle only had to wait 12 years for us to figure out that industry design data is being held captive in the wrong places and that some liberation effort is long overdue.

The company is driven by a core belief in the benefits of open-source software. Its BIM data platform is designed to move geometry, metadata and project information freely between common and uncommon software environments, including Revit, Rhino, Grasshopper, Archicad, Blender, Dynamo, QGIS, Python, and dozens more.

Speckle is a cloud-hosted environment, with plug-in connectors delivering multiple granular streams of versioned, structured data objects, not files, to its own database and visualiser.

But that’s just the start. Speckle has also built layers of super-useful capabilities that can be applied to this data. Once collated, teams can push, query, diff, fork and merge model data in much the same way as software engineers manage code. For computational designers, analysts and digital-delivery teams, this model-as-data approach removes the traditional bottleneck of proprietary formats and vendor-controlled interoperability.
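For a flavour of that model-as-data approach, the sketch below pulls the latest versioned objects from a Speckle server using specklepy, the platform’s Python SDK. The host, token and stream ID are placeholders.

# A minimal sketch with specklepy (pip install specklepy).
# Host, token and stream ID are placeholders, not real credentials.
from specklepy.api.client import SpeckleClient
from specklepy.api import operations
from specklepy.transports.server import ServerTransport

client = SpeckleClient(host="app.speckle.systems")            # placeholder host
client.authenticate_with_token("YOUR-PERSONAL-ACCESS-TOKEN")  # placeholder token

stream_id = "your-stream-id"                                  # placeholder
latest = client.commit.list(stream_id, limit=1)[0]

# Receive the commit's root object as structured data, not a file
transport = ServerTransport(client=client, stream_id=stream_id)
model = operations.receive(obj_id=latest.referencedObject,
                           remote_transport=transport)

# Objects can now be traversed, diffed or queried in code
print(model.speckle_type, model.get_member_names()[:10])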

At the SpeckleCons of the past, all presentations would be from generative and coding-savvy architects/structural engineers, who had chanced upon Speckle and used the ‘Swiss Army knife’ nature of its tools to craft their own applications for design tasks. Let’s just say the crowds were very Python-savvy.

This year’s event represented a phase change, as amongst the Python-centric presentations, there were also substantially more enterprise deployments on stage, as well as design IT directors in the audience.

The secret, it seems, is out. Large firms are seeing broader benefits to Speckle as a corporate-wide tool to solve all sorts of problems and enable capabilities that were simply not possible using current technologies from traditional vendors. I think this is partly due to maturity and word-of-mouth, but also the development at Speckle of additional high-level capabilities. These offer the company more of a commercial opportunity than simply looking to support their development via hosting fees.

‘‘ It could be that Speckle has somewhat ‘crossed the chasm’ from being an expert user’s weapon of choice via a single seat subscription to becoming a layer of technology to which design teams can connect and share design information throughout a company ’’

Virginia Senf is a newcomer to the Speckle team, joining as head of growth from Autodesk, where she was director of data strategy. Dimitrie Stefanescu remains as CEO and founder (after all, it was his PhD research that got the company started in the first place), but his co-founder Matteo Cominetti stepped down this summer.

Senf provides the drive to ‘productisation’ and commercialisation that was perhaps not obvious in the company’s previous, hacker-like mentality. Having customers such as Gensler and Suffolk Construction on stage was testament to the value that large firms are finding in Speckle. For most of its existence, Speckle looked like a handy ‘interoperability wiring loom’ to connect different applications through its own model definition. Today, it’s very clearly a data platform and an independent environment in which customers can actually do something useful with project data. This provides Speckle with the opportunity to develop application layers and capabilities for customers who have chosen to build a ‘shadow’ BIM data strategy. There has also been a lot of engineering work under the hood to get the platform to reach enterprise-ready status, such as support for SOC 2.

Speckle Intelligence is designed to aggregate the sprawl of siloed BIM data into decision-ready project information

New features

Stefanescu demonstrated new capabilities, including 2D mark-ups on 3D views and support for embedding 3D models in Miro for whiteboarding and presentations.

The biggest development, however, is Speckle Intelligence. This is a serious attempt to aggregate the sprawl of siloed BIM data into decision-ready project information. While the company frames the offering around five principles –visibility, versioning, versatility, validation and velocity – the underlying ambition is simpler. It’s to collapse the lag between a design change and a business decision.

Visibility exposes the basics that most teams currently struggle to see: what’s in the model, how many elements exist, how naming conventions diverge and where key KPIs sit, for example.

Versioning adds the temporal dimension, surfacing exactly what’s changed between iterations and, more importantly, why it matters to a programme, its cost or its compliance.

Versatility is its ability to extract insights from Revit, Rhino, Tekla, IFC and Autodesk Construction Cloud (ACC) with equal weight. That breadth matters, because firms increasingly operate multi-tool workflows and traditional analytics pipelines tend to collapse under inconsistent schemas.

Validation means models can be checked against project requirements or firm-wide baselines without the need to export them to secondary tools. It positions Speckle less as a transport layer and more as a governance layer.

Velocity ties the system together. Insights can be consumed through Speckle dashboards or streamed outward to enterprise data stacks such as Snowflake, Databricks or Fabric, where firms increasingly consolidate cost, carbon and performance metrics. This closes the loop. Models become measurable, differences become explainable and decisions become defensible.

Speckle Intelligence integrates, then aggregates, and then offers Power BI-style functions, such as dashboards, across all this federated data. It comes with a whole host of industry-related templates ready to go, simplifying the interrogation of project information without a Python script in sight. It’s possible to track modifications between model versions and validate models against standards. It’s a bit like ACC, plus Solibri, plus Navisworks – but in one open-source, modern, easy-to-deploy solution.

SpeckleCon speakers

For a one-day event, Speckle brought together an impressive range of speakers from around the world, representing firms including Stantec, Herzog & de Meuron, HENN, Royal Haskoning, Suffolk Construction and Gensler. We expect that, as in previous years, these talks will be made available online and their availability posted on LinkedIn.

Topics covered were hugely varied, but all focussed on Speckle as a data aggregator that enables firms to develop and run their own applications, whether they be running bespoke Python code or iterative workflows through disparate applications, or providing a central tool for all project teams to visualise big data and extract business metrics.

At the end of the day, I hosted a panel discussion with Suffolk’s Murat Melek, Herzog & de Meuron’s Michael Drobnik, and Gensler’s Vignesh Kaushik. The baseline takeaway was that all of these firms are adopting data-centric mindsets. While authoring tools are still important to these firms, they are also the cause of some of the chaos involved in getting access to data, which is where the real value to their businesses resides. All firms need strategies on how they procure and analyse their company’s intellectual property.

As design director for Asia Pacific at Gensler, Vignesh Kaushik started his Speckle journey during Covid, using his own credit card to try out the technology. Impressed with its performance, he built out multiple uses for Speckle within Gensler, integrating the technology with the firm’s own applications and building on top.

I could see how this probably drove the development of BI dashboard capabilities. As a side note, I did ask Kaushik about the biggest problem he currently faces, and he responded, unprompted, that it’s EULAs (end user licensing agreements) and their data-grabbing terms. With so many users in his charge, some unapproved installs can still creep in under the radar.

Suffolk, meanwhile, is a very unusual construction firm, in that while it uses all the standard digital tools, it also has a dedicated arm called Suffolk Technologies that tests and invests in interesting start-ups. The thinking here is focused on Suffolk solving its own problems, rather than waiting for help from the main vendors.

Because pre-construction is completely disconnected from its model, the goal at Suffolk is to tap into source data directly, in order to get it into the data lake early. Speckle enables the firm to automate the process of getting that data in, which is crucial, because different architects use different software, descriptions and materials.

Once inside, AI algorithms decide the closest matching code based on the Speckle data, allowing the firm to programmatically move information to its estimation system. In some respects, Speckle is acting as middleware here – a harmonisation layer before important business automations get run.

It’s perhaps fitting that Switzerland-based Herzog & de Meuron uses Speckle like a Swiss Army knife. Drobnik, the company’s lead design technologies associate, explained the firm’s multiple uses of Speckle. This long list includes data collation for authoring tools, comparing iterations, standardising data, building in-house applications and storing and sharing assembly libraries. Herzog & de Meuron also uses Speckle to create dashboards, automate pipelines and act as a platform for running its CALC LCA tools. The firm has had a long-term engagement with Speckle and has always been liberal-minded, in that it needs an independent data platform to glue many data workflows together. And with AI and in-house training, said Drobnik, demand for quality data is rising.

Same but different

SpeckleCon 2025 was a case of ‘the same, but different’, in comparison to previous years. The product champions who made up the audience of prior events were joined by more BIM managers and global IT directors, both on stage and in the audience.

It could be that Speckle has somewhat ‘crossed the chasm’ from being an expert user’s weapon of choice via a single seat subscription to becoming a layer of technology to which design teams can connect and share design information throughout a company. I suspect the turnkey BI/dashboarding will win over even more fans as the movement to host all project data internally builds pace.

There was a lot of talk about how broken the federated BIM world is and how data just doesn’t flow well between design and construction. The important takeaway is that mature BIM firms want 3D, because what they can do with it is highly valuable. The whole 2D drawing/documentation approach only amplifies the duplication of effort to which the current standard workflow is glued. Speckle is making the right moves to connect up both internal functions and wider project teams.

■ www.speckle.systems

New Research from Chaos Shows How AI Is Transforming Productivity in Architecture

Three years after AI hype reached the mainstream, it still dominates conversation in architecture. To ground the discussion, Chaos produced a new white paper built on 2025 practitioner interviews and internal research, revealing where AI creates value, where it stalls and what deserves attention next.

AI as a Creative Partner in Architecture.

While AI is shifting the balance of work in architecture, the designer remains central to every major decision. Instead of displacing creativity, new tools are reducing the manual steps that have long slowed early design. Tasks like documentation setup and visualization preparation are moving faster, allowing architects to devote more attention to ideas that shape the experience and performance of a project.

As a result, architects are becoming guides and interpreters, ensuring that AI-generated suggestions serve the narrative and the brief. As clients experiment

with AI on their own, professional authorship increasingly depends on how well designers communicate context, feasibility, and direction.

These shifts elevate the role of judgment. When visuals match the level of development, conversations stay focused on what matters most. When iteration answers a specific question, progress becomes decisive rather than overwhelming. Tools such as Chaos’ AI Image Enhancer support that shift by refining visuals within the design environment, while still relying on architects to determine alignment and meaning. Used with intention, AI strengthens the thread of design intent that runs from concept through delivery.

Responsible Use of AI Is Critical.

As adoption grows, architects are discovering that AI introduces new risks alongside new efficiencies. Some are familiar, but others are emerging only as usage spreads across real projects. Data privacy is one clear concern. Public models often learn from what is uploaded into them, which poses challenges for work containing client-sensitive or proprietary information. Some firms are now guided by contract language that defines which tools may touch project data and which must remain strictly internal.

Authorship is another area of attention. When multiple stakeholders generate ideas with the same tools, styles can begin to converge and creative identity can flatten. Designers are learning to protect the distinctive qualities of their work by curating references carefully and reviewing outputs with context in mind. Overtrust remains a risk as well, since AI can look convincing while misunderstanding the brief.

These new responsibilities are quickly becoming standard parts of delivery. Firms are establishing policies for safe use, training teams to verify outputs early, and keeping clients informed about how AI influences decisions. It’s clear that responsibility is no longer a separate topic, but rather a core ingredient of design integrity.

AI Delivers Real Productivity Gains in Design Workflows.

According to the insights from this white paper, the most valuable gains will come from eliminating redundant steps in the process rather than speeding up existing ones. Instead of rebuilding ideas multiple times, early inputs could move farther without interruption, supported by tools that understand phase, context, and performance goals from the start.

The Future of AI in AEC.

This shift is likely to be driven by AI capabilities embedded inside the authoring environments architects already rely on. When tasks like asset creation stop interrupting modeling flow — for instance, using Chaos’ AI Material Generator to turn a reference photo into a ready-to-use texture — teams keep momentum focused on design rather than tool switching.

The experts interviewed expect this transition to unfold gradually, but the direction is already visible.

The firms best prepared for what comes next will be those that pair strong design judgment with adaptability and clear process discipline. AI will not replace expertise, but it will make it more valuable, amplifying the importance of skills that filter, interpret, and communicate design intent.

Imagine. Design. Believe. With Chaos.

NXT BLD 2026

Make a diary date for 13-14 May 2026, when we will be holding the tenth edition of NXT BLD, incorporating NXT DEV. An open letter from Martyn Day.

It’s hard to believe that we have successfully staged no fewer than ten NXT BLD conferences. What began life as a one-day event for 200 attendees in the Foster + Partners-designed Great Court of the British Museum is now a two-day extravaganza that occupies the largest conference space available at the Queen Elizabeth II Centre in Westminster, London.

As always, we are exceptionally grateful to all of those who attend, give talks, exhibit and hang out in the evening, enjoying industry natter in various local inns. We realise our readers have very busy working lives, so every contribution is meaningful. Thanks also to Lenovo and its partners for providing the invaluable support in pulling this off on an annual basis.

The first thing we want to highlight is that NXT BLD 2026 will be held in May (13-14). In other words, it’s a month earlier than usual, due to venue availability.

Second, since it’s our tenth anniversary, we’ll be pulling out all the stops to ensure that this is our best NXT BLD ever – there’ll be more of everything, dialled up to 11, and some surprises along the way.


As with any professional conference, we recognise that it’s the delegates who bring much of the value and NXT BLD is unique in that it brings together designers, technologists, AEC industry analysts and researchers and a wide cross-section of folks from the tech world, including those from start-ups, established vendors, venture capital (VC) firms and merger and acquisition (M&A) specialists.

In many respects, this is a major gathering of the AEC design industry, all in one place, for two days.

‘‘ NXT isn’t just product demos – the sole raison d’etre of far too many BIM events, in our opinion. Nor is it just about startups. It’s about challenging the status quo, because frankly, certain aspects of how the software tools market has evolved deserve challenging ’’

For NXT BLD 2026, we will occupy the third floor of the QEII, the venue’s premium area, featuring high ceilings and more space. With exhibition and catering facilities located on the same floor, there’ll be less running around and more opportunity for those serendipitous meetings and ad hoc conversations that are such an important part of the event.

Representatives of software firms want to hear what leading AEC firms are doing and VCs are all ears, too. NXT is a platform from which design firms get the chance to speak back to the tech industry and demand the tools and capabilities that they need. After all, it’s through this event that we championed the Future Design Software Shopping List (www.tinyurl.com/AECsoftware) and who could forget our Pricing, Licensing and Business Model discussion (www.tinyurl.com/AEC-pricing)?

NXT isn’t just product demos – the sole raison d’etre of far too many BIM events, in our opinion. Nor is it just about startups. It’s about challenging the status quo, because frankly, certain aspects of how the software tools market has evolved deserve challenging.

Themes and variations

As NXT BLD evolved, we added a second day to directly address emerging BIM 2.0 developers and host some hard but necessary discussions that provide feedback to the industry on what needs to change. We realise that this confuses some people, as we have previously marketed the two as separate events. Since the vast majority of delegates now come for the entire two days, we are calling the conference NXT BLD, but the NXT DEV day remains, with slightly different agendas on each day.

NXT BLD (day 1) offers a blend of practitioners’ workflow advances, interesting projects, technology and workstation hardware. NXT BLD 2026 will also have distinct architecture and construction streams, with some hybrid crossovers.

NXT DEV (day 2), meanwhile, looks further ahead, offering a blend of thought leadership, hard questions and panel discussions on gnarly topics. There’ll be advice on dealing with threats such as EULA metastasis (www.tinyurl.com/EULANXT) and negotiating with your dealer/vendor, along with best-practice deployment guidance from every corner of the tech industry.

In terms of hot topics for 2026, BIM 2.0 really stands out, especially as startups in this space begin to reach some level of maturity. These firms will be in the room and on hand for questions. Some, such as expert BIM developer HighArc, will be publicly presenting in the UK for the first time.

AI is bound to be another hot topic, as we hear from firms training their own models, incorporating AI into their workflows and using it to write in-house applications on demand. At the same time, we’ll also be addressing the likely cultural impact of AI on the AEC sector and the weaknesses (as well as the strengths) of AI-powered automation.

Now that autodrawing capabilities are starting to be delivered, for example, we’ll be looking at the many tools that are springing up and how they work. The promise here is that if drawing production from BIM can be automated to some degree, firms can save money and concentrate on building better models. But does that mean that the AEC industry can finally escape PDFs? Will pain points such as contracts and planning benefit from digitisation? We want to hear all views.

From the ‘data lake’ front cover story of this issue of AEC Magazine, you can clearly see that we believe data is going to be an increasingly big issue as we move forward. It shouldn’t be long before firms can really own their data, free of vendors’ constraints around file formats and APIs. With AI agents and data lake houses, a whole new world may open up, boosting our understanding of project data which, with injected intelligence, will be able to speak for itself.

In case you still need convincing, all past talks from previous NXT BLD and NXT DEV events are available to watch on our dedicated portal (www.nxtaec.com). You can browse by year, speaker, theme or discipline, making it easy for you to trace how ideas such as AI-driven design, robotics, digital fabrication, BIM 2.0, simulation and open data have evolved across the decade we have been running these conferences. You can even create your own playlists, revisit standout presentations that have shaped your thinking, share talks with colleagues or clients, and use the catalogue as a research tool when evaluating new technologies or preparing internal strategy decks.

So please save the dates for NXT BLD: 13-14 May 2026. We will be announcing speakers on www.NXTBLD.com as our agenda develops, as well as keeping you informed about ticket availability. We look forward to seeing you there.

■ www.nxtbld.com

Alfonso Monedero and Pablo Zamorano, Heatherwick Studio, delivering the keynote at NXT BLD 2025

Bentley shapes its AI future

From civil site design to construction planning, Bentley Systems is embedding AI across more of its tools, enabling engineers to explore thousands of options, automate workflows and retain full control over outcomes, as Greg Corke and Martyn Day report

As AI continues to permeate all software companies, Bentley Systems wants to show how it can redefine infrastructure delivery by threading it through civil design, modelling, subsurface analysis, construction planning, documentation management and more. Intelligence is being embedded across much of the company’s portfolio – not as a bolt-on, but as a structural shift in how engineering data, models and organisational memory connect and how future engineering workflows will be shaped.

Customers can expect to see AI acting as a practical companion to engineers, capable of optioneering, interrogating models in natural language, automating laborious workflows, and delivering recommendations that remain anchored in engineering logic, codes and lived project constraints. Crucially, Bentley’s message is that users retain full control over these outcomes.

The AI message

At the company’s recent Year In Infrastructure (YII) event in Amsterdam, delegates were shown an AI future that is built around an ecosystem of specialised agents, engineered to perform site-grading optimisation, hydraulic calculations and drawing automation, as well as coding assistance and data discovery within ProjectWise. Bentley’s new ‘Plus’ generation of iTwin-native applications sits at the centre of this strategy, designed from the outset for AI-driven workflows.

At YII, Bentley Systems CEO Nicolas Cumins cited examples of infrastructure projects that are using AI to compress schedules by as much as 80%, as well as analyse thousands of design options that were never feasible before.

He acknowledged that it’s still early days for AI and that firms must be patient. “Not every project that adopts AI sees transformative results yet,” he told attendees. “Many teams are experimenting, learning and iterating, but that’s exactly how progress happens.”

But as the AECO industry leans on AI for more critical decisions, how do practitioners ensure that those decisions are trustworthy? “Engineers work in a creative profession, but one where precision is not negotiable and consequences are real,” said Cumins. “That’s why, at least for the foreseeable future, AI and infrastructure will remain a collaborative process with the human in the loop.”

Firms will also get crucial context for AI by learning from experience, tapping into what Cumins describes as an organisation’s “collective memory”. Here he took the opportunity to hammer home a point made several times at YII last year, on using customer data for training AI. “Your project data remains your data, always,” he said. “Only you decide if and when you want to use your data to train AI. Our role is to help you unlock the full potential of your data.”

Cumins went on to explain that firms can train AI assistants on decades of product documents, standards and expert insights, allowing engineers to ask questions in natural language, such as, ‘How did we design the foundation for a similar bridge last year?’, and receive relevant information in seconds.

“It’s like having the most experienced engineer available to every junior team member, on demand,” he added.

Of course, AI can only tap into institutional knowledge if that data is accessible and not trapped in proprietary formats and siloed systems. “We must not only unlock access, but also organise the data in consistent schemas, so AI can interpret and apply it across projects,” said Cumins.

In order to make AI recommendations truly useful, they must be grounded in a very specific kind of context, he explained. “AI needs to understand our intent, what we’re trying to achieve, and the constraints we operate under in design and construction,” he said. “This means understanding the desired outcome and performance, along with limitations such as budget, schedule, safety and carbon impact. If we ask AI to help lay out a new transit route, or optimise a bridge design, AI needs to be guided by what we try to achieve and what rules it must respect.”

Real-world context that reflects both the built environment (existing assets) and the natural environment (terrain, water and climate) is also essential, said Cumins. Of course, Bentley customers have plenty of tools at their disposal here, from reality modelling above ground with iTwin Capture, to modelling and analysing subsurface conditions with Seequent.

This provided a natural segue to the Bentley Infrastructure Cloud, a central store for all manner of infrastructure data including models, PDFs, inspection forms, photos, IoT sensors and more. “It frees infrastructure data from closed formats, aligns it to open schemas and prepares it for AI to deliver better outcomes across the entire lifecycle,” Cumins said.

In Bentley’s view, this lifecycle approach is essential for infrastructure AI. Executives at the company believe AI must also understand how designs perform once built. This involves maintaining a digital thread from design through construction to operations and feeding performance data back into the system. To this point, Cumins explained how the Bentley Infrastructure Cloud will evolve to support “full lifecycle integration”, enabling AI to make performance-based design recommendations.

Nicolas Cumins, Bentley Systems CEO

Finally, he explained the importance of engineering context. “AI must operate within the bounds of engineering logic and physical principles,” he said. “This means recommendations generated by AI must align with established engineering practices and be validated by design codes and sound judgement. Whether it’s proposing a foundation design for a high rise or recommending reinforcement details for a bridge deck, AI must respect engineering constraints, physical laws, safety standards and construction requirements.”

His words came with a warning: “Left unchecked, a purely data-driven AI might propose solutions that appear optimal in simulation, but are unsafe in the real world.”

Bentley is not undertaking this AI journey alone. Cumins introduced the Infrastructure AI Co-Innovation Initiative, whereby Bentley is inviting its users to help it create the next generation of AI-enhanced workflows, prioritising which APIs should evolve and how to better support AI use cases.

Cumins addressed the looming challenge facing all software firms: how to stay profitable amid rapid change. “We will also explore new commercial models that reflect the evolving balance between AI-driven and human-driven work,” he said.

The ‘Plus’ generation

Last year, Bentley unveiled civil site design software OpenSite+, the first in a new generation of AI-powered, iTwin-native desktop applications that are fully connected to the Bentley Infrastructure Cloud. All of Bentley’s ‘Plus’ products write data directly to a Bentley iModel, without having to go through an intermediary format like DGN.

OpenSite+ is designed to automate routine tasks such as drainage design and earthwork optimisation. Generative design allows users to evaluate thousands of site grading scenarios with a single click, optimising for both cost and constraints.

OpenSite+ is now being joined by two more ‘Plus’ products: OpenUtilities Substation+ for substation design and maintenance; and Synchro+ for 4D construction planning.

While OpenUtilities Substation+ serves a niche market, Synchro+ has significant growth potential. While Bentley’s mature Synchro 4D products are currently used primarily on larger projects, Bentley is betting on Synchro+ to lower the barrier to entry by delivering a single collaborative platform for planning and estimating.

Success will rest on a simplified UI, with AI-powered features enabling non-technical team members, not just BIM managers and schedulers, to engage directly with 4D planning. “A site manager can navigate a model, run a simulation or visualise sequences without needing deep technical training,” said Morgan Hays, senior director for construction product management at Bentley Systems.

AI is being used to bring new efficiencies to construction sequencing, automatically querying the model to quickly generate a 4D schedule, as Francois Valois, senior VP for open applications, explained: “The schedule might exist in Primavera or Microsoft Project, but linking it is something that is tedious.”

Bentley Copilot, meanwhile, is a context-aware AI assistant tailored to the needs of the engineering disciplines and infrastructure sectors. It relies on a large language model (LLM) that understands what engineers want to do and then leans on multiple AI agents trained on discipline-specific knowledge to do the actual work.

“It can interrogate requirement documents, interrogate the model, make complex changes that would have taken a lot of steps previously – and all of that by talking to the Copilot,” said Valois.

Bentley Copilot started out life as an experiment for the new generation of ‘Plus’ products but is now being rolled out across Bentley’s more mature MicroStation-based engineering applications as well, starting with OpenRoads Designer and OpenRail Designer for model-based road and rail design.

Bentley Copilot shares common core components across Bentley products, while an AI technique called retrieval-augmented generation (RAG) provides the specific contextual intelligence for each application. For example, in OpenSite+, Bentley Copilot understands site-civil workflows because Bentley has embedded additional domain knowledge directly within the software.
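
To make the RAG pattern concrete, here is a minimal sketch of the general technique: retrieve the most relevant domain snippets for a question, then hand them to a language model as context. This illustrates the generic approach rather than Bentley’s implementation; the snippets and the toy word-overlap scoring are invented stand-ins, and a real system would use vector embeddings and an actual LLM call.

# Minimal retrieval-augmented generation (RAG) sketch - generic technique,
# not Bentley's implementation. Real systems use vector embeddings; here a
# toy word-overlap score stands in for semantic retrieval.
DOMAIN_SNIPPETS = [
    "Parking bays are typically 2.4m x 4.8m; accessible bays need extra width.",
    "Site run-off is estimated from catchment area and rainfall intensity.",
    "Grading plans balance cut and fill volumes to minimise haulage cost.",
]

def retrieve(question: str, snippets: list[str], k: int = 2) -> list[str]:
    """Return the k snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(snippets, key=lambda s: -len(q_words & set(s.lower().split())))
    return scored[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question, DOMAIN_SNIPPETS))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return prompt  # a real system would send this prompt to an LLM

print(answer("How wide should an accessible parking bay be?"))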

Valois explained that each Bentley Copilot is built around four key capabilities.

First, they can understand product documentation, so users can ask natural-language questions such as, ‘How do I place an alignment in OpenRoads?’. The Copilot will respond based on relevant documentation.

Second, they can understand uploaded project requirements and tailor responses to the project context.

Third, and perhaps most importantly, they allow users to interact with design data. Because the ‘Plus’ generation products feature “intelligent models”, the Copilot can query those models directly, essentially generating SQL queries in the background to extract and interpret data.

Civil site design software OpenSite+ has Bentley Copilot built in

Fourth, Bentley Copilots can be used to execute commands, and because they have spatial awareness, they can understand design element relationships and intelligently modify models based on natural language prompts. Commands can even be chained together to carry out multi-step tasks. “It’s kind of creating that code in the background, and executing those commands,” said Valois.
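
The third of these capabilities, querying the model via generated SQL, can be pictured with a toy example. Everything below is a hypothetical stand-in: the table, the columns and the hardcoded ‘generated’ query simply mimic what a Copilot-style LLM might produce from the question.

# Toy sketch of the "query the intelligent model" pattern: an LLM turns a
# natural-language question into SQL, which is then run against the design
# database. All table and column names here are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE elements (id INTEGER, category TEXT, area_m2 REAL)")
db.executemany("INSERT INTO elements VALUES (?, ?, ?)",
               [(1, "ParkingBay", 12.5), (2, "ParkingBay", 12.5), (3, "Road", 840.0)])

question = "How many parking bays are on the site, and what is their total area?"
# In the real product an LLM would generate this from the question; here we
# hardcode a plausible result of that translation step.
generated_sql = "SELECT COUNT(*), SUM(area_m2) FROM elements WHERE category = 'ParkingBay'"
count, total = db.execute(generated_sql).fetchone()
print(f"{count} parking bays, {total} m2 in total")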

It’s still early days for Bentley Copilot, but at YII, we saw how engineers can instruct OpenSite+ to perform tasks such as adjusting the angle of parking bays, checking car park designs against local standards and calculating site run-off.

Furthermore, as the Copilot LLM in OpenSite+ has also been trained on Haestad Methods, it can perform complex calculations for hydraulic design and analysis from a simple voice prompt, instead of having to manually extract data, apply formulas and validate computational steps.
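
For a flavour of the kind of calculation being automated here, site run-off is classically estimated with the Rational Method, Q = CiA. The sketch below uses the textbook formula with illustrative values; it is not Bentley’s or Haestad’s actual computation.

# Rational Method peak run-off: Q = C * i * A / 360
# Q in m3/s, with i in mm/h and A in hectares - the textbook formula,
# shown only to illustrate the class of calculation, not Bentley's code.
def peak_runoff_m3s(c: float, intensity_mm_h: float, area_ha: float) -> float:
    return c * intensity_mm_h * area_ha / 360.0

# Illustrative values: paved car park (C ~ 0.9), 50 mm/h storm, 2 ha site
print(f"{peak_runoff_m3s(0.9, 50.0, 2.0):.2f} m3/s")  # 0.25 m3/s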

It’s easy to see how Bentley Copilot could automate certain workflows, but the human remains firmly in the loop. Users retain full visibility over all decisions that are made, as Ian Rosam, product management director for Bentley’s civil engineering applications, explained. “It maintains a history, so your conversations are recorded for posterity, but it also means that we can jump into previous conversations where the LLM perhaps has been used to make a change in the model, to preserve that history, to create that record and index of that change,” he said.

Of course, in the AI world, things are moving incredibly fast, so Bentley Copilot is not tied to a single LLM. “We have a layer of interaction between the large language model that we use and our software, so we can switch from one provider to the next easily, based on what is the flavour of the next six months.”

Agents and automation

As Bentley Copilot evolves, it seems inevitable that users will increasingly interact with their design tools through natural language – written or spoken – rather than relying solely on keyboard and mouse. This shift is likely to have a significant impact on user interface design. However, traditional methods of interaction aren’t going away any time soon. Perhaps the biggest impact that AI will have on user interface design is through agent-to-agent communication. Bentley CTO Julien Moutte explained that he is hearing from more engineering firms that are hiring data scientists to create their own agentic AI workflows. “They’re saying, ‘Is there any way for me to tap into some of the capabilities of your engineering products, so that I can combine this with my own agents that I’ve trained with my previous knowledge?’”

Moutte explained that, in the future, there will be a user experience and an agent experience within Bentley products. The challenge will be how to make the engineering capabilities of Bentley’s software approachable and discoverable to all types of AI agents.

“In many cases, people might not see the user interface of our software, but they will be using the engineering capabilities of it,” said Moutte.

Last year, Bentley announced its first forays into automated drawing production through OpenSite+, using AI to deliver significant time savings to a workflow that Bentley executives say can account for up to 50% of a site design project’s time.

That effort has now evolved into a formal feature called Label Optimizer. This is also being applied to Bentley’s MicroStation power-platform-based products, including OpenRoads Designer, OpenRail Designer and OpenSite Designer.

Label Optimizer uses machine learning to annotate drawings, automatically organising labels and eliminating overlapping text. In short, it makes drawings easier to read. It handles label placement, leader line length, text rotation and spacing, all without manual intervention.

“[With] the neural network that we built, we were able to get a much higher level of quality – just as good as a human could do, sometimes better,” explained Valois. He added that, by using AI rather than traditional algorithmic approaches, the software is able to factor in aesthetics as well.

Of course, label optimisation is just one facet of autonomous drawings. We asked Valois if Bentley plans to tackle drawing layout as well. “We haven’t done that yet,” he replied. “At the moment, it’s the text itself.”

Tenets of training

Unlike some other software companies, Bentley Systems is crystal-clear in its messaging around AI training and customer data. To return to an earlier Cumins quote: “Your project data remains your data, always.”

‘‘ While some vendors’ restrictive EULAs are causing consternation, the transparency offered by Bentley executives when it comes to AI training stands out ’’

Bentley has trained its AI models using data that has been purchased and/or licensed. This can include data contributed by users and organisations that have explicitly agreed to share their data for the purpose of developing AI models that benefit all Bentley users.

Cumins explained that this is typically carried out in the context of Bentley’s early-access programmes, which was the case for OpenSite+, adding that customers tend to contribute data that is not that unique to them, while data that is unique to them is held back and used for their own fine-tuning.

AI models are finessed in the Bentley Infrastructure Cloud, but Bentley executives state very clearly that data specific to each account is utilised solely for the benefit of that account and is not incorporated into broader models or product enhancements.

But how does fine-tuning work in practice? Bentley CTO Moutte gave the example of using AI for drawing automation. “There is a Bentley default model that has been trained with data that has been sourced from third parties, but you also have all [your own] past drawings in your ProjectWise data source in your Infrastructure Cloud Connect environment, which are all potentially relevant for that AI model,” he explained.

“We let you pick the ones that you would like to use, because the design might be the client’s or the data might not be good enough. You don’t want to put garbage in.”

Bentley then retrains the AI model with the foundation data, and the private data, but that new private AI model is only available to the customer. The impressive thing here is that Bentley provides full control over data governance.

“We provide an audit track of every time that [private model] was used, and what data was used to train that model,” said Moutte. “So, if you change your mind and you tell me, ‘Oh, that drawing that I’ve used to specialise the model, I don’t have the permission after all’, you opt it out and we retrain the model. And we can even tell you which were the inferences that have been potentially contaminated by that data.”
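
That level of governance implies provenance bookkeeping: recording which data items trained which model version, and which model version served each inference, so that a revoked item can be traced forward. Below is a minimal sketch of the idea, using an entirely hypothetical structure rather than Bentley’s schema.

# Hypothetical provenance bookkeeping for a privately fine-tuned model:
# record which data items trained which model version, and which model
# version served each inference, so a revoked item can be traced.
from dataclasses import dataclass, field

@dataclass
class ProvenanceLog:
    training_data: dict[str, set[str]] = field(default_factory=dict)  # model -> items
    inferences: list[tuple[str, str]] = field(default_factory=list)   # (inference, model)

    def record_training(self, model: str, items: set[str]) -> None:
        self.training_data[model] = set(items)

    def record_inference(self, inference_id: str, model: str) -> None:
        self.inferences.append((inference_id, model))

    def contaminated_by(self, revoked_item: str) -> list[str]:
        """All inferences served by a model trained on the revoked item."""
        tainted = {m for m, items in self.training_data.items() if revoked_item in items}
        return [inf for inf, m in self.inferences if m in tainted]

log = ProvenanceLog()
log.record_training("private-v1", {"drawing-001", "drawing-002"})
log.record_inference("sheet-A", "private-v1")
print(log.contaminated_by("drawing-002"))  # ['sheet-A']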

Project intelligence

Bentley Systems is also using AI to uncover information in ProjectWise, part of Bentley Infrastructure Cloud.

In 2026, users of the project collaboration and information management platform will get AI-powered search capabilities, designed to significantly reduce the time they spend searching for information across project folders and data sources.

“Historically, a ProjectWise search was just a straight keyword match,” said Jason Slocum, director for ProjectWise. “If you misspelled something, you got nothing back.”

That has now changed, as Moutte explained. “Say you need a document from a past project, and you’re not sure what it’s called or who created it, or even where it’s stored. With the redesigned ProjectWise interface, you can stop digging and you can start finding. The new AI-powered search understands what you mean, not just what you type.”

Once a document is found, users can also receive “instant, concise summaries” generated by AI, without needing to open files or switch between applications.

Bentley is also using AI to help users develop their own automated workflows. In MicroStation, for example, the new Python Assistant acts as an AI-powered coding tool, allowing users to create scripts without having to learn Python themselves. Instead of writing code manually, they can simply describe what they want to do, and the assistant generates the Python code for them.
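
To give a flavour of the describe-it-and-get-a-script pattern, below is the kind of small batch-edit script such an assistant might emit. The Level class and the renaming task are invented stand-ins for illustration; this is not the real MicroStation Python API.

# Illustrative only: the "describe it, get a script" pattern. The Level class
# here is a stand-in for whatever objects generated code would actually touch.
from dataclasses import dataclass

@dataclass
class Level:
    name: str

levels = [Level("Temp-Drainage"), Level("Roads"), Level("Temp-Grading")]

# A script the assistant might generate for the prompt:
# "rename every level starting with 'Temp-' to start with 'WIP-'"
for lvl in levels:
    if lvl.name.startswith("Temp-"):
        lvl.name = "WIP-" + lvl.name.removeprefix("Temp-")

print([lvl.name for lvl in levels])  # ['WIP-Drainage', 'Roads', 'WIP-Grading']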

Cloud connect

For years, Bentley Systems has talked about digital twins, the iTwin platform and “connected data environments”, but the practical reality for most users has been scattered repositories, inconsistent schemas and workflows still anchored to desktop silos.

At YII, the company launched Cloud Connect, a new foundation layer of Bentley Infrastructure Cloud that attempts to solve the industry’s biggest operational drag: infrastructure data remains locked inside PDFs, proprietary formats and legacy file servers.

Bentley’s Cloud Connect solution offers a unified web interface to a single, governed environment where design models, inspections, reality capture, geospatial and IoT streams remain queryable and interoperable across the lifecycle.

Cloud Connect can ingest more than 50 engineering formats and integrate with enterprise systems, effectively acting as a federation layer rather than a file store. Collaboration tools such as feedback, mark-ups, correspondence and deliverables management are built in, making it relevant to day-to-day project operations.

“Infrastructure data lives everywhere – in models, PDFs, inspection forms, photos, IoT sensors and more – and it’s rarely connected,” said CTO Julien Moutte. “That changes with Bentley Infrastructure Cloud Connect. Infrastructure professionals can access and manage all project and asset data in one place, fully contextualised and connected, from design through construction to operations.”

Geospatial meets engineering

Since acquiring Cesium last year, Bentley has been working to bring the geospatial and infrastructure worlds much closer together. This is being realised through tighter integration between the Cesium Ion platform and Bentley Infrastructure Cloud. Users can now see their projects, comprising CAD, engineering, iModels, PDF drawings, reality models and more, all within their geospatial context.

This convergence is also happening in Bentley’s desktop tools. MicroStation, and those tools built on the CAD platform for road, railway and bridge design, now includes support for Cesium 3D Tiles, allowing engineers to integrate a range of content directly into their design projects, including Google Photorealistic 3D Tiles and their own high-fidelity reality data.

The big news coming out of YII was that Bentley has integrated its iTwin Capture reality modelling services within Cesium Ion, creating what Patrick Cozzi, founder of Cesium and now chief platform officer at Bentley Systems, describes as a fully automated pipeline from data capture to geospatial visualisation.

iTwin Capture generates engineering-grade reality models from imagery and applies AI-powered feature detection, while Cesium Ion enables developers to host and stream 3D content in the cloud.

The handling of point clouds and photogrammetry meshes is standard fare, but Cesium has now added support for Gaussian Splats, a relatively new technology that uses AI to identify key features from photos and videos.

“Think of Gaussian Splats as a point cloud, where each point is expanded and then blurred with the surrounding points to create a smooth, photorealistic surface,” said Cozzi. “Hence the name ‘Splats’ – each point is splatted.”
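
Cozzi’s “blurred points” description corresponds to the standard alpha-compositing used in the Gaussian splatting literature, where depth-sorted splats are blended front to back. This is the published technique in general, not necessarily Cesium’s exact implementation:

C = \sum_{i=1}^{N} c_i \,\alpha_i \prod_{j=1}^{i-1} \bigl(1 - \alpha_j\bigr), \qquad \alpha_i = o_i \exp\!\Bigl(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_i)^{\top}\Sigma_i^{-1}(\mathbf{x}-\boldsymbol{\mu}_i)\Bigr)

Here c_i and o_i are each splat’s colour and learned opacity, while μ_i and Σ_i are the centre and covariance of its projected 2D Gaussian evaluated at pixel position x. That Gaussian fall-off is the “blurring” that smooths neighbouring points into a continuous surface.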

According to Cozzi, Gaussian Splats provide superior visual quality compared to traditional photogrammetry, especially for thin objects like power lines and antennas, and refractive and reflective materials such as glass. At YII, Bentley demonstrated some of the benefits that Gaussian Splats can bring to engineering, inspection and digital twins, in a showcase of its R&D hub, the iLab (Innovation Lab).

1 Label Optimizer uses machine learning to annotate drawings
2 Synchro+ is looking to lower the barrier of entry to 4D construction

Using an app built with CesiumJS, an open-source platform for building web applications using Cesium 3D Tiles, Daniel Wikstrom, senior software engineer, presented a model of an Amsterdam cable bridge generated using Gaussian Splats.

Wikstrom explained that compared to photogrammetry, which often produces noisy, incomplete meshes with lots of holes, Gaussian Splats can capture the cables on the bridge much more accurately and cleanly.

Photogrammetry also struggles with transparency, he said, because it generates solid meshes, so see-through surfaces are lost. With Gaussian Splats, however, transparent elements are preserved, giving a much more faithful representation of the original scene.

Bentley executives clearly believe in the technology and are helping to establish Gaussian Splats as an open standard, collaborating with the Khronos Group and the Open Geospatial Consortium. According to Cozzi, the goal is to bring the reality modelling technology to glTF, an efficient 3D web format, and 3D Tiles, an open standard pioneered by Cesium that streams only the data needed for any given view, making large geospatial datasets much easier to handle.

The reality modelling capabilities in Cesium Ion will be generally available before the end of 2025.

Bentley’s opening AI chapter

Bentley’s AI direction is still in its formative phase, and the company is clear that what exists today is only the opening chapter.

Copilot, for example, remains an evolving layer, rather than a finished assistant. It already shows promise in site design, drainage and documentation tasks, but it is not yet obvious how far it can stretch into more demanding domains such as detailed road design or structural workflows.

Bentley itself acknowledges that more discipline-specific agents will appear, each tuned to particular engineering tasks rather than one monolithic ‘super AI’.

This shift also coincides with a more subtle software architectural transition inside Bentley: moving from the long-established MicroStation DGN-based files to the new ‘Plus’ generation of applications. These are AI-powered, cloud-connected by default and write directly to iModels without passing through a traditional file-based workflow. This marks the beginning of a desktop that behaves much more like a connected cloud service tied directly to Bentley Infrastructure Cloud.

While some vendors’ restrictive EULAs are causing consternation, the transparency offered by Bentley executives when it comes to AI training stands out. In conversations with the company’s CEO and CTO, the message was consistent: foundational models are trained on licensed datasets, while customer data is only used for private fine-tuning, never rolled back into global models.

For customers, the bottom line for AI in design software is simple: what are the benefits? Bentley has shown a suite of really useful capabilities, from the subtle to ‘this looks like magic’.

A great example given was STAAD, a Bentley finite element analysis tool: while the software still performs the authoritative simulation, AI can be used to narrow the search space for results by putting the software in the right ballpark, reducing computation and time.

Meanwhile, iTwin Capture applies machine learning algorithms to automatically extract features and detect defects in images, such as cracks in tunnels or bridges, and can rate the urgency of repairs like a seasoned professional.

The same approach extends across Bentley’s portfolio. It’s already visible in acquired tools like Blyncsy, which applies computer vision to assess roadway imagery at scale.

Bentley’s emerging ecosystem will be a constellation of specialised agents working across inspection, design optimisation, compliance checking and documentation. AI becomes not a bolt-on, but a network of deeply embedded capabilities reshaping how infrastructure work gets done.

■ www.bentley.com

3 Bentley Systems senior director, product management Molly Brown and SVP of open applications Francois Valois speaking with AEC Magazine
4 Patrick Cozzi, founder of Cesium and now chief platform officer at Bentley Systems, announces that Cesium now supports Gaussian Splats


Event report

Graphisoft Ignite 2025

At the two-day Ignite conference, held in Graphisoft’s hometown of Budapest, company executives and customers shared the latest technology developments and showcased some impressive project work, writes Martyn Day

The Graphisoft Ignite 2025 conference laid out a clear and future-facing roadmap, anchored around delivering what company executives claim is the “best design experience”.

According to Yves Padrines, group CEO of parent company Nemetschek Group, we are entering “the season of intelligence”. In this season, the company is aiming to deliver a more human-centric interpretation of AI than its competitors.

Gone are the days when Graphisoft’s focus was almost solely on its Archicad system. A recent expansion of its product portfolio under a new ‘design intelligence’ strategy has seen the emergence of Project Aurora, the company’s next-generation, cloud-native platform for early-stage design and feasibility work, unveiled at NXT BLD 2025 (www.tinyurl.com/graphisoftNXTBLD). There’s also the standalone MEP Designer tool that aims to give mechanical, electrical and plumbing engineers a more focused workflow, and of course, Archicad 29, boasting AI features that expand its capabilities in the area of detailed design and documentation.

At Graphisoft, AI is framed as an assistant, rather than an architectural replacement. The company is leaning hard on this narrative that AI should be a co-pilot that removes some of the drudgery from design work, while leaving design intent firmly in human hands. Company executives repeatedly flagged Graphisoft’s ethical, privacy and IP-protection stance, a direct response to rising anxiety around other vendors that might be harvesting customer data to feed training models.

Perhaps the most consequential technical announcement at Graphisoft Ignite was the release of a new native connector for Autodesk Construction Cloud (ACC), representing a pragmatic nod to the fact that even open BIM standards are not enough for the complexity of today’s AEC tech environment and that native interoperability between proprietary systems is an important driver of project success.

This announcement follows an agreement between Autodesk and Nemetschek to share API access and aim for better connectivity between the two companies’ technologies. This connector alone represents a significant step in the right direction when it comes to easing multi-BIM collaboration bottlenecks.

Significant weight behind AI

The drive to AI is coming from the top down, with parent company Nemetschek applying significant weight (and money) to its AI push, in a bid to try and ease some of the gnarly problems the building industry faces.

At Ignite, Padrines didn’t sugar-coat the scale of the AEC industry’s dysfunction. It remains chronically inefficient: roughly 90% of large projects still blow their budgets or schedules and the built environment continues to account for more than 40% of global CO2 emissions.

On top of that, it is battling a gaping labour shortfall, estimated at seven million workers worldwide. Layered on top of this challenge is a looming demographic cliff. Around 41% of the current workforce is expected to retire by 2031.

In other words, the sector isn’t just struggling with productivity. It’s staring down the barrel of a structural talent collapse. New tools are required – ones that are productive, efficient and sustainable.

But how do you introduce AI into design workflows without alienating the very professionals you claim to empower? Both Graphisoft and Nemetschek execs pitched a deliberately human-centric approach that promises augmentation over automation. The aim is to create ‘dual athletes’ – experts who combine deep domain knowledge with the ability to wield AI as a multiplier. This pitch is now familiar: let machines handle the tedious, repetitive tasks, so architects can spend more time designing. The clear subtext is that AI must remain a tool, not a competitor to human endeavour.

For Julian Geiger, Nemetschek’s vice president of AI, this technology might best be seen as an “alien intelligence”, a non-human perspective brought into the design team to spark different ways of thinking. The idea here is that diverse inputs can improve outcomes, provided humans remain firmly in charge.

Hence the emphasis on humans being trustworthy and ethical in their use of AI, particularly when it comes to data privacy, user agency and intellectual property. In an industry rattled by aggressive data-harvesting and outrageous EULAs (end user licence agreements), Graphisoft is clearly drawing a line for competitive differentiation.

Nemetschek has invested in more than sixteen AI start-ups and acquired firms like Firmus, incorporating the company’s technology into products such as Bluebeam. Research partnerships are extensive, ranging from a new Georg Nemetschek Institute for AI in the Built World at TU Munich to ongoing work with Stanford University in California.

Customer showcase

Aside from the technology blitz, customer case studies filled a lot of the Ignite agenda, highlighting a great spread of projects from around the world. These included a presentation on the Lamborghini prototype factory by Luca Bernardoni of Archilinea (www.tinyurl.com/Archilinea).

Attendees heard how Archilinea repeatedly accepted challenging deadline demands from star client Lamborghini. To meet those demands, Bernardoni had to go to extraordinary measures to eliminate pessimism among the team and recruit optimistic new members who weren’t fazed by tight delivery schedules and were able to respond positively to high-pressure projects. We sometimes forget the human aspect in the tech world. Bernardoni brought it into clear focus.

1 Ignite was Graphisoft’s first major customer event in nine years (Credit: András Cselényi-Szabó)
2 Graphisoft’s AI Assistant is billed as a co-pilot for repetitive work
3 Project Aurora is a cloud-native platform for early-stage design and feasibility work
4 MEP Designer allows MEP teams to design routes, check clearances and coordinate directly within an architectural model

Meanwhile, Marko Dabrovic from 3LHD Architects showcased the firm’s work on a new campus for Croatian electric hypercar manufacturer Rimac, and provided some seriously impressive evidence that Archicad users can deliver complex, large-scale work to very tight deadlines. The Zagreb-based firm has grown considerably since Dabrovic spoke at Ignite back in 2018 and gained an international reputation, especially in the area of luxury hotels. All presentations can be accessed online (www.tinyurl.com/2025-Ignite).

Introducing Archicad 29

The release of Archicad 29 sees Graphisoft doubling down on detailed design and construction documentation. Schedules are now far more configurable, with granular control over backgrounds, cells, totals and branding elements. Changes are mirrored across indexes and keynote legends. Keynote visibility and formatting have been improved to make annotation sets less of a visual swamp. A long-requested update to renovation filters now allows users to assign renovation status to markers themselves, meaning phased drawings finally behave in a predictable, organised manner. Even line-based tools get attention, with support for different arrowheads at each end – a tiny detail, but one that matters when you’re producing high-fidelity construction packages.

Openings can now be placed directly on sections and elevations with accurate geometry, even when elements aren’t parallel to the marker. Common tasks such as 90-degree rotations are reduced to a single shortcut. Multi-page PDFs can be relinked with one click. There have also been some important user interface refinements, with the arrival of a dark mode, resized dialogues, better filtering of unused views and better Navigator/Organizer behaviour.

This new release also hosts the first appearance of Graphisoft’s AI Assistant, integrated initially into both Archicad and MEP Designer. The assistant is pitched firmly as a co-pilot for repetitive work, drawing answers from a vetted Graphisoft knowledge base, rather than some mystery model.

MEP Designer itself is a new standalone tool running on Archicad’s engine, allowing MEP teams to design routes, check clearances and coordinate directly within an architectural model. Meanwhile, Project Aurora emerges as the early-stage counterpart to Archicad’s documentation focus with a phased rollout scheduled to begin in 2026.

Object and library updates, such as more flexible kitchen cabinetry and the ability to import OBJ files as GDL objects, will please interior-focused teams. Visualisation also gets a lift, with editable 3D resolution presets and clearer contours in physically based rendering.

Taken together, Archicad 29 improves a broad selection of existing features and brings in some new capabilities. We note that some customers have not been so welcoming of the initially limited AI Assistant that Graphisoft has shipped, complaining online that there are still things to fix in the core package.

All new software releases have to look back as well as point forward, and this is a difficult balance for software companies to achieve. It can impact customer satisfaction, especially when the shift to cloud subscriptions has increased the cost of ownership.

A more coherent cloud

When it comes to BIMcloud and BIMx, Graphisoft is now positioning BIMcloud as a core piece of the wider Nemetschek ecosystem. This is in line with a clear push at the company towards unifying platforms including DDScad, BIMx and BIMcloud under a more coherent cloud strategy.

Recent updates to BIMcloud focus on the practical: a new public links feature enables users to share files externally without requiring a BIMcloud licence; multi-core support improves BIMcloud Manager performance; and a new migration tool aims to smooth the shift to BIMcloud SaaS.

BIMx continues to impress and is taking an expanded role in design exploration and review, translating 2D drawings into accessible 3D diagrams and providing a low-friction way for clients and teams to test ideas. With support for Apple Vision Pro, BIMx can now deliver fully immersive model walkthroughs, blending 2D and 3D views in a way that noticeably raises the quality of design reviews.

The latest updates focus on unifying the platform across devices. Features once locked to mobile – such as element hiding, layer control and cut-plane tools – are now available on both Windows and Mac, signalling a push toward consistency, regardless of hardware. BIMx also gains parallel projection, bringing its display capabilities closer to Archicad.

Navigation has been tightened as well. Walk Mode can be customised, external input devices such as keyboards, mice and gamepads are now supported on mobile, and interface elements fade away during walkthroughs to keep attention on the model.

In addition to the aforementioned Autodesk connector, Graphisoft executives also used Ignite 2025 to highlight its goal of deeper alignment within the wider Nemetschek ecosystem. A new connection to Bluebeam Studio, for example, enables real-time review sessions, rather than the usual export-upload relay race. Solibri workflows for model checking and quality assurance have been tightened, while Vectorworks users gain smoother access to site-modelling data. These are incremental improvements rather than headline acts, but they signal that Nemetschek’s individual brands are finally behaving less like barely acquainted cousins and more like close family members.

Support for IFC 4.3 across the new product releases further underlines this interoperability push. With governments and large clients increasingly mandating open formats, IFC compliance is no longer a philosophical stance, but a regulatory necessity. Graphisoft continues to stay ahead of those requirements.

‘‘ Graphisoft is now positioning BIMcloud as a core piece of the wider Nemetschek ecosystem ’’

The company also reaffirmed the importance of its Rhino-Grasshopper connection, still one of the most important generative design bridges in the BIM space. For practices that rely on algorithmic workflows, form-finding or custom geometry pipelines, this remains an essential link.

Igniting the future

In a press conference held at Graphisoft Ignite 2025, CEO Daniel Csillag and chief product officer Marton Kiss shared that a core priority for their management team is to significantly expand its market share. The goal, they said, is to grow Graphisoft customer numbers by between 25% and 30% in the next year alone.

In the longer term, the company is also looking to extend its reach by ensuring that universities are equipped with sufficient copies of Archicad to meet the needs of AEC students.

There is a conscious effort underway at Graphisoft to improve the company’s storytelling capabilities and provide a clear direction on its technology development strategy. Nemetschek is clearly driving AI development, but the Graphisoft team seems to be making a significant contribution to this work. Its AI Assistant, for example, was developed in-house and is to be adopted by other Nemetschek brands.

Csillag and Kiss placed emphasis on the need for both external and internal interoperability at Graphisoft, so that its products can be connected with technology from sister brands like Bluebeam and dTwin. That will support a larger workflow, as well as the industry shift to lifecycle management for buildings. Graphisoft aims to empower the design process, with the Nemetschek Group providing the operational side of facility management tools.

In terms of geographic footprint, Csillag stated that the Saudi Arabia and UAE markets, with their significant volume of residential construction, are key targets for the company, which is also working to revitalise its presence in the UK.

If the goal of Graphisoft Ignite 2025 was to produce warm and fuzzy feelings towards the company, then it succeeded. Executive team members and management showed themselves to be approachable and knowledgeable about the current market, the company’s capabilities and its development priorities. There was almost palpable excitement among some regarding the current market confusion surrounding Autodesk’s BIM 2.0 strategy in general and the positioning of Revit and Forma in particular. At Nemetschek and Graphisoft, there’s a general feeling of optimism that this uncertainty will play out to their advantage.

In the background, there is a major project underway – the experimental creation of a single schema and database format for the Nemetschek Group companies, similar to what Autodesk did with Docs. This would point towards a more cloud-heavy future and enhanced interconnectivity between Nemetschek tools. That said, Graphisoft remains committed to delivering both desktop and cloud applications, with data and processing taking place in the locations that customers require.

■ www.graphisoft.com


Software

Twinmotion: a new chapter

Epic Games is reshaping its architect-friendly viz tool, aligning it more closely with Unreal Engine while enhancing it with major advances in geometry, lighting, materials, and interactivity, writes Greg Corke

When Epic Games acquired Twinmotion in 2019, its stated ambitions for the AEC sector suddenly became clear. The company had been making considerable noise about taking AEC seriously, but it was hard to see how that could be achieved with only a specialist tool like Unreal Engine.

Twinmotion changed that, becoming the foundation for an expansive push towards millions of users, software partnerships, and a fast-moving roadmap of ambitious features.

Now, with technologies such as Nanite, deeper integration with Unreal Engine, and a carefully scoped approach to AI, Twinmotion is entering a new phase.

Epic’s early focus was on adoption at scale, with a dual mission to democratise architectural visualisation and build market share. Its first step was deliberately disruptive: give the software away for free.

A series of high-profile partnerships followed — most notably with Autodesk and Graphisoft — culminating in Twinmotion being bundled free with Archicad and Revit, the latter under an agreement that remains in place today. This is no cut-down version. Revit customers get the exact same functionality, apart from access to Twinmotion Cloud, which allows users to share interactive real-time experiences online.

The Autodesk agreement has since evolved into more than a licensing deal. Autodesk has now taken on the development of the Datasmith Exporter for Revit, the core technology that brings Revit BIM models into Twinmotion.

“Our dev teams meet with each other on a weekly basis,” says Colin Smith, senior product manager, Twinmotion.

“It’s a great situation, because they are closest to the Revit code, so are able to build things into the Datasmith translator, like improving the reading of materials or the recent thing we did around asset swapping, where you can automatically swap assets from the Revit asset to a Twinmotion asset on import. We continually work with the team to understand how we can make things better.”

The technology stack

One of the most significant (if less visible) evolutions in Twinmotion is its technical underpinning. Until recently, it was a compiled standalone app — periodically rebuilt from Unreal Engine, but fundamentally a separate product. Now, however, Twinmotion runs directly as a specialised instance inside Unreal Engine itself. This is done in ‘PIE’ or ‘Play In Editor’ mode.

Why does this matter? For users, it means that Twinmotion is now much more closely aligned with Unreal Engine, enabling smoother deployment of major technology advancements and improving interoperability between the two tools.

“The reason that we made that change is because now it gives us the ability to really get into the technology stack and start to pull things up through the engine and show it in Twinmotion in a much more usable way,” says Smith.

Lumen, the real-time global illumination (GI) and dynamic lighting system, was the first major step in that direction. “When we brought Lumen in, we got real GI, comparable to what Unreal Engine was supplying,” says Smith. “That really improved the GI that we had in Twinmotion at the time.”

Solving the big model problem

More recently, another major technology to move from Unreal Engine to Twinmotion is Nanite — a virtualised geometry engine designed to handle gigantic, detail-heavy models without performance meltdowns. In short, it only streams visible data on demand.

‘‘ Nanite meshes inside Twinmotion give you the ability to jam in as many polygons as you want. We want to give architects the power to tell their stories as they want ’’ Colin Smith, senior product manager

In Twinmotion 2024, large-scale real-time visualisation demanded either more powerful GPU hardware or careful manual optimisation of the scene. This sometimes meant deciding which assets should be left out, as Smith explains: “Are you going to be able to have people walking on paths, cars driving on streets - all those kind of things that eat GPU power.” This all changed recently when Nanite support was added to Twinmotion 2025.2. Smith is unequivocal about the magnitude of this shift: “Nanite meshes inside of Twinmotion give you the ability to jam in as many polygons as you want. We want to give architects the power to tell their stories as they want.”
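
The intuition behind virtualised geometry can be sketched in a few lines: pick the coarsest level of detail whose projected error stays below roughly a pixel, so distant clusters cost almost nothing, and only stream the clusters actually drawn. The toy below uses made-up numbers and is not Epic’s actual algorithm.

# Toy illustration of the idea behind virtualised geometry: pick a level of
# detail per cluster so projected error stays under a pixel budget, and only
# "stream" the clusters you actually draw. Purely conceptual, not Epic's code.
def choose_lod(distance_m, lods):
    """lods: list of (triangle_count, geometric_error_m), coarse to fine."""
    for tris, err in lods:
        # crude screen-space error: world-space error shrinks with distance
        if err / max(distance_m, 1e-6) < 0.002:   # ~sub-pixel threshold
            return tris
    return lods[-1][0]  # fall back to the finest level

lods = [(500, 0.5), (5_000, 0.05), (50_000, 0.005)]
for d in (5, 50, 500):
    print(d, "m ->", choose_lod(d, lods), "triangles")
# 5 m -> 50000, 50 m -> 5000, 500 m -> 500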

For context, Smith gives an example of a Mars Rover project that’s featured in the Twinmotion 2025 showreel. “It’s a huge, huge scene that would have choked Twinmotion 2024. You wouldn’t even try it,” he says.

Nanite certainly promises to transform how architects approach large scale visualisation projects in Twinmotion. However, users still need to be educated as to the benefits.

“You have to purposely convert your assets to a Nanite mesh,” explains Smith. “You can do it on import, or you can do it after the fact.”

While most of the scene can be converted to Nanite meshes, some elements — such as certain aspects of foliage — either cannot be converted or the user may choose not to convert them.

“There are some edge cases that we don’t want to mess people up on,” explains Stephen Philips, solution architect, Epic Games. “If you have a lot of really thin geometry, like a bunch of wires, that can sometimes get overly compressed by Nanite. But for any other compatible mesh, it’s almost always a win.”

Support for Nanite also opens the door to bringing other Nanite-based functionality from Unreal Engine into Twinmotion in the future.

One of these technologies is MegaLights, a real-time lighting system designed to complement Nanite in Unreal Engine, specifically for large-scale environments. “It allows you to flood your scene with lights, and it doesn’t affect [the performance] of your scene,” explains Smith.

Epic Games is also looking to bring Substrate materials — enabling more flexible, realistic texturing — into Twinmotion in the future.

These changes will also pave the way for a more optimised “non-destructive” pipeline between Twinmotion and Unreal Engine. This would allow Twinmotion content to be seamlessly imported into Unreal, to produce more complex real-time experiences or digital twins.

“We already have some customers where their designers are applying materials and entourage in Twinmotion effortlessly, and then the specialists in Unreal don’t have to do that - they can focus on what they’re actually good at with Unreal,” says Philips.

The interactivity boost

For many users, the Configuration feature, introduced with Twinmotion 2025, signals a shift from passive to interactive presentations — giving designers and clients the ability to explore alternatives, make decisions, and even “play” within the 3D scene.

As Philips puts it, “Our configurations feature adds this whole new level of interactivity where you’re not just creating your scene and then letting someone walk around it. You can toggle visibility and settings on any type of property and any type of visibility for any kind of object.”

Philips gives an example of a kitchen project where users can change materials or click on objects to trigger events: “You bring up a menu, and you can see all the different [floor] tiles, you can select them, and it changes them all,” he says.

“You also have interactive switches, so you can walk in the scene and turn on the lights, change the time of day, open a drawer, trigger animations - there’s all kinds of things you can do with it.”

The value for client presentations and stakeholder engagement is significant, especially when paired with Twinmotion Cloud, which makes high-impact presentations accessible to anyone, anywhere, through a browser. Using pixel streaming, all the heavy GPU processing is handled in the cloud.

“[With Twinmotion Cloud] you can send people a QR code, you can embed it in a website, you can send them a link, you can put password protection on it, and it basically allows someone to open it up in a browser, on the PC, on a phone, on a tablet, and walk around in the scene just like you would if you were running Twinmotion locally,” says Smith.

The Configurator also streamlines the production of presentation assets: “We now have the ability to automatically spit out all of the variations, so that you can have every single version of the configuration come out as an image,” says Smith.

1 Twinmotion would not have been able to handle this colossal Mars Rover scene without support for Nanite meshes
2 Configurations add a whole new level of interactivity to Twinmotion scenes

Measured steps towards AI

While competitors such as D5 Render have moved swiftly to roll out powerful AI features, Epic Games and the Twinmotion team have taken a more measured approach. They are rightly mindful of intellectual property, ethical sourcing, and the importance of preserving designer intent.

Smith is candid about the company’s position: “Epic has been very careful in the way that they’ve been approaching AI, I think purposefully. There have been a lot of questions about where the models are getting their information from, where they’re scraping things from, IP issues. As much as AI brings to the table, there’s a lot of questions around it.”

But this caution does not mean inaction. “It’s not that we’re not embracing it, it’s just that we don’t want to go too far, too fast,” says Smith.

He reveals that the next major Twinmotion release will include a suite of AI tools: “We are going to be incorporating AI functionality in the 2026 timeframe. We have a number of AI tools that are coming, mostly around stylisation, so being able to take a finished render and make it super photorealistic.”

Here Smith gives the example of Twinmotion’s pre-rendered humans: “They look great from a distance, but when you get closer, they get kind of zombie looking, so being able to use AI to change those to looking like a photographic representation of people is really something that you want to do.”

“We’re also experimenting with things like changing backgrounds,” he says. “You could have things that look like hand drawings or oil paintings.”

However, Smith emphasises that AI will always be employed responsibly, with safeguards for IP protection and creative control.

“AI can do a lot of things to enhance your scene, but if you’re not careful, it can do [negative] things to your hero model,” says Smith.

“No designer worth their salt is going to allow AI to throw in an extra window somewhere or decide it needs to have a door somewhere where it doesn’t exist.

“If we are going to add these things to a scene, we have to make sure that the intention of the designer is not getting messed up as part of this ‘easy way’ to get visualisation. It’s a balancing act.”

The path ahead

As the architectural visualisation market continues to grow and mature, Epic Games faces both opportunities and challenges. Smith acknowledges that the industry remains highly competitive: “AEC is a pretty crowded room between Lumion, Enscape, D5 [Render], and us, and still the traditional offline rendering options, and now AI. There’s a lot of choices in there.”

Yet despite this crowded landscape, Twinmotion continues to grow, delivering “a 25% growth rate… year on year,” says Smith. “We have some of the biggest architectural firms in the world using our software, and they’re buying more seats than they were last year or the year before.”

While around 85% of Twinmotion’s customer base is in AEC, Epic Games is also seeing increasing adoption in sectors such as automotive, consumer products, and fashion. Even so, Smith stresses the company’s focus: “We’re not taking our eye off the ball, from where we started from,” going on to explain that the development roadmap is mostly designed to add features that benefit users across all industries.

At its core, Smith says, the mission remains universal: “At the end of the day, it’s all about storytelling… and so whether it’s a hospital or the latest BMW, they all have the same requirements as far as being able to tell a story, put a lot of context in their scene, [and] be able to share that experience with other people.”

■ www.twinmotion.com


EULAs: 28 days later

Last month’s cover story on the trend in EULA metastasis certainly provoked a wide range of responses. It’s quickly becoming a regular boardroom-level topic that extends way beyond our small corner of the commercial software world. Martyn Day provides an update

End-user licence agreements, or EULAs, have long been overlooked or outright ignored. Many software users view them as ‘just’ legal small print, worded to indemnify all parties and conveying basic rules such as, ‘Don’t steal this software’. They tend to be updated every few years, with minor changes made, mainly to reflect newer product offerings or services.

Then along came AI, and some vendors got pushy. EULAs began to contain clauses stating that the vendor might take its pound of flesh in the form of data, because real-world customer data has a much greater value to them than synthesised data. This upset many in the AEC industry and has become an ongoing worry.

This was the main thrust of the ‘Contract killers’ article in the September/October 2025 issue of AEC Magazine and we’ve received a great many responses on the subject since it was published. Importantly, we’re keeping the conversation going.

For example, during the panel session I hosted at SpeckleCon 2025 recently (see page 26), I asked Vignesh Kaushik, Gensler’s principal and regional design technology director for Asia Pacific, about his current worries and concerns. Without a pause, he responded: “EULAs.”

An organisation the size of Gensler, with so many employees and important projects underway, simply cannot afford to get caught up in these data grabs. Despite having locked down IT infrastructure so that users cannot install maverick applications, some employees have still managed to evade these protections. When a new tool appears on the firm’s technology inventory, the worry is not so much the application itself, but the legal entitlement to data that a customer grants to the software company that built it.

There is clearly a significant disconnect between what legal teams believe they’ve signed and what technology now makes possible. Put simply, the AEC industry is woefully underprepared for negotiating the new terrains of AI and data ownership. Representatives of large firms with hefty legal teams have told me they have pushed back and renegotiated or even edited EULAs, since the alternative was having the software thrown out of the estate entirely.

For Autodesk, restrictions on AI training using such broad terms were indeed added to its Terms of Use in 2018, not 2025, as we originally thought. It’s just that in 2018, nobody was checking software agreements as deeply as they are today.

It wasn’t until the AI revolution really started hitting that the industry woke up to the terms of some of these EULAs, as well as the whole wider problem of data ownership in a world where confidential company data is sent to an external software developer’s servers for AI to do its thing.

Hunting down AI terms in agreements has become an international pastime. While it now seems that Autodesk’s original intention was more modest than its wording suggests, and will hopefully be redefined soon, there is an obvious line where Autodesk (in fact, all software developers) don’t want the business logic of their commercial design tools stolen by AI, especially by third parties.

But it is true that almost every commercial software licence or EULA does contain a clause prohibiting copying, modifying, reverse engineering, decompiling, or disassembling the software. To some extent, then, AI training on data could fall under reverse engineering. However, this isn’t what customers want to do when training on their own BIM data.

Bill of rights

Quick off the mark, in response to our ‘Contract killers’ article, Motif CEO Amar Hanspal published the company’s own Bill of Rights, a manifesto agreeing that design practitioners deserve more than legacy software vendors have been willing to offer. In its essence, this Bill of Rights states that customers should fully own their data and outputs, not be subject to opaque clauses that quietly hand over training or derivative rights to the platform.

Motif commits to open standards and open APIs as the default, rejecting the notion that proprietary lock-in is the status quo. Pricing must be fair and transparent, with no hidden fees, forced bundles or surprise hikes. Users should pay only for what they use and only when they use it. Privacy and security should be non-negotiable.

Significantly, Motif promises that no customer data will be sold or misused, and the workings of any embedded AI features will remain transparent. Continuity features strongly. Projects and data must remain accessible and usable over time, even as technology evolves. Users must have access to product roadmaps and to company leadership and be able to influence decisions.

Meanwhile, in the UK and Australia, a new group is bringing together architects, digital directors, technologists, legal minds, enterprise customers and this magazine to work out what exactly has just happened and what to do about it.

The reaction to the EULA issue has been discussed in detail, looking at examples of what happens when AI ambitions collide with outdated contractual scaffolding. The idea that all future architectural work will probably start inside BIM systems with layers of AI to define the design seems strategically risky, if that negates firms’ ability to train on their own data. Interoperability would be the first casualty of AI-era business models.

‘‘ What began as an investigation into a single clause has become a broader movement to reassert some balance in the relationship between vendors and the industry that relies on them ’’

The group’s conversation widened, inevitably, to what rights AEC firms should reasonably expect from the tools that now facilitate almost every act of design. Firms want more clarity on how data is used, the freedom to move information between platforms and transparency around AI training. They also want fair conditions for developers who extend or integrate with these tools, in the form of predictable, versioned contractual terms that don’t shift silently. The idea of a ‘Tech Stack Bill of Rights’ emerged almost naturally from the discussion.

While the meeting revolved around contractual terms, the most revealing conversations were cultural. People are beginning to realise that AI isn’t just another tool. It has the potential to reshape the economics of architectural practice.

If AI can generate, refine and automate significant portions of design, then the value of the data that feeds it – whether this pertains to families, details, project history, even naming conventions – increases dramatically.

Software vendors know this. Lawyers, on the other hand, often do not, which is how we end up with terms that were written half a decade ago dictating the rules of engagement for technologies that barely existed at the time but are of great interest today.

There was further discussion on how there is a lack of transparency around enterprise agreements. Several participants noted that non-disclosure agreements (NDAs) prevent them from sharing pricing structures, token models and contractual differences between standard EULAs and enterprise agreements. This opacity isn’t simply inconvenient; it becomes dangerous when AI is added to the mix, because firms cannot know what rights they’ve relinquished or what limitations they’ve accepted.

Tangible outcomes

By the end of the meeting, there were several tangible outcomes. The decision was made to establish a first draft of a Bill of Rights and then to build a repository of historical industry EULAs so that changes can be tracked.

I will be following up conversations with developers to understand what boundaries are reasonable, rather than idealistic. Interest is already spreading beyond the initial group, with firms such as Bentley Systems and Graphisoft picking up on the reaction to our article.

What began as an investigation into a single clause has become a broader movement to reassert some balance in the relationship between vendors and the industry that relies on them. AI has made design data far more valuable than the contracts around it ever anticipated.

The question now is whether the sector can establish clear principles before the next generation of tools, and their accompanying EULAs, decide those principles for us.

In our original ‘Contract killers’ article we incorrectly stated that Autodesk added restrictions on AI training to its Terms of Use in 2025, when in fact it was 2018. A corrected version of the article can be found at www.tinyurl.com/AEC-EULA, along with responses from Autodesk and D5 Render.

Technology

Introducing Gaussian Splats

Reality capture and surveying have undergone considerable technological changes in recent years, shifting from total stations to drones, from LiDAR to photogrammetry. Now there’s a new kid on the block, as Martyn Day reports

Reality capture is no longer a trade-off between precision and whatever you can grab before the concrete starts to pour. We now have survey-grade terrestrial LiDAR, photogrammetry from drones that can be flown multiple times a day, and SLAM scanners that happily thread their way through congested interiors, conveyed either by human or robot. We’re not short of options; the real challenge is understanding what each method can and cannot deliver in terms of accuracy, repeatability and effort.

Into this already rich toolbox comes a technique from the computer-graphics world that doesn’t sit neatly alongside any of the above technologies. 3D Gaussian Splatting (3DGS) may sound like an upgrade to paintballing, but ‘splats’ are rapidly becoming a favourite way to generate photorealistic, navigable 3D scenes from kit no more exotic than a smartphone or drone-based video camera.

of building a surface out of triangles or locking yourself into the sparse geometry of a point cloud, a splat scene is reconstructed from countless semi-transparent ellipsoids (‘splats’), each defined by a position, a radius, a colour and a fall-off.

When rendered together, to human eyes, splats form a continuous, volumetric impression of a space. The technique avoids the brittle edges and broken surfaces that often plague photogrammetry, while also sidestepping the enormous compute required to render multi-mil -

major part in pushing NeRF research into the real-time domain, producing vastly accelerated pipelines, sample code and CUDA-optimised implementations that made these once-academic models accessible to industry. Their development accelerated and the quality of the output improved (less artifacts), as well as reducing the number of images required to derive a scene.

‘‘ It gives teams a way to record site conditions more frequently, with far less planning, and with a visual fidelity that helps non-technical stakeholders understand what they’re seeing ’’

lion-triangle meshes in real time.

The question for customers from the AEC industry is whether this is simply another visualisation fad, or the start of a more profound shift in how we capture existing conditions.

Splat origins

The origin story is tied to a technology known as Neural Radiance Fields. These NeRFs gained mainstream attention because you could take multiple still images of a scene and synthesise a 3D model from them, even rendering ‘unseen’ viewpoints. This was done by training a neural network to understand how colour and density vary with camera position.

NeRFs are impressive, but painfully slow to train and far too heavy for interactive use. Chipmaker Nvidia played a major part in pushing NeRF research into the real-time domain, producing vastly accelerated pipelines, sample code and CUDA-optimised implementations that made these once-academic models accessible to industry. Development accelerated, the quality of the output improved (fewer artifacts), and the number of images required to derive a scene fell.
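At the heart of any NeRF-style renderer is a volume-rendering composite: the network predicts a colour and a density at sample points along each camera ray, and the samples are blended front to back. The numpy sketch below shows that standard compositing step; it illustrates the principle only, not any particular implementation.

import numpy as np

def composite_ray(colours, densities, deltas):
    """Volume-rendering composite used by NeRF-style methods.
    colours:   (N, 3) RGB predicted at N samples along one ray
    densities: (N,)   predicted volume density at each sample
    deltas:    (N,)   spacing between consecutive samples
    """
    alpha = 1.0 - np.exp(-densities * deltas)  # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # light surviving to each sample
    weights = alpha * trans
    return (weights[:, None] * colours).sum(axis=0)  # final pixel colour

# Example: three samples along one ray, the last one dense enough to dominate
rgb = composite_ray(
    colours=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),
    densities=np.array([0.5, 2.0, 5.0]),
    deltas=np.array([0.1, 0.1, 0.1]),
)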

Gaussian splatting emerged soon after as a more pragmatic alternative to NeRFs. Instead of encoding the scene inside a neural network, it takes a direct optimisation approach. Like photogrammetry, it starts from a ‘structure-from-motion’ or ‘multi-view stereo’ (MVS) solution: a collection of overlapping images is used to estimate camera positions and build an initial point cloud of the scene. The software then refines millions of Gaussian primitives until they collectively match the available imagery. Nvidia again accelerated adoption by releasing reference implementations and GPU kernels that demonstrated how efficiently splats could be rendered when mapped properly onto modern hardware.

The key breakthrough is speed. A contemporary GPU can navigate a dense splat reconstruction with minimal latency, producing an experience closer to exploring a game environment than wrestling with a stitched mesh or a heavy point cloud.
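To make the idea of ‘refining Gaussians until they match the imagery’ concrete, here is a deliberately tiny, runnable analogue: fitting two 1D Gaussians to a 1D target signal by gradient descent. Real 3DGS does the same thing in 3D, per camera view, with millions of splats and a differentiable rasteriser; this is a toy sketch of the principle, nothing more.

import numpy as np

# The "photograph" we want our Gaussians to reproduce
x = np.linspace(0.0, 1.0, 200)
target = np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2) + 0.6 * np.exp(-0.5 * ((x - 0.7) / 0.1) ** 2)

# Parameters of 2 Gaussians: centre, log-width, amplitude
params = np.array([[0.2, np.log(0.1), 0.5],
                   [0.8, np.log(0.1), 0.5]])

def render(p):
    """Blend all Gaussians into one signal (the 1D analogue of rasterising splats)."""
    mu, sigma, amp = p[:, 0:1], np.exp(p[:, 1:2]), p[:, 2:3]
    return (amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)).sum(axis=0)

lr, eps = 0.01, 1e-4
for step in range(2000):
    grad = np.zeros_like(params)
    base = ((render(params) - target) ** 2).mean()
    for i in np.ndindex(params.shape):  # finite-difference gradients, for clarity
        p2 = params.copy(); p2[i] += eps
        grad[i] = (((render(p2) - target) ** 2).mean() - base) / eps
    params -= lr * grad  # nudge centres, widths and amplitudes towards the target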

This performance is the main reason why splatting has become the preferred method in consumer capture apps and early Gaussian-based AEC tools: it delivers high visual fidelity without the processing overhead and fragility that characterise many photogrammetry workflows.

It is important, however, to be clear about what Gaussian splats are and what they are not. They do not form a mesh and they are not collectively the same as a conventional point cloud. Instead, they occupy a middle ground: a radiance field with hints of geometry.

For engineering applications, that raises familiar questions about scale, measurement and reliability. In practice, splats excel at appearance rather than precise geometry. When anchored to LiDAR or SLAM, they can behave well, but on their own, they remain a visual rather than a metric representation. Context and workflow determine whether they are merely informative or can be relied upon as part of a survey-grade solution.

Splatting versus photogrammetry

While it may be tempting to position splatting as the successor to photogrammetry, the two approaches were built for different purposes. Photogrammetry has always been about recovering geometry. It solves camera poses, triangulates features, produces a point cloud, and from that, generates a surface that can be meshed, measured and exported. When handled carefully, the resulting model can support engineering decisions.

Gaussian splatting begins from the same camera solution but takes a different path. Rather than building a mesh, it uses the calibrated images to guide the optimisation of millions of Gaussians. The output is a radiance field rather than a surface. You can measure within it if you understand the camera geometry and if the training was anchored to reliable scale, but it is not inherently metric in the way surveyors would like. It has its strengths, of course. Splatting handles reflections, specular materials, foliage and fine-grained detail in a way that would cripple most photogrammetric pipelines. It avoids broken topology and jagged edges. It renders in real time, which makes it excellent for VR, walkthroughs and design reviews.

But it has weaknesses, too. Without a mesh, there is no clean way to extract a watertight object or generate a traditional deliverable. Metric reliability depends entirely on the underlying reconstruction. And unlike photogrammetry, splats do not solve occlusion with extra geometry; they simply approximate what the cameras saw.

The healthiest way to view splatting and photogrammetry is as complementary technologies. Photogrammetry remains the tool of choice for robust geometry and traditional deliverables. Gaussian splatting excels at producing high-quality appearance data that can augment, rather than replace, geometric reconstruction.

Onsite work

Because it lacks inherent metric accuracy, Gaussian splatting has found its quickest uptake not as a survey instrument but as a rapid capture method for documenting change on construction sites.

The workflow feels closer to filmmaking than scanning. A site engineer walks the site with a phone, or a drone traces its usual arc. The resulting video is ingested by a cloud service and, minutes later, a navigable 3D scene appears in the browser. The entire ritual is far lighter than a laser scan and far less formulaic than a structured photogrammetry approach.

The results sit somewhere between a video and a model. You can step through time, drift around a floor plate, or zoom into areas that would be illegible in a mesh. For project teams attempting to understand sequencing, clashes or the general state of play, the immediacy is valuable. It becomes a quick way of capturing as-built conditions on days when you wouldn’t dream of mobilising a scanning tripod.

But splatting is also being combined with more traditional methods. Several firms pair handheld SLAM devices with 3DGS reconstruction, using a LiDAR-based trajectory and point cloud as the geometric backbone and allowing splats to provide texture, depth cues and realism.

This hybrid approach anchors the splats to a reliable frame of reference, and that alignment is then used to extract geometry back out again. Revit plug-ins now exist that convert regions of a splat scene into architectural elements when the system believes it has identified a planar wall, an opening or a slab edge. In this mode, Gaussian splatting becomes less a visual trick and more an intermediate representation for automated modelling.
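The geometric step behind such plug-ins can be sketched with a classic RANSAC plane fit over splat centres: an illustrative outline under those assumptions, not how any particular product actually works.

import numpy as np

def ransac_plane(points, threshold=0.02, iterations=500, rng=None):
    """Find the dominant plane in a set of 3D points (e.g. splat centres).
    Returns ((normal, d), inlier_mask) for the plane n.x + d = 0.
    threshold is the inlier distance in the cloud's units (here, metres).
    """
    rng = rng or np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        n = n / norm
        d = -n @ sample[0]
        dist = np.abs(points @ n + d)
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# A large planar cluster with a near-horizontal normal is a plausible
# wall candidate; a near-vertical normal suggests a slab.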

At the moment, site work is where splatting feels most compelling. It gives teams a way to record conditions more frequently, with far less planning, and with a visual fidelity that helps non-technical stakeholders understand what they’re seeing. In a world where disputes, RFIs and coordination meetings increasingly rely on photographic evidence, splats offer a richer form of ‘being there’.

3DGS developers

The commercial ecosystem around Gaussian splatting is expanding quickly, particularly on the fringes of AEC, where fast capture matters more than survey-grade fidelity.

Gauzilla Pro (www.gauzilla.xyz) is one of the first platforms to position splats explicitly for construction. It reconstructs scenes directly from phone or drone footage, runs entirely in the browser and supports time-based playback so project teams can watch a site evolve. Some contractors are already using it to supplement drone surveys, creating 4D snapshots throughout a build that sit comfortably alongside BIM in coordination reviews. The founder of Gauzilla Pro, Yoshiharu Sato, spoke at NXT BLD this year (www.tinyurl.com/gauzilla). He also took part in a panel session on reality capture (www.tinyurl.com/reality-panel).

XGRIDS (www.xgrids.com) takes a more engineering-driven approach, combining SLAM capture hardware with Gaussian reconstruction and feeding it all into its Revit extension. By leaning on LiDAR to establish absolute accuracy and then layering splats for texture and machine learning, the system claims significant productivity gains in scan-to-BIM modelling. The splats, in this case, are less a deliverable and more a substrate for teaching AI what the built environment looks like.

Autodesk (www.autodesk.com) is now adding 3D Gaussian splatting into its infrastructure toolchain, signalling yet another shift in how the company wants customers to handle mobile reality-capture data. Instead of treating SLAM/LiDAR scans and photogrammetry as external artefacts to be cleaned and imported, Autodesk’s pitch is that handheld captures will feed straight into its ecosystem, including tools such as ReCap, Civil 3D, InfraWorks, Revit, and Autodesk Construction Cloud, where they’re subsequently converted into lightweight splat-based models for project review.

Bentley Systems has also moved early on Gaussian splatting, adding support within its iTwin Capture ecosystem (www.bentley.com/software/itwin-capture). This is significant, because Bentley’s reality-capture tools have historically centred on photogrammetry, meshing and high-fidelity point clouds, all of which feed into its digital-twin workflows.

By incorporating splat-based scenes, iTwin Capture can now handle radiance-field style reconstructions alongside traditional survey outputs. Bentley positions this as a complementary layer, rather than a replacement. Splats provide rapid, photorealistic context from lightweight video capture, while the established photogrammetry and LiDAR pipelines remain responsible for geometry and measurement.

ESRI ArcGIS Reality (www.tinyurl.com/arcGIS-reality) has also joined in the splatting fun, albeit from the geospatial rather than construction-tech end of the spectrum. ESRI’s focus remains firmly on survey-grade photogrammetry, aerial LiDAR and large-area GIS datasets, but its recent support for radiance-field style reconstructions is a sign that it recognises the technique’s advantages.

Yoshiharu Sato, founder of Gauzilla Pro, speaking at NXT BLD 2025

ArcGIS Reality can now ingest splat-based scenes from drone or mobile imagery as contextual layers within broader reality models, using them to provide visual richness in areas where traditional meshes struggle, such as façades with shiny surfaces, dense foliage or cluttered streetscapes. Crucially, Esri treats splats as a complement to its established photogrammetric pipeline, not a replacement. The geometry still comes from calibrated aerial capture, while splats provide the rapid, lightweight visual context needed for field verification, planning and communication.

Pix4D (www.pix4d.com) has also been quick to bring Gaussian splatting into its ecosystem, as an appearance layer that sits alongside its established mesh and point-cloud pipelines. PIX4Dcatch and PIX4Dcloud can now generate splat-based scenes directly from mobile or drone imagery. Crucially, Pix4D anchors these splats in the same georeferenced frameworks it uses for survey-grade deliverables, so while the splats themselves remain a radiance-field representation, they carry proper scale and alignment back to site control.
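That kind of anchoring amounts to estimating a similarity transform (scale, rotation, translation) between the splat scene’s local frame and surveyed control points. Here is a minimal sketch using the standard Umeyama solution, not Pix4D’s actual code.

import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t so that s*R@p + t maps src onto dst.
    src: (N, 3) control points in the splat scene's local frame
    dst: (N, 3) the same points surveyed in site coordinates
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        D[2, 2] = -1.0  # guard against a reflection solution
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Applying (s, R, t) to every splat centre drags the whole radiance
# field into site control; splat extents scale by the same factor s.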

On the consumer side, Niantic’s Scaniverse (www.scaniverse.com) app has quietly normalised splat-based capture for millions of users. It can generate splats directly on a smartphone and has helped push an emerging splat file format into the wild. The significance for AEC is not the app itself, but the cultural shift it signals. Splat scenes are becoming a common currency and many people arriving to work on construction sites will already know how to produce them.

Rendering platforms are also absorbing splats. For example, Chaos (www.chaos.com) has added support for 3DGS in V-Ray and Corona, allowing designers to light and render splat-based environments as part of a conventional workflow.

All of this activity points to a reality in which splats become a routine part of how we capture and communicate project conditions, even if they never ascend to the level of a contractual deliverable.

Hungry for change

Esri: Flight pattern and 3D visualisation of an air traffic control tower using a Gaussian splat layer

Gaussian splatting arrives in the AEC industry at an interesting moment. The industry is hungry for faster, more frequent, less painful forms of capture. At the same time, firms cannot afford to lose the accuracy that comes from proper survey methods. Splatting sits between these two impulses: visually rich, operationally lightweight, but not yet ready to stand alone as a basis for engineering decisions.

The best way to position splatting, then, is as a new tier in the reality capture stack. At the base, we still have physical measurement: total stations, GNSS, LiDAR and SLAM. Above that sit photogrammetry and traditional point clouds, producing geometry that can be sectioned and dimensioned. Splatting represents a new layer on top, a fast and expressive way to capture radiance and appearance from video. Anchored to survey data, it becomes a powerful tool for documentation, communication and early-stage modelling. Teamed with AI, it may become a foundation for the next generation of automated BIM authoring.

Splatting versus laser scanning

Laser scanning lies at the opposite end of the spectrum from splatting: it’s a hard, physical measurement. A laser scanner records precise distances by timing the return of a pulse or analysing its phase. Registered correctly, the resulting point cloud is anchored to reality with an honesty that image-only methods cannot match. Surveyors trust it because the physics is transparent. Gaussian splatting, by contrast, sits firmly in the realm of inference. It interprets imagery. With the right constraints, it can be aligned, scaled and registered, but it does not inherently know anything about distance. That is why hybrid workflows are emerging as the most sensible configurations for professional use, with SLAM or LiDAR for the backbone, and splats for the appearance.
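That physics really is transparent. For a time-of-flight scanner, the distance is simply half the round trip at the speed of light; a minimal illustration:

# Time-of-flight ranging: a pulse that takes 66.7 nanoseconds to
# return has travelled to the target and back, so halve the product.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0

print(tof_distance(66.7e-9))  # roughly 10.0 metres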

In field conditions, splatting does offer practical advantages. Capturing a dense interior with photogrammetry can be time-consuming, and laser scanning demands careful placement and registration. By contrast, splat capture can often be performed in a matter of minutes. Several teams report cutting capture times dramatically by using a fast LiDAR sweep to establish structure and then letting splats fill in all the visual nuance. For messy, congested or partially complete sites, that speed may be the difference between capturing reality today or missing it entirely.

Accuracy claims should be viewed cautiously, however. When vendors refer to ‘survey-grade’ splat models, they are invariably referring to workflows where LiDAR or SLAM has done the geometric heavy lifting. Splats inherit that accuracy; they do not generate it. The Gaussian representation then serves as a high-quality façade over a more traditional dataset.

There are still many unanswered questions here, of course. Splatting file formats are still fragmenting, and we must hope they consolidate. Long-term archiving is untested. Interoperability is uneven. And no contract yet defines what a splat scene represents in legal terms. But the momentum is unmistakable, and there are plans to establish an open standard.

In the short term, splatting will make site capture more fluid and more frequent, and help teams reason about space in ways that flat imagery never can. Longer term, the more interesting prospect is not whether splats replace existing techniques, but what happens as accuracy improves and radiance-field representations become first-class citizens in design environments. If the industry starts modelling against splat-based realities – rather than merely viewing them – then Gaussian splatting will have done far more than just produce pretty pictures.

For now, the pragmatic approach is simple: use splatting where it shines and treat it as one more instrument in the continually expanding orchestra of reality capture.
