
Embedded Computing Design Spring 2026 with Embedded World Profiles



bit.ly/ECDYouTubeChannel

www.instagram.com/embeddedcomputingdesign

The embedded industry is constantly evolving to meet the demands of a smarter, more connected world. Many of the top trends shaping 2026 will be discussed and showcased at embedded world 2026. Explore what leading companies see as the defining embedded trends for the year ahead.

Show profiles from embedded world 2026 begin on page 30.

› Industry Acquisitions, Connectivity, IoT, AI, and Embedded Trends For 2026 Tune In: https://embeddedcomputing.com/technology/iot/wireless-sensor-networks/industry-acquisitions-connectivity-iot-ai-and-embedded-trends-for-2026

› DevTalk with Rich and Vin: AI in University Tune In: https://embeddedcomputing.com/technology/ai-machine-learning/devtalk-with-rich-and-vin-ai-in-university

› ICYMI: Ep48 TI Buys SiLabs, Variscite expands SOMs, and Engineer Education Watch Now: https://embeddedcomputing.com/application/tech-news-roundup/icymi-ep48-ti-buys-silabs-variscite-expands-soms-and-engineer-education


EDITOR IN CHIEF Ken Briodagh ken.briodagh@opensysmedia.com

ASSISTANT MANAGING EDITOR Tiera Oliver tiera.oliver@opensysmedia.com

PRODUCTION EDITOR Chad Cox chad.cox@opensysmedia.com

CONTRIBUTING EDITOR Rich Nass rich.nass@opensysmedia.com

TECHNOLOGY EDITOR Curt Schwaderer curt.schwaderer@opensysmedia.com

CREATIVE DIRECTOR Stephanie Sweet stephanie.sweet@opensysmedia.com

WEB DEVELOPER Paul Nelson paul.nelson@opensysmedia.com

EMAIL MARKETING SPECIALIST Drew Kaufman drew.kaufman@opensysmedia.com

WEBCAST MANAGER Marvin Augustyn marvin.augustyn@opensysmedia.com

SALES/MARKETING

DIRECTOR OF SALES Tom Varcie tom.varcie@opensysmedia.com (734) 748-9660

STRATEGIC ACCOUNT MANAGER Bill Barron bill.barron@opensysmedia.com (516) 376-9838

EAST COAST SALES MANAGER Bill Baumann bill.baumann@opensysmedia.com (609) 610-5400

SOUTHERN CAL REGIONAL SALES MANAGER Len Pettek len.pettek@opensysmedia.com (805) 231-9582

DIRECTOR OF SALES ENABLEMENT AND PRODUCT MARKETING Barbara Quinlan barbara.quinlan@opensysmedia.com (480) 236-8818

INSIDE SALES Amy Russell amy.russell@opensysmedia.com

STRATEGIC ACCOUNT MANAGER Lesley Harmoning lesley.harmoning@opensysmedia.com

EUROPEAN ACCOUNT MANAGER Jill Thibert jill.thibert@opensysmedia.com

EUROPEAN ACCOUNT MANAGER Michael O’Kane michael.okane@opensysmedia.com

TAIWAN SALES ACCOUNT MANAGER Patty Wu patty.wu@opensysmedia.com

CHINA SALES ACCOUNT MANAGER Judy Wang judywang2000@vip.126.com

CO-PRESIDENT Patrick Hopper patrick.hopper@opensysmedia.com

CO-PRESIDENT John McHale john.mchale@opensysmedia.com

DIRECTOR OF OPERATIONS AND CUSTOMER SUCCESS Gina Peter gina.peter@opensysmedia.com

GRAPHIC DESIGN MANAGER Kaitlyn Bellerson kaitlyn.bellerson@opensysmedia.com

FINANCIAL ASSISTANT Emily Verhoeks emily.verhoeks@opensysmedia.com

SUBSCRIPTION MANAGER subscriptions@opensysmedia.com


BlackBerry QNX – How Partnerships Will Power the Future of Embedded Engineering

13 Enclustra – Digitally Agile Radar – Direct RF Sampling Opens up New Radar Capabilities

15 Fidus – Beautiful Embedded Systems are Becoming the Best Practice in 2026

5 Lattice Semiconductor Corporation – The Everywhere Companion Chip: FPGAs Powering the Next Wave of AI

32 US – Bringing the entire microelectronics ecosystem together under one roof

Pico Electronics Inc – DC-DC Converters, Transformers & Inductors

RAMBUS – Building Quantum Safe Silicon: Why 2026 Marks a Turning Point for Hardware Security

SCI Semi – The Cyber Resilience Act is Coming. Is your Hardware Ready?

Sealevel Systems, Inc. – Why Designing Embedded Systems for Change Will Be a Best Practice in 2026

Tria Technologies – The rise of the OSM module for edge AI product development

www.linkedin.com/showcase/embedded-computing-design/

www.youtube.com/c/EmbeddedComputingDesign

AI Playtime is Over: It is Time to Get Serious about AI

Well, distinguished members of the (embedded industry) jury, I’m just a poor country lawyer (editor), and I don’t go in for all that newfangled LLM stuff that the big city folks are all on about. But the way I see it, as we get into embedded world, the time is about here to acknowledge that we’re doing serious work in the embedded industry, and we should probably put away the toys.

Leaving my very thin framing device behind, I’ll make my hot take plain: LLMs are pointless, toxic, and unsustainable toys for the marketing teams and not worth engineers’ time. They take up resources that could be used for real solutions, like machine learning and AI processing. LLM-AI in general and Generative AI specifically have repeatedly introduced faults, latency, legal exposure, security and privacy risks, and gigantic, unrecoverable costs to nearly every system in which they’re deployed.

A few well-known, non-secret examples of these problems:

› ChatGPT and OpenAI lose money on every query, and many financial experts agree the company is only afloat because of infusions of outside money, while producing (if I’m generous) factually dubious results to queries. And now they’re adding ads.

› Grok is being used to make legally questionable adult content and has monetized GenAI image functions.

› Google claims it has no plans for Gemini to include ads, but a recent Adweek report says investors are being told a different story. Remember, search wasn’t always colored by ads, either.

› Code written by GenAI is very likely a legal liability that could compromise patents, IP, and real value, as I warned in 2023: https://embeddedcomputing.com/technology/ai-machine-learning/using-generative-ai-for-code-can-be-a-big-risk

› The environmental impact is a huge problem and a completely avoidable PR crisis.

Companies really need to examine whether the marketing value of hopping on the hype train of GenAI and LLM chatbots is worth the exposure and risk. Perhaps that accounts for the 95 percent failure-to-launch rate for AI pilots that MIT reported last year. Or maybe it’s just that LLMs don’t work.

Accentuate the Positive

Let’s leave LLMs where they belong: behind us, and consider more positive AI tools.

The last year has been all about Edge AI, and I’ve seen some incredible innovations in powerful, compact processing and in energy efficiency. This gives me hope, and that is exactly what I mean when I talk about “Practical AI Tools.”

I get nervous when I hear about companies trying to tap into LLMs from the edge because there is no current use case for this.

Small Language Models, when trained on specific data sets for specific uses, can be very useful at the edge and elsewhere in the chain of work, and I encourage engineers to focus on these SLMs and how they might work to enhance operations at the edge, in vehicles, in the factory and warehouse, and even in homes or hospitals. (You’re really going to have to get serious about privacy and security, but that’s another column.)

The hottest thing in AI right now is Physical AI. Jensen Huang of NVIDIA spent most of his CES keynote talking about it, and now every analyst and corporate marketer wants to talk about how they’re leveraging it.

Hey. Physical AI is just embedded computing. We can acknowledge that, right? You’ve all been creating Physical AI your entire career.

If we think AI really has value to enterprise operations, makes the world easier to navigate, and (perish the thought) actually improves the living conditions of people all over the world, we should leave the hype cycle behind. Products solve problems – they shouldn’t create them.

It’s time to put away the LLMs and the GenAI toys. Keep innovating and developing SLMs and smart embedded engineering at the edge and in the server, from the factory to the refrigerator, and we’ll see everyone win.

I rest my case.

QUESTION: What is the technological innovation or improvement that you are most looking forward to seeing implemented in 2026?

The Everywhere Companion Chip: FPGAs Powering the Next Wave of AI

The innovation I am most excited to see accelerate in 2026 is the growing recognition of the low power FPGA as the everywhere companion chip for AI systems. Modern AI architectures increasingly rely on a team of processors because no single device can meet all the requirements of real-time intelligence. The primary processors – GPUs, custom AI accelerators, and CPUs – are the critical system brains. They increasingly depend on a class of support silicon, which we at Lattice call a “companion chip”, that enables them to function securely, efficiently, and deterministically. This companion chip layer is where low power FPGAs shine.

Across both datacenter AI and physical AI, FPGAs provide the essential support functions for the main processors: boot, power sequencing, security, control, I/O expansion, board and power management, leak detection, bridging, sensor and signal aggregation and fusion, time-critical decision loops, preprocessing, and targeted hardware acceleration. In today’s disaggregated datacenter architectures – where AI servers are split into processor boards, networking cards, storage blades, security cards, and power and cooling modules – FPGAs show up everywhere. We’ve gone from tens to hundreds of FPGAs per rack, as operators scale out and redesign for higher density and resiliency. This companion role is becoming foundational as AI workloads push system-level complexity higher and higher.

The same trend is accelerating in physical AI. Whether it’s humanoids, industrial robots, industrial automation and logistics, robotaxis, autonomous vehicles and drones, medical, aerospace and defense, or AR/VR wearables, intelligence is moving closer to the sensors where data is created. A single robot can have dozens of motors and multiple vision sensors, each requiring high-precision, low latency control. FPGAs sit beside those sensors and actuators to merge high-bandwidth streams, synchronize real-time motion, and deliver deterministic responses that software running on a microcontroller simply can’t guarantee. In many cases, they also serve as the primary compute for smaller, contained AI models used in HMI, robotics, and industrial equipment, typically under 1 TOPS and under 1 Watt.

Security is another area where the companion-chip concept becomes critical. From the first microseconds after power-on through end-of-life, customers need trusted operation. FPGAs enable robust hardware root of trust, secure boot enforcement, and flexible cryptography that can evolve with emerging standards like CNSA 2.0 and post-quantum algorithms. Because they are field-programmable, they allow security architectures to adapt over time – an essential requirement for long-lived platforms in the datacenter, communications, industrial, robotics, and aerospace and defense markets. On the quantum side of the world, the imperative of protecting against “store now, decrypt later” is real and being implemented by leading communications and compute OEMs today.

Connectivity and video processing are also seeing major momentum. New sensor types, new link protocols, and new system fabrics are emerging rapidly. Vision is branching out from pure image sensors to include lidar, radar, and infrared cameras. FPGAs help customers bridge these interfaces, customize data paths, and manage real-time video pipelines within tight power and thermal budgets. This is increasingly important in robotics, automotive vision, factory automation, and telecommunications equipment, where system-level performance depends as much on the glue logic as it does on the main processors.

All of these trends reflect a broader industry shift. For years, innovation focused on scaling top-of-stack performance. The hardest problems now live inside real-world intelligent systems – where determinism, low latency, security, precision, and efficiency matter as much as raw throughput. Architectures that can’t meet these constraints will become bottlenecks. Low power FPGAs, whether used as primary compute or as companion chips, solve these problems, and make the mainline processors look better. They deliver the real-time behavior, adaptability, and secure control that modern AI systems require.

That’s why 2026 is a pivotal year. The role of the FPGA as the everywhere companion chip – across datacenter AI and physical AI – is becoming unmistakably clear. And the industry is starting to treat this layer not as an afterthought, but as a foundational enabler of next-generation intelligent systems.

QUESTION: What is the technological innovation or improvement that you are most looking forward to seeing implemented in 2026?

The Cyber Resilience Act is Coming. Is your Hardware Ready?

Cybersecurity remains a major challenge for governments and enterprises globally, as can be seen in the recent $1B+ hack on JLR (Jaguar Land Rover). To address this, the EU introduced the Cyber Resilience Act (CRA), which will transition to implementation and enforcement across 2026.

For anyone not familiar with the CRA – it is a new EU law designed to improve the cybersecurity of products that contain software and network connectivity. It requires companies designing these products to make them secure by design and to keep them secure throughout their entire lifecycle.

The Act introduces cybersecurity rules for these products sold within the European market. Its aim is to improve security by design, making sure exploitable vulnerabilities are fixed across product lifecycles, components are kept updated, and serious incidents are reported quickly and with a level of transparency. The availability of secure-by-design, memory-safe systems enables technological innovation and compliance to coexist, without large overheads in both cost and time.

What are the main challenges to implementing CRA?

Cyber-attacks have become the dominant security threat facing businesses, consumers, governments and regulated industries.

‘Memory Safety’ sits at the centre of this risk, with industry analyses from leading platform providers consistently showing that the majority of serious vulnerabilities (>70%) originate from memory safety bugs. These bugs are notoriously difficult to detect and expensive to eliminate through static and runtime checks, making the risk both high and unpredictable.

Current solutions available to OEMs focus on a variety of approaches. Hardware-based isolation within an SoC enables security through limited physical separation, but in reality it depends on near-perfect software, leading to fragmented execution environments and increased system complexity. It also results in poor interoperability across trust domains, creating additional vulnerabilities at those boundaries. Memory-safe languages can improve the security of new code, but they are often impractical for migrating the millions of lines of existing legacy C/C++ code and limit the use of external non-memory-safe software libraries.
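To make that bug class concrete, here is a minimal illustrative C fragment (our example, not from SCI’s materials) of the kind of out-of-bounds write that static analysis routinely misses but that hardware-enforced memory safety can trap at run time:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical packet handler: the length byte arrives over the wire,
 * so an attacker controls it. A classic out-of-bounds write (CWE-787). */
static void handle_packet(const uint8_t *payload, uint8_t claimed_len)
{
    uint8_t buf[32];

    /* BUG: claimed_len can be up to 255, but buf holds only 32 bytes.
     * On a conventional MCU this silently corrupts adjacent memory; on a
     * hardware memory-safe part the errant store faults immediately and
     * the failure is contained to the offending compartment. */
    memcpy(buf, payload, claimed_len);

    printf("first byte: 0x%02x\n", buf[0]);
}

int main(void)
{
    uint8_t packet[64] = { 0xAB };
    handle_packet(packet, 64);   /* lies about the length */
    return 0;
}
```

A bounds check (claimed_len <= sizeof buf) fixes this one instance; hardware enforcement is what removes the class, legacy code included.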

How does SCI Semiconductor help with CRA compliance?

SCI have tackled the issue of memory safety from the inception of our first product, ICENI™, which has been designed to be the world’s most cyber-secure microcontroller (MCU). ICENI is secure-by-design, eliminating entire classes of vulnerabilities through hardware-enforced memory safety. It is a family of highly secure 32-bit MCUs tailored for low-power embedded applications, feature-aligned to legacy MCUs to enable seamless migration to memory-safe computing. This brings the benefits of secure MCUs to a wide range of applications: from critical infrastructure, smart energy, aerospace & defence, and Industry 4.0 to transportation, telecommunications, and autonomous drones.

ICENI Secure-by-design mapping to CRA requirements

• Memory safety eliminates the most common root cause of critical exploits

• Robust isolation ensures that a single bug cannot compromise the whole system

• Compartmentalisation ensures privilege escalation is impossible

• Strong isolation boundaries prevent lateral movement

• Least-privilege execution blocks unauthorized operations

• Secure boot ensures only authenticated software runs

• Explicit compartment boundaries define trust relationships

• Architecture is analysable and auditable

• Security properties are enforced by the system, not developer discipline alone

• Compartmentalisation prevents system-wide compromise

• Memory-safe execution blocks Return-Oriented Programming (ROP) exploitation

• Failures are contained and recoverable

Who is SCI Semiconductor?

SCI Semiconductor enables proven robustness for security-centric applications by delivering ultra-secure computing. As a fabless semiconductor company, we provide sovereign-capability solutions designed and implemented in the UK and fabricated in Dresden, Germany. Our ICENI roadmap delivers a memory-safe future for compute, combining the highest levels of security with code reuse, simpler development, and streamlined lifecycles. Through established partnerships with government, world-renowned technology leaders, and academia, we bring trusted, scalable security to AI/ML, control systems, and mission-critical applications across critical infrastructure, industrial, and defence sectors.

https://www.scisemi.com

Innatera’s Pulsar Delivers Brain-Inspired Computing to Power-Constrained Edge AI Devices

The demand for always-on Edge AI workloads is not going away. Continuous processing creates a constant challenge, forcing many system designers to compromise between responsiveness, accuracy, and battery life. To solve this challenge, Innatera has developed Pulsar, a brain-inspired solution enabling pattern recognition that alters how sensor data is processed.

Pulsar Neuromorphic Microcontroller

Over seven years and six generations of silicon, Innatera designed Pulsar as a new approach to Edge AI computing by mimicking the brain’s pattern recognition structures. Pulsar is a neuromorphic microcontroller permitting continuous sensor data processing in power-constrained environments.

According to Sumeet Kumar, co-founder and CEO of Innatera, it reduces energy consumption by around 500 times and processes at speeds 100 times faster compared to conventional approaches, all within a compact 2.8 x 2.6 mm chip (Figure 1). The chip utilizes a new heterogeneous computing architecture and runs spiking neural networks (SNNs) that encode information in the timing of discrete events (spikes) rather than continuously extracting 64-bit data values. This makes the AI models about 100 times smaller than generalized AI models.

Pulsar Architecture and Capabilities

The Pulsar chip is considered a microcontroller and not just an AI accelerator. It integrates three separate computing fabrics:

› Brain-inspired fabric with various energy-efficient silicon neurons and synapses for parallel processing

› Conventional deep neural network accelerator for conventional AI workloads

› RISC-V CPU subsystem for system management

Broad sensor compatibility is engineered into Pulsar to support devices such as low-resolution cameras, radar sensors, microphones, and inertial measurement units. This is important for applications including consumer electronics (audio recognition), IoT/smart home (robust human presence sensing without a camera), industrial anomaly detection, and wearable devices.

Software Development Kit (TAMO)

TAMO integrates with PyTorch, giving engineers a standard machine learning framework and a turnkey toolchain to map models onto the chips without knowledge of the chip’s inner workings.

The pairing of Pulsar and TAMO overcomes conventional trade-offs by combining high accuracy, complex application support, and ultra-low power performance, enabling a streamlined development process without requiring neuromorphic chip expertise.

Technology in Use

During Computex 2025, Joya, a consumer electronics leader, sought out Innatera for its energy-efficient processing capabilities to accelerate the deployment of neuromorphic-powered consumer electronics. Kumar said the goal is to better integrate AI at the edge while overcoming the challenges of bringing this technology to mainstream consumer devices.

Kumar discussed a solution that can accurately detect human presence using a radar sensor connected to Pulsar. The device can differentiate between humans and non-humans such as pets or moving bushes, yielding far fewer false alarms on doorbell cameras. When integrated into doorbells, battery life has been extended from three months to 18 months, as the processor and camera stay off until the sensor detects a human, even from minuscule movements such as heartbeats.

By mimicking how brains interact with environmental stimuli, Innatera has begun to reshape how sensors collect and store information. Its Pulsar technology allows for complete devices that do not have to sacrifice functionality, performance, or responsiveness to battery constraints.

FIGURE 1: Image of the chip (2.8 x 2.6 mm).

Optimizing LDO Headroom Control with a Current Referenced Switching Regulator Design – Part 1: Noise Sources, Impact, and Strategies

This article explores the various sources of noise in switching regulators and their impact on different analog signal chain components. It highlights several noise mitigation strategies, including the use of low dropout (LDO) regulators as effective post-regulation filters. The article also shows a range of solutions from Analog Devices (ADI) that deliver optimized LDO efficiency across varying load conditions and output voltages while providing good power supply noise rejection. One solution offers a new method for LDOs to control the headroom provided by switching regulators with current reference architectures.

Introduction

Designing an efficient and low-noise power solution is essential for noise-sensitive systems that utilize high-performance analog signal chains. However, noise sensitivity varies between systems and across different frequency ranges. Some applications, such as ultrasound imaging, are particularly susceptible to low-frequency or 1/f noise. Systems with high-performance data converters are notably vulnerable to intermodulation distortion, where fundamental output ripple can interact with the carrier signal, generating sum and difference products. These unwanted frequency sideband components can significantly degrade both the signal-to-noise ratio (SNR) and the spurious-free dynamic range (SFDR) of the data converter. Additionally, electromagnetic interference (EMI) is a critical factor, especially in systems that must comply with stringent EMI standards and certifications.

[Figure: Functional block diagram of the ADP5003 for adaptive headroom control.]

Figure 1 shows the noise frequency spectrum of a typical buck regulator operating in steady-state pulse width modulation (PWM) operation.

Additionally, the fundamental ripple and its harmonics introduce strong spurious energy across the noise spectrum. The fundamental ripple refers to the residual AC voltage present at the output of a switching regulator. It is coherently correlated with the regulator’s switching operation, with its fundamental frequency matching the switching frequency of the converter. This artifact can significantly impact data converters by modulating the analog input carrier, resulting in unwanted sidebands that degrade both SFDR and SNR performance, as shown in Figure 3.

Buck regulators typically generate low-frequency broadband noise that primarily originates from reference noise. This can lead to phase noise issues in sensitive RF components, such as wideband phase-locked loop (PLL) synthesizers with an integrated voltage-controlled oscillator (VCO), as shown in Figure 2.

A third noise region involves high-frequency harmonics, which arise from voltage ringing at the switch node. This ringing is caused by the combination of fast switching transitions (di/dt) and parasitic inductance within the regulator’s input loop, as shown in Figure 4, further contributing to EMI and signal integrity challenges; this noise can also couple parasitically to the regulator’s output.

Addressing Noise Issues

Low-frequency noise, particularly in the 1/f region, is effectively addressed by the Silent Switcher® 3 (SS3) architecture, which offers excellent noise performance in this region.

Fundamental ripple can be mitigated using several techniques. One approach is the use of an RC filter, which is simple but comes with certain trade-offs. To achieve a sufficiently low 3 dB cutoff frequency that effectively attenuates ripple, a large capacitor and a small resistor (R) are needed. However, this configuration can lead to considerable power loss due to the series resistor, making it less efficient for many applications. That said, it may still be acceptable in scenarios where the supply current is relatively low. While the roll-off rate is limited to 20 dB per decade, a key advantage of this method is that it does not require any magnetic component.

FIGURE 1: Buck regulator output spectrum.
FIGURE 2: Phase noise of a wideband PLL synthesizer with an integrated VCO.
FIGURE 3: 16-bit, 125 MSPS high-speed ADC fast Fourier transform.
FIGURE 4: Buck regulator’s input current loop and switch node voltage waveform.

An LC filter is also a common and efficient approach. The cutoff frequency is typically designed to be at least a decade below the switching frequency. It offers a steeper roll-off of 40 dB per decade, providing better attenuation. However, designing an LC filter requires careful attention, particularly to resonance effects, which can unintentionally amplify noise at specific frequencies instead of attenuating it. Both passive filter approaches will impact voltage output accuracy and transient performance. Figure 5 depicts the placement of RC and LC filters following the output stage of a switching regulator.
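As a quick numeric sketch of the two passive options just described (component values are illustrative, not taken from the article):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI  = 3.141592653589793;
    const double fsw = 2.0e6;   /* assumed 2 MHz switching frequency */

    /* RC filter: fc = 1 / (2*pi*R*C); R is kept small to limit loss. */
    const double R = 0.5, C = 4.7e-6;
    const double fc_rc = 1.0 / (2.0 * PI * R * C);

    /* LC filter: fc = 1 / (2*pi*sqrt(L*C)), with 40 dB/decade roll-off. */
    const double L = 1.0e-6, C2 = 10.0e-6;
    const double fc_lc = 1.0 / (2.0 * PI * sqrt(L * C2));

    printf("RC cutoff: %.1f kHz, LC cutoff: %.1f kHz (target < %.0f kHz)\n",
           fc_rc / 1e3, fc_lc / 1e3, fsw / 10.0 / 1e3);

    /* The RC penalty: series loss P = I^2 * R, e.g. 0.5 W at 1 A. */
    printf("RC series loss at 1 A: %.2f W\n", 1.0 * 1.0 * R);
    return 0;
}
```

Both example filters land roughly a decade below the assumed switching frequency; the printout makes the RC approach’s dissipation penalty at higher load currents explicit.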

An LDO with a high gain bandwidth product (GBW) can effectively reject fundamental ripple in the megahertz range while also delivering excellent low noise performance. However, trade-offs such as maintaining an adequate power supply rejection ratio (PSRR) and overall efficiency must be carefully considered. Using an LDO as a post-regulation stage offers advantages over passive filters, including improved output voltage accuracy and better transient response. To achieve an optimal solution, it’s important to carefully balance the VIN – VOUT headroom with the LDO’s PSRR characteristics.

High-frequency harmonics – typically in the range of 100 MHz and above – can be effectively attenuated using ferrite beads. These components exhibit resistive characteristics at targeted high frequencies, making them well-suited for suppressing such high-frequency noise. However, it’s important to note that ferrite beads come with certain complexities, such as resonance effects and impedance variations under different load conditions. These factors must be carefully evaluated during design.1

To achieve superior high-frequency noise performance, Silent Switcher architectures can be utilized. These designs effectively minimize EMI by significantly reducing high-frequency ringing at the switch node, making them a highly robust solution for noise-sensitive applications.

Switching Regulators Utilizing LDOs to Enhance Output Noise Performance

LDOs are commonly used after a switching regulator for post-regulation to filter out noise artifacts at certain frequency ranges. LDOs are typically very effective at rejecting low-frequency noise, often up to several hundred kilohertz. However, high-gain-bandwidth LDOs, such as the LT3045, extend this capability into the megahertz range, offering superior PSRR performance. This device is a 20 V, 500 mA high-performance, ultralow noise, ultrahigh PSRR regulator, making it ideal for noise-sensitive applications. Compared to passive filters, LDOs offer several advantages, including higher output voltage accuracy, enhanced stability, and superior transient response.

One of the key parameters of an LDO used as a post-regulation filter is its PSRR. PSRR quantifies how effectively the regulator suppresses or attenuates noise present on the input supply across a range of frequencies, preventing it from propagating to the output and compromising voltage integrity.

However, PSRR is a function of both load current and headroom voltage, the difference between input voltage and output voltage. Load current plays a crucial role in influencing the open-loop gain of an LDO’s error amplifier, and thus directly impacts its PSRR performance. Under light load conditions, the pass element exhibits higher impedance, which shifts the pole formed with the output capacitor to a lower frequency. This shift enhances the LDO’s ability to reject power supply ripple more effectively.

In contrast, under heavy load conditions, the error amplifier’s output impedance decreases, along with its open-loop gain. This reduction in gain leads to a drop in PSRR, particularly in the frequency range between DC and the unity-gain bandwidth of the feedback loop.

As headroom decreases, the gain of the error amplifier is reduced, and this effect becomes more pronounced with increasing load current. As a result, PSRR performance deteriorates under these conditions.2

LDOs are highly effective as post-regulator filters, but their performance is closely tied to both voltage headroom and load current, which must be carefully managed. While increasing headroom can improve power supply ripple rejection, it also leads to greater power dissipation, especially at higher load currents, resulting in reduced efficiency. System designers can strike an optimal balance between effective noise filtering and sufficient voltage headroom to maintain high efficiency. This balance is key to achieving both performance and power-saving goals in the overall design.
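To put numbers on that trade-off, here is a minimal sketch at an assumed operating point (ground-pin current ignored, since it is usually small for modern LDOs):

```c
#include <stdio.h>

/* LDO dissipation is roughly (VIN - VOUT) * ILOAD, and efficiency is
 * roughly VOUT / VIN when ground current is negligible. */
int main(void)
{
    const double vout  = 3.3;   /* LDO output voltage, V (assumed) */
    const double iload = 0.5;   /* load current, A (assumed) */
    const double headrooms[] = { 0.3, 0.5, 1.0, 2.0 };

    for (size_t i = 0; i < sizeof headrooms / sizeof headrooms[0]; i++) {
        double vin   = vout + headrooms[i];
        double pdiss = headrooms[i] * iload;  /* W burned in the pass device */
        double eff   = 100.0 * vout / vin;
        printf("headroom %.1f V: dissipation %.2f W, efficiency %.1f %%\n",
               headrooms[i], pdiss, eff);
    }
    return 0;
}
```

More headroom buys PSRR but costs watts, which is exactly the tension the adaptive headroom schemes described next are designed to manage.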

Optimizing Efficiency and PSRR Performance

One approach is based on dynamic changes in the load current. The ADP5003 low noise micropower management IC integrates a high efficiency 3 A buck regulator in the first stage of conversion, followed by an ultralow noise 3 A LDO, to remove the switching ripple and noise. It offers an adaptive headroom control configuration that delivers enhanced efficiency and thermal performance while minimizing noise, making it well suited to high-speed data converters and RF transceivers.

In adaptive mode, the LDO dynamically adjusts its headroom by internally regulating the buck converter’s output voltage based on the LDO’s load current. This ensures optimal efficiency and noise performance. Alternatively, the ADP5003 can operate in independent mode, where the buck and LDO function separately with their output voltages set individually using external resistor dividers.

Figure 6 shows adaptive headroom control for the whole range of the LDO load current. The x-axis is the load current, while the y-axis is the headroom voltage of the LDO.

The headroom profile in adaptive headroom control is configured to maintain a consistent PSRR across varying load conditions while also enhancing the overall system efficiency. This is shown in Figure 7.

Another approach is based on dynamic changes in VOUT. Voltage input-to-output control (VIOC), a key feature in select ADI LDOs, improves system efficiency by automatically adjusting the switching regulator’s output to maintain a defined headroom voltage. While VIOC does not automatically select the best PSRR, users can manually define the headroom voltage to achieve the desired PSRR performance for specific applications.

An example is the LT3045-1, which has a VIOC feature. This device is a 20 V, 500 mA, ultralow noise, ultrahigh PSRR linear regulator. Figure 8 illustrates a typical VIOC application, where it is used to post-regulate the output of the LT8608 buck regulator. The VIOC voltage is configured to 1 V, with the LDO’s maximum input voltage limited to 16.5 V. It also illustrates how the input-to-output differential voltage can be easily configured using resistor dividers, allowing designers to tailor the balance between PSRR and power dissipation to suit specific application requirements.

Simple LDO Headroom Control Using Switching Regulators with Current Reference Architecture

Current reference architecture is a design approach in which a precise current source, rather than a traditional voltage reference, serves as the core element for regulating the output voltage. It has a unity-gain error amplifier, and the output voltage can be set by a single resistor. The approach is particularly advantageous in linear regulators and is increasingly being adopted in switching converters to meet the demands of high-performance applications. Figure 9 shows this architecture in a buck IC.

ADI utilizes a current reference architecture in several of its linear regulators, such as the LT3080 and LT3045, to achieve high precision and low noise. The LT3080 is an adjustable 1.1 A low dropout regulator designed with a precision current source and voltage follower, enabling it to support applications that demand high current and output adjustability down to 0 V. Highly integrated switching converters such as the LTM4653 – a 58 V, 4 A step-down µModule® regulator – and those based on SS3 technology incorporate current reference architecture to enhance low noise performance and reduce EMI while maintaining high efficiency and small solution size.

FIGURE 6: Adaptive mode headroom vs. load current.
FIGURE 7: LDO PSRR vs. frequency.
FIGURE 8: Typical LT3045-1 post-regulating application.

The current reference architecture benefits are as follows:

› It enables output regulation down to 0 V, which is hard to achieve with a traditional voltage reference.

› It simplifies output voltage setting by using a single resistor instead of the two resistors required by a traditional voltage reference, saving component count and space (see the sketch below).3

› It delivers consistent performance across the output voltage range: because it operates at unity gain, bandwidth and transient response remain stable regardless of output voltage.

With ADI’s advanced SS3 technology, the output noise (0.1 Hz to 100 kHz) remains consistently low across the entire output voltage range, ensuring stable performance regardless of the output voltage level.
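As a minimal sketch of the single-resistor setting noted in the list above (assuming the nominal 10 µA SET-pin current used by parts like the LT3080):

```c
#include <stdio.h>

/* With a current reference, VOUT = ISET * RSET: one resistor sets the rail,
 * and RSET = 0 (SET pin grounded) gives regulation down to 0 V. */
int main(void)
{
    const double iset = 10e-6;   /* SET-pin current, A (assumed nominal) */
    const double targets[] = { 0.0, 1.8, 3.3, 5.0 };

    for (size_t i = 0; i < sizeof targets / sizeof targets[0]; i++)
        printf("VOUT = %.1f V -> RSET = %.0f kOhm\n",
               targets[i], targets[i] / iset / 1e3);
    return 0;
}
```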

Normally, LDOs with VIOC capability are not designed to be paired with SS3 switching regulators because SS3 regulators don’t have the conventional FB pin. Figure 10 shows a new architecture where a current source reference switching regulator is used to generate an output voltage based on the resistor between the switcher SET pin and the LDO output.

By utilizing DC-to-DC converters with current source reference features, a function that is similar to the VIOC feature of advanced LDOs can be implemented in a clever and efficient manner. In this setup, the first stage switching converter uses the current source reference at its SET pin and connects it through a resistor to the output voltage of the second-stage LDO, enabling dynamic headroom control and improved noise performance.
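Numerically, the Figure 10 arrangement works because the switcher’s SET-pin current flowing through the resistor to the LDO output makes the buck rail ride a fixed offset above the LDO rail. A sketch with assumed values:

```c
#include <stdio.h>

/* VBUCK = VLDO + ISET * R, so the headroom ISET * R stays constant and
 * tracks the LDO output wherever it is programmed. */
int main(void)
{
    const double iset = 10e-6;  /* SET-pin reference current, A (assumed) */
    const double r    = 50e3;   /* resistor from SET pin to LDO output, ohms */
    const double headroom = iset * r;   /* 0.5 V with these values */

    for (double vldo = 1.0; vldo <= 5.0; vldo += 1.0)
        printf("VLDO = %.1f V -> buck output = %.1f V (headroom %.2f V)\n",
               vldo, vldo + headroom, headroom);
    return 0;
}
```

Choosing R thus sets the PSRR/dissipation operating point once, and the tracking then holds across output voltage changes.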

Conclusion

Switching regulator noise can affect analog signal chain components in different ways, depending on which frequencies each component is most sensitive to. Various filtering techniques can be applied, tailored to the specific frequency ranges the system needs to attenuate. Using an LDO is another effective approach, but it requires careful consideration of the trade-offs between PSRR and voltage headroom, which dictate efficiency, especially under dynamic output voltage or varying load conditions.

Part 2 will focus on optimizing LDO headroom control through current-referenced DC-to-DC converter design. It will cover practical implementations, circuit simulations, and performance evaluations – highlighting key considerations for noise-sensitive applications.

References

1. Aldrick Limjoco and Jefferson Eco. “Ferrite Bead Demystified.” Analog Dialogue, Vol. 50, February 2016.

2. Glenn Morita. “Understand Low-Dropout Regulator (LDO) Concepts to Achieve Optimal Designs.” Analog Dialogue, Vol. 48, December 2014.

3. Yu Lu and Hugh Yu. “Low Noise Silent Switcher μModule and LDO Regulators Help Improve Ultrasound Noise and Image Quality.” Analog Dialogue, Vol. 56, April 2022.

Kyosuke Shimo joined Analog Devices Japan in 2022 as a new graduate and currently serves as a field applications engineer in the Industrial Customer Solutions Group. He supports power products and works closely with customers to address technical challenges and deliver innovative solutions.

Ino Lorenz Ardiente currently serves as a power architect engineer in the Power Solutions Group at Analog Devices Philippines. He gained more than six years of experience in the design, testing, and evaluation of high-power AC-to-DC and DC-to-DC converters before joining ADI in 2025.

Aldrick S. Limjoco currently works as a senior manager, power architect under the Power Solutions Group at Analog Devices Philippines. Since joining ADI in 2006, he has taken on diverse engineering roles focused on power management, including design evaluation, product applications, and applications research.

FIGURE 9: Current source reference architecture of a buck IC.
FIGURE 10: A block diagram of a buck regulator with current source reference and LDO for headroom control.

QUESTION: What is the technological innovation or improvement that you are most looking forward to seeing implemented in 2026?

Digitally Agile Radar – Direct RF Sampling Opens up New Radar Capabilities

Looking toward 2026, one of the most impactful innovations in radar system design is the increasing adoption of digitally agile, direct RF sampling solutions, replacing traditional analog RF architectures.

Radar systems are under pressure to become more compact, agile, and precise – particularly in demanding applications like phased-array systems for surveillance and defense. Traditional architectures, based on analog up/down-conversion and intermediate frequency (IF) sampling, add complexity and limit adaptability.

As this shift toward digital RF processing is increasingly adopted across next-generation radar platforms, Enclustra has developed the digitally agile and compact XRU50 module, based on AMD’s Zynq™ UltraScale+™ RFSoC, to address the specific demands of radar applications.

Why Direct RF Sampling and High Sampling Rates Matter

Traditional radar architectures rely on analog up- and down-conversion between RF and intermediate frequency (IF) stages. These designs require multiple components, increase system size, and introduce calibration challenges.

The Andromeda XRU50 RFSoC radically simplifies this by supporting direct RF sampling at up to 10 Gsps and an operating frequency range of DC to 6 GHz. These high sample rates allow for larger operating signal bandwidths, and the high operating frequency eliminates the need for analog up and down-conversion in many use cases, resulting in:

› Superior target discrimination

› Enhanced interference and jamming immunity

› Simplified hardware architecture

› Reduced system footprint and power draw

› Lower component count and integration cost

› Improved calibration and deterministic phase coherence

These hardware advantages enable radar systems that are easier to scale, maintain, and deploy across various operational environments.
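As a back-of-the-envelope illustration of what direct RF sampling at these rates implies (idealized Nyquist arithmetic, not Enclustra specifications):

```c
#include <stdio.h>

/* First-Nyquist bandwidth at a given sample rate, and the Nyquist zone a
 * carrier lands in; a zone-1 carrier needs no analog conversion at all. */
int main(void)
{
    const double fs   = 10e9;   /* ADC/DAC sample rate: 10 Gsps */
    const double f_rf = 3.2e9;  /* example S-band radar carrier, Hz */

    double bw = fs / 2.0;               /* usable bandwidth: 5 GHz */
    int zone  = (int)(f_rf / bw) + 1;   /* which Nyquist zone */

    printf("first-Nyquist bandwidth: %.1f GHz\n", bw / 1e9);
    printf("%.1f GHz carrier is in Nyquist zone %d: %s\n", f_rf / 1e9, zone,
           zone == 1 ? "sample directly, no up/down-conversion"
                     : "undersample or mix down first");
    return 0;
}
```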

Phased-Array Radar: Powering Precision and Agility

One of the key application areas for the Andromeda XRU50 RFSoC is phased-array radar, a technology that electronically steers beams without moving mechanical parts. From drone-based target acquisition to ground-based surveillance, phased array radar demands tightly synchronized transmit and receive channels, real-time beamforming, and scalable channel integration.

The Andromeda XRU50 RFSoC supports these requirements through:

› Multi-Tile Synchronization (MTS) and Multi-Device Synchronization (MDS) to align sampling across multiple RF tiles and devices

› Deterministic latency using a fully featured clocking architecture

› High-speed ADC/DAC sampling clocks for wide signal bandwidths and fine target resolution

› Compact form factor enabling easier integration into constrained or mobile platforms

Combined, these features make the Andromeda RFSoC module an ideal choice for developers building distributed or highly parallelized radar systems.

Clocking Architecture: Synchronization Made Simple

At the heart of every radar system lies the need for precise timing and phase alignment. The Andromeda XRU50 RFSoC integrates seamlessly into any clock distribution network and supports both Analog SYSREF and PL SYSREF clocking, as well as all sampling clock inputs at maximum speeds.

The fully featured clocking architecture ensures:

› Tight phase coherence across channels

› Reliable synchronization across multiple modules

These attributes are particularly valuable in beamforming applications, where phase alignment directly affects radar resolution and target-tracking accuracy.

Adaptable Across Radar Platforms

The Andromeda XRU50 RFSoC is well-suited to radar systems that require real-time processing, a wide field of view, and compact integration. Its performance and flexibility make it a strong fit for surveillance and defense applications where precision and responsiveness are critical.

The Andromeda XRU50 RFSoC module is enabling a new generation of agile, high-performance radar systems. By delivering direct RF sampling of the radio spectrum, extremely high sampling rates, robust synchronization, and compact integration, it simplifies the radar architecture while unlocking new capabilities across airborne and ground-based phased-array systems.

With these capabilities expected to play a central role in radar systems deployed in 2026 and beyond, the Andromeda XRU50 RFSoC offers developers a flexible, forward-looking platform as radar technology continues to evolve.

Have a project in mind? We can help explore how these capabilities can be applied to your next design.

QUESTION: What is the technological innovation or improvement that you are most looking forward to seeing implemented in 2026?

Building Quantum Safe Silicon: Why 2026 Marks a Turning Point for Hardware Security

The technology I am most looking forward to in 2026 is the broader adoption of hardware-level, Quantum Safe cryptography being instantiated directly into silicon. For the last several years the conversation around post quantum cryptography (PQC) has centered on algorithms and standardization. That work has been essential, but 2026 is the year when the focus clearly shifts to how we bring those algorithms to life in practical, manufacturable, and power efficient hardware.

For designers building secure silicon, the big breakthrough will be the availability of fully integrated, side-channel-resistant Quantum Safe accelerators that do not simply bolt quantum safe operations onto legacy architectures. Instead, they are designed from the ground up for large key sizes, heavier math, and higher entropy demands. This is fundamentally a hardware problem. The computational and memory footprints of PQC algorithms like Kyber (ML-KEM), Dilithium (ML-DSA) and Sphincs+ (SLH-DSA) place real stress on embedded systems, particularly in automotive, industrial IoT, and data center infrastructure. Software-only implementations are often too slow or too power hungry for these environments. What excites me is the adoption of Quantum Safe security IP cores that will make PQC not only possible but practical.
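For a rough sense of that footprint pressure, here is an illustrative sketch using the standardized object sizes for two common parameter sets (values per FIPS 203/204; the buffer names are hypothetical):

```c
#include <stdint.h>
#include <stdio.h>

/* Object sizes in bytes for two NIST-standardized PQC parameter sets
 * (ML-KEM-768 per FIPS 203, ML-DSA-44 per FIPS 204). For comparison, raw
 * ECDH P-256 public keys and ECDSA signatures are roughly 64 bytes each:
 * PQC objects are one to two orders of magnitude larger, which is what
 * stresses RAM, flash, and bus bandwidth on constrained silicon. */
enum {
    MLKEM768_PUBLIC_KEY = 1184,
    MLKEM768_SECRET_KEY = 2400,
    MLKEM768_CIPHERTEXT = 1088,
    MLDSA44_PUBLIC_KEY  = 1312,
    MLDSA44_SIGNATURE   = 2420
};

/* Static buffers a bare-metal session setup might have to reserve. */
static uint8_t kem_ciphertext[MLKEM768_CIPHERTEXT];
static uint8_t boot_signature[MLDSA44_SIGNATURE];

int main(void)
{
    printf("per-session buffer budget: %zu B ciphertext + %zu B signature\n",
           sizeof kem_ciphertext, sizeof boot_signature);
    return 0;
}
```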

A key innovation will be the movement toward tightly coupled entropy generation and hardware key encapsulation modules. These will ensure that even devices with modest compute resources can establish quantum safe sessions without compromising latency or thermals. Equally important is the emergence of secure enclaves that isolate PQC operations from the main processing fabric, reducing the risk of exposure to fault injection or physical probing attacks. The combination of secure root of trust, hardened PQC accelerators, and real time attestation will form a blueprint for silicon-level quantum resilience.

I am also encouraged by how quickly the industry is converging on practical integration models for PQC in constrained environments. The shift toward hybrid key establishment provides a realistic migration path. Hardware designers will be able to support classical and Quantum Safe algorithms side by side, ensuring backward compatibility while offering future proof protection. In 2026, I expect to see this hybrid model widely adopted in designs for automotive ECUs, secure microcontrollers, and network interface chips.

Secure boot has always been the anchor of device trust. In a post quantum world, that anchor must be significantly stronger. The transition to quantum safe signature verification for boot image signatures is a challenging task because of the increased computational burden and memory requirements. However, with advances in hardware acceleration and optimized arithmetic units, we are reaching the point where Quantum Safe secure boot can be executed in time frames acceptable for automotive, networking, and edge silicon. That is a major milestone.

What excites me even more is how these same hardware capabilities will enable long term secure update mechanisms. Devices in the field will rely on Quantum Safe signatures to validate firmware, configuration data, and even late-stage feature enablement. 2026 will mark the point at which hardware roots of trust regularly support both classical and PQC signature schemes through flexible instruction and key management frameworks. This means a device shipped today can remain protected even as cryptographic standards evolve.

Most importantly, the adoption of Quantum Safe cryptography in hardware represents a cultural shift. PQC security is no longer something added late in the design cycle. It is becoming a primary architectural pillar. That mindset change is perhaps the most meaningful innovation of all. As quantum threats evolve, our silicon must evolve with them. Rambus has been a leader in Quantum Safe IP solutions that provide the foundation for many decades of secure, trustworthy, and resilient electronic systems.

Bart Stevens is Senior Director of Product Marketing for Rambus Silicon IP. He is an expert on embedded security and high-speed interfaces for Physical, Edge and Cloud AI, Automotive, HPC, Data Center, Enterprise, Networking, Wireless, IoT and Mobile applications. He also held previous roles at Rambus as Sr. Director of Product Management as well as Director of Sales for EMEA. Before joining Rambus, Bart held positions at Inside Secure as Vice President of silicon IP and secure communication, and as Director of product management, with responsibility for security chip and silicon IP products. Prior to those roles, Bart managed SafeNet’s OEM networking and wireless HW research and development teams. He has also held product and engineering management roles at Securealink and Philips Semiconductors in the Netherlands. He began his career as an ASIC designer.

QUESTION: What is the embedded trend you think is most likely to become a best practice for engineers and developers in 2026?

Beautiful Embedded Systems are Becoming the Best Practice in 2026

The embedded trend I’m most excited to see become a best practice in 2026 is the focus on crafting beautiful embedded systems through Software Defined Hardware (SDH) principles. Calling an embedded system ‘beautiful’ is a bit unusual, but it’s fair when an embedded system is the result of a well-developed Product Vision that drives a carefully considered, broadly informed Architecture.

It’s not about quick turn-and-burn projects that just meet today’s requirements. A future proof Architecture leans on Software Defined Hardware (SDH) techniques. When a 2026 level Product Vision aligns with a 2026 level Architecture, the result is a long-term work of art – an elegant solution that generates value over its lifespan. Enough about the art and this 2026 renaissance – let’s talk tech.

A beautiful embedded system starts with a Product Vision that addresses near-term goals and intentionally plans for future ones. Think of it as designing today’s product to meet today’s needs while mapping how that same platform can meet tomorrow’s predictable demands. Maybe it’s a shared base platform customized into different SKUs, or a single system gradually enabled over time – regardless, it’s not acceptable to compromise performance.

This broader Product Vision is enabled by Software Defined Hardware. Some argue that provisioning for future capabilities is wasteful, but a forward-looking Product Vision recognizes that a device must evolve to stay relevant. Emerging or incomplete standards, I/O advancements, levels of integration, and the constant push to be “smarter,” “AI-enabled,” or simply “better” are all certainties that demand design-time attention now. Overplanning may look wasteful, but designing only for today guarantees obsolescence tomorrow. A thoughtful Product Vision is the foundation of a beautiful embedded system – one that can grow and adapt on demand for years to come.

Product Vision informs Architecture, and an Architecture becomes beautiful when it meets that Vision in an intentional, adaptable way. Enter Software Defined Hardware. SDH provides the flexibility to change system capabilities without sacrificing performance – a hurdle that once held back multi generational design concepts. Modern SDH based Architectures weave in heterogeneous computing elements, assigning the right tasks to the right processors at the right time. The mix might include CPUs, FPGAs, GPUs, NPUs, and APUs, each activated as the system’s needs evolve.

Is this practical? Thankfully, yes. Device manufacturers have ramped up integration, providing tools, libraries, and flexible power and clocking schemes that extend a product’s lifespan without blowing up cost, size, development-time or power budgets. SDH benefits users, empowers OEMs, and is better for the environment. But it’s not free; the cost is complexity. Architects must understand how to partition functions to the right elements, which means investing time and expertise in simulation, hardware in the loop proofing, profiling, retargeting, and refactoring. It’s challenging work, but the payoff is a robust, future ready Architecture.

“A BEAUTIFUL EMBEDDED SYSTEM STARTS WITH A PRODUCT VISION THAT ADDRESSES NEAR-TERM GOALS AND INTENTIONALLY PLANS FOR FUTURE ONES. THINK OF IT AS DESIGNING TODAY’S PRODUCT TO MEET TODAY’S NEEDS WHILE MAPPING HOW THAT SAME PLATFORM CAN MEET TOMORROW’S PREDICTABLE DEMANDS.”

Now that your beautiful Product Vision and beautiful Architecture have come together to form an embedded system worthy of 2026 – now what? You decide: Rest or actively make the most of your system with CI/CD activities. Either way, you know you’ve delivered a solution that adapts to client needs, keeps them happy, and generates revenue long after deployment – and you look forward to remotely upgrading it in the future.

Need AI features? Define the function and target the FPGA, GPU, or NPU. Need ultra low latency? Move time sensitive tasks from CPU to FPGA. New interface? Add a daughtercard and let the FPGA handle it. Want deeper real world analytics? Connect sensors and process the data wherever it makes sense.

The SDH-based beautiful embedded system is flexibility without compromise, opening the door to long-term, sticky client relationships and new recurring revenue models – from maintenance and support contracts to SaaS-style opportunities.

https://www.fidus.com

Tackling the Documentation Mountain: Using AI to Navigate User Guides

Documentation for semiconductor products and embedded systems software solutions is extensive, often reaching into thousands of pages. Navigating the mountain of documentation is time-consuming and potentially error-prone. This translates into high support costs for the companies selling these products and frustration for engineers.

Recently launched LLM-based AI solutions can address this problem, making it faster and easier for engineers to navigate complex documentation to find the information they need. Vendors using these solutions can reduce support costs by as much as 90%.

Generic AI solutions are not well-suited for this use case. In many cases (Figure 1), company policies don’t allow the use of generic AI solutions. But even if they are allowed, they are not built for technical analysis, and they cannot be trained on private data without risk of leaking confidential information. Furthermore, they are not designed to generate long deliverables like test cases or certification documents.

Solving this problem requires an AI solution that is customized for this use case, trained on an enterprise’s documentation, and that provides enterprise-grade security to ensure confidential information is not leaked to public LLMs.

The Documentation Mountain

The complexity of microprocessors, microcontrollers, SoCs, connectivity ICs, and sensor chips continues to grow. For engineers integrating these chips into an end product, understanding the details of how they work is a significant challenge: they face a mountain of documentation.

This problem is not limited to hardware products. Embedded software solutions such as operating systems, communication stacks, and security libraries are also quite complex. They come with extensive documentation describing the use and integration of these solutions.

Engineers can spend a significant amount of time searching through the various documents provided for a single product to understand how to best integrate these products. This challenge is exacerbated by the fact that, in some cases, not all documentation is consistent. Information found in a user’s guide may be superseded by information found in release notes or errata documents (Figure 2).

This can result in a number of problems, including:

1. Time wasted searching for the right information.

2. Frustration when engineers implement a feature as described in a user guide, only to later find that the errata document shows that a different approach is required, resulting in rework and lost time.

3. Heavy support burden for the company providing the hardware or software solution.

4. In the worst case scenario, companies switch to a different vendor, resulting in lost revenue for the company providing the embedded solution.

Taming the Documentation Mountain

Navigating and understanding complex and extensive user documentation is an ideal use case for LLM-based AI solutions. Modern AI solutions can be trained on a company’s technical documentation and be used to provide engineers using these products with a virtual Subject Matter Expert (SME).

FIGURE 1: An explanation of reasons why generic AI tools aren’t sufficient in assisting with documentation for semiconductor products.

In addition to training on user documentation, an enterprise-grade private LLM solution should provide integration with developers’ tools such as Jira, GitHub, and Confluence. This integration enables automated training of the LLM using the documentation and code bases stored in these tools.

Use of such an SME allows engineers to quickly and easily pinpoint the information that they require to integrate products. This AI-based SME can be used by:

1. Potential customers to quickly understand a product’s features and capabilities.

2. Engineers who are integrating the product to understand interfaces, APIs, and integration requirements.

3. QA teams to verify proper integration and to define test cases and edge scenarios.

4. Vendors’ support team to reduce support costs and more quickly answer customers’ questions.

The Need for a Private LLM

A company’s product documentation is confidential, and many vendors only release this information under a nondisclosure agreement (NDA). To ensure confidential information is not leaked, companies need a private LLM with enterprise-grade security.

Public LLMs, if not used carefully, will use data provided to them in queries to continue to train the model. As a result of these privacy concerns, many enterprises have banned or placed strict limitations on the use of public LLM tools such as ChatGPT.

FIGURE 2: A flow chart demonstrating how technical products can contain hidden bottlenecks, making it difficult to get clear and concise content to developers.

Generic AI solutions also struggle with accuracy, especially across large, complex document sets. If Vendor A wants to use a public LLM to help developers navigate its documentation, it will have several issues to contend with:

› Public LLMs have already been trained on a massive set of information, which undoubtedly includes details on the vendor’s products in question. Some of that public information will inevitably be inaccurate.

› A public LLM may confuse information on Vendor A’s solutions with that of its competitors.

› Public LLMs are not designed to generate structured deliverables like certification documents or test cases.

A private LLM, such as the solution provided by Understand Tech, allows a company to train the LLM on its own data without risk of inadvertently leaking that data to a public LLM. Ideally, the LLM will provide enterprise-grade security features, including the ability to control where data is hosted to ensure compliance with security and privacy regimes such as SOC 2 and GDPR. Other important security features include end-to-end encryption and integration with an enterprise’s single sign-on solution.

Summary

As semiconductor products and embedded software solutions grow in complexity, engineers are finding it more challenging to integrate these solutions into their end products. This results in increased development costs, higher support costs for the vendors providing these products, and frustrated customers.

LLM-based AI solutions trained on vendors’ product documentation can dramatically reduce the time engineers spend searching for information and reduce errors caused by inconsistencies in that documentation. This makes engineers more productive and reduces support costs for the vendors producing these products.

Naama BAK is an entrepreneur with 15 years of experience in tech. He is the founder of Understand Tech, a generative AI platform for enterprises, and Trustii.io, a machine learning platform for data science challenges. He previously held roles at NXP Semiconductors, Orange, and Safran, working in cybersecurity across research, development, product marketing, and business development. He holds a Computer Science Engineering degree and an MBA.


We Must Embrace Innovation Where it Matters Most – on the Fab Floor

The domestic semiconductor industry is at a critical juncture – with substantial investments, supportive policies, and soaring demand for artificial intelligence. Yet as the U.S. ramps up domestic production, I can’t help but wonder: are we on the cusp of unprecedented growth? Or are we teetering on the edge of a cliff?

As processes continue to advance (toward 3nm, 2nm nodes), the margin for error continues to shrink. Yet, high yields and peak efficiency are non-negotiable. Any downtime is tremendously costly – both financially and competitively. It’s time to take the management of these complex facilities seriously. We must leverage the most advanced technologies to optimize operations, boost efficiency, and ensure stringent conditions are consistently met.

IoT, data intelligence, and predictive analytics may hold the key to achieving the precision, reliability, and real-time adaptability needed for our industry to soar.

Digitizing the Facility

Semiconductor fabs generate massive amounts of data from equipment, environmental controls, and production processes.

By digitizing facility services, we can leverage this data for actionable insights. A data intelligence platform like ABM Connect streamlines and displays analytics tailored to answer front-line questions instantly. This solution features an integrated IoT hub for visibility and task validation such as:

› Work completion against planned routes

› Quality performance and inspections

› Recognition patterns and performance trends

› Training compliance

› Safe workplace observations

This provides real-time metrics, robust reporting, and up-to-date KPIs. It also helps tackle compliance and audit challenges while driving continuous improvement.
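As a rough illustration of the kind of roll-up such a platform performs – this is generic logic, not ABM Connect’s actual schema or API – a work-completion KPI like the first item in the list above can be computed from task records along these lines:

from dataclasses import dataclass

@dataclass
class RouteLog:  # hypothetical task record; field names are illustrative
    route: str
    planned_stops: int
    completed_stops: int

def completion_kpi(logs):
    # Percent of planned stops completed, reported per route.
    return {log.route: round(100.0 * log.completed_stops / log.planned_stops, 1)
            for log in logs}

print(completion_kpi([RouteLog("cleanroom-A", 40, 38), RouteLog("subfab-B", 25, 25)]))
# {'cleanroom-A': 95.0, 'subfab-B': 100.0}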

Predictive Maintenance

Traditional reactive or preventive maintenance is expensive and inefficient. Highly specialized, expensive tools like photolithography machines, etchers, and deposition systems are prone to operating problems. A failing laser in an EUV system or a vacuum pump failure can stop a line – and have a material impact on production yield.

Predictive maintenance (PdM) shifts from reactive or scheduled maintenance to a data-driven, just-in-time approach. By leveraging PdM, we can minimize disruptions and extend equipment life. It’s the difference between changing a part or performing a functional validation on a rigid schedule and knowing the moment to act that is optimal for both operations and lifecycle costs. With condition-based repairs, we can cut maintenance costs by 5-25% and optimize spare parts inventory by up to 30%.

Real-time monitoring can be achieved with connected sensors. Wireless and wired sensors (along with AI) can monitor equipment conditions and report to a centralized information system. We can use these sensors to check a variety of key parameters, including:

› Heat Monitoring – Detects heat caused by insulation issues or conduction problems so you can act before discharge events or arc faults.

› Partial Discharge Monitoring – Partial discharge usually isn’t visible, but it destroys insulation over time, which will cause a full and destructive discharge.

› Circuit Monitoring – Measures power and power quality data, including harmonic disturbance in the waveforms and voltage transients (sags and swells) that can damage sensitive equipment.

Using AI and machine learning, we can then use this data to detect patterns or anomalies signaling potential failure risks. For instance, a spike in pump vibration could prompt preemptive action. Over time, this data helps spot asset anomalies and predict equipment failures before they happen, boosting overall equipment efficiency and facility uptime. Detecting issues early prevents damage and can extend equipment life by up to 15%.
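A minimal sketch of that pattern-detection step, assuming a rolling-baseline z-score as the anomaly test – real deployments use far richer models, and the window, threshold, and data here are invented:

from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, z_threshold=3.0):
    # Yield (index, value) where a reading deviates sharply from the
    # rolling baseline of the previous `window` samples.
    history = deque(maxlen=window)
    for i, x in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > z_threshold:
                yield i, x  # candidate precursor to a pump failure
        history.append(x)

# Steady vibration signature, then a sudden spike at the end:
vibration = [1.0 + 0.02 * (i % 5) for i in range(100)] + [1.9]
print(list(detect_anomalies(vibration)))  # [(100, 1.9)]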

But implementing PdM isn’t without hurdles! We need high-quality, accessible data and seamless integration with existing systems. Misinterpreting data without domain knowledge can lead to flawed predictions. Defining the right parameters for failure prediction models is critical, and robust security measures are a must to protect sensitive production data. Fortunately, new solutions are making this process easier and scalable across facilities.

Workforce Gaps

The talent shortage in our industry is no secret. As demand for skilled labor outpaces supply, embracing outsourcing may be the answer to expanding the labor pipeline. This is particularly true if the outsourcing partner has in-depth expertise working within these highly specialized and complex environments. Facilities management partners can help fill talent gaps in construction, operations, and maintenance, provide valuable tribal knowledge, and help ensure safety and quality.

With an experienced team, we can implement the right technologies and innovations to enhance efficiency, cut costs, maintain product quality, and support technology roadmaps.

The demands are unique, and the stakes are high. We can’t afford to be the “cobbler with no shoes” when it comes to running today’s advanced fabs. It’s time to embrace technology and innovation where it matters most – on the fab floor and in the subfab. The future of the U.S. semiconductor industry depends on it.

Joe Cestari brings over 39 years of leadership in high-tech engineering, construction, manufacturing, and supply chain operations, paired with nearly three decades of global business development expertise. Currently serving as Vice President of Semiconductor Operations at ABM, Joe plays a pivotal role in advancing the company’s strategic initiatives in the semiconductor sector. He also contributes to the academic and professional community as a member of the Board of Advocates for the Baylor University School of Engineering and Computer Science.

• MIL/COTS/Industrial Models

• Regulated/Isolated/Adjustable Programmable Standard Models

• New High Input Voltages to 1,200VDC

• AS9100D Approved Facility/ US Manufactured

• Military Upgrades and Custom Modules

Surface Mount & Thru Hole

• Ultra Miniature Component Packages

• MIL-PRF-27/MIL-STD-1553

• QPL Approved Manufacturer

• Transformers - Audio/Pulse/Power/Data-Bus

• Common Mode Chokes & Power Inductors

• Critical Applications for Space, Flight, & Communications

What Developers Need to Know About FPGA-Based Designs

Specialized semiconductors – particularly flexible and high-efficiency options like Field Programmable Gate Arrays (FPGAs) – have incredible potential to support new developments in computing.

FPGAs make possible the low-latency, low-power, and high-performance capacity devices needed to bring solutions like artificial intelligence (AI) and machine learning (ML) applications, fully optimized data centers, and next-generation networking infrastructure to life.

Given their varied capabilities, FPGAs have opened up a world of new possibilities for engineers and developers. Even so, they are not necessarily a “magic fix” for every development challenge. To use FPGAs to their fullest potential, developers must understand their unique strengths and how to leverage them to balance an array of competing factors throughout the design and build process. (Figure 1)

Key Design and Development Considerations

FPGAs are not “plug-and-play” solutions, and capturing their full value depends on having an understanding not just of their potential but also of the variables that determine overall performance:


Space and power

From smartphones to in-vehicle Edge sensors, today’s devices – and the resources available to power them – are getting much more compact. This makes implementing chips increasingly difficult, as developers need to account for power and space constraints from day one to ensure they are making the most of what is available. This requires an early understanding of their design limitations and continuous monitoring of how each change or addition affects the system as a whole. Failing to do so can result in routing congestion, energy waste, and the need for unexpected, time-consuming, and costly redesigns.

By paying close attention to limitations from the start and monitoring allocation along the way, developers can proactively ensure that each design decision aligns with the intended outcome of their process. With this approach, the final product will be more likely to meet the desired specifications at both the system and IP levels.

Thermal limits and waste

Whether a circuit can operate isn’t just a matter of providing enough power; developers also need to account for losses along the way to ensure reliability, longevity, and user safety. With every element added and connection made, leakage currents and static power rise, as does the heat generated within the build.

Mitigating and managing this heat waste must be a top priority, as poor management can result in component failure, performance and efficiency loss, and even potential combustion. As such, understanding and operating within safe thermal limits must be a priority from start to finish.
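The underlying relationship is the standard first-order CMOS power estimate, P ≈ αCV²f + P_static: every added element raises the effective capacitance C and the leakage term, and every clock-rate increase raises f, so heat grows with both. A quick illustrative calculation (all values invented, not taken from any FPGA datasheet):

def total_power(alpha, c_farads, v_volts, f_hertz, p_static_watts):
    # First-order CMOS estimate: switching power plus static leakage.
    dynamic = alpha * c_farads * v_volts ** 2 * f_hertz
    return dynamic + p_static_watts

# 20% switching activity, 2 nF effective capacitance, 0.9 V core,
# 300 MHz clock, 0.5 W leakage:
print(f"{total_power(0.2, 2e-9, 0.9, 300e6, 0.5):.2f} W")  # ~0.60 W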

Operating speed

Timing constraints have direct impacts on the placement and routing of a system’s physical design, as trace distance correlates with resistance, energy waste, and other factors. In the long term, the careful consideration of timing constraints helps developers avoid setup and hold time violations and ensure longevity and reliability.

Similarly, clock domain management and synchronization are key to perfecting an FPGA-based build. Failure to properly manage clock domains alongside traditional time constraints can lead to malfunctions, metastability, and data corruption, as well as impact the efficiency of the build and the circuit’s power needs. Adding these functions later on in the process is often more challenging, as clock domain management has a direct impact on resource allocation and overall build requirements (Figure 2). What’s more, bugs in this area can be difficult to address and significantly delay development time.
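The arithmetic behind those setup-time violations is a simple budget: the clock period must cover the clock-to-Q, logic, and routing delays plus the setup time, so f_max = 1 / (t_clk→q + t_logic + t_routing + t_setup). A toy calculation with invented delays shows why longer routing directly lowers the achievable clock:

def f_max_mhz(t_clk_to_q_ns, t_logic_ns, t_routing_ns, t_setup_ns):
    # Maximum clock rate for a register-to-register path (setup budget).
    period_ns = t_clk_to_q_ns + t_logic_ns + t_routing_ns + t_setup_ns
    return 1000.0 / period_ns  # period in ns -> frequency in MHz

print(f"{f_max_mhz(0.4, 2.1, 1.2, 0.3):.0f} MHz")  # 250 MHz
print(f"{f_max_mhz(0.4, 2.1, 2.2, 0.3):.0f} MHz")  # longer route: 200 MHz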

Ultimately, each of these factors impacts the others in some form or another. Higher power needs may increase the potential for heat waste, timing management influences system size and space constraints, and thermal limits restrict computing capacity. This network of interconnected factors means that the system’s needs will change with every new design and development decision. This underscores just how crucial identifying and proactively avoiding design challenges is when using semiconductors.

FIGURE 1 Bird’s-eye view of an FPGA chip structure, showing how programmable logic blocks are connected via programmable interconnects and outlining key considerations for successful board integration.

FIGURE 2 Key timing specifications within FPGA logic timing.

The Importance of Checking Your Work

Before pushing a newly designed system or device into production, developers need to check that these chip-based designs operate as intended. This testing helps reveal any issues that may have been missed in development, enabling proactive remediation and helping avoid time-consuming and expensive post-launch repairs or replacements. But there are also ways to model, test, and improve upon builds along the way.

By assessing system feasibility early and often, developers can ship their finished designs with more confidence and assurance that they’ll get the job done. Advanced simulation software now allows designers to evaluate the feasibility of designs – and assess the impact of changes – before they’re too far along in the process, saving the time and money associated with redesigns and faulty prototypes.

These tools help developers bring their ideas to life even more quickly and enable more innovative and exploratory design processes. They allow engineers to test theories without worrying about waste, as well as template designs for future iterations of chip-based builds.

Leveraging Semiconductors with Confidence

High-performance semiconductor options – like FPGAs – have transformed the way that we build and interact with computing systems. They offer developers the flexibility, power, and processing capacity necessary to create innovative solutions across various industries.

These benefits are only accessible, though, when semiconductors are incorporated into designs in a thoughtful and balanced manner. As technological solutions become more complex and capable, mastering the art of balanced FPGA-based design is the key to unlocking – and sustaining – the next generation of computing.

Eliminating Costly Factory Downtime with Modern Edge AI: Self-Recovery Mechanisms Are Key

Unplanned production stoppages in factories and logistics operations can lead to significant financial losses, supply chain disruption, and reduced productivity. Conventional IPC maintenance methods, such as dual BIOS or out-of-band (OOB) management, often fail to provide rapid diagnostics, secure recovery, or reliable system stability. ASUS IoT overcomes these limitations with intelligent, self-recovering IPC designs. Disassembly-free troubleshooting, BIOS Smart Recovery, and industrial-grade system stability enable autonomous edge recovery – ensuring secure, reliable, and continuous factory operations while lowering maintenance costs and improving efficiency.

https://embeddedcomputing.com/technology/ai-machine-learning/predictive-maintenence/ eliminating-costly-factory-downtime-with-modern-edge-ai-self-recovery-mechanisms-are-key

QUESTION:

The rise of the OSM module for edge AI product development

Accelerating Edge AI Development with the OSM form factor

How does the Open Standard Module (OSM) help developers move more quickly from concept to deployment in edge AI applications?

OSM modules, such as our new OSM-LF-IMX95, accelerate the path from concept to deployment in Edge AI applications by combining high performance, integrated AI and vision capabilities, and robust connectivity within a compact, ready-to-use form factor. Compliance with the Open Standard Module (OSM) Specification 1.2 and the small 45 x 45 mm size enable straightforward integration into various designs, which minimizes hardware development time.

Leveraging the Power of the NXP i.MX 95 Processor on OSM

How does the NXP i.MX 95 applications processor enhance AI and vision performance while maintaining low power consumption?

The OSM-LF-IMX95 module’s powerful NXP i.MX 95 SoC – featuring a six-core Arm Cortex-A55 processor, dedicated real-time cores, an advanced GPU, VPU, NPU, and image signal processor – empowers developers to efficiently build and deploy sophisticated AI-accelerated vision solutions for tasks like predictive maintenance, object classification, and production line monitoring. Importantly, the module’s direct soldering design eliminates the need for connectors, which greatly streamlines the assembly process and reduces time to market. By providing a feature-rich, cost-optimized, and production-ready platform, the OSM-LF-IMX95 and NXP i.MX 95 SoC pairing enables developers and design engineers to quickly transition from prototype to full-scale deployment in demanding Edge AI environments such as industrial automation, robotics, and advanced vision systems.

Enabling Advanced Vision and AI at the Edge

How do the integrated NPU, GPU, VPU, and image signal processor work together to support demanding AI-accelerated vision applications?

The neural processing unit (NPU) handles complex machine learning tasks such as predictive maintenance and object classification by accelerating inference operations directly at the edge. The graphics processing unit (GPU) provides high-performance rendering and parallel processing for visual workloads, while the vision processing unit (VPU) manages 4K video decode and encode for real-time image and video analysis. The image signal processor optimizes raw image data, enhancing quality and enabling advanced vision features. By combining these specialized processors, the OSM-LF-IMX95 revolutionizes applications such as production line monitoring, robotic vision systems, vehicle autonomy, and other Edge AI workloads in an elegant blend of interoperability that ensures high performance and low power consumption across diverse industrial and mobile platforms.

Staying Ahead in the Edge AI Market

How does the OSM-LF-IMX95 help OEMs and system designers stay competitive in the rapidly evolving edge AI landscape?

The OSM-LF-IMX95 module helps OEMs and system designers stay competitive by delivering the highest levels of performance, security, flexibility, and power efficiency, all in a compact 45 x 45 mm form factor. Crucially, the module’s compliance with Open Standard Module (OSM) Specification 1.2 and integration of the powerful NXP i.MX 95 SoC enable developers to boost the performance of advanced embedded and Edge AI systems while significantly reducing development time. The module’s direct-solder design eliminates the need for connectors, which further accelerates assembly and time to market.

With advanced security features such as the integrated EdgeLock Secure Enclave, the OSM-LF-IMX95 simplifies the implementation of security-critical functions like secure boot, cryptography, trust provisioning, and secure remote management. These benefits collectively ensure that the resulting innovations created by developers meet the strict requirements of modern industrial and mobile applications.

And on top of it all, Tria offers a comprehensive development platform and starter kit, along with a Yocto-based Linux Board Support Package, making design-in and evaluation straightforward. Once again, these features empower system architects and developers to create cost-optimized, production-ready designs quickly, helping them maintain – or indeed seize – an advantage in sectors like industrial automation, robotics, vision systems, and mobile devices.

Managing and Optimizing Soaring Levels of Smart Label Data

As the logistics and transportation industries continue to digitize, the adoption of smart labels is accelerating. These small, Internet of Things (IoT)-enabled devices are modernizing how goods are tracked, monitored, and managed across supply chains.

As enterprises deploy more of these smart labels, they are amassing significant volumes of data that, when properly analyzed, can reveal valuable insights to improve operational efficiency, enhance supply chain security, and enable proactive decision-making. But how can businesses manage and make sense of this growing sea of data, and what does the future hold for data sharing across the broader logistics ecosystem?

The Surge in Smart Label Data

Smart labels are a relatively recent innovation, with commercial deployment only beginning to gain traction in 2023. Despite their novelty, adoption is rising rapidly. According to ABI Research, the global cellular smart label market is expected to grow from 2 million shipped units in 2025 to more than 21 million in 2028, a projected growth rate of 83% (CAGR 2025-2028).1 With this growth comes a tidal wave of data. A single smart label may generate between 0.5 and 1.5 MB of data over its lifecycle, meaning that millions of deployed labels could easily produce terabytes of data each year.
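A quick sanity check of that claim, using the shipment forecast and the midpoint of the per-label estimate (illustrative arithmetic only):

labels = 21_000_000      # projected 2028 shipments (ABI Research)
mb_per_label = 1.0       # midpoint of the 0.5-1.5 MB lifecycle estimate
total_tb = labels * mb_per_label / 1_000_000  # MB -> TB (decimal units)
print(f"~{total_tb:.0f} TB")  # ~21 TB from a single year's labels alone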

The value of this data lies in its ability to provide real-time visibility into the location and condition of goods in transit – an increasingly important need in today’s fast-paced logistics environments. Business intelligence (BI) and analytics platforms are fundamental for interpreting this information. Without tools to aggregate and analyze smart label data, its potential is largely unrealized. When applied effectively, the data can support improved decision-making and operational performance.

For example, a U.S. mobile phone retailer can use smart labels on incoming shipments of iPhones at its central distribution center to better manage the supply chain of phones to retail outlets across the country. By monitoring inventory levels in real time, the company can anticipate stock shortages at specific stores, such as a dwindling supply of black iPhone 16s, and adjust shipping strategies accordingly. This kind of responsive supply chain management can reduce delays, prevent stockouts, and enhance customer satisfaction.

Looking ahead, AI will play a growing role in automating these insights. As smart label data is continuously analyzed by BI systems, AI can be used to trigger appropriate actions – reallocating inventory, adjusting routes, or flagging predictive maintenance needs for transport assets – based on real-time and historical data patterns. Over time, AI systems will become more accurate and proactive, ultimately reducing human intervention and streamlining logistics operations.
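A toy version of that trigger logic makes the idea concrete; the field names, thresholds, and actions below are purely illustrative, not any particular platform’s rules:

def next_action(store_stock, daily_sales, shipment_delayed):
    # Map smart-label telemetry and sales data to a supply chain action.
    days_of_cover = store_stock / max(daily_sales, 1)
    if shipment_delayed and days_of_cover < 2:
        return "reroute nearest in-transit shipment"
    if days_of_cover < 3:
        return "schedule replenishment from the DC"
    return "no action"

# e.g., 45 black iPhone 16s in stock, selling 30/day, inbound shipment delayed:
print(next_action(45, 30, True))  # -> "reroute nearest in-transit shipment"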

Enabling Data Sharing Across the Logistics Ecosystem

Currently, many organizations deploy smart labels independently for their internal tracking needs. In a typical supply chain, a product may be handled by multiple stakeholders, including manufacturers, logistics providers, warehousing firms, and retailers, with each using separate systems and collecting siloed data. This fragmented approach limits the full potential of smart label technologies.

For the logistics industry to fully benefit from smart labelling, a more collaborative model of data sharing is needed. Instead of each stakeholder using their own smart label, a unified system could enable a “single source of truth,” where one label generates shared data accessible to authorized parties throughout the supply chain. This approach would reduce duplication, improve consistency, and facilitate broader operational improvements.

Returning to the mobile phone retailer example, if transport companies, warehouse operators, and retailers could all access shared data from a single smart label, coordination and decision-making would improve. For instance, a courier could reroute deliveries in real time to avoid disruptions, while warehouse teams could better manage unloading schedules based on incoming shipment status.

However, implementing such ecosystem-wide visibility brings challenges.

These include establishing interoperable systems, ensuring cross-border connectivity, addressing network compatibility, and protecting data privacy. Cultural barriers also remain, particularly when it comes to sharing proprietary data with third parties or competitors.

Despite these hurdles, the logistics industry is gradually moving toward greater interoperability and collaboration. As smart label technologies mature and standards emerge, data sharing will become more feasible and secure. Predictive analytics will also gain prominence, enabling enterprises to anticipate supply chain disruptions, optimize asset utilization, and enhance loss prevention strategies.

For enterprises adopting smart labels, the initial focus should be on capturing and interpreting the value of data through integrated BI platforms. From there, layering AI for automation and predictive analytics can lead to faster, smarter decisions. The final step in this evolution will be to collaborate with ecosystem partners to securely share data, creating a more sustainable and resilient supply chain model.

While these developments may still seem aspirational, the pace of innovation in IoT and smart label technologies suggests that a more connected, intelligent logistics infrastructure is not only possible – it’s well on its way.

1  https://www.abiresearch.com/market-research/product/7780042-smart-labels-technologies-andmarket-oppor

Roundtable Event: AI at the Edge –How AI will Drive Industrial Technology in 2026 & Beyond

Developing, Deploying, and Delivering AI solutions for the Edge has been the trend and best practice for all of 2025, and the Industrial, Manufacturing, and Automation fields have been right up front and center, driving the trend forward. But is it only a flash in the pan, or will Industrial AI at the Edge become the Best Practice for efficiency, safety, operational reliability, and downtime avoidance that’s been promised?

Watch On Demand: https://resources.embeddedcomputing.com/EmbeddedComputing-Design/Developing-with-ROS?utm_bmcr_source=cal

The Good Invasion of Robots

Robotics is invading. Cities with robotic delivery services, automotive with self-driving taxis, retail, industrial, and even healthcare are all integrating robotics. It seems that interacting with robots is increasingly becoming the norm, and therefore inevitable for our future.

For example, robots integrated in medical environments may sound tricky, but they’ve actually become quite normal. Not just robots that move around hospitals delivering items from one place to the next, but robots that are specialized to perform specific surgeries.

The Serial Entrepreneur of Robotic Startups

Stéphane Lavallée was introduced to the world of surgical robotics in 1986. While earning his PhD at Grenoble University in France in 1989, Lavallée was tasked with working alongside a robot that would assist him in placing electrodes on the brain of a patient suffering from tremors.

“It was my PhD thesis with developing the first robot used in clinical routine for deep brain stimulation, so for neurosurgery,” said Lavallée, who described this surgery as the catalyst of his career with surgical robots.

The patient was awake during the brain surgery, and while placing the electrodes, Stéphane and the other professors saw the tremors cease in real-time. This research marked the development of the first robotic system linked to intraoperative x-ray imaging used in clinical routine in neurosurgery.

“We did it in three years, we developed a complex system with a robot based on some X-ray imaging to define targets in the deep brain, and we could use it successfully on many patients to implant, to do biopsies, but also to implant electronics,” said Lavallée.

Today, Lavallée owns more than 400 patents and has continued his work in digital surgery and robotics. He created his first startup in 1998 and refers to himself as a “serial entrepreneur of startups.” He’s also the president of MinMaxMedical, a French company focused on the research, development, and manufacturing of critical robotics, navigation, and medical-grade components.

“We created about 20 companies; all are specialized in different applications and technologies of digital surgery and robotics. It’s a little bit more than robotics. It can be sometimes navigation, augmented reality, surgical planning, etc.,” he said.

“And today I’m very proud, because in the south of France, we have now created, with all the companies I co-founded, with my partners, more than 700 jobs from scratch,” he continued. “We are running very fast with a powerful ecosystem with companies like MinMaxMedical which develop technologies in collaboration with QNX for everything related to the safety of robotics.”

AI and New Digital Tools Reshape the Operating Room

Lavallée defines two categories of surgical robotics. The first is where robots assist in surgery, for example, by augmenting the dexterity of the surgeon. The second is where robots are specially designed to perform a specific surgery – the category to which Lavallée’s work belongs.

“Patients want robots. Surgeons want robots. That’s very clear. Hospitals want robots also,” he said. “I was considered as crazy. We were, as a team, considered as crazy. ‘You know what those guys, they want to use a robot on, on patients. You can’t imagine that they’re going to replace surgeons. Really? Are they serious? That’s not possible.’”

But from a healthcare perspective, he believes there are existing technologies in the medical sector still to be improved upon. He affirms that the industry is on track when it comes to this category of specialized surgical robots, while also noting that there’s still a lot of potential and a lot left to accomplish.

“Tracking is when you put sensors, trackers, on the robot, on the patient, on the instruments, everywhere. You put those little things everywhere. They are basically the eyes of the robot. They give you the information in real time. Today we use a lot of optical tracking, which is a little bit painful because it has some issues with the line of sight,” said Lavallée.

He also mentions surgical planning as another area that can improve, “because we need to define for each patient what is the best surgery? Should we do surgery first? That’s the first question. We need to help the surgeon to decide. Then we need, if he decides to go with surgery, what is the best strategy for this patient?”

According to Lavallée, AI advancements will also continue to be beneficial in this space. He mentions that what may have been the standard 10 years ago is not the same today, and that AI is needed for what he calls the “big loop,” where data is taken from the patient to the doctors to better create and understand post-operative measurements.

“That’s really the key we need,” Lavallée said. “We need to close the loop in order to feed the AI system … in order to define what will be for this patient, the best surgical planning.”

Earning Trust in Robotic-Assisted Procedures

Trust in a critical and invasive industry like medicine is paramount. According to Jim Hirsch, VP of the North American and EMEA general embedded market (GEM) at QNX, the real challenge is regulatory – meeting FDA requirements, for example, and all certification standards. Compared to other areas where robotics is getting involved, he states that medical is a main driver. But of course, everything goes back to trust.

Robotics in the medical space differs from robotics in industrial settings, where machines are not nearly as invasive. Robots in factory and warehouse facilities traditionally don’t interact directly with humans; most work in caged environments, according to Hirsch.

“Whereas in the medical device industry, if you’re leveraging a robot, whether it’s scanning an individual for radiological reasons, or whether you’re actually doing invasive surgery with a surgical robot, obviously, we need to make sure the standards are a lot higher and that the patients are in a safe situation, so that they cannot be harmed,” says Hirsch.

Drilling further into the challenge of trust is connectivity. High-performance embedded solutions providers, like QNX, are tasked with ensuring that connectivity is reliable, and that no unpredictable network conditions occur during these potentially high-risk procedures.

Hirsch notes that latency is a key consideration for QNX’s real-time operating systems, especially in remote locations that rely on wireless connectivity. QNX’s product portfolio includes its QNX OS for Safety (QOS) and the QNX Hypervisor for Safety, which are products used by surgical robotics manufacturers.

“We’re a large player in the medical device space, whether it’s for surgical robotics or any robotics within any medical device… we’ve been around for 45 years, and we’ve developed a deterministic, a real-time operating system for mission-critical applications, one of which is surgical and non-surgical robotics,” said Hirsch. “So, it’s a natural fit for us to be in these devices, and it’s a natural fit for customers to actually reach out and actually leverage our solutions, instead of having to try and leverage solutions out there.”

He noted that QNX is hands-off when it comes to the onboarding and training process of getting a hospital started with its software technologies; the real experts, he says, are its customers. Instead, QNX does what it can to ensure products meet the expectations and specifications of the healthcare professionals.

When asked where he sees robotic surgery going in the next 5 to 10 years, Stéphane Lavallée says that “one day there will be robots or digital surgery, at least for every specialty of surgery… it’s an invasion. It’s a good invasion of robots.”

QUESTION: What is the embedded trend you think is most likely to become a best practice for engineers and developers in 2026?

Why Designing Embedded Systems for Change Will Be a Best Practice in 2026

The embedded trend most likely to become a best practice in 2026 is designing systems to be ready for change. A lot of what we are dealing with now is that the pace of technology, especially at the edge, is not slowing down. So, if we build something that is locked to a single moment in time, it does not take long before we are forced into redesign, and that is where risk shows up.

AI is a big driver of that. When we make an edge device that has high performance AI built in, there is no way around the fact that it needs more power and it generates heat. But even before we get to power and thermal, we are dealing with the reality that the computing platforms, accelerators, and the software and model side are moving fast. Obsolescence is not a 5- or 10-year problem anymore. If we put an AI system or accelerator in place, it could be obsolete in a matter of months. Anything we design now is probably going to be completely obsolete in 5 years. That does not mean it is unusable, but it means we have to design for the fact that we will be asked to change it.

Another part of this that we are running into is availability and supportability. Processor, GPU, and memory choices matter, and the amount of RAM the models use right now is significant. There has been a real shift in the market from DDR4 to DDR5, and suppliers have moved quickly. When that happens, it puts strain on older systems that rely on DDR4, and we are forced to react. That is a practical constraint that affects what we can build, what we can sustain, and what we can support over time. So, in 2026, I think more teams will treat availability and lifecycle as part of architecture, not something to be figured out after the design is done.

From a system design perspective, modularity is a significant part of the answer. It is a practical way to reduce future risk. When we have a computer-on-module core, the processor and the RAM are a tightly controlled system, and we can start with a stable core BSP where the operating system and drivers are reusable. That gives us a stable base of computing, and then we build the rest of the system around that. If we have to change something later, we are not starting from scratch every time.

On the flip side, all that extra performance is still physical. These systems are dynamic now. They take extremely high amounts of current for really short amounts of time, and that produces an extremely large amount of heat in a very small area of silicon. So, we have got to move those small punches of heat away quickly. At the same time, all things equal, we still try to eliminate moving parts because they are usually the first failure point. That makes thermal design more challenging as performance keeps going up, and it is another reason we need margin and flexibility built in from the beginning.

“IT IS THE DISCIPLINE OF DESIGNING SYSTEMS THAT CAN EVOLVE.”

For aerospace, defense, and industrial environments, the cost of failure and the cost of field repair is high. Reliability matters, but reliability isn’t only about surviving temperature, shock, and vibration. It’s also about surviving change. If requirements change, or components change, the system needs a way forward without starting over every time.

So, when I think about best practice in 2026, it is less about a single interface or a single part. It is the discipline of designing systems that can evolve. We balance cost, functionality, delivery, and packaging and we do it knowing we will be asked to change something later. If we plan for that up front, we reduce risk later.

Jeff Baldwin serves as Director of Engineering at Sealevel Systems, Inc., where he leads the design and development of mission-critical embedded computing and I/O solutions. His experience spans rugged system architecture, thermal and power optimization, and integration across defense, aerospace, and industrial applications. Jeff’s work focuses on building platforms that combine long-term reliability with the adaptability required for emerging AI and edge computing workloads.

QUESTION: What is the embedded trend you think is most likely to become a best practice for engineers and developers in 2026?

How Partnerships Will Power the Future of Embedded Engineering

The real transformation in embedded engineering isn’t technological; it’s organizational, defined by how we work and who we rely on for the most critical system components. Increasingly, teams are recognizing that success comes from adopting a foundational, platform-based software strategy and from the strength of the partnerships they build around it.

As embedded devices grow to be software-defined, higher-performance, and highly connected, the room for innovation and differentiation is bigger than ever. We’re no longer just building systems that control hardware; we’re blending real-time performance, security, safety, cloud integration, edge intelligence, and more into a single experience. That kind of integration adds complexity, but it also represents a significant opportunity when the software stack is built on a trusted, proven foundational software platform and reinforced by deep collaboration and strong, long-term partnerships.

Instead of trying to master every layer of the stack internally, many teams are adopting more of an ecosystem mindset. They’re choosing to work closely with specialized partners who bring deep expertise in their domains. This lets their engineering teams stay focused on what they do best: building products that stand out. Foundational software products and technologies come from trusted partners like QNX, who focus entirely on making those base foundational layers rock-solid and production ready, with products such as the QNX Operating System (OS), QNX Hypervisor and the recently announced Alloy Kore foundational vehicle software platform. This approach allows product teams to focus on differentiation, innovation, and exceptional user experiences.

In 2026, as the demands on embedded systems continue to grow across safety, security, reliability, and performance, success will favor teams that take more of a platform approach and that resist the urge to build everything themselves. The leaders will be those who build on proven foundational software platforms, form strong partnerships, integrate the right technologies, and embed collaboration into their engineering culture. Partnerships are no longer optional; they are rapidly becoming the standard for modern embedded engineering.

“AS EMBEDDED DEVICES GROW TO BE SOFTWARE-DEFINED, HIGHER PERFORMANCE AND HIGHLY CONNECTED, THE ROOM FOR INNOVATION AND DIFFERENTIATION IS BIGGER THAN EVER. WE’RE NO LONGER JUST BUILDING SYSTEMS THAT CONTROL HARDWARE; WE’RE BLENDING REAL-TIME PERFORMANCE, SECURITY, SAFETY, CLOUD INTEGRATION, EDGE INTELLIGENCE, AND MORE INTO A SINGLE EXPERIENCE.”

2026 RESOURCE GUIDE

The 2026 Embedded Computing Design Resource Guide showcases solutions for developers of industrial controls, edge computing, autonomous machines, and more.

EMBEDDED SOFTWARE

INDUSTRIAL

UDE® – Multicore Debugger for MCUs / Embedded MPUs

UDE® Universal Debug Engine for Multicore Debugging is a powerful development platform for debugging, testing and system analysis of microcontroller software applications. UDE® enables efficient and convenient control and monitoring of multiple cores for a wide range of multicore architectures within a single common user interface. This makes it easy to manage even complex heterogeneous multicore systems.

UDE® supports a variety of microcontrollers, multicore SoCs, and embedded multicore processors including Infineon AURIX and TRAVEO, NXP S32 Automotive Platform, STMicroelectronics Stellar and STM32, Renesas RH850 and R-Car, Synopsys ARC, RISC-V and others.

The UAD2pro, UAD2next and UAD3+ devices of the Universal Access Device family provide the hardware basis for the powerful functions of UDE® and enable efficient and robust communication with the supported architectures and controllers. With its flexible adapter concept, the Universal Access Device family supports all commonly used debug interfaces.

FEATURES

› Debugging of homogeneous and heterogeneous 32-bit and 64-bit multicore MCUs and embedded MPUs

› Synchronous break, single step and restart for multicore systems

› One common debug session for complete system / all cores

› Convenient debugger user interfaces supporting multi-screen operation and perspectives

› Support for special cores including GTM, HSM, ICU, PPU, SCR

› Software API for tool automation and scripting

› AUTOSAR support and RTOS awareness

UDE® – Trace and Test for MCUs / Embedded MPUs

The UDE® Universal Debug Engine is the perfect tool for runtime analysis and testing of embedded multicore applications. With support for on-chip tracing, it offers comprehensive features for non-intrusive debugging, runtime observation, and runtime measurement. This helps developers to investigate, e.g., timing problems or misbehavior caused by parallel execution.

Used in conjunction with the UAD2next and UAD3+ devices from the Universal Access Device family, the UDE® enables trace data to be captured from various trace sources and via external trace interfaces. Trace modules for the UAD2next or trace pods for the UAD3+ are provided for this purpose.

UDE®'s debugging and tracing capabilities, coupled with a flexible and open API for scripting and integrating with third-party tools, make UDE® an ideal choice for automated testing on real target hardware. During the execution of automated tests, UDE® can also determine the Code Coverage to validate the quality of the test cases that are being used.

FEATURES

› Multicore trace support for various on-chip trace systems (incl. MCDS/miniMCDS for Infineon AURIX / TriCore, IEEE-ISTO 5001 Nexus for NXP MPC5xxx, STMicroelectronics SPC5, Arm CoreSight for Arm Cortex A/R/M based devices, Renesas RH850 on-chip trace)

› Visualization and analysis of recorded trace information (execution sequence chart, call graph visualization, profiling)

› Trace-based, non-intrusive statement coverage (C0 coverage) and branch coverage (C1 coverage) for proving test quality

› UDE SimplyTrace® function for easy and context-sensitive trace configuration

› Open and flexible software API for scripting and test automation

info@pls-mc.com

www.linkedin.com/company/pls-mc

ADLINK STC2-MTL: Leading the Edge AI Revolution

ADLINK, a leader in edge computing, redefines industrial HMI with the STC2-MTL. Powered by Intel® Core™ Ultra (Meteor Lake), this AIO Panel PC addresses the growing demands of generative AI and complex IoT workloads.

The STC2-MTL features a unified architecture combining a CPU, GPU, and a dedicated NPU. This enables energy-efficient, real-time inference and cloud-independent decision-making, transforming the device into an intelligent edge hub.

Industrial-Grade Reliability: Built for harsh environments, the system features an IP65-rated front panel and high-quality AUO displays offering 500-nit brightness and 10-point PCAP touch screen with 50,000-hr backlight life. Its rugged design and EMC pre-compliance testing ensure 24/7 stability against dust, water, and electrical interference.

Flexible Deployment & Security: With PoE support, the STC2-MTL simplifies installation in kiosks and warehouses by delivering power and data through a single LAN port. The TPM 2.0 and robust optional connectivity (Wi-Fi/Bluetooth) provide a secure, future-proof foundation for next-generation industrial applications.

FEATURES

Ą Intel® Core™ Ultra H series (Meteor Lake) processor

Ą 10.1"/15.6"/21.5" 10-point PCAP touch screen with 50,000-hr backlight life

Ą Full customization optimized for rapid deployment within high-performance all-in-one panel PC applications

Ą Wi-Fi/BT module support ensures seamless data transfer and industrial flexibility

Ą Front panel IP65 rating for waterproof and dustproof protection

Ą Ease of installation with optional VESA mount

Ą Supports PoE (on LAN2) and TPM 2.0 for streamlined, secure deployment (Optional)

https://www.adlinktech.com/products/industrial-display-panel-pcs/all-in-one-panel-pcs/stc2-mtl_series?lang=en

ADLINK Technology Inc. www.adlinktech.com/

ev.mkt@adlinktech.com

886-3-216-5088

www.linkedin.com/company/adlink-technology @ADLINK_IoT
