Predictive Engineering Insights: Decision Support for Wind Farm Maintenance

ML-powered predictive maintenance platform enabling proactive intervention for renewable energy infrastructure

Project Details

Role
UX Engineer (UX Design + UI Engineering)
Client
SAP + Fedem
Collaboration
SAP Platform team · Fedem engineers · Wind farm operators · NPD team
R&D Layer
“Translation” between engineering capability and operational meaning
Data
Real operational data from North Sea & Norwegian Sea offshore wind farms
Year
2018-2019
Outcome
Board approval: productization as SAP Predictive Engineering Insights

Overview

Worked as UX Engineer with SAP's Platform & Innovations team, designing decision support workflows for predictive maintenance in renewable energy infrastructure.

Collaborated with Fedem — a physics simulation company with deep operational relationships in the wind energy sector. Fedem brought domain expertise, real operational data from Norwegian and North Sea wind farms, and direct access to maintenance technicians and industry experts.

Problem

Wind turbine operators face critical maintenance challenges:

  • Structural failures cause extended downtime and lost energy production
  • Maintenance relies on scheduled inspections and reactive repairs
  • Physics models can predict failure but aren't operationally integrated
  • Field technicians need actionable intelligence, not raw simulation data

The opportunity: combine Fedem's physics simulation expertise with SAP's platform capability to make predictive maintenance operationally useful.

The Collaboration Model

Fedem brought physics-based structural simulation technology, operational relationships with North Sea wind farm operators, domain expertise in wind turbine structural engineering, and real-world data — wind/wave deflection, material stress, failure patterns.

SAP brought enterprise platform capability, UX/UI design and engineering, product development and commercialization.

This collaboration model: platform capability + domain expertise + operational validation.

The Work

Designed and implemented decision support workflows combining physics simulation, operational data, and field inspection interfaces.

The core design challenge was not whether we could predict failure; Fedem's models could do that. The question was: how do you surface the right information at the right moment for a structural analysis engineer assessing the integrity of a wind turbine's components, interpreting fatigue accumulation over time, and deciding when structural stress crosses the threshold for intervention?

This required understanding how analysis engineers actually work — what they look for, how they read structural load data, what level of detail they need to make a confident assessment, and how to make Fedem's physics simulation legible as an operational decision-support tool.

Design Approach

Virtual Sensor Placement

Technicians could place virtual sensors on structural components: shaft segments, bolt connections, blade attachments. Each sensor tracked stress accumulation over the asset's operational lifetime using Fedem's physics models combined with real environmental data.
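As a rough sketch, a virtual sensor can be thought of as a per-component record of stress samples accumulated over time. The class, field names, and numbers below are illustrative only, not the product's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualSensor:
    """Hypothetical virtual sensor attached to one structural component."""
    component: str                                       # e.g. "shaft_segment_3"
    stress_history: list = field(default_factory=list)   # stress samples in MPa

    def record(self, stress_mpa: float) -> None:
        """Append one simulated stress sample for this component."""
        self.stress_history.append(stress_mpa)

    def peak_stress(self) -> float:
        """Highest stress seen so far (0.0 if no samples yet)."""
        return max(self.stress_history, default=0.0)

# Example: a sensor placed on a shaft segment, fed three samples
sensor = VirtualSensor(component="shaft_segment_3")
for s in [180.0, 210.5, 195.2]:
    sensor.record(s)
print(sensor.peak_stress())  # 210.5
```

In the real system the samples came from Fedem's physics models driven by environmental data, not hand-entered values; the point here is only the shape of the record each sensor maintains.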

Progressive Disclosure

Information revealed in sync with the engineering assessment workflow:

  1. Fleet overview — which assets show elevated structural stress
  2. Asset drill-down — stress patterns per component over time
  3. Fatigue analysis — accumulated load relative to material tolerances
  4. Intervention signal — when structural state warrants action

The sequence matched how analysis engineers actually work — from broad fleet monitoring down to specific structural assessment.
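The roll-up from component-level fatigue to a fleet overview (steps 1 and 4 above) can be sketched roughly as follows. The damage fractions, threshold, and turbine names are invented for illustration; the real system derived these from Fedem's simulations:

```python
# Hypothetical per-component fatigue-life fractions (1.0 = life consumed)
fleet = {
    "turbine_A": {"shaft": 0.42, "blade_root": 0.18},
    "turbine_B": {"shaft": 0.87, "blade_root": 0.31},
}

THRESHOLD = 0.8  # assumed fraction of fatigue life that warrants attention

def elevated_assets(fleet: dict, threshold: float) -> list:
    """Fleet overview: assets whose worst component crosses the threshold."""
    return [asset for asset, comps in fleet.items()
            if max(comps.values()) >= threshold]

print(elevated_assets(fleet, THRESHOLD))  # ['turbine_B']
```

The drill-down steps in between (per-component stress patterns, fatigue relative to material tolerances) operate on the same underlying records at finer resolution.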

Time-Based Visualization

A 3D model animation stepped through stress accumulation over months and years. Engineers could scrub through operational history to understand load patterns — not just see current state. This addressed a real need: understanding whether current stress is anomalous or part of normal fatigue accumulation.

Predict vs. Prescribe

The system surfaced structural risk. It did not dictate maintenance decisions. Analysis engineers retained professional judgment. They interpret model outputs in the context of operational factors the simulation cannot know: asset history, adjacent failure risks, inspection cycles, and tolerances specific to each installation. The system gives engineers better inputs. It does not replace their expertise.


Outcome

Impact

  • Approved (board pitch): proof-of-concept presented to the SAP board; PEI approved for productisation
  • Acquired (Fedem → SAP): Fedem was acquired by SAP during or shortly after this work, enabling full commercialisation
  • Live (SAP Predictive Engineering Insights): launched commercially and now used across industries for asset maintenance

Reflections

This was foundational experience designing ML-augmented decision support for operational environments. The questions it raised recur across contexts:

  • When should systems recommend vs. inform?
  • How do you build trust in predictions users can't directly verify?
  • What's the right balance between automation and expertise?
  • How do you make complex models useful for people doing real work?

These questions surface in agricultural risk intelligence, zero-waste operations, regenerative agriculture platforms: anywhere you're trying to make sophisticated models operationally useful. Designing for operational context requires understanding the work, not just the data.

Several durable patterns emerged:

Match the grain of the decision, not the grain of the data.
Models produce continuous outputs. Decisions are discrete. The interface needs to translate one into the other — surfacing the right level of resolution for the decision at hand, not everything the model knows.
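As a minimal illustration of that translation, a continuous damage estimate might be banded into the discrete vocabulary of the decision. The cut-offs and band names here are invented, not the product's actual logic:

```python
def decision_band(damage_fraction: float) -> str:
    """Map a continuous fatigue-life fraction to a discrete action band.
    Thresholds are illustrative assumptions, not real engineering limits."""
    if damage_fraction < 0.5:
        return "monitor"
    if damage_fraction < 0.8:
        return "schedule inspection"
    return "intervene"

print(decision_band(0.87))  # intervene
```

The interface work is in choosing those bands with the engineers so they match how the decision is actually made, then keeping the continuous detail one click away for validation.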

Design for the moment of doubt, not the moment of confidence.
Engineers trust their own judgment when things look normal. The system earns its value at the edges — when something looks anomalous, when patterns are ambiguous, when the model and intuition diverge. That's where transparency in predictions matters most.

Preserve the human in the loop: structurally, not just rhetorically.
It's easy to say "the human makes the final call." It's harder to design a system that genuinely leaves room for that. If the interface frames predictions as conclusions rather than inputs, override becomes friction. The design should make expert judgment feel natural, not like a workaround.

Build trust incrementally through legibility.
Users can't trust what they can't interrogate. Showing not just what the model predicts but why (which sensors, which load patterns, over what timeframe) lets engineers validate outputs against their own domain knowledge. Trust follows legibility.

Context the model can't see is still real.
No simulation captures the full operational picture. Inspection history, recent repairs, adjacent component states, upcoming weather windows — this knowledge lives with the engineer. The interface should make it easy to bring that context to bear on model outputs, not suppress it.


Working on this project shaped how I think about AI-augmented tools more broadly: not as systems that automate decisions, but as systems that improve the quality of human ones. The design challenge is rarely the model. It's the interface between what the model knows and what the person needs to do.