Short Answer
Capgemini Engineering's Global AI Lab has run real AI programs in aerospace, automotive, and industrial manufacturing since 1998. The consistent finding: AI adoption fails when it is treated as an IT initiative rather than a business transformation. The success cases combine proprietary data, edge deployment, and human-AI collaboration models where the AI handles pattern recognition and the human handles decisions that require accountability. Multimodal AI — combining CAD geometry, specification text, images, and structured data — is the most underused high-value capability in engineering today.
- Capgemini's Global AI Lab has run manufacturing AI programs since 1998 — before LLMs existed
- In 2019, the lab was already using GPT-2 to analyze engineering blueprints with embedded spec tables
- Edge AI (on-device, low-latency) is critical for real-time manufacturing applications — cloud AI is too slow for shop floor use
- Multimodal AI (text + images + CAD + specifications in one query) is the most underutilized capability in engineering
- Your proprietary manufacturing data is your competitive moat — LLMs trained on public data give every company the same capability
- Successful AI adoption requires an automation strategy, not just an AI strategy
- The biggest adoption failure: companies that treat AI as a feature to add, not a workflow to redesign
Company Profile
Capgemini Engineering is one of the world's largest engineering services firms, operating across aerospace, automotive, defense, industrial equipment, and energy. It is not a PLM vendor. It is the firm that implements, integrates, and increasingly transforms engineering workflows for the companies that use PLM.
Dr. Bob Engels leads Capgemini Engineering's Global AI Lab — a cross-business, cross-region, cross-sector function that sits above individual vertical practices. The lab's mandate: understand what AI technology is capable of, translate that into what businesses actually need, and close the gap between the two.
Engels has been doing this since 1998. That history is relevant. He has lived through every AI cycle — expert systems, fuzzy logic, neural networks, deep learning, and now LLMs — and has deployed real systems in production manufacturing environments at every stage. His perspective is not that of a practitioner who discovered AI in 2023. It is that of someone who has watched the field cycle through hype and disappointment three times before arriving at the current moment.
The Challenge: AI That Survives Contact With Manufacturing Reality
Manufacturing has specific properties that make AI harder than other industries:
Determinism requirements. If you build two wings of the same aircraft, they need to be identical within tight tolerances. Manufacturing cannot tolerate the probabilistic error that LLMs carry by design. When Engels joined Capgemini and the AI lab expanded beyond the Nordics, one of the first client requests was blueprint analysis for large construction projects — engineering drawings, specification tables, license data — where the AI had to extract structured information correctly, every time.
Proprietary data scarcity. Unlike consumer AI applications, where training data is abundant, manufacturing knowledge is locked inside companies. Process parameters, material characterizations, failure mode histories, quality data — these are not on the internet. A foundational model trained on public data has no idea how your specific production line behaves.
Edge deployment constraints. Real-time manufacturing applications — quality inspection on a production line, CNC parameter optimization during machining, anomaly detection in assembly — cannot tolerate the latency of a cloud API call. AI needs to run on the device, at the machine, with no network dependency.
Legacy system integration. Most manufacturers are not running modern cloud-native infrastructure. They are running systems that predate the iPhone. Any AI solution that requires a clean modern data pipeline first will never be deployed.
These are not theoretical constraints. They are the reasons most manufacturing AI projects fail in pilot and never reach production.
What Capgemini's AI Lab Discovered
The Blueprint Analysis Problem (2019)
Long before the ChatGPT moment, Capgemini's Nordic AI lab was already deploying language models for engineering document analysis. A client with large complex construction projects needed to analyze blueprints — not the geometry, but the specification tables embedded in engineering drawings. Part numbers, material codes, tolerance values, supplier references — all the structured data that lives in tables inside PDF blueprints.
The solution: fine-tuned GPT-2. This was 2019, pre-ChatGPT, and the team had one engineer who knew how to fine-tune language models. They trained on the client's proprietary document library, and it worked. Extraction accuracy was high enough for production use. The lesson learned: proprietary fine-tuning on domain-specific documents outperforms general models on specialized tasks, even when the general model is much larger.
This is a pattern Capgemini has seen repeat: a foundation model plus your proprietary data beats a bigger foundation model with generic knowledge. Your data is the moat.
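The downstream half of such an extraction pipeline is easy to sketch. Assuming a hypothetical line format the fine-tuned model is prompted to emit (the field names and format below are invented for illustration), extracted rows can be parsed into typed records, with malformed lines dropped rather than guessed at — misparsed specs being worse than gaps:

```python
import re
from dataclasses import dataclass

@dataclass
class SpecRow:
    part_number: str
    material_code: str
    tolerance_mm: float

# Hypothetical line format the fine-tuned model is prompted to emit:
#   PART=<id> | MAT=<code> | TOL=<mm>
LINE = re.compile(r"PART=(\S+) \| MAT=(\S+) \| TOL=([0-9.]+)")

def parse_extraction(model_output: str) -> list[SpecRow]:
    """Parse model output into typed rows, skipping malformed lines."""
    rows = []
    for line in model_output.splitlines():
        m = LINE.fullmatch(line.strip())
        if m:
            rows.append(SpecRow(m.group(1), m.group(2), float(m.group(3))))
    return rows

output = (
    "PART=A-1042 | MAT=AL7075 | TOL=0.05\n"
    "garbled line the model hallucinated\n"
    "PART=B-2210 | MAT=TI64 | TOL=0.10"
)
rows = parse_extraction(output)
```

Validation of this kind is what makes "high enough for production use" measurable: every row either conforms to the schema or is flagged for a human.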
Edge AI in Aerospace Quality Control
For real-time manufacturing applications, cloud AI is architecturally inappropriate. The latency of a round-trip to a cloud API — even at 100ms — is too slow for in-line quality inspection at production speed. More fundamentally, sending proprietary manufacturing data to a cloud service raises IP and data sovereignty issues that most aerospace and defense customers will not accept.
Capgemini's approach for these applications is edge deployment: AI models that run on hardware at the machine or on the shop floor, without network dependency. The trade-off is model size — you cannot run a 70-billion-parameter model on a shop floor GPU. But for specific pattern-recognition tasks (defect classification, measurement anomaly detection, assembly verification), smaller specialized models outperform general models, and edge deployment removes the latency and data sovereignty problems simultaneously.
The implication for PLM integration: quality data generated at the edge needs to flow back into the PLM system in near-real-time. This is a data architecture problem as much as an AI problem, and it is one of the most common gaps Capgemini finds in manufacturing AI programs.
The Multimodal Gap
Engels identified multimodal AI — the ability to process text, images, CAD geometry, specification documents, and audio in a single query — as the most underutilized high-value capability in engineering.
The engineering world is already multimodal. An engineer reviewing a quality nonconformance might have: a photo of the defect, the 3D CAD model of the part, the manufacturing specification PDF, the work instruction text, and the measurement data from the CMM report. Historically, these lived in different systems and the engineer assembled the picture manually. A multimodal AI model can process all of them together.
The practical application Capgemini demonstrated: taking a product description document — plain prose plus rough sketches — and having an AI system generate an initial 3D CAD model. The output is not production-ready. But "not perfect" is not the right benchmark. The benchmark is whether it saves days of work. It does. The engineer starts from an AI-generated approximation rather than a blank workspace, and the time saved is substantial even when significant manual refinement follows.
Knowledge Graphs as LLM Guardrails
The hallucination problem in manufacturing AI is not theoretical. An AI system that confidently generates incorrect torque specifications, wrong material grades, or faulty process parameters is worse than no AI — it creates false confidence in bad outputs.
Engels' team has returned to a technique from early AI history — knowledge graphs and crisp logic constraints — as a way to keep LLMs on the rails. The approach: use the LLM for its strength (language understanding, document synthesis, pattern recognition) while constraining its outputs against a verified knowledge graph that encodes the actual engineering rules.
This is AI going "full circle," as Engels describes it. Expert systems of the 1980s were deterministic but brittle. LLMs are flexible but probabilistic. The combination — LLM capabilities bounded by deterministic constraints — produces systems that are both flexible and trustworthy enough for regulated manufacturing contexts.
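The pattern is simple to sketch: the LLM proposes, the knowledge graph disposes. In the sketch below the "graph" is a plain dict of invented torque rules rather than a real knowledge graph, but the refusal logic is the essence of the guardrail:

```python
# Verified engineering rules the LLM's output is checked against.
# Part names and torque ranges are invented for the example.
TORQUE_SPEC_NM = {
    "M8-flange-bolt": (20.0, 25.0),
    "M10-engine-mount": (45.0, 55.0),
}

def guard_torque(part: str, llm_value_nm: float):
    """Accept an LLM-proposed torque only if the rule base both knows
    the part and confirms the value; otherwise refuse rather than
    pass through a confident hallucination."""
    bounds = TORQUE_SPEC_NM.get(part)
    if bounds is None:
        return (None, "unknown part: escalate to engineer")
    lo, hi = bounds
    if lo <= llm_value_nm <= hi:
        return (llm_value_nm, "verified against knowledge base")
    return (None, f"out of spec ({lo}-{hi} Nm): rejected")

value, status = guard_torque("M8-flange-bolt", 22.5)  # accepted
bad, reason = guard_torque("M8-flange-bolt", 80.0)    # rejected
```

The asymmetry is deliberate: a refused answer costs an engineer a lookup; an accepted wrong answer costs far more.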
Business Impact
Capgemini's AI programs in manufacturing have produced measurable outcomes across several dimensions:
Engineering specification extraction: Blueprint analysis workflows that previously required 2–4 hours of manual data entry per document are automated at >95% accuracy. For programs managing thousands of engineering documents, this eliminates a category of work that previously required dedicated analyst headcount.
Quality inspection acceleration: Edge AI quality inspection systems have reduced inspection cycle times by 40–60% in automotive body panel and aerospace composite manufacturing applications, while improving defect detection rates relative to manual visual inspection.
CAD generation from specification: Initial CAD model generation from written specifications reduces the time from product brief to first 3D review from days to hours. The output requires engineer refinement, but the compression of the concept phase is significant.
Predictive Quality Insight System (PQIS): Capgemini's enterprise PQIS framework, combining manufacturing sensor data with quality records and design history, has identified failure mode precursors in automotive production lines that enabled preventive intervention before defects reached production. The system analyzes correlations across data sources that no individual engineer could monitor manually.
Lessons Learned
1. AI adoption requires an automation strategy, not just an AI strategy. Companies that start with "what AI can we add" fail more often than companies that start with "what workflows can we redesign." The AI is the implementation; the automation strategy is the brief.
2. Proprietary data is the only real moat. Every manufacturer who deploys GPT-4 gets the same GPT-4. The differentiation comes from what you train it on. Your manufacturing data — process parameters, failure histories, quality records, tribal knowledge — is the asset.
3. The bimodal generational problem is real. Manufacturers run on two populations: engineers who have worked with the same systems for 20 years and have deep tribal knowledge, and engineers who grew up with APIs and expect tools to talk to everything. AI adoption strategies have to work for both. Force-fitting either group into the other's mental model fails.
4. Start with analysis, not generation. The mature AI applications in manufacturing are in analysis — document extraction, anomaly detection, pattern recognition — not in generation. Generation (CAD from spec, work instruction from process description) is valuable but higher risk. Deploy analysis first, build trust, then expand to generation.
5. Edge before cloud for real-time applications. If your application requires a response in under 500ms, design for edge. Cloud AI is the right answer for batch analysis, document processing, and planning workflows. It is the wrong answer for in-line quality inspection and real-time machine control.
Implementation Advice
For manufacturers evaluating enterprise AI programs: the most important decision is not which model or vendor to choose. It is how to instrument your operations to capture proprietary training data. Companies that have invested in sensor networks, quality data capture, and structured document management over the last decade are ahead of those that haven't — not because they are smarter, but because their data assets make AI proportionally more powerful.
For engineering leaders: start with a narrow, well-defined workflow where you can measure before and after. Blueprint extraction is a good first project. Predictive quality is a second. End-to-end AI-driven design is a third. Do them in sequence, not simultaneously.
For PLM teams specifically: the gap between AI and PLM value is usually a data pipeline problem, not an algorithm problem. If your PLM data is clean, structured, and current, AI can deliver value quickly. If your PLM data is stale, inconsistent, or incomplete, the AI will amplify that problem.
About the Source
This case study is drawn from AI Across the Product Lifecycle Episode 1, a podcast conversation with Dr. Bob Engels (Global AI Lab Lead, Capgemini Engineering). See also: [[AI in Manufacturing]], [[Digital Thread]], [[PLM Data Quality]], [[Edge AI in Manufacturing]].
Cite this article
Finocchiaro, Michael. “Capgemini Engineering: What 25 Years of AI Looks Like in Real Manufacturing Programs.” DemystifyingPLM, May 16, 2026, https://www.demystifyingplm.com/case-study-capgemini-engineering-ai-transformation
About the Author
PLM industry analyst · 35+ years at IBM, HP, PTC, Dassault Systèmes
Firsthand knowledge of the evolution from early 3D modeling kernels to today's cloud-native platforms and agentic AI — the history, strategy, and future of PLM.