
Human-Centered AI in Engineering: When the Copilot Is in the CAD Tool

Michael Finocchiaro
Last updated: May 16, 2026

Key Takeaways

  • The arrival of AI copilots makes PLM more important, not less — AI outputs need structured validation, audit trails, and change management that only a functioning PLM infrastructure can provide
  • Knowledge-based engineering and AI copilots are converging — PLM systems that have clean, structured historical data produce better AI suggestions and better AI training datasets
  • The human-in-the-loop requirement is not a limitation to be engineered away; it is a quality and liability control that must be built into AI-assisted engineering workflows from the start
  • Organizations that allow AI-generated design outputs to bypass PLM change management will create auditability gaps that are expensive to close retroactively

Short Answer

AI copilots embedded in engineering tools are generating design suggestions, BOM drafts, and simulation setups that previously required significant engineer time — but the PLM consequence is that every AI-assisted output needs a human validation workflow, making change management and audit trails more important, not less.

  • Autodesk AI, Siemens NX AI assistant, and PTC Creo Copilot are in commercial deployment in 2026 — not in demo stages
  • AI-generated design suggestions require change management workflows just as human-authored changes do; PLM must capture the AI's role in the change history
  • AI-generated BOMs introduce a new category of human review burden — engineers must validate outputs they did not author, against requirements they must surface from PLM
  • Engineering knowledge encoded in historical designs, simulation results, and change histories is becoming training data — with governance implications most organizations have not addressed
  • Explainability is not optional for regulated industries; an AI design suggestion that an engineer cannot explain to a regulator cannot be approved

The debate about AI in engineering spent most of the last decade in the future tense. Generative design was introduced, piloted, and admired in conference presentations. Simulation-driven design was promised as the future of product development. Knowledge-based engineering was held up as the mechanism that would finally let organizations capture and reuse what their best engineers knew. All of these trajectories were real, but none of them moved as fast as the hype. Then, between 2023 and 2025, the large language model revolution arrived in engineering tools — not as a separate application, but embedded inside the tools engineers already used every day. The copilot era in engineering is not coming. It is here, and PLM is not ready for it.

How We Got Here

Engineering software has incorporated AI-adjacent capabilities for years. Topology optimization in finite element analysis tools has automated structural material distribution since the 1990s. Siemens introduced Knowledge Fusion in NX in the early 2000s, allowing engineers to encode design rules that automated routine geometry modifications. PTC's Windchill has offered AI-based part search and classification for a decade. These capabilities were real but narrow — useful within a specific workflow context, invisible outside of it.

The shift that began in 2023 was qualitative, not incremental. Large language models with engineering-domain fine-tuning, combined with tool-native integration, enabled a different interaction mode: the engineer describes what they want in natural language, and the AI translates that intent into tool actions — creating geometry, running analysis, searching the PLM repository for similar historical designs, generating BOM drafts. The capabilities are imperfect and require significant human review. But they are embedded in the tools engineers already use, which means adoption friction is low and usage is organic.

The digital thread concept is deeply relevant here: AI copilots that have access to the full product thread — design history, simulation results, manufacturing data, field performance — are substantially more useful than those working with only the current design file. The quality of the PLM data infrastructure directly determines the quality of the AI copilot's context.

The Current Commercial Landscape

Autodesk AI has been integrated into Fusion 360 and Inventor as a contextual assistant — offering geometry suggestions, automating drawing annotation, and surfacing relevant design standards. Autodesk's Forma product applies AI to early-stage architectural and industrial facility design, optimizing for structural, thermal, and spatial performance constraints simultaneously.

Siemens NX AI assistant was released in commercial form in 2025. It integrates with Teamcenter, allowing engineers to query PLM data in natural language ("find all designs using this bearing type that have had field failures"), generate design alternatives from performance specifications, and automate routine detailing tasks. The Teamcenter integration is the distinguishing capability — NX AI is not just working on the current file, it has access to the full program history in PLM.

PTC Creo Copilot entered commercial availability in late 2025, offering natural language interaction with Creo Parametric for model modification, design reuse search across Windchill, and automated generation of GD&T annotations from design intent. PTC has positioned it explicitly as a Windchill-integrated capability — the copilot's design reuse suggestions are drawn from the Windchill repository, making PLM data quality a direct input to copilot output quality.

Beyond the major CAD vendors, specialized tools have emerged for specific AI-augmented engineering tasks: Ansys SimAI for rapid simulation surrogate modeling, Cognata for autonomous system virtual testing, and several startups building AI layers on top of open PLM platforms.

Use Cases and Business Impact

AI-Accelerated Design Reuse. An aerospace structures team developing a new bracket assembly used NX AI to search the Teamcenter repository for structurally similar assemblies from previous programs. The AI returned 14 candidate assemblies ranked by geometric similarity and material compatibility. The engineer reviewed the top three and adapted the highest-ranked design to the new requirements — a process that previously required a manual search of the Teamcenter archive (typically 3–5 hours for a complex geometry) and was often skipped under schedule pressure. Adapting an existing design reduced the new design's structural analysis time from 2 days to 4 hours, because the simulation model was partially inherited from the reference design. The PLM configuration management record for the new assembly includes an explicit reference to the source design, providing an audit trail for both the design reuse decision and the AI's role in surfacing it.

AI-Assisted BOM Generation. A consumer electronics manufacturer piloted Creo Copilot for automated BOM draft generation from new assembly designs. The copilot, with access to the Windchill component library, proposes a complete BOM draft — including standard fasteners, connectors, and PCB references — from the assembly geometry. Engineers review and correct the draft, typically accepting 80–85% of proposed line items without modification. The time to generate an initial BOM draft fell from 4–6 hours to under 30 minutes. Critically, the organization established a data governance rule that AI-generated BOM drafts require engineer sign-off before any PLM release — the AI generates, the engineer reviews, Windchill records both.

Natural Language PLM Queries for Root Cause Investigation. A medical device manufacturer's quality team uses a Teamcenter AI assistant to query the PLM system in natural language during root cause investigations: "show me all engineering changes to part number X in the last 18 months, including the change descriptions and approvers." Previously this required a trained PLM administrator to construct the query. Now a quality engineer who understands the product but is not a PLM power user can retrieve complete change histories in under 5 minutes. The quality and compliance benefit is direct: faster root cause investigation with more complete historical context.
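For a sense of what the natural-language layer is replacing, the structured equivalent of that query looks roughly like the filter below, sketched against a hypothetical in-memory change history rather than the Teamcenter API. The point is not that the filter is hard to write, but that a quality engineer previously needed someone who knew the schema to write it.

```python
from datetime import datetime, timedelta

# Hypothetical change records; real data would come from the PLM system.
changes = [
    {"part": "X", "date": datetime(2026, 1, 10),
     "desc": "Wall thickness +0.2mm", "approver": "a.lee"},
    {"part": "X", "date": datetime(2024, 3, 5),
     "desc": "Material change", "approver": "b.kim"},
    {"part": "Y", "date": datetime(2026, 2, 1),
     "desc": "Connector swap", "approver": "c.ng"},
]

def changes_for(part: str, months: int, now: datetime) -> list[dict]:
    """All engineering changes to `part` in the last `months` months,
    with descriptions and approvers intact."""
    cutoff = now - timedelta(days=30 * months)
    return [c for c in changes if c["part"] == part and c["date"] >= cutoff]

recent = changes_for("X", months=18, now=datetime(2026, 5, 16))
print([c["desc"] for c in recent])  # only the 2026 change is in the window
```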

Barriers to Adoption

PLM data quality is the ceiling. AI copilots that query PLM for relevant historical designs, similar failure modes, or applicable standards are only as good as the structured data they can access. An organization with inconsistent part classification, incomplete change descriptions, and irregular simulation result storage gets generic or wrong AI suggestions. Investing in PLM data quality is now also an investment in AI copilot effectiveness — a business case connection that did not exist three years ago.

Regulatory uncertainty around AI-generated design outputs. In aerospace (DO-178C, AS9100), medical devices (21 CFR Part 820, ISO 13485), and automotive safety systems (ISO 26262), design decisions must be traceable to verifiable technical rationale. If an engineer approves an AI-generated design modification, they are accountable for that decision — but the regulatory frameworks have not yet specified how AI's role must be documented in the design history. The FDA has provided initial guidance on AI in software medical devices; aerospace and automotive standards are still developing. Organizations in regulated industries are adopting AI copilots with explicit human-in-the-loop documentation policies to stay ahead of the regulatory clarification rather than waiting for it.

IP and training data governance. Most AI engineering tools, in their default configurations, use customer interaction data to improve their models. For manufacturers with significant IP in their design libraries, simulation results, and change histories, allowing an AI vendor's model to train on that data may represent a competitive risk. Enterprise deployment configurations that isolate customer data are available from all major vendors but add cost and reduce some collaborative learning benefits. The governance decision — what can be used as AI training data, what cannot — must be made explicitly and documented in the organization's data governance framework.

Change management workflow design. When an AI copilot suggests a structural modification and the engineer accepts it, what change management event is triggered? Who reviews? What is the approval threshold for an AI-assisted change versus an engineer-authored change? Most organizations that have deployed AI copilots have not yet addressed these questions systematically. The result is AI-assisted changes flowing through the same change management workflow as human-authored changes — which works, but does not capture the AI's role in the change history and does not differentiate review requirements based on the nature of the change.
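One way to answer those questions is to make AI involvement an explicit field on the change record and route review requirements on it. Everything below is a hypothetical policy sketch, not a vendor schema; the thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    ENGINEER = "engineer"          # authored entirely by an engineer
    AI_ASSISTED = "ai_assisted"    # AI proposed, engineer accepted
    AI_GENERATED = "ai_generated"  # AI authored end-to-end

@dataclass
class ChangeRequest:
    change_id: str
    origin: Origin
    safety_critical: bool

def required_reviews(cr: ChangeRequest) -> int:
    """Example routing policy: AI involvement raises the review threshold,
    because the approver is validating output they did not author, and
    safety-critical changes always get an additional reviewer."""
    reviews = 1
    if cr.origin is not Origin.ENGINEER:
        reviews += 1
    if cr.safety_critical:
        reviews += 1
    return reviews

cr = ChangeRequest("ECR-42", Origin.AI_ASSISTED, safety_critical=True)
print(required_reviews(cr))  # 3
```

Recording `origin` on the change request is the smaller of the two wins; the larger one is that the change history now answers "which of our released changes were AI-assisted?" when a regulator or an internal audit asks.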

Adoption Timeline

Phase 1 (Year 1): Pilot with explicit governance. Select one design team and one AI copilot tool. Establish explicit documentation rules for AI-generated outputs before the pilot begins — what gets recorded in PLM, how AI origin is flagged, what human review is required. Measure productivity impact and human review burden honestly; the second number matters as much as the first.

Phase 2 (Years 2–3): PLM data quality investment. Use the pilot's AI query failures as a diagnostic — every case where the AI returned irrelevant results or missed known-good historical data is a PLM data quality deficiency. Invest in the data quality improvements that will expand the AI copilot's effective context. Extend deployment to additional design teams with the governance framework validated in Phase 1.

Phase 3 (Years 3–5): Enterprise-scale AI-PLM integration. Formalize the AI engineering data layer — structured repositories of design patterns, validated simulation results, and change knowledge that AI systems can query reliably. Connect AI copilot outputs to enterprise change management workflows with appropriate review thresholds. Engage with regulatory bodies on documentation standards for AI-assisted design decisions before they become a compliance gap.

Future Outlook

The 2–5 year horizon for AI copilots in engineering is defined by the convergence of three capabilities. First, multimodal AI that works simultaneously with geometry, simulation data, and textual change history — rather than querying each in isolation — will produce contextually richer suggestions and more reliable BOM drafts. Second, agentic AI workflows that can execute multi-step engineering tasks autonomously — running a simulation, interpreting the results, proposing a design modification, and preparing a change request — will shift the human role from executing engineering tasks to reviewing and approving AI-executed sequences. Third, knowledge graph integration with PLM will allow AI systems to reason over the digital thread — understanding not just what changed, but why it changed, what downstream effects were observed, and what that implies for similar decisions in new programs.

The organizations that will be best positioned for this future are those investing now in PLM data quality, change management governance for AI-assisted decisions, and the human expertise needed to review AI outputs intelligently. AI copilots make engineers more productive. They do not make engineering judgment less necessary — they make it more concentrated, applied to a higher-leverage set of decisions. That is a different skill profile than engineering in the pre-AI era, and building it deliberately is the most important investment an engineering organization can make right now.


Cite this article

Finocchiaro, Michael. “Human-Centered AI in Engineering: When the Copilot Is in the CAD Tool.” DemystifyingPLM, May 16, 2026, https://www.demystifyingplm.com/plm-trend-human-ai


Michael Finocchiaro

PLM industry analyst · 35+ years at IBM, HP, PTC, Dassault Systèmes

Firsthand knowledge of the evolution from early 3D modeling kernels to today's cloud-native platforms and agentic AI — the history, strategy, and future of PLM.