Transforming Engineering Workflows: Agentic AI and MCPs Address Daily PLM Challenges in 5 Use Cases

Let's delve into Agentic AI use cases in the context of PLM, with more detail on the pieces and parts, the AI's role, the system interactions, and how the sources suggest dealing with failures. Here are five key PLM-related use cases, integrating the details provided across the sources:

1. Data Quality Enhancement & The "Plumbing" Problem

  • Pieces and Parts / System of Record (SoR) / System of Engagement (SoE): This involves interacting directly with various existing PLM-related systems, which act as the SoRs for product data. These could include traditional PDM systems, ERP systems storing part numbers and supplier info, and even disconnected systems like spreadsheets (Excel) or document repositories (SharePoint). The agents might interact via existing APIs (like Configurable Web Services for PLM data) or require deeper integration into data flows. Users interacting with the data through their familiar tools (CAD, Office apps, PLM interfaces) could be considered SoEs, or the agents could act as a new layer enhancing those SoEs.
  • What the AI Agent Does: Agents scan continuously for data inconsistencies across these disparate systems. They identify bottlenecks and inefficiencies by analyzing data flows. They can translate between different naming conventions used by different departments and infer relationships where explicit links are missing. They can suggest optimized workflows based on actual usage patterns. Agents can provide intelligent assistance for system configuration and setup and automate routine data maintenance tasks. They might create "good enough" translations between systems, flagging areas for human review.
  • AI Action/Behavior: This often involves Analysis, Reasoning, and Action. The agent Analyzes data across systems, Reasons about potential inconsistencies or optimizations, and then Acts by flagging issues, suggesting changes, or automating tasks. This aligns with the ReAct+RAG or Tool-Enhanced agent types described in the sources, using external tools (system APIs, databases) for action and potentially Retrieval Augmented Generation (RAG) to understand context from documentation or standards.
  • Ownership: The sources emphasize human-agent collaboration. While the agent performs the scanning, flagging, and suggesting, ultimate responsibility for data accuracy and system configuration likely remains with data stewards, system administrators, or engineering/IT teams. The agent acts as an assistant or augmenter, identifying issues or performing routine tasks, but human oversight is required, especially for areas flagged for review or complex configurations. The idea is that the human workforce is transformed, with agents handling mundane tasks while humans focus on higher-value work. Escalation protocols can route complex issues to human experts.
  • Debugging: Failures can stem from various issues, including poor data quality itself, misinterpretation of data, or problems connecting to or understanding external systems (Tool Calling Failures). Debugging involves monitoring metrics like Task Completion Rate (did the agent successfully scan all records?), LLM Call Error Rate (were there issues connecting to systems or LLMs?), and Latency per Tool Call (are system integrations slow?). Evaluation tools (like Galileo) can help visualize execution traces to understand where the agent encountered problems, e.g., failing to connect to a system API or misinterpreting data from a specific source. Solutions involve ensuring tools (system connections) have clear parameters and validating tool outputs. Implementing robust error recovery protocols and strict state management helps ensure the agent doesn't get stuck or produce partial results. Continuous evaluation and feedback loops allow for refinement based on performance data. Addressing issues like planning failures (incorrect steps taken) or reasoning failures (misinterpreting data patterns) is also key, potentially requiring reflection mechanisms or fine-tuning. A minimal sketch of such a scanning-and-flagging loop follows this list.
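To make the scanning-and-flagging behavior concrete, here is a minimal Python sketch. The connector functions, record shapes, retry behavior, and consistency rule are illustrative assumptions standing in for real PDM/ERP APIs; they are not drawn from the sources or any specific product.

```python
# Minimal sketch of a data-quality scanning agent loop. The connectors,
# record shapes, and consistency rules are illustrative stand-ins for
# real PDM/ERP APIs, not any specific product's interface.
from dataclasses import dataclass

@dataclass
class Flag:
    part_no: str
    issue: str          # what the agent found
    needs_human: bool   # route to a data steward for review

# Stubbed "tool calls" -- in practice these would wrap system APIs
# (e.g. configurable web services) and can fail or time out.
def fetch_pdm_records() -> dict[str, dict]:
    return {"P-100": {"desc": "BRACKET, STEEL"},
            "P-200": {"desc": "HOUSING, ALUM"}}

def fetch_erp_records() -> dict[str, dict]:
    return {"P-100": {"desc": "Bracket, steel"},
            "P-300": {"desc": "GASKET, RUBBER"}}

def scan_for_inconsistencies(max_retries: int = 2) -> tuple[list[Flag], float]:
    """Scan both systems, flag mismatches, and report a completion rate."""
    flags: list[Flag] = []
    attempted, completed = 0, 0
    for attempt in range(max_retries + 1):
        try:
            pdm, erp = fetch_pdm_records(), fetch_erp_records()
            break
        except ConnectionError:          # simple error-recovery protocol
            if attempt == max_retries:
                return flags, 0.0        # give up cleanly, no partial state
    for part_no, rec in pdm.items():
        attempted += 1
        other = erp.get(part_no)
        if other is None:
            # Missing counterpart: likely a disconnected data flow.
            flags.append(Flag(part_no, "present in PDM, missing in ERP", True))
        elif other["desc"].lower() != rec["desc"].lower():
            # Tolerate naming-convention differences; flag real mismatches.
            flags.append(Flag(part_no, "description differs across systems", True))
        completed += 1
    return flags, completed / max(attempted, 1)

if __name__ == "__main__":
    flags, completion_rate = scan_for_inconsistencies()
    print(f"task completion rate: {completion_rate:.0%}")
    for f in flags:
        print("FLAG:", f)
```

The point of the sketch is the shape of the loop, not the rules themselves: every finding is queued for human review rather than auto-corrected, and the completion rate is the kind of metric the debugging discussion above relies on.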

2. Enhancing User Experience (UX) & Intelligent Search

  • Pieces and Parts / SoR / SoE: This involves AI agents providing a new interface layer or augmenting existing user interfaces (SoEs). The SoRs are still the underlying PLM, CAD, Office, and other enterprise systems containing the product data. The agent sits between the user (SoE) and the various SoRs.
  • What the AI Agent Does: Agents provide natural language interfaces for complex queries across disconnected systems, enabling intelligent search. They suggest relevant information from PLM when users are working in familiar tools like CAD or Office applications.
  • AI Action/Behavior: This uses Contextual Analysis and Information Retrieval. The agent uses LLM capabilities to understand the natural language query, performs Knowledge Retrieval from the various SoRs (acting as external knowledge sources, as in a ReAct+RAG agent), and then Reasons about the retrieved information to format a relevant response or suggestion for the user. A minimal sketch of this retrieve-then-reason flow follows this list.
  • Ownership: The agent augments the user experience, aiming to make the user more efficient. The user remains responsible for the final actions taken or decisions made based on the information provided by the agent. The organization owns the quality of the agent's responses and suggestions. The sources mention the importance of Guardrails to prevent agents from providing incorrect or harmful information. Human-in-the-Loop oversight and feedback loops are crucial here to ensure the agent's suggestions are accurate and helpful.
  • Debugging: Issues might include providing irrelevant suggestions (Reasoning Failures), failing to find information (Tool Calling/Retrieval Failures), or misinterpreting the user's query (Poorly Defined Prompts/LLM Issues). Debugging involves checking Task Success Rate (did the agent answer the query correctly?), Output Format Success Rate (was the response understandable and well-organized?), and Context Window Utilization (was the agent able to handle the complexity of the query?). Continuous evaluation using real-world scenarios (user queries) is essential. Incorporating human feedback is vital; users flagging irrelevant results helps improve the agent. Solutions include refining prompting techniques for better query understanding, ensuring robust Knowledge Retrieval, and improving Reasoning capabilities.
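As a rough illustration of the retrieve-then-reason flow, the sketch below answers a natural-language query against two hypothetical sources of record. The keyword retriever, the source contents, and the citation guardrail are simplifying assumptions; a real implementation would use an LLM and proper retrieval infrastructure.

```python
# Minimal sketch of intelligent search in the ReAct+RAG style:
# understand the query, retrieve from several systems of record, then
# compose a grounded answer. The retriever and sources are placeholders.
SOURCES = {
    "PLM":        ["ECO-123 changed the bracket to stainless steel",
                   "Part P-100 released at revision C"],
    "SharePoint": ["Supplier qualification checklist for stamped brackets"],
}

def retrieve(query: str) -> list[tuple[str, str]]:
    """Tool call: naive keyword retrieval across every source of record."""
    hits = []
    terms = set(query.lower().split())
    for system, docs in SOURCES.items():
        for doc in docs:
            if terms & set(doc.lower().split()):
                hits.append((system, doc))
    return hits

def guardrail(answer: str, hits: list[tuple[str, str]]) -> bool:
    """Only allow answers that cite at least one retrieved passage."""
    return any(doc in answer for _, doc in hits)

def answer_query(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        return "No relevant records found; please refine the query."
    # Reason over the retrieved context (stand-in for an LLM call).
    answer = " | ".join(f"[{system}] {doc}" for system, doc in hits)
    return answer if guardrail(answer, hits) else "Answer withheld by guardrail."

if __name__ == "__main__":
    print(answer_query("what changed on the bracket part?"))
```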

3. Dual-Source Part Number Management

  • Pieces and Parts / SoR / SoE: This specifically targets a common issue spanning PDM and ERP systems, which serve as the primary SoRs for part numbers and supplier information. The agent interacts with these systems via their APIs.
  • What the AI Agent Does: An agent can recognize patterns suggesting that two differently numbered parts (in the PDM or ERP) may be functionally identical despite being from different suppliers. It can maintain "shadow relationships" between these parts without requiring immediate database restructuring. It ensures that changes to specifications propagate across all related parts regardless of numbering scheme. It can gradually help standardize practices by suggesting more maintainable approaches.
  • AI Action/Behavior: This requires Analysis, Pattern Recognition, Relationship Mapping, and Action. The agent Analyzes data patterns (descriptions, specs, supplier info) across different part numbers. It uses Reasoning to infer potential equivalence. It then Acts by creating and maintaining these "shadow relationships" and ensuring data propagation, possibly interacting with the SoRs to update related records or flag changes. It also needs Memory (Entity Memory) to track relationships over time. A sketch of this matching-and-linking pattern follows this list.
  • Ownership: The agent helps manage a data problem caused by existing practices. Engineering or data management teams remain the owners of part numbers and specifications. The agent assists in maintaining data integrity across flawed structures. The sources imply that the agent's suggestions for standardization would require human approval or implementation. The agent is acting on behalf of the data management goal.
  • Debugging: Failures could include incorrectly identifying parts as identical (Reasoning Failure), failing to propagate changes (Tool Calling Failure), or not recognizing the patterns in the first place (Planning/Reasoning Failure). Monitoring metrics like Task Completion Rate (did the agent process all relevant changes?), Tool Selection Accuracy (did it use the correct system APIs?), and potentially custom metrics for "relationship accuracy" would be important. Debugging involves analyzing the agent's Reasoning process and Tool Calling interactions. Checking the agent's Memory could also reveal why it failed to maintain or update a relationship. Validation checks on tool outputs (e.g., did the change propagate correctly?) are crucial.
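Here is a minimal sketch of the matching-and-linking pattern, assuming a simple text-similarity rule and an in-memory store for the shadow relationships. The part records, similarity threshold, and propagation logic are invented for illustration; a real agent would call PDM/ERP APIs and keep its entity memory in a durable store.

```python
# Minimal sketch of dual-source part matching and "shadow relationships".
from difflib import SequenceMatcher

PARTS = {
    "ACME-4711":  {"desc": "M6 hex bolt, stainless", "supplier": "Acme"},
    "BOLTCO-991": {"desc": "Hex bolt M6 stainless",  "supplier": "BoltCo"},
    "ACME-5200":  {"desc": "M8 flange nut, zinc",    "supplier": "Acme"},
}

# Entity memory: candidate equivalences awaiting human confirmation.
shadow_links: dict[frozenset, dict] = {}

def similarity(a: str, b: str) -> float:
    """Order-insensitive text similarity over normalized descriptions."""
    norm = lambda s: " ".join(sorted(s.lower().replace(",", "").split()))
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def find_candidate_equivalents(threshold: float = 0.9) -> None:
    ids = list(PARTS)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            score = similarity(PARTS[a]["desc"], PARTS[b]["desc"])
            if score >= threshold and PARTS[a]["supplier"] != PARTS[b]["supplier"]:
                shadow_links[frozenset({a, b})] = {
                    "score": round(score, 2),
                    "confirmed": False,   # human approval still required
                }

def propagate_spec_change(part_no: str, change: str) -> list[str]:
    """Apply a change to every part linked to part_no, regardless of numbering."""
    affected = [part_no]
    for link in shadow_links:
        if part_no in link:
            affected.extend(p for p in link if p != part_no)
    # In practice: write back via the PDM/ERP APIs; here we just report.
    return [f"{p}: {change}" for p in affected]

if __name__ == "__main__":
    find_candidate_equivalents()
    print("candidate links:", dict(shadow_links))
    print(propagate_spec_change("ACME-4711", "torque spec updated to 9 Nm"))
```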

4. Engineering Change Management (ECM)

  • Pieces and Parts / SoR / SoE: This is a core PLM process involving PDM (for design data, BOMs), potentially ERP (for cost/manufacturing implications), MES (for manufacturing implications), and change management systems (the formal ECR/ECO SoR). Users (engineers, manufacturing, quality, procurement), acting as the SoEs, are involved in submitting, reviewing, and approving changes. The agent interacts with all these SoRs.
  • What the AI Agent Does: The agent autonomously plans and executes complex workflows related to changes. It analyzes a proposed design change, identifies affected components and documents (automating the "affected items" list). It assesses manufacturing implications, potentially running simulations (Design Optimization Agent). It notifies relevant stakeholders. In a full MCP implementation, it performs autonomous impact assessment and change propagation across systems. It can handle dynamic, risk-adjusted approval routing.
  • AI Action/Behavior: This is a prime example of Multi-step Task Automation and Orchestration. The agent needs strong Reasoning (to analyze impact), Tool Calling (to interact with PDM, ERP, MES, and notification systems), Memory (to track the state of the change process), and potentially Planning (to sequence steps). It acts as an Orchestrator coordinating activities across microservices representing these systems. A sketch of this orchestration as an explicit state machine follows this list.
  • Ownership: While the agent automates significant portions of the ECM process (impact analysis, notifications, routing), ultimate responsibility for approving changes and the integrity of the product data lies with the engineering and change review boards. The agent reduces manual effort and speeds up the process but doesn't eliminate the need for human sign-off, especially for critical changes. The sources mention AI-assisted prediction with human verification in transitional phases. Stricter escalation protocols could route high-risk changes to human experts.
  • Debugging: Failures can include misidentifying affected items (Reasoning/Analysis Failure), failing to notify stakeholders (Tool Calling Failure), getting stuck in the workflow (Infinite Looping, Planning Failure). Monitoring metrics like Task Completion Rate (did the change order progress through all steps?), Steps per Task (was the workflow efficient?), Latency (is the change processing slow?), and LLM Call Error Rate (issues interacting with systems) are crucial. Debugging involves analyzing the agent's Planning and Reasoning processes, checking its Tool Calling interactions, and monitoring for Infinite Looping with clear termination conditions. State management is critical to track where the process is and recover from failures. Validation checks on the agent's output (e.g., did it correctly identify affected items?) and human feedback are essential.
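One way to picture the orchestration is as an explicit state machine with a termination guard and risk-adjusted routing, as in the hedged sketch below. The stage names, risk threshold, and stubbed tool calls are assumptions for illustration, not the sources' design.

```python
# Minimal sketch of multi-step change-order orchestration with explicit
# state management, a guard against infinite looping, and risk-adjusted
# routing to human approvers. PDM/ERP/notification calls are stubbed.
from enum import Enum, auto

class Stage(Enum):
    SUBMITTED = auto()
    IMPACT = auto()
    ROUTING = auto()
    HUMAN_REVIEW = auto()
    NOTIFY = auto()
    DONE = auto()

def analyze_impact(change: dict) -> list[str]:
    """Tool call stand-in: derive the affected-items list from the BOM."""
    return ["BRKT-100", "ASSY-500"] if "bracket" in change["summary"] else []

def process_change(change: dict, max_steps: int = 10) -> dict:
    state = {"stage": Stage.SUBMITTED, "affected": [], "log": []}
    for _ in range(max_steps):                 # clear termination condition
        if state["stage"] is Stage.SUBMITTED:
            state["affected"] = analyze_impact(change)
            state["stage"] = Stage.IMPACT
        elif state["stage"] is Stage.IMPACT:
            state["stage"] = Stage.ROUTING
        elif state["stage"] is Stage.ROUTING:
            # Risk-adjusted routing: high-risk changes escalate to humans.
            high_risk = change["risk"] >= 3 or len(state["affected"]) > 1
            state["stage"] = Stage.HUMAN_REVIEW if high_risk else Stage.NOTIFY
        elif state["stage"] is Stage.HUMAN_REVIEW:
            state["log"].append("escalated to change review board")
            state["stage"] = Stage.NOTIFY      # after human sign-off
        elif state["stage"] is Stage.NOTIFY:
            state["log"].append(f"notified owners of {state['affected']}")
            state["stage"] = Stage.DONE
        else:
            break                              # Stage.DONE reached
    return state

if __name__ == "__main__":
    result = process_change({"summary": "bracket material change", "risk": 3})
    print(result["stage"], result["log"])
```

Keeping the stage in an explicit state object is what makes the debugging described above tractable: the trace shows exactly where a change order stalled, and recovery can resume from that stage rather than restarting the whole workflow.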

5. Autonomous Quality Management Systems

  • Pieces and Parts / SoR / SoE: This involves Quality Management Systems (QMS) as the primary SoR, but could also integrate data from MES (manufacturing execution), PLM (product structure, specs), and potentially field service systems (for customer feedback/returns). Agents interact with these SoRs. Users (Quality engineers, manufacturing personnel) are SoEs.
  • What the AI Agent Does: Agents evolve from assisting with statistical process control and root cause analysis to autonomous quality assessment. They might monitor manufacturing data, identify potential quality issues early, suggest corrective actions, or even trigger adjustments in the manufacturing process.
  • AI Action/Behavior: Requires continuous Monitoring, Analysis, Reasoning, and Action. The agent Monitors data streams (from MES, QMS). It Analyzes patterns to detect deviations. It Reasons about potential root causes or corrective actions. It Acts by flagging issues, suggesting solutions, or potentially interacting with the MES/QMS to record defects or trigger process adjustments. This might involve Environment-controlling aspects if the agent can directly influence manufacturing parameters. A sketch of this monitor-and-flag loop follows this list.
  • Ownership: Quality assurance and control remain the responsibility of the Quality department. The agent significantly augments their capabilities, providing real-time monitoring and analysis. However, human oversight and approval would likely be required for significant process changes or dispositioning of non-conforming material. The sources emphasize that AI agents should not be used for tasks requiring deep expertise or high-stakes decision-making without human involvement.
  • Debugging: Failures could include misidentifying issues (Reasoning Failure), failing to integrate data from a system (Tool Calling Failure), or suggesting incorrect corrective actions (Reasoning/Planning Failure). Key metrics include Task Completion Rate (did the agent successfully monitor the process?), Tool Selection Accuracy, and custom metrics for "detection accuracy" or "false positive rate". Debugging involves analyzing the agent's Reasoning logic, ensuring reliable data integration, and incorporating human feedback from quality engineers who validate the agent's findings and suggestions. Continuous evaluation using real-world data streams is crucial.
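To illustrate the monitor-and-flag loop together with the human feedback it depends on, here is a small sketch using classic 3-sigma control limits and a human-confirmed false-positive rate. The measurement stream, baseline, and review outcomes are made up for the example and do not represent a real MES/QMS interface.

```python
# Minimal sketch of a quality-monitoring agent: watch a measurement stream,
# flag readings outside 3-sigma control limits, and track a false-positive
# rate from quality-engineer feedback.
from statistics import mean, stdev

def control_limits(baseline: list[float]) -> tuple[float, float]:
    """Classic SPC limits: mean +/- 3 standard deviations of the baseline."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s

def monitor(stream: list[float], baseline: list[float]) -> list[int]:
    """Return the indices of readings that fall outside the control limits."""
    lo, hi = control_limits(baseline)
    return [i for i, x in enumerate(stream) if not lo <= x <= hi]

if __name__ == "__main__":
    baseline = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97]
    stream   = [10.01, 10.00, 10.40, 9.99, 9.55]   # two out-of-limit readings
    flagged = monitor(stream, baseline)
    print("flagged sample indices:", flagged)

    # Human-in-the-loop feedback: engineers confirm or reject each flag,
    # giving a false-positive rate the team can track over time.
    confirmed = {2: True, 4: False}                # assumed review outcomes
    false_pos = sum(1 for i in flagged if not confirmed.get(i, False))
    print("false positive rate:", false_pos / max(len(flagged), 1))
```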

In summary, Agentic AI in PLM is an intelligent layer orchestrating actions across existing or evolving enterprise systems (SoRs like PDM, ERP, MES, QMS) on behalf of human users (SoEs or collaborators). The AI agent's role involves analysis, reasoning, planning, and executing actions via tool calls (APIs) to these systems. Responsibility remains primarily with human experts, augmented by the agent's capabilities, with critical or complex tasks often escalated. Debugging relies on monitoring agent metrics, analyzing execution traces, validating tool interactions, and incorporating continuous human feedback and oversight. The sources highlight the transition from simple automation to more autonomous, multi-agent systems coordinated across a microservices architecture.
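As a final, cross-cutting illustration of that debugging approach, the sketch below aggregates a hypothetical execution trace into the metrics mentioned throughout (task completion rate, call error rate, latency per tool call). The trace schema is an assumption for the example, not any specific observability tool's format.

```python
# Minimal sketch: turn agent execution traces into debugging metrics.
from collections import defaultdict

TRACE = [  # one dict per tool/LLM call, as a hypothetical agent might log
    {"task": "scan-parts", "tool": "pdm_api", "ok": True,  "ms": 120},
    {"task": "scan-parts", "tool": "erp_api", "ok": True,  "ms": 340},
    {"task": "eco-route",  "tool": "llm",     "ok": False, "ms": 900},
    {"task": "eco-route",  "tool": "notify",  "ok": True,  "ms": 80},
]
TASK_OUTCOMES = {"scan-parts": True, "eco-route": False}  # did each task finish?

def summarize(trace, outcomes):
    completion = sum(outcomes.values()) / len(outcomes)      # task completion rate
    errors = sum(1 for e in trace if not e["ok"]) / len(trace)  # call error rate
    latency = defaultdict(list)
    for e in trace:
        latency[e["tool"]].append(e["ms"])
    per_tool = {t: sum(v) / len(v) for t, v in latency.items()}  # latency per tool
    return completion, errors, per_tool

if __name__ == "__main__":
    completion, errors, per_tool = summarize(TRACE, TASK_OUTCOMES)
    print(f"task completion rate: {completion:.0%}")
    print(f"call error rate:      {errors:.0%}")
    print("avg latency per tool call (ms):", per_tool)
```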