Bridging the Gap: Making Agentic AI Practical in Today's PLM Reality

Following my recent article on "The Agentic AI Revolution: Reimagining PLM as a Flexible Microservices Ecosystem," I've received thoughtful feedback from industry experts that I believe deserves further exploration. While the vision of AI agents orchestrating a seamless digital thread across enterprise systems is compelling, several readers rightly pointed out that many organizations are still struggling with fundamental PLM implementation challenges. How do we reconcile these ambitious visions with current realities?
As Rob Ferrone aptly noted, "SharePoint can't find my files so what are the chances that AI will be able to work with typical data quality?" Similarly, Jos Voskuil observed that many companies are "trying to get a data-driven infrastructure beyond SharePoint & Excel" — a far cry from the sophisticated AI-driven ecosystems I described. These are valid concerns that deserve serious consideration.
In this follow-up, I want to bridge the gap between tomorrow's possibilities and today's challenges, exploring how Agentic AI might actually help solve fundamental PLM implementation problems rather than simply adding another layer of complexity.
The Reality Check: Where PLM Stands Today
Before we can meaningfully discuss how Agentic AI fits into the PLM landscape, we need to acknowledge some uncomfortable truths about the current state of PLM implementations:
- Data Quality Remains a Persistent Challenge: Despite decades of PLM evolution, many organizations still struggle with inconsistent naming conventions, missing attributes, duplicate records, and data scattered across disparate systems.
- The Digital Thread Promise Remains Unfulfilled: While vendors have promised seamless digital continuity for years, the reality for most organizations involves manual reconciliation between systems, spreadsheet exports, and frequent data disconnects.
- Customization vs. Maintainability: As Jos Voskuil reminds us from the "good old SmarTeam days," systems that are "open, easy to customize and flexible" often lead to "impressive results created by local heroes missing the potential to scale in the long term."
- Change Management Challenges: Even when technically sound PLM systems are implemented, organizational adoption remains difficult, with users often reverting to familiar tools like Excel and SharePoint.
These challenges aren't new, but they're surprisingly persistent. Even as PLM vendors release increasingly sophisticated platforms, the fundamental problems of data quality, integration, and adoption continue to plague implementations.
Starting with the Basics: AI for PLM Fundamentals
Rather than positioning Agentic AI as the culmination of an already-mature PLM ecosystem, perhaps we should reframe it as a tool for addressing these persistent challenges. Here's how AI might help organizations strengthen their PLM foundations:
1. Data Quality Enhancement
The first application of AI in many PLM environments should focus on improving data quality. Consider AI agents that:
- Continuously scan for and flag data inconsistencies across systems
- Suggest corrections for missing or incorrect attributes based on similar items
- Identify and resolve duplicate records
- Help standardize naming conventions across legacy data
For example, an "Attribute Consistency Agent" could monitor part data between engineering and manufacturing systems, flagging discrepancies and suggesting corrections based on historical patterns, without requiring a complete system overhaul.
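To make the idea concrete, here is a minimal sketch of what the core of such an agent might look like. The record layout, attribute names, and system roles are illustrative assumptions, not a real PLM API; a production agent would pull these records through each system's integration layer and add the historical-pattern suggestions described above.

```python
# Minimal sketch of an "Attribute Consistency Agent": compare part records
# from two systems and flag attribute mismatches for human review.
# Record shapes are hypothetical - real systems would need connectors.

def find_inconsistencies(engineering_parts, manufacturing_parts, attributes):
    """Return (part_number, attribute, eng_value, mfg_value) tuples
    where the two systems disagree, or where a part is missing entirely."""
    mfg_index = {p["part_number"]: p for p in manufacturing_parts}
    issues = []
    for part in engineering_parts:
        counterpart = mfg_index.get(part["part_number"])
        if counterpart is None:
            issues.append((part["part_number"], None, "present", "missing"))
            continue
        for attr in attributes:
            eng_val, mfg_val = part.get(attr), counterpart.get(attr)
            if eng_val != mfg_val:
                issues.append((part["part_number"], attr, eng_val, mfg_val))
    return issues

eng = [{"part_number": "P-100", "material": "steel", "mass_kg": 1.2}]
mfg = [{"part_number": "P-100", "material": "stainless", "mass_kg": 1.2}]
print(find_inconsistencies(eng, mfg, ["material", "mass_kg"]))
# flags the material mismatch on P-100
```

The key point is that nothing here requires restructuring either system: the agent reads, compares, and flags, leaving corrections to humans until trust is established.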
2. Simplified Integration without Perfect Data Models
Rather than requiring perfect data models across all systems, AI agents can act as intelligent mediators between imperfect systems:
- Translating between different naming conventions in different departments
- Inferring relationships even when explicit links are missing
- Creating "good enough" translations between systems while flagging areas for human review
This approach acknowledges that perfect data harmonization may be unattainable but creates pragmatic bridges between systems as they exist today.
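As a simple illustration of such a "good enough" bridge, the sketch below normalizes part names from two departments and proposes cross-system links by similarity, flagging low-confidence matches for human review. The names, threshold, and matching rule are assumptions for illustration; an AI mediator would use richer context, but the review-flagging pattern is the point.

```python
# Sketch of a mediator between two naming conventions: normalize names,
# fuzzy-match them, and flag uncertain links for human review.
# Threshold and sample names are illustrative assumptions.
import difflib

def normalize(name):
    """Strip punctuation and case so 'Brkt_Assy-01' and 'BRKT ASSY 01' align."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def propose_links(source_names, target_names, auto_accept=0.95):
    """Return (source, best_target, similarity, needs_review) proposals."""
    proposals = []
    for src in source_names:
        best, best_ratio = None, 0.0
        for tgt in target_names:
            ratio = difflib.SequenceMatcher(
                None, normalize(src), normalize(tgt)).ratio()
            if ratio > best_ratio:
                best, best_ratio = tgt, ratio
        proposals.append(
            (src, best, round(best_ratio, 2), best_ratio < auto_accept))
    return proposals

print(propose_links(["Bracket-Assy_01"], ["BRKT ASSY 01", "Bracket Assy 01"]))
# the second target normalizes identically, so it links with full confidence
```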
3. Enhancing User Experience without Replacing Existing Systems
Instead of forcing users to abandon familiar tools, AI agents can meet users where they are:
- Providing natural language interfaces to complex PLM queries ("Show me all parts affected by this change")
- Enabling intelligent search across disconnected systems
- Suggesting relevant information from PLM when users are working in familiar tools like CAD or Office applications
This approach respects organizational inertia while gradually bringing PLM capabilities into users' everyday workflows.
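Even the natural language layer can start simple. The toy front end below maps a few phrasings onto structured PLM queries; a production agent would use an LLM for intent extraction, and the patterns, function names, and query shapes here are purely hypothetical.

```python
# Toy natural-language front end over an assumed PLM query API:
# regex intent patterns stand in for LLM-based intent extraction.
import re

INTENTS = [
    (re.compile(r"parts affected by (?:change|ecr)\s*(\S+)?", re.I),
     lambda m: {"type": "where_used", "change": m.group(1)}),
    (re.compile(r"find duplicates? of part\s+(\S+)", re.I),
     lambda m: {"type": "duplicates", "part": m.group(1)}),
]

def parse_query(text):
    """Translate a user phrase into a structured query, or pass it through."""
    for pattern, build in INTENTS:
        match = pattern.search(text)
        if match:
            return build(match)
    return {"type": "unrecognized", "text": text}

print(parse_query("Show me all parts affected by change ECR-123"))
# {'type': 'where_used', 'change': 'ECR-123'}
```

The unrecognized fallback matters: the agent should hand ambiguous requests back to the user rather than guess, which is how it earns trust inside familiar tools.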
The Dual-Source Part Number Problem
Rob Ferrone raised a specific challenge: "I'd love to know how it will maintain the relationship between data when humans do stuff like creating new part numbers to dual source etc."
This scenario highlights a common disconnect between the theory of PLM and practical realities. In an ideal world, a part would maintain its identity regardless of source, with sourcing information maintained as an attribute. In practice, organizations often create new part numbers for identical components from different suppliers, breaking the logical relationship.
How might AI help? An agent could:
- Recognize patterns suggesting that two differently numbered parts may be functionally identical
- Maintain "shadow relationships" between these parts without requiring database restructuring
- Ensure that changes to specifications propagate across all related parts regardless of numbering scheme
- Gradually help standardize practices by suggesting more maintainable approaches
This wouldn't instantly solve the problem, but it would create a pragmatic bridge between current practices and better data management.
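To illustrate the "shadow relationship" idea, here is a sketch of a lightweight registry that groups differently numbered but functionally identical parts without touching the underlying PLM database. The matching rule (identical key attributes) is a deliberately simple stand-in for the pattern-recognition an AI agent would actually apply, and the attribute names are assumptions.

```python
# Sketch of a "shadow relationship" registry: group part numbers that
# share key attributes, outside the PLM database itself.

class ShadowRegistry:
    def __init__(self, key_attributes):
        self.key_attributes = key_attributes
        self.groups = {}  # attribute fingerprint -> set of part numbers

    def fingerprint(self, part):
        return tuple(part.get(a) for a in self.key_attributes)

    def register(self, part):
        self.groups.setdefault(
            self.fingerprint(part), set()).add(part["part_number"])

    def equivalents(self, part):
        """Part numbers likely interchangeable with this one."""
        group = self.groups.get(self.fingerprint(part), set())
        return group - {part["part_number"]}

reg = ShadowRegistry(["form", "fit", "function"])
a = {"part_number": "P-100", "form": "M6 bolt", "fit": "DIN 933",
     "function": "fastener"}
b = {"part_number": "P-200", "form": "M6 bolt", "fit": "DIN 933",
     "function": "fastener"}  # same component, second supplier
reg.register(a)
reg.register(b)
print(reg.equivalents(a))  # {'P-200'}
```

A change-propagation agent could then consult `equivalents()` whenever a specification changes, ensuring updates reach every dual-sourced sibling regardless of numbering scheme.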
The "Plumbing" Problem: AI for Data Infrastructure
As Rob noted, companies need "AI that can help companies stand up a solid product data management operating system." This is the unglamorous but essential "plumbing" work that PLM consultants often focus on.
AI could assist here by:
- Analyzing data flows across organizations to identify bottlenecks and inefficiencies
- Suggesting optimized workflows based on actual usage patterns
- Providing intelligent assistance for system configuration and setup
- Automating routine data maintenance tasks that often fall through the cracks
These capabilities wouldn't replace human expertise but would make it more scalable and consistent, addressing Jos Voskuil's concern about "local heroes" creating unsustainable solutions.
Business Realities: The Vendor Perspective
Jos raises another important point: "The main question will be for [vendors] - how do I remain profitable as I am so open?"
This gets to the heart of the business model challenges that PLM vendors face. Traditional PLM business models rely on a combination of software licenses, maintenance fees, and professional services. An open ecosystem approach threatens this model unless vendors can find new ways to create and capture value.
Some possibilities include:
- The Platform Model: Vendors focus on creating platforms that host third-party applications, taking a percentage of revenue (similar to app stores).
- The AI Services Model: Vendors provide specialized AI services that work across any PLM ecosystem, charging for capability rather than data lock-in.
- The Solutions Model: Vendors shift from selling software to selling business outcomes, with pricing tied to measurable improvements in time-to-market, cost reduction, or quality enhancement.
Each of these approaches would require significant business model innovation, but they represent potential paths forward that balance openness with profitability.
The Digital Thread as Essential Infrastructure
Martin Eigner offers valuable perspective on the importance of the digital thread concept. He notes, "I completely agree that if we use 90s technology for PLM, we will end up in a dead end. Like you, I see that a digital thread running across the many legacy systems along the product lifecycle offers two advantages. On the one hand, it enables holistic engineering process support by providing all configuration items, e.g. for engineering release and change management. On the other hand, it is an essential prerequisite for AI agents due to the comprehensive collection of information."
Martin's perspective reinforces the idea that the digital thread is not merely a PLM buzzword but essential infrastructure for both traditional engineering processes and emerging AI capabilities. His experience at Bosch highlights a practical application: "This brings us closer to my dream after 5 years of global change management at BOSCH, the automatic completion of affected items in the ECR (see also Oleg Shilovitsky's blog AI-powered CCB Agent)."
This example of automating the identification of affected items in Engineering Change Requests represents exactly the kind of practical application of AI that could deliver immediate value while building toward more sophisticated capabilities.
Martin also offers insight into how the digital thread might be implemented: "In Figure 1, I show that the digital thread as a prerequisite can be provided in parallel above the legacy systems as a stand-alone solution or via a PLM system based on modern software architecture. With its NO/LOW code engine, repository, and containerizable Web services technology, Aras is definitely a candidate for such a solution."
This architectural perspective aligns well with the microservices approach discussed earlier, suggesting practical paths forward that don't require wholesale replacement of existing systems. Martin and I will explore these ideas in greater depth at the upcoming ACE Conference (March 31-April 3).
Evolving Gradually: A Practical Roadmap
Given these realities, how might organizations practically approach the integration of Agentic AI into their PLM environments? I suggest a phased approach:
Phase 1: AI-Enhanced Data Management
- Deploy AI tools that improve search and discovery across existing systems
- Implement agents that monitor and improve data quality
- Use AI to simplify user interactions with complex PLM functions
Phase 2: Intelligent Integration
- Develop AI mediators between key systems that handle translation between different data models
- Create natural language interfaces for cross-system queries
- Implement "shadow" relationships for key data that exists in multiple systems
Phase 3: Process Automation
- Deploy agents that can orchestrate simple cross-system processes
- Implement predictive capabilities that anticipate bottlenecks
- Create self-service capabilities for routine PLM tasks
Phase 4: Full Agentic Capability
- Deploy autonomous agents that can handle complex cross-system tasks
- Implement predictive engineering and manufacturing optimization
- Create truly seamless digital threads across the enterprise
This graduated approach acknowledges that organizations need to strengthen their PLM foundations before pursuing more advanced capabilities.
The Arrowhead Connection
Jos Voskuil mentioned the Arrowhead project, which focuses on creating service-oriented architectures for industrial automation. This project shares philosophical similarities with the microservices approach I discussed previously, emphasizing interoperability, security, and scalability.
The Arrowhead approach could indeed serve as an architectural model for how PLM systems might evolve, with discrete services communicating through well-defined interfaces. AI agents could then orchestrate these services to create coherent workflows across system boundaries.
Addressing Advanced PLM Requirements: Ontology, Semantics, and Servitization
Steef Klein raises important questions about how modern PLM systems like Aras support more advanced capabilities required for truly integrated enterprise solutions. Specifically, he asks about ontology, semantics, dynamics, analytics, and support for the emerging servitization business model.
These are excellent questions that get to the heart of what's needed for next-generation PLM systems. While Aras has traditionally excelled in PDM workflows and change management, the requirements for a system that can truly support AI agents go beyond these traditional capabilities.
Ontology and Semantics for Cross-Domain Integration
The ability to maintain consistent meaning across different domains (engineering, manufacturing, service, etc.) requires robust ontological models and semantic capabilities. This is especially critical when integrating across PDM, CRM, ERP, and Field Service Management systems, as Steef notes.
Traditional PLM systems have been built around structured data models rather than semantic relationships. For AI agents to effectively bridge these domains, they need a deeper understanding of how concepts relate across disciplines - not just how data is structured within each system.
This semantic foundation becomes even more critical in servitization business models, where the boundaries between product and service blur, requiring integrated data models that span the entire product-service lifecycle. The article Steef references on servitization highlights how manufacturing organizations are shifting from pure product sales to integrated product-service offerings, fundamentally changing how they need to manage information across traditionally siloed systems.
The Microservices and Event-Driven Architecture Question
Steef also raises questions about Aras's capabilities regarding "Agentic AI integration within event-driven Packaged Business Capabilities, Microservices, seamless upgrades, etc." This speaks directly to the architectural foundations needed to support the kind of flexible, responsive systems required for modern digital thread implementations.
The evolution toward event-driven architectures and granular microservices represents a significant shift from traditional monolithic PLM platforms. This architectural approach allows for more responsive, scalable systems that can adapt to changing business requirements - essential capabilities for supporting servitization business models where the relationship between customer, product, and service provider is dynamic rather than static.
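The core of the event-driven pattern is small enough to sketch: services publish domain events to a broker, and subscribers - including AI agents - react without point-to-point integrations. The event names, payloads, and the in-process bus below are illustrative assumptions; a real implementation would sit on a message broker, but the decoupling principle is the same.

```python
# Minimal in-process sketch of an event-driven integration pattern:
# publishers and subscribers know only event types, never each other.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()
log = []
# A hypothetical "affected items" agent listening for change requests:
bus.subscribe("ecr.created",
              lambda event: log.append(f"scan BOM for {event['part']}"))
bus.publish("ecr.created", {"part": "P-100"})
print(log)  # ['scan BOM for P-100']
```

Because new capabilities attach by subscribing rather than by modifying existing services, this is also what makes the "seamless upgrades" Steef mentions plausible: agents can be added, replaced, or retired without rewiring the systems that emit the events.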
As PLM vendors evolve their platforms, the ability to support these architectural patterns - along with the semantic and ontological foundations mentioned earlier - will be critical differentiators in their ability to support true digital thread implementations and AI-augmented processes.
Whether existing PLM systems like Aras can fully transform to support these capabilities or whether new approaches will emerge remains an open question worth further exploration. This represents another dimension of the pragmatic idealism discussion - balancing what's possible with current platforms against where the technology needs to evolve.
Conclusion: Pragmatic Idealism
The feedback from industry experts like Rob, Jos, Martin, and Steef highlights the tension between visionary ideas and practical realities in the PLM world. Rather than seeing this as an either/or proposition, I believe we need a form of pragmatic idealism.
Yes, the reality of PLM implementation today often involves struggling with basic data management challenges. And yes, the vision of seamless Agentic AI orchestration across systems remains aspirational for most organizations. But by applying AI technologies first to these fundamental challenges, we can begin building the foundation for more ambitious capabilities.
The future of PLM will likely involve both incremental improvements to today's challenges and transformative new capabilities. The most successful organizations will be those that can walk this line - addressing immediate pain points while gradually building toward a more connected, intelligent product lifecycle ecosystem.
What do you think? Are there specific PLM challenges in your organization where AI could make an immediate difference? Or do you see other barriers to adoption that need to be addressed? I'd love to continue this conversation in the comments.
Fino's Articles about Agentic AI and PLM:
Part 1: The Agentic AI Revolution: Reimagining PLM as a Flexible Microservices Ecosystem
Part 2: Bridging the Gap: Making Agentic AI Practical in Today's PLM Reality
Part 3: Future Horizons: Model Context Protocol (MCP) and Autonomous Systems in Manufacturing PLM
Part 4: Transforming Engineering Workflows: Agentic AI and MCPs Address Daily PLM Challenges
Part 5: The Bill of Information: Beyond Bill of Materials in the Digital Thread Era