🔮 Future of PLM · Episode 2

ALM Meets PLM: Application Lifecycle Management in the Age of AI

Michael Finocchiaro · 47 min read
Guests: Future of PLM Panel

Episode Summary

The episode "ALM Meets PLM: Application Lifecycle Management in the Age of AI" explores the integration of application lifecycle management (ALM) with product lifecycle management (PLM), particularly in light of advances in artificial intelligence (AI). The guests are Jos Voskuil, a veteran PLM consultant known for the Virtual Dutchman blog; Valentina Futurinova, who works for AVEVA and focuses on asset lifecycle management; and Rob Ferrone, known as the "PLM plumber" due to his extensive experience. These experts discuss their backgrounds in PLM, emphasizing the challenges of managing vast amounts of data across different industries and ecosystems.

Key insights from the discussion include the importance of connecting disparate systems and data models for efficient management, especially in industries with complex assets like wind turbines. The panelists also highlight how AI can be applied to optimize maintenance processes by analyzing historical repair data and predicting future needs, potentially reducing downtime and improving operational efficiency. Additionally, they discuss the potential of AI in generating innovative solutions, such as personalized meal kits through predictive analytics.

For PLM and engineering professionals, the key takeaway is that while integrating ALM with PLM presents significant challenges, leveraging AI can offer substantial benefits by enhancing data management, predicting maintenance needs, and driving innovation. The episode underscores the necessity for continuous improvement in data connectivity and the strategic application of AI technologies to streamline operations and improve overall performance.
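As a concrete illustration of the predictive-maintenance idea the panel discusses, here is a minimal sketch. The tag names, repair dates, and the mean-interval heuristic are all invented for illustration; real asset systems use far richer sensor data and models:

```python
from datetime import date, timedelta

# Toy repair history per equipment tag: dates when each asset was repaired.
# Tags, dates, and the heuristic below are illustrative only.
history = {
    "P-101": [date(2022, 1, 10), date(2022, 7, 14), date(2023, 1, 20)],
    "T-205": [date(2021, 3, 2), date(2023, 2, 28)],
}

def next_due(repairs):
    """Naive prediction: assume the mean interval between past repairs
    predicts when the next maintenance will be due."""
    gaps = [(b - a).days for a, b in zip(repairs, repairs[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return repairs[-1] + timedelta(days=round(mean_gap))

for tag, repairs in sorted(history.items()):
    print(tag, "next maintenance due around", next_due(repairs))
```

In practice, the historical repair data would come from the maintenance system (keyed by the same tag as the design data), which is exactly the data-connectivity point the episode keeps returning to.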


Full Transcript

Michael Finocchiaro

All right. Excellent. So we'll go ahead and get started. My name is Michael Finocchiaro. I'm sort of the host, the master of ceremonies, and the least experienced in this particular field that we're going to talk about today, which is asset lifecycle management. We're joined by Jos Voskuil, the flying Dutchman, who has a very long history with PLM and has worked extensively with customers doing asset lifecycle management. Valentina Futurinova, I think I said it right again, who works for AVEVA and who also has a really long career in PLM and is now responsible for the Aras relationship, and so the PLM asset lifecycle management connection with Aras. And then we're with Rob Ferrone, the PLM plumber. We're privileged to have him with us too. He's got a lot of experience as well. And he's a main promoter of one of the topics on the first call, which was digital thread as a service. We're hoping Rob will bring his insights into how digital thread as a service is particularly pertinent when we're talking about asset lifecycle management. In order to start off, why don't we go once around the horn? So everybody, I gave a quick intro. Jos, why don't you introduce yourself, and then Valentina and Rob, and then we'll have Jos introduce the subject.

Well, my history started with SmarTeam, which was a client-server toolkit, more PDM than PLM, but very flexible. And I got a role there to develop industry templates, initially for the fabrication industry and the electronics industry. And in 2006, 2007, we said these kinds of PLM concepts are also valid for asset lifecycle management, for the continuity of data. And there I did my first projects with some nuclear owner-operators, because they were also in need of having, in particular, the configuration management and the change management of their plants controlled. And it was one of my side jobs to promote PLM also for other industries, not only automotive and aerospace.
And you might have read about my PLM experiences since 2008 in my Virtual Dutchman blog posts. And there I also wrote about asset lifecycle management, in the beginning, as a separate stream.

But now, as we can see, it becomes a commodity, because we have connected products and we have connected systems. So I was very enthusiastic when we were going to raise this topic today.

Great. So great to hear about your experience, Jos.

So I've worked as a diesel engine engineer for about 20 years, for Cummins and Caterpillar. So I was a user of different systems, Teamcenter and Windchill, probably all the different types of PLM, and experienced frustration first-hand in the systems not being linked properly and duplication of data. But now in AVEVA, working with a different industry, with huge assets compared to discrete products, I can see different challenges, being responsible for our relationship with Aras and the development of our asset lifecycle management platform based on our AVEVA tools integrated with Aras Innovator. I can see a lot of challenges with the amount of data our customers are using, and the very different workflows that they need for different applications. So it's a really interesting time right now, trying to figure out how do we optimize the data that we generate in AVEVA tools with the data model that we have, and how do we link all that together in a way that is most efficient for our customers, where they've got hugely different ecosystems of tools, and all of these tools use different data models, use different approaches. So it's a real challenge. So really excited to talk about it today and get your perspectives and opinions. Yeah.

Awesome. Thanks, Valentina. So Rob, I think that actually Valentina gave you a nice intro, because she mentioned that it's a problem of connecting data, and that's sort of your bailiwick.

First of all, thanks very much for having me, and a big hello to all those people watching live and those people that will watch this back. So, I apologize that I missed the last installment. I had flu, which I seem to get every month these days. And I think that's from having young children.

So my background, I think I've always been... you see me in the PLM space, you see me talking about PLM, but actually my background has always been more closely tied to engineering. So I studied engineering, I went into an engineering role in about 2000, and I didn't even know about PLM at that stage. And I just saw the engineers were struggling. In fact, not just engineers, but the whole business: project management, purchasing, everyone was frustrated because the systems weren't working in the way that people wanted them to. And I gravitated towards that challenge, got my arms around the data, put my data plumber hat on, and effectively helped information flow to people, got it to them in the right format so that they could do the thing that they needed to do most effectively. And things just worked, and people loved it. And they said, okay, can we get some more of this? That was the origin story of the company that I formed back then, called Quick Release. And it grew into a large product data management consulting company. And I then sold the company to a huge engineering company that provides engineering services and solutions to, you know, engineering companies, so it's kind of a continuation of the engineering journey. And now I am independent and enjoying this next phase of my life and my career, working with many different companies and lots of exciting projects. And yeah, this is one milestone on the journey.

Thanks, Rob. And I guess some of us will see each other in Spain next week, right? Finally, yeah, exciting, for Share PLM.
So Jos, why don't you start in and explain a bit of your experience in this particular field? Because, you know, typically when we think of PLM, we think more of creating products, the ideation and engineering, and here we're talking about: it's already built, how do we manage it, and how does that inform the decisions going on upstream? Can you give us an overview? Okay.

Although PLM could also be plant lifecycle management. So you can always change the acronym. And listening to Rob, Rob is really the guy of the future, because when ALM started in the early days, it was very much document-driven. And you had really two different worlds: you had the design world where assets were developed, and often the owner-operator of the assets was not developing them themselves, it was outsourced to engineering companies, the so-called EPC companies, and they were responsible for the engineering, the procurement and the construction, and then there was the handover to the owner-operator. And it looks like a logical process. You design something and then you dump all your knowledge to the owner-operator, and you hope that you dumped not too much, because you want to keep your IP. In the end, they must have just enough information to keep the plant or the asset up and running. And on the other side, now you have specialized systems like the SAP PM module and Maximo from IBM, because the big difference is that it's not classical BOM management in a plant. When you are talking about assets over a lifecycle, you have to talk about functional and logical components of the asset, because the BOM might change over time. And it doesn't make sense to keep track of the details of the BOM. It's the functions of the asset and the logical structure that you're maintaining. And that's what I learned when I started to work for the first time with some nuclear plant owner-operators on building a complementary data model to the SAP PM module at that time, because they wanted to manage change, and they also wanted to manage the configuration, the pure configuration management, which is not on the ERP side. And the good thing about nuclear is that there is a big margin: people have time, there is no pressure. So yeah, we could also explore and investigate scenarios to get the optimum benefits of at least a PLM definition complementary to the assets in operation.
And then I thought, let's do it also with the EPCs. So I started talking with the big engineering companies. I said, well, we have PLM, a fantastic infrastructure to collect all your design information from the assets, and then we can push it to the operational systems. And that's where I met, of course, AVEVA, actually one of the key players in this world. But I also met the challenges of the EPC contractors, who are often very fragmented, tool-oriented, and how do you let them work together in one single operational environment? And I think slowly, with cloud, this has become much easier, because it's not local systems anymore. Then we also got IoT, where we are in the mainstream of PLM, where we have connected devices. And suddenly we also have a connection between the design world and the operational world, with the DevOps approach. And all those concepts that were developed in the ALM world for plants now suddenly are also useful for the current systems that we develop and deliver. And that's why I'm here. Also curious to learn from Valentina: how are you evolving in that time from the good old document-driven approach to the data-driven approach?

Yes, and I think you also mentioned, just as we were doing the introductions, AVEVA's tag approach compared to part numbers. It would be good to hear a bit about your experience with that, because I think that's a real data-driven approach that's different from a traditional PLM approach.

Exactly. I mean, I also discovered and learned that the tag number is the crucial placeholder, both for the design information, but also, at the end, for the instance when it's installed. It's the missing link. The three-body problem, if you know it, I would say. You have three entities instead of two, because historically we always thought, okay, you have an EBOM and an MBOM. That's the way we thought in the PLM world. And now we also have a functional placeholder or location placeholder. And the more they are connected in a data-driven manner, the more easily you can also present them in a GUI, combined, for the end user. Because one of the tricks that people did in the past with the tag number: they say it's an attribute of the part.
So they just add a property to the part. But then, of course, when the part changes but the function remains the same, you suddenly have a new object in your data model that is not logical, because the function hasn't changed. Only the part has changed.

I've got a question in that case. Skipping back to the introduction, Michael talked about PLM and how asset lifecycle management is a lesser-known part of PLM. But ultimately, if we're talking about PLM, it's really everything. And I see asset lifecycle management sitting within PLM and just as important. In fact, for some companies, it's actually where the money's made, so that the product itself is break-even and the money's made in the servicing and the operation. So, you're talking about, you know, things like nuclear reactors. So where does BIM fit into this, to throw another acronym into the mix?

Okay, well, if you look at a nuclear reactor, it's a lot of concrete, and inside this concrete there is some equipment that is probably dangerous outside of the concrete. So BIM, the Building Information Model, is also a methodology, at least for asset development, so that you can coordinate with the different contractors on space and functions together. I would say in every nuclear plant, BIM became crucial. But before, they were not aware of BIM. It was also document-driven. So the move from documents to 3D models, because BIM is also more about 3D models, is something that we've seen only in the last maybe five, ten years. Especially the UK and China were pushing a lot for all new constructions to be BIM Level 3, which is the advanced level where PLM also makes sense.

I was going to say, I was having a conversation with a company that is developing small modular reactors, and they're at the very start of the journey. And so they're thinking ahead to the information that needs to be there in 20, 30 years from now. And so the question is, first of all, what information, what data should they be collecting on that journey? And secondly,

what systems are going to be relevant in 30 years? Maybe you can rule PDFs out on the basis that they won't be supported. And what will BIM, BOM, PLM look like in 30, 40 years, especially with AI having such an influence on technology?

And that's why a data model is so important. I mean, you can't use special document formats all the time, because then you have to upgrade your applications. If you store your information in databases with the right data model, then you can use it in different apps. And I think that's the big development that has also happened in the process industry. The ISO 15926 standard is one of the few that has been developed by a lot of contractors and also owner-operators in the oil and gas industry, to make sure that they can exchange data as data elements and not in file formats. And to come back to your challenges, I worked on a project, which in the end didn't go on, from Sellafield, where they were building a nuclear waste plant. That was the idea: how to process nuclear waste in this plant. And this plant should operate on its own, because you can't go inside anymore. So a lot of complexity with locations, and that's where BIM was also relevant, but also simulations, and also understanding where or how can I repair my equipment when I can't enter it. And so it's a huge topic, I would say, where the virtual twin, which we haven't discussed yet, is so relevant. And it's not even in space this time; the virtual twin is now on land.

And actually, Rob, to your point about small nuclear reactors, we get a lot of these conversations. And that's where the topic of concurrent engineering comes along, and modular design, because you're trying to reuse parts of the same design and also update some of the safety components on the current design. So you want to run two projects in parallel. And that's where a good PLM system where you can introduce change management is so important.
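Jos's point about storing information as data elements with a proper data model, rather than in application-specific document formats, can be sketched in a few lines. The attribute names here are invented for illustration and are not taken from ISO 15926 or any vendor's schema:

```python
import json

# A tagged equipment item held as plain data elements rather than locked
# inside a proprietary document format. Attribute names are illustrative.
tag_record = {
    "tag": "P-101",
    "function": "cooling water pump",
    "design_pressure_bar": 16.0,
    "installed_part": "PMP-7743-C",
}

# Any application that understands the data model can consume the record;
# no document-format converter or application upgrade is required.
exported = json.dumps(tag_record)   # hand over as neutral data...
imported = json.loads(exported)     # ...and read it back in another app
print(imported["tag"], "->", imported["function"])
```

The design choice this illustrates is the one Jos names: once the data model, not the file format, is the contract, applications can come and go over the asset's 30-year life without stranding the information.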

Yeah, configuration management is a huge topic that we're seeing. I love how it all comes together, and you need all of these things to play together in order to get the right outcome. It makes me wonder, and this is a leading question, I guess, for Valentina, but the systems today, the big three, are not really as focused on this particular market as perhaps some other ones. How is it that... because it's sort of that vision I mentioned earlier, where traditionally PLM is focused more on the front end, before we manufacture it, and not after it's already built. Whereas, as Rob said, it really should be a holistic view. But I guess that's also a function of the industry you're sitting in. For disposable products, we don't really care about the thing at the end. So when we're looking at AVEVA and the Aras partnership, and we're looking at how does a PLM system expand in order to include that asset part, how is that working and how do you guys handle that?

So I have a lot of experience in trying to get them to work together. And it all starts with the money, as you said, Rob: where is the money? If you are an asset lifecycle manager, the plant in operation is making the money. As long as the plant is running, that's where the millions, sometimes per day, are generated, where these engineering projects are, in a way, peanuts compared to the other operations. And I spoke with big dairy companies also about controlling their asset development. And they were not so interested. They would like to standardize, but they can't standardize their contractors. So it's part of the accepted costs of operation and maintenance, because on the management level, people think it's all about keeping the plant running as well as possible. It's really a mind shift, I think, for smaller products, where you are going to have products as a service, where this asset lifecycle management philosophy will change.
In a nuclear plant, it works, because the cost of change was like $4 million per year, just on managing changes. But if I talk with a refinery, they say the work we are doing is peanuts compared to what the

refinery is producing per day. So let's just have more people do it. And unfortunately, if there is an issue, it will burn.

But I think, to your point, Michael, about the data-centricity, really, of the handover: when you've designed the plant, the handover is very important, so that the data-centric model of the plant can be the one that's handed over. Where, as you're saying, because it's the IP, you can't hand over everything, but you've got to be able to hand over more than just the paper, the drawings. You want to be able to hand over a working version of the plant that can undergo changes. And that's where Aras Innovator comes in as well, because within that environment you can hand over... you don't have to hand over engineering tools, you can hand over a database of a data-centric, model-based system that can still go through all the change management, all the configuration management, and be kept up to date by the owner-operator. So this is what, effectively, we hope to have in the future. Today we already have that handover in the data-centric way, but we don't offer the full capability of keeping it live. And I think that's the part that's going to be very important: being able to keep that data alive. But the challenge, of course, is the data standards. I think, Jos, you mentioned some of the data standards already out there, but not everybody is using the same data standards. They're different for different industries. You know, we don't operate just in oil and gas, we operate in the marine industry, where they've got different standards.

And the data generated, right? That poses an important question: who owns the data? You know, is it the operator that owns it? Or is it the company that developed the product? Because you could have someone developing a

small modular reactor that they sell to a country, and that country's laws prohibit the data leaving the country, et cetera. So there are some really interesting themes; there are a lot of busy discussions here.

I also had an equipment manufacturer who wanted to work with connected products, but the customer didn't want them to read the data, because they say: then you can see how much we operate the machinery, you can see the value of the products, and maybe the prices go up. So the data can also give a lot of information that you don't want to expose to others.

And so, I mean, the large value that you get from PLM is the idea that you can take the information from that in-service phase and feed it back up to the place where products are developed, so that you can inform them. So that's a really interesting challenge. How do you do that where you don't own the product once it's in service and you don't own the data, but you could use the information to inform it? Do you have to buy the data back from the company that's operating it? Does it have to be anonymous? I guess some industries are more regulated than others. Valentina, what do you think? What is your experience?

Well, what I was going to say is, I think that's something everybody is trying to figure out at the moment. You know, a lot of our competitors, and us, are trying to develop platforms like Connect, where the users of data and the generators of data are all living on the same platform, and there are agreements in place. But, you know, the medical industry has answered some of these questions. You know, they've anonymized the data so that you could still use it for research, but you don't know exactly who, or I guess where, the data came from. So there may be different ways; we can borrow these ideas from other industries.
And do you see... I mean, my experience is that you've been talking a lot about, you know, infrastructure projects, but there are, you know, lots of other products out there. And you mentioned some of them. Valentina, maybe just talk through some of the differences that you experienced between different products and how they, you know, operate

asset lifecycle management, and the value they get from it or not, and for whom is it important and who is on the journey?

I think there are a lot of similarities. If you talk about a ship, it's a lot closer to a product than a plant is, because you build it in a factory, in a shipyard. So you can set up your tools and set up your shipyard effectively as in automotive; you could almost say it's a production line, right? If we get to that point... a lot of the shipyards are trying to get to that point, where ships are a lot more mass-produced than in the past. But ultimately, you know, you could look at the ship as a moving plant or a moving building, right? So in a way, yes, there are differences, but there are also a lot of similarities. I think it may be more to do with the actual product development stages. They are quite specific, from the marine industry to the plant industry to the nuclear industry, where you've got different regulations, different simulations you're running as well, different tools you're using in the process. So the data types that you're storing along the way, and all the different types of digital twins, I think that's the main difference, really, that you get.

There's a question coming in that might be interesting at this point. It's actually for Jos, from Jens Chimnitz. He's asking: Jos, you may recall our plant BOM implementation at FLS back in the day, along with the classic product EBOM. Is this still best practice, or should we split these functions across multiple best-in-class systems? So it goes well with what we're talking about right now, right? So what do you think, Jos?

When we worked together with Jens, we were trying to have a kind of generic EBOM as a result of a system definition, I would say. And of course, for this EBOM definition, at that time, we had only one system as a target. But it could, of course, also be coming from different systems. And that's where we talk about digital thread as a service. The future I'm seeing, it's no longer

building information in one system; you have a federated environment of connected systems. And the most important thing is, of course, that there is commonality of data model and not replication, so that you have direct access to additional information and are not, I would say, replicating data from one system to another with all kinds of transformations. I think that's the ultimate goal. Whether you can realize it now, I'm not sure. But historically, I mean, we have a lot of tools and a lot of legacy, and moving out of the tools and legacy to a more futuristic environment, I think that's really the biggest challenge. There is no greenfield, unfortunately.

Well, that's it. Because the next question after that was actually from Sashkina Ravikanti, who says: the cost of building software is projected to go down because of AI. What do you suggest for someone who wants to build a PLM tool as a greenfield implementation? Like, what mindset? Major players, DS or Siemens, are evolving, but he thinks new players will have to take advantage of AI being more natively built into the solution. So, Valentina, is that one of the considerations at AVEVA as you're building out this tool on top of Aras to handle asset lifecycle management?

Yeah, definitely. Think back to the bill of material management: that's exactly what we're trying to achieve, actually, because we're building our system based on tags and class libraries and definitions, metadata and attributes. The way we're approaching the bill of material is a tag-based structure that comes out natively from the design tools and stays in that format. So we're not translating it to parts along the way. So that's exactly what we're trying to build out, and bring AI into play as well. I think Aras is doing a lot of work in that space as well.

Valentina, just for me and for the other people listening, what do you mean by tag-based?
Can you explain that to us: where does that information come from, and do you need to keep it live, or is it set once and kind of done?

I'd love to know more about that.

I think there's a traditional difference between industrial design tools and CAD tools for products. If you look at CAD tools, they tend to be part-number based, in a way that you design your part and that part is carried forward in the bill of material. Whereas if you look at the design software for plants that was developed back in the '70s, it was all based on tags; all the equipment is tagged, you know. And so if you're trying to apply a traditional PLM system to a design tool that was developed for plants, with a data model behind it, then it's very difficult, because you're having to translate between two different systems. So what we're trying to achieve, with Aras being such a flexible PLM environment, is to continue the data model that we have in our native design tools into the PLM system, so we're not breaking the data flow, and we're not having to translate and effectively compromise on the data that's coming from the design tool in one format and then having to change it to the part-number based approach.

Maybe to complement that. I know, thanks to the sunny background, I get a little dark. In theory, the EBOM looks very much like a tag number, because you can say, okay, this is my engineering specification of a component that I'm going to develop. Unfortunately, in asset lifecycle management, this component is going to change, but the function, and the function is defined by the tag number, stays the same. It still remains a pump. It still remains a tank. But the total physical solution can be completely different over time when you do your maintenance. And that makes the tag number the crucial element in asset lifecycle management. And coming back to the question, Fino: to develop a new system, I would say, don't develop systems anymore.
Develop connected environments that can communicate with other connected environments and with platforms, because the challenge we have with systems is that they, in the end, want to be the center of the world,

because everything can be done in my system, but the world is open. So we have to find ways of having open, connected systems. Aras is a good example of that. I mean, the big challenge I see is always: where is the business model for those software vendors?

Yeah, that's really the problem: if you open up, nobody's going to pay for you.

Murali Mohan Srinivas had a sort of two-part question. He was saying, which comes back to what you're saying, Jos: why don't the big three, you know, Siemens, PTC, DS, have very specific asset-centric solutions, even if the revenue stream would be longer and bigger, because we're talking about projects that last decades? And then, specifically for Jos, he wanted to know if you think that asset definition is highly vertical and specific, and therefore a generic approach like PLM wouldn't fit when you come to asset lifecycle management. I guess, how do you boil the ocean when all of these things are very specific to building a nuclear power plant or a ship or an airplane? And how do you get something generic enough that you can build a system where one size fits all with a little bit of customization, which is the whole promise of SaaS-based cloud PLM that we've been talking about for the last 15 years?

Exactly. We are continuously trapped in the discussion of: is PLM a system or an infrastructure? If we talk about systems, then we have vendors, and if we talk about infrastructure, then it's about connected environments. And coming back to the first part of the question, why are vendors not interested? I know both Siemens and Dassault are quite active in the nuclear industry on the PLM side. I mean, they have templates, they have ways of working. But I think, especially on the process development side, it's almost not visible. So nuclear is for sure a place where it happens.
Well, I think that my answer to Murali would also be that PTC sees it as well, otherwise they wouldn't have bought ServiceMax. I mean, that was the reason, one of the reasons, they bought it, right: this aftersales stuff. And they also have ThingWorx, which is

the biggest IoT platform. So I don't think they're ignoring it. It's just that, for a lot of them, it's such a different look to their solution that they can't really... everybody thinks, when they think of airplanes, they think of Siemens and DS, and when they think of motorcycles, they think of PTC. When they say nuclear power plant, they don't think PTC, DS, Siemens; they think of AVEVA and Hexagon, right? I mean, that's sort of the...

Back to PTC: the first question you have to ask yourself is, where is the functional and the logical definition in the infrastructure? Right. And if it's not inside your PLM infrastructure, you're having only part of the solution.

Absolutely. I mean, it's challenging, isn't it? If you think about all these different products that you could possibly have a system solution offering for, they all function differently, even though they might be similar. Look at car companies: if you look at their tech stack to manage the lifecycle from ideation all the way through to manufacturing, each of them has 10 different systems to do that. They don't have one PLM system to execute that. And they're all different. No one has the same setup. And so, you know, to try and have one solution that covers all of that for multiple companies, I think, is unrealistic. These system vendors, they'll have an algorithm that kind of says, you know, what's the effort required to create this capability versus the return on investment, and that's ultimately going to affect how far they lean into this.

I think the future terminology should also be nearest source of truth and single source of change. I mean, this is the paradigm that we are working on. That was before AI came into the picture. And this is what I will also talk about in Jerez

next week, because AI is definitely going to change the picture. You don't need to have this connected digital thread anymore.

Is that right? Because surely AI has to source that information from somewhere. So you need to have the data and the connected enterprise in order for AI to leverage that information.

I'm not sure AI will do the connecting itself. The large language model can create the connections that you haven't connected in your digital thread traditionally.

Which is a good lead into the next question. Sorry, there is a good question, which is exactly to this point. Sanjay Talakar says to Valentina: the elephant in the room is interoperability and scale, which is what we were just talking about. So what is used by AVEVA for semantic and syntactical interoperability? You know, how is AVEVA connecting to so many systems in order to relieve Rob's stress over this digital thread and connectivity?

Exactly. So our semantic model, you know, is built into our database from the design tools all the way to the operational tools, and all the way to our PI systems, which are operational, data-gathering type tools; they're all carrying the same tag-based DNA. And the way we're plugging Aras Innovator into it follows exactly the same approach. So we're able to trace that tag from when it started its life in the design database all the way to when it's in what we call the Asset Information Management tool, where you can visualize what your tag is doing and where it is in the plant. And now all your sensors gathering the data from operations, monitoring and doing predictive maintenance, they're also linked to the same tag. So you're able to trace the full life from start to finish on the same data model. And I think that's where the scale comes in as well, because if you think of a part number, when you're trying to build relationships between part numbers, you may end up with 40 different part numbers

describing the same tag. And the minute you amplify that from, say, 1,000 parts in a car to 6 million parts in a naval ship or a nuclear plant, you now have millions, growing exponentially, right? So that's where the scale comes into play as well: that data model is very efficient. It's effectively carrying the minimal amount of information in a very effective way. I'm not sure if I described it well, but that's kind of what we're trying to achieve.

I think both of you mentioned IoT. This raises another important question in my mind. We all know that it's important to have good data; that's the basis, the backbone of your product lifecycle management strategy. When you're in an engineering environment and everything's in PLM, that's easy. Well, it should be easy in theory, until engineers get involved. But when you get into assets, these are real-world conditions. So maybe nuclear plants and aerospace are more robust, et cetera. But when you start talking about products that are perhaps more simple, especially in defense where you're talking about ground vehicles or even weapons, you don't even know where they are a lot of the time. You know, they're hard to track down. And I know there are some cases where you have ground vehicles where they've snapped the aerial off because it was getting in the way of their line of sight, you know, or their line of fire. And so your products in reality are very different, sorry, to what you might be tracking digitally. And I think that's another big challenge. How do you reconcile that, especially when the people servicing these objects and these products might not have the parts to hand and will even go to the local DIY store, pick something up, and retrofit it just to get it operational again? I don't know if either of you has any thoughts on that?
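[Editor's note: the tag-based traceability Valentina describes above — one tag carrying the same "DNA" across design, procurement, and operations systems, possibly with many part numbers attached to it — can be sketched as a toy data model. This is purely an illustration under assumed names; it is not AVEVA's actual data model or API, and `TagRegistry`, `TagRecord`, and the tag/part values are all hypothetical.]

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TagRecord:
    """One fact about a tag, contributed by a single lifecycle system."""
    system: str      # e.g. "design", "procurement", "operations"
    attribute: str   # e.g. "part_number", "sensor_reading"
    value: object

class TagRegistry:
    """Toy registry: every system keys its data on the same tag,
    so a tag's full lifecycle can be traced with one lookup."""
    def __init__(self):
        self._records = defaultdict(list)

    def add(self, tag, system, attribute, value):
        self._records[tag].append(TagRecord(system, attribute, value))

    def trace(self, tag):
        """All records for a tag, in the order systems contributed them."""
        return list(self._records[tag])

    def part_numbers(self, tag):
        """Every part number ever associated with this tag -- the
        many-part-numbers-per-tag situation described in the episode."""
        return {r.value for r in self._records[tag]
                if r.attribute == "part_number"}

registry = TagRegistry()
registry.add("P-101", "design", "part_number", "PN-40021")
registry.add("P-101", "procurement", "part_number", "VENDOR-88-A")
registry.add("P-101", "operations", "sensor_reading", 74.2)
```

The point of the sketch is the scale argument made above: because every system keys on the same tag, tracing `P-101` from design to operations is one lookup rather than a join across 40 part-number aliases.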
Well, I'm just laughing when I hear asset lifecycle management and defense equipment. I mean, I never had an unhappy customer coming back with a broken tank or other equipment. It's statistics.

It's very difficult to track them individually and maintain them. That's a big difference.

It's a discipline, isn't it? You know, when you change things on a live system, do you go back to the database to maintain your digital twin and actually log that? I think if that discipline is there, then it works; otherwise you need to do the scans. I mean, that's another thing that is available: there are tools to scan your system and compare it to what it was designed as. But do you have the people, do you have the time, do you have the budget to do it?

Exactly. But back to data quality, the point you just raised, Rob: one of the things that we are doing with our systems is we have two tools built natively into what we've connected to Aras Innovator. One is information standards management, so we're actually managing the class libraries of our equipment as part of the data handover and other processes, so that it's consistent. And the other part is validation as a service, which we've built into, again, our Aras implementation, so that the data gets checked as it travels from one system to the next. It gets validated that there aren't missing parts and that the data is of high quality, so that you can actually use it.

And do you think you could do that live and ongoing? Let's say you've got something that's operating. Could you have constant checks comparing, let's say in high-volume manufacturing, for example, constantly checking between the EBOM and the MBOM and looking for misalignment? Not just an infrastructure project where it gets handed over to the operator, but something where a daily check is required.

It's definitely possible; whether that's built in today, I would have to check.

There's another interesting question from Patrick Hilberg, who asked whether we need a nuclear power plant to provide power to the AI.
And I think that was already answered by Jensen Huang in his GTC address a month and a half ago, when he said that every factory where we build a product will have an AI factory, and those AI factories will have modular nuclear plants, the ones you were talking about earlier, to power them. So I think the answer is yes. I mean, the constraint Jensen said he had was not technology anymore, it's power. He just doesn't have the power; you know, if you put in 100 of these insane Blackwell chips, the new chips that he made, you just can't do it. There was another question...

On that, Michael, one comment also: I'm an optimist in technology. The amount of power that we throw into AI at the moment is not necessarily the amount of power we will need to throw at it in the future.

True. Our first products were always very inefficient.

You're saying that we shouldn't be generating those images of ourselves as Barbie figures.

There's another good one, and this is a good one for everybody, I think. Christian Nadinger is asking: in the automotive industry, we're now talking a lot about software-defined products, right? That's a big thing, SDVs. So what about asset lifecycle management? What about these kinds of products? Do we have that same kind of paradigm of fast software iteration to inform this stuff? Or is it so slow that that approach doesn't really apply?

Well, my experience in automotive is... You know, what Christian is saying is not new in the automotive industry. There are companies that have been looking at connected vehicles for a long time. So on one level, it's just making sure the products work in the field, you know, and that they've got the latest level of software and that that's not going to crash the vehicle. And then you've got other
things, which is bringing new services online that enable that vehicle to connect to other services and products, whether it's navigation or the fact that you can drive into a car park without having to put in your credit card, because the car park recognizes the vehicle, because the vehicle's got a relationship with the parking service provider. There are those kinds of things as well. So I remember a time in automotive when it was very much fire and forget: you'd make the car and sell it, and that was it. The people doing the aftermarket

would work out how to service the car by looking in a paper catalog of parts. And now the companies are much more interested in connected vehicles, especially when you get into the commercial vehicle space, where you're actually providing vehicle uptime as a service. So if people are buying fleets of commercial vehicles, how can you guarantee the customer that each vehicle will be on the road for the majority of the time? A very, very interesting space.

And maybe to complement this discussion: it's not about tracing the individual vehicle like we often do in asset lifecycle management; often it's a batch of products with the same configuration characteristics where you can have connected vehicles. I remember my old 2014 Nissan Leaf. I mean, it doesn't work with 2G anymore, but it was in this category of vehicles produced at that time where the software could be upgraded.

It's interesting, because I don't know if that's how they do the updates on iPhones, but I know, for example, that if you've got your phone connected to the vehicle, then your phone is the personalized key, and then you could even buy services like the heated steering wheel over winter, so you don't have to have it the whole time. One partner can buy the other partner an upgrade for winter if they get cold hands. So it can be very vehicle- or person-specific.

Even if you need to have a heated steering wheel installed in order to implement it?

Yeah, well, see, that's the other thing: the automotive companies are giving them away with the vehicles on the basis that they might not even be switched on, but there's potential to switch them on.

Okay. So I've got two more questions. Well, I don't have any more questions from our queue from today's call, but we have two that we forgot to ask on the first call. The first one is a question that you already answered, Rob, but I just wanted to pitch the question and you can give your answer again.
So Frederick Hattier of Kona said: how suitable is digital thread as a service for machinery equipment, which is what we're talking about, right? Going to a layered architecture, so a core data layer, a data layer, a composable app layer, a UI, instead of COTS,

or a custom off-the-shelf product, or a composable app like a custom build?

Yeah. I mean, I think I answered some of it in the chat, but really, ultimately, it depends. No one should be doing digital thread just for the sake of it. There has to be business value behind it: you know, what's the intent, what's it going to deliver? And I think that's my approach to digital thread, really, to say: what's the business value you want to achieve? How can we prove that out? Let's create it. You know, we don't have to do a huge system implementation to try it out. How can we do that today with people, Excel, whatever it takes, just to connect systems together, present the information, see if people make business decisions differently? Does it give us the information that we're expecting? Is the data quality not good enough? Maybe the issue is fixing the data quality to start with and making that reliable. And then once it's up and running and proving out the value, then you can say, right, how do we do this sustainably in the long term? And what does that look like in the context of the larger strategy? I know that would be my answer.

That's good. Valentina, you're nodding, or at least smiling. I don't know. Do you agree?

Definitely. That sounds perfect. Yes.

And then there was a question that was already answered, but that is good to mention, from Steve Klein, where he said: how do we realize a highly structured, event-driven, AI-enabled environment that ensures ontological consistency among the systems, semantic interoperability for intelligent data queries, which I think Valentina was already addressing, kinetic tracking of real-time product and process behaviors, and advanced analytics for AI-powered decision-making, and the loopback and all that stuff?

Yeah, I have known Steve for a long time already, and we have been discussing those types of questions a lot. I think it's a very advanced concept, and I would say from

the IT perspective or the architecture perspective, you could think it's the ideal solution. And maybe you've also read the recent post from Benedict Smith talking about, what is it, robust intelligence or robust reasoning. And I think this is the ultimate dream, if we were not people. The challenge is that we are people, and we have people that we have to energize to work with us. We have to find budgets that everyone believes in, believes that this work is going to pay off. And that's where I see the challenge in the field all the time: how do you, as a company, get people enthusiastic to move in that direction? Because everyone thinks about their personal comfort zone, and only when the inspiration is big enough will they move. So it will be done in small steps. And in particular, it will be done, I would say, in parallel; there will not be any more big migrations in a company. Companies will evolve by learning and expanding their digital and advanced skills. That's what I believe. No more Big Bang deployments.

That's probably a relief. I think the time of Big Bang is really over. Do you think, Jos... I was reading about some PLM system that was written natively on a semantic model, different to, you know, the last 20 years of PLM development. So do you think there will be new companies spinning up that will design, effectively, PLM for AI as a native kind of system?

I think there is a little bit of a contradiction in PLM for AI. I mean, for me, AI is a layer on top of everything, and PLM is not necessarily the system. So in that sense there won't be a PLM in the future; there will be layers of AI. That's what you see a lot in this microservices discussion at the moment: as long as you have open and accessible systems, then the AI can generate business logic or business scenarios on top of that. Several people are hinting in that direction as the future, rather than building a PLM system. That's my view.

I think what we'll find is that, in the same way that people came along and said PLM is wonderful and can fix all your problems, there's a lot of that being said about AI and AI agents. I know the work that my company used to do, and I know that AI agents would be able to take some of that on. The question is, to do that, you need a tech stack, you need, you know, technical people to help stand it up. You need the models, you need the logic behind how these are to operate. You need the quality data, et cetera. So I think there's probably just as much effort required to get AI working with the information and doing what you need it to do as there is with a PLM implementation. You know, well, let's see.

We also need the guardrails, because we don't really have a robust way of figuring out what went wrong, right? I mean, do you back up the entire 32 billion parameters plus the six weeks of prompts that you were writing in order to train the thing?

Exactly. But I'm always thinking back: how did we do it before there was IT? We had human brains. Intelligence, but not artificial.

That's an interesting point, because no one ever goes back and says, if we took the system out, could we still do this? Or how would we do it knowing what we now know about the system? Could we solve it without the system, or with Excel, or with a simpler PLM? Everyone seems to want to go more complicated and further. You know, for Patrick online, there are probably some interesting academic thought studies that could be done around this.

So I've got a final question; we're at about seven minutes to the end, so I think that gives us about time to do one more turn around the table here. And there's a great question from Murli Mohan Srinivas again. He said, for everybody: how soon will GenAI and large knowledge models, or large language models, I think he meant, influence companies managing their assets?
I mean, how far away are we from GenAI and LLMs having a direct impact on asset lifecycle management, which is the subject of our call? So who wants to field that one first?

I have limited knowledge of that, so I'll go first, because I know you'll probably say a bit more. But I think the problem we're facing concerns the people who have the data to train the models on: to train a good model, you need a lot of data. So you need people to get together to get that data in one place, or maybe not in one place, but to use all of it, right? And the unwillingness to do that is probably the barrier that we have. So a lot of the AI that people are using at the moment, the Microsoft and AWS offerings, are the more generically trained models that aren't using the data from the industry, from those real assets, right? So I think that's where the limitation is at the moment. If there was a way of getting a lot of use cases from the real usage, that would be the game changer. That would be the real step forward.

I'll give you a concrete example. There's a wind turbine company. They've got lots and lots of assets in different shapes and sizes, and they're operating. Some of them are near the end of life. Some of them have just been stood up, and they've got, you know, ones in development that are being produced. And, you know, that's huge amounts of data. That information is there. They've got teams of people managing, you know, serviceability, repair, et cetera. But I think that's something you could apply AI to. You've got the data history. You can look at how, for example, repairs have been ordered in the past. How long did it take for parts to become available? What was the downtime? And I think then, if you can get AI watching how people are working and operating and solving these challenges in the real world, then I think you could very quickly create at least a parallel team of AI bots, et cetera, that help give the real-world users answers before they even ask the question. I think there are companies doing that today in some form or another. And I think

You know, it's probably only five years away for companies that really want it; if they started today, they could have that up and operating within five years.

Yeah. And maybe the word AI is a buzzword, because we have been modeling a lot already over the last 20, 30 years and getting better and better at that. And as you mentioned, improving performance is probably the most applicable use of AI, because that is predictable, can be standardized, and is comparable. The part where we still see challenges is in innovation. But if you then ask yourself, will AI help in making decisions? I think it already does. I mean, maybe we don't call it AI yet, because we call it algorithms or reports.

Just to give you another example, and it's a lifecycle example: I don't know if you know of a company called HelloFresh, but they do meal kits that you can order to your house. I know they use a lot of AI already, but imagine: they've got all of the data about what people are cooking, what they are ordering, what the preferred dishes are. You've got all the information about the supply chain, which vegetables are seasonal, et cetera. So it won't be long before you could even use AI to create the recipes that are then offered to people to order. And then it could almost be, you know, human hands off, assuming that AI can get good at creating recipes. Because I know that, in the same way it's not great at doing jokes, it's not great at creating recipes.

Well, it's not great at chopping vegetables yet either. We're going to have to cut it short there, because we're at 59 minutes. I wanted to just say thank you to the three of you. It's been a really entertaining discussion. I personally learned a lot. Um, it was really cool to revisit tags and see all these different aspects.
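[Editor's note: Rob's wind-turbine idea earlier in the discussion — mining the repair history for parts lead times and downtime to get ahead of future failures — could start as simply as the sketch below. Everything here is a hypothetical illustration with made-up failure modes and numbers, not any vendor's product or a real model.]

```python
from statistics import mean

# Hypothetical repair history: (failure_mode, parts_lead_days, downtime_days)
history = [
    ("gearbox", 21, 30), ("gearbox", 35, 44),
    ("blade", 7, 10), ("blade", 9, 12), ("blade", 8, 11),
]

def summarize(history):
    """Average parts lead time and downtime per failure mode."""
    by_mode = {}
    for mode, lead, down in history:
        by_mode.setdefault(mode, []).append((lead, down))
    return {mode: (mean(l for l, _ in rows), mean(d for _, d in rows))
            for mode, rows in by_mode.items()}

def prestock_candidates(history, lead_threshold_days=14):
    """Failure modes whose average parts lead time exceeds the threshold --
    candidates for pre-stocking spares to cut future downtime."""
    return sorted(mode for mode, (lead, _) in summarize(history).items()
                  if lead > lead_threshold_days)

# With the sample history, only gearbox repairs have long parts lead times,
# so they would be flagged for pre-stocking.
flagged = prestock_candidates(history)  # ["gearbox"]
```

A real system would learn from far richer signals (work orders, sensor data, technician notes), but even this descriptive baseline answers Rob's questions — how long did parts take, what was the downtime — before any AI is involved.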
I think, um, for the next call, since a lot of us are going to be at SharePLM, maybe we can talk about our impressions of SharePLM and the way forward from there, because I think it'll be a pretty interesting inflection point for PLM in Europe. It'll be one of the first non-vendor-associated events; it's completely unaffiliated with any structure or vertical. It's just PLM, a neutral site.

So once again, thank you, everybody, and we'll see you in a couple of weeks. We'll announce the next date, you know, in the next week or so. It's been fantastic. Thank you. And I hope, again, if you guys continue to add questions, we will answer them next time as we did today. Thank you for tuning in, and thanks for the questions. I love it.

Thanks for the invitation, Michael. And thank you, Valentina and Jos and Rob, for joining. It was fantastic.

Pleasure. Bye bye.

All right. Take care.
