🔮 Future of PLM · Episode 4

Analyst Panel: Where Is PLM Headed?

Michael Finocchiaro · 51 min read
Guests: Industry Analyst Panel

Episode Summary

The episode titled "Analyst Panel: Where Is PLM Headed?" brings together a panel of industry analysts to discuss the future trajectory of Product Lifecycle Management (PLM) and its intersection with engineering software and AI in manufacturing. The guests include P.J. Jakovljevic from Technology Evaluation Centers, Jim Brown from Tech-Clarity, Oleg Shilovitsky from Beyond PLM, George Lawrie from Forrester, Peter Bilello from CIMdata, product data consultant Rob Ferrone, and co-host Jos Voskuil. Each guest brings unique insights into the challenges and opportunities within the PLM landscape, focusing on areas such as the role of the analyst, AI-driven analysis, and data quality.

During the discussion, several key technical and strategic insights emerged. The analysts observed that while there is significant hype around emerging technologies such as generative AI, the applications making a tangible difference today are often practical ones: consolidating knowledge, automating mundane and repetitive tasks, and accelerating research. They also emphasized the importance of data quality and of leveraging data from various sources to make informed decisions, noting that large language models are only as good as the data brought to them. Peter Bilello described how CIMdata maintains an internal large language model of its own reports, curated so the firm can augment, rather than replace, its analysts' work.

For PLM and engineering professionals, the key takeaway is the need to stay adaptable and open to leveraging advanced technologies like AI and large language models, while focusing on practical applications that can drive immediate value. The panelists underscored the importance of integrating diverse data sources, understanding data quality before applying new technology, and fostering a culture where innovation is continuously evaluated for its potential impact on business processes and outcomes.


Full Transcript

Speaker 1

Hello, this is Michael Finocchiaro coming live from the Future of PLM podcast. I'm extremely happy to have this amazing panel of analysts today from Technology Evaluation Centers, from Tech-Clarity, from CIMdata, from Forrester, and of course, Oleg from Beyond PLM. I just think it's really exciting. Why don't we go around the horn and you can introduce yourselves and your particular consulting company. And thank you, Jos, for co-hosting with me. Why don't you start, PJ?

So Rob's a little late joining, but Rob's there. We're going around the table now. Go ahead, PJ. Okay, it's still with me. Well, I'm PJ. For those who are curious about what PJ stands for, it's Predrag Jakovljevic, Serbian. And yes, as you said, I'm with the company Technology Evaluation Centers, or TEC, based in Montreal. I've been covering enterprise applications since '99. The company helps customers select the best solution that they need for their business. So I think that's enough for the sake of time. Okay. Hey Jim, what does Tech-Clarity do? What do you do for Tech-Clarity? So Tech-Clarity, we're over 20 years old. We focus on the business value of technology. So we definitely focus on PLM, but also on all things manufacturing, pretty much everything along the product digital thread. I think what makes us a little bit different is we come at it from the enterprise in. Most of us have engineering backgrounds, but we've also got supply chain, ERP, enterprise systems backgrounds. So we really look at that.

Speaker 5

We look at the full context of PLM. Jos, I think you might be in charge now. Yeah, well, I'm a PLM coach and a semi-analyst when I'm working in a very small scope with companies. Sometimes I also have to review some of the potential vendors, and I've been doing this for more than 25 years now, helping companies translate business needs into technology, or the other way around.

Speaker 6

And then it's Oleg. It's me, right? So good morning, good afternoon, everyone. My name is Oleg Shilovitsky. I've been writing the Beyond PLM blog for the last 15 years, in parallel developing different technologies and products, working for Dassault Systèmes and for Autodesk, and currently OpenBOM. So I think that's it. Thank you. Peter? Peter Bilello, CIMdata's president and CEO. CIMdata was formed in 1983 because IBM came to a couple of individuals and said, can you measure the size of the CIM market, the Computer Integrated Manufacturing market? And from there it evolved to what we do today. Part of our work is pure analyst work; we're still measuring the market, now the greater PLM market globally. But actually, more of our work nowadays is around education, and a significant part of our work is consulting: delivering strategic management consulting services to solution providers, investors, and industrial organizations.

Speaker 1

Great. And George? I'm George Lawrie, one of the analysts at Forrester. Forrester, as many of you will know, has a mission really to help C-level executives in global billion-dollar organizations make decisions about changes in technology. I work in the advanced manufacturing team, and my particular focus is on discrete manufacturers. So my job is to help our clients in automotive, aerospace and defense, medical devices, and high tech. For the last five years or so, I've been looking at PLM as the foundation for their pivot to smart manufacturing. Awesome. Rob, you came a little late, but why don't you introduce yourself? Yeah, hi. So my name is Rob Ferrone and I'm a product data plumber. Like many of you, I'm obviously in the PLM space and I'm system agnostic, but unlike many of you, my focus is actually less on the system side and more on process, data, and specifically the people as well. So yeah, good to be here.

Anyone else? Okay, sorry, my apologies for the audio quality on my end; I'm in Berlin and my network connection is pretty poor here. So I first want to ask: I'm not sure that everybody listening to this broadcast knows what analysts do. We see your reports, we read what you say, we know you have an impact on the market. But I wanted to go around the horn to understand a bit of what an analyst actually does. You probably have an engineering or technical background, and then there's a lot of number crunching in terms of the market, but there's probably also understanding customer problems and talking to customers. So maybe we start with George on this one. Okay, thanks. So what we try to do is to be driven by our clients. Our clients have an agenda, and we publish in advance; you can go onto our website and see, for each analyst, on a rolling 12-month basis, what they're planning to do month by month. And that's driven by the kind of decisions that our clients are trying to take. What are we doing in that research?

Speaker 7

Well, we're consulting with people who've already been through a change. We're consulting with systems integrators who are helping the change to happen. And also with vendors to understand how they're incorporating new technologies in their platforms and what they do to help people in knitting together a portfolio that's really going to help them manage the change at hand. Does that make sense?

Speaker 3

We certainly use those types of mechanisms as well. But as I mentioned earlier, we were founded to measure the market, so we do a lot of market measurement. We still publish a whole set of PLM market analysis reports, simulation market analysis reports, and so on. In that work, which I would call the analyst work, we're trying to publish what's going on in the industry, where the money was spent over the last year, and then talk about where we expect the market to go. So definitely similar to Forrester and other leading analyst firms out there. As I said, that's only part of the work we're doing, but I think the major thing we add to the overall market is the market numbers, where there are not as many sources; more used to be published by others. CIMdata seems to be one of the few left that really publishes the full breadth of the global market, where it's at and where it's going from a numbers perspective. Yeah.

Correct. Yeah. Well, you know, as we've defined it, PLM is a strategic business approach from concept through life, so everything that touches that: the technologies would be cPDm technologies, PDM technologies, CAD/CAM technologies, simulation technologies, visualization technologies. So it's pretty broad, but it's PLM. It's not the broad IT landscape that a Forrester or some others definitely cover.

Speaker 5

Yeah, I think it's different. It's interesting. I mean, there are obviously some commonalities as well, like with CIMdata. We're really only focused on the manufacturing industries, with a little bit of AEC. We start at product innovation and work down through manufacturing execution systems as well, so systems in the plant. But I think every analyst does something a little bit different. I mentioned earlier, we focus on the business value of technology. We're not necessarily comparing vendor offerings; we're trying to help educate the market on why it makes sense to make those changes George talks about. What's the value behind them? What are the drivers? And I think what makes analysts unique is, looking around this group, there's a bunch of smart people, but we also just have a different perspective. We get to talk to a lot more people, and in our case, we have a lot of survey data that we gather. So we just have a different view on the world, maybe a broader view than people in an individual manufacturer or vendor sometimes have. So I think it's nice to have analysts of all different flavors to share that information, and it's kind of cool that we have lots of different flavors represented today. How about you, PJ, before I go to Oleg, who kind of sits in both chairs at the same time? PJ, Technology Evaluation Centers, I think you guys have an even different approach than what we've seen so far from the other three, right? Yes. For example, we don't do any market sizing like what CIMdata or Gartner or IDC or whoever else does. Where we differ, and I'm sure lots of other people help companies select, is that we have our own decision-making sourcing software called TEC Advisor, which used to be called Ergo. That's based on...

Speaker 6

multi-attribute utility theory. I won't bore you now with that. In any case, the easiest way to describe it is that we are sort of like a matchmaking site. We are finding proper solutions and vendors for the appropriate customers. The customers can do that online; we give them temporary access, or they can use our consulting services like traditional on-site consulting services and all that kind of stuff. And of course, we create value for the end customers. They don't need to create RFIs from scratch, because we have those evaluation centers and knowledge bases, and they can always add something that's special for them if some requirement is not already there. On the other hand, how do we help vendors? We can tell them who's looking for what. Of course, every vendor wants leads. And we can also do some kind of debriefing: we can tell the vendors why they lost some deals, or give them some tips on what they should improve in their offering and all that kind of stuff. I think that's maybe a little bit different. Probably someone else is doing that, but I'm not really aware of any tools or any proprietary software that they're using for that, which we have. But you're also doing a lot of analysis. You do publish quite a lot of really good analytics on what's happening in the market.

Yes, yes, of course, that's the part that we are doing like all other analysts, but that is not in any case revenue generating. How we make money is by helping the end users and also providing some competitive intelligence, and leads, to vendors.

Speaker 1

How about you, Oleg? Maybe your perspective is more as a blogger and a vendor at the same time. How does that work for you? How do you balance being the OpenBOM guy and also the Beyond PLM guy? That's a great question. Thank you, Michael. So there's a little bit of a story here, which goes back 17 years, that I believe you remember as well, because we were working together in the same company, SmarTeam. The story is that I found there was a real lack of information online for someone who is interested in different PLM-related topics. It can be technological topics, it can be best practices, it can be different topics related to PLM. And I just found that it's really not normal that people are writing blogs about everything but not about PLM. I think that was the start of the idea that I could share experience and share knowledge, and do it in a way that doesn't necessarily reveal the secrets of the companies that I'm working for. I remember the first question from the Dassault official: are you going to share all our secrets on the blog? I said, well, there are other, better ways to do it if I want to share a secret, ways not associated with my name, so let's not talk about this. Since then, Beyond PLM survived Dassault, survived the first company that I co-founded, survived Autodesk, and is successfully surviving OpenBOM as well. When I say surviving, it's because it's really a time allocation that you need to make to keep blogging and sharing information, and then obviously people want to talk to you, and obviously companies and everyone have questions. So it's an informational resource. I'm not trying...

Speaker 4

I think I should first thank you for inviting me as an analyst, because I don't think I am doing any analytical work. But I am sharing experience, sharing technology knowledge, and sharing knowledge about how to develop different PLM technologies. I think that's the value proposition for everyone who comes to Beyond PLM and gets exposed to this information. I think we're sort of teaching AI together now. At least, I think Jos and I have had this conversation that we're writing for ChatGPT, and then ChatGPT can read it and explain it to others, maybe even in a better way. So that's my story. I don't want to dominate all the questions. Jos or Rob, you can jump in. Rob, you look like you're chewing on a question there. We're talking now a lot about AI, and yeah, it's interesting: we share the same blogging history, where I was more the guy in the field observing resistance, high expectations, the people side, and I had to explain that there is technology to help. And Jim, you also said that as analysts, we are the ones collecting information and sharing it. That's also very much what we do with our blogs. But now, thinking about AI and the business model of an analyst...

Speaker 6

How will it impact your business model in the future? As a short anecdote: more than a year ago, I started to work with a company on their PLM infrastructure, and based on their needs, we selected four potential vendors for the RFI phase. Then I asked ChatGPT which vendor it would recommend, giving it the characteristics of the company. It came back with a list, one, two, three, four, and I kept it secret. Now, a few months ago, we selected the final vendor. It was exactly the ChatGPT list. So a lot of the knowledge that you all spread on the web is now consumed by ChatGPT and AI. How do you differentiate, or do you keep on publishing to keep your opinion out there? I'm curious about the future. Who wants to start on that one? Yeah, I'll take a run at it. The large language model continues to get larger; there's no doubt about it, as more experience, more failures, more successes drive the refinement of capabilities, and of course the extension or expansion of capabilities. If this were a market that was standing still, where everyone was implemented already, the technologies weren't changing that much, and there were no problems, well, then the large language model... It's historically accurate, but does it predict the future? And this is if we're just talking about generative AI; there's other AI, of course, as we know. So the way I look at it is: AI can certainly handle the mundane task of "what are people saying," no problem. I think Gemini and ChatGPT and all those other ones out there can bring that in. Now, I've found, going to look at it sometimes to see what it's saying...

Speaker 3

Sometimes it's accurate, sometimes it's not accurate, because it depends, obviously, on what's in that large language model. And that's the other thing: the internet is full of information that is not necessarily complete or accurate. So we're going to have to help people evaluate even the answers that are coming out of the AI, to some extent. But we're very much building the large language model ourselves. One of the things that CIMdata has internally is a large language model of all our reports, so that we can curate it and go look at what we have said for the last 20, 30 years or so. And I'm sure we'll make that available; some of you have made some of that stuff available, which is helpful. I've always looked at it this way: when a new technology comes out, you're always better off putting yourself out of business than having someone else put you out of business, and AI is an example. It can augment what we do. It can augment what we've done for our clients. And of course it's better for us to change how we're doing things to take advantage of it, and also to allow our clients to take advantage of it. That will shift, and has shifted, some of our work slightly, as we look at tasks and realize we don't need to go do that administrative task, so to speak, anymore. That doesn't make any sense; there's other value. When I joined the company 28 years ago, we were doing a lot of reporting comparing technologies. We did a big PDM guide, a CAD/CAM guide, and so on, but those things went well out of date against freely available information on the internet, so we haven't published those in 15, 20 years, probably. Again, we've been around for 40 years; we've had to make changes like that as other information has become freely available or become a commodity. You move on to other things that add additional value.
And we're going to have to pay attention to AI continually, because it's rapidly increasing the value it can provide, and we need to make sure we continue to add value on top of that. At the end of the day, it's about experience. Hindsight is important, but it's the looking forward, how companies have to look forward, where AI is difficult at the current time, because it's more about looking at the past.

Speaker 6

Maybe a short point. I think I'm fully aligned with Peter when he says that the large language model mostly consolidates existing knowledge. The challenge I always see with companies is that they say: you're talking too much about the future, tell us what we have to do now. They are still also very conservative. I'd like to piggyback a little bit on what Peter said, because I totally agree. I think when Google came out, and really the internet and search engines in general, everybody was like, well, why do we need analysts? Because a big job of the analyst firms, when I was on the other side of the fence, was helping me find out who the players in PDM were, or who the players with CAD solutions were. And that became totally obsolete. I think there will be things that we do that become obsolete, which will allow us to do more, right? And so internally, AI is making us more efficient, which is fantastic. When we do our research, we can consolidate things from the outside and learn that much faster. So what is going to be important for analysts is to be more analytical and take it up a notch, take it up to the level of: what does it all really mean? Not just gathering data, not just gathering facts. What does it mean? What do you do with it? How can it drive business value for your company, or help you achieve faster time to market, or whatever your goals are? It's that higher level, I think, that's going to be important, not just providing the basic facts and the basic data. And I think LLMs will do the latter. Unfortunately, they'll probably do it at about 80 to 90% accuracy, and sometimes only 100% is useful. We find that a lot with people writing eBooks and white papers with AI: if you don't have somebody who really knows the subject matter and goes back through it and checks it, there are sometimes some pretty funky things in there that can slide under the radar.
So you still need people that understand.

Speaker 1

Rob, you had a question or clarification? Jim, I think you're right. It's about domain expertise to constrain the results that you're getting back from AI. We conduct hundreds of surveys and hundreds of interviews; there's no way you can absorb all of those. So AI can help you do that, and it can also help our clients get more direct access to it. As somebody else already said, all the research we've ever done is available to Izola, which is our AI agent. That helps them, but you need to apply judgment to it as well, to constrain the results that you get back. So I think there's probably still a role for an analyst, for a couple more years anyway. Rob, you had a question. Yeah, it's actually building on something Jos brought up: the idea of future-focused PLM versus the PLM of today. I'm interested in the target audience that the analysts are speaking to. For example, some people might actually be the thought leaders in this space, in aerospace and defense or automotive, or wherever you think it is. But then you've got other companies who might just be starting out on their PLM journey. So how do you cover both those bases?

Speaker 2

at the same time, when you talk to some companies who want to create new capabilities versus people who want the most simple capabilities. Is that a good one for you, PJ? Oh boy. I'm not sure we are getting that sophisticated yet, or at least not talking to clients that sophisticated at the moment, because I think right now everyone is still experimenting, learning to crawl before they can even walk, let alone run, with AI. Also, what I'm hearing from some experts is that even in the largest corporations, the focus at the moment in using AI is mostly: how do we cut costs? It's not even necessarily headcount, but, for example: can we get rid of or reduce expensive legal departments, because those LLMs can certainly suck in all the regulatory data and help us in that regard instead of expensive attorneys? So to cut the story short, I think we are still at an early stage of figuring out what AI can help with, other than maybe automating some really mundane and repetitive tasks.

Speaker 6

I heard Jim saying that LLMs can be 90% accurate or even more. As a matter of fact, yesterday I attended a data- and AI-focused event where an expert said that reaching maybe 75% accuracy is sort of possible; everything beyond that requires lots of PhDs, data scientists, and so on, and every additional percentage point of accuracy is much more difficult than the previous one. I think we are still far away from AI replacing humans or eliminating humans in the loop and all that kind of stuff. So as you can see, I'm a little bit skeptical, and I still think we are in a bit of an AI bubble at the moment. I think Rob's question was more about... Go ahead, Rob. Go ahead, Jim. I think it's a maturity thing, and I think AI is a great example of that. But overall, when you're looking at the future versus where people are today, the future is unevenly distributed. There are companies doing things that look like lights-out plants: they make an engineering change and it hits their plant. For most of the companies you talk to, that's science fiction.

Speaker 5

Right. They can't even imagine their plants looking like that. They're just trying to get their data under control. So I think you have to look at everybody along the maturity curve, and then look at that maturity curve across their different capabilities. And Peter, when you go in and do consulting with people, you may find somebody that's got one thing nailed, but they've got a whole other area that's really deficient. So it's that kind of thing. I don't think AI is necessarily going to help with that; or maybe it'll help with the benchmark, to help them understand where they are. But then, how do they improve their maturity in different capabilities? What are the most important capabilities for them to improve? And Jos, you mentioned this, your AI coming up with the right list of vendors given the right requirements. But how do you even start with: do they have the right business priority for technology to help? That, to me, is a place where a group like this can still add a lot of value. I think we've got one. Let me jump on something Jim just said there.

Speaker 6

Yeah, but I was just confirming to Jim that it's my job also to set up the right requirements and work with the company. So the requirements were perfect; that's all that matters. Yeah. I want to jump on something that Jim mentioned about the organization of data, because what's been happening is very interesting to see. Certainly AI has reached a level of maturity, and at this moment we've seen an application like ChatGPT that opened adoption to everyone, with use cases we haven't seen before. However, it's just a technology. And it's interesting: if I look back 25 years ago, and 15 years ago, and now, we are trying to solve the same problem, and different technologies were introduced for the same purpose. If you go back to when SQL servers and SQL technology were introduced, it was in order to organize data. Just a reminder, you can find those reports: it was in order to organize data...

Speaker 4

and to help search through a proprietary chunk of data: just bring it into SQL and the problem is solved. Then, about 10 years forward, we got different search technology, obviously very inspired by Google and everything else, and by some open-source libraries. And everyone said, okay, let's develop search. Again, it was very interesting to see that the same search technology was very useful for the same purpose: organize data, get access to data, just like you mentioned, Jim. Now, I will not mention one of the AI startups currently developing this, because I don't want to make any promotions, but they are trying to get access to information, and one of their use cases is: find a part, very easily. So as you can see, multiple pieces of technology over the last 30 years were introduced to solve the same problem. So I think AI, in the way we see it, maybe again needs to be clarified as a more specific type of technology: the technology of large language models. I think what it has opened in front of analysts, or at least in front of myself, is an opportunity to aggregate amounts of information, drive conclusions, and analyze this information in a certain way that was not possible until now. If I think about 10 years ago: okay, I need to analyze this information; it takes time, it takes reports, it takes everything. I organize this data so I can draw the conclusion, and that was a key part of the analyst's work, at least part of it. Today, we can bring this data to different large language models that can just do the job and help us draw conclusions. And it depends on the data that we bring in: these are processing algorithms, like databases. The large language model, in this case, is a processing algorithm that can be guided by the data that we bring.
So I think it's up to analysts, and everyone else, to bring the data to the large language model to get the output.

And I think, like every technology, it will just find its path, but it will not replace people. But I think, Rob, your question was more aimed at Peter and George, right? Did you want to reframe your question, Rob? I mean, you've got people like Oleg, and even yourself, Michael, doing a lot of online publishing around the future of PLM. But as Jos mentioned, a lot of people are struggling with the reality of today, which is the basics, the ABC of PLM. So how do you address both those groups of people with the work that you're doing? Jim's answered it a little bit, I think, and maybe Peter and George, you've got something to add. Yeah, it's a great question, because there are a lot of people who are in a poor situation. We had one client who said, you know, I have different PLMs in different divisions; I'm unable to report on the state of a project to the client across divisions. So we do some research for people like that, to say: here are some things that you can do to transform the situation that you're in. But it's a good point. You always have clients on a spectrum of maturity, and with different problems. I think you're right that analysts tend to focus on the exciting new thing when perhaps they should be spending more time on the basics. Rob, did you want to put that question to me?

Speaker 2

I'll let Peter go first, but then I've got a follow-up question that is related to that. Yeah, as George said, there's definitely a spectrum. One of the things that industrial companies can't lose sight of is: how do they set themselves up for success? So yes, they have a current state. You can't go define a way of implementing that will boil the ocean, because you don't have a flame big enough to do that. But you have to take into consideration building out a foundation for the future, so that you can take advantage of additional capabilities as they become necessary, available, and useful, and so you can continue to build out and evolve your digital capabilities as and when needed. And that's where a lot of companies, I think, still fail: they don't plan the plan, and they often don't execute the plan; it's more of a hit-and-miss set of opportunities. Then they struggle with multiple systems, multiple approaches to doing things, let's say, and they don't even understand that the data is the core of their business anyway. The data is what's most important, not the systems. How do you even define data governance and other important issues to help them build out the technical environment that can continue to support them and evolve as their business evolves? And if I may, Michael, I've got a follow-up question which builds on everything that's been said. Is there an inherent bias within the PLM analyst industry, due to the fact that some people have the money to pay for your services and other people might not, and therefore you end up focusing on the type of PLM that people are paying for...

versus other companies that might not have that kind of budget? It's a great question. CIMdata has always had an attitude, in tracking the market, that we'll take a briefing no matter what. We don't charge for briefings, and we're always open to doing that. But clearly, those that brief us more, those that do more interactions with us, various things, maybe we're doing an eBook or something like that, we'll know them better. There's no doubt about that. But our intent has always been to be independent and to publish the numbers across the market. That's why we're tracking 6,700 vendors in the market. So we do our best to keep that independence, but the more people interact with us, clearly there's going to be a better understanding there. That's why we've always been open and said: if you want to brief us, we will always take a briefing. It's not something we're going to charge for, or that you have to be part of a membership for. And then it's up to them to promote themselves to the market, and hopefully that becomes as balanced as possible in the way we look at it. Same here, anyone can brief us. You don't have to be a Forrester member to brief us, so all sorts of people brief us. Who gets more attention, let's say, depends on how much effort they put into it. Some people do something beyond the briefing. Some people say, we'd like to tell you about our wins and go-lives. Actually, that's pretty interesting; if you're a vendor, that's what our clients are interested in. We'd love to know about your wins and go-lives. So some of them do that as well.

Speaker 6

And that has always been the policy with TEC and with me. I'll talk to anyone who seems to have some interesting value proposition, and as I said, if I think it's worth writing a blog post or whatever about that, I will, no charge whatsoever. You know, I was thinking about it more from the client perspective. The clients, like Peter said, at the beginning of SIMData's days, it was IBM who said, can you do this work? So, effectively, IBM was the one that set the terms for the inception of SIMData. So I just wonder if it's actually the clients with the big bucks that are steering things more. But anyway, not necessarily. I think Michael's got some questions from the chat. Yeah, there was a question about the usage of AI with legacy data, because it's nice to have shiny new systems and stuff, but we're always going to have a legacy. One would think that rather than burning cycles dealing with legacy data and formats, that would be a place to use AI, right? So anybody want to pick up on that thread? I can pick up on this, because I think this is another use case where people are looking for a magic wand: you know, just bring the entire data in and our problems will be solved immediately. I think it's a huge mistake to bring technology on top of data that can be messy and dirty. You need to understand your data first, before you bring the technology in, because otherwise the results will be unpredictable,

Speaker 4

because that would just be a large language model algorithm that translates whatever you bring in into some outcome. I think that's one of the dangerous paths that can be taken. From experience, I can say we can import any Excel file into any system, and then what? The data in Excel will be dirty. If you have duplicated part numbers, you will get duplicated part numbers after the import. The question is whether you want to import them at all. And maybe to comment on that, Oleg: also before AI, we had a lot of discussions on migrations from old systems to new systems. Even there, we already observed that the quality of the data was not the quality that was needed. So it's very risky to rely on the past if you don't understand it. Yeah, it's good. Yeah, there's a real lack in PLM traditionally of data custodians, right? The ones that are supposed to keep this thing clean. Anyway, I think, Rob, you wanted to bring another one?
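Oleg's Excel point is easy to make concrete. The sketch below shows the kind of duplicate-part-number check he implies should run before any import; the column names and sample rows are invented for illustration, and a real BOM export would of course look different:

```python
# Illustrative pre-import check: flag duplicate part numbers before
# loading a spreadsheet into a PLM system. Column names are hypothetical.
import pandas as pd

def find_duplicate_parts(df: pd.DataFrame, key: str = "part_number") -> pd.DataFrame:
    """Return every row whose key column value appears more than once."""
    return df[df.duplicated(subset=key, keep=False)].sort_values(key)

# Stand-in for pd.read_excel("bom_export.xlsx") with invented data.
bom = pd.DataFrame({
    "part_number": ["P-100", "P-200", "P-100", "P-300"],
    "description": ["Bracket", "Bolt", "Bracket, rev B", "Washer"],
})

dupes = find_duplicate_parts(bom)
print(dupes["part_number"].unique())  # ['P-100']
```

The idea is simply that the conflict surfaces before import, while a human can still decide which row is authoritative, rather than after the duplicates are already in the target system.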

Speaker 2

This is a quick one, and we could do it with a thumbs up or thumbs down. You just mentioned, or I heard, that data is very, very important. Obviously, you've got the technology, that's part of it. Data is another part of it. You've got the people and the process. All the focus is really on the tools, and that's the business of the analysts here. What percentage would you say of a PLM solution is tech, versus the other stuff? So if you think it's 50%, thumb sideways; if you think it's above 50%, thumb up; and if you think it's less than 50%, thumb down. Just a quick show across the group. So what is it again you want? Yeah, so this would be: if you said, for example, that technology is 80% of the solution for achieving PLM in its purest form, then thumb up; 50% is thumb sideways; and if you think that, for example, data and processes and people are even more important than the technology, then maybe thumb down, yeah.

Speaker 4

Thanks again. You can see those are all the way down. Love it. PJ, what about you? Well, I wouldn't go all the way down, because, of course, technology is important. Yes, you can have great people, process, whatever, but if you are working with an AS/400 and green screens, how are you going to be as innovative as your competitors? So that's where I'm sort of divided, but definitely I'm inclined to say that people and processes are more important than just the mere technology.

Rob, I think... well, first of all, many years ago someone told me: if you want to decide something, always bring an odd number of people to the decision, otherwise you won't decide. It's good that we have an odd number of panelists at least, Michael, so I think we can... until someone decides not to vote. I think here we agreed. Yeah, but I would say...

Speaker 1

I've got another question from the audience, from Rono Chauvet, a friend of mine back from my DS days, and this I think is a good question. He said, isn't there a difference in approaching AI between different organizations? You know, if you're talking to a Boeing or an Airbus compared to, say, an SMB manufacturer? Or is there a big difference between high tech, which is supposed to be bleeding edge, and, you know, say, industrial equipment or agriculture, or even process manufacturing, which traditionally has been a bit further behind the curve? I mean, how do you guys see that? Maybe we could go around the horn with that one. What do you think? I'll jump in first with just a quick comment. I think anybody that's looking to "implement AI" is making a mistake. I think people need to have business problems that are worth solving, whether it's improving productivity, or I've got a problem in a plant, or I want to make my quality, my DFM, more effective, or something like that. If you're not going at it from a business-problem perspective, then you're doing it wrong. So it's less about the size of the company and more about what you are trying to do. It's about the problem more than the AI. Well, that was a problem with PLM too, right, Jos? Yes.

I think PJ and then Oleg. Okay, so I definitely, wholeheartedly agree with Jim. And also, what does it mean to implement AI? What exactly do we mean? Some things we are already using which are AI, and we don't even consider them AI, like OCR, or, you know, digitizing our invoices and all that. That's already been used by almost everyone. So that is AI, but is it AI now? Or we have some agents that are co-working together with our people, is that AI? Or we have some predictive analytics so we can do better forecasting, is that AI? I mean, it's very fuzzy to my mind, what we mean by AI. AI has been around for a long time. I've been in this market, at least in the ERP market, for a while. Maybe 20 years ago the big deal for some ERP and even some other solutions was business activity monitoring, or BAM, which you could set up to sort of mind the store for you, to monitor some KPIs and alert you or whatever else. That has now evolved into more intelligent kinds of stuff that will not only alert you, but can maybe even suggest what to do about it. That is AI. So when you ask who's implementing AI, is it a large company or a small one, I think everyone is implementing AI, because AI is becoming trivial in one case and very complex in a more sophisticated case.

Speaker 6

I'm not sure I'm concluding anything, but as I said, it's all about what is it that you want to improve. Start from a business process, from some strategy, from some KPI. Yeah, at that event yesterday that I attended, one of the reasons cited why AI is becoming unsuccessful is people cannot determine the value: what did we achieve by using this AI? I'm happy to share the experience from my side in two perspectives. On one side, OpenBOM works with a lot of small and medium-sized companies. And the situation, from my perspective, for the moment is catastrophic, because basically what is happening is that every company or person, whoever comes to you, takes every problem they have and asks you to solve it using AI. Whatever problem they have, their question sounds like this: when is your AI available to solve this problem? It doesn't matter what the problem is, doesn't matter what they do. When you go to an ordinary SMB company, which is most of our customers, it's: can your AI solve X? You know, engineers never talk about problems; they talk about solutions. So they speak about the solution that they like: can your AI solve this? And then, okay, we got it. But then yesterday I had a meeting with a very large company, I cannot say the name, of course, but their perspective was a little more balanced, and a little bit of what you mentioned, Jim. They understand that AI is a piece in the puzzle, and there is a business strategy and there is a technology, and they need to understand their goals, and I know you speak a lot about this. So they need to decide what their goal is, what they want to achieve, and they want to understand how AI can kind of accelerate them. This is usually the perspective of the larger companies; they are more balanced. That's just what I can say from a diverse group of...

Speaker 4

of customers that I'm talking to. The application, I mean, this is any kind of technology application. People look at new, exciting technology. They want to apply it, or find where it's applicable. There's been so much press in the last couple of years about AI, so it must solve our problems. I found a new screwdriver and everything's a screw, or I have a new hammer and everything's a nail. It's typical. I mean, we've seen technology adoptions and these technology adoption curves, of course, where there's the hype and so on, as we are all very well aware. And I've not found really any differences, except that, going along with Oleg, the larger companies tend to have more people that can go and do some more strategic thinking about it. And they're more likely to put AI governance structures around it, for example, that allow them to probably ultimately take better advantage of it than a smaller company. But smaller companies are struggling the same way larger companies are: how do we differentiate ourselves? And if technology is the differentiator, that's kind of sad, because someone else can go buy the technology and probably do the same, quote, differentiation. But as we all know, it's about how you use the capabilities, not necessarily the capabilities themselves. Because you can buy a capability and not use it the way it's meant to be used. It's like, I'll use something simple, using Google Docs just like you used Word on your PC.

Speaker 3

Now, it's not the same thing. It's not the same kind of collaborative environment. You know, they're different collaborative environments. One's more synchronous; the other one is typically asynchronous. So as we work with different types of clients, different sizes, it goes back to what some of the gentlemen mentioned: where does it apply? What makes sense? But we're so early in this process of AI adoption and AI understanding, there's going to be a lot more coming out over the next couple of years. What people will still struggle with is, where do we apply it? Where does it add value? Going back to what Jim's company, Tech-Clarity, has been about: where do I add value? Because that's ultimately, hopefully, what people are considering. It's not about, I have to implement AI. We used to say a lot, back when there used to be magazines on airplanes, and there don't seem to be as many anymore, that the executives come back from flying somewhere and go, we've got to go do this, because I read about it in a magazine. And AI is all over the press, all over the media. And so: I've got to figure out AI now, because if I don't, I'll be left behind. I think there's some truth to that, by the way, but I don't think it's about just jumping in without paying attention, without some critical thinking and planning to get it right. Because if I apply AI to incomplete or inaccurate data, I've done nothing. I've caused more problems for my organization. And certainly we've seen that. There is one thing to mention about what you said about hammers.

Speaker 1

Okay, go ahead, Oleg. No, my point was that, Peter, you mentioned hammers and nails. It's one of the things that was very exciting in the way AI was presented: the large language model is a universal data hammer. So we can bring any data in, and it's partially proven true until now that any type of data can be brought in, analyzed, structured, and then reused. That's where it's becoming very interesting. I think we are not there yet, but it's certainly presented as a universal data hammer. We might see some interesting development in this space. I was going to say that there's this knee-jerk reaction, right? You've seen Klarna, which decided to fire everybody and has had to rehire already. IBM decided they didn't need an HR department and, my God, actually they do. So I think that part of that question that was coming in was also about the pressure that the boards are getting from people that are not very informed. In other words, investors that don't really know the technical stuff, and they're just being told, you've got to do AI. It's probably this panic moment: we're just going to do something, whether it's right or not. I guess that's your role as analysts, right? To refocus a customer on what problem they're really trying to solve, and what's the best way to get there that's not going to be incredibly disruptive, like firing people you're going to have to rehire anyway. I guess maybe that might be another angle to look at this one, right? Peter, you seem to be agreeing with that, or George.

Speaker 3

Yeah, I agree. George? Yeah, I'd say you're exactly right. There is a lot of pressure to do the new shiny thing. The large companies usually have a kind of strategic group, and they're able to point to them and say, yeah, we have a strategy group. They're doing some POCs. They're understanding what it really means. But the point that Jim's been making all along is, it's not the technology, it's what you're trying to achieve. And the point other people have made is that actually there's a ton of stuff you've got to do around data, but also around your own people, before you bring something in. You can't just buy something and suddenly be successful. Right. Rob, you had a... Many years ago I had a boss who once said to me, stop asking me questions and just tell me what I need to buy. That's the thing. There are some people like that still.

Speaker 2

We've got eight minutes left, and I don't know whether you want to use that time for questions in the chat, Michael, or I've got a meaty question which we could get our teeth into, perhaps. Go ahead. Go ahead, Rob. Yeah, so there's a lot of debate about monolithic versus federated PLM. You're all tech analysts, and so it's quite easy to pick or compare vendors. But if the solution is federated PLM, and the solution is a data platform of several systems connected together, how does that affect your work? Jim was nodding, maybe Jim, you've got some thoughts on this. Hahaha.

Speaker 5

I was nodding until you asked me how it affects my work. I have opinions on federated versus... One of the things I would say is that everything is going to be multiple systems. The idea of the uber-system: ERP was supposed to be that, and it never was. PLM is supposed to be that on our side of the world, from an engineering, product-development perspective. It will never be that. There will always be multiple systems. And on that note, I'm going to buy the universal data hammer as soon as it's out, so put me down as the beta customer for that. One of the things that's going to be the most important is organizing data and organizing processes and flows across multiple systems. It will always be that way. There's never going to be one digital thread system, right, that gives you your whole digital twin. It's always going to be bits and pieces from somewhere else, whether it's multiple PLMs, or it's PLM, ERP, quality management. You know, I do think that looking at how you pull information and processes together across that disparity is going to be the value going forward. And AI will help in that, by the way, but that's not the only answer. Does that make your work more of a custom bike shop, where you're pulling different tools together and saying, I think this will actually give you what you need based on your requirements? Yeah.

I mean, it's an assembly. I mean, I would totally agree. And I'll turn it over to the people that do more direct client work. But from an overall analysis perspective, someone's always going to come up with something disruptive that disrupts a piece of the big, giant monolithic system and then becomes big enough to be its own thing. And that's going to continue, and we'll continue to talk about it. Yeah, I fully agree. It actually makes our job more interesting, and it also means there continues to be work that's necessary, because it's an industry that continues to grow. More and more technologies come into it. And how do I put them together at the right level of complexity without overkilling it? You can overkill stuff quite quickly. And every solution, or I should say every set of capabilities, every set of systems that makes up a business solution, can be different to some extent. The concept of the product innovation platform, something that we've written about along with IDC and Gartner, and many of you guys know it, is this concept like an iPhone or an Android device, where there will be applications that need to plug into a backbone. Some will provide backbones more than others, probably. But even then, if you really want to implement a digital web, a digital thread, you're cutting across process systems no matter what. I 100% agree with Jim: even the few big solution providers out there, and I won't name the three or four real big ones that do a lot, they don't do everything, especially for more complex products. There are pieces that are missing there. Now, will they do more? Yes. Will you get it all from them? Probably not, because there will be a simulation code from someone else, or some other specialized capability, needed to build out that true digital thread or digital web across an organization. So it keeps us busy. I'm happy with that.
We wouldn't want to force it, but I think it's the nature and the complexity of the lifecycle. If it were just product development, it would more likely be more homogeneous. It would still probably be heterogeneous, because, you know, there are CAD systems from multiple vendors, simulation systems, and so on. But going beyond that multiplies the complexity, and the number of capabilities or systems that are out there increases significantly

Speaker 3

once you start talking beyond that, into manufacturing, service, and end-of-life type issues, for example. Well, we're almost at time. I was thinking of asking another kind of around-the-table question. We're talking about AI and some of the breakthroughs we're going to see that are going to affect us as engineers, right? So maybe it could be fun, just as a last round, to say: do you think the next big breakthrough will be generative design, some new thing in terms of modeling? Will it be in analysis and simulation, with, you know, quantum physics or something like that? Is it going to be more on machining? Where do you think the next breakthrough is going to be for engineering? You know, that could be a fun one, right? So we start with Rob, maybe we go Rob, PJ, Jim, Jos, Oleg, and Peter, George. Is that cool? If what they say is true, the next breakthrough has to be in energy creation. Right.

Speaker 2

To power the AI? You mean, like, small nuclear plants, that kind of thing? Okay. What about you, PJ? What's your thing? I know you have more of an ERP angle on it, but where do you think the next big breakthrough is going to be? Oh boy, I'm not really a futurist kind of guy, so I wouldn't know much about breakthroughs and all that, but where I would see AI maybe helping engineers, as you said, is maybe generative design, giving some suggestions, or also being aware of all the standards or regulations and that kind of stuff that they should be cognizant of. So, sort of like having a co-pilot when you are designing, telling you, you should use this material for this strength, or for this lightweighting, or for that kind of thing. So that's what I'm thinking about. When it comes to ERP and supply chain, of course, doing what-if scenarios, like what to do with, let's say, tariffs, which is the word du jour: should I source from here? Should I manufacture here? Should I do it there? I mean, that's someplace where AI and analytics can help. But okay, we are at the end, so I don't want to use up too much time.

Speaker 1

Thanks, PJ. Where's the next breakthrough going to be? Thank you.

Speaker 5

It's... I mean, yeah, I know it's a bubble, and I know that there are overinflated expectations, but there's so much investment, so much exploration, and we're seeing very practical things come out of it that are working. So I know people highlight all the failures, but there's just so much going on. There's going to be a lot of real value that comes out. Yes. I think generative design is already there, and part of AI will also be supporting people to make more efficient decisions. The innovation will be at the people level. And I somehow think it will also enable much more model-based systems engineering in the long term.

Speaker 4

I think the most important thing that may happen is that we will be able to make decisions based on data that we have, but don't know we have. So a company wants to buy a part, but they don't know that last year they bought it from the wrong supplier and got a failure, so they shouldn't buy from the same supplier again. Believe it or not, it's an unsolved problem, because data is everywhere. Mechanical, electronic, and software engineers are designing products together, but discovery is hard; they cannot connect the dots. So I think making decisions based on data is the most critical thing, and let's see if it will happen. It will be a big, big breakthrough. Peter? I think Michael is frozen. I'll go ahead. Beyond AI, which, as the gentlemen said, I fully agree with, and there are a lot of breakthroughs that have come and, I think, many more to come, to me, quantum computing and its application across all kinds of IT spaces can really change things significantly. Maybe even for what Oleg and some of the other colleagues were saying about being able to extract intelligence or insight out of information: you start applying quantum computing, and, I mean, it's mind-blowing. I think it has the opportunity to really change the whole IT landscape. And thanks for mentioning that, because, you know, I'm interviewing Quantient, this amazing quantum startup doing CFD using quantum mechanics. It's going to be absolutely fantastic. This has been great. Maybe we should do this again. I feel like we've gone over by a few minutes, so we apologize, but it's been such a great discussion. I really appreciate all of you. Thank you for being here. Unfortunately, it wasn't supposed to be all men. There were two ladies, but both of them had to drop. So hopefully next time it'll be a little more diverse, but this was absolutely fantastic. I really loved it, you guys. I think George had to drop. So again, thank you very much, everybody.
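Oleg's supplier example above can be sketched in a few lines: cross-reference a purchase log against a quality-failure log so the buyer sees last year's failure before reordering. Everything here, table layout, column names, sample rows, is invented for illustration; this is plain pandas-style Python, not any particular PLM or ERP API:

```python
# Hypothetical sketch: connect purchasing data with quality records so
# a buyer is warned about a supplier that previously shipped a bad part.
import pandas as pd

purchases = pd.DataFrame({
    "part_number": ["P-100", "P-100", "P-200"],
    "supplier":    ["Acme",  "Besto", "Acme"],
})
failures = pd.DataFrame({
    "part_number": ["P-100"],
    "supplier":    ["Besto"],
    "issue":       ["field failure, 2024"],
})

# Left-merge: every past purchase, annotated with any known failure.
risk = purchases.merge(failures, on=["part_number", "supplier"], how="left")
risky = risk[risk["issue"].notna()]
print(risky[["part_number", "supplier"]].to_dict("records"))
# [{'part_number': 'P-100', 'supplier': 'Besto'}]
```

The hard part in practice is not the join, it is that, as Oleg says, the purchase history and the failure records typically live in different systems with different identifiers, so the dots never get connected.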
I hope you in the audience enjoyed it too. And if you guys are cool with it, then maybe we'll do this again in a couple of months. And I'll certainly be seeing Rob and Jos soon, and Oleg, let's say, the regulars. Take care. Thank you very much, everybody. Everybody wants to say goodbye. Thank you. Bye bye. Bye. Lovely to see you all. Thanks for the audience participation too. Thank you. I'm just going to cut the recording. I think that...
