
SAP: ChatGPT in the SAP context

Submitted by Stefan Barsuhn on

I'm not sure if you're familiar with the saying "You don't need to know everything, you just need to know where to find it". I have a strong can-do attitude, so I regularly work in areas I haven't worked in before. This means starting at square one and having to google all the beginner questions about transactions, tables, or even debugging.

So when I heard about the language generator (I'm not going to say AI, see below) ChatGPT, it sounded like it could be a great help in skilling up in new areas. But like most things that sound too good to be true, it (mostly) was.

To check it out, I've been using ChatGPT side by side with my daily work over the last few weeks to see what kind of help it could be.

The first challenge I gave it was a question I already knew the answer to: which BAdI can I implement that executes when SAP CRM saves a customer? I know there are a few BAdIs that execute on commit, but there are none that execute before save (which would allow you to raise error messages etc.). Its suggestion was the BAdI "BADI_CRM_ACCOUNT_SAVE", which neither I nor Google had ever heard of (until now, I guess 😂). When I pointed this out to ChatGPT, it apologized and instead recommended the BAdI "BADI_CRM_SAVE_FILTER" (which doesn't exist either). When I pressed it to provide a source for this answer, it pointed to pages on help.sap.com and blog.sap.com that were all "Page not found".

The same happened when I asked it to tell me how, in SAP Cloud Integration, I can extract a variable from a payload and inject its value as a header. The general approach was correct, but for the extraction it just made up methods that didn't exist. Again, it apologized for the error and generated more (incorrect) methods.
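For context, this kind of extraction is typically done in a Groovy script step (or with a Content Modifier and an XPath expression). Here's a minimal sketch of what such a script could look like - the XML structure and the "OrderID" element/header name are purely illustrative:

```groovy
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Read the incoming payload as a String
    def body = message.getBody(java.lang.String)

    // Parse the XML and pull out the value we're interested in
    // (the <OrderID> element under the root is a made-up example)
    def payload = new XmlSlurper().parseText(body)
    def orderId = payload.OrderID.text()

    // Inject the extracted value as a message header
    message.setHeader("OrderID", orderId)

    return message
}
```

No exotic methods are needed here, which makes it all the more puzzling that ChatGPT invented some.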

However, when I asked it a few newbie questions - which transaction do I use for this or that? Which table is the data for object ABC stored in? - ChatGPT usually gave the right answers.

So here are my takeaways from using ChatGPT:

First: ChatGPT is a language generator, no more, no less. It generates nice-sounding texts. It doesn't know what's true and what's false; it only approximates your prompt with the closest-matching answer it can generate. While this approximation is usually also the "right" answer, that's not always the case, as ChatGPT will fill any knowledge gap with a generated answer. And if ChatGPT doesn't get it right the first time, it usually doesn't get better if you ask it to try again.

Second: It can't think logically, which is why I wouldn't consider it an AI. If you ask it "What is 1+1?" it will give you the correct answer. I suspect it doesn't really "generate" this answer but simply does the calculation. But go ahead and ask it to list 12 European cities with exactly 10 letters: you'll get cities with 8 or 12 or 10 letters. Or ask it to provide an ASCII art representation of your name. It simply approximates an answer, like Google Maps approximates the best route between point A and B, but it is not able to do a simple sanity check to make sure the generated cities are indeed exactly 10 letters long. While this approximation is fine for texts with a lot of variability (like: write a story), it's not enough in an IT context where the answer is either right or wrong.
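To show just how trivial that missing sanity check is, here's a quick sketch (the candidate list is only an example):

```groovy
// Keep only the candidates whose name has exactly 10 letters -
// the kind of check ChatGPT does not apply to its own output
def candidates = ["Manchester", "Copenhagen", "Strasbourg", "Birmingham", "Berlin", "Rotterdam"]
def tenLetterCities = candidates.findAll { it.replaceAll(/[^A-Za-z]/, "").length() == 10 }
println tenLetterCities   // => [Manchester, Copenhagen, Strasbourg, Birmingham]
```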

Third: It's good for the simple beginner questions mentioned above and for pointing you in the right direction on complex questions. It saves you a decent amount of googling and debugging. But in the end, you need to make sure you understand the answer you've been given and place zero trust in it.

In summary, it is great progress that ChatGPT saves me time with beginner questions and points me in the right direction with complex ones. That's the kind of technology evolution I've been waiting for. But after my tests, I'm a bit baffled that nobody is pointing out these inaccuracies. Most articles I read are about how "AI bots" are going to make experts obsolete in the near future, pointing at how ChatGPT is able to pass difficult exams. Maybe that's the explanation in itself. For a lot of the exams I took in my life, you didn't really have to understand anything - just practice and learn your stuff by heart. And once the exam is over, you can forget it all. That's what ChatGPT does: it knows a lot of things, but it doesn't understand anything.

In its present state, ChatGPT is no better than a mediocre consultant. When I started consulting, someone once told me: "You don't have to be an expert, you just have to be one step ahead of your customer." While this provided some reassurance when I was a junior consultant, it's not exactly my work philosophy. My job is not to provide mediocre answers to customers who can't tell the difference. I strive to provide solutions that will impress other experts looking at them.

The one benefit that machines have over humans is that they are able to execute their programming 100% error-free every single time (unless humans have made a mistake in that programming). So if all ChatGPT does is generate texts that sound good (but are not necessarily correct), I don't see how it makes experts obsolete. Until language generators actually understand what they are talking about and can apply logic to their answers, I'm not going to call them an AI, and I'm not going to be too concerned.

I'll revisit this topic once they are able to be 100% correct all the time and can reflect on their answers - at which point I would call them an AI (and hope they're not going to take over the world 😉).
