GPT/AI is everywhere, and yet, it eludes many of us!

Join us as Ian Gotts from Elements.cloud discusses the potential of GPT/AI, what is possible now, and how it is shaping Life Sciences.

Best of all, Ian discusses the skills that will be in demand and how to plan your career to stay relevant.

Our Life Sciences Dreamin’ web series is ongoing; register for the live events here: https://cloudadoption.zoom.us/webinar…

…and while you’re here: if you’re having Salesforce user adoption challenges, we have a brand new guide that takes you step by step through the principles of teaching adults technology and drives you to develop a plan for implementation. It’s got room for you to make your own plan – check it out: https://cloudadoption.solutions/teach…

 

 

VIDEO TRANSCRIPT:

SPEAKERS

Ian Gotts, Shannon Gregg

 

Ian Gotts  00:00

I’ll repeat that: AI is not going to take your job, but someone who’s mastered AI really well is going to take your job.

 

Shannon Gregg  00:19

Welcome, welcome, everybody, to the Life Sciences Dreamin’ webinar series, which is kicking off the same exact way that the August Life Sciences Dreamin’ event did: with our friend Ian Gotts. Ian Gotts is the founder and CEO of Elements.cloud and the absolute smartest person I know when it comes to all things AI and Salesforce. Ian is a relentless researcher on this topic, and his session at the two-day event in August had people on their feet, standing in line waiting to talk to him afterwards. And I know just from watching his LinkedIn profile that this guy is doing everything he can to make sure he knows all of the things that are happening at the intersection of Salesforce and AI. We’re so excited to have him today. I want to one more time recognize all the sponsors of the 2023 Life Sciences Dreamin’ event series, which include Cloud Adoption Solutions, Asdocs, Elements.cloud, CapStorm, CustomerTimes, Recipe Pro, Mindmatrix, Easy Protect by Adapters, Salesforce, Steady State Media, Fido SEO, and Wise Wolves. So without any further ado, I am so excited to introduce you to my friend and yours, Ian Gotts. Welcome, Ian.

 

Ian Gotts  01:40

Thank you so much for that introduction, but I’m not sure that anyone can say they’re an expert in AI; it’s moving so quickly. What I want to spend some time thinking about is not necessarily how AI works (there are plenty of presentations out there) but more what it actually means for us. The presentation is labeled FOMO: fear of not being relevant, fear of being obsolete. I think that’s a concern for all of us, whether we’re in the Salesforce world or not. I want to spend some time thinking about the skills which are relevant, the skills which we need to learn, and spend a bit of time exploring that. As Shannon said, the technology is moving on so fast. Just when we all thought we knew how things worked, typing prompts into ChatGPT, now you can give it an image and it will write code based on that image; it will interpret an image. Things are moving on so quickly that I joke that probably all I need to do is present my LinkedIn stream of new announcements around AI, and that would probably fill 45 minutes. But I don’t want to do that. I don’t want to get locked into the technology; I want to get into the people side of it and what it means to people. So let’s set some ground rules. First of all, obviously, AI has been around for a long time. It’s been around since 2016, which was when Einstein came out from Salesforce. But what really captured everyone’s imagination was probably this time last year, when ChatGPT got launched, and suddenly we could type in prompts and it gave us back things that felt like we were actually talking to a human. Absolutely stunning. You could say, “Write me a song about a cowboy whose pickup truck left him,” or whatever, and it would write you really good lyrics. And that’s only accelerated. As you can see from that slide there, it took two months to get to 100 million ChatGPT users. Interesting little side note: OpenAI, who created GPT, didn’t know what to do with it.
They went, well, let’s just launch it and see what happens. This is what happened. But one thing is, I want to make sure people don’t think about ChatGPT as the endpoint. GPT is so much more than ChatGPT. We’re at the flip-phone stage of the evolution of the iPhone, so there is so much further to go. An analogy I like to use is that all of us here can drive a car. There’s a Honda Civic, an amazing car, and every one of us can drive it. Some of us have been lucky enough to drive what is a road car, but on a track. That’s a Porsche GT3, $200,000 worth of car. Absolutely staggering when you drive it on the track: the levels of grip, the acceleration, the things you would never ever experience on the road. And you walk off having driven that and you think you’re a Formula One driver. You go, amazing, I now understand what it’s like to be Lewis Hamilton, a Formula One driver. And then you bump into Lewis Hamilton in the pits and he shows you the steering wheel for his car. That’s a Formula One steering wheel, and somewhere on there is a button which is the launch control to get it off the grid. Any one of us, even if we’ve driven sports cars, wouldn’t even be able to drive it; we’d stall it even trying to get it started. So the point is that not using GPT was the Honda Civic. It’s been a dramatic step forward in terms of using ChatGPT; that’s the Porsche. But actually, where GPT is going is where Formula One cars are, and we’re only really at the very early stages of this. It’s accelerating so quickly that it’s quite difficult to stay on top of all the technology. But it’s not difficult to think about what it means to us and the sorts of skills we need to grow, and that’s what I want to talk about. At any point, please post questions into the Q&A, and I’ll try and pick them up as we go along, or I’ll capture those at the end.
So I want to play a video that Virgin Voyages created, which has been very successful, with millions of views, just so you understand how pervasive AI is.

 

06:11

Hi, it’s me, Jennifer Lopez, Chief Celebration Officer of Virgin Voyages, here to invite you to celebrate. So come celebrate your

 

06:21

anniversary, your birthday, your generally being fabulous.

 

06:26

Really, Kyle? Jen AI is supposed to be inviting people to Virgin Voyages, not doing whatever that was. You know what, just give me that.

 

06:35

As I was saying,

 

06:36

why not celebrate on an award-winning voyage with Michelin-starred chef-curated menus, world-class destinations, award-winning, honoring.

 

06:44

I’ll hop in there. I’ll hop right in that soon.

 

06:51

I can’t watch this. Give it to me. There’s no kids here. That’s all you need to know.

 

06:58

Oh, can I live there? Jen, invite your crew to voyage. Create your custom invite at virginvoyages.com. It’s not just a yeah, it’s a Super Yeah.

 

Ian Gotts  07:11

That’s a really fun video, and you can go find it on YouTube. But I think there are some really important messages that come out of it. Number one, the fact that people are making videos or ads like that shows that the audience understands the potential of AI, the ability to go and create an avatar that looks like a real person. So even though we’re not all using this level of technology, there’s a general awareness in the population about what’s possible. Number two, the scriptwriters are only just back to work after five months on strike, concerned about AI generating scripts and actors being replicated by AI. And whilst AI might be able to write some of the script, the idea behind that ad, the brilliant idea of melding almost four different marketing messages into the same video: no AI is going to be that creative. So there is still always room for the creative, to even think of an ad like that. And then the last point, which I think is slightly more concerning, is that clearly Jennifer Lopez can make some money out of that, because she’s well known. But the bit actors in the background, maybe in the future they will be avatars, because there’s no differentiator; no one knows who those other actors are. So there is a natural concern from actors: if all the people in the background are avatars, where’s the work going to be for the smaller actors, and how can they grow to be the next Jennifer Lopez? So a number of concerns come out of just that short video. For a bit of fun, I actually spent about an hour with several bits of AI technology and turned that advert into an advert for Elements GPT. I’m not going to play it, but in an hour I managed to take a Jennifer Lopez-like voice and get her to talk about our new product, and it didn’t take me very long.
I think that also raises the whole issue of who owns the IP here. The video we made, we didn’t claim it was ours; I was just doing it to illustrate a point. But with a couple of bits of technology, an hour’s work, and very little training on my side, I was able to spoof this video and turn it into an Elements GPT advert. So AI is actually changing things in some quite dramatic ways. Let’s think about how this applies to all of us in the work world. I think the first thing is that we’re not going to see brand new roles arriving. We’re not going to find the prompt engineer, in the same way that I don’t think anyone’s got a job title which is “the Google search person.” What we’re going to find is that we’re going to have to gain new skills, rather than having specific roles. Probably a year ago we thought that prompt engineer would be a job title; I don’t believe that’s necessarily true now. That’s the first thing. The second thing is that we’re clearly using AI as something which supports us, rather than replacing us. So we need to delegate, but we can’t abdicate. We can’t just go, AI will do that job, we don’t need to hire somebody. Clearly it is going to accelerate lots of the things that we do, but we need to be good at delegating to it. And as you’ll see a little bit later, if you don’t delegate very well, the answer you get back can be not very good. It’s just like the real world: you have an assistant, or you have somebody in your team, and you ask them to do something. If you’re not very clear about what you’ve asked, they come back with the wrong answer. Look at yourself and go, did I ask the question properly? Did I help the person scope what they needed to do correctly, rather than them having to guess what was in my mind?
So AI, again, is making us think about how we delegate more effectively. There are a couple of things we need to think about against the backdrop of the skills that I think we now need to double down on, and the next slide has got those skills. What’s really interesting is that only one of those is a skill we don’t currently have: prompt engineering. Again, I’m narrowing this down to thinking about what it takes to use AI to make Salesforce better, to help us change Salesforce more quickly, and to make sure we change it in the right way. I’m not thinking about lots of other areas of business; I’m focused on the area of managing Salesforce, of making Salesforce work better for our end users. If I was presenting this on how you change a business in general, I’m sure I’d have other things on the list, but let’s focus just on what it means if you’re in the Salesforce ecosystem as a Salesforce professional. In fact, if you’re involved in any systems implementation, this list is still true, but let’s spend our time thinking about Salesforce. I want to pick up each one of those in turn, and I’m happy to take questions as we go along. Let’s start with prompt engineering, this new skill that people are discovering. First, let’s think about what a prompt really is. At the moment, people think a prompt is: I’m just going to type something into OpenAI’s ChatGPT and get a result back. They’re using it almost like Google++, relying on what’s in the prompt and on what OpenAI knows. But what GPT is very good at is that if you give it all the puzzle pieces, it can solve that puzzle really well, using its ability to work out what the next best word is. That’s fine; we could do that independently, typing something and getting some answers back.
And what we’re discovering is that the bigger and more detailed the prompt, the better the answer. But what if we now take those prompts and embed them inside our applications? So it’s generating emails for us, it’s generating user stories; we’re using it embedded inside the application. Often we’ve written the prompts and embedded them, attached them to a button inside Salesforce. Think about everything they talked about at Dreamforce around Prompt Studio. A prompt could equally well be embedded in a third-party application, the same way Elements.cloud has got those prompts. But when we think about that prompt in terms of the risk inside our Salesforce org, it’s only slightly less risky than a virus. Let me explain what I mean by that. If you build something declarative, like a flow, that flow should execute the same every single time. We’re not checking again and again whether it executed correctly; maybe after each of the three Salesforce releases a year we need to go and check whether that flow still works the same way, but we’re not checking every day. The same if we write code: once that Apex has been compiled, it should run the same every single time. But with a prompt, the words you typed are now hitting a large language model, a foundational model, and that foundational model could be changing. It’s learning, it’s being optimized by whoever is running it, it’s being tuned for performance. So the results you’re getting for that prompt could change daily or hourly. We’ve now inserted something into our application which is, to some degree, uncontrolled. We need to think about those prompts as code, but code that is relatively high risk. So when people think about prompt engineering, they go, oh, that’s about how I type things into a prompt.
Actually, it’s a little bit more involved, and I want to take you through a few steps, just to broaden out what I think prompt engineering really is. The first is that if we think about prompts like code, we need to understand what the requirements are, and we need to create some acceptance criteria. So let’s say we’re going to attach a prompt to a button, and it will generate an email based on a customer query, to give a customer success answer. The requirement is generating an email, but what are the acceptance criteria? Is the email written in a way where we think we’re going to get the outcome we expect, which might be closing the customer query, or upselling a customer, or whatever that is? Before we start typing, we need to work out what the requirements and the acceptance criteria are, just like code. There’s the famous cartoon: “You guys start coding, and I’ll go find out what we really need to do.” So absolutely, we should be thinking about requirements and acceptance criteria. Then, when we write the prompt, we’re going to have to write it iteratively. At the moment, if you’re writing code, or something declarative, you’ve got a pretty good idea as you’re writing it what’s going to happen. But we’re still in quite uncharted territory here. As we write that prompt, we go, I wonder what the result is going to look like. And what we’ve discovered from a lot of work over the last year in terms of writing prompts is that it’s quite subtle: reverse two words in a sentence in the prompt, and it will come up with a dramatically better or worse answer. So there’s still quite a learning curve in terms of understanding how to write prompts really well.
Interestingly, our VP of Product Management writes prompts really well, and when we talked to him I said, why do you think that is? He said, well, I used to watch Star Trek all the time, and in Star Trek they were constantly asking the computer for something; writing a prompt is actually very similar to asking the computer for something. So again, it’s a skill, and it may take a while to really tune into the best way of writing prompts. Think about when Google came out: we typed things into Google and we got answers. I was lucky enough to be on Microsoft’s worldwide advisory council for four years. At the point they launched Bing, we actually said Bing was a really weird name, and they went, we’ve chosen the name, let’s get on with it. But then people started using Bing and going, it’s not giving me very good answers. Actually, as a search engine, it was really good, but we had already been trained for years in how to put things into Google to get the right answers. If we applied those same queries to Bing, we got different answers, because we hadn’t been trained by Bing. And I think the same will be true here. As you start hitting certain large language models, you will be trained to type prompts in a way where you get the right answers; if we move to a different large language model, we might find those prompts don’t work the same way. So there’s still quite a learning curve in working with a large language model to make sure the prompts we’re writing get the answers we expect. And again, longer, very detailed prompts get far better answers; that’s what we’ve discovered. Take this back to the real world: if you said to your assistant, could you book me a restaurant, they come back and go, yeah, booked you a restaurant. Oh no, sorry, I needed it next week.
And then you go back and forth, back and forth, and ultimately you get to the right answer. That’s how people tend to write prompts: they write a loose prompt and then say, could you make it this, could you make it that? Let’s rerun that restaurant request: “Could you book me a restaurant next Tuesday, because I’m entertaining a client who loves Indian or Italian food, we’ll be driving there, and there’ll be four of us.” That’s a good prompt. Suddenly we can book a restaurant, and from what can be inferred, we know how many places we need to book, that we need a parking space, that it needs to be a certain distance from the office, and what sort of restaurant. So think about that analogy when you’re writing prompts: spend more time making the prompt very tight and you’ll get better answers. Clearly, once we’ve written that prompt and embedded it inside our application, we need to test that it’s delivering the results we expect, particularly if it’s using metadata, pulling particular fields from records; we need to make sure that all works, and that we’re happy with the quality of, in this case, the email it’s writing. All very good; we’re thinking about it like code, as we’d expect. Now, the other thing is that we need to think about managing versions, which is probably not quite like code, because we’re iterating versions relatively quickly. And as the large language model changes, we may need to track why a prompt’s results changed, and which version of the prompt ran against which version of the large language model. So we need to track a bit more information than just the particular version of the prompt.
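The restaurant example above can be sketched as code: a vague prompt versus one that carries all the puzzle pieces. This is an illustrative sketch only; `build_booking_prompt` is a hypothetical helper, and in practice the assembled string would be sent to whatever LLM API you use.

```python
# Sketch: a vague prompt vs. a detailed one, per the restaurant analogy.
# All names here are illustrative, not from any real product.

def build_booking_prompt(cuisine, party_size, day, constraints):
    """Assemble a detailed prompt from all the 'puzzle pieces'."""
    return (
        f"Book a restaurant for next {day}. "
        f"Party of {party_size}, client dinner. "
        f"Cuisine: {' or '.join(cuisine)}. "
        f"Constraints: {'; '.join(constraints)}."
    )

vague = "Book me a restaurant."
detailed = build_booking_prompt(
    cuisine=["Indian", "Italian"],
    party_size=4,
    day="Tuesday",
    constraints=["parking available", "within 15 minutes of the office"],
)
print(detailed)
```

The point of the sketch is that the detailed prompt makes every implicit requirement explicit, so the model (or the assistant) does not have to guess.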
But again, we’ve actually got versions of prompts; we’re storing them, iterating on them, improving them. And that’s not something I’m necessarily seeing at the moment from things like Prompt Studio: the ability to manage prompts almost like source code, with source control. Now we get into a few things which I don’t think exist in terms of managing code. The first is tracking prompt usage and performance. We’ve written some prompts and embedded them inside Sales Cloud, which is now giving us an email. We need to track, first of all, whether people are even hitting the button and using that email, but also how much they’re having to modify it. We should be coaching our end users not to hit the button and just send the email, but to hit the button, look at the result, and see how they need to tailor it. GPT is really good at coming up with a first cut of something, but it’s not very good at making it perfect, and we need to apply our skills on top: look at whatever has been created and go, tweak that, tweak that. Back to delegate, don’t abdicate: you can’t just go, the prompt created it, that’s good enough. You need to read it and tweak it. But the point is, are we tracking whether our end users are having to make major changes to the emails that have been generated, or whether they’re using them pretty much as-is? We need to track that. On the back of that, we then need to work out how we refine those prompts, back to managing versioning. And then we’ve got an interesting concept called drift. I alluded to it at the beginning: as the large language model changes, the results we’re getting will change over time, and we need to track that. We never had to do that in the past; we never needed to worry about whether the compiler suddenly compiled and ran our code in a different way.
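The idea of versioning prompts and tracking how heavily users edit the generated drafts could be sketched like this. Everything here is hypothetical (the `PromptVersion` record and the edit-ratio metric are assumptions for illustration, not a real Prompt Studio or Elements.cloud feature); the sketch just shows one simple way to measure whether users accept drafts as-is.

```python
# Sketch: treat prompts as versioned artifacts, and measure how much
# end users rewrite the generated output before sending it.
import difflib
from dataclasses import dataclass

@dataclass
class PromptVersion:
    prompt_id: str
    version: int     # version of the prompt text itself
    model: str       # which foundation model/version it was tested against
    text: str

def edit_ratio(generated: str, sent: str) -> float:
    """0.0 = user sent the draft unchanged; closer to 1.0 = heavy rewriting."""
    return 1.0 - difflib.SequenceMatcher(None, generated, sent).ratio()

v1 = PromptVersion("cs-reply-email", 1, "model-2024-01",
                   "Draft a reply to the customer query: {query}")
draft = "Dear customer, thanks for reaching out about your order."
sent = "Dear customer, thanks for reaching out about your order."
print(edit_ratio(draft, sent))  # 0.0 means users accept the draft as-is
```

A consistently high edit ratio would be the signal to go back and refine the prompt, exactly the feedback loop described above.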
But those results coming out will change. If it’s a high-risk email, maybe one going out and making a discount offer to a customer, maybe we need to be checking that for drift daily or weekly; if it’s an email going internally, maybe monthly. But again, we need to put in place some way of reminding us to check, and some way of monitoring it. And the last thing on that list: if our prompt is using metadata from an application, we need to know that, so that someone doesn’t go and change that field not realizing it’s used in a prompt. In the same way, if a field were used in an email template, we’d need to know that before we start changing it. But, as I said, a prompt is slightly higher risk, because AI is taking some views based on that data and making decisions for us. So let me bring this to life with an example (there’s a question that’s popped up that I’ll pick up after). Let’s assume the email being generated is going to make an offer based on the value coming out of, say, an opportunity inside Salesforce. Let’s say we’re a US company, so it’s in dollars, and that’s fine, fantastic. But then we go and acquire a company in the UK or in Italy, and suddenly we decide that that field in the opportunity is going to be multi-currency: we can put sterling, euros, or dollars in there, and we’ll create an extra field which is the currency symbol. That’s fine when we’re actually typing this into an opportunity, because we can see it. But if we didn’t realize the prompt was using that number field, the prompt has no knowledge of the currency, and it’s still making decisions. The differential between sterling and euros might only be 10%, but it’s 20% between sterling and dollars, and that may actually be material.
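The drift monitoring described above could be sketched as a scheduled check: re-run a pinned prompt against a set of "golden" test cases and flag any answer that diverges from the approved baseline. This is a minimal sketch under stated assumptions; `run_prompt` stands in for the real LLM call, the similarity threshold is arbitrary, and the deterministic `fake_model` exists only so the example runs offline.

```python
# Sketch: a scheduled drift check for an embedded prompt.
import difflib

def check_drift(run_prompt, golden_cases, threshold=0.8):
    """golden_cases: list of (inputs, approved_output) pairs.
    Returns the cases whose current output drifted from the baseline."""
    drifted = []
    for inputs, approved in golden_cases:
        current = run_prompt(inputs)
        similarity = difflib.SequenceMatcher(None, approved, current).ratio()
        if similarity < threshold:
            drifted.append((inputs, similarity))
    return drifted

# A fake, deterministic "model" so the sketch is runnable:
fake_model = lambda inputs: f"Offer {inputs['discount']}% off, valid 30 days."
cases = [({"discount": 10}, "Offer 10% off, valid 30 days.")]
print(check_drift(fake_model, cases))  # [] means no drift against baseline
```

Running this daily for high-risk prompts and monthly for low-risk ones matches the cadence suggested above; a non-empty result is the cue to review the prompt against the current model version.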
So again, we need to think about a prompt not as a clever Google search, but as really high-risk code that we’re injecting into our orgs. I’ll pause there for a moment. There’s a question here: do we think developers will add prompt engineering to their skill sets, or will prompt engineering be a job in itself? I believe the concept of prompt engineering will be something that’s not just for developers. Business analysts need to understand what prompt engineering is; developers clearly will need to, as a different form of writing code. To broaden this out, being able to write prompts to get GPT to do research for us will be something we all need to know, whether you’re a marketer, a business analyst, or a developer. I’ve made you think through the eyes of the developer, but if you think about what Salesforce is now doing, it’s asking admins, business analysts, and architects to think through the eyes of a developer in terms of what’s being built. The more low-code exists, the more we’re all actually becoming developers and need to have a developer mindset. And it goes the other way as well: a developer needs to have a bit of an architect mindset. So this, I think, is a pervasive skill that we will all need. It’s not something we can go, don’t worry, our team over there has got that covered. Now, roll the clock forward one, two, three years, and maybe there’s a level of refinement, or maybe AI has taken a different turn and is actually very good at coaching us into writing prompts. I have no idea; I’m afraid my crystal ball is as murky as yours. But from what I’m seeing at the moment, we all need to make sure we understand how to write prompts, and not just write prompts, but write really good prompts.

 

27:16

Thank you. Great question.

 

Ian Gotts  27:20

So, I said there are a number of other skills. Prompt engineering is brand new, but there is a set of skills which I think AI is going to bring into sharp relief; it’s going to accentuate the importance of these skills being really well honed. If I look across the Salesforce ecosystem, and I’ve been around about 22 years now as a Salesforce customer, it has evolved very quickly. Salesforce is a strategic application, critically important to organizations, and things like architecture, business analysis, and data quality have become really, really important. But I don’t think they’ve necessarily been given the amount of time they really deserve; AI is going to force people to genuinely take notice. The A in AI stands for Augment: AI will make the best things better, and it will make the worst things even worse. A lovely quote I heard: AI punishes mediocrity. If you’re not very good, AI is going to demonstrate that very quickly. So these are core skills we should double down on. Architecture: think about the Salesforce Well-Architected program. They’ve put an enormous amount of effort into helping us architect Salesforce well. But suddenly, with AI, we are complicating that architecture even more; we’re now including external systems that have an influence on our internal org, and therefore we need to understand how it’s all architected. With things like Salesforce Data Cloud, we’re no longer just on the platform; Data Cloud has architectural considerations of its own. And as you can see there on the right of the slide, these foundational models, the large language models, are external. You don’t need to understand everything in this diagram.
But you need to understand that this diagram is what our world looks like now, and it’s getting more and more complicated. Making sure we understand how our systems are architected, not just Salesforce but the connected systems, is going to be increasingly important. I would say this, because I’ve been on my soapbox for the last 30 years saying business analysis is really important, but I think it’s going to become a superpower, for a couple of reasons. Number one: as AI enables us to change the way our businesses work, we need the business analysis skills to understand how our business will change. This is not us going, oh, I know how Salesforce works, I can slap a bit of AI on here and a bit of AI on there. I think it’s going to dramatically change the way businesses can operate, and therefore the business analysis skills of understanding what the implications are, and really digging in before we start implementing AI, are going to be important. That’s number one. Number two: AI will supercharge a lot of business analysis activities. But as I said, AI will only be really, really good if it’s got good content. If you’ve got good business analysis for it to work on the back of, the results will be fantastic; if you’ve done poor business analysis, the results are going to be awful. So again, it’s going to show up those people who are really good at business analysis. Think about some of those acronyms out there. UPN, that’s Universal Process Notation. That’s the process mapping standard that’s been around, oh, I don’t know, 10 or 15 years, used by thousands of organizations. Salesforce has made it the way they think people should map; there are Trailhead courses. And by the way, at the back of the presentation, which we’ll be sharing, there is a whole series of links to Trailhead and other useful resources.
For UPN there are Trailhead courses and architect courses, and the Salesforce Business Analyst certification I’ve talked about includes how to draw process maps as UPN. ERDs, entity relationship diagrams, are part of the Salesforce architecture world: you need to understand how to draw out a data model. It’s no good just understanding how the business works; we need to understand the implications for how your Salesforce data is structured. That then leads to DFDs, data flow diagrams: if we’re going to start using AI, we need to understand where the data has come from and how it gets transformed and changed down the data stream, and being able to map that out with data flow diagrams is again one of the Salesforce architecture standard diagram types. MDD will be something new to people: metadata description definition, in other words, how should I document my metadata? If AI can read your metadata, it can understand your org and make some recommendations; if you’ve got poor or zero documentation, it’s going to do a very bad job. So we need to think carefully about how we document so that AI can read it. Think of AI as a twelve-year-old who is completely focused, doesn’t understand nuance, doesn’t understand your weird acronyms, makes no assumptions, and does everything absolutely literally. We need to write with that in mind; again, I think that’s a skill. So what we did was come together and write a definition of how to write descriptions for metadata, which is what MDD is. Now, the outcome of business analysis is a user story; that’s what we would pass to the development team, and a user story is almost a specification of the piece of work that needs to be done. And again, there are industry standards here. The standard format is “As a ..., I want ..., so that I can ...”: a very standard way.
And then there are some acceptance criteria that go with that. And then the testing is BDD, behavior-driven development. So there are some standards here about how you would deliver the specifications to your development teams. The problem is that user stories are quite laborious to write; as humans, we're not very good at writing them. But AI is brilliant. It really does a good job, because it doesn't get bored, it doesn't get distracted. It writes really good user stories, and it can write BDD, but only if it's got decent process content and decent descriptions to drive off. That's why I think business analysis is the superpower: because if you have good business analysis, AI can drive huge improvements. An eight-hour task can be done in five minutes. And that's really what we need to be doing in terms of accelerating our ability to change Salesforce: make sure that business analysis is done well, but more quickly, so we can start to make our organizations agile. So someone asked a question: what type of BA certification? The Salesforce Business Analyst certification. The good news is that there are now no prerequisites; you don't need to be a Salesforce admin. Even if you don't take the exam, go and do all the training. Every one of us needs to have those business analysis skills. So whether you're an admin, an architect, a developer, or a BA, you need the ability to ask those questions, to probe, to understand how to document a business process. I'll show you in a moment how AI can actually superpower this business analysis. I'm afraid AI is also going to make sure that we write decent documentation. I mean, they say data is the fuel for AI.
Documentation is also the fuel for AI when we think about how we would go and make changes to Salesforce. And I know we're very poor at documenting all this; we go, "I'm too busy for that." Again, AI has suddenly made documentation really, really important. And not just documentation: good documentation. Those organizations that master some of this stuff are the ones that are going to have massive competitive advantage. Someone asked, is AI going to take my job? The answer is no, it's not; someone using AI better than you is going to take your job. AI is not going to take your job, but someone who's mastered AI really well is going to take your job. So we really need to get on top of this: on top of architecture, on top of business analysis, on top of data governance, on top of documentation. So, the point about data governance. If your data is the fuel for AI, we need to understand which fields of the 10,000 in our org are important, which are the ones we can use for AI. But then from that field, we need to work our way back up the food chain and ask: how is that field populated? It's really no different from dashboards. If the dashboards are what our executives are making decisions from, we need to understand which reports are feeding the dashboards and which fields are on those reports. They're the important fields. And then we need to understand how they're populated. Is it through a screen, and what are the validation rules? If it's through an integration, or a flow, or through Apex populating it from a third-party system, let's go upstream and work out where it's coming from. And then data governance is actually putting some controls around that, because there's no point if we've cleaned up this data stream...
...and then we've got an intern pumping Excel spreadsheets in, because they've suddenly been given access to the data importer, and polluting all that data stream. So again, once we've understood which is the critical data, we need to go and protect it. So if we think about where we would start with AI projects: let's pick one area. Maybe you think the service area could be improved through AI. Great, then it's probably the Case object. We can think about which record types on the Case object are important, then we can hone in on which are the key fields, and then we can start to understand how they get populated. And as we discover this information, as detectives, all that evidence we're collecting should become documentation: associated with metadata, as process maps, as data flow diagrams, as ERDs. I've been involved in change projects forever, and I'm always looking for a catalyst, something that is going to drive the changes, and then I hook the things which I know are important onto the back of that. So if AI is forcing us all to make changes, great. Use that as an excuse to go: right, before we make changes, let's make sure we've thought about the architecture, we've got data governance mastered, we're doing some decent business analysis, and we're getting some good documentation. Don't miss the opportunity of AI getting the executives' mindshare; use that as the catalyst to say these things need to be done properly, otherwise AI is not going to work, or it's certainly not going to support us long term. And I'm sure, for everything I'm talking about, people in the audience are going, "Yeah, I know, I know we should be doing this." Great: AI is the way you can make sure you get the time and the investment and the resource to do it properly. So what I want to do is just show you. I'm not going to run the video; I'll jump across and just show you what's possible.
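The "work back up the food chain" idea just described (dashboard, to the reports feeding it, to the fields on those reports, to how each field is populated) can be sketched as a small graph walk. This is a toy illustration with invented org metadata, not how any particular tool does it:

```python
# Sketch of tracing field lineage upstream from a dashboard:
# dashboard -> reports -> fields -> population source.
# All org metadata below is invented for illustration.

dashboard_reports = {"Exec Dashboard": ["Pipeline Report", "Case Aging Report"]}
report_fields = {
    "Pipeline Report": ["Opportunity.Amount", "Opportunity.StageName"],
    "Case Aging Report": ["Case.Status", "Case.Priority"],
}
field_sources = {
    "Opportunity.Amount": "screen (validation rule enforced)",
    "Opportunity.StageName": "screen",
    "Case.Status": "flow",
    "Case.Priority": "integration (third-party system)",
}

def trace(dashboard: str) -> dict:
    """Return {field: population source} for every field feeding a dashboard."""
    lineage = {}
    for report in dashboard_reports.get(dashboard, []):
        for field in report_fields.get(report, []):
            lineage[field] = field_sources.get(field, "unknown")
    return lineage

for field, source in trace("Exec Dashboard").items():
    print(f"{field}: populated via {source}")
```

The fields this walk surfaces are "the important fields" in Ian's sense: the ones worth putting governance controls around before feeding them to AI.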
This happens to be ElementsGPT. What we're doing, because we know about metadata, we know about industry standards like Well-Architected, and we know business processes, is asking: could we write user stories and come back with some solutions? So let's jump out of that and jump over here. This is a UPN diagram, so I'll just spend a couple of moments talking about UPN. Some very simple principles, but the idea of UPN is that it's simple enough that everybody can understand it, no matter what level of seniority, no matter what area of the business you work in; it all works in a very similar way. So the idea is: there's an activity box, and the text starts with a verb: develop, run, pay, analyze. You have inputs and outputs with text on them; every box should have a line in and a line out, with text. The important thing about the text is that it defines when that activity is finished and what the handoff is to the next step. In terms of who does it, who's involved, who's supportive, who's informed, there are resources associated with it. So VP of Marketing is accountable, CEO is supportive, marketing team are informed. It's called RASCI; quite an interesting standard. And you can drill down to the next level of detail. Instead of having every single activity box on one page, so that it's unintelligible, have eight or ten boxes and then drill down to the next level of detail. And then that little "i": you can link to things. You can link to documents, to web pages, and so on. So a very simple approach called UPN. Anybody can do it; you don't need any special tools. Elements happens to do a very good job of it, and you can go and play with it inside the Elements playground. But again, this is a skill you'll need to master. The reason I've raised this is that if we've documented these processes like that, then we can get AI to build us user stories. So, let's go into edit mode.
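The UPN principles just listed (verb-first activity text, labeled lines in and out, RASCI resources, drill-down, attached links) can be captured in a small data structure. A sketch only; the field names are my own, not the Elements format:

```python
# Sketch of a UPN activity box as a data structure, following the
# principles described: verb-first text, labeled handoff lines,
# RASCI-style resources, optional drill-down and attachments.
from dataclasses import dataclass, field

@dataclass
class Activity:
    text: str                   # starts with a verb, e.g. "Develop marketing strategy"
    input_line: str             # labeled handoff in
    output_line: str            # labeled handoff out: defines when the activity is done
    resources: dict = field(default_factory=dict)  # role -> RASCI letter
    drilldown: list = field(default_factory=list)  # child Activities (8-10 boxes per level)
    links: list = field(default_factory=list)      # documents, web pages, etc.

develop = Activity(
    text="Develop marketing strategy",
    input_line="Annual targets agreed",
    output_line="Strategy approved",
    resources={"VP of Marketing": "A", "CEO": "S", "Marketing team": "I"},
)
print(develop.text, "->", develop.output_line)
```

The point of the structure is that every box carries enough context (who, handoffs, hierarchy) for a machine, or a 12-year-old, to read it literally.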
If you remember, I said user stories are really laborious to write. So for this "Develop marketing strategy" activity, we would have to write three user stories: one from the perspective of the VP of Marketing, one from the CEO, and one from the marketing team. The same for "Run campaigns": writing a user story for every resource type. So I can highlight a couple of boxes, and I can say: generate the user stories for me. Is it tied to a release? Well, maybe it is. Is there a parent requirement? If I've already got requirements, I could tie those user stories to a requirement. I could tag it with something, say "marketing". And now it's off writing user stories. So let's go and have a look at a user story. Here's a list of user stories that have just been created. Let's have a look at that one. So here's a database of the user stories, and there we go: it's written it as "As a marketing team member..." for demand generation. You see, it's written a fairly detailed user story, and here it's written acceptance criteria. And what we've discovered, using this internally for the last couple of months, is that we've gone, "No, that's not a very good user story," and then we've gone back and looked at the process diagram and gone, "Yeah, we didn't do a very good job of that, did we?" If we'd written a better and more accurate process map, we would have got a better user story. So it's interesting: it's coaching us into writing better processes, so we can write better user stories. And again, this may not be 100% correct. You can edit it; you don't have to just accept it. It's done a lot of the work for you, and then you can use your expertise to go and refine it. So that's really cool.
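The pattern in that demo, one user story per RASCI resource on an activity in the standard "As a..., I want..., so that..." shape, each with a BDD-style Given/When/Then acceptance criterion, can be sketched in a few lines. This is a toy illustration, not the Elements implementation; the activity data and per-role phrasing are invented:

```python
# Toy sketch: generate one user story per RASCI resource on a process
# activity, each with a Given/When/Then acceptance criterion.
# Invented data and wording; not the Elements implementation.

activity = {
    "text": "Develop marketing strategy",
    "output": "Strategy approved",
    "resources": {            # role -> RASCI letter
        "VP of Marketing": "A",
        "CEO": "S",
        "Marketing team": "I",
    },
}

ROLE_GOALS = {  # hypothetical phrasing per RASCI letter
    "A": "approve the final version",
    "S": "review progress at each stage",
    "I": "see the outcome when it is published",
}

def generate_user_stories(act: dict) -> list:
    """One story per resource, in the 'As a..., I want..., so that...' shape."""
    stories = []
    for role, rasci in act["resources"].items():
        story = (
            f"As a {role}, I want to {ROLE_GOALS[rasci]} of "
            f"'{act['text']}', so that the handoff '{act['output']}' is reliable."
        )
        acceptance = (
            f"Given the activity '{act['text']}' is in progress, "
            f"when the {role} opens it, "
            f"then its status toward '{act['output']}' is visible."
        )
        stories.append({"story": story, "acceptance": acceptance})
    return stories

for item in generate_user_stories(activity):
    print(item["story"])
```

Notice that the quality of the output is bounded by the quality of the process map fed in, which is exactly the coaching effect described above.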
But then the other thing you can do, because Elements understands what's in your org, the hundreds of thousands of metadata items, is ask it... this is live now, it's going to the org. This is not a screenshot; it's genuinely going through and looking across the org, for this particular user story, at what objects and fields I could reuse, whether I need any automations, and so on. And again, using this internally, we've discovered that it's identifying metadata items we'd forgotten we built, or that we could reuse. Now again, you don't have to completely accept that, but it's doing quality research for you very quickly. And all of this is dependent on a couple of things. One: can we write decent process maps that the application can read, to be able to write the user stories? Two: have we got the right skills to work out whether these user stories are correct and we're happy with them? And three: if we have decent metadata documentation, AI can do a great job of pulling out the fields, the workflows, the automations, the record types, the email templates, whatever is needed to go and deliver the solution. So that's just an example of what's possible, and why I think business analysis is this superpower that will become more and more important as AI starts to touch every area of our Salesforce implementations. I'll pause there for a couple of questions.
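Checking a user story against an org's metadata inventory, as the demo does, amounts to a relevance search over the documented items. A toy keyword-overlap sketch (the inventory and its descriptions are invented; real tools are far more sophisticated, but the dependence on documentation quality is the same: an undocumented item simply cannot match):

```python
# Toy sketch of matching a user story against a documented metadata
# inventory by word overlap. Illustrates why descriptions matter:
# items with poor or missing descriptions can't be surfaced for reuse.

inventory = {  # metadata item -> its MDD-style description (invented)
    "Campaign.Channel__c": "field storing the marketing channel for a campaign",
    "Lead_Routing_Flow": "flow that routes new leads to the demand generation queue",
    "Invoice_Sync": "apex integration syncing invoices from the ERP",
}

def suggest_reuse(story: str, top_n: int = 2) -> list:
    """Rank inventory items by word overlap with the user story text."""
    story_words = set(story.lower().split())
    scored = [
        (len(story_words & set(desc.lower().split())), name)
        for name, desc in inventory.items()
    ]
    scored.sort(reverse=True)
    return [name for score, name in scored[:top_n] if score > 0]

story = "As a marketing team member, I want new leads routed to the demand generation queue"
print(suggest_reuse(story))
```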

 

Audience Member  44:54

[Partially inaudible question about ElementsGPT identifying waste]

 

Ian Gotts  45:04

Yeah, so there's a good question just coming in, which is: ElementsGPT is showing waste. Yes, it's identifying technical debt, and it's helping us reuse items to reduce technical debt. So there are a number of different use cases; that's the first one. I think the other really interesting one is: supposing you're a consultant, or a BA, or an analyst in a firm, and you want to prototype something. You could map out that area, and then it could write the user stories. And if you then ran this against a dev org, which is completely clean, you could very quickly identify what you need to go and build for that prototype. So there are quite a few different places where we can start to use this. And in that little example there, business analysis, process map, user story, recommendations: an eight-hour task was reduced to about five minutes. But that can only be applied if we have strong business analysis skills, so we are able to understand whether AI has come up with decent answers and validate those. So again, back several slides ago, we talked about this: we're not abdicating, we're delegating. We can't ask AI to do things we don't understand. Someone said to me, is GPT any good at writing JavaScript? Honestly, I have no idea, because I can't write JavaScript, so I have no idea whether it's any good. Can it write good user stories? Yeah, it can, because we've spent weeks and weeks optimizing those prompts, and I know what a good user story looks like, because that's my world. So again, we can't expect AI to do things where we don't understand what the answers should be. Hopefully that answers the question. I'm conscious we are getting close to time.
So I want to leave you with some things that are actionable: what can I do today, sitting here as a BA, an admin, an architect, or a developer in the Salesforce world? The first, I think, is to think about how you reinforce those core skills. The BA certification is really good for that. But go back and think about what data governance means, think about how to do that business analysis, and take stock of how well you're documenting. And as I said, there will be a bunch of resources to help you do this. The next is: if you haven't used ChatGPT yet, and there are probably very few people who haven't, get hands-on with ChatGPT. But approach it with the mindset of more detailed, more explicit prompts, and then see how much better those answers get. Not "Can you tell me who won the Battle of Hastings?" or whatever; don't do things which rely on its knowledge of history or of the internet. Think of it instead as a way of solving problems where I give it the puzzle pieces, because it's really good at solving them. Let me give you an example of that. My daughter is a songwriter and is in a band, and the band wanted to remove one of the band members, because he hadn't been engaged for several months. And I said, okay, well, there's a termination clause in your band agreement. So they agreed what they wanted to do: how they wanted to terminate based on the clauses, but also some other things they wanted to offer. I gave GPT the entire contract, because it wasn't that big, and four bullet points for what they wanted as the output. And it wrote a pretty good letter of termination, including some very nice sentences about how they'd appreciated his input and so on, things I probably wouldn't have thought of. Did it do a perfect job? No, it didn't; we had to tweak it.
But again, it didn't need to have any knowledge of the band or the stuff they'd written. I gave it the puzzle pieces: I gave it the contract, and I gave it the bullet points of what I wanted to come back in the letter. I was quite explicit, and it came back with a really good answer. So when I say get hands-on with ChatGPT, approach it from that perspective. Not "I'm just playing around, I've given it a few sentences, what can it tell me about the world?" Think of it as a very good puzzle solver. Which gets me to the next point, which is about mastering prompt engineering. If you've got prompts that you're writing and using, are you storing them so you can then evolve them? So you can go, "I'll reuse that; let's templatize it so I can reuse it." I'll use an example from our VP of Product Management. His team has created a series of prompts which generate a website, our internal website talking about future releases, so that our customer success teams, account teams, and marketing can all have an early view of what's coming. That website is built by GPT, because they built some very detailed prompts. The prompts are multiple paragraphs long, but they insert into those prompts five or six bullet points about a release. It then writes the HTML code and writes the website. So what are you doing in your organization about finding out who else is using GPT, getting those prompts into a central database, evolving them, thinking of them as templates, and reusing them, rather than sitting at GPT every day and typing it in from scratch every time? So let's start thinking about this prompt engineering idea, rather than thinking, "Oh, I just type something in, like Google." And the last is to start thinking about which GPT apps will support you. Clearly, you've got ChatGPT, but there are a number of apps.
Elements.cloud is just one of them, where those prompts and AI are already embedded in the product, and they're taking advantage of it. People like GPTfy, iDialogue, Metazoa, and others: there are a number of organizations out there now that are embedding prompts inside the application to take advantage of AI. And again, I'm not expecting you to start using them all, but at least start looking out there and seeing what's possible, and there are some great videos. So a quick question has come in: "I've seen people using ChatGPT to help make the prompts they want to ask it." That almost feels quite iterative, but I have seen examples of it. If you go and find SalesforceDevOps.net, there's a guy called Vernon Keenan who's been talking about how he uses AI and GPT; he's written some prompts that create the starting point. So it helps you go, "Thinking like an architect in the financial services industry, I want to..." It actually sets up part of that prompt for you. He's got a term for it; I think it's called ACP. So if you go to SalesforceDevOps.net and look at one of his blogs, it talks about how you can use GPT to write some of that early setup in your prompt. It's very early days for this. Great question. So there's a set of resources there; I'm not going to read those out, but they will be available. Let me just summarize. I think the future is already here; it's just not evenly distributed. You may think AI is for other organizations and it's never going to get to you, but there are people out there who are already leveraging AI in a huge way, and it's going to come to your org fairly soon. So now is the time for action. I encourage you to get back to that action plan I set out, because things are moving so fast, and we all need to be on that train.
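The prompt-templating idea described above, a long, detailed prompt stored once with a slot for the five or six bullet points that change per release, can be sketched like this. The prompt wording is invented for illustration, not the team's actual prompt:

```python
# Sketch of prompt templating: a stored multi-paragraph prompt with a
# slot for the variable release bullet points. Wording is invented.
from string import Template

RELEASE_PAGE_PROMPT = Template(
    "You are writing an internal release-preview web page for customer "
    "success, account, and marketing teams.\n"
    "Tone: plain English, no jargon. Output: a single HTML page.\n"
    "Summarize each item below as a short section with a heading:\n"
    "$bullets"
)

def build_prompt(bullets: list) -> str:
    """Insert this release's bullet points into the stored template."""
    return RELEASE_PAGE_PROMPT.substitute(
        bullets="\n".join(f"- {b}" for b in bullets)
    )

prompt = build_prompt([
    "New UPN diagram export to PDF",
    "User story generation from highlighted activity boxes",
])
print(prompt)
```

Storing the template centrally (rather than retyping it) is what lets a team evolve and reuse the prompt, which is the point being made here.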

 

Shannon Gregg  53:18

Well, I don't know about everybody else on this call, Ian, but I just cancelled all of my afternoon meetings so I could start writing user stories using GPT; that had me falling off my chair for a bit. So incredible. Thank you for sharing all this information, Ian. Can you tell everybody where they can find you, please?

 

Ian Gotts  53:36

Oh, yeah. Find me on LinkedIn: Ian Gotts, easy to find. Happy to connect on LinkedIn, or at elements.cloud.

 

Shannon Gregg  53:44

Incredible. Ian, thank you so much for sharing all this information. Once again, you have updated it with the most interesting information out there on AI. I love the concept of prompt engineering, and I think that's something we're all going to be working on: training our little internal 12-year-olds to answer the questions we want them to answer, better. Thank you so much, Ian, for your continued support in helping everybody in the Salesforce ecosystem learn a little bit more about how to use AI to their benefit, and, more importantly, for sharing it with the folks who are following the Life Sciences Dreamin' event series, which we are so excited about. We've got a really great webinar coming up next month on Partnering 2.0, which was another really popular session at the two-day event we had last August. An exciting announcement is coming soon about next fall's 2024 edition of Life Sciences Dreamin'. Thank you to everybody for joining, and thank you to Ian for sharing all of these resources. We will send out the recording along with his list of resources so that you can try to keep up, but I encourage all of you: connect with me, and then follow him on LinkedIn. I learn something new from him every single day. Thanks very much, Ian, thanks for spending your time with us, and have a wonderful day.