
You Ask, I Answer: Should Generative AI Be In Your Product? – Christopher S. Penn



In today’s episode, Chris inquires about integrating generative AI into complex software products. You’ll learn how to assess whether your product can benefit from AI, especially language models, and understand the importance of internal scripting languages and APIs in this integration. Discover how generative AI can enhance user experience in various applications, from gaming to office tools. Tune in for insightful strategies on implementing AI in your software, ensuring a more engaging and efficient user interaction.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:


Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Christopher Penn: In today’s episode, Chris asks: “We have a complicated software product, and I’m wondering whether generative AI should be in it.

How do you know when you should or should not put in generative AI, particularly language models?”

Okay, so I assume we’re talking about something like Microsoft Bing, where there’s now a chat box, or Microsoft Copilot, or Google Bard or Google Duet.

There’s a chat box that appears now in your application.

And you can type a prompt into that and have the application do something.

When should you use this? When should you not? There’s no hard and fast rule about whether it’s a good use case; a lot of it will come down to requirements gathering. Is it something that would benefit your users? That comes from surveying users and asking: what are the pain points you currently have using our software? If our software is difficult to use, would a natural language interface make it easier to use?

One way you can tell whether it would be a good idea is if your software has an internal programming language and an internal API, something that allows other parts of the software to communicate with itself.

There’s a scripting language built in, because it’s such a complex piece of software that it needs that.

If your software already has a scripting language or an internal API, then yeah, it makes a lot of sense.

Because most of the work is done.

At that point, you just have to take a language model and train it on your company’s internal scripting language, whatever you chose.

And then when the language model is interacting with the user, what it’s really doing behind the scenes is writing code in your software’s scripting language to tell it to do things.
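As a minimal sketch of that pattern, here is an entirely hypothetical example: `call_model` stands in for a real LLM call, and the mini command language (`select_all`, `apply_filter`, `export`) is invented for illustration only.

```python
# Hypothetical sketch: routing a user's natural-language request through a
# language model that emits commands in the product's own scripting language.
# `call_model` and the command names below are invented for illustration.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns scripting-language code."""
    # A real implementation would send `prompt` to a model fine-tuned on
    # the product's internal scripting language. Here the reply is canned.
    return 'select_all(); apply_filter("grayscale"); export("out.png")'

def run_script(script: str) -> list[str]:
    """Toy interpreter for the product's internal command language."""
    executed = []
    for statement in script.split(";"):
        statement = statement.strip()
        if statement:
            executed.append(statement)  # a real product would dispatch here
    return executed

user_request = "Make this image black and white and save it as out.png"
script = call_model(f"Translate to product script: {user_request}")
print(run_script(script))
# ['select_all()', 'apply_filter("grayscale")', 'export("out.png")']
```

The design point is that the model never touches the product's internals directly; it only emits the same commands the product already knows how to execute.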

This is how you see these generative prompts appear in things like, for example, Microsoft Office, which has had a robust programming language for decades now called Visual Basic.

And language models have been trained on so much Visual Basic that when you tell one to do something like turn this document into a PowerPoint presentation, what it’s doing is translating your words into code (because it’s a language model and code is a language) and then executing that code.

That’s pretty straightforward as to how these things work.

Now, if your software product does not have an API or an internal scripting language, the next question you have to ask is: do we have a problem in our interface that natural language will solve? For example, we have all these smart assistants that are, in retrospect, not all that smart, because they have very, very limited vocabularies.

Compare that to a tool like ChatGPT, where you can have a free-form conversation about just about anything.

Would a language model in this device be a good application? Yeah, that’s a very strong candidate, because you’re already using language, just in a very limited way.

And it would definitely benefit from having additional language.

If you make a video game, a game like World of Warcraft, you have NPCs in the game, non-player characters that have canned dialogue.

If you go to the inn in Goldshire tomorrow, and the week after, and the week after that, and you talk to the innkeeper or the bartender, you will get the same canned dialogue over and over and over again, and it will never change.

If you had a language model there that had strong boundaries but was otherwise able to chat, you could roll up to the inn in Goldshire, have a conversation with the innkeeper, and have it be different every time.

It might say, “Oh yeah, King Anduin’s procession came through yesterday and left a huge mess in the town.”

Or, “I saw some elves come in from the ruins of Darnassus the other day, and they drank all the wine,” etc.
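A minimal sketch of what that bounded setup might look like, assuming a hypothetical innkeeper NPC: the system prompt pins the character, injects recent world events, and sets refusal rules. The names, events, and prompt wording here are all invented for illustration.

```python
# Hypothetical sketch of giving a game NPC a language model with hard
# boundaries: the system prompt fixes the character, supplies world facts,
# and states refusal rules. Everything below is an invented example.

WORLD_EVENTS = [
    "King Anduin's procession passed through yesterday and left a mess.",
    "Elves from the ruins of Darnassus drank all the wine last night.",
]

def build_npc_prompt(npc_name: str, events: list[str]) -> str:
    """Assemble the bounded system prompt fed to the model each session."""
    rules = (
        "Stay in character at all times. Only discuss in-game topics. "
        "If asked about the real world, deflect politely."
    )
    facts = "\n".join(f"- {e}" for e in events)
    return (
        f"You are {npc_name}, an innkeeper in Goldshire.\n"
        f"{rules}\n"
        f"Recent events you know about:\n{facts}"
    )

prompt = build_npc_prompt("Innkeeper Farley", WORLD_EVENTS)
print(prompt)
```

Because the game feeds fresh `WORLD_EVENTS` into the prompt each day, the same NPC can say something different every visit while staying inside its boundaries.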

You would have these natural language interactions that make so much sense to create and extend that sense of immersion, that sense of fantasy that you’re in this virtual space.

So there’s a very strong application in that kind of software product.

In a product like CAD software or photo editing software, there are some things where you’re better off just writing out what you want.

If the model is well trained, it’s much easier to have the software just translate your user intent.

Companies like Adobe are working on this: there are generative prompts in Photoshop and in Illustrator, and they’re coming soon to Premiere.

So there’s a lot of applications there.

Remember that language models are good at language, and they’re not good at things that are not language.

So if the problem you’re dealing with in your product is a language problem, a language model would be a great choice.

If you’re doing image generation, with diffusers and diffusion technology, image generators, then for an image generation task, an image generation model makes a great deal of sense.

If you have a task that is not image generation, maybe not.

Now, there are some caveats and some loopholes here.

One of which is sometimes data can be turned into other formats.

For example, when you’re recording sound, as I’m talking right now, sound comes in a variety of data formats.

But one of the forms it can come in is what’s called a spectrogram: a visual representation of all the different frequencies that are present in a recording. You can take an image model, have it learn what a sound is based on the spectrogram, and then have a diffusion model predict essentially what the spectrogram should look like, given any prompt.

There’s a lot of research being done in this field right now to replicate the spectrogram of common sounds.

So you’re not using sound to predict sound, because that’s actually surprisingly difficult to do.

You’re using images to replicate sound.
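The spectrogram idea can be sketched in a few lines of standard-library Python: slice the waveform into frames and take the magnitude of a discrete Fourier transform per frame, giving the 2-D frequency-over-time grid an image model could learn from. This is a teaching sketch, not production audio code; real pipelines use windowed, overlapping FFTs.

```python
# Sketch of the audio-as-image idea: turn a waveform into a spectrogram,
# a 2-D grid of frequency magnitudes over time, that an image model could
# then learn from. Pure stdlib DFT to stay self-contained (and slow).
import cmath
import math

def spectrogram(samples, frame_size=64):
    """Magnitude spectrum per non-overlapping frame (rows = time)."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    grid = []
    for frame in frames:
        row = []
        for k in range(frame_size // 2):  # keep non-negative frequencies
            coeff = sum(x * cmath.exp(-2j * math.pi * k * n / frame_size)
                        for n, x in enumerate(frame))
            row.append(abs(coeff))
        grid.append(row)
    return grid

# A sine wave at 8 cycles per 64-sample frame: energy lands in bin 8.
wave = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
grid = spectrogram(wave)
peak_bin = max(range(len(grid[0])), key=lambda k: grid[0][k])
print(peak_bin)  # 8
```

Once sound is in this grid form, generating a plausible spectrogram and inverting it back to audio is an image problem, which is exactly the loophole described above.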

As I mentioned earlier, language models are really bad at things that are not language, like math, but they’re really good at things like coding, because coding is a language.

So what you’ll see a lot, most visibly in ChatGPT, is that when you ask a math problem, it will actually write code to solve the math problem, because the code can execute the math and the language model doesn’t have to.
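That “write code to do the math” pattern can be sketched like this. `fake_model_math` is a canned stand-in for a real model call, and the bare `exec` is only illustrative; real systems sandbox model-generated code far more carefully before running it.

```python
# Sketch of the pattern where a model answers arithmetic by emitting code
# that the host executes, rather than predicting the digits itself.
# `fake_model_math` is a canned stand-in for a real LLM call.

def fake_model_math(question: str) -> str:
    """Stand-in for a model that answers math questions with Python code."""
    # e.g. the model turns "What is 1234 * 5678?" into an executable line
    return "result = 1234 * 5678"

def solve_with_code(question: str) -> int:
    """Run the model-generated code and read back the result."""
    code = fake_model_math(question)
    namespace: dict = {}
    # Restricted globals is NOT a real sandbox; shown only to make the point
    # that model output should never run with full privileges.
    exec(code, {"__builtins__": {}}, namespace)
    return namespace["result"]

print(solve_with_code("What is 1234 * 5678?"))  # 7006652
```

The language model only has to do language (translate the question into code); the interpreter does the arithmetic exactly.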

So those are the short answers.

If you’ve got a language problem, a language model will be a good choice.

If you’ve got an image problem, an image model will be a good choice.

If you have an internal scripting language already, then you should absolutely be connecting a language model to that and having it write code that will make your users’ lives easier.

The final thing to keep in mind is what your tolerance and appetite is for risk.

Language models can hallucinate: even with strong guardrails, they can say things that are unpredictable, because they are hallucinatory by nature; they’re making things up.

So the question you have to ask is: how much of an appetite for risk do you have if the model does go off the rails in some way that is moderately predictable? Go back to the video game example: the language model might say something offensive. Is that a level of risk you’re willing to tolerate? What level of risk are you willing to tolerate? These are considerations that all have to be worked through in requirements gathering before you start implementing generative AI in your products.
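One common mitigation for that risk can be sketched as a last-line output filter. The blocklist terms and fallback line below are placeholders, and production guardrails layer trained classifiers on top of anything this simple, but the shape of the control is the same: check the model's output before the user ever sees it.

```python
# Sketch of a last-line guardrail: block model responses containing
# disallowed terms and fall back to canned dialogue. The blocklist and
# fallback text are invented placeholders, not a real moderation list.

BLOCKLIST = {"offensive_term", "slur_example"}  # placeholders only
FALLBACK = "Sorry, traveler, I've nothing to say about that."

def guarded_reply(model_output: str) -> str:
    """Pass the reply through only if it clears the blocklist."""
    lowered = model_output.lower()
    if any(term in lowered for term in BLOCKLIST):
        return FALLBACK
    return model_output

print(guarded_reply("Welcome to the inn!"))          # passes through
print(guarded_reply("something slur_example here"))  # replaced by FALLBACK
```

A filter like this doesn't eliminate the risk; it caps the worst case, which is exactly the kind of trade-off the requirements-gathering conversation should surface.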

But it’s a really good question.

And I think it’s one that everyone who has ownership of a software product needs to discuss with their teams, to decide how, if at all, AI should be in their products.

Thanks for tuning in.

We’ll talk to you next time.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.




