
Dan Wasserman on prompt engineering for generative AI


Dan Wasserman, Chief Operating Officer of KJT, a full-service strategic market research and consulting firm focused on the healthcare industry, joined our podcast. Dan has been with KJT for the past 13 years, leading a team that conducts market research and health outcomes research. He also oversees information security management, research compliance, operations analytics, and general business operations. He spoke with us about prompt engineering for AI. Below is an edited, abbreviated transcript of our conversation with Dan.

Michael: Dan, what’s the most significant thing you’re going to share with us today?

Dan: Today, we could talk about various aspects of AI and the insights industry, but I want to focus on prompt engineering. It might sound intense, especially for those not tech-savvy, but there’s a lot to gain by understanding how to use large language models for the future of work. There’s enormous potential for productivity, and it’s crucial for anyone in the industry to consider.

Michael: That sounds intriguing. Before we delve into that, let’s talk about the use of AI, particularly in the context of ChatGPT. People initially treated it like Google, but the results weren’t always impactful. How different is prompting compared to a Google search?

Dan: Some principles carry over from web searching, but there are additional techniques that make prompting far more sophisticated. It’s not about treating the two the same; it’s about understanding how to get the most out of generative AI, which results in higher-quality outputs.

Michael: True, and it seems like there’s a learning curve, akin to how we learned to get what we needed from Google. So, for those unfamiliar with KJT, could you provide some background?

Dan: KJT has been around since 2007, primarily focusing on healthcare market research. We work with clients globally in pharma, medical devices, and health insurance. Our foundation is in primary market research, but we’ve expanded into real-world evidence generation and medical and scientific communications. KJT is an employee-owned, woman-owned company recognized as a best place to work, with a broad spectrum of offerings that make it an exciting place to be.

Michael: It does sound like a dynamic work environment. Moving on, ChatGPT launched in November 2022. Was this your first foray into AI, or had you explored AI before?

Dan: ChatGPT wasn’t my first encounter with AI. There are different types of AI, and while generative AI has gained popularity recently, I had taken some coursework on AI before that. When I started exploring ChatGPT around its launch, I initially thought it was just another hyped tool. However, as I delved deeper, it became clear there was significant potential beyond a passing trend. I’m not an AI expert, but my technical background allowed me to translate AI developments into practical insights for market researchers.

Michael: Interesting journey. Before ChatGPT, you had experience with conversational AI, dating back to 2007. How does generative AI, like ChatGPT, differ from previous AI experiences?

Dan: Conversational AI, like the digital concierge in 2007, was more rule-based and followed conversation trees. It required significant manual input to anticipate various scenarios. Generative AI, on the other hand, is a different paradigm. It involves large language models like ChatGPT, offering a more dynamic and versatile approach. While previous AI relied on predefined rules, generative AI learns and generates responses, providing a novel and efficient way to interact.
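
To make the contrast concrete, here is a minimal sketch (not KJT’s actual system) of the two approaches Dan describes: a rule-based concierge hard-codes a conversation tree, while a generative approach hands the exchange to a language model as a single prompt. The call_llm helper is a hypothetical placeholder for whichever chat-completion API you use.

```python
# Rule-based conversational AI: every path must be anticipated by hand.
CONVERSATION_TREE = {
    "greeting": {
        "reply": "Hello! Are you asking about appointments or billing?",
        "branches": {"appointment": "appointments", "billing": "billing"},
    },
    "appointments": {"reply": "I can book Monday or Tuesday. Which works?", "branches": {}},
    "billing": {"reply": "Please have your invoice number ready.", "branches": {}},
}

def rule_based_reply(state: str, user_text: str) -> tuple[str, str]:
    """Follow the predefined tree; anything unanticipated falls through."""
    node = CONVERSATION_TREE[state]
    for keyword, next_state in node["branches"].items():
        if keyword in user_text.lower():
            return CONVERSATION_TREE[next_state]["reply"], next_state
    return node["reply"], state  # no rule matched, so the node repeats itself

# Generative AI: no hand-built tree; the model generates a response from a prompt.
def generative_reply(user_text: str) -> str:
    prompt = (
        "You are a concierge for a healthcare clinic. "
        "Answer the visitor's question helpfully and concisely.\n"
        f"Visitor: {user_text}"
    )
    return call_llm(prompt)  # hypothetical wrapper around a chat-completion API
```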

Michael: It seems like a considerable leap in AI capabilities. Given the current landscape, why is it crucial for market researchers to understand the workings of generative AI?

Dan: Understanding generative AI is essential for market researchers for several reasons. First, it brings clarity about how the data is actually processed, which helps address concerns like hallucinations and safeguards data integrity. Second, it raises considerations of data privacy and security, especially when dealing with confidential information. Finally, it is crucial for evaluating the impact on confidence levels, a key aspect of market research.

Let’s talk about tabulating data, whether through traditional techniques or with generative AI. We can use generative AI to mine and summarize data and present it in a table. That gives you a source of truth, like stating “54% of respondents said XYZ,” with a confidence level based on the sample size. The risk arises when that step is skipped and the analysis lacks a clear confidence level. It’s crucial to acknowledge the limitations: large language models like ChatGPT are language models, not math models. Using them for qualitative analysis poses its own challenges, since they may not handle ambiguity well. Human involvement is needed to prevent potential biases and hallucinations.
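
To illustrate Dan’s point that the arithmetic should live outside the language model, here is a small sketch with made-up numbers (not from any KJT study) that computes a normal-approximation confidence interval for a tabulated proportion such as “54% of respondents said XYZ.”

```python
import math

def proportion_confidence_interval(successes: int, n: int, z: float = 1.96):
    """95% normal-approximation (Wald) interval for a survey proportion."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical tabulation: 54 of 100 respondents said XYZ.
p, low, high = proportion_confidence_interval(54, 100)
print(f"{p:.0%} said XYZ (95% CI: {low:.0%} to {high:.0%})")
# With n = 100 the interval is roughly 44% to 64%; that spread is the confidence
# information you lose if a language model summarizes verbatims without tabulation.
```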

Michael: People initially feared AI might replace jobs, but the consensus is that while AI can get us 75% of the way there, human involvement is crucial for understanding context and ensuring the output’s accuracy. Let’s explore the current landscape of AI in market research. What’s your perspective on where we are and what you’re observing?

Dan: I see two camps: one hesitant due to concerns about research value, job displacement, and data privacy, and another, less familiar with AI, but eager to experiment. There’s a lot of experimentation happening, which is positive. I believe AI is currently best suited as a personal productivity tool, with systematic enterprise applications still evolving. We’re finalizing an industry survey, and the results might provide richer insights into the industry’s stance.

Michael: Moving on to prompt engineering, can you shed light on what it entails, how it may have evolved, and how your perception of it has changed since you delved into it?

Dan: Prompt engineering might sound intimidating, but it’s essentially about learning how to interact with large language models. It’s empowering, much like learning to use Excel or Google effectively. It formalizes the process of translating our thoughts into instructions for the model. It’s not about learning a programming language; it’s about understanding how to structure prompts effectively. I took a course on Coursera by Dr. Jules White, which was fantastic and demystified prompt engineering, making it much more accessible.
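
As an illustration of what structuring a prompt can look like in practice, here is a minimal sketch of a reusable template that spells out persona, task, constraints, and output format. The wording and the discussion-guide scenario are illustrative assumptions, not material from Dan’s work or Dr. White’s course.

```python
# A structured prompt: persona, task, constraints, and output format are
# spelled out explicitly instead of asking a one-line question.
PROMPT_TEMPLATE = """\
You are a senior healthcare market researcher.

Task: Draft an outline for a 45-minute discussion guide on {topic},
to be used in qualitative interviews with {audience}.

Constraints:
- 5 to 7 top-level sections, ordered from warm-up to wrap-up.
- Each section should include 2 to 3 probing questions.
- Avoid leading or double-barreled questions.

Output format: a numbered outline with approximate timings per section.
"""

def build_prompt(topic: str, audience: str) -> str:
    return PROMPT_TEMPLATE.format(topic=topic, audience=audience)

print(build_prompt("adoption of continuous glucose monitors", "endocrinologists"))
```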

Michael: You’ve also put together your own course on prompt engineering. Could you share more about it, why you created it, and what participants can gain from it?

Dan: The course stemmed from the need for accessible knowledge on prompt engineering tailored to everyday researchers and tech workers. It emerged from internal needs but expanded to benefit the broader industry. It’s about sharing techniques specific to the challenges researchers face daily. The course empowers users to feel confident in prompt engineering, offering a foundation for experimentation and growth.

Michael: Excellent insights. Prompt engineering seems like a valuable skill for researchers. Now, how does understanding prompt engineering enhance a researcher’s abilities?

Dan: Prompt engineering offers significant benefits, such as quicker creation of discussion guides or content for webinars. For instance, a researcher with 5-10 years of experience might use it to outline a discussion guide efficiently. It’s a time-saving tool that can assist in generating ideas, titles, and outlines. It’s particularly helpful for those not naturally creative in language or approach.

Michael: That’s fascinating. It’s great to hear about the practical applications of prompt engineering. Can you provide an example of how learning about prompt engineering has improved your capabilities as a researcher?

Dan: One clear example is in rapidly outlining a discussion guide. Instead of spending 30-45 minutes outlining sections, I can now create an initial outline in about 5 minutes. It’s a significant time-saver. Another example is generating content for webinars. It helps in brainstorming ideas, creating outlines, and refining content for slides, making the process more efficient.
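
As a rough sketch of the kind of workflow Dan describes, drafting a guide or webinar outline often works as a short sequence of prompts, generate first and then refine, rather than a single request. The prompts and the chat helper below are illustrative assumptions, not his actual ones.

```python
# Illustrative multi-turn refinement workflow for drafting a webinar outline.
# `chat` stands in for a hypothetical function that sends the running message
# list to an LLM chat API and returns the assistant's reply as a string.

def run_refinement(chat, prompts: list[str]) -> list[str]:
    """Send each prompt in turn, keeping prior turns as context."""
    messages, replies = [], []
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        reply = chat(messages)                     # placeholder API call
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

webinar_prompts = [
    "Brainstorm ten titles for a webinar on prompt engineering for market researchers.",
    "Take the third title and draft a six-slide outline, one key takeaway per slide.",
    "Rewrite the fourth slide so it speaks directly to qualitative researchers.",
]
# run_refinement(some_chat_function, webinar_prompts)  # requires a real chat function
```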

Michael: Those are impressive outcomes. It seems like prompt engineering can indeed enhance efficiency and creativity in research. Thank you for sharing these valuable insights and your experiences with prompt engineering.




