
Armchair Architects: Artificial Intelligence, Large Language Models, and Architects (Part 1 of 2)



https://techcommunity.microsoft.com/t5/azure-architecture-blog/armchair-architects-artificial-intelligence-large-language/ba-p/4021819

Welcome back to the fourth season of Armchair Architects! You asked for more, and we’re here to deliver. This season, we’re diving deep into the world of Artificial Intelligence (AI), specifically focusing on large language models (LLMs) with our host David Blank-Edelman and our armchair architects Uli Homann and Eric Charran.


Our conversation kicks off with Eric and Uli, two seasoned architects, sharing their experiences with ChatGPT and Bard. The topic? Large language models (LLMs), a term you’ll hear a lot throughout the season.


Eric shares how disruptive these hosted foundation models, like ChatGPT, have been, changing our lives in unexpected and delightful ways. The most impactful change he’s seen is in how they support his day-to-day work as an architect.


The Architect’s New Assistant

As architects, understanding the product features, prioritized requirements, and non-functional requirements is crucial. Traditionally, this would involve extensive research and application of various patterns like the bulkhead pattern and the orchestrator pattern.


However, the advent of generative AI has revolutionized this process. Eric shares an instance where he plugged some requirements into ChatGPT, suggested that the orchestrator pattern might be relevant, and asked for its opinion. The result? A cogent response on how to meet the requirements, understand all the features (both functional and non-functional), adhere to the architectural patterns, and even get recommendations on other potentially relevant patterns.
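
To make this concrete, here’s a minimal sketch of that workflow using the OpenAI Python SDK. The model name, the requirements, and the prompt wording are all illustrative assumptions, not the exact prompt Eric used:

```python
# Minimal sketch: asking a hosted LLM to evaluate requirements against a
# candidate architecture pattern. Assumes the OpenAI Python SDK (v1+) and
# an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical requirements, invented for illustration.
requirements = """
- Orders arrive from three upstream systems
- Processing steps must run in a defined sequence with retries
- A failure in one workflow must not block unrelated workloads
"""

prompt = (
    "Given the requirements below, evaluate whether the orchestrator "
    "pattern is a good fit. Explain how it addresses the functional and "
    "non-functional requirements, and recommend any other architectural "
    "patterns worth considering.\n" + requirements
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute your own deployment
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```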


This process, which Eric refers to as ‘prompt engineering’, has transformed what used to be a manual activity into an automated one: architects now have an AI research assistant that can take on architectural legwork. The architect still needs to be the arbiter of whether the AI’s suggestions are correct, guarding against ‘hallucinations’ (plausible-sounding but false information generated by the model), but it’s a great starting point.


Unpacking the Jargon

During their discussion, Eric mentioned some interesting terms like ‘prompt engineering’ and ‘hallucinations’. They also take a moment to define what a large language model is for those unfamiliar with the term.


In essence, a large language model is the continuation of two technologies that have been scaled up again and again: neural networks, an outgrowth of decades of AI research that matured through the 1990s, and deep learning, which rose to prominence in the mid-2010s through work at Google and elsewhere.


The Power of Deep Learning

If you’re a Dune fan, you might liken deep learning to space folding: folding the neural network allows for greater depth, hence the term ‘deep learning’. The OpenAI folks, in collaboration with Microsoft and its Azure AI infrastructure, have managed to push this to a scale of trillions of parameters, creating a large language model.


These large language models focus on human language, and not just speech or words: also images, code, and other forms of human expression. Essentially, large language models are communication models. This is evident in the work behind OpenAI’s GPT models, Google’s Bard, and Meta’s Llama models.


Prompt Engineering: Steering the Model

Prompt engineering is about using human expertise within a specific domain to steer the model toward productive outputs. A large language model uses its vast training corpus to predict the most likely next word in a sequence of words. Prompt engineering structures the query so that the most accurate output is produced for the input question.
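
To make “predict the most likely next word” concrete, here’s a toy sketch of greedy next-word prediction. A real LLM computes these probabilities with a neural network over an enormous vocabulary; the hand-written table below is invented purely for illustration:

```python
# Toy sketch of greedy next-word prediction over a hand-written
# probability table. A real LLM derives these probabilities from a
# neural network trained on a vast corpus; this table is invented.
toy_probs = {
    ("the", "orchestrator"): {"pattern": 0.72, "service": 0.18, "model": 0.10},
    ("orchestrator", "pattern"): {"coordinates": 0.55, "routes": 0.30, "is": 0.15},
}

def next_word(context):
    # Greedy decoding: pick the highest-probability continuation.
    candidates = toy_probs[context]
    return max(candidates, key=candidates.get)

words = ["the", "orchestrator"]
for _ in range(2):
    words.append(next_word((words[-2], words[-1])))

print(" ".join(words))  # -> the orchestrator pattern coordinates
```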


For instance, instead of asking the model for great patterns to create a microservice, which might result in a dump of information, prompt engineering refines the question. It constructs a prompt so that the model outputs the information in a form that can be used effectively.
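
As a sketch of what that refinement can look like, compare a vague prompt with a structured one. Both strings are invented for illustration; the structured version constrains scope, depth, and output format so the answer is directly usable:

```python
# Illustrative only: an unrefined prompt versus a structured one.
vague_prompt = "What are great patterns to create a microservice?"

structured_prompt = """Recommend architectural patterns for a microservice
that processes payment events at roughly 1,000 requests per second with
strict ordering guarantees.

Constraints:
- Consider only patterns relevant to reliability and message ordering.
- For each pattern, give one sentence on fit and one known trade-off.

Output format: a numbered list of at most three patterns.
"""
```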


The Hallucination Check

Of course, there’s the hallucination check. This is a crucial step to ensure the accuracy of the model’s output. But before we delve into hallucinations, it’s important to understand that prompt engineering is not just about direction, but also about constraining.


The corpus that the system has access to is incredibly wide, encompassing human knowledge acquired over thousands of years. Prompt engineering effectively tells the model to constrain what it’s looking at. One of the niftiest tricks in prompt engineering is asking the model to take on a persona: for example, asking it to assume the role of a software architect looking for patterns for microservices implementations. This allows the model to switch its perspective and provide better, deeper outputs.
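
Here’s a hedged sketch of the persona trick using the common chat-message convention, where a system message sets the role before the user’s question is asked; the wording is illustrative:

```python
# Persona prompting: the system message constrains the model's perspective
# before the user's question is asked. Wording is illustrative.
messages = [
    {
        "role": "system",
        "content": (
            "You are a seasoned software architect evaluating patterns "
            "for microservices implementations. Answer with the "
            "trade-offs an architect would weigh."
        ),
    },
    {
        "role": "user",
        "content": (
            "Which patterns should I consider for inter-service "
            "communication in an order-processing system?"
        ),
    },
]

# These messages can be passed to any chat-completion style API, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```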


As we wrap up Part 1 of this episode, we’re about to head in a slightly different direction. Join us for Part 2 as we continue our exploration of AI and large language models.


Recommended Next Steps

If you’d like to learn more about the general principles prescribed by Microsoft, we recommend the Microsoft Cloud Adoption Framework for platform and environment-level guidance and the Azure Well-Architected Framework. You can also register for an upcoming workshop led by Azure partners on cloud migration and adoption topics, which incorporates click-through labs to ensure effective, pragmatic training.


You can view the whole video at the link above and check out more videos from the Azure Enablement Show.






