
Dynatrace Extends Unified Observability To Generative AI Apps



Technology viewpoints overlap. What this means is that as much as we feel the new breed of Internet of Things (IoT) based cameras, sensors and tracking devices at our backs, we ourselves also spend time scrutinizing this layer of technology as human beings, often just to assess our level of personal privacy and anonymity.

A similar duality exists in the world of AI. As much as we realize that Artificial Intelligence (AI) is there monitoring a percentage of our activities, choices and actions out in the real world and online for the positive good, we also need to make sure we examine AI functions to look at their provenance, observe their state and status, watch their behavior and assess the validity of the decisions they take.

Embryonic prototyping

This process of AI observability is a fluid and embryonic arena, for many reasons, not least because the entire deployment surface of AI itself is still comparatively experimental for many enterprises. But even at this stage, analytics tools and full-blown observability services are being developed to serve this essential need. Among those technology vendors keen to stake an early claim for competency in this field is Dynatrace. The company describes itself as a unified observability and security company with an extended analytics and automation platform.

Terms like unified and holistic are of course often overused, especially in information technology. So does it carry any value when used in this context?

Dynatrace uses the term unified to describe an ability to provide observability across data, code, IT connection points from Application Programming Interfaces (APIs) to containerized software component services to the wider web and cloud… and of course throughout applications and data services themselves. Now, in the current era of generative AI and the supporting repositories and infrastructures it needs, from Large Language Models (LLMs) to vector databases, the company includes these information channels too in its definition of what comprises a unified and holistic view.

As a new development for this year, Dynatrace has enhanced its platform with a specific view (observability-view pun intended) across generative AI and Large Language Models.

“Generative AI is the new frontier of digital transformation,” said Bernd Greifeneder, CTO at Dynatrace. “This technology enables organizations to create innovative solutions that boost productivity, profitability and competitiveness. While transformational, it also poses new challenges for security, transparency, reliability, experience and cost management. Organizations need AI observability that covers every aspect of their generative AI solutions to overcome these challenges. Dynatrace is extending its observability and AI leadership to meet this need, helping customers to embrace AI confidently and securely with unparalleled insights into their generative AI-driven applications.”

The end-to-end AI stack

Now a branded and productized piece of technology, Dynatrace AI Observability is said to cover the end-to-end AI stack. Do we have a whole AI stack now then? Yes we do. The company means everything connected to, responsible for serving, driving and running the components of AI that we now wish to coalesce, including elements of infrastructure, such as hardware like Nvidia Graphics Processing Units (GPUs), foundation models (the base LLMs developers start from) like GPT-4… and then onward to ‘semantic caches’ (see below) and vector databases, such as Weaviate, as well as orchestration frameworks, such as LangChain.

Interestingly and as succinctly defined here by vector database company Zilliz, “Semantic caching stores [looks after] data based on its meaning, which means that two queries with the same meaning will return the same result, even if the underlying data has changed. This can be useful for complex queries involving multiple tables or data sources.”
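To make that idea concrete, here is a minimal sketch of a semantic cache. It is an illustration only: the `embed()` function is a toy bag-of-words stand-in, whereas a real deployment would use an embedding model and a vector database such as Weaviate, and the 0.9 similarity threshold is an arbitrary assumption.

```python
# Minimal semantic-cache sketch: queries are mapped to vectors and looked up
# by similarity, so two queries with the same meaning can hit the same
# cached result. embed() is a toy stand-in (word counts), not a real model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts, ignoring case and word order.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold          # similarity needed for a cache hit
        self.entries: list[tuple[Counter, str]] = []

    def get(self, query: str):
        qv = embed(query)
        for vec, result in self.entries:
            if cosine(qv, vec) >= self.threshold:
                return result               # a similar query was already answered
        return None

    def put(self, query: str, result: str):
        self.entries.append((embed(query), result))
```

With this sketch, a reworded query that carries the same words ("capital of France is what" versus "what is the capital of France") lands on the same cached answer, which is the behavior Zilliz describes, while an unrelated query misses the cache.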

The service also supports the major platforms for building, training and delivering AI models, including Microsoft Azure OpenAI Service, Amazon SageMaker and Google AI Platform. Dynatrace AI Observability uses the company’s Davis AI and other core technologies to deliver a ‘precise and complete view’ of AI-powered applications. As a result, organizations can provide great user experiences while identifying performance bottlenecks and root causes automatically.

The question that now arises is, just how confident can software application development engineers be when coding AI applications and, by equal measure, how confident can we users be when we start to integrate these new smart apps into our lives at home and at work?

“Let’s be realistic and pragmatic about where we are today with AI: it is still in its infancy in terms of deployment in many organizations and that means it is ‘early stage’ in terms of the tools being used, the Large Language Models that are being adopted and the resulting application services that are coming to the surface. If you want that point reinforced, let’s say there is no dial-tone resilience yet i.e. the perhaps comforting continuous sound of your telephone being connected and ready to make a call,” clarified Steve Tack, SVP of product management at Dynatrace. “Because this is where we are at, Dynatrace AI Observability has been built and engineered to provide a route to deploying AI applications that perform and are secure. Any given AI function is typically part of some larger service so it’s important to remember there’s a lot of momentum and cadence in the way AI is created – if things just stayed static in technology then we as a company probably wouldn’t exist, but they don’t… so we do,” he enthused.

More than an AI ‘token’ gesture

We said right at the start here that the whole AI observability game means being able to make sure we look at AI functions to look at their provenance. As such, Dynatrace AI Observability with Davis AI (the company’s own AI engine) helps companies comply with privacy and security regulations and governance standards by tracing the origins of the output created by their apps with precision. Additionally, it helps forecast and control costs by monitoring the consumption of ‘tokens’, which are the basic units that generative AI models use to process queries.

A little nerdy and geeky (in a good way), but worth knowing for the next AI conversation someone might find themselves in when the subject of tokens comes up, tokenization methods (as reinterpreted from OpenAI & ChatGPT News) can be summarized briefly as follows:

  • Space-based tokens: Text is tokenized based on the use of space, so “I read Forbes” would be three tokens: I, read, Forbes.
  • Dictionary-based tokens: Tokens are created for each word used that matches an existing record in a predefined dictionary, so “I read Forbes” would deliver three tokens, one for each commonly understood word, in much the same way as our first example with spaces.
  • Sub-word tokens: Words are split into smaller meaningful units, so “I am enjoying reading Forbes” would be seven tokens: I, am, enjoy, ing, read, ing, Forbes.
  • Byte-Pair Encoding (BPE) tokens: Tokens are built by repeatedly merging the most frequent adjacent pairs of bytes or characters, a technique first developed as an algorithm to compress text strings into shorter values – bringing tokenized text back to its original form is known as detokenization – but this is a complex story for another day.
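Two of the schemes above can be sketched in a few lines. This is an illustration, not any production tokenizer: space-based splitting is literally `str.split()`, and the BPE function shows just a single merge round (fuse the most frequent adjacent pair), whereas real BPE tokenizers apply a long learned table of such merges over bytes.

```python
# Sketches of space-based tokenization and one round of the BPE merge rule.
from collections import Counter

def space_tokens(text: str) -> list[str]:
    # Space-based tokens: split on whitespace, nothing more.
    return text.split()

def bpe_merge_once(tokens: list[str]) -> list[str]:
    # One BPE step: find the most frequent adjacent pair and fuse it
    # everywhere it occurs, scanning left to right.
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            merged.append(a + b)
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged
```

So `space_tokens("I read Forbes")` yields the three tokens from the first bullet, and repeatedly calling `bpe_merge_once` on a list of characters gradually builds the longer subword tokens that compression-style BPE is known for.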

Keen to detail a new partnership, Ali Dalloul, VP of AI at Microsoft, notes that the Azure OpenAI Service (a generative AI product) now aligns with Dynatrace AI Observability to provide shared customers with all the insights detailed here. “This powerful combination helps ensure these services achieve the highest security, reliability and performance standards while enabling the teams that manage them to control costs,” said Dalloul.

Analyst house Gartner suggests that by 2028, the adoption of AI will culminate in over 50% of cloud compute resources being devoted to AI workloads, up from less than 10% in 2023. The wider suggestion here is that many organizations are concerned about the costs associated with generative AI-powered services, often because they can be many times more expensive than traditional cloud services and are difficult to forecast because they are based on the consumption of generative AI tokens by applications that are not yet in production.
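The forecasting problem is essentially token arithmetic, as the back-of-envelope sketch below shows. The per-1,000-token prices and traffic figures here are made-up placeholders, not any vendor’s actual rates; the point is only that spend scales with request volume and with prompt and completion length.

```python
# Back-of-envelope forecast of generative AI spend from token consumption.
# Prices are hypothetical placeholders, not real vendor rates.
def monthly_cost(requests_per_day: int,
                 avg_prompt_tokens: int,
                 avg_completion_tokens: int,
                 price_in_per_1k: float,
                 price_out_per_1k: float,
                 days: int = 30) -> float:
    # Input (prompt) tokens and output (completion) tokens are usually
    # priced separately, per 1,000 tokens.
    in_cost = requests_per_day * avg_prompt_tokens / 1000 * price_in_per_1k
    out_cost = requests_per_day * avg_completion_tokens / 1000 * price_out_per_1k
    return (in_cost + out_cost) * days

# Hypothetical example: 10,000 requests a day, 500-token prompts,
# 200-token completions, $0.01 in / $0.03 out per 1K tokens.
estimate = monthly_cost(10_000, 500, 200, 0.01, 0.03)  # 3300.0 per month
```

Doubling average prompt length in this model adds $1,500 a month on its own, which is exactly why observability over token consumption, before an application reaches production, matters for budgeting.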

With world governments now establishing regulations focused on using AI technologies responsibly and ethically (without the risk of hallucinations, AI bias and so much more) and in compliance with applicable laws, the need to watch and observe AI components has almost certainly never been greater.

IT watching culture

This whole story speaks of a different way of using technology compared to the way we used it in pre-millennial times.

Although many of us didn’t quite have the proximity that we do these days with ‘apps in our pocket’ given the ubiquity of the smartphone, the time we did spend with technology did not see us act with so much analytic inquisitiveness toward the platforms and tools we used.

We plugged in, tuned in and turned off, mostly.

Today that acceptance of IT has of course changed: we all understand the existence of Internet scams, ransomware and automation to one degree or another, and the arrival of generative AI has not been without its questions. As we the users now more closely ‘watch’ the technology that we use, it is perhaps comforting to know that there are system-level and AI-centric monitoring and observability tools being developed to provide a viewing lens lower down.


