Expedera NPUs Run Large Language Models Natively on Edge Devices


Expedera NPU IP adds native support for LLMs and Stable Diffusion

Expedera, Inc., a leading provider of customizable Neural Processing Unit (NPU) semiconductor intellectual property (IP), announced that its Origin NPUs now support generative AI on edge devices. Designed to handle both classic and generative AI workloads efficiently and cost-effectively, Origin NPUs offer native support for large language models (LLMs) as well as Stable Diffusion. In a recent performance study using Meta AI's open-source foundational LLM Llama-2 7B, Origin IP demonstrated performance and accuracy on par with cloud platforms while achieving the energy efficiency required for edge and battery-powered applications.
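For readers curious what on-device LLM inference looks like in practice, here is a minimal sketch using the open-source llama-cpp-python bindings with a quantized Llama-2 7B checkpoint. The runtime, file name, and settings are illustrative assumptions for a generic edge CPU; the announcement does not describe Expedera's software stack, and an Origin NPU would run the model through the vendor's own toolchain.

```python
# Minimal on-device Llama-2 7B inference sketch (illustrative only).
# Assumes a quantized GGUF file and the llama-cpp-python bindings,
# neither of which is mentioned in the announcement.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b.Q4_K_M.gguf",  # hypothetical quantized weights
    n_ctx=2048,    # context window
    n_threads=4,   # a typical edge-device CPU budget
)

output = llm(
    "List three benefits of running an LLM on an edge device:",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```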

LLMs bring a new level of natural language processing and understanding, making them versatile tools for communication, automation, and data analysis. They unlock new capabilities in chatbots, content generation, language translation, sentiment analysis, text summarization, question-answering systems, and personalized recommendations. Due to their large model size and the extensive processing required, most LLM-based applications have been confined to the cloud. However, many OEMs want to reduce reliance on costly, overburdened data centers by deploying LLMs at the edge. Additionally, running LLM-based applications on edge devices improves reliability, reduces latency, and provides a better user experience.

“Edge AI designs require a careful balance of performance, power consumption, area, and latency,” said Da Chuang, co-founder and CEO of Expedera. “Our architecture enables us to customize an NPU solution for a customer’s use cases, including native support for their specific neural network models such as LLMs. Because of this, Origin IP solutions are extremely power-efficient and almost always outperform competitive or in-house solutions.”

Expedera’s patented packet-based NPU architecture eliminates the memory-sharing, security, and area penalties that conventional layer-based and tiled AI accelerator engines face. The architecture scales to meet performance needs from the smallest edge nodes to smartphones to automobiles. Origin NPUs deliver up to 128 TOPS per core with sustained utilization averaging 80%, compared to the 20-40% industry norm, avoiding dark-silicon waste.
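To put those figures in perspective, effective throughput is peak TOPS multiplied by sustained utilization. A quick back-of-the-envelope sketch (the 128 TOPS peak and both utilization figures are quoted from the announcement; the side-by-side comparison is illustrative):

```python
# Effective throughput = peak TOPS x sustained utilization.
# The 128 TOPS peak, the 80% Origin figure, and the 20-40%
# "industry norm" range are all taken from the article.
PEAK_TOPS = 128.0

def effective_tops(peak: float, utilization: float) -> float:
    """Sustained throughput actually delivered to the workload."""
    return peak * utilization

origin = effective_tops(PEAK_TOPS, 0.80)         # 102.4 effective TOPS
industry_low = effective_tops(PEAK_TOPS, 0.20)   # 25.6 effective TOPS
industry_high = effective_tops(PEAK_TOPS, 0.40)  # 51.2 effective TOPS

print(f"Origin (80% util):  {origin:.1f} effective TOPS")
print(f"Industry (20-40%):  {industry_low:.1f}-{industry_high:.1f} effective TOPS")
```

At the same peak rating, 80% sustained utilization yields roughly two to four times the usable throughput, which is the dark-silicon argument in concrete numbers.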
