Photonic Chip Enables 160 TOPS/W Artificial General Intelligence


BEIJING, April 22, 2024 — Researchers from Tsinghua University have reported the development of a photonic AI chiplet, called “Taichi,” that supports artificial general intelligence (AGI) computing at an energy efficiency of 160 TOPS/W.

Developments in the field of AGI impose strict energy and area efficiency requirements on next-generation computing. Poised to break the plateauing of Moore’s Law, integrated photonic neural networks have shown the potential to achieve superior processing speeds and high energy efficiency. However, they have suffered from severely limited computing capability and scalability, such that only simple tasks and shallow models have been realized experimentally.

The team at Tsinghua University developed the large-scale chip along with a distributed optical computing architecture, achieving on-chip computing with billions of neurons at 160 TOPS/W energy efficiency. The chip not only exploits the high parallelism and high connectivity of wave optics to compute at very high density, but also uses a general, iterative encoding-embedding-decoding photonic computing scheme to scale the optical neural network to the billion-neuron level.
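
The encoding-embedding-decoding loop described above can be illustrated conceptually. The Python sketch below is a minimal toy, not the authors' optical implementation: the 64 × 64 "photonic core," the random weights, and the magnitude readout are all illustrative assumptions. It only shows how repeatedly streaming slices of a large input through a small, fixed core can emulate a much larger network.

```python
import numpy as np

# Conceptual sketch only: each pass routes a 64-wide slice of a large input
# through a small fixed-size "photonic" core, here stood in for by a random
# 64 x 64 matrix, and the decoded outputs are accumulated.

CORE_DIM = 64  # the chiplet input/output dimension reported in the article

rng = np.random.default_rng(0)
core = rng.standard_normal((CORE_DIM, CORE_DIM)) / np.sqrt(CORE_DIM)

def encode(x_slice):
    """Map an input slice onto the optical input modes (identity here)."""
    return x_slice

def embed(encoded):
    """Propagate the encoded slice through the small core."""
    return core @ encoded

def decode(embedded):
    """Read out magnitudes, a stand-in for photodetection."""
    return np.abs(embedded)

def iterative_inference(x, n_passes):
    """Stream 64-wide slices through the core and accumulate the decoded
    outputs, so the effective network is larger than the physical core."""
    slices = x.reshape(n_passes, CORE_DIM)
    out = np.zeros(CORE_DIM)
    for s in slices:  # one encode-embed-decode pass per slice
        out += decode(embed(encode(s)))
    return out

large_input = rng.standard_normal(16 * CORE_DIM)   # 1024-dimensional input
print(iterative_inference(large_input, 16).shape)  # -> (64,)
```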

The integrated large-scale interference-diffraction-hybrid photonic chiplet developed by Tsinghua University researchers could pave the way for viable photonic computing and applications in artificial intelligence. Courtesy of Tsinghua University.


For the first time, the researchers said, Taichi experimentally realizes on-chip large optical neural networks for thousand-category-level classification and artificial intelligence-generated content (AIGC) tasks, with up to 2-3 orders of magnitude improvement in area efficiency and energy efficiency compared to current AI chips.


The team proposed a universal and robust distributed computing protocol for complex AGI tasks. Instead of going deeper, as in electronic computing, the researchers said, the Taichi architecture goes broader for throughput and scale expansion. A binary encoding protocol divides challenging computing tasks and large network models into sub-problems and sub-models that can be distributed and deployed across photonic chiplets, as sketched below. This atomic divide-and-conquer operation enables large-scale tasks to be solved adaptively at flexible scales, achieving on-chip networks with up to 10 billion optical neurons.
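
A rough sense of the divide-and-conquer idea can be conveyed with a toy example. The sketch below is an assumption-laden illustration, not the paper's protocol: it splits a 1024-category classification problem into 16 linear sub-models, each sized to a hypothetical 64-output chiplet, and synthesizes their scores into a single prediction.

```python
import numpy as np

# Illustrative divide-and-conquer sketch: a large classification task is
# split into sub-problems, each small enough for one 64-output chiplet,
# and the per-chiplet scores are merged. The random linear sub-models are
# placeholders, not trained optical networks.

N_CLASSES, FEATURES, CHIPLET_OUT = 1024, 64, 64
N_CHIPLETS = N_CLASSES // CHIPLET_OUT        # 16 sub-models

rng = np.random.default_rng(1)
sub_models = [rng.standard_normal((CHIPLET_OUT, FEATURES))
              for _ in range(N_CHIPLETS)]

def classify(x):
    """Run every sub-model (chiplet) on the same input, concatenate their
    class scores, and take the global argmax as the synthesized answer."""
    scores = np.concatenate([W @ x for W in sub_models])   # shape (1024,)
    return int(np.argmax(scores))

x = rng.standard_normal(FEATURES)
print(classify(x))   # predicted class index in [0, 1023]
```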

The researchers developed their largest-scale photonic chiplets to support input and output dimensions as large as 64 × 64. By integrating scalable wavefield diffraction and reconfigurable interference, the entire input is encoded passively and modulated in a highly parallel way, they said, achieving 160 TOPS/W on-chip energy efficiency and 879 TMACS/mm² area efficiency (up to two orders of magnitude improvement in both energy and area efficiency over existing AI chips).
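
To put the headline figure in perspective, the short calculation below converts 160 TOPS/W into energy per operation. It assumes only the conventional reading that 1 TOPS/W corresponds to 10¹² operations per joule; the article does not state how an "operation" is counted.

```python
# Back-of-the-envelope conversion of the reported efficiency figure.
TOPS_PER_WATT = 160e12                     # 160 TOPS/W -> operations per joule
energy_per_op_j = 1.0 / TOPS_PER_WATT      # joules per operation
print(f"{energy_per_op_j * 1e15:.2f} fJ per operation")   # ~6.25 fJ
```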

The versatility and flexibility of Taichi were demonstrated in on-chip experiments showing 91.89% accuracy in 1623-category Omniglot character classification and 87.74% accuracy in 100-category mini-ImageNet classification. On-chip high-fidelity AIGC models were demonstrated in tasks such as music composition and high-resolution stylized painting generation.

Taichi not only breaks the scale limitation on the path toward beyond-billion-neuron foundation models with large-scale, high-throughput photonic chiplets, the researchers said, but also achieves robustness to errors through information scattering and synthesizing. The researchers believe the chip’s ability to solve complex on-chip AGI tasks in a scalable, accurate, and efficient manner will pave the way for real-world photonic computing that supports applications in large machine learning models, AIGC, robotics, and other areas.

The research was published in Science (www.doi.org/10.1126/science.adl1203).


