
A new category of AI

OpenAI's GPT-3, which was released in 2020, was the first foundation model to capture widespread public attention, and for good reason. It was the largest language model in the world at the time and drove a breakthrough in the field. It demonstrated capabilities no one had seen before, teaching itself to perform tasks it had never been trained on and outperforming models that were trained on those tasks. In the years since, many more supersized models have appeared. Companies like Google, Microsoft, Meta, and Baidu have created their own large language models.175,176,177,178 And some have started building large multimodal models, like the aforementioned GPT-4 and text-to-image generators, which are trained on multiple types of data (like text, image, video, or sound) and to identify the relationships between them.179

In an effort to define this new class of AI, researchers from the Stanford Institute for Human-Centered Artificial Intelligence coined the term "foundation model" in August 2021.180 They generally defined them as large AI models trained on a vast quantity of data with significant downstream task adaptability.

In practice today, these models typically leverage transformer machine learning models and have a massive number of parameters, ranging from hundreds of millions to trillions. What makes them so game-changing is that they are broadly trained across a data modality (or multiple modalities, like language and image) rather than on a specific task, and can learn to complete new tasks within these data types with minimal or no extra training. In other words, they have generalist capabilities within their domains (a brief sketch of this kind of zero-shot use follows at the end of this section).

DeepMind's Gato is one of the most exciting examples to date. The company calls Gato a "generalist agent" because it is multimodal and can complete over 600 different tasks.181,182 Using a single AI model with fixed weights, it can chat, caption images, play Atari video games, stack blocks with a robotic arm, and more. Additionally, it can learn these various tasks simultaneously and switch between them without having to forget previous skills. For context, AlphaZero, an older DeepMind model known for playing chess, Go, and shogi, had to unlearn how to play chess in order to play Go.
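To make "minimal or no extra training" concrete, the short Python sketch below asks a pretrained transformer to sort text into categories it was never explicitly trained on. This is a minimal illustration only, assuming the open-source Hugging Face transformers library and the facebook/bart-large-mnli checkpoint as example tooling; neither appears in the report, and the models discussed above are far larger and more capable.

from transformers import pipeline

# Load a pretrained model; its weights stay fixed, with no fine-tuning.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # example checkpoint, not from the report
)

# A task the model was never explicitly trained on: routing a support
# ticket into categories invented at call time.
result = classifier(
    "My card was charged twice for the same order.",
    candidate_labels=["billing", "shipping", "technical issue"],
)

print(result["labels"][0])  # highest-scoring label, e.g. "billing"

The same fixed weights can be pointed at entirely different label sets from one call to the next, which is the generalist behavior described above, albeit at a far smaller scale than GPT-4 or Gato.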
