
Technology Vision 2023 | When Atoms meet Bits #TechVision

To truly understand the impact foundation models will have on their industries and businesses, companies need to carefully track new developments.

…extended its learning ability to video tasks with a proposed video adapter built off the image encoder. Extending to this additional data type is a key step toward a computer vision foundation model that can generalize across real-world vision tasks, and could drive applications in security, healthcare, and more.

Another significant area to track is efforts to make building and deploying foundation models easier. Rapidly growing compute requirements, and the associated costs and expertise needed to handle this scale, are the biggest barriers today. The amount of compute needed to train the largest AI models has grown exponentially: it is now doubling anywhere from every 10 months to every 3.4 months, according to various reports.194, 195 And even after a model is trained, it's expensive to run and host all of its downstream variations as it gets fine-tuned to handle different tasks. In today's cloud computing setups, it's slow to load foundation models each time they're needed, but expensive to keep many models online.

Anyscale, a unicorn that recently raised $199 million,196 is working to lower these barriers. Anyscale was founded by a group of UC Berkeley researchers who developed Ray, an open-source framework that improves access to foundation models by making it easier to scale and distribute machine learning workloads. It's currently used to train the largest AI models coming out of OpenAI, like ChatGPT.197 Cohere, a startup building an NLP developer toolkit,198 also uses Ray to train large language models. And IBM is using it to implement zero-copy model loading: they store model weights in shared memory and use Ray to instantly load and redirect cluster resources to whatever model an application requires in the moment.199 This frees users from needing to tune the number of model variations they keep loaded in memory, and is expected to lead to much simpler foundation model adaptation and deployment.

The novel capabilities of foundation models, and these ongoing advances in the technology, have led some in the community to see them as a step toward artificial general intelligence (AGI): an AI system capable of learning any intellectual task that a human can learn. Only time will tell if the technologies and methods behind foundation models are enough to achieve some form of truly general intelligence in the future. Nevertheless, the level of generalization foundation models have already achieved within certain data types is hugely significant, and more than enough to revolutionize how and where enterprises use AI.
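As a back-of-the-envelope illustration of what the doubling times quoted earlier imply (the function below is ours, not the report's), a 3.4-month doubling time compounds to more than a tenfold increase in training compute per year, while a 10-month doubling time compounds to a bit more than double:

```python
# Compute the yearly growth factor in training compute implied by a
# constant doubling time, for the two figures cited in the text.
def yearly_growth(doubling_months: float) -> float:
    """Growth factor over 12 months given a constant doubling time."""
    return 2 ** (12 / doubling_months)

slow = yearly_growth(10)   # roughly 2.3x per year
fast = yearly_growth(3.4)  # roughly 11.5x per year
```

Even at the slower estimate, compute demand outpaces typical hardware budget growth, which is why cost is called out as the biggest barrier.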
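The zero-copy loading idea described earlier can be sketched in a few lines of Python. This is a minimal illustration built on the standard library's shared-memory support, not IBM's or Ray's actual implementation: the weights are written to a shared-memory segment once, and any other process "loads" the model by attaching to that segment by name rather than copying bytes from disk.

```python
# Illustrative sketch of zero-copy model loading (not IBM's actual code):
# keep weights resident in OS shared memory so a worker can attach to them
# near-instantly instead of re-reading a large file.
from multiprocessing import shared_memory

# Hypothetical weights; in practice this would be a multi-gigabyte tensor blob.
weights = bytes(range(256)) * 4

# Writer: place the weights in a shared-memory segment once.
shm = shared_memory.SharedMemory(create=True, size=len(weights))
shm.buf[:len(weights)] = weights

# Reader: another process attaches by name; no weight bytes are copied
# from disk, so "loading" the model is effectively instantaneous.
attached = shared_memory.SharedMemory(name=shm.name)
roundtrip_ok = bytes(attached.buf[:len(weights)]) == weights

attached.close()
shm.close()
shm.unlink()  # free the segment once no model variant needs it
```

In a cluster setting, a scheduler such as Ray would additionally route requests to whichever node already holds the needed weights, which is what removes the need to pre-tune how many model variations stay loaded.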
