AI Weekly Rundown (April 6 to April 12)
Major AI announcements from Google, Apple, Intel, Adobe, Cohere, and more.
Hello Engineering Leaders and AI Enthusiasts!
Another eventful week in the AI realm. Lots of big news from huge enterprises.
In today’s edition:
⚖️ Build resource-efficient LLMs with Google’s MoD
📡 Newton brings sensor-driven intelligence to AI models
💰 Internet archives become AI training goldmines for Big Tech
🌐 Stability AI launches multilingual Stable LM 2 12B
📱 Apple’s Ferret-UI beats GPT-4V in mobile UI tasks
⏰ Musk says AI will outsmart humans within a year
🧠 Intel's new AI chip: 50% faster, cheaper than NVIDIA's
🤖 Meta to Release Llama 3 Open-source LLM next week
☁️ Google Cloud announces major updates to enhance Vertex AI
🚀 Meta unveils next-generation AI chip for its AI workloads
🎶 New AI tool lets you generate 1200 songs per month for free
🎬 Adobe is buying videos for $3 per minute to train AI model
🔍 Cohere’s Rerank 3 powers smarter enterprise search
💻 Apple M4 Macs: Coming soon with AI power!
📝 Meta's OpenEQA tests AI’s real-world comprehension
Let’s go!
Build resource-efficient LLMs with Google's MoD
Google DeepMind has introduced "Mixture-of-Depths" (MoD), an innovative method that significantly improves the efficiency of transformer-based language models. Unlike traditional transformers that allocate the same amount of computation to each input token, MoD employs a "router" mechanism within each block to assign importance weights to tokens. This allows the model to strategically allocate computational resources, focusing on high-priority tokens while minimally processing or skipping less important ones.
Notably, MoD can be integrated with Mixture-of-Experts (MoE), creating a powerful combination called Mixture-of-Depths-and-Experts (MoDE). Experiments have shown that MoD transformers can maintain competitive performance while reducing computational costs by up to 50% and achieving significant speedups during inference.
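The routing idea can be illustrated with a toy sketch in Python/NumPy. This is not DeepMind's implementation; the function names, shapes, and the `capacity` parameter are all illustrative assumptions. Each block scores tokens with a learned router, sends only the top-k tokens through the expensive computation, and lets the rest skip via the residual path:

```python
import numpy as np

def mod_block(x, w_router, heavy_fn, capacity=0.5):
    """Toy Mixture-of-Depths block: route only the top-k tokens
    (by router score) through the expensive computation; the rest
    pass through unchanged via the residual path."""
    n_tokens = x.shape[0]
    k = max(1, int(n_tokens * capacity))  # compute budget for this block
    scores = x @ w_router                 # one scalar importance score per token
    top_k = np.argsort(scores)[-k:]       # indices of the k highest-scoring tokens
    out = x.copy()                        # skipped tokens: identity (residual only)
    # Selected tokens: residual + router-weighted heavy computation.
    out[top_k] = x[top_k] + scores[top_k, None] * heavy_fn(x[top_k])
    return out

rng = np.random.default_rng(0)
d_model = 16
tokens = rng.standard_normal((8, d_model))
router = rng.standard_normal(d_model)

# Stand-in for a real attention + MLP sub-block.
heavy = lambda h: np.tanh(h)

y = mod_block(tokens, router, heavy, capacity=0.5)
n_processed = int(np.sum(np.any(y != tokens, axis=1)))
```

With `capacity=0.5`, only half the tokens pay for the heavy computation in this block, which is the source of the FLOP savings; a trained router learns which tokens deserve it.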
Newton brings sensor-driven intelligence to AI models
Startup Archetype AI has launched with the ambitious goal of making the physical world understandable to artificial intelligence. By processing data from a wide variety of sensors, Archetype's foundational AI model called Newton aims to act as a translation layer between humans and the complex data generated by the physical world.
Using plain language, Newton will allow people to ask questions and get insights about what's happening in a building, factory, vehicle, or even the human body based on real-time sensor data. The company has already begun pilot projects with Amazon, Volkswagen, and healthcare researchers to optimize logistics, enable smart vehicle features, and track post-surgical recovery. Archetype's leadership team brings deep expertise from Google's Advanced Technology and Products (ATAP) division.
Internet archives become AI training goldmines for Big Tech
To gain an edge in the heated AI arms race, tech giants Google, Meta, Microsoft, and OpenAI are spending billions to acquire massive datasets for training their AI models. They are turning to veteran internet companies like Photobucket, Shutterstock, and Freepik, who have amassed vast archives of images, videos, and text over decades online.
The prices for this data vary depending on the type and buyer but range from 5 cents to $7 per image, over $1 per video, and around $0.001 per word for text. The demand is so high that some companies are requesting billions of videos, and Photobucket says it can't keep up.
Stability AI launches multilingual Stable LM 2 12B
Stability AI has released a 12-billion-parameter version of its Stable LM 2 language model, offering both a base and an instruction-tuned variant. These models are trained on a massive 2-trillion-token dataset spanning seven languages, including English, Spanish, and German. Stability AI has also improved its 1.6-billion-parameter Stable LM 2 model with better conversational abilities and tool integration.
The new 12B model is designed to balance high performance with lower hardware requirements than other large language models. Stability AI claims it can handle tasks that typically demand substantially more computational resources. The company also plans to release long-context variants of these models on the Hugging Face platform soon.
Apple's Ferret-UI beats GPT-4V in mobile UI tasks
Apple researchers have introduced Ferret-UI, a multimodal language model designed to excel at understanding and interacting with mobile user interfaces (UIs). Unlike general-purpose models, Ferret-UI is trained explicitly for UI-centric tasks, from identifying interface elements to reasoning about an app's overall functionality.
By using "any resolution" technology and a meticulously curated dataset, Ferret-UI digs deep into the intricacies of mobile UI screens, outperforming its competitors in elementary and advanced tasks. Its ability to execute open-ended instructions may make it the go-to solution for developers looking to create more intuitive mobile experiences.
Musk says AI will outsmart humans within a year
Tesla CEO Elon Musk has boldly predicted that AI will surpass human intelligence as early as next year or by 2026. In a wide-ranging interview, Musk discussed AI development's challenges, including chip shortages and electricity supply constraints, while sharing updates on his xAI startup's AI chatbot, Grok. Despite the hurdles, Musk remains optimistic about the future of AI and its potential impact on society.
Intel's new AI chip: 50% faster, cheaper than NVIDIA's
Intel has unveiled its new Gaudi 3 AI accelerator, which aims to compete with NVIDIA's GPUs. According to Intel, the Gaudi 3 is expected to reduce training time for large language models like Llama2 and GPT-3 by around 50% compared to NVIDIA's H100 GPU. The Gaudi 3 is also projected to outperform the H100 and H200 GPUs in terms of inference throughput, with around 50% and 30% faster performance, respectively.
The Gaudi 3 is built on a 5nm process and offers several improvements over its predecessor, including double the FP8 compute, quadruple the BF16 processing power, and greater network and memory bandwidth. Intel is positioning the Gaudi 3 as an open, cost-effective alternative to NVIDIA's GPUs, with plans to make it available to major OEMs starting in the second quarter of 2024. The company is also working with partners like SAP, Red Hat, and VMware to create an open platform for enterprise AI.
Meta to release Llama 3 open-source LLM next week
Meta plans to release two smaller versions of its upcoming Llama 3 open-source language model next week, building anticipation for the larger version due this summer. Llama 3 will be a significant upgrade over previous versions, with about 140 billion parameters compared to 70 billion for the biggest Llama 2 model. It will also be multimodal, able to generate text and images and answer questions about images.
The two smaller versions of Llama 3 will focus on text generation; they're intended to help resolve safety issues ahead of the full multimodal release. Previous Llama models were criticized as too restrictive in their responses, so Meta has been working to make Llama 3 more willing to engage with controversial topics while maintaining safeguards.
Google Cloud announces major updates to enhance Vertex AI
Google Cloud has announced exciting model updates and platform capabilities that continue to enhance Vertex AI:
Gemini 1.5 Pro: Gemini 1.5 Pro is now available in public preview in Vertex AI, bringing the world's first one-million-token context window to customers. It also supports processing audio streams, including speech and the audio portion of videos.
Imagen 2.0: Imagen 2.0 can now create short, 4-second live images from text prompts, enabling marketing and creative teams to generate animated content. It also has new image editing features like inpainting, outpainting, and digital watermarking.
Gemma: Google Cloud is adding CodeGemma to Vertex AI. CodeGemma is a new lightweight model from Google's Gemma family based on the same research and technology used to create Gemini.
MLOps: To help customers manage and deploy these large language models at scale, Google has expanded the MLOps capabilities for Gen AI in Vertex AI. This includes new prompt management tools for experimenting with, versioning, and optimizing prompts, as well as enhanced evaluation services for comparing model performance.
Enjoying the weekly updates?
Refer your pals to subscribe to our newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Meta unveils next-generation AI chip for enhanced workloads
Meta has introduced the next generation of its Meta Training and Inference Accelerator (MTIA), significantly improving on MTIAv1 (its first-gen AI inference accelerator). This version more than doubles the memory and compute bandwidth, designed to effectively serve Meta’s crucial AI workloads, such as its ranking and recommendation models and Gen AI workloads.
Meta has also co-designed the hardware system, the software stack, and the silicon, which is essential for the success of the overall inference solution.
Early results show that this next-generation silicon has improved performance by 3x over the first-generation chip across four key models evaluated. MTIA has been deployed in the data center and is now serving models in production.
New AI tool lets you generate 1200 songs per month for free
Udio, a new AI music generator created by former Google DeepMind researchers, is now available in beta. It allows users to generate up to 1200 songs per month for free, with the ability to specify genres and styles through text prompts.
The startup claims its AI can produce everything from pop and rap to gospel and blues, including vocals. While the free beta offers limited features, Udio promises improvements like longer samples, more languages, and greater control options in the future. The company is backed by celebrities like Will.i.am and investors like Andreessen Horowitz.
Adobe is buying videos for $3 per minute to build an AI model
Adobe is buying videos at $3 per minute from its network of photographers and artists to build a text-to-video AI model. It has requested short clips of people performing everyday actions (walking, expressing emotions such as joy or anger) and interacting with objects such as smartphones or fitness equipment.
The move shows Adobe racing to catch up with competitors like OpenAI, whose Sora model has set the pace in text-to-video. Over the past year, Adobe has added generative AI features across its portfolio, including Photoshop and Illustrator, and those features have garnered billions of uses.
Cohere’s Rerank 3 powers smarter enterprise search
Cohere has released a new model, Rerank 3, designed to improve enterprise search and Retrieval Augmented Generation (RAG) systems. It can be integrated with any database or search index and works with existing legacy applications.
Rerank 3 offers several improvements over previous models:
It handles much longer documents (up to a 4x longer context) to improve search accuracy, especially for complex documents.
Rerank 3 supports over 100 languages, addressing the challenge of multilingual data retrieval.
The model can search varied data formats such as emails, invoices, JSON documents, code, and tables.
Rerank 3 works even faster than previous models, especially with longer documents.
When used with Cohere's RAG systems, Rerank 3 reduces costs because fewer documents need to be processed by the expensive LLM.
Plus, enterprises can access it through Cohere's hosted API, AWS Sagemaker, and Elasticsearch's inference API.
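As a rough illustration of how a rerank step slots into a RAG pipeline, here is what a request against Cohere's hosted API might look like. The endpoint, model identifier, and field names below are assumptions based on Cohere's public documentation at the time of writing; check the current API reference before relying on them:

```python
import json

query = "What is our refund policy for enterprise contracts?"
documents = [
    "Invoices are issued net-30 from the delivery date.",
    "Enterprise contracts may be refunded within 60 days of signing.",
    "JSON exports are available from the billing dashboard.",
]

payload = {
    "model": "rerank-multilingual-v3.0",  # assumed model identifier
    "query": query,
    "documents": documents,
    "top_n": 2,  # return only the 2 most relevant documents
}

body = json.dumps(payload)
# A real call would POST `body` to Cohere's rerank endpoint with an
# Authorization: Bearer <API key> header. The response orders documents
# by relevance, so only the top_n most relevant ones are passed on to
# the expensive LLM in the generation step.
```

The `top_n` parameter is where the cost savings mentioned above come from: the LLM sees two relevant passages instead of everything the retriever returned.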
Apple M4 Macs: Coming soon with AI power!
Apple is overhauling its Mac lineup around a new M4 chip focused on AI processing. The move comes soon after the launch of the M3 Macs, possibly in response to slowing Mac sales and AI features appearing in competing PCs.
The M4 chip will come in three tiers (Donan, Brava, Hidra) and will be rolled out across various Mac models throughout 2024 and early 2025. Lower-tier models like MacBook Air and Mac Mini will get the base Donan chip, while high-performance Mac Pro will be equipped with the top-tier Hidra. We can expect to learn more about the specific AI features of the M4 chip at Apple’s WWDC on June 10th.
Meta's OpenEQA puts AI’s real-world comprehension to test
Meta AI has released a new dataset called OpenEQA to measure how well AI understands the real world. This "embodied question answering" (EQA) involves an AI system being able to answer questions about its environment in natural language.
The dataset includes over 1,600 questions about various real-world places and tests an AI's ability to recognize objects, reason about space and function, and use common sense knowledge.
That's all for now!
Subscribe to The AI Edge and gain exclusive access to content enjoyed by professionals from Moody’s, Vonage, Voya, WEHI, Cox, INSEAD, and other esteemed organizations.
Thanks for reading, and see you on Monday. 😊