Apple Trials a ChatGPT-like AI Chatbot
Plus: Google AI’s new research. OpenAI doubles GPT-4 message cap to 50.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 67th edition of The AI Edge newsletter. This edition brings you “Apple Trials a ChatGPT-like AI Chatbot.”
A huge shoutout to our incredible readers. We cherish and value you! 😊
In today’s edition:
🍎 Apple Trials a ChatGPT-like AI Chatbot
🔓 Google AI’s SimPer unlocks potential of periodic learning
💬 OpenAI doubles GPT-4 message cap to 50
💡 Knowledge Nugget: Imitation Models and the Open-Source LLM Revolution
Let’s go!
Apple Trials a ChatGPT-like AI Chatbot
Apple is developing AI tools, including its own large language model called "Ajax" and an AI chatbot named "Apple GPT." The company is gearing up for a major AI announcement next year as it tries to catch up with competitors like OpenAI and Google.
The company has multiple teams developing AI technology and addressing privacy concerns. While Apple has been integrating AI into its products for years, there is currently no clear strategy for releasing AI technology directly to consumers. However, executives are considering integrating AI tools into Siri to improve its functionality and keep up with advancements in AI.
Why does this matter?
Apple's development of AI tools, such as the language model "Ajax" and chatbot "Apple GPT," signals the company's efforts to catch up with competitors OpenAI and Google. The focus on addressing privacy concerns and the potential integration of AI into Siri shows Apple's aim to enhance its product functionality and stay competitive.
Google AI’s SimPer unlocks potential of periodic learning
This paper from the Google Research team introduces SimPer, a self-supervised learning method that focuses on capturing periodic or quasi-periodic changes in data. SimPer leverages the inherent periodicity in data by incorporating customized augmentations, feature similarity measures, and a generalized contrastive loss.
SimPer exhibits superior data efficiency, robustness against spurious correlations, and generalization to distribution shifts, making it a promising approach for capturing and utilizing periodic information in diverse applications.
Why does this matter?
SimPer's significance lies in its ability to learn meaningful representations for periodic tasks with limited or no supervision. This is crucial in domains such as human behavior analysis, environmental sensing, and healthcare, where critical processes often exhibit periodic or quasi-periodic changes. The paper also shows that SimPer outperforms state-of-the-art self-supervised learning methods on these tasks.
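For intuition, here is a minimal, hypothetical sketch (in PyTorch) of how the ingredients described above could fit together: speed-based augmentations that change a signal's apparent frequency, and a contrastive loss with soft targets derived from how similar two views' speeds are. The helper names (`speed_augment`, `generalized_contrastive_loss`) are illustrative and not taken from the paper's code.

```python
# Illustrative sketch only -- not the official SimPer implementation.
import torch
import torch.nn.functional as F

def speed_augment(x: torch.Tensor, speed: float) -> torch.Tensor:
    """Resample a 1-D signal (shape [T]) to simulate a change in frequency."""
    new_len = max(2, int(x.shape[0] / speed))
    return F.interpolate(x[None, None, :], size=new_len,
                         mode="linear", align_corners=False)[0, 0]

def generalized_contrastive_loss(feats: torch.Tensor,
                                 speeds: torch.Tensor,
                                 tau: float = 0.1) -> torch.Tensor:
    """feats: [V, D] features of V speed-augmented views of the same clip.
    speeds: [V] speed factors used to create those views.
    Views with closer speeds receive higher target similarity (soft labels),
    instead of the hard positive/negative split of a standard contrastive loss.
    """
    feats = F.normalize(feats, dim=-1)
    logits = feats @ feats.t() / tau                           # feature-space similarity
    label_sim = -torch.abs(speeds[:, None] - speeds[None, :])  # label-space similarity
    targets = F.softmax(label_sim / tau, dim=-1)               # soft targets per row
    # Soft-label cross-entropy (probability targets require PyTorch >= 1.10).
    return F.cross_entropy(logits, targets)
```

The paper's customized feature similarity measures for periodic signals are omitted here for brevity.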
OpenAI doubles GPT-4 message cap to 50
OpenAI has doubled the number of messages ChatGPT Plus subscribers can send to GPT-4. Users can now send up to 50 messages in 3 hours, compared to the previous limit of 25 messages in 2 hours. The update is rolling out next week.
Why does this matter?
Increasing the message limit for GPT-4 provides more room for exploration and experimentation with ChatGPT plugins. Whether you are a business looking to enhance customer interactions, a developer building innovative applications, or an AI enthusiast, the raised cap of 50 messages per 3 hours enables more extensive and dynamic interactions with the model.
Knowledge Nugget: Imitation Models and the Open-Source LLM Revolution
This interesting read discusses the emergence of proprietary language-model-based APIs and the potential challenges they pose to the traditionally open-source, transparent approach of the deep learning community. It highlights the development of open-source LLM alternatives as a response to the shift toward proprietary APIs.
The article emphasizes the importance of rigorous evaluation in research to ensure that new techniques and models truly offer improvements. It also explores the limitations of imitation LLMs, which can perform well on specific tasks but tend to underperform when evaluated more broadly.
Why does this matter?
While local imitation is still valuable for specific domains, it is not a comprehensive solution for producing high-quality, open-source foundation models. Instead, the article advocates for the continued advancement of open-source LLMs by focusing on creating larger and more powerful base models to drive further progress in the field.
What Else Is Happening❗
🍔 Woohoo, look at this AI food commercial made with Pika Labs! (Link)
🔍 Google exploring AI tools to write news articles! (Link)
🚀 MosaicML launches MPT-7B-8K with 8k context length. (Link)
🏆 AI has driven Nvidia to achieve a $1 trillion valuation! (Link)
💰 Qualtrics plans to invest $500M in AI over the next 4 years. (Link)
💼 Unstructured, a company offering tools to prep enterprise data for LLMs, raises $25M. (Link)
🛠️ Trending Tools
Swiftbrief: AI tool for writers to create SEO-friendly articles using AI-generated briefs.
LangSmith: Platform for developers to build and iterate on products using LLMs.
ElevateHQ: Commission plan designer with simple prompts and instant results.
Fabric: Organize your digital world with AI, connect, and sync your favorite apps.
Second Nature: Free AI simulation for on-demand seller training through simulated conversations.
Belva AI: Add an AI agent to any app with 5 lines of code, navigate phone trees, and more.
123RF: AI-powered search, image generator, and image variation for unique results.
PromptLocker: Store, categorize, and retrieve prompts for various AI models.
That's all for now!
Subscribe to The AI Edge and join the impressive list of readers that includes professionals from Moody’s, Vonage, Voya, WEHI, Cox, INSEAD, and other reputable organizations.
Thanks for reading, and see you tomorrow. 😊