Google's AI Model for Wearables
Plus: What to expect from AI in the next decade, and DeepMind’s ‘virtual rodent’ to understand brain activity.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 297th edition of The AI Edge newsletter. This edition features Google’s PH-LLM, which reads your wearables’ data for personalized insights.
And a huge shoutout to our amazing readers. We appreciate you😊
In today’s edition:
📊 Google’s PH-LLM reads your wearables’ data for personalized insights
🔮 Ex-OpenAI researcher on what to expect from AI in the next decade
🧠 DeepMind built ‘a virtual rodent’ with AI to understand brain activity
📚 Knowledge Nugget: How To Solve LLM Hallucinations
Let’s go!
Google’s PH-LLM reads your wearables’ data
Building on the next-gen capabilities of Gemini models, Google has presented research that highlights two complementary approaches to providing accurate personal health and wellness information with LLMs.
The first introduces PH-LLM, a version of Gemini fine-tuned to understand and reason on time-series personal health data from wearables such as smartwatches and heart rate monitors. The model answered questions and made predictions noticeably better than experts with years of experience in the health and fitness fields.
In the second paper, Google introduces an agent system that leverages state-of-the-art code generation and information retrieval tools to analyze and interpret behavioral health data from wearables. Combining these two ideas will be critical for developing truly personalized health assistants.
Why does it matter?
Wearables generate a wealth of personal health data that is rarely utilized in clinical settings. Integrating this data with advanced AI models could revolutionize personal health management and preventative care by putting an "expert health assistant" on everyone's wrist.
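The papers don’t publish PH-LLM’s actual prompt format, but the core idea of getting an LLM to reason over wearable time-series can be sketched as serializing readings into text. The field names and prompt wording below are illustrative assumptions, not Google’s format:

```python
# Hypothetical sketch: serializing wearable time-series into an LLM prompt.
# Field names and prompt wording are illustrative, not PH-LLM's actual format.

def build_health_prompt(readings, question):
    """Turn a list of daily wearable readings into a text prompt for an LLM."""
    lines = []
    for r in readings:
        lines.append(
            f"{r['date']}: sleep {r['sleep_hours']}h, "
            f"resting HR {r['resting_hr']} bpm, steps {r['steps']}"
        )
    history = "\n".join(lines)
    return (
        f"Wearable data for the past {len(readings)} days:\n{history}\n\n"
        f"Question: {question}"
    )

readings = [
    {"date": "2024-06-01", "sleep_hours": 6.2, "resting_hr": 62, "steps": 8400},
    {"date": "2024-06-02", "sleep_hours": 7.8, "resting_hr": 58, "steps": 11200},
]
prompt = build_health_prompt(readings, "How can I improve my sleep?")
print(prompt)
```

A fine-tuned model like PH-LLM would then answer the question conditioned on this serialized history, which is what lets it personalize its coaching to the individual’s data.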
What to expect from AI in the next decade?
Leopold Aschenbrenner, a researcher fired from OpenAI, published a 165-page essay on what to expect from AI in the next decade. And GPT-4 has summarized it! Here are some key takeaways from the essay:
By 2027, AI models could reach the capabilities of human AI researchers and engineers, potentially leading to AI surpassing human intelligence
Trillions of dollars are being invested into developing the infrastructure needed to support these AI systems
Controlling AI systems smarter than humans (the 'superalignment' problem) will be crucial to prevent catastrophic outcomes
Only a few hundred people truly understand the scale of change AI is about to bring
Why does it matter?
The essay provides a rare insider's perspective on the rapid progression of AI. Coming from someone deeply involved in cutting-edge AI development, the insights highlight the urgency to get ahead of managing risks before AI’s capabilities outpace our defenses.
DeepMind’s AI ‘virtual rat’ to understand brain activity
Researchers from Google DeepMind and Harvard built a ‘virtual rodent’ powered by AI to help them better understand how the brain controls movement. With deep reinforcement learning (RL), it learned to operate a biomechanically accurate rat model, allowing researchers to compare real and virtual neural activity.
Why does it matter?
Understanding how the brain controls movement and modeling neural activity could exponentially advance fields like neuroscience and brain-computer interfaces, with the help of AI.
Enjoying the daily updates?
Refer your pals to subscribe to our daily newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Knowledge Nugget: How To Solve LLM Hallucinations
Hallucinations are a major issue with current LLMs, occurring due to limitations in their training data and methodology. Existing techniques like fine-tuning, retrieval-augmented generation (RAG), and mixture of experts (MoE) have tried to address hallucinations, but with limited success.
In this article, the author discusses the problem of hallucinations and delves into a new approach called "Memory Tuning," developed by the startup Lamini, to reduce them. It is an aggressive way to embed specific data into models even as small as 3 billion parameters, taking the concept of MoE and turbocharging it in a very specific way.
Why does it matter?
Lamini says this approach has already been implemented for many customers, including a Fortune 500 company that now experiences 10x fewer hallucinations in text-to-SQL code generation. It presents Memory Tuning as a promising approach to drastically reducing hallucinations in LLMs for commercial and domain-specific applications.
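Of the existing techniques mentioned above, retrieval-augmented generation is the simplest to illustrate: retrieve relevant documents and inject them into the prompt so the model grounds its answer in real text instead of inventing one. A minimal sketch follows; the document store and word-overlap scoring are toy assumptions (real systems use vector embeddings and an index):

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The document store and word-overlap scoring are toy assumptions;
# production systems use vector embeddings and a proper index.

DOCS = [
    "Lamini is a startup working on reducing LLM hallucinations.",
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "Mixture of experts routes tokens to specialized subnetworks.",
]

def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(query):
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, DOCS))
    return (
        f"Context:\n{context}\n\n"
        f"Answer using only the context above.\nQuestion: {query}"
    )

print(build_rag_prompt("How does retrieval-augmented generation reduce hallucinations?"))
```

Memory Tuning differs from this in that the facts are baked into the model’s weights rather than retrieved at inference time, which is why Lamini positions it as going beyond RAG.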
What Else Is Happening❗
🕵️‍♂️Former head of NSA joins OpenAI’s Safety and Security Committee
Paul M. Nakasone, a retired US Army general and former head of the National Security Agency (NSA), will also join OpenAI’s board of directors. He will contribute to OpenAI’s efforts to better understand how AI can strengthen cybersecurity by quickly detecting and responding to threats. (Link)
🤖Former Meta engineers launch Jace, your new autonomous AI employee
Jace uses Zeta Labs’ proprietary web-interaction model, Autonomous Web Agent-1, to operate a browser and interact with websites the way a human would. This lets it handle real-world tasks like booking flights, managing hiring, or even setting up a company. (Link)
💼LinkedIn is rolling out new AI-powered features for premium users
The features include searching for jobs by prompting in natural language, building a cover letter from scratch, reviewing your résumé with personalized suggestions for improving it for a specific job post, and making edits interactively with AI. (Link)
🌍Synthflow's AI voice assistants are now multilingual!
They can fluently communicate in Spanish, German, Portuguese, French, and English. Synthflow also added corresponding voices for each language to ensure authentic, natural-sounding interactions, so businesses can engage a global audience and offer personalized experiences. (Link)
🖼️Picsart is partnering with Getty Images to develop a custom model for AI imagery
The model will be built from scratch and trained exclusively on Getty Images’ licensed creative content. It will bring responsible AI imagery to creators, marketers, and small businesses that use Picsart, enabling them to generate unique images with full commercial rights. (Link)
New to the newsletter?
The AI Edge keeps engineering leaders & AI enthusiasts like you on the cutting edge of AI. From machine learning to ChatGPT to generative AI and large language models, we break down the latest AI developments and how you can apply them in your work.
Thanks for reading, and see you tomorrow. 😊