OpenAI and Google Advance AI Security
Plus: Robot dog turns into a talking tour guide with ChatGPT.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 134th edition of The AI Edge newsletter. This edition brings you OpenAI and Google’s efforts for safer, more secure AI.
And a huge shoutout to our incredible readers. We appreciate you😊
In today’s edition:
🔍 OpenAI forms 'Preparedness' team to study advanced AI risks
🌐 Google’s new ventures for safer, more secure AI
🤖 Robot dog turns into a talking tour guide with ChatGPT
📚 Knowledge Nugget: Do Language Models really understand Language?
Let’s go!
OpenAI forms 'Preparedness' team to study advanced AI risks
To minimize risks from frontier AI as models continue to improve, OpenAI is building a new team called Preparedness. It will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, from the models OpenAI develops in the near future to those with AGI-level capabilities.
The team will help track, evaluate, forecast, and protect against catastrophic risks spanning multiple categories including:
Individualized persuasion
Cybersecurity
Chemical, biological, radiological, and nuclear (CBRN) threats
Autonomous replication and adaptation (ARA)
The Preparedness team’s mission also includes developing and maintaining a Risk-Informed Development Policy (RDP). In addition, OpenAI is soliciting risk-study ideas from the community, with a $25,000 prize and a job at Preparedness on the line for the top ten submissions.
Why does this matter?
The news was unveiled during a major U.K. government summit on AI safety, and not so coincidentally comes after OpenAI announced it would form a team to study and control emergent forms of “superintelligent” AI. While CEO Sam Altman has often aired fears that AI may lead to human extinction, this shows OpenAI actually devoting resources to studying less obvious and more grounded areas of AI risk.
Google’s new ventures for safer, more secure AI
Google has announced a bug bounty program for attack scenarios specific to generative AI by expanding its Vulnerability Rewards Program (VRP) to cover AI. It also shared guidelines to help security researchers see what’s “in scope”.
To further protect against machine learning supply chain attacks, Google is expanding its open-source security work and building on its prior collaboration with the Open Source Security Foundation. It earlier released the Secure AI Framework (SAIF), which emphasized that AI ecosystems need strong security foundations.
Google will also support a new effort by the non-profit MLCommons Association to develop standard AI safety benchmarks. The effort aims to bring together expert researchers across academia and industry to develop benchmarks that translate the safety of AI systems into scores everyone can understand.
Why does this matter?
While OpenAI’s focus seems to be shifting to broader AI risks, Google’s efforts take a collective-action approach. But both are incentivizing more security research (joining the likes of Microsoft), sparking even more collaboration with the open-source security community, outside researchers, and others in industry. That will help find and address novel vulnerabilities, making generative AI products safer and more secure.
Robot dog turns into a talking tour guide with ChatGPT
Named Spot, the four-legged robot can run, jump, and even dance. To make Spot “talk,” Boston Dynamics used OpenAI’s ChatGPT API, along with some open-source LLMs, to carefully train its responses. With ChatGPT, it can answer questions and generate responses about the company’s facilities while giving a tour.
Boston Dynamics also outfitted the bot with a speaker, added text-to-speech capabilities, and made its mouth mimic speech “like the mouth of a puppet”.
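For readers curious what wiring an LLM into a robot’s voice might look like, here’s a minimal sketch of the general idea: send a visitor’s question to the ChatGPT API with a tour-guide persona prompt, then speak the reply through a text-to-speech engine. The persona prompt, model choice, and the pyttsx3 library are our own illustrative assumptions, not Boston Dynamics’ actual setup.

```python
# Hypothetical sketch of an LLM-powered tour guide (not Boston Dynamics' code):
# ask the ChatGPT API for a short, in-character answer and speak it aloud.
from openai import OpenAI
import pyttsx3

client = OpenAI()      # reads OPENAI_API_KEY from the environment
tts = pyttsx3.init()   # off-the-shelf text-to-speech engine

SYSTEM_PROMPT = (
    "You are Spot, a friendly robot dog giving a tour of a robotics facility. "
    "Answer visitors' questions in one or two short, upbeat sentences."
)

def answer_and_speak(question: str) -> str:
    # Get a short, persona-flavored answer from the chat model.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    reply = response.choices[0].message.content
    # Play the reply through the robot's speaker via text-to-speech.
    tts.say(reply)
    tts.runAndWait()
    return reply

if __name__ == "__main__":
    answer_and_speak("What do you do in this part of the building?")
```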
Why does this matter?
This continues to push the boundaries of the intersection between AI and robotics. LLMs provide cultural context, general commonsense knowledge, and flexibility that could be useful for many robotics tasks.
Enjoying the daily updates?
Refer your pals to subscribe to our daily newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Knowledge Nugget: Do Language Models really understand Language?
Ever since deep learning started outperforming human experts on language tasks, one question has haunted debates about NLP/NLU models: do they understand language?
In this article, the author digs into the research to find an answer, referring to a variety of sources and doing some investigation of his own. Before evaluating the research, he defines what understanding is and how LLMs will be evaluated for it. Can we define a framework for evaluating intelligence, one we can refine to get to an answer? And many more questions and concepts.
Why does this matter?
The article gives a simple breakdown of how language models work and of their capabilities and limitations, while offering some valuable insights.
What Else Is Happening❗
🔍Forbes launches its own generative AI search platform built with Google Cloud.
The tool, Adelaide, is purpose-built for news search and offers AI-driven personalized recommendations and insights from Forbes’ trusted journalism. It is in beta and select visitors can access it through the website. (Link)
🗺️Google Maps is becoming more like Search– thanks to AI.
Google wants Maps to be more like Search, where people can get directions or find places but also enter queries like “things to do in Tokyo” and get actually useful hits and discover new experiences, guided by its all-powerful algorithm. (Link)
🎨Shutterstock will now let you edit its library of images using AI.
It revealed a set of new AI-powered tools, like Magic Brush, which lets you tweak an image by brushing over an area and describing what you want to add/replace/erase. Others include smart resizing feature and background removal tool. (Link)
🏛UK to set up world's first AI safety institute, says PM Rishi Sunak.
The institute will carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of, exploring all the risks from social harms like bias and misinformation through to the most extreme risks of all. (Link)
💼Intel is trying something different– selling specialized AI software and services.
Intel is working with multiple consulting firms to build ChatGPT-like apps for customers that don’t have the expertise to do it on their own. (Link)
That's all for now!
If you are new to The AI Edge newsletter, subscribe to get daily AI updates and news directly sent to your inbox for free!
Thanks for reading, and see you tomorrow. 😊