OpenAI’s AI Safety Team Is No More
Plus: Sony Music warns over 700 AI companies not to steal its content, Meta's Chameleon AI sets a new bar in mixed-modal reasoning.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 278th edition of The AI Edge newsletter. This edition brings you the story of OpenAI dismantling its "superalignment" team.
And a huge shoutout to our amazing readers. We appreciate you😊
In today’s edition:
🤖 OpenAI's "superalignment team," focused on AI risks, is no more
🚫 Sony Music warns over 700 AI companies not to steal its content
🦎 Meta's Chameleon AI sets a new bar in mixed-modal reasoning
📚 Knowledge Nugget: The AI doppelgänger experiment – Part 1: The training by Posture
Let’s go!
OpenAI's "superalignment team," focused on AI risks, is no more
The team's co-leads, Ilya Sutskever and Jan Leike, have resigned from OpenAI. Several other researchers from the team and those working on AI policy and governance have also left the company. Leike cited disagreements with OpenAI's leadership about the company's priorities and resource allocation as reasons for his departure.
(Source)
The team's work will be absorbed into OpenAI's other research efforts, with John Schulman leading research on risks associated with more powerful models.
Why does this matter?
The "superalignment" team was for ensuring the artificial general intelligence (AGI) that OpenAI claims to be working on doesn't turn on humankind. This dismantling raises questions on the company's commitment to AI safety and ethical standards.
Sony Music warns over 700 AI companies not to steal its content
Sony Music, home to superstars like Billy Joel and Doja Cat, sent letters to over 700 AI companies and streaming platforms, warning them against using its content without permission. The label called out the "training, development, or commercialization of AI systems" that use copyrighted material, including music, art, and lyrics.
Sony Music Group (SMG) recognizes AI's potential but stresses the need to respect songwriters' and artists' rights. The letter asks companies to confirm that they haven't used SMG content without permission or to provide details if they have.
Why does this matter?
The battle over music copyright and AI has intensified across various platforms, from YouTube's strict rules for AI-generated music to the recent standoff between Universal Music Group and TikTok. As AI voice clones and music generation tools become more sophisticated, artists are raising questions about control, compensation, and recourse against copyright infringement.
Meta's Chameleon AI sets a new bar in mixed-modal reasoning
Meta AI introduces Chameleon, a family of early-fusion, token-based mixed-modal models that understand and generate images and text in any order. Unlike recent foundation models that process text and images separately, Chameleon's unified token space lets it process interleaved image and text sequences, enabling seamless reasoning and generation across modalities.
Meta researchers introduced architectural enhancements and training techniques to tackle the optimization challenges posed by this early-fusion approach, including a novel image tokenizer, QK-Norm, dropout, and z-loss regularization. Remarkably, Chameleon achieves competitive or superior performance across various tasks, outperforming much larger models like Flamingo-80B and IDEFICS-80B in image captioning and visual question answering.
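To make "early fusion" concrete, here is a minimal PyTorch sketch of the core idea: text tokens and discretized image tokens share one vocabulary and one transformer, so a single model can consume and emit an interleaved stream. All class names, vocabulary sizes, and dimensions below are illustrative assumptions, not Meta's actual Chameleon code (which is a causal decoder with the QK-Norm and z-loss tricks mentioned above).

```python
# Illustrative sketch of early fusion: one shared token space, one transformer.
# Names and sizes are made up for illustration; this is not Meta's implementation.
import torch
import torch.nn as nn

TEXT_VOCAB = 65_536   # assumed text vocabulary size
IMAGE_VOCAB = 8_192   # assumed image-tokenizer codebook size
VOCAB = TEXT_VOCAB + IMAGE_VOCAB  # unified vocabulary over both modalities

class EarlyFusionModel(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)      # one table for all tokens
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)          # can emit text or image tokens

    def forward(self, tokens):
        # tokens: (batch, seq) of mixed text/image ids; causal masking is
        # omitted here for brevity.
        return self.head(self.backbone(self.embed(tokens)))

# Build an interleaved sequence: text ids occupy [0, TEXT_VOCAB) and image
# ids are offset into [TEXT_VOCAB, VOCAB), so the model sees one stream.
text_ids = torch.randint(0, TEXT_VOCAB, (1, 16))
image_ids = torch.randint(0, IMAGE_VOCAB, (1, 32)) + TEXT_VOCAB
mixed = torch.cat([text_ids, image_ids, text_ids], dim=1)  # text → image → text

logits = EarlyFusionModel()(mixed)  # shape: (1, 64, VOCAB)
```

Offsetting image-token ids past the text vocabulary is what lets one embedding table and one output head serve both modalities, which is the essence of the early-fusion design.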
Why does this matter?
Chameleon opens up new possibilities for more natural and intuitive human-machine interactions, similar to how we effortlessly communicate using both modalities in the real world.
Enjoying the daily updates?
Refer your pals to subscribe to our daily newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Knowledge Nugget: The AI doppelgänger experiment – Part 1: The training
Posture, an illustrator and anthropologist, explores AI and art and the differences between human and machine ways of seeing style. He interviewed computer scientists about "style transfer" in generative AI and found that they are interested in style precisely because it challenges machine learning models. He highlights that the success of style transfer depends not only on the algorithm but also on the "intuitive" perception of the human viewer, which suggests that machine and human ways of seeing artistic style are intertwined and co-dependent.

Posture collaborated with Sitong, a PhD researcher in human-computer interaction, to investigate this idea further. Together, they conducted an experiment in which illustrators trained an AI model on their own artwork and then used that model to generate new images. The process revealed several insights:
The difficulty of accurately captioning and labeling datasets for training AI models.
The strange, almost uncanny feeling artists experience when seeing their personal style replicated by an AI.
The blurred lines between an artist's personal identity and their commercially commodified artistic style in the age of generative AI.
Why does this matter?
These findings are particularly relevant in light of British artist FKA Twigs' testimony in front of the US Senate about her experience with AI. Twigs emphasized the deep intertwining of her identity and her artistic work and the potential economic implications of AI-generated content. Experiments like this can help artists and tech experts have more informed technological, legal, and social conversations about AI and creativity.
What Else Is Happening❗
🤖 Google launched open-source Model Explorer to visualize and debug complex AI models
It uses advanced graphics rendering techniques from the gaming industry to handle massive models. The tool offers a graphical user interface and a Python API for integration into machine learning workflows. Model Explorer lets developers identify and resolve issues quickly, especially for AI deployed on edge devices. (Link)
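For readers who want to try the Python API, here is a minimal usage sketch. The package name and visualize() call are based on Google's launch announcement as best we recall, so treat them as assumptions and verify against the official docs.

```python
# Minimal Model Explorer usage sketch; package name and call signature are
# assumptions based on Google's announcement — check the official docs.
# pip install ai-edge-model-explorer
import model_explorer

# Opens the browser-based visualizer for a saved model (path is a placeholder).
model_explorer.visualize("path/to/model.tflite")
```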
🇬🇧 The UK's AI Safety Institute is opening an office in San Francisco
The institute aims to be closer to the epicenter of AI development and to companies like OpenAI and Google that are building foundation models. The new office is slated to open this summer, giving the UK access to Silicon Valley's tech talent and strengthening ties with the US. (Link)
📂 The EU demands that Microsoft provide internal documents on Bing's gen AI risks
The Commission suspects Bing may have breached the Digital Services Act (DSA) due to risks like AI "hallucinations," deep fakes, and potential voter manipulation. Microsoft has until May 27 to comply with the legally binding request for information. Failure to do so could result in fines of up to 1% of Microsoft's total annual income or worldwide turnover. (Link)
📸 Snapchat CEO Evan Spiegel focuses on AI and ML for better UX and personalization
As its ad revenue increases, Snap plans to expand content offerings, improve recommendation algorithms, and integrate Stories with Spotlight. The company is also investing in augmented reality and sees it as a way to bring people together in shared physical environments. (Link)
😏 Researchers in the Netherlands have developed an AI sarcasm detector
The AI was trained on text, audio, and emotional content from US sitcoms, including Friends and The Big Bang Theory. The AI could detect sarcasm in unlabeled exchanges nearly 75% of the time. Further improvements could come from adding visual cues to the AI's training data. (Link)
New to the newsletter?
The AI Edge keeps engineering leaders & AI enthusiasts like you on the cutting edge of AI. From machine learning to ChatGPT to generative AI and large language models, we break down the latest AI developments and how you can apply them in your work.
Thanks for reading, and see you tomorrow. 😊