OpenAI's New 'Preparedness Framework' to Track AI Risks
Plus: Google Research improves LLM performance, and NVIDIA’s new GAvatar creates realistic 3D avatars.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 171st edition of The AI Edge newsletter. This edition brings you “OpenAI's New ‘Preparedness Framework’ to Track AI Risks.”
And a huge shoutout to our incredible readers. We appreciate you😊
In today’s edition:
🔥 OpenAI's new ‘Preparedness Framework’ to track AI risks
🚀 Google Research’s new approach to improve LLM performance
🖼️ NVIDIA’s new GAvatar creates realistic 3D avatars
📖 Knowledge Nugget: Gen AI can supercharge your AppSec program
Let’s go!
OpenAI's new ‘Preparedness Framework’ to track AI risks
OpenAI published a new safety Preparedness Framework to manage AI risks. It is strengthening its safety measures by creating a safety advisory group and granting the board veto power over risky AI deployments. The new safety advisory group will provide recommendations to leadership, and the board will have the authority to veto leadership's decisions.
OpenAI's updated "Preparedness Framework" aims to identify and address catastrophic risks. The framework categorizes risks and outlines mitigations, with high-risk models prohibited from deployment and critical risks halting further development. The safety advisory group will review technical reports and make recommendations to leadership and the board, ensuring a higher level of oversight.
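The gating logic described above can be sketched as a simple policy check. Note that the risk category names, level names, and threshold rules below are illustrative assumptions for explanation only, not OpenAI's actual categories or implementation:

```python
# Illustrative sketch of threshold-based deployment gating, loosely modeled
# on the framework described above. Category and level names are assumptions,
# not OpenAI's actual policy engine.

RISK_LEVELS = ["low", "medium", "high", "critical"]

def gate(tracked_risks: dict[str, str]) -> dict[str, bool]:
    """Decide deployment/development permissions from per-category risk levels.

    Per the framework's described rules: a "high" risk blocks deployment,
    and a "critical" risk halts further development entirely.
    """
    worst = max(tracked_risks.values(), key=RISK_LEVELS.index)
    return {
        "may_deploy": RISK_LEVELS.index(worst) < RISK_LEVELS.index("high"),
        "may_continue_development": worst != "critical",
    }

# A model rated "high" on one tracked category may not be deployed,
# but development can continue until a risk becomes "critical".
decision = gate({"cybersecurity": "medium", "model_autonomy": "high"})
```

The key design point is that the gate looks at the *worst* category, so a single high-risk dimension is enough to block deployment regardless of how benign the others are.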
Why does this matter?
OpenAI's updated safety policies and oversight procedures demonstrate a commitment to responsible AI development. As AI systems grow more powerful, thoughtfully managing risks becomes critical. OpenAI's Preparedness Framework provides transparency into how they categorize and mitigate different types of AI risks.
Google Research’s new approach to improve LLM performance
Google Research released a new approach to improve the performance of LLMs at answering complex natural language questions. The approach combines knowledge retrieval with the LLM and uses a ReAct-style agent that can reason and act upon external knowledge.
The agent is refined through a ReST-like method that iteratively trains on previous trajectories, using reinforcement learning and AI feedback for continuous self-improvement. After just two iterations, a fine-tuned small model is produced that achieves comparable performance to the large model but with significantly fewer parameters.
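The ReST-like loop described above can be sketched roughly as follows. Everything here is a toy stand-in, not Google's implementation: trajectory quality is modeled as a number, the "AI feedback" is a simple threshold filter, and "fine-tuning" just aggregates the surviving trajectories:

```python
# Toy sketch of a ReST-like iterative self-improvement loop (an assumption
# for illustration, not Google's actual training code). Each iteration:
# sample trajectories from the current policy, keep only those ranked
# highly by a feedback function, and "fine-tune" on the survivors.
import random

def sample_trajectories(policy: float, n: int = 100) -> list[float]:
    """Stand-in for the agent generating reasoning/acting trajectories.
    A trajectory's quality is the current policy level plus noise."""
    return [policy + random.uniform(-0.2, 0.2) for _ in range(n)]

def ai_feedback_filter(trajs: list[float], threshold: float) -> list[float]:
    """Stand-in for AI-feedback ranking: keep only high-quality trajectories."""
    return [t for t in trajs if t >= threshold]

def fine_tune(policy: float, good_trajs: list[float]) -> float:
    """Stand-in for fine-tuning: move the policy toward the kept trajectories."""
    if not good_trajs:
        return policy
    return sum(good_trajs) / len(good_trajs)

def rest_loop(policy: float = 0.5, iterations: int = 2) -> float:
    for _ in range(iterations):
        trajs = sample_trajectories(policy)
        good = ai_feedback_filter(trajs, threshold=policy)  # keep above-average
        policy = fine_tune(policy, good)
    return policy

random.seed(0)
improved = rest_loop()  # quality rises because only good trajectories are kept
```

The essential idea survives even in this toy: because each round trains only on trajectories the feedback signal ranks above the current level, the policy ratchets upward over iterations without any new human labels.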
Why does this matter?
Having access to relevant external knowledge gives the system greater context for reasoning through multi-step problems. For the AI community, this technique demonstrates how the performance of language models can be improved by focusing on knowledge and reasoning abilities in addition to language mastery.
NVIDIA’s new GAvatar creates realistic 3D avatars
NVIDIA has announced GAvatar, a new technique for creating realistic, animatable 3D avatars using Gaussian splatting, which combines the advantages of explicit (mesh) and implicit (NeRF) 3D representations.
However, previous methods using Gaussian splatting had limitations in generating high-quality avatars and suffered from learning instability. To overcome these challenges, GAvatar introduces a primitive-based 3D Gaussian representation, uses neural implicit fields to predict Gaussian attributes, and employs a novel SDF-based implicit mesh learning approach.
GAvatar outperforms existing methods in terms of appearance and geometry quality and achieves fast rendering at high resolutions.
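The primitive-based representation described above can be sketched in a few lines of NumPy. Everything below is a hypothetical toy, not GAvatar's actual model: Gaussians sit at fixed offsets inside primitives, and a small random MLP stands in for the neural implicit field that predicts each Gaussian's attributes:

```python
# Hypothetical sketch (not NVIDIA's implementation): K primitives, each
# carrying M Gaussians at local offsets. A tiny MLP (stand-in for the
# neural implicit field) maps a Gaussian's local coordinate to its
# attributes (3 RGB + 1 opacity + 3 scale = 7 values).
import numpy as np

rng = np.random.default_rng(42)

K, M = 8, 16                                          # primitives, Gaussians each
local_offsets = rng.uniform(-1, 1, size=(K, M, 3))    # Gaussian positions inside primitives
primitive_centers = rng.uniform(-1, 1, size=(K, 1, 3))

W1 = rng.normal(size=(3, 32)); b1 = np.zeros(32)      # toy 2-layer MLP weights
W2 = rng.normal(size=(32, 7)); b2 = np.zeros(7)

def implicit_field(x: np.ndarray) -> np.ndarray:
    """Predict per-Gaussian attributes from local coordinates (toy MLP)."""
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # squash attributes to (0, 1)

attrs = implicit_field(local_offsets.reshape(-1, 3)).reshape(K, M, 7)
world_positions = primitive_centers + local_offsets   # animate by moving primitives
```

This illustrates why the scheme is attractive: attributes are a *function* of position rather than per-Gaussian free parameters, which regularizes learning, and animation reduces to transforming the primitives while the field is re-queried.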
Why does this matter?
This cleverly combines the best of mesh-based and neural approaches: meshes allow precise user control, while neural networks handle complex appearance and animation. By predicting avatar attributes with neural networks, GAvatar enables easy customization, and Gaussian splatting lets it reach new levels of realism.
Enjoying the daily updates?
Refer your pals to subscribe to our daily newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Knowledge Nugget: Gen AI can supercharge your AppSec program
This article discusses how Gen AI (generative AI) can be used to improve application security (AppSec) programs. It explores the potential of using Gen AI to automate tasks that involve analyzing and responding to content written in English, such as design reviews and risk assessments. The post provides a framework for security leaders to leverage Gen AI in their AppSec programs and suggests use cases such as threat modeling, delivering security standards, vendor risk management, and cyber risk assessments. It acknowledges the need for accuracy, the limitations of replacing humans, and the dependence on third-party models.
Why does this matter?
Applying generative AI to application security could significantly automate tedious but critical manual reviews. By quickly analyzing vast quantities of documentation, Gen AI systems may catch more issues and free up staff for higher-complexity assessments.
What Else Is Happening❗
🚀 Accenture launches GenAI Studio in Bengaluru, India, to accelerate data and AI
It's part of a $3 bn investment. The studio will offer services such as the proprietary GenAI model "switchboard," customization techniques, model-managed services, and specialized training programs. The company plans to double its AI talent to 80K people in the next three years through hiring, acquisitions, and training. (Link)
🧳 Expedia is looking to use AI to compete with Google's trip-planning business
Expedia wants to develop personalized customer recommendations based on their travel preferences and previous trips to bring more direct traffic. They aim to streamline the travel planning process by getting users to start their search on its platform instead of using external search engines like Google. (Link)
🤝 Jaxon AI partners with IBM Watsonx to combat AI hallucination in LLMs
The company's technology, Domain-Specific AI Language (DSAIL), aims to provide more reliable AI solutions. While AI hallucination in content generation may not be catastrophic in some cases, it can have severe consequences if it occurs in military technology. (Link)
👁️ AI-Based retinal analysis for childhood autism diagnosis with 100% accuracy
By analyzing photographs of children's retinas, the researchers' deep learning algorithm can detect autism, providing an objective screening tool for early diagnosis. This is especially useful when access to a specialist child psychiatrist is limited. (Link)
🌊 Conservationists using AI to help protect coral reefs from climate change
The Coral Restoration Foundation (CRF) in Florida has developed a tool called CeruleanAI, which uses AI to analyze 3D maps of reefs and monitor restoration efforts. AI allows conservationists to track the progress of restoration efforts more efficiently and make a bigger impact. (Link)
That's all for now!
Subscribe to The AI Edge and join the impressive list of readers that includes professionals from Moody’s, Vonage, Voya, WEHI, Cox, INSEAD, and other reputable organizations.
Thanks for reading, and see you tomorrow. 😊