AI Slashes Medical Diagnosis Time by 5,000x
Plus: NYT serves Perplexity AI with a copyright cease-and-desist, Anthropic raises the AI safety bar with a proactive policy, Nvidia’s new model beats GPT-4o and Claude 3.5, and more.
Hello Engineering Leaders and AI Enthusiasts!
This newsletter brings you the latest AI updates in just 4 minutes! Dive in for a quick summary of everything important that happened in AI over the last week.
And a huge shoutout to our amazing readers. We appreciate you😊
In today’s edition:
📰 NYT serves Perplexity AI with a copyright cease-and-desist
🛡️ Anthropic raises AI safety bar with responsible, proactive policy
🚀 Nvidia’s new AI model beats GPT-4o and Claude 3.5
📱 Mistral unveils new models for on-device AI computing
🧪 Newton AI self-learns physics principles from sensor data
🏥 AI reaches expert-level accuracy in complex medical scans
📚 Knowledge Nugget: Generative AI and the Legal Profession: Breaking the AI Adoption Logjam
Let’s go!
NYT serves Perplexity AI with a copyright cease-and-desist
The New York Times has issued a cease-and-desist letter to Perplexity, demanding that it stop using NYT content without authorization. The Times claims Perplexity has been generating AI-powered summaries from its articles in violation of copyright law.
Perplexity has defended its actions, saying it does not scrape content for AI training but rather indexes web pages and surfaces factual information. It argues that "no one organization owns the copyright over facts." Perplexity plans to respond to the notice by October 30.
Why does it matter?
The rise of AI-powered search poses a mounting threat to media outlets’ web traffic and advertising revenue. As AI increasingly summarizes journalistic content, publishers are left to navigate without clear legal frameworks.
Anthropic raises AI safety bar with responsible, proactive policy
Anthropic has significantly updated its Responsible Scaling Policy (RSP), introducing new safeguards and governance measures for advanced AI systems. The revised policy defines "Capability Thresholds": capability levels that, once a model reaches them, trigger stronger safety protocols.
The policy's tiered AI Safety Level (ASL) system, oversight from a dedicated Responsible Scaling Officer, and commitment to transparency are designed to ensure Anthropic's AI models do not cause large-scale harm, whether through malicious use or unintended consequences.
Why does it matter?
Anthropic's comprehensive framework sets a high bar for AI safety governance and reinforces its position as a leading safety-focused research lab. It also signals that the company is serious about responsible development even as it prepares to ship more AI products.
Nvidia’s new AI model beats GPT-4o and Claude 3.5
Nvidia has quietly released a new AI language model called Llama-3.1-Nemotron-70B-Instruct. The model outperformed industry-leading models such as GPT-4o and Claude 3.5 Sonnet on key benchmarks, achieving top scores on Arena Hard, AlpacaEval 2 LC, and MT-Bench (with GPT-4-Turbo as judge), showing it can provide more accurate responses than its competitors.
By fine-tuning the open-source Llama 3.1 model using techniques like Reinforcement Learning from Human Feedback, Nvidia has created a model that could offer businesses a more capable and cost-effective alternative to other leading language models on the market.
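For readers who want to try it, the weights are published on Hugging Face. Below is a minimal sketch using the transformers library, assuming the public checkpoint name nvidia/Llama-3.1-Nemotron-70B-Instruct-HF; running a 70B model in earnest requires multiple high-memory GPUs or a quantized variant.

```python
# A minimal sketch of querying the model locally via Hugging Face
# transformers, assuming the checkpoint "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF".
# device_map="auto" shards the 70B weights across available devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Nemotron is a fine-tune of Llama 3.1, so it uses the Llama chat template.
messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```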
Why does it matter?
While NVIDIA is famous for making computer chips, it's now surprising everyone with its AI models, too. Nemotron shows that a fine-tuned open model can match, and on some benchmarks beat, much larger proprietary ones while being more efficient to run.
Mistral unveils new models for on-device AI computing
Mistral AI has released two new compact language models: Ministral 3B and Ministral 8B. They are designed to run on edge devices like laptops and phones, providing powerful AI capabilities locally. On various benchmarks, they outperform Mistral’s own 7B model as well as comparably sized Llama and Gemma models.
The new models, called "Les Ministraux," are aimed at use cases that require local, privacy-first AI inference, such as on-device translation, offline assistants, and autonomous robotics. Their smaller size and efficient design make them well-suited for these applications.
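If you'd like to experiment locally, here is a minimal transformers sketch. It assumes the Hugging Face checkpoint name mistralai/Ministral-8B-Instruct-2410; at launch, Mistral released the 8B instruct weights for research use, while the 3B was available through its API.

```python
# A minimal local-inference sketch for Ministral 8B, assuming the
# checkpoint "mistralai/Ministral-8B-Instruct-2410". In bfloat16, an 8B
# model fits on a single ~24 GB GPU, which is what makes local,
# privacy-first deployment practical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Ministral-8B-Instruct-2410"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# One of the on-device use cases mentioned above: offline translation.
messages = [{"role": "user", "content": "Translate to French: 'The meeting is at noon.'"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```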
Why does it matter?
As anticipation builds for Apple Intelligence's deployment as a pioneering on-device AI solution, compact models optimized for local processing continue to advance. With the release of "Les Ministraux," premium language models in the palm of your hand may soon be the norm.
Newton AI self-learns physics principles from sensor data
Researchers at Archetype AI have developed an AI model called Newton that can learn physics principles directly from raw sensor data without being explicitly taught any physics laws. Newton was trained on a massive dataset of over half a billion sensor measurements across diverse physical phenomena like motion, electricity, fluid flows, and more.
Thanks to its zero-shot capabilities, the model adapts to new situations without additional training: it can accurately predict the behavior of physical systems it has never encountered, from the chaotic motion of a pendulum to citywide power consumption.
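Archetype AI has not published Newton's architecture, but the recipe the announcement describes, a single model trained to predict raw sensor readings with no physics equations built in, can be illustrated in a few lines. Everything below (class names, sizes, channel counts) is a hypothetical sketch, not Archetype's code.

```python
# Illustrative only: an autoregressive transformer trained to predict the
# next sensor reading. No physics laws appear anywhere; any regularities
# (oscillation, decay, flow) must be learned from the data itself.
import torch
import torch.nn as nn

class SensorForecaster(nn.Module):
    def __init__(self, n_channels=8, d_model=256, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)  # project raw readings
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_channels)   # next-step prediction

    def forward(self, readings):  # readings: (batch, time, channels)
        t = readings.size(1)
        # Causal mask so each step only attends to the past.
        mask = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(readings), mask=mask)
        return self.head(h)

model = SensorForecaster()
series = torch.randn(2, 128, 8)  # two windows of 8-channel sensor data
pred = model(series)
# Training signal: each prediction should match the next real reading.
loss = nn.functional.mse_loss(pred[:, :-1], series[:, 1:])
```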
Why does it matter?
Newton points toward a major shift in AI systems: replacing collections of specialized, task-specific models with a single unified model of the physical world, one that can generalize to diverse scenarios without human intervention.
AI reaches expert-level accuracy in complex medical scans
Researchers at UCLA have developed a breakthrough AI model called SLIViT (SLice Integration by Vision Transformer) that can analyze complex 3D medical scans, such as MRIs, CT scans, ultrasounds, and retinal imaging, with an accuracy matching that of expert radiologists.
SLIViT matches expert-level performance while analyzing scans roughly 5,000 times faster than human experts. Its key innovation is leveraging prior knowledge from large 2D medical image datasets to learn efficiently and make accurate predictions from relatively small 3D datasets.
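To make the 2D-to-3D transfer idea concrete, here is a minimal PyTorch sketch: a 2D backbone pretrained on large 2D image collections embeds each slice of a volume, and a small transformer integrates the slice embeddings into a volume-level prediction. This illustrates the general approach, not the authors' exact SLIViT architecture; the backbone choice and sizes below are assumptions.

```python
# Illustrative sketch: pretrained 2D backbone per slice + transformer
# across slices. ImageNet weights stand in for pretraining on large 2D
# medical image datasets.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights

class SliceIntegrationModel(nn.Module):
    def __init__(self, embed_dim=768, num_classes=2):
        super().__init__()
        backbone = convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
        backbone.classifier = nn.Flatten(1)  # keep pooled 768-d features
        self.backbone = backbone
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.slice_transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, volume):  # volume: (batch, slices, 3, H, W)
        b, s = volume.shape[:2]
        feats = self.backbone(volume.flatten(0, 1))  # embed every slice: (b*s, 768)
        feats = feats.view(b, s, -1)                 # regroup by scan: (b, s, 768)
        feats = self.slice_transformer(feats)        # mix information across slices
        return self.head(feats.mean(dim=1))          # pool slices, classify the volume

model = SliceIntegrationModel()
scan = torch.randn(1, 32, 3, 224, 224)  # one 32-slice scan, slices as 3-channel images
print(model(scan).shape)                # torch.Size([1, 2])
```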
Why does it matter?
SLIViT revolutionizes healthcare by delivering rapid, precise medical image analysis with minimal data, enabling faster diagnoses. Its affordability and expert-level accuracy empower resource-limited providers to access advanced diagnostics, democratizing healthcare access.
Enjoying the latest AI updates?
Refer your pals to subscribe to our newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Knowledge Nugget: Generative AI and the Legal Profession: Breaking the AI Adoption Logjam
In this post, the author notes that the legal profession is seeing a significant shift in AI adoption. In 2023, around 20% of law firms were using or considering AI, while an equal percentage were not interested. The 2024 data, however, shows a dramatic change: over 50% of top law firms have purchased a generative AI solution, and 45% are using it for legal matters. The author's takeaway is that the initial hesitation around AI adoption is diminishing, driven by competitive pressure and AI's proven benefits.
Why does it matter?
This rapid shift could be a turning point for the legal industry. As more firms adopt generative AI, it may reshape how legal services are delivered and set new industry standards. Early adopters could gain an edge, while others risk falling behind.
What Else Is Happening❗
🚀Meta FAIR has released new models and tools, including an improved image segmentation model, a multimodal language model, and methods to accelerate LLM inference.
📖Penguin Random House, a major book publisher, now explicitly prohibits the use of its books for training artificial intelligence systems, stating so on its copyright pages.
⚖️Google AI Studio's new Compare Mode allows users to evaluate different Gemini models side-by-side, making selecting the best model for their use case easier.
🤖Microsoft's Copilot AI now allows businesses to create their own "autonomous agents" to understand work tasks and act on a user's behalf, boosting productivity.
⚠️The producers of ‘Blade Runner 2049’ have sued Elon Musk, Tesla, and Warner Bros. Discovery for allegedly using copyrighted images from the film without permission.
⚙️Elon Musk's AI startup, xAI, has launched an API for its Grok generative AI model. The API allows developers to integrate Grok into their tools and applications.
🍏iOS 18.1 will launch next week with new features: Apple Intelligence and the ability to use AirPods Pro 2 as hearing aids.
🎨Midjourney plans to release a web tool allowing users to edit uploaded images using its generative AI, raising concerns about potential misuse and misinformation.
🧰Perplexity AI releases new features for paid users, including internal knowledge search and a collaboration tool called Spaces to organize information and research.
🔒X has updated its privacy policy to allow third parties to train AI models on user posts unless users opt out of the default settings.
New to the newsletter?
The AI Edge keeps engineering leaders & AI enthusiasts like you on the cutting edge of AI. From machine learning to ChatGPT to generative AI and large language models, we break down the latest AI developments and how you can apply them in your work.
Thanks for reading, and see you next week! 😊