Google & Microsoft Battle to Lead Healthcare AI
Plus: the impact of poisoning LLM supply chains, and how language models use long contexts.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 59th edition of The AI Edge newsletter. This edition brings you the battle between Google and Microsoft to win the AI race in healthcare.
A huge shoutout to our incredible readers. We appreciate you! 😊
In today’s edition:
🌍 Google & Microsoft battle to lead healthcare AI
⚠️ The impact of poisoning LLM supply chains
🧠 How language models use long contexts
📚 Knowledge Nugget: Your go-to guide to master prompt engineering in LLMs
Let’s go!
Google & Microsoft battle to lead healthcare AI
Reportedly, Google’s Med-PaLM 2 (an LLM for the medical domain) has been in testing at the Mayo Clinic research hospital. In April, Google announced limited access to it for select Google Cloud customers to explore use cases and share feedback, in order to investigate safe, responsible, and meaningful ways to use it.
Meanwhile, Google’s rivals moved quickly to incorporate AI advances into patient interactions. Hospitals are beginning to test OpenAI’s GPT algorithms, delivered through Microsoft’s cloud service, on several tasks. Google’s Med-PaLM 2 and OpenAI’s GPT-4 scored similarly on medical exam questions, according to research each company released independently.
Why does this matter?
It seems Google and Microsoft are racing to translate recent AI advances into products that clinicians will use widely. The AI field has seen rapid advances across diverse domains, but turning them into widely available, impactful products is often slow and challenging given the complexity of real-world applications. A competitive landscape like this accelerates that translation.
(Source)
The impact of poisoning LLM supply chains
LLMs are gaining massive recognition worldwide. However, no solution currently exists to verify the data and algorithms used during a model’s training. To showcase the impact of this, Mithril Security undertook an educational project, PoisonGPT, aimed at showing the dangers of poisoning LLM supply chains.
It shows how one can surgically modify an open-source model and upload it to Hugging Face to make it spread misinformation while being undetected by standard benchmarks.
Mithril Security is also working on AICert, a soon-to-launch solution to trace models back to their training algorithms and datasets.
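To make the supply-chain risk concrete: one basic mitigation (a simplified sketch, unrelated to AICert’s actual design; the file name and pinned hash below are placeholders) is to pin a cryptographic hash of model weights at vetting time and refuse to load anything that no longer matches:

```python
# Minimal sketch: verify a downloaded weights file against a pinned SHA-256
# hash before loading it, so a silently swapped model fails loudly.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large weight files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(path: Path, pinned_hash: str) -> bool:
    """Return True only if the file matches the hash recorded when it was vetted."""
    return sha256_of(path) == pinned_hash

# Usage (placeholder names): refuse to load unverified weights.
# if not verify_weights(Path("model.safetensors"), PINNED_HASH):
#     raise RuntimeError("weights do not match the vetted release")
```

This only proves the bytes are unchanged since vetting; it says nothing about how the model was trained, which is the gap tools like AICert aim to close.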
Why does this matter?
LLMs still resemble a vast, uncharted territory where many companies and users turn to external parties and pre-trained models for training and data. This carries the inherent risk of applying malicious models to their use cases, exposing them to safety issues. The project highlights how much awareness is needed around securing LLM supply chains.
How language models use long contexts
LLM vendors are fiercely competing to claim the title of having the biggest context window. Recently, Anthropic made headlines for expanding Claude’s context window to 100K tokens. But does a bigger context window always lead to better results?
New research finds significant insights as well as limitations related to long contexts. It reveals that:
Language models often struggle to use information in the middle of long input contexts
Their performance decreases as the input context grows longer
The performance is often highest when relevant information occurs at the beginning or end of the input context
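The setup behind these findings can be illustrated with a small sketch (the documents and question below are invented for illustration, and no model is actually called): build the same multi-document QA prompt several times, moving the one relevant document from the start, to the middle, to the end of the context.

```python
# Sketch of a position-sensitivity probe: the same question is asked over
# the same documents, with only the relevant document's position changing.

def build_prompt(question: str, relevant_doc: str,
                 distractors: list[str], position: int) -> str:
    """Insert the relevant document at `position` among distractors,
    then append the question."""
    docs = distractors[:position] + [relevant_doc] + distractors[position:]
    numbered = "\n\n".join(f"Document [{i + 1}]: {d}" for i, d in enumerate(docs))
    return f"{numbered}\n\nQuestion: {question}\nAnswer:"

distractors = [f"Unrelated fact number {i}." for i in range(1, 10)]
relevant = "The Eiffel Tower is 330 metres tall."
question = "How tall is the Eiffel Tower?"

# Relevant document at the start, middle, and end of the context:
for pos in (0, len(distractors) // 2, len(distractors)):
    prompt = build_prompt(question, relevant, distractors, pos)
    print(len(distractors) + 1, "documents, relevant at index", pos)
```

A full evaluation would send each prompt to a model and compare accuracy across positions; per the findings above, accuracy tends to be highest at the start and end and lowest in the middle.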
Why does this matter?
While recent language models can take long contexts as input, relatively little is known about how well they use them. This research provides a better understanding and proposes new evaluation protocols for future long-context models. It can also help model builders up their game and enable users to interact with models more effectively.
Knowledge Nugget: Your go-to guide to master prompt engineering in LLMs
Prompt engineering significantly impacts the responses from an LLM, because the trick lies in understanding how models process inputs and tailoring those inputs for optimal results.
In this article, the author explores this crucial area of working with LLMs and explains the concept using an interesting parrot analogy. The article also covers when to use prompt engineering, the types of prompt engineering, and how to pick the one best suited for you.
Why does this matter?
Using the insights from this article, companies and users can determine the best prompt engineering techniques to get the most out of their LLMs, ensuring high-quality responses for use cases like customer service.
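As a toy illustration of how the same task can be framed differently (the prompts below are invented for this sketch, not taken from the article), compare a zero-shot prompt, which relies on instructions alone, with a few-shot prompt, which prepends worked examples so the model can infer the expected format:

```python
# Toy sketch: zero-shot vs few-shot framing of one classification task.

task = "Classify the sentiment of the review as Positive or Negative."
review = "The battery dies within an hour. Very disappointing."

# Zero-shot: instructions only.
zero_shot = f"{task}\n\nReview: {review}\nSentiment:"

# Few-shot: worked examples demonstrate the desired output format.
examples = [
    ("Absolutely love this phone, the camera is stunning.", "Positive"),
    ("Stopped working after two days. Waste of money.", "Negative"),
]
demos = "\n\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
few_shot = f"{task}\n\n{demos}\n\nReview: {review}\nSentiment:"

print(few_shot)
```

Which framing wins depends on the model and task, which is exactly the kind of trade-off the article’s guide helps you navigate.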
What Else Is Happening❗
🍎AI image recognition model powers robot apple harvester!(Link)
📝YouTube tests AI-generated quizzes on educational videos(Link)
🚀Official code for DragDiffusion is released, check it out!(Link)
💼TCS scales up Microsoft Azure partnership, to train 25,000 associates(Link)
🔒Shutterstock continues generative AI push with legal protection for enterprise customers(Link)
🛠️ Trending Tools
Box AI: Simplify AI with one-click toolbox for diverse capabilities. User-friendly interface for all tech levels.
Telesite: Free, easy-to-use mobile site builder. AI-powered features for stunning mobile websites in minutes.
AI Postcard Generator: Build personalized postcards based on location and recipient. Tailor with three keywords.
SocialBook Photostudio: Powerful AI design tools for professional photo editing and creative effects.
InsightJini: Upload data for instant insights and visualizations. Ask questions in natural language for answers and charts.
Speak AI: Learn languages, practice scenarios, and receive grammar corrections with an AI-powered language app.
Ask my docs: AI-powered assistant for precise answers from documentation. Boost productivity and satisfaction.
Disperto: AI content creator, chatbot, and personalized assistant in one. Smarter, faster, and more efficient communication.
That's all for now!
If you are new to ‘The AI Edge’ newsletter, subscribe to receive the ‘Ultimate AI tools and ChatGPT Prompt guide’ specifically designed for Engineering Leaders and AI enthusiasts.
Thanks for reading, and see you tomorrow. 😊