Adobe Unveils Its Most Powerful AI Image Generator Yet
Plus: Meta finally rolls out multimodal AI capabilities for its smart glasses, and Profluent’s OpenCRISPR-1 can edit the human genome.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 260th edition of The AI Edge newsletter. This edition features “Adobe Unveils Its Most Powerful AI Image Generator Yet.”
And a huge shoutout to our amazing readers. We appreciate you😊
In today’s edition:
🖼️ Firefly 3: Adobe’s best AI image generation model to date
👓 Meta finally rolls out multimodal AI capabilities for its smart glasses
🧬 Profluent’s OpenCRISPR-1 can edit the human genome
📚 Knowledge Nugget: Options for accessing Llama 3 from the terminal using LLM
Let’s go!
Firefly 3: Adobe’s best AI image generation model to date
Adobe has announced a major update to its AI image generation technology called Firefly Image 3. The model delivers noticeably more realistic, higher-quality images than previous versions. It better understands longer text prompts, generates more convincing lighting, and more accurately depicts challenging subjects like crowds and human expressions. Firefly Image 3 is now available through Adobe's Firefly web app and is integrated into Adobe Photoshop and InDesign.
It powers new AI-assisted features in these apps, such as generating custom backgrounds, creating image variations, and enhancing detail. Adobe has also introduced advanced creative controls like Structure Reference to match a reference image's composition and Style Reference to transfer artistic styles between images. Adobe also attaches "Content Credentials" to all Firefly-generated assets to promote responsible AI development.
Why does it matter?
In AI image generation, a more powerful model from a major player like Adobe could intensify competition with rivals like Midjourney and DALL-E. It may motivate other providers to accelerate their own model improvements to keep pace. For creative professionals and enthusiasts, access to such advanced AI tools could unlock new levels of creative expression and productivity.
Meta finally rolls out multimodal AI capabilities for its smart glasses; adds new features
Meta has announced exciting updates to its Ray-Ban Meta smart glasses collection. It is introducing new styles to fit a wider range of face shapes: the vintage-inspired Skyler frames, designed for smaller faces, and the Headliner frames with a low bridge option. Meta is also adding video calling via WhatsApp and Messenger, letting wearers share what they see during a call.
Meta is also integrating its multimodal AI assistant, Meta AI, into the Ray-Ban smart glasses. Users can interact with the glasses by saying "Hey Meta" and receive real-time information via voice; the multimodal AI can also translate text into different languages using the built-in camera. After a period of early-access testing, these capabilities are now available to everyone in the US and Canada.
Why does it matter?
Meta is pushing the boundaries of smart glasses technology, making them more versatile, user-friendly, and AI-powered. This could lead to increased mainstream adoption and integration of augmented reality wearables and voice-controlled AI assistants. Smart glasses could also redefine how people interact with the world around them, potentially changing how we work, communicate, and access information in the future.
Profluent’s OpenCRISPR-1 can edit the human genome
Profluent, a biotechnology company, has developed the world's first precision gene editing system built from AI-generated components. It trained LLMs on a vast dataset of CRISPR-Cas proteins to generate novel gene editors that greatly expand the natural diversity of these systems. OpenCRISPR-1 matched the widely used SpCas9 gene editor in on-target editing activity while showing a 95% reduction in off-target effects, meaning it can edit the human genome with high precision.
The researchers further improved OpenCRISPR-1 by using AI to design compatible guide RNAs, enhancing its editing efficiency. Profluent publicly released OpenCRISPR-1 to enable broader, ethical use of this advanced gene editing technology across research, agriculture, and therapeutic applications. By using AI-generated components, they aim to lower the cost and barriers to accessing powerful genome editing capabilities.
Why does it matter?
The ability to design custom gene editors with AI could dramatically accelerate innovation in gene editing, making these powerful technologies more precise, safer, and more accessible and affordable across a wide range of diseases. This could drive breakthroughs in personalized medicine, agriculture, and basic scientific research.
Enjoying the daily updates?
Refer your pals to subscribe to our daily newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Knowledge Nugget: Options for accessing Llama 3 from the terminal using LLM
In this article, the author discusses several options for accessing the new Llama 3 language model:
🔹 One easy option is to run the smaller 8B Instruct version of Llama 3 locally using the llm-gpt4all plugin. This requires an 8GB download and 8GB of RAM.
🔹 Faster hosted versions of Llama 3 are available from providers like Groq, which offers a free preview of its API.
🔹 The larger 70B Instruct version can be run locally using the llamafile tool, which requires a 37GB download and 64GB of RAM.
🔹 Beyond these, several paid API providers offer access to Llama 3 models, including Perplexity Labs, Anyscale, Fireworks AI, OpenRouter, and Together AI.
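As a rough sketch of the workflow described above, the commands below follow the LLM CLI's usual plugin conventions; the exact model IDs are assumptions, so check the output of `llm models` on your machine before running:

```shell
# Install the LLM CLI and the gpt4all plugin for local models
pip install llm
llm install llm-gpt4all

# List installed models to find the exact Llama 3 8B Instruct model ID
llm models

# Run a prompt against the local 8B Instruct model
# (model ID is illustrative; the first run downloads the ~8GB weights)
llm -m Meta-Llama-3-8B-Instruct "Three reasons to run models locally"

# Hosted alternative: Groq's fast API via the llm-groq plugin
llm install llm-groq
llm keys set groq   # paste your Groq API key when prompted
llm -m groq-llama3-70b "Three reasons to use a hosted model"
```

The same `llm -m <model>` interface works across local and hosted backends, which is what makes it easy to compare providers from the terminal.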
Why does it matter?
Since Llama 3 is openly licensed, multiple providers can offer access, potentially leading to increased competition and lower pricing. This could democratize access to cutting-edge language models, enabling more developers, researchers, and organizations to leverage these capabilities in their applications and projects. It could empower individuals and small teams to experiment with and build upon this technology locally, fostering innovation and new use cases.
What Else Is Happening❗
🤝 Coca-Cola and Microsoft partner to accelerate cloud and Gen AI initiatives
Microsoft and Coca-Cola announced a 5-year strategic partnership, where Coca-Cola has made a $1.1 billion commitment to the Microsoft Cloud and its generative AI capabilities. The collaboration underscores Coca-Cola’s ongoing technology transformation, underpinned by the Microsoft Cloud as Coca-Cola’s globally preferred and strategic cloud and AI platform. (Link)
👥 Cognizant and Microsoft team up to boost Gen AI adoption
Microsoft has teamed up with Cognizant to bring Microsoft’s Gen AI capabilities to Cognizant’s employees and users. Cognizant acquired 25,000 Microsoft 365 Copilot seats for its associates, plus 500 Sales Copilot and 500 Services Copilot seats. With these tools, Cognizant aims to transform business operations, enhance employee experiences, and deliver new customer value. (Link)
🧠 Amazon wishes to host companies’ custom Gen AI models
AWS wants to become the go-to place for companies to host and fine-tune their custom Gen AI models. Amazon Bedrock’s new Custom Model Import feature lets organizations import and access Gen AI models as fully managed APIs. Companies’ proprietary models, once imported, benefit from the same infrastructure as other generative AI models in Bedrock’s library. (Link)
🚀 OpenAI launches more enterprise-grade features for API customers
OpenAI expanded its enterprise features for API customers, further enriching its Assistants API and introducing new tools to enhance security and administrative control. The company has introduced Private Link, a secure method to enable direct communication between Azure and OpenAI. It has also added Multi-Factor Authentication (MFA) to bolster access control. (Link)
🤖 Tesla could start selling Optimus robots by the end of 2025
According to CEO Elon Musk, Tesla's humanoid robot, Optimus, may be ready to sell by the end of next year. Several companies have been betting on humanoid robots to meet potential labor shortages and perform repetitive tasks that could be dangerous or tedious in industries such as logistics, warehousing, retail, and manufacturing. (Link)
New to the newsletter?
The AI Edge keeps engineering leaders & AI enthusiasts like you on the cutting edge of AI. From machine learning to ChatGPT to generative AI and large language models, we break down the latest AI developments and how you can apply them in your work.
Thanks for reading, and see you tomorrow. 😊