Humane Officially Launches the AI Pin
Plus: OpenAI Data Partnerships, Adobe's groundbreaking 2D-to-3D AI model.
Hello Engineering Leaders and AI Enthusiasts!
Welcome to the 144th edition of The AI Edge newsletter. This edition brings you all the details about Humane’s AI Pin.
And a huge shoutout to our incredible readers. We appreciate you😊
In today’s edition:
🚀 Humane officially launches the AI Pin
🔥 OpenAI to partner with organizations for new AI training data
🤖 Adobe creates 3D models from 2D images ‘within 5 seconds’
📚 Knowledge Nugget: One LLM won't rule them all
Let’s go!
Humane officially launches the AI Pin
After months of demos and hints about what the AI-powered future of gadgets might look like, Humane finally took the wraps off its first device: the AI Pin. Here’s a TL;DR:
It is a $699 wearable in two parts: a square device and a battery pack that attaches magnetically to your clothes or other surfaces.
It requires a $24 monthly Humane subscription, which gets you a phone number and data coverage through T-Mobile’s network.
You control it with a combination of voice, gestures, a camera, and a small built-in projector.
The Pin’s primary job is to connect to AI models through software the company calls AI Mic. Humane’s press release mentions both Microsoft and OpenAI, and previous reports suggested the Pin was primarily powered by GPT-4; Humane says ChatGPT access is one of the device’s core features.
The device will start shipping in early 2024, and preorders begin November 16th.
Why does this matter?
Humane is essentially trying to strip away the interface cruft from technology. The Pin won’t have a home screen or lots of settings and accounts to manage; you can just talk to it.
Thanks to AI, a great deal of functionality has become available through a simple text command to a chatbot. Humane is trying to build a gadget in the same spirit. If it lives up to its lofty promises, AI may change the future of smartphones forever.
(Source)
OpenAI to partner with organizations for new AI training data
OpenAI is introducing OpenAI Data Partnerships, where it will work together with organizations to produce public and private datasets for training AI models.
Here’s the kind of data it is seeking:
Large-scale datasets that reflect human society and that are not already easily accessible online to the public today
Any modality, including text, images, audio, or video
Data that expresses human intention (e.g. conversations), across any language, topic, and format
It will also use its next-generation in-house AI technology to help organizations digitize and structure data.
It is not seeking datasets with sensitive or personal information, or information that belongs to a third party, though it can help organizations remove such information if needed.
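OpenAI hasn’t published any tooling for these partnerships, but the stated criterion of excluding personal information can be illustrated with a minimal sketch. Everything below (the regexes, the `screen_record` helper, the sample records) is hypothetical, not part of any OpenAI API:

```python
import re

# Hypothetical patterns for obvious personal identifiers in a text record.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def screen_record(text: str) -> bool:
    """Return True if the record looks free of obvious personal identifiers."""
    return not (EMAIL.search(text) or PHONE.search(text))

records = [
    "A conversation about cooking techniques.",
    "Contact me at jane.doe@example.com for details.",
]
# Keep only records that pass the (very rough) personal-info screen.
clean = [r for r in records if screen_record(r)]
```

A real pipeline would of course need far more than two regexes (names, addresses, IDs), but the shape — screen each record before contributing it — is the point.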
Why does this matter?
It is no secret that the datasets used to train AI models are deeply flawed and that quality data is scarce; models amplify these flaws in harmful ways. OpenAI now seems to want to combat this by partnering with outside institutions to create new, hopefully improved, datasets.
OpenAI claims this will help make AI maximally helpful, but there may also be a commercial motivation to stay at the top. We’ll just have to wait and see whether OpenAI does better than the many dataset-building efforts that came before.
Adobe creates 3D models from 2D images ‘within 5 seconds’
A team of researchers from Adobe Research and the Australian National University has developed a groundbreaking AI model that can transform a single 2D image into a high-quality 3D model in just 5 seconds.
Detailed in the research paper “LRM: Large Reconstruction Model for Single Image to 3D,” the model could revolutionize industries such as gaming, animation, industrial design, augmented reality (AR), and virtual reality (VR).
LRM can reconstruct high-fidelity 3D models from real-world images, as well as images created by AI models like DALL-E and Stable Diffusion. The system produces detailed geometry and preserves complex textures like wood grains.
Why does this matter?
LRM enables broad applications in many industries and use cases with a generic and efficient approach. This can make it a game-changer in the field of AI-driven 3D modeling.
Enjoying the daily updates?
Refer your pals to subscribe to our daily newsletter and get exclusive access to 400+ game-changing AI tools.
When you use the referral link above or the “Share” button on any post, you'll get the credit for any new subscribers. All you need to do is send the link via text or email or share it on social media with friends.
Knowledge Nugget: One LLM won't rule them all
The chips in most devices — microwaves, kettles, even your Apple Watch — aren’t general-purpose CPUs. They are application-specific integrated circuits (ASICs).
The same pattern will emerge with LLMs: rather than one all-powerful LLM, we’ll see a proliferation of smaller, application-specific LLMs.
In this interesting post, the author calls the emerging category of smaller, domain-specific models ASLMs (application-specific language models), following the nomenclature from chip design; GPT-style, general-purpose models are called GLMs (general language models). The post discusses what an ASLM is, why ASLMs beat GLMs, how to build ASLMs, and what this means for open source.
Why does this matter?
This article highlights a fundamental shift in AI: to thrive, open-source models must become more accessible, affordable, and specialized.
What Else Is Happening❗
📸Snap adds ChatGPT to its AR Lenses as AI becomes integral to products.
In a collaboration with OpenAI, Snap created the ChatGPT Remote API, granting Lens developers the ability to harness the power of ChatGPT in their Lenses. The new GenAI features simplify the creation process into one straightforward workflow in Lens Studio, rather than using several external tools. (Link)
💬GitLab expands its AI lineup with Duo Chat.
GitLab earlier unveiled Duo, a set of AI features to help developers be more productive. Today it added Duo Chat to this lineup: a ChatGPT-like experience that lets developers interact with a bot to access the existing Duo features in a more conversational way. Duo Chat is now in beta. (Link)
🤖OpenAI’s Turbo models to be available on Azure OpenAI Service by the end of this year.
On Azure OpenAI Service, token pricing for the new models will be at parity with OpenAI’s prices. Microsoft is also looking forward to building deep ecosystem support for GPTs, which it’ll share more about next week at the Microsoft Ignite conference. (Link)
💰Stability AI gets Intel backing in new financing.
Stability AI has raised new financing led by chipmaker Intel, a cash infusion that arrives at a critical time for the AI startup. It raised just under $50 million in the form of a convertible note in the deal, which closed in October. (Link)
🚀Picsart launches a suite of AI-powered tools for businesses and individuals.
The suite includes tools that let you generate videos, images, GIFs, logos, backgrounds, QR codes, and stickers. Called Picsart Ignite, it has 20 tools that are designed to make it easier to create ads, social posts, logos, and more. It will be available to all users across Picsart web, iOS, and Android. (Link)
That's all for now!
If you are new to The AI Edge newsletter, subscribe to get daily AI updates and news directly sent to your inbox for free!
Thanks for reading, and see you tomorrow. 😊