Plus: Many-shot in-context learning is a breakthrough in improving LLM performance, and Groq shatters the AI inference speed record with 800 tokens/second on LLaMA 3
I don’t think Apple will keep the AI tools *only* on device for long. It’s cool that they are starting with all on-device processing, which should enable some novel use cases on the iPhone later this year. Where appropriate from a security and processing perspective, though, I think they’ll start adding cloud-computed functionality, enabled by their acquisitions of many AI startups over the last few years as well as their partnership with Google (a Gemini-powered Siri is definitely on the way).
I agree. On-device processing is a great foundation for data privacy, but cloud processing is an absolute necessity for advanced features. As I mentioned in the newsletter, I think we are going to see hybrid systems from Apple that handle basic functionality like text analysis and response generation offline while calling on more capable cloud-based AI for complex features.