Liquid AI just dropped LFM2-VL, and it feels like a turning point. Billed as the fastest, best-performing open-source small vision-language foundation models, they are built to run directly on phones, laptops, and even wearables. With up to 2× faster inference than comparable models, device-aware efficiency, and benchmark scores rivaling much larger systems, Liquid AI is showing that multimodal AI no longer needs the cloud. From smart cameras to offline assistants, this release makes the case that advanced vision-language AI can finally live on everyday devices.
Credit to: AI Revolution