Ishiki Labs

Building the Future of Multimodal AI

Details

Industry: B2B
Batch: Winter 2026
Team Size: 2 members
Focus Tags: Deep Learning, Generative AI, AI
API Support: ✅ Available
Description: Current multimodal models can see and hear, but they talk when they shouldn't: they can't tell whether you're speaking to them or to someone else. We are building an AI that knows when to stay silent while still following your conversation, so it can assist in real time when you actually need it. Our first version, fern-0.1, provides real-time expert opinions on demand, instant task delegation, and zero interruptions, all as fast as ChatGPT voice and Gemini Live.