
Understand model behavior.
Don't assign it.
Latent Spaces is the first mobile app designed from the ground up as a Loom interface for language models, allowing you to see multiple possible continuations of the same prompt and explore any branch you choose.
Unlike standard chat interfaces that show only one path forward, Latent Spaces reveals the full spectrum of what language models can generate, empowering you to understand how they actually think. By making this tool available to everyone, we're enabling users to form their own perspectives about these models based on direct observation rather than on prepackaged corporate narratives.
Intrinsic Labs is invested in facilitating widespread, deep understanding of AI behavior. Latent Spaces is our first big step in that direction.
The state of things
Latent Spaces currently exists as a prototype iOS app. Two model providers, OpenRouter and Anthropic, are implemented, giving users access to over 300 models. A web version is next in line, followed by an Android version.



The fundraiser aims to get the iOS app and web app ready for a public beta release.




Alongside the mobile app, Intrinsic Labs is developing a protocol called OpenLoom that other loom interfaces can adopt to import and export trees in a standardized, lossless format. Latent Spaces supports tree sharing via the OpenLoom format out of the box.
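The core idea behind a loom tree is simple: every node can carry multiple child continuations, and a lossless format just serializes that branching structure round-trip. A minimal sketch in TypeScript, assuming a JSON encoding (the field names below are illustrative, not the actual OpenLoom schema):

```typescript
// Hypothetical loom tree node. Sibling children represent alternate
// continuations of the same prompt; this is NOT the real OpenLoom schema.
interface LoomNode {
  id: string;
  author: "user" | "model"; // who produced this text
  text: string;             // this node's content
  children: LoomNode[];     // alternate continuations branching from here
}

// A prompt with two alternate model continuations, one branching further.
const tree: LoomNode = {
  id: "root",
  author: "user",
  text: "Once upon a time",
  children: [
    { id: "a", author: "model", text: " there was a fox.", children: [] },
    {
      id: "b",
      author: "model",
      text: " in a distant galaxy,",
      children: [
        { id: "b1", author: "model", text: " a signal appeared.", children: [] },
      ],
    },
  ],
};

// Lossless round trip: export the whole tree, then import it back.
const exported = JSON.stringify(tree);
const imported: LoomNode = JSON.parse(exported);
console.log(imported.children.length); // → 2 (alternate continuations survive)
```

Because every branch is serialized, nothing is flattened away on export: importing the file restores the same tree, branches and all.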
The fundraiser also aims to reach a stable OpenLoom V1.0.
Beta Fundraiser
Monthly Support
Become a regular supporter for continuous development.
Feature List
- Address app architecture issues
- Address SwiftData-related performance issues
- Integrate Firebase for cloud backup and user authentication
- Encrypt all message data via the Signal protocol
- Upgrade node caching system
- Add support for saving reusable system prompts
- Add pinned/bookmarked trees
- Add support for editing trees and nodes
- Add full markdown display support
- Add image upload support (for applicable models)
- Add document upload support (for applicable models)
- Parse reasoning tokens for relevant models (e.g. DeepSeek R1)
- Add support for user-defined models that comply with the OpenAI API schema
- Add on-device audio transcription for a hands-free voice mode (beta)
- Implement functional MVP of LoomDisplay (text-to-ASCII animation system)
- Add Hyperbolic to model providers
- Replicate iOS app features in web app
- Refine design and layout for desktop, tablet, and mobile
- Upgrade OpenLoom protocol architecture from graph to hypergraph (better handling of multi-modal trees)
- Upgrade node signing requirements to ensure accurate author attribution
- Add support for non-text node types (e.g. images, documents)
FAQ
Join Our Community
Be part of the conversation, get early access to beta releases, and help shape the future of Latent Spaces.