2Device
Device-aware compute orchestration.
Run multiple ML models on mobile — without crashing.
Everything your device
needs to think
Foundational systems that transform any mobile device into an intelligent compute platform.
Device Profiler
Real-time hardware fingerprinting. Detects compute capabilities, memory topology, and accelerator availability — all in under 12ms.
Adaptive Memory Engine
Multi-tier resource management with deep OS-level integration on iOS and Android. Preemptive eviction before the system intervenes. Zero OOM kills.
Thermal Intelligence
Continuously adapts workload intensity based on real-time thermal state. Graceful scaling before the OS throttles you.
Dynamic Resource Scheduler
Intelligently routes workloads across threads and GPU cores in real time. Run 3+ models concurrently without contention.
Audio Pipeline
Full-stack voice processing with echo cancellation, activity detection, gain control, and noise suppression — all on-device, real-time, hardware-accelerated.
Secure OTA Policies
Cryptographically signed behavior policies delivered via our global edge network. Update limits and rules without app store review.
Three lines of code.
Infinite intelligence.
From install to production in minutes — not weeks.
Link the SDK
Static library + two headers. CMake, CocoaPods, or drop the XCFramework into Xcode. No runtime dependencies.
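For the CMake route, the wiring is a few lines per target. The artifact and path names below are placeholders, since the SDK's actual file names aren't shown here:

```cmake
# Hypothetical layout: prebuilt static library plus its two headers.
add_library(2device STATIC IMPORTED)
set_target_properties(2device PROPERTIES
    IMPORTED_LOCATION "${CMAKE_SOURCE_DIR}/vendor/2device/lib2device.a"
    INTERFACE_INCLUDE_DIRECTORIES "${CMAKE_SOURCE_DIR}/vendor/2device/include")

target_link_libraries(my_app PRIVATE 2device)
```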
Declare your models
Tell 2Device what you want to run and at what priority. It profiles the device and builds an optimal execution plan.
Call 2d_run()
Intelligent resource management, thermal awareness, and workload orchestration activate automatically. Ship with confidence.
Monitor & adapt
Optional telemetry and over-the-air policy updates. Tune model priorities and resource limits without shipping a new build.
Performance that
speaks for itself
Built for the
real world
Runtime
⚡ Universal API Surface
Single C-compatible interface — link from Swift, Kotlin, C++, Rust, or WASM. Drop it into any project in minutes.
📱 Deep OS Integration
Direct hardware queries on every major mobile platform. No wrappers, no bridging overhead — true system-level awareness.
🚀 Hardware Acceleration
Native SIMD and platform-specific acceleration paths. Every computation runs at the maximum speed your hardware allows.
🌐 Policy/Engine Split
Engine ships as a sealed binary. Behavior rules update over the air via our tamper-proof edge network — no app store review needed.
Simple pricing.
Scale when ready.
Free
Full SDK. Attribution required.
- All runtime features
- Community support
- Telemetry opt-in
- Free commercial license
Indie
No attribution. Priority support.
- Everything in Free
- Remove attribution
- Email support
- Analytics dashboard
Pro
Dashboard, policy CDN, SLA.
- Everything in Indie
- Edge policy delivery
- Custom dashboards
- Priority SLA
Ready to build
without limits?
Start running ML models on mobile today. Zero crashes, zero compromises.