💧 LFM2 Edge

Run Liquid Foundation Models entirely on your device. Browser, desktop, mobile. Zero cloud dependency. Complete privacy.

🔒 Privacy Mode Active • Zero Telemetry • Local Storage Only

🎯 Deployment Targets

🌐 Web Browser
Run LFM2 directly in the browser with WebGPU acceleration. No install step, instant deployment.
React · Vite · ONNX Runtime · WebGPU

🖥️ Desktop (Tauri)
A native desktop app for Windows, macOS, and Linux: a ~5 MB binary with native file-system access.
Tauri v2 · Rust · Native GPU

💻 Desktop (Electron)
A full-featured desktop app with Node.js integration and a familiar development experience.
Electron · Node.js · Chromium

📱 Mobile (React Native)
iOS and Android via Expo, using ONNX Runtime Mobile for efficient on-device inference.
React Native · Expo · ONNX Mobile
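All four targets run the exported model through an ONNX Runtime variant (onnxruntime-web with the WebGPU execution provider in the browser, ONNX Runtime Mobile on devices). As a rough sketch of the shared pattern, here is the load-with-fallback idea in Python; the helper names and the preferred-provider order are illustrative, not the project's actual code:

```python
# Execution-provider selection for ONNX Runtime (Python flavor; in the
# browser, onnxruntime-web probes "webgpu" and falls back to "wasm"
# the same way). Provider strings are standard onnxruntime identifiers.

def pick_providers(available, preferred=("CUDAExecutionProvider",
                                         "CoreMLExecutionProvider",
                                         "CPUExecutionProvider")):
    """Keep preferred providers that this ONNX Runtime build actually offers."""
    return [p for p in preferred if p in available]

def make_session(model_path: str):
    """Create an inference session with GPU-first provider fallback."""
    import onnxruntime as ort  # deferred so pick_providers stays dependency-free
    providers = pick_providers(ort.get_available_providers())
    return ort.InferenceSession(model_path, providers=providers)
```

The point of the fallback list is that one deployment artifact can run everywhere: accelerated where a GPU provider exists, and on plain CPU/WASM otherwise.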

🔧 Model Pipeline

🧠 LFM2 PyTorch (350M params) → 📦 ONNX Export (opset 17) → Quantize (INT8/FP16) → 🚀 Deploy (WebGPU/WASM)

🔐 Privacy Features

🔒 AES-256 Encryption
All conversations are encrypted at rest using the Web Crypto API.

🏠 100% Local Inference
The model runs on your device. No data ever leaves it.

📵 Zero Telemetry
No analytics, no tracking, no usage-data collection.

💾 Export Your Data
Full data portability: export your conversations anytime.
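In the app itself, at-rest encryption goes through the browser's Web Crypto API (`crypto.subtle`). The same AES-256-GCM scheme can be sketched in Python with the `cryptography` package standing in; the function names and the nonce-prepended blob layout are illustrative choices, not the project's wire format:

```python
# AES-256-GCM at-rest encryption sketch (`cryptography` package standing
# in for the browser's crypto.subtle). A fresh 96-bit nonce is generated
# per message and prepended to the ciphertext.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_conversation(key: bytes, plaintext: bytes) -> bytes:
    """Return nonce || ciphertext (ciphertext includes the GCM auth tag)."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_conversation(key: bytes, blob: bytes) -> bytes:
    """Split off the nonce and decrypt; raises if the blob was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # 32-byte AES-256 key
blob = encrypt_conversation(key, b"user: hello, model!")
```

GCM gives authenticated encryption, so a tampered conversation blob fails to decrypt instead of silently yielding garbage.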

Ready to Deploy?

Clone the repository, run the notebooks, and deploy LFM2 to any edge device.

📂 View on GitHub