Complete listing of all my GitHub repositories and projects
This repository - a comprehensive portfolio index of my tools, tech stacks, frameworks, and projects.
Edge-based audio intelligence: Privacy-first transcription, text-to-speech, and acoustic modeling without cloud dependencies.
Optimizing OpenAI Whisper for browser and mobile. Local speech-to-text using ONNX Runtime WebGPU. 100% private, zero server cost.
Fast, local text-to-speech engine. Optimized for low-power devices and browsers. High-quality neural voices without an internet connection.
Cutting-edge research implementations in evolutionary AI, multi-agent coordination, and scaling principles.
Genetic evolution for AI personalities. Self-optimizing agents with liquid memory, ML analytics, and evolutionary optimization. Beyond prompt engineering.
Implementation of agent coordination architectures and scaling principles from "Towards a Science of Scaling Agent Systems" (arXiv:2512.08296). Research-backed multi-agent framework with benchmarks.
Intelligent AI routing & context engine with 3-tier routing (BitNet → LFM → Cloud). Validated against agent scaling laws (arXiv:2512.08296). Features architecture selection, error amplification detection, and cost optimization for multi-agent workflows.
Benchmark Results: Efficiency: 1.000 (100% of target), Error Amplification: 1.00x (perfect), Success Rate: 100%
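To make the 3-tier idea concrete, here is a minimal sketch of routing by estimated task complexity: simple prompts go to a tiny on-device model, moderate ones to a mid-size local model, and hard ones to a cloud model. The complexity heuristic, thresholds, and tier names are illustrative, not the actual router's logic.

```python
# Minimal sketch of 3-tier routing (BitNet -> LFM -> Cloud).
# The complexity heuristic and thresholds here are illustrative only.

def estimate_complexity(prompt: str) -> float:
    """Crude complexity proxy: prompt length plus reasoning keywords."""
    keywords = ("prove", "analyze", "multi-step", "plan")
    score = min(len(prompt) / 500, 1.0)
    if any(k in prompt.lower() for k in keywords):
        score = min(score + 0.5, 1.0)
    return score

def route(prompt: str) -> str:
    """Pick a tier by thresholding the complexity score."""
    c = estimate_complexity(prompt)
    if c < 0.3:
        return "bitnet"   # tiny on-device model
    if c < 0.7:
        return "lfm"      # mid-size local model
    return "cloud"        # frontier API model
```

A real router would also weigh per-tier cost and observed error rates, falling back to a higher tier when a lower one fails validation.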
Genetic algorithms for agent personality optimization
Coordination overhead in multi-agent systems
Self-optimizing neural network architectures
Predictive analytics for agent behavior
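The first research topic above can be sketched in a few lines: a genetic algorithm that evolves a "personality" vector (e.g. curiosity, caution, verbosity) toward a target profile via selection, crossover, and mutation. The fitness function, target vector, and hyperparameters are illustrative assumptions, not the actual project's configuration.

```python
import random

# Minimal sketch of genetic optimization of an agent "personality" vector.
# TARGET, fitness, and all hyperparameters are illustrative.

TARGET = [0.8, 0.2, 0.5]  # e.g. curiosity, caution, verbosity

def fitness(genome):
    """Negative squared distance to the target profile."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Gaussian perturbation, clamped to [0, 1]."""
    return [min(1.0, max(0.0, g + random.gauss(0, rate))) for g in genome]

def crossover(a, b):
    """Uniform crossover: pick each gene from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(pop_size=30, generations=60):
    random.seed(0)
    pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the top half unchanged
        pop = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=fitness)
```

In practice the fitness signal would come from evaluated agent behavior (task success, user feedback) rather than a fixed target vector.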
Repository listings for custom tools and utilities will appear here.
Repository listings for tech stack implementations will appear here.
Repository listings for framework implementations will appear here.
Notebooks, tools, and resources for LLM fine-tuning, distillation, GGUF quantization, and model optimization.
ABE-41M training notebooks and fine-tuning tools. Includes advanced training configurations and unified training pipeline.
LLM security tooling for detecting poisoned models and training data. Adversarial robustness testing.
Official Liquid Foundation Models cookbook. Fine-tune LFM2, deploy to iOS/Android/Edge, local agentic workflows, and LEAP SDK tutorials.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). Supports LoRA, QLoRA, PEFT, DPO, and more. 64.2k+ stars.
LLM inference in C/C++. The definitive GGUF quantization toolkit. Supports 2-8 bit quantization for efficient deployment.
Fine-tune Llama 3.3, Mistral, Phi-4, and Gemma LLMs 2-5x faster with 80% less memory! Free Colab notebooks included.
Go-to fine-tuning framework. Streamlined config-based training for LLaMA, Mistral, Falcon, and more.
Research code for language model operations including knowledge distillation, model compression, and efficient inference.
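As a pointer to what knowledge distillation involves, here is a minimal sketch of the standard distillation loss: KL divergence between temperature-softened teacher and student distributions, scaled by T². The logit values and temperature are illustrative; this is not the repository's actual training code.

```python
import math

# Minimal sketch of the knowledge-distillation loss:
# T^2 * KL(teacher || student) at temperature T.
# Logits and temperature here are illustrative.

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl                # T^2 keeps gradient scale comparable
```

A full training loop typically mixes this soft-target loss with the ordinary cross-entropy on ground-truth labels.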
Easy-to-use LLM quantization package with GPTQ algorithm. 4-bit quantization for efficient deployment.
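The core idea behind low-bit formats like the ones above (GPTQ, GGUF K-quants) can be shown with a toy round-trip: symmetric 4-bit blockwise quantization, where each block of weights shares one scale and each weight is stored as a small integer. The block size and scheme here are simplified assumptions; real formats add per-block minimums, error compensation, and packed storage.

```python
# Toy sketch of symmetric 4-bit blockwise quantization: each block stores
# one float scale plus small integers in [-7, 7]. Scheme is illustrative.

def quantize_block(weights, levels=7):
    """Map floats to ints in [-levels, levels] with a per-block scale."""
    scale = max(abs(w) for w in weights) / levels or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    """Recover approximate floats from the stored integers."""
    return [qi * scale for qi in q]

block = [0.12, -0.5, 0.33, 0.07]
q, s = quantize_block(block)
restored = dequantize_block(q, s)
# each restored value is within half a quantization step of the original
```

Methods like GPTQ go further by choosing the integers to minimize the layer's output error rather than rounding each weight independently.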
Repository listings for learning projects and experiments will appear here.
Listings for all other repositories will appear here.