โ† Back to Home

๐Ÿ“ Portfolio Index

Complete listing of all my GitHub repositories and projects

🚀 Featured Projects

jimmys-tool-stacks-portfolio

This repository: a comprehensive portfolio index of my tools, tech stacks, frameworks, and projects.

๐Ÿ“ Documentation โญ Portfolio

🎙️ Voice & Audio AI Research

Edge-based audio intelligence: Privacy-first transcription, text-to-speech, and acoustic modeling without cloud dependencies.

👂

Whisper Edge Deployment

Optimizing OpenAI Whisper for browser and mobile. Local speech-to-text using ONNX Runtime WebGPU. 100% private, zero server cost.

🎯 View Demo
Audio ONNX WebGPU Privacy
🗣️

Piper TTS Edge

Fast, local text-to-speech engine. Optimized for low-power devices and browsers. High-quality neural voices without an internet connection.

TTS C++ WASM

🔬 AI Research & Agent Systems

Cutting-edge research implementations in evolutionary AI, multi-agent coordination, and scaling principles.

🧬

Phylogenic AI Agents

v1.0.0

Genetic evolution for AI personalities. Self-optimizing agents with liquid memory, ML analytics, and evolutionary optimization. Beyond prompt engineering.

📖 Documentation 🎯 Demo
Python Machine Learning Genetic Algorithms Neural Networks Multi-Agent Systems LLM
⭐ 1 star 📊 55 commits 🏷️ v1.0.0
📈

Agent Scaling Laws

v1.0.0

Implementation of agent coordination architectures and scaling principles from "Towards a Science of Scaling Agent Systems" (arXiv:2512.08296). Research-backed multi-agent framework with benchmarks.

📖 Documentation 🎯 Demo
Python Distributed Systems Research arXiv MAS Coordination
⭐ 1 star 📊 17 commits 📚 arXiv:2512.08296
🧠

AdaptiveMind

v1.0.0

Intelligent AI routing & context engine with 3-tier routing (BitNet → LFM → Cloud). Validated against agent scaling laws (arXiv:2512.08296). Features architecture selection, error amplification detection, and cost optimization for multi-agent workflows.

📖 GitHub ✅ Benchmarked
Python FastAPI 3-Tier Routing BitNet LFM Multi-Agent

📊 Benchmark Results: Efficiency: 1.000 (100% of target), Error Amplification: 1.00x (perfect), Success Rate: 100%
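A tiered router of this shape can be sketched in a few lines: send each query to the cheapest tier whose capability covers it, escalating otherwise. The capability scores and per-call costs below are invented for illustration; this is not AdaptiveMind's actual routing logic:

```python
# Hypothetical tier table: (name, max query complexity handled, cost per call).
# Order mirrors the BitNet -> LFM -> Cloud escalation; numbers are invented.
TIERS = [
    ("BitNet", 0.3, 0.001),
    ("LFM",    0.7, 0.010),
    ("Cloud",  1.0, 0.100),
]

def route(complexity):
    """Return the cheapest tier whose capability covers the query."""
    for name, capability, cost in TIERS:
        if complexity <= capability:
            return name, cost
    name, _, cost = TIERS[-1]   # fall back to the most capable tier
    return name, cost

def batch_cost(complexities):
    # Total spend for a workload under this routing policy.
    return sum(route(c)[1] for c in complexities)
```

The cost win comes from the workload mix: if most queries are simple, most calls land on the cheap tier and the expensive cloud model only sees the hard tail.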

🎯 Key Research Areas

🧬 Evolutionary AI

Genetic algorithms for agent personality optimization

📊 Scaling Laws

Coordination overhead in multi-agent systems

🧠 Liquid Memory

Self-optimizing neural network architectures

🔄 Behavioral ML

Predictive analytics for agent behavior

🛠️ Tools & Utilities

Repository listings for custom tools and utilities will appear here.

📌 Template: Add repositories related to custom tools, CLI utilities, scripts, and automation projects.

💻 Tech Stack Implementations

Repository listings for tech stack implementations will appear here.

📌 Template: Add repositories demonstrating various technology stacks (MERN, MEAN, JAMstack, etc.).

🏗️ Framework Examples

Repository listings for framework implementations will appear here.

📌 Template: Add repositories showcasing framework implementations (React, Vue, Angular, Django, etc.).

🧠 LLM Fine-Tuning & Quantization

Notebooks, tools, and resources for LLM fine-tuning, distillation, GGUF quantization, and model optimization.

🔧 LLM-tuning-tools

ABE-41M training notebooks and fine-tuning tools. Includes advanced training configurations and unified training pipeline.

📓 Jupyter Notebook 🔒 Private 🐍 Python

🛡️ llm-poison-detector

LLM security tooling for detecting poisoned models and training data. Adversarial robustness testing.

🔒 Private 🛡️ Security

💧 Liquid AI Cookbook (LFM2)

Official Liquid Foundation Models cookbook. Fine-tune LFM2, deploy to iOS/Android/Edge, local agentic workflows, and LEAP SDK tutorials.

โญ 632 ๐Ÿ”€ 85 forks ๐Ÿ“ฑ Edge AI

๐Ÿฆ™ LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). Supports LoRA, QLoRA, PEFT, DPO, and more.

⭐ 64.2k 🔀 7.8k forks 🏷️ fine-tuning

📦 llama.cpp (GGUF)

LLM inference in C/C++. The definitive GGUF quantization toolkit. Supports 2-8 bit quantization for efficient deployment.

โญ 70k+ ๐Ÿ“ฆ GGUF โšก Inference

🦥 Unsloth

Fine-tune Llama 3.3, Mistral, Phi-4, and Gemma 2-5x faster with 80% less memory. Free Colab notebooks included.

⭐ 20k+ 🚀 5x Faster 📓 Notebooks

🦎 Axolotl

Go-to fine-tuning framework. Streamlined config-based training for LLaMA, Mistral, Falcon, and more.

โญ 8k+ โš™๏ธ Config-based ๐Ÿ”ง MLOps

๐Ÿ”ฌ LMOps (Microsoft)

Research code for language model operations including knowledge distillation, model compression, and efficient inference.

โญ 3k+ ๐Ÿงช Research ๐Ÿ“š Distillation

⚡ AutoGPTQ

Easy-to-use LLM quantization package with GPTQ algorithm. 4-bit quantization for efficient deployment.

โญ 4k+ ๐Ÿ”ข 4-bit ๐ŸŽฏ GPTQ

๐Ÿ““ Key Notebooks & Resources

📚 Learning & Experiments

Repository listings for learning projects and experiments will appear here.

📌 Template: Add repositories for tutorials, practice projects, and experimental code.

🎯 All Other Projects

Listings for all other repositories will appear here.

📌 Template: Add any other repositories that don't fit the categories above.