End-to-end recipe: LoRA fine-tuning of Llama 3.1 8B via Unsloth, targeting 8x H100, with data mix and eval plan.
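A minimal sketch of this setup using Unsloth's FastLanguageModel with TRL's SFTTrainer. The checkpoint id, data path, and hyperparameters are assumptions, not values from this list; also note that Unsloth's open-source build has historically been single-GPU, so on an 8x H100 node you would typically run one process per device or hand multi-GPU scaling to another launcher.

```python
from unsloth import FastLanguageModel  # import unsloth first so its patches apply
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the base model and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",  # assumed hub id
    max_seq_length=4096,
    load_in_4bit=False,  # plain LoRA in bf16; H100s have the headroom
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

dataset = load_dataset("json", data_files="data/mix.jsonl", split="train")  # assumed mix file

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # recent TRL versions name this processing_class
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="outputs/llama31-8b-lora",
        dataset_text_field="text",  # assumes the mix stores plain text rows
        per_device_train_batch_size=8,
        gradient_accumulation_steps=2,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```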
End-to-end recipe: LoRA fine-tuning of Llama 3.1 8B via PyTorch FSDP, targeting AWS g5.12xlarge (4x A10G 24GB), with data mix and eval plan.
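The 8B model's bf16 weights are about 16 GB, so FSDP sharding across the four 24 GB A10Gs leaves room for LoRA gradients and activations. A sketch under assumed paths and hyperparameters, launched with `torchrun --nproc_per_node=4` (script name is yours to choose):

```python
import os

import torch
import torch.distributed as dist
from peft import LoraConfig, get_peft_model
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from transformers import AutoModelForCausalLM

dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",  # assumed hub id
    torch_dtype=torch.bfloat16,
)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))
model = FSDP(
    model,
    mixed_precision=MixedPrecision(param_dtype=torch.bfloat16),
    use_orig_params=True,  # lets frozen base weights and trainable LoRA params coexist
    device_id=local_rank,
)  # a production run would add an auto_wrap_policy per transformer block

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-4,
)
# training loop over the tokenized data mix goes here (elided)
```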
End-to-end recipe: LoRA fine-tuning of Llama 3.1 8B via Unsloth, targeting 2x A100 80GB, with data mix and eval plan.
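The training side repeats the Unsloth sketch above (with the same single-GPU caveat), so here is the "data mix" half instead: interleaving sources at fixed proportions with Hugging Face `datasets`. File names and the 50/30/20 split are placeholders, not a recommendation.

```python
from datasets import interleave_datasets, load_dataset

# Three assumed source files; substitute your own instruction/chat/code sets.
instruct = load_dataset("json", data_files="data/instruct.jsonl", split="train")
chat = load_dataset("json", data_files="data/chat.jsonl", split="train")
code = load_dataset("json", data_files="data/code.jsonl", split="train")

# Sample from the three sources at fixed probabilities until all are exhausted.
mix = interleave_datasets(
    [instruct, chat, code],
    probabilities=[0.5, 0.3, 0.2],  # assumed mix ratios
    seed=42,
    stopping_strategy="all_exhausted",
)
mix.to_json("data/mix.jsonl")  # the file the training sketches read
```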
End-to-end recipe: LoRA fine-tuning of Mixtral 8x22B via LLaMA-Factory, targeting 8x A100 80GB (the ~141B-parameter base occupies roughly 280 GB in bf16, so a multi-GPU node is the minimum viable target), with data mix and eval plan.
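LLaMA-Factory is driven by a YAML config plus its `llamafactory-cli` entry point. A sketch that writes the config from Python and launches training; the keys mirror LLaMA-Factory's published SFT examples, while the dataset name, template, and paths are assumptions to replace with your own.

```python
import subprocess

import yaml

config = {
    "model_name_or_path": "mistralai/Mixtral-8x22B-v0.1",  # assumed hub id
    "stage": "sft",
    "do_train": True,
    "finetuning_type": "lora",
    "lora_rank": 16,
    "lora_target": "all",
    "dataset": "alpaca_en_demo",  # placeholder; register your data mix instead
    "template": "mistral",
    "cutoff_len": 4096,
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1.0e-4,
    "num_train_epochs": 1.0,
    "bf16": True,
    "output_dir": "saves/mixtral-8x22b-lora",
}
with open("mixtral_lora.yaml", "w") as f:
    yaml.safe_dump(config, f)

subprocess.run(["llamafactory-cli", "train", "mixtral_lora.yaml"], check=True)
```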
End-to-end recipe: LoRA fine-tuning of Mixtral 8x22B via OpenRLHF, targeting a Lambda Labs 8x H100 node, with data mix and eval plan.
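OpenRLHF's SFT entry point is launched through DeepSpeed. The flag names below follow OpenRLHF's README examples but should be treated as assumptions; verify them against `python -m openrlhf.cli.train_sft --help` before relying on this sketch. Dataset path and record keys are placeholders.

```python
import subprocess

cmd = [
    "deepspeed", "--num_gpus", "8",
    "--module", "openrlhf.cli.train_sft",
    "--pretrain", "mistralai/Mixtral-8x22B-v0.1",  # assumed hub id
    "--dataset", "data/mix.jsonl",                 # assumed local data mix
    "--input_key", "prompt",                       # assumed record schema
    "--output_key", "response",
    "--lora_rank", "16",
    "--zero_stage", "3",                           # shard 282 GB of bf16 weights
    "--bf16",
    "--max_len", "4096",
    "--micro_train_batch_size", "1",
    "--train_batch_size", "64",
    "--learning_rate", "1e-4",
    "--save_path", "ckpt/mixtral-8x22b-lora",
]
subprocess.run(cmd, check=True)
```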
End-to-end recipe: LoRA fine-tuning of Phi-4 via Axolotl, targeting 4x A100 40GB, with data mix and eval plan.
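Axolotl is also YAML-driven. A sketch that emits a config and launches it on four GPUs via `accelerate`; the keys mirror Axolotl's example configs, while the model id, dataset path, and `type` are assumptions.

```python
import subprocess

import yaml

config = {
    "base_model": "microsoft/phi-4",  # assumed hub id
    "adapter": "lora",
    "lora_r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "lora_target_linear": True,  # target all linear layers
    "sequence_len": 4096,
    "micro_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "num_epochs": 1,
    "learning_rate": 2.0e-4,
    "bf16": True,
    "gradient_checkpointing": True,
    "datasets": [{"path": "data/mix.jsonl", "type": "alpaca"}],  # assumed format
    "output_dir": "./outputs/phi4-lora",
}
with open("phi4_lora.yml", "w") as f:
    yaml.safe_dump(config, f)

subprocess.run(
    ["accelerate", "launch", "--num_processes", "4",
     "-m", "axolotl.cli.train", "phi4_lora.yml"],
    check=True,
)
```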
End-to-end recipe: LoRA fine-tuning of Mixtral 8x22B via torchtune, targeting 8x H100, with data mix and eval plan.
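torchtune runs its distributed LoRA recipe from the `tune` CLI with a YAML config and `key=value` overrides. Its built-in configs cover specific model families, and I am not certain Mixtral 8x22B is among them, so the config path below is an assumed custom config you would have to supply (possibly with a custom model component).

```python
import subprocess

subprocess.run(
    ["tune", "run", "--nproc_per_node", "8",
     "lora_finetune_distributed",
     "--config", "configs/mixtral_8x22b_lora.yaml",  # assumed custom config
     "batch_size=1",                                 # CLI-style config overrides
     "gradient_accumulation_steps=8"],
    check=True,
)
```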
End-to-end recipe: LoRA fine-tuning of Phi-4 via DeepSpeed, targeting 2x RTX 4090 (24GB each), with data mix and eval plan.
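Phi-4 is roughly 14B parameters, so its bf16 weights (~28 GB) exceed one 24 GB card; ZeRO-3 shards them across both GPUs and offloads parameters to CPU. A sketch wiring DeepSpeed through the Hugging Face Trainer, launched with `deepspeed --num_gpus=2` on a script of your naming; paths and hyperparameters are assumptions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# ZeRO-3 with CPU parameter offload to squeeze ~14B params into 2x 24 GB.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 3, "offload_param": {"device": "cpu"}},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

tok = AutoTokenizer.from_pretrained("microsoft/phi-4")  # assumed hub id
if tok.pad_token is None:
    tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-4", torch_dtype=torch.bfloat16)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules="all-linear", task_type="CAUSAL_LM",
))

raw = load_dataset("json", data_files="data/mix.jsonl", split="train")  # assumed mix
train = raw.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
                remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="outputs/phi4-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
        bf16=True,
        gradient_checkpointing=True,
        deepspeed=ds_config,  # accepts a dict or a path to a JSON file
    ),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```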
End-to-end recipe: LoRA fine-tuning of Phi-4 via Unsloth, targeting a single H100 80GB, with data mix and eval plan.
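Training here mirrors the Unsloth sketch after the first item (swap in the Phi-4 checkpoint), and a single 80 GB card is Unsloth's home turf. What each of these recipes also needs is the eval half of the plan; a minimal sketch using EleutherAI's lm-evaluation-harness, where the task list, adapter path, and checkpoint id are assumptions:

```python
import subprocess

# Score the tuned LoRA adapter on a small benchmark suite. The `peft=`
# model arg loads the adapter on top of the base checkpoint.
subprocess.run([
    "lm_eval",
    "--model", "hf",
    "--model_args",
    "pretrained=microsoft/phi-4,peft=outputs/phi4-lora,dtype=bfloat16",
    "--tasks", "mmlu,gsm8k,ifeval",  # assumed eval suite; match it to your use case
    "--batch_size", "auto",
    "--output_path", "results/phi4-lora",
], check=True)
```

Comparing the same suite on the untuned base model gives the before/after delta the eval plan is meant to capture.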
End-to-end recipe: LoRA fine-tuning of Gemma 2 9B via Hugging Face TRL, targeting 8x H100, with data mix and eval plan.
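With TRL alone, SFTTrainer accepts a PEFT `LoraConfig` directly and data parallelism comes from the launcher, e.g. `accelerate launch --num_processes 8` on a script of your naming. Checkpoint id, data path, and hyperparameters are assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="data/mix.jsonl", split="train")  # assumed mix

trainer = SFTTrainer(
    model="google/gemma-2-9b",  # assumed hub id; TRL loads it for you
    train_dataset=dataset,
    peft_config=LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    ),
    args=SFTConfig(
        output_dir="outputs/gemma2-9b-lora",
        dataset_text_field="text",  # assumes plain-text rows in the mix
        per_device_train_batch_size=4,
        gradient_accumulation_steps=2,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()
```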
End-to-end recipe: QLoRA (4-bit) fine-tuning of Llama 3.3 70B via LitGPT, targeting a single A100 80GB (the NF4-quantized weights alone run about 35 GB, more than a 24 GB card can hold), with data mix and eval plan.
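LitGPT exposes LoRA finetuning as a CLI subcommand with nested `--train.*` and `--data.*` options. The subcommand and flags below follow LitGPT's README but should be checked against `litgpt finetune_lora --help`; the checkpoint id and data path are assumptions.

```python
import subprocess

subprocess.run([
    "litgpt", "finetune_lora",
    "meta-llama/Llama-3.3-70B-Instruct",  # assumed hub id; LitGPT downloads it
    "--quantize", "bnb.nf4",              # 4-bit NF4 base weights (QLoRA)
    "--precision", "bf16-true",
    "--train.micro_batch_size", "1",
    "--data", "JSON",                     # LitGPT's JSON data module
    "--data.json_path", "data/mix.jsonl", # assumed data mix file
    "--out_dir", "out/llama33-70b-qlora",
], check=True)
```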
End-to-end recipe: QLoRA (4-bit) fine-tuning of Llama 3.1 8B via LLaMA-Factory, targeting 2x RTX 4090, with data mix and eval plan.
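This is a small delta on the LLaMA-Factory sketch above: in its documented examples, `quantization_bit: 4` switches the base weights to bitsandbytes 4-bit loading, and setting `FORCE_TORCHRUN=1` is the README's way to launch across multiple GPUs. The hub id and output path are assumptions.

```python
import os
import subprocess

import yaml

with open("mixtral_lora.yaml") as f:  # reuse the config written earlier
    config = yaml.safe_load(f)

config.update({
    "model_name_or_path": "meta-llama/Llama-3.1-8B-Instruct",  # assumed hub id
    "template": "llama3",
    "quantization_bit": 4,  # NF4 base weights; 8B fits easily in 2x 24 GB
    "output_dir": "saves/llama31-8b-qlora",
})
with open("llama31_qlora.yaml", "w") as f:
    yaml.safe_dump(config, f)

subprocess.run(
    ["llamafactory-cli", "train", "llama31_qlora.yaml"],
    check=True,
    env={**os.environ, "FORCE_TORCHRUN": "1"},  # distributed launch over both GPUs
)
```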