Complete fine-tuning recipe: LoRA on Phi-3.5-mini via Unsloth, targeting a single A100 80GB, with data mix and eval plan.
Complete fine-tuning recipe: LoRA on DeepSeek-V3 base via LitGPT, targeting a multi-node H100 cluster (the 671B MoE base weights exceed any single-GPU budget, even 4-bit quantized), with data mix and eval plan.
Complete fine-tuning recipe: QLoRA (4-bit) on Mixtral 8x7B via torchtune, targeting a single A100 80GB, with data mix and eval plan.
Complete fine-tuning recipe: QLoRA (4-bit) on Yi 1.5 34B via Unsloth, targeting a single H100 80GB, with data mix and eval plan.
Complete fine-tuning recipe: QLoRA (4-bit) on Llama 3.3 70B via OpenRLHF, targeting a single H100 80GB, with data mix and eval plan.
Complete fine-tuning recipe: QLoRA (4-bit) on Llama 3.1 70B via DeepSpeed, targeting 2x A100 80GB, with data mix and eval plan.
Complete fine-tuning recipe: LoRA on Mistral Small 3 via Hugging Face TRL, targeting 2x A100 80GB, with data mix and eval plan.
Complete fine-tuning recipe: LoRA on Mistral Nemo 12B via Megatron-LM, targeting 2x A100 80GB, with data mix and eval plan.
Complete fine-tuning recipe: LoRA on Qwen 2.5 32B via DeepSpeed, targeting 4x A100 40GB, with data mix and eval plan.
Complete fine-tuning recipe: LoRA on Qwen 2.5-Coder 7B via Hugging Face TRL, targeting 4x A100 40GB, with data mix and eval plan.
Complete fine-tuning recipe: LoRA on Gemma 2 27B via Megatron-LM, targeting 8x H100, with data mix and eval plan.
Complete fine-tuning recipe: LoRA on Phi-3.5-mini via FSDP, targeting 8x H100, with data mix and eval plan.
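Whether a given model/hardware pairing above calls for plain LoRA or 4-bit QLoRA mostly comes down to whether the frozen base weights fit in aggregate VRAM. A back-of-envelope sketch (a hypothetical helper, not part of any of the frameworks named above; it counts base weights only and ignores activations, KV cache, and adapter optimizer state):

```python
def weight_vram_gb(n_params_b: float, bits: int) -> float:
    """VRAM for base-model weights: params (in billions) * bits / 8 bytes each."""
    return n_params_b * 1e9 * bits / 8 / 1e9

def fits(n_params_b: float, bits: int, budget_gb: float, overhead: float = 1.2) -> bool:
    """True if the weights, padded by a safety factor for runtime overhead,
    fit the total GPU memory budget."""
    return weight_vram_gb(n_params_b, bits) * overhead <= budget_gb

# Llama 3.3 70B in bf16 needs ~140 GB for weights alone -> not on one 80 GB GPU
print(fits(70, 16, 80))    # False
# The same model 4-bit quantized (QLoRA) is ~35 GB -> fits with headroom
print(fits(70, 4, 80))     # True
# Mixtral 8x7B has ~46.7B total params: bf16 ~93 GB > 80 GB, 4-bit ~23 GB
print(fits(46.7, 16, 80), fits(46.7, 4, 80))  # False True
```

By this estimate the 7B-27B entries fit their stated budgets in bf16, while the 34B-70B single-GPU entries only clear an 80 GB card after 4-bit quantization.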