meta / codellama-34b-instruct

A 34-billion-parameter Llama model fine-tuned for coding and conversation

Run time and cost

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 24 seconds.

Readme

CodeLlama is a family of Llama 2 models fine-tuned for coding. This is CodeLlama-34b-instruct, a 34-billion-parameter Llama model tuned for chatting about code.
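
As a minimal sketch, the model can be invoked through Replicate's Python client with `replicate.run`, passing the model identifier from this page. The prompt text and the `max_new_tokens` and `temperature` input fields below are assumptions for illustration; check the model's input schema on Replicate for the exact parameters it accepts, and make sure `REPLICATE_API_TOKEN` is set in your environment.

```python
import replicate

# Assumed input fields for illustration; the model's actual schema may differ.
output = replicate.run(
    "meta/codellama-34b-instruct",
    input={
        "prompt": "Write a Python function that checks whether a string is a palindrome.",
        "max_new_tokens": 256,  # assumed parameter name
        "temperature": 0.2,     # assumed parameter name
    },
)

# The client typically yields the generated text as string chunks, so join them.
print("".join(output))
```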