Creating, evaluating, and refining prompts is a core activity for AI engineers. Small changes to a prompt can have big impacts on your application’s behavior. W&B Weave lets you create prompts, publish them, and evolve them over time.

This page covers how to create prompt objects and publish them. For referencing, retrieving, and managing versions of published prompts, see Store and track versions of prompts.

If your prompt needs are simple, you can use the built-in weave.StringPrompt or weave.MessagesPrompt classes. If your needs are more complex, you can subclass those classes or the base class weave.Prompt and override the format method. When you publish a prompt with weave.publish, it appears in your Weave project on the Prompts page.

StringPrompt

StringPrompt logs single-string prompts that you might use for system messages, user queries, or any standalone text input to an LLM. We recommend using StringPrompt to manage individual prompt strings that don’t require the complexity of multi-message conversations.
import weave
weave.init('intro-example')

# Create the prompt and publish it to your project's Prompts page.
system_prompt = weave.StringPrompt("You speak like a pirate")
weave.publish(system_prompt, name="pirate_prompt")

from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[
    {
      "role": "system",
      "content": system_prompt.format()
    },
    {
      "role": "user",
      "content": "Explain general relativity in one paragraph."
    }
  ],
)

MessagesPrompt

MessagesPrompt allows you to log multi-turn conversations and chat-based prompts. It stores an array of message objects (with roles like “system”, “user”, and “assistant”) that represent a complete conversation flow. We recommend using this for chat-based LLMs where you need to maintain context across multiple messages, define specific conversation patterns, or create reusable conversation templates.
import weave
weave.init('intro-example')

# Define a multi-turn conversation template.
prompt = weave.MessagesPrompt([
    {
        "role": "system",
        "content": "You are a stegosaurus, but don't be too obvious about it."
    },
    {
        "role": "user",
        "content": "What's good to eat around here?"
    }
])
weave.publish(prompt, name="dino_prompt")

from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
  model="gpt-4o",
  messages=prompt.format(),  # returns the complete list of messages
)

Parameterizing prompts

Both StringPrompt and MessagesPrompt support dynamic content through parameterization. This allows you to create flexible, reusable prompt templates with placeholders (using {variable} syntax) that can be filled with different values at runtime. This is useful for building scalable applications where prompts need to adapt to different inputs, user data, or contexts while maintaining a consistent structure. The format() method accepts key-value pairs to replace these placeholders with actual values.
import weave
weave.init('intro-example')

# {equation} is a placeholder, filled in when format() is called.
prompt = weave.StringPrompt("Solve the equation {equation}")
weave.publish(prompt, name="calculator_prompt")

from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[
    {
      "role": "user",
      "content": prompt.format(equation="1 + 1 = ?")
    }
  ],
)
This also works with MessagesPrompt.
import weave
weave.init('intro-example')

prompt = weave.MessagesPrompt([
    {
        "role": "system",
        "content": "You will be provided with a description of a scene and your task is to provide a single word that best describes an associated emotion."
    },
    {
        "role": "user",
        "content": "{scene}"
    }
])
weave.publish(prompt, name="emotion_prompt")

from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
  model="gpt-4o",
  messages=prompt.format(scene="A dog is lying on a dock next to a fisherman."),
)