# Prompt Engineering with AgentB
Prompt engineering is the art and science of crafting effective inputs (prompts) to elicit desired responses from Large Language Models (LLMs). In AgentB, prompts are constructed at several levels, primarily through System Prompts and the dynamic inclusion of Tool Definitions.
## 1. System Prompts
The System Prompt is a crucial instruction given to the LLM at the beginning of a conversation (or as a persistent instruction). It sets the context, persona, role, and overall behavior guidelines for the agent.
### How AgentB Uses System Prompts
- **`AgentRunConfig.systemPrompt`**: You can specify a custom system prompt for each agent run via the `AgentRunConfig`. This is the most direct way to control an agent's persona.
- **Generated by `ApiInteractionManager`**: If no `systemPrompt` is explicitly provided in `AgentRunConfig`, the `ApiInteractionManager` (AIM) often generates a default one based on the operational `mode` and the tools available to the agent.
  - **`genericOpenApi` mode**: AIM uses `generateGenericHttpToolSystemPrompt`. This prompt:
    - Introduces the API the agent can interact with (title, version, base URL).
    - Explains how to use the `genericHttpRequest` tool (if available).
    - Lists all specific operation-based tools derived from the OpenAPI spec, including their names, descriptions, and parameters.
  - **`hierarchicalPlanner` mode**: AIM typically uses `DEFAULT_PLANNER_SYSTEM_PROMPT` (or a similar prompt generated by `generateRouterSystemPrompt`, a holdover from older router concepts). This prompt instructs the `PlanningAgent` on:
    - Its role as a master planner/orchestrator.
    - How to break down tasks.
    - How to use the `DelegateToSpecialistTool` (including its parameters: `specialistId`, `subTaskDescription`).
    - A list of available "specialists" (derived from the `IToolSet`s managed by the `ToolsetOrchestrator`), including their IDs, names, and descriptions/capabilities.
- **Worker/Specialist Agent System Prompts**: When the `DelegateToSpecialistTool` invokes a worker agent, it constructs a system prompt for that worker. This prompt is generated using `generateToolsSystemPrompt` and includes:
  - The name and description of the `IToolSet` (specialist) it embodies.
  - Details of the specific tools from that `IToolSet` only.
  - The `subTaskDescription` given by the planner is usually sent as the first `user` message to this worker agent.
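From the worker's perspective, the handoff reduces to those two parameters. As a sketch, a planner's delegation call might carry arguments like the following (the specialist ID and task text are invented for illustration; AgentB's exact argument schema may differ):

```typescript
// Illustrative arguments a PlanningAgent might pass to DelegateToSpecialistTool.
// "flight-booking" is a made-up specialist ID for the sake of the example.
const delegateArgs = {
  specialistId: "flight-booking", // ID of an available IToolSet/specialist
  subTaskDescription:
    "Search one-way flights from SFO to JFK on 2025-03-01 and return the three cheapest options.",
};
```

The `subTaskDescription` then becomes the worker's opening `user` message, so it should be written as a self-contained task, not a fragment of the planner's private reasoning.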
### Tips for Writing Effective System Prompts

- **Be Clear and Specific:** Tell the LLM exactly what you want it to do and how it should behave.
- **Define Persona/Role:** "You are a helpful flight booking assistant."
- **Set Constraints:** "Only answer questions related to booking flights." "Always respond in JSON format."
- **Provide Context:** If interacting with an API, briefly mention the API's purpose.
- **Instruct on Tool Usage:** Crucially, explain how to use available tools if the default generated prompts aren't sufficient or if you have very custom tools. The generated prompts for API tools are usually quite good.
- **Few-Shot Examples (Advanced):** For complex behavior, you can include a few examples of user queries and desired agent responses (including tool calls) directly within the system prompt.
- **Iterate:** Prompt engineering is often an iterative process. Test your prompts and refine them based on the LLM's output.
- **`businessContextText`**: The `businessContextText` option in `ApiInteractionManagerOptions` or `OpenAPIConnectorOptions` is appended to generated system prompts, allowing you to add global guidelines or API-specific notes.
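Putting the tips above into practice, a run configuration might be assembled like this minimal sketch. `AgentRunConfigSketch` and `buildSystemPrompt` are illustrative stand-ins, not AgentB's real types or helpers; only the `systemPrompt` and `temperature` field names come from this guide:

```typescript
// Sketch only: an illustrative stand-in for AgentB's AgentRunConfig type.
interface AgentRunConfigSketch {
  systemPrompt?: string;
  temperature?: number;
}

// Compose a system prompt from a persona, constraints, and API context,
// following the tips above (hypothetical helper, not part of AgentB).
function buildSystemPrompt(
  persona: string,
  constraints: string[],
  context: string
): string {
  return [persona, "Constraints:", ...constraints.map((c) => `- ${c}`), context].join("\n");
}

const runConfig: AgentRunConfigSketch = {
  systemPrompt: buildSystemPrompt(
    "You are a helpful flight booking assistant.",
    [
      "Only answer questions related to booking flights.",
      "Always respond in JSON format.",
    ],
    "You interact with a flight booking API to search and reserve flights."
  ),
  temperature: 0.2, // lower temperature tends to give more consistent tool calls
};
```

Keeping persona, constraints, and context as separate pieces makes the prompt easy to iterate on one element at a time.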
## 2. Tool Definitions (`IToolDefinition`)

The `description` and `parameters` (including their individual `description` and `schema`) within your `IToolDefinition` are critical parts of the prompt seen by the LLM.
- **Tool `description`:**
  - Should clearly explain what the tool does and when it should be used.
  - Example: "Fetches the current weather forecast for a specified location."
  - The LLM uses this to decide if your tool is relevant to the user's query.
- **Parameter `description`:**
  - Should explain what each parameter represents.
  - Example (for a `location` parameter): "The city and state, e.g., San Francisco, CA."
  - The LLM uses this to understand what information it needs to extract from the user's query to fill in the arguments for your tool.
- **Parameter `schema` (JSON Schema):**
  - While not directly "natural language" for the LLM, the structure, `type`, `enum` (for allowed values), `required` fields, etc., in the JSON schema significantly influence how the LLM formats the `arguments` string for the tool call.
  - AgentB adapters (like `OpenAIAdapter`) convert `IToolDefinition`s (including these parameter schemas) into the LLM provider's native function/tool calling format.
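Concretely, a weather-forecast tool definition might look like the following sketch. The object shape is illustrative (AgentB's actual `IToolDefinition` fields may differ), but the `description`, `enum`, and `required` pieces are exactly what the adapter forwards to the provider:

```typescript
// A sketch of an IToolDefinition-style object; field names are assumptions.
const getWeatherForecast = {
  name: "getWeatherForecast",
  description: "Fetches the current weather forecast for a specified location.",
  parameters: {
    type: "object",
    properties: {
      location: {
        type: "string",
        description: "The city and state, e.g., San Francisco, CA.",
      },
      unit: {
        type: "string",
        enum: ["celsius", "fahrenheit"],
        default: "celsius",
        description: "The temperature unit.",
      },
    },
    required: ["location"],
  },
};

// How an adapter might map it into OpenAI's tool-calling format (illustrative):
const openAiTool = {
  type: "function",
  function: {
    name: getWeatherForecast.name,
    description: getWeatherForecast.description,
    parameters: getWeatherForecast.parameters,
  },
};
```

Because the JSON Schema travels verbatim, tightening an `enum` or marking a field `required` here directly constrains the arguments the LLM will emit.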
**Example of how tool definitions become part of the "prompt":**

When `BaseAgent` prepares to call the LLM, it might effectively tell the LLM the following (this is a conceptual representation; the actual format is provider-specific):

```
System: You are a helpful assistant. You have access to the following tools:

Tool Name: getWeatherForecast
Description: Fetches the current weather forecast for a specified location.
Parameters:
- location (string, required): The city and state, e.g., San Francisco, CA.
- unit (string, optional, enum: "celsius", "fahrenheit", default: "celsius"): The temperature unit.

User: What's the weather in London?
```

The LLM sees this and knows it can call `getWeatherForecast` with `{"location": "London"}`.
## 3. Conversation History (`LLMMessage[]`)

The history of `user`, `assistant`, and `tool` messages also forms a critical part of the prompt for the current turn.

- The `ContextManager` prepares this history.
- The LLM uses this history to understand the ongoing conversation, maintain context, and respond relevantly.
- Follow-up questions or clarifications heavily depend on this history.
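For illustration, here is roughly what the history array might contain on a follow-up turn. The `LLMMessage` type below is a simplified sketch (AgentB's real type carries more fields, such as tool-call metadata on the assistant message):

```typescript
// Simplified sketch of an LLMMessage union; AgentB's actual type may differ.
type LLMMessage =
  | { role: "system" | "user" | "assistant"; content: string }
  | { role: "tool"; content: string; tool_call_id: string };

const history: LLMMessage[] = [
  { role: "system", content: "You are a helpful weather assistant." },
  { role: "user", content: "What's the weather in London?" },
  // The tool result from the previous turn stays in context
  // (the assistant's tool-call message is omitted here for brevity):
  { role: "tool", content: '{"tempC": 14, "summary": "cloudy"}', tool_call_id: "call_1" },
  { role: "assistant", content: "It's currently 14 °C and cloudy in London." },
  { role: "user", content: "And tomorrow?" }, // only resolvable via prior context
];
```

The final "And tomorrow?" is meaningless on its own; the LLM can only interpret it because the earlier `user`, `tool`, and `assistant` messages are still in the prompt.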
## 4. Prompt Builders in AgentB

AgentB provides utility functions in `src/llm/prompt-builder.ts` to help construct standardized system prompts for common scenarios:

- `generateToolsSystemPrompt(toolSetName, toolSetDescription, toolDefinitions, ...)`:
  - For agents that are given a specific set of tools (e.g., worker agents in `hierarchicalPlanner` mode).
  - Lists each tool, its description, and its parameters.
- `generateGenericHttpToolSystemPrompt(operations, apiInfo, baseUrl, ...)`:
  - For agents using the `GenericHttpApiTool` with an `OpenAPIConnector`.
  - Lists all API operations from the spec and explains how to use `genericHttpRequest` to call them.
- `generateRouterSystemPrompt(availableToolSets, apiInfo, ...)` / `DEFAULT_PLANNER_SYSTEM_PROMPT`:
  - For `PlanningAgent`s.
  - Explains their role as a planner/delegator.
  - Details how to use the `DelegateToSpecialistTool`.
  - Dynamically lists available "specialists" (`IToolSet`s) with their IDs, names, and capabilities.

These builders are used by `ApiInteractionManager` to set up appropriate system prompts for different operational modes. You can use them as inspiration or starting points if you're crafting highly custom system prompts.
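To give a feel for what such a builder produces, here is a deliberately minimal stand-in for `generateToolsSystemPrompt`. Its signature loosely mirrors the one listed above, but the function body and output format are invented for illustration; consult `src/llm/prompt-builder.ts` for the real behavior:

```typescript
// Illustrative stand-in for generateToolsSystemPrompt; the real builder's
// output format differs. ToolDefSketch is a stripped-down tool definition.
interface ToolDefSketch {
  name: string;
  description: string;
}

function sketchToolsSystemPrompt(
  toolSetName: string,
  toolSetDescription: string,
  tools: ToolDefSketch[]
): string {
  const toolLines = tools.map((t) => `- ${t.name}: ${t.description}`).join("\n");
  return (
    `You are the "${toolSetName}" specialist. ${toolSetDescription}\n` +
    `You have access to the following tools:\n${toolLines}`
  );
}

const prompt = sketchToolsSystemPrompt("weather", "Answers weather questions.", [
  { name: "getWeatherForecast", description: "Fetches the forecast for a location." },
]);
```

The key idea is the same as in the real builders: the specialist's identity and its tool list are rendered into plain text that becomes the worker's system prompt.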
## Iteration and Testing
Effective prompt engineering requires experimentation.
- **Start with Clear Descriptions:** Ensure your tool and parameter descriptions are unambiguous and accurately reflect their purpose.
- **Test with Various Inputs:** See how the LLM interprets different user queries and whether it chooses the correct tools with the right arguments.
- **Inspect `AgentEvent`s:**
  - `thread.message.created` (for the LLM's request to the agent)
  - `thread.message.completed` (for the assistant's message, especially `metadata.tool_calls`)

  These will show you exactly what the LLM decided to do based on your prompts and tool definitions.
- **Adjust and Refine:** If the LLM isn't behaving as expected, refine your system prompt, tool descriptions, or parameter descriptions. Sometimes, making a description slightly more or less specific can have a big impact.
- **Consider `temperature`:** For tasks requiring precise tool usage, a lower `temperature` (e.g., 0.0 to 0.3) in `AgentRunConfig` might yield more consistent tool calls.
By mastering system prompts and tool definitions, you gain significant control over how your AgentB agents reason, decide, and act.