Environment is the unified class for defining tools, connecting to services, and formatting for any LLM provider.

Environment

```python
from hud import Environment

env = Environment("my-env")
```

Constructor

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | Environment name | `"environment"` |
| `instructions` | `str \| None` | Description/instructions | `None` |
| `conflict_resolution` | `ConflictResolution` | How to handle tool name conflicts | `PREFIX` |

Context Manager

Environment must be used as an async context manager to connect:

```python
async with env:
    tools = env.as_openai_chat_tools()
    result = await env.call_tool("my_tool", arg="value")
```

Defining Tools

@env.tool()

Register functions as callable tools:

```python
import httpx

@env.tool()
def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a letter in text."""
    return text.lower().count(letter.lower())

@env.tool()
async def fetch_data(url: str) -> dict:
    """Fetch JSON data from a URL."""
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.json()
```
Tools are automatically documented from type hints and docstrings.
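As an illustration of what that documentation looks like, the sketch below derives an OpenAI-style tool schema from a function's type hints and docstring using only the standard library. This is not HUD's actual extraction code, just the general technique:

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON-schema type names (subset, for illustration).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def describe_tool(fn):
    """Build a JSON-schema-style tool description from hints and docstring."""
    hints = get_type_hints(fn)
    hints.pop("return", None)
    params = {
        name: {"type": PY_TO_JSON.get(tp, "object")}
        for name, tp in hints.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {
            "type": "object",
            "properties": params,
            "required": list(params),
        },
    }

def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a letter in text."""
    return text.lower().count(letter.lower())

schema = describe_tool(count_letter)
```

The docstring becomes the tool description and each annotated parameter becomes a typed schema property, which is why well-typed, well-documented functions produce better tool definitions.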

Scenarios

Scenarios define evaluation logic with two yields:

```python
@env.scenario("checkout")
async def checkout_flow(product: str):
    # First yield: send prompt, receive answer
    answer = yield f"Add '{product}' to cart and checkout"

    # Second yield: return reward based on result
    order_exists = await check_order(product)
    yield 1.0 if order_exists else 0.0
```
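Under the hood, a scenario is an async generator driven in two steps: the runner takes the prompt from the first yield, sends the agent's answer back in, and takes the reward from the second yield. A minimal standalone sketch of that protocol, with a hypothetical string check standing in for a real grader and no HUD involved:

```python
import asyncio

async def checkout_flow(product: str):
    # First yield: hand the prompt to the runner, receive the agent's answer.
    answer = yield f"Add '{product}' to cart and checkout"
    # Second yield: hand back the reward.
    yield 1.0 if "checkout" in answer.lower() else 0.0

async def run_scenario(scenario):
    prompt = await scenario.asend(None)    # advance to the first yield
    answer = f"Done: {prompt}"             # stand-in for an agent's reply
    reward = await scenario.asend(answer)  # resume the generator with the answer
    return prompt, reward

prompt, reward = asyncio.run(run_scenario(checkout_flow("laptop")))
```

The `asend(None)` / `asend(answer)` pair is what lets one function both pose the task and grade the outcome.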
Create Tasks from Scenarios:

```python
task = env("checkout", product="laptop")

async with hud.eval(task) as ctx:
    await agent.run(ctx.prompt)
    await ctx.submit(agent.response)
```

Connectors

Connect to external services as tool sources.

connect_hub()

Connect to a deployed HUD environment:

```python
env.connect_hub("browser", prefix="browser")
# Tools available as browser_navigate, browser_click, etc.
```

connect_fastapi()

Import FastAPI routes as tools:

```python
from fastapi import FastAPI

api = FastAPI()

@api.get("/users/{user_id}", operation_id="get_user")
def get_user(user_id: int):
    return {"id": user_id, "name": "Alice"}

env.connect_fastapi(api)
# Tool available as get_user
```
| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| `app` | `FastAPI` | FastAPI application | Required |
| `name` | `str \| None` | Server name | `app.title` |
| `prefix` | `str \| None` | Tool name prefix | `None` |
| `include_hidden` | `bool` | Include routes with `include_in_schema=False` | `True` |

connect_openapi()

Import from an OpenAPI spec:

```python
env.connect_openapi("https://api.example.com/openapi.json")
```

connect_server()

Mount an MCPServer or FastMCP directly:

```python
from fastmcp import FastMCP

tools = FastMCP("tools")

@tools.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

env.connect_server(tools)
```

connect_mcp_config()

Connect via an MCP config dict:

```python
env.connect_mcp_config({
    "my-server": {
        "command": "uvx",
        "args": ["some-mcp-server"]
    }
})
```

connect_image()

Connect to a Docker image via stdio:

```python
env.connect_image("mcp/fetch")
```

Tool Formatting

Convert tools to provider-specific formats.

OpenAI

```python
# Chat Completions API
tools = env.as_openai_chat_tools()
response = await client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

# Responses API
tools = env.as_openai_responses_tools()

# Agents SDK (requires openai-agents)
tools = env.as_openai_agent_tools()
```

Anthropic/Claude

```python
tools = env.as_claude_tools()
response = await client.messages.create(
    model="claude-sonnet-4-5",
    messages=messages,
    tools=tools,
)
```

Gemini

```python
tools = env.as_gemini_tools()
config = env.as_gemini_tool_config()
```

LangChain

```python
# Requires langchain-core
tools = env.as_langchain_tools()
```

LlamaIndex

```python
# Requires llama-index-core
tools = env.as_llamaindex_tools()
```

Google ADK

```python
# Requires google-adk
tools = env.as_adk_tools()
```

Calling Tools

call_tool()

Execute tools with auto-format detection:

```python
# Simple call
result = await env.call_tool("my_tool", arg="value")

# From an OpenAI tool call
result = await env.call_tool(response.choices[0].message.tool_calls[0])

# From a Claude tool use
result = await env.call_tool(response.content[0])  # tool_use block
```
Returns result in matching format (OpenAI tool call → OpenAI tool message, etc.).
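Auto-detection of this kind can be pictured as dispatching on the shape of the argument. The following is a rough standalone sketch of the idea, not HUD's implementation; the stand-in objects only mimic the relevant provider response shapes:

```python
from types import SimpleNamespace

def detect_call(obj):
    """Classify a tool invocation by its shape (illustrative only)."""
    if isinstance(obj, str):
        return "name"    # plain tool name plus kwargs
    if getattr(obj, "type", None) == "tool_use":
        return "claude"  # Anthropic tool_use content block
    if hasattr(obj, "function"):
        return "openai"  # OpenAI-style tool call with a .function attribute
    raise TypeError(f"Unrecognized tool call: {obj!r}")

# Stand-in objects mimicking provider response shapes
openai_call = SimpleNamespace(function=SimpleNamespace(name="my_tool", arguments="{}"))
claude_block = SimpleNamespace(type="tool_use", name="my_tool", input={})
```

Once the input format is known, the result can be wrapped in the matching output format, which is why an OpenAI tool call comes back as an OpenAI tool message.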

Mock Mode

Test without real connections:

```python
env.mock()  # Enable mock mode

# Set specific mock outputs
env.mock_tool("navigate", "Navigation successful")
env.mock_tool("screenshot", b"fake_image_data")

async with env:
    result = await env.call_tool("navigate", url="https://example.com")
    # Returns "Navigation successful" instead of actually navigating

env.unmock()  # Disable mock mode
```
| Method | Description |
| --- | --- |
| `mock(enable=True)` | Enable/disable mock mode |
| `unmock()` | Disable mock mode |
| `mock_tool(name, output)` | Set a specific mock output |
| `is_mock` | Check if mock mode is enabled |
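The pattern behind mock mode can be sketched independently of HUD as a name-to-output registry consulted before any real dispatch; `MockRegistry` and its methods below are hypothetical names for illustration:

```python
class MockRegistry:
    """Toy mock layer: canned outputs take priority over real tool calls."""

    def __init__(self):
        self.enabled = False
        self.outputs = {}

    def mock_tool(self, name, output):
        self.outputs[name] = output

    def call(self, name, real_fn, **kwargs):
        # Return the canned output when mocked; otherwise run the real tool.
        if self.enabled and name in self.outputs:
            return self.outputs[name]
        return real_fn(**kwargs)

mocks = MockRegistry()
mocks.enabled = True
mocks.mock_tool("navigate", "Navigation successful")
result = mocks.call("navigate", lambda **kw: "real navigation", url="https://example.com")
```

Because the registry sits in front of dispatch, tests exercise the full call path without touching the network.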

Properties

| Property | Type | Description |
| --- | --- | --- |
| `name` | `str` | Environment name |
| `prompt` | `str \| None` | Default prompt (set by scenarios or agent code) |
| `is_connected` | `bool` | True if in context |
| `connections` | `dict[str, Connector]` | Active connections |

Creating Tasks

Call the environment to create a Task:

```python
# With a scenario
task = env("checkout", product="laptop")

# Without a scenario (just the environment)
task = env()
```

Then run with hud.eval():

```python
async with hud.eval(task, variants={"model": ["gpt-4o"]}) as ctx:
    ...
```
