Getting Started

Rivano sits between your application and AI providers, giving you complete visibility and governance over every request. This guide walks you through a basic integration in under five minutes.

Prerequisites

  • A Rivano account (free to sign up)
  • An existing AI provider key (OpenAI, Anthropic, Google, etc.)
  • Node.js 18+ or Python 3.9+
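
You can confirm a supported runtime is installed before starting. A quick check (run whichever matches the SDK you plan to use):

```shell
# Verify the runtime meets the minimum version above.
node --version     # should print v18.x or later
python3 --version  # should print Python 3.9 or later
```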

Step 1: Create Your Agent

After signing in, navigate to Agents → New Agent in the Rivano dashboard. Give it a name (e.g., “production-assistant”) and select your provider. Rivano generates a unique Agent ID and API key for routing.

Copy both values — you’ll need them in the next step.
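
The snippets in the next step read these values from environment variables. One way to set them, in shell syntax; the variable names match the code below, and the values shown are placeholders for the ones from your dashboard:

```shell
# Export the credentials the client snippets expect.
# Replace each placeholder with your real value.
export OPENAI_API_KEY="your-provider-key"   # your existing provider key
export RIVANO_AGENT_ID="your-agent-id"      # from Agents -> New Agent
export RIVANO_API_KEY="your-rivano-key"     # generated alongside the Agent ID
```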

Step 2: Configure Your Client

Point your existing AI SDK at Rivano’s proxy endpoint. No code changes beyond the base URL and two extra headers.

TypeScript (OpenAI SDK)

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.rivano.ai/v1",
  defaultHeaders: {
    "X-Rivano-Agent": process.env.RIVANO_AGENT_ID,
    "X-Rivano-Key": process.env.RIVANO_API_KEY,
  },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Summarize our Q4 results." }],
});

console.log(completion.choices[0].message.content);

Python (OpenAI SDK)

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://proxy.rivano.ai/v1",
    default_headers={
        "X-Rivano-Agent": os.environ["RIVANO_AGENT_ID"],
        "X-Rivano-Key": os.environ["RIVANO_API_KEY"],
    },
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize our Q4 results."}],
)

print(completion.choices[0].message.content)

Requests flow through Rivano transparently: your provider’s response is returned unmodified, and the only added latency is the single extra network hop through the proxy.
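
Because the proxy is just HTTP, you can make the same request without an SDK. A sketch with curl, assuming the endpoint follows the standard OpenAI-compatible path layout (`/v1/chat/completions`); the header names come from the examples above:

```shell
# Raw HTTP equivalent of the SDK snippets above.
# Assumes the OpenAI-compatible /v1/chat/completions path;
# the env vars hold your real credentials.
curl https://proxy.rivano.ai/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-Rivano-Agent: $RIVANO_AGENT_ID" \
  -H "X-Rivano-Key: $RIVANO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize our Q4 results."}]
  }'
```

This can be handy for smoke-testing your credentials before wiring up application code.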

Step 3: View Your Dashboard

Head back to the Rivano dashboard. Within seconds you’ll see:

  • Live trace stream — every request and response with full token counts
  • Cost attribution — spend broken down by agent, model, and user
  • Policy evaluations — any governance rules that fired on the request
  • Latency metrics — p50/p95/p99 response times per model

Next Steps

You’re now proxying AI traffic through Rivano. Explore these guides to unlock the full platform: