API Reference

Everything you need to integrate Gyanis AI into your EdTech product. Standard REST API with two model tiers — works with any HTTP client in any language.

Base URL

Production
https://api.gyanis.ai/platform/v1

All endpoints are relative to this base URL. HTTPS is required for all requests.

Authentication

Authenticate by passing your API key in the Authorization header. Keys follow the format gyn_sk_live_<32chars> for production or gyn_sk_test_<32chars> for testing.

Authorization Header
Authorization: Bearer gyn_sk_live_aBcDeFgHiJkLmNoPqRsTuVwXyZ012345

Live Keys

Production use. Billing enforced, credits deducted per request.

Test Keys

Development use. Billing skipped, requests still logged and moderated.
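
Keys can be sanity-checked client-side before a request is sent. A minimal sketch; the format above specifies only the prefix and length, so the assumption that the 32 characters are alphanumeric is ours:

```python
import re

# Validator for the documented key formats gyn_sk_live_<32chars> and
# gyn_sk_test_<32chars>. Helper name and the alphanumeric assumption are
# ours, not part of the API spec.
KEY_PATTERN = re.compile(r"^gyn_sk_(live|test)_[A-Za-z0-9]{32}$")

def key_mode(api_key: str):
    """Return 'live' or 'test' for a well-formed key, else None."""
    m = KEY_PATTERN.match(api_key)
    return m.group(1) if m else None
```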

API Key Scopes

Scope       | Grants Access To
*           | All endpoints (default)
chat        | POST /chat
chat.stream | POST /chat/stream
keys        | Key management (create, list, revoke)
usage       | GET /usage
webhooks    | Webhook management
entities    | Entity CRUD + history
billing     | Checkout, purchases, billing status
compliance  | Data export and deletion
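
The scope rule above reduces to a one-line check; the function name is ours, not part of the API:

```python
# Sketch of the documented scope rule: "*" (the default) grants every
# endpoint; otherwise the key must hold the endpoint's required scope.
def scope_allows(key_scopes: set, required_scope: str) -> bool:
    return "*" in key_scopes or required_scope in key_scopes
```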

Models

mobius (standard)

Optimized for cost and speed. Best for everyday educational interactions — tutoring, Q&A, content generation.

Input: $4.00/M tokens

Output: $12.00/M tokens

Cached: $2.00/M tokens

oloid (premium)

Higher-capability model for complex topics — competitive exam prep (JEE, NEET), deep reasoning, multi-step problems.

Input: $6.00/M tokens

Output: $22.00/M tokens

Cached: $3.00/M tokens

If no model is specified, your tenant's default tier is used. Model routing is admin-configurable with automatic failover via circuit breakers.
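
The per-million-token prices above make rough cost estimates straightforward. A back-of-envelope estimator (names are ours; cached-input pricing is omitted for simplicity):

```python
# Per-million-token prices from the tiers above, in USD.
PRICES_PER_MTOK = {
    "mobius": {"input": 4.00, "output": 12.00},
    "oloid": {"input": 6.00, "output": 22.00},
}

def request_cost_usd(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one request from its token usage."""
    p = PRICES_PER_MTOK[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000
```

For example, a mobius response with 42 prompt tokens and 185 completion tokens costs well under a cent.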

Chat Completion

POST /platform/v1/chat (scope: chat)

Send a conversation to the AI and receive a complete response.

Request Body

Field              | Type    | Required | Description
messages           | array   | Yes      | Array of message objects (1–100)
messages[].role    | string  | Yes      | "user", "assistant", or "system"
messages[].content | string  | Yes      | Message content (1–16,000 chars)
model              | string  | No       | "mobius" (default) or "oloid"
max_tokens         | integer | No       | Max response tokens (1–16,384, default: 4,096)
entity_id          | string  | No       | Your external entity ID for conversation continuity
subject            | string  | No       | Subject context (e.g., "math", "biology")
grade              | string  | No       | Grade level (e.g., "8", "12")
metadata           | object  | No       | Custom key-value pairs (returned in response)
json_mode          | boolean | No       | Request structured JSON output
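
The limits in the field table can be pre-checked client-side to fail fast before spending a request. A sketch; the helper name is ours, and the server enforces the same rules authoritatively:

```python
# Pre-check of the documented limits: 1-100 messages, valid roles,
# 1-16,000 characters of content each.
def message_errors(messages: list) -> list:
    """Return a list of problems; an empty list means the array looks valid."""
    errors = []
    if not 1 <= len(messages) <= 100:
        errors.append("messages must contain 1-100 items")
    for i, m in enumerate(messages):
        if m.get("role") not in ("user", "assistant", "system"):
            errors.append(f"messages[{i}].role is invalid")
        if not 1 <= len(m.get("content", "")) <= 16_000:
            errors.append(f"messages[{i}].content must be 1-16,000 chars")
    return errors
```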
Example Request Body
{
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful math tutor for 8th graders."
    },
    {
      "role": "user",
      "content": "Explain the Pythagorean theorem with an example."
    }
  ],
  "model": "mobius",
  "max_tokens": 1024,
  "entity_id": "student_abc123",
  "subject": "math",
  "grade": "8"
}
cURL
curl -X POST https://api.gyanis.ai/platform/v1/chat \
  -H "Authorization: Bearer gyn_sk_live_aBcDeFgHiJkLmNoPqRsTuVwXyZ012345" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "What is photosynthesis?" }
    ],
    "model": "mobius",
    "entity_id": "student_abc123"
  }'
Node.js (fetch)
const response = await fetch("https://api.gyanis.ai/platform/v1/chat", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.GYANIS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    messages: [{ role: "user", content: "What is photosynthesis?" }],
    model: "mobius",
    entity_id: "student_abc123",
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
Python (requests)
import requests

response = requests.post(
    "https://api.gyanis.ai/platform/v1/chat",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "messages": [{"role": "user", "content": "What is photosynthesis?"}],
        "model": "mobius",
        "entity_id": "student_abc123",
    },
)

data = response.json()
print(data["choices"][0]["message"]["content"])
PHP (cURL)
<?php
$ch = curl_init("https://api.gyanis.ai/platform/v1/chat");
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => [
        "Authorization: Bearer YOUR_API_KEY",
        "Content-Type: application/json",
    ],
    CURLOPT_POSTFIELDS => json_encode([
        "messages" => [["role" => "user", "content" => "What is photosynthesis?"]],
        "model" => "mobius",
        "entity_id" => "student_abc123",
    ]),
]);
$response = curl_exec($ch);
curl_close($ch);

$data = json_decode($response, true);
echo $data["choices"][0]["message"]["content"];
Response — 200 OK
{
  "id": "req_a1b2c3d4e5f6g7h8i9j0k1l2",
  "object": "chat.completion",
  "created": 1709834400,
  "model": "mobius",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Photosynthesis is the process by which green plants convert sunlight, water, and carbon dioxide into glucose and oxygen..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 185,
    "total_tokens": 227
  }
}

Streaming Chat (SSE)

POST /platform/v1/chat/stream (scope: chat.stream)

Stream a chat completion as Server-Sent Events (SSE). Ideal for real-time UIs where you want to display tokens as they arrive. The request body is identical to the chat endpoint.

cURL — Streaming
curl -X POST https://api.gyanis.ai/platform/v1/chat/stream \
  -H "Authorization: Bearer gyn_sk_live_aBcDeFgHiJkLmNoPqRsTuVwXyZ012345" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "messages": [
      { "role": "user", "content": "What is gravity?" }
    ],
    "model": "mobius",
    "entity_id": "student_abc123"
  }'
Node.js — Streaming
const response = await fetch("https://api.gyanis.ai/platform/v1/chat/stream", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.GYANIS_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    messages: [{ role: "user", content: "What is gravity?" }],
    model: "mobius",
    entity_id: "student_abc123",
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop(); // keep any partial SSE line for the next chunk
  for (const line of lines) {
    if (line.startsWith("data: ") && line !== "data: [DONE]") {
      const chunk = JSON.parse(line.slice(6));
      process.stdout.write(chunk.choices[0]?.delta?.content || "");
    }
  }
}
Python — Streaming
import requests, json

response = requests.post(
    "https://api.gyanis.ai/platform/v1/chat/stream",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "messages": [{"role": "user", "content": "What is gravity?"}],
        "model": "mobius",
        "entity_id": "student_abc123",
    },
    stream=True,
)

for line in response.iter_lines(decode_unicode=True):
    if line.startswith("data: ") and line != "data: [DONE]":
        chunk = json.loads(line[6:])
        content = chunk["choices"][0].get("delta", {}).get("content", "")
        print(content, end="", flush=True)
PHP — Streaming
<?php
$buffer = "";
$ch = curl_init("https://api.gyanis.ai/platform/v1/chat/stream");
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => [
        "Authorization: Bearer YOUR_API_KEY",
        "Content-Type: application/json",
    ],
    CURLOPT_POSTFIELDS => json_encode([
        "messages" => [["role" => "user", "content" => "What is gravity?"]],
        "model" => "mobius",
        "entity_id" => "student_abc123",
    ]),
    CURLOPT_WRITEFUNCTION => function ($ch, $data) use (&$buffer) {
        $buffer .= $data;
        $lines = explode("\n", $buffer);
        $buffer = array_pop($lines); // keep any partial SSE line for the next chunk
        foreach ($lines as $line) {
            if (str_starts_with($line, "data: ") && $line !== "data: [DONE]") {
                $chunk = json_decode(substr($line, 6), true);
                echo $chunk["choices"][0]["delta"]["content"] ?? "";
            }
        }
        return strlen($data);
    },
]);
curl_exec($ch);
curl_close($ch);
SSE Response Stream
data: {"id":"req_a1b2c3d4e5f6","object":"chat.completion.chunk","created":1709834400,"model":"mobius","choices":[{"index":0,"delta":{"content":"Gravity"},"finish_reason":null}]}

data: {"id":"req_a1b2c3d4e5f6","object":"chat.completion.chunk","created":1709834400,"model":"mobius","choices":[{"index":0,"delta":{"content":" is"},"finish_reason":null}]}

data: {"id":"req_a1b2c3d4e5f6","object":"chat.completion.chunk","created":1709834400,"model":"mobius","choices":[{"index":0,"delta":{"content":" a fundamental"},"finish_reason":null}]}

data: {"id":"req_a1b2c3d4e5f6","object":"chat.completion.chunk","created":1709834400,"model":"mobius","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":38,"completion_tokens":195,"total_tokens":233}}

data: [DONE]

SSE Event Format

Event         | Description
data: {chunk} | Content delta — append delta.content to your output
data: [DONE]  | Stream complete — close the connection
: ping        | Heartbeat comment every 15s — ignore in your client

Rate limit: 60 requests/minute per key (vs 120/min for non-streaming). Token usage stats are included in the final chunk with finish_reason.
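
A minimal per-line handler for the three event shapes above might look like this. This is our sketch, not an official client; a production client should also buffer partial lines across network chunks:

```python
import json

def parse_sse_line(line: str):
    """Return a content delta string, the sentinel 'DONE', or None to skip."""
    if line == "data: [DONE]":
        return "DONE"
    if line.startswith("data: "):
        chunk = json.loads(line[len("data: "):])
        return chunk["choices"][0].get("delta", {}).get("content", "")
    return None  # heartbeat comments (": ping") and blank lines
```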

Entity Context

Pass entity_id with your chat request to maintain conversation continuity per entity. The platform stores up to 50 recent messages per entity, scoped to your tenant.

Turn 1 — First question
{
  "messages": [{ "role": "user", "content": "What is a variable in math?" }],
  "entity_id": "student_abc123"
}
Turn 2 — Follow-up (history auto-included)
{
  "messages": [{ "role": "user", "content": "Can you give me an example?" }],
  "entity_id": "student_abc123"
}

Omit entity_id to use the API statelessly — manage the full message history yourself by passing all messages in each request.
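
The stateless pattern can be sketched as follows; the helper is ours, and the assistant reply shown is illustrative:

```python
# Stateless usage: without entity_id, the client owns the history and
# resends the full messages array on every call.
def add_turn(history: list, role: str, content: str) -> list:
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user", "What is a variable in math?")
# ...POST {"messages": history} to /chat, then append the reply:
add_turn(history, "assistant", "A variable is a symbol that stands for a value.")
add_turn(history, "user", "Can you give me an example?")
# the next request carries all three messages, so context is preserved
```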

Error Codes

All errors return a consistent JSON structure:

Error Response Format
{
  "error": {
    "code": "insufficient_credits",
    "message": "Tenant has insufficient token credits.",
    "type": "billing_error",
    "request_id": "req_a1b2c3d4e5f6g7h8i9j0k1l2"
  }
}
Code                     | HTTP | Description
missing_api_key          | 401  | No API key provided in request headers
invalid_api_key          | 401  | Key not found, malformed, or revoked
expired_api_key          | 401  | Key has passed its expiration date
endpoint_not_allowed     | 403  | Key scopes do not include this endpoint
account_suspended        | 403  | Tenant account suspended by admin
insufficient_credits     | 402  | Prepaid credit balance is zero
budget_exceeded          | 402  | Monthly token budget reached
rpm_exceeded             | 429  | Requests per minute limit exceeded
tpm_exceeded             | 429  | Tokens per minute limit exceeded
invalid_request          | 400  | Validation error (missing fields, wrong types)
message_too_long         | 400  | Message exceeds 16,000 character limit
too_many_messages        | 400  | Messages array exceeds 100-message limit
content_policy_violation | 400  | Flagged by content moderation
model_unavailable        | 503  | AI model provider temporarily unavailable
timeout                  | 504  | AI model request timed out
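
A client typically retries only the transient codes. One possible classification (our judgment, not an official policy): rate-limit and upstream-availability errors are worth retrying with backoff, while auth, billing, and validation errors are not:

```python
# Transient error codes from the table above: 429 rate limits plus
# 503/504 upstream failures. Everything else needs a code or account fix.
RETRYABLE_CODES = {"rpm_exceeded", "tpm_exceeded", "model_unavailable", "timeout"}

def should_retry(error_body: dict) -> bool:
    """Decide from a parsed error response whether a backoff-and-retry makes sense."""
    return error_body.get("error", {}).get("code") in RETRYABLE_CODES
```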

Rate Limits

Rate limits are enforced per API key with a sliding window. Defaults can be customized per tenant.

Metric          | Default   | Description
RPM             | 60        | Requests per minute
TPM             | 100,000   | Tokens per minute
Daily Requests  | Unlimited | Requests per day (0 = unlimited)
Chat Endpoint   | 120 RPM   | POST /chat specific throttle
Stream Endpoint | 60 RPM    | POST /chat/stream specific throttle

Response Headers

Header                         | Description
X-Request-Id                   | Unique request identifier (req_<24chars>)
X-RateLimit-Limit-Requests     | Maximum RPM for this key
X-RateLimit-Remaining-Requests | Remaining requests in current window
X-RateLimit-Reset-Requests     | Seconds until rate limit reset
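
These headers support proactive throttling: wait out the window instead of sending a request destined for a 429. A sketch deriving the wait time (helper name is ours):

```python
# Derive a client-side wait from the rate-limit response headers above.
def seconds_to_wait(headers: dict) -> float:
    """Return 0 while requests remain in the window; otherwise the reset delay."""
    remaining = int(headers.get("X-RateLimit-Remaining-Requests", "1"))
    if remaining > 0:
        return 0.0
    return float(headers.get("X-RateLimit-Reset-Requests", "1"))
```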

Ready to integrate?

Follow our step-by-step integration guide to go from zero to your first API call in under 5 minutes.