Integration Guide

Zero to First Call in 5 Minutes

A step-by-step guide to integrating Gyanis AI into your EdTech product. Standard REST API — works with any language or HTTP client.

  • Chat Completion: send prompts, get responses
  • SSE Streaming: real-time token delivery
  • Entity Tracking: per-student conversation memory

Prerequisites

  • A Gyanis AI account
  • An API key from your dashboard (Keys tab)
  • No SDK needed — use your language's built-in HTTP client or any HTTP library

Step 1: Send Your First Chat Request

Make a POST request to /chat with your API key and a messages array. That's it — a standard REST call.

Chat Completion
curl
curl -X POST https://api.gyanis.ai/platform/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "What is photosynthesis?" }
    ],
    "model": "mobius",
    "entity_id": "student_abc123"
  }'
Response — 200 OK
json
{
  "id": "req_a1b2c3d4e5f6g7h8i9j0k1l2",
  "object": "chat.completion",
  "created": 1709834400,
  "model": "mobius",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Photosynthesis is the process by which green plants convert sunlight, water, and carbon dioxide into glucose and oxygen..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 42,
    "completion_tokens": 185,
    "total_tokens": 227
  }
}

Tip: Use gyn_sk_test_... keys during development — they skip billing while keeping moderation and logging active.
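The same call works from any HTTP library. A minimal sketch in Python using only the standard library (the helper name build_chat_request is ours, not part of any SDK; replace YOUR_API_KEY with a real key before sending):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # assumption: your key from the dashboard Keys tab
BASE_URL = "https://api.gyanis.ai/platform/v1"

def build_chat_request(messages, model="mobius", entity_id=None):
    """Build a POST request for /chat with auth and JSON headers."""
    body = {"messages": messages, "model": model}
    if entity_id:
        body["entity_id"] = entity_id
    return urllib.request.Request(
        f"{BASE_URL}/chat",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it:
# req = build_chat_request(
#     [{"role": "user", "content": "What is photosynthesis?"}],
#     entity_id="student_abc123",
# )
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```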

Step 2: Add Streaming for Real-Time UI

For chat interfaces, use SSE streaming to show tokens as they arrive. Hit the /chat/stream endpoint with the same request body; tokens are delivered in real time.

Streaming (SSE)
curl
curl -X POST https://api.gyanis.ai/platform/v1/chat/stream \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -N \
  -d '{
    "messages": [
      { "role": "user", "content": "Explain DNA replication." }
    ],
    "model": "mobius",
    "entity_id": "student_abc123"
  }'
SSE Response Stream
text
data: {"id":"req_a1b2c3d4e5f6","object":"chat.completion.chunk","created":1709834400,"model":"mobius","choices":[{"index":0,"delta":{"content":"DNA"},"finish_reason":null}]}

data: {"id":"req_a1b2c3d4e5f6","object":"chat.completion.chunk","created":1709834400,"model":"mobius","choices":[{"index":0,"delta":{"content":" replication"},"finish_reason":null}]}

data: {"id":"req_a1b2c3d4e5f6","object":"chat.completion.chunk","created":1709834400,"model":"mobius","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":38,"completion_tokens":210,"total_tokens":248}}

data: [DONE]

SSE events: Each data: line contains a chat.completion.chunk object. Append delta.content to your output. data: [DONE] signals stream end. A : ping heartbeat is sent every 15 seconds.
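Client-side, consuming the stream reduces to line parsing. A sketch in Python (pure parsing, no HTTP; feed it decoded lines from the response body; the generator name iter_sse_deltas is illustrative):

```python
import json

def iter_sse_deltas(lines):
    """Yield content deltas from /chat/stream SSE lines; stop at [DONE]."""
    for raw in lines:
        line = raw.strip()
        # Skip blank separator lines and ": ping" heartbeats.
        if not line or line.startswith(":"):
            continue
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```

Joining the yielded deltas reconstructs the full assistant message; in a UI you would append each delta to the visible output instead.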

Step 3: Track Student Conversations with entity_id

Pass entity_id to automatically maintain conversation history per student. The platform stores up to 50 messages — no database needed on your side.

Multi-Turn Conversation
curl
# Turn 1 — ask a question
curl -X POST https://api.gyanis.ai/platform/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{ "role": "user", "content": "What is a variable?" }],
    "entity_id": "student_abc123",
    "subject": "math",
    "grade": "8"
  }'

# Turn 2 — follow-up (history auto-included)
curl -X POST https://api.gyanis.ai/platform/v1/chat \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{ "role": "user", "content": "Give me an example." }],
    "entity_id": "student_abc123"
  }'

Add subject and grade for curriculum-aware responses. These cascade from request → entity → tenant config.
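Because history is attached server-side via entity_id, each turn only needs to carry the newest user message. A sketch of a per-turn payload helper (the function name next_turn is ours, for illustration):

```python
def next_turn(entity_id, user_text, subject=None, grade=None):
    """Build the body for one conversation turn; earlier turns are
    recalled server-side from the entity's stored history (up to 50
    messages), so no client-side history buffer is needed."""
    body = {
        "messages": [{"role": "user", "content": user_text}],
        "entity_id": entity_id,
    }
    # subject/grade cascade: request overrides entity, entity overrides tenant.
    if subject:
        body["subject"] = subject
    if grade:
        body["grade"] = grade
    return body
```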

Step 4: Choose Your Model Tier

Pick the model that fits your use case. Set it per-request or configure a default at the tenant level.

mobius (Standard)

Best for everyday tutoring, Q&A, homework help, content generation.

$4/M input · $12/M output

oloid (Premium)

Complex reasoning — JEE/NEET prep, multi-step math, advanced science.

$6/M input · $22/M output
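The tier prices translate directly into a per-request cost from the usage block in each response. A quick estimator (rates hard-coded from the table above; verify current pricing against your dashboard):

```python
# USD per million tokens, copied from the tier table above.
PRICING = {
    "mobius": {"input": 4.00, "output": 12.00},
    "oloid": {"input": 6.00, "output": 22.00},
}

def estimate_cost_usd(model, prompt_tokens, completion_tokens):
    """Estimate the cost of one request from its usage token counts."""
    rates = PRICING[model]
    return (prompt_tokens * rates["input"]
            + completion_tokens * rates["output"]) / 1_000_000
```

For example, the sample response in Step 1 (42 prompt tokens, 185 completion tokens on mobius) works out to roughly $0.0024.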

Step 5: Handle Errors Gracefully

All errors return a consistent JSON format. Key codes to handle in production:

  • 402 insufficient_credits: prompt the user to purchase credits, or notify an admin
  • 429 rpm_exceeded: back off and retry after the number of seconds in the Retry-After header
  • 400 content_policy_violation: show a safe fallback message to the student
  • 503 model_unavailable: retry with exponential backoff (circuit breaker)

Error Response Format
json
{
  "error": {
    "code": "insufficient_credits",
    "message": "Tenant has insufficient token credits. Remaining: 500.",
    "type": "billing_error",
    "request_id": "req_a1b2c3d4e5f6g7h8i9j0k1l2"
  }
}
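Those codes suggest a small dispatch layer in front of your retry loop. A hedged sketch (the strategy names are ours; wire them to your own billing alerts, fallback copy, and retry machinery):

```python
import json

def classify_error(status, body):
    """Map an error response (HTTP status + JSON body) to a handling strategy."""
    code = json.loads(body)["error"]["code"]
    if status == 402 and code == "insufficient_credits":
        return "alert_billing"        # prompt purchase / notify admin
    if status == 429 and code == "rpm_exceeded":
        return "retry_after_header"   # sleep for Retry-After seconds, then retry
    if status == 400 and code == "content_policy_violation":
        return "show_safe_fallback"   # never surface the raw error to a student
    if status == 503 and code == "model_unavailable":
        return "retry_with_backoff"   # exponential backoff / circuit breaker
    return "fail_fast"

def backoff_delays(attempts, base=0.5, cap=30.0):
    """Capped exponential backoff schedule (seconds) for 503 retries."""
    return [min(cap, base * 2 ** i) for i in range(attempts)]
```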

Production Readiness Checklist

  • Switch from test keys to live keys
  • Handle 402 (credits) and 429 (rate limit) errors
  • Implement retry with exponential backoff for 503s
  • Set entity_id for student conversation tracking
  • Add subject and grade for curriculum context
  • Subscribe to webhooks for credit alerts
  • Review rate limits for your expected traffic
  • Test streaming in your chat UI

Need more details?

Check the full API reference for all endpoints, response schemas, and advanced features like webhooks, entities, and billing.