API Reference

Complete API documentation for TensorCortex - your unified interface to 100+ AI models.

Getting Started

Authentication

All API requests require authentication via API key. Get your key from the dashboard.

// Add to your request headers
Authorization: Bearer tc_your_api_key
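As a sketch, the header can be attached with any HTTP client; the `buildHeaders` helper below is illustrative, not part of the official SDK:

```typescript
// Build the headers TensorCortex requests expect.
// `buildHeaders` is an illustrative helper, not an SDK function.
function buildHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };
}

// Usage with fetch (any HTTP client works the same way):
// fetch('https://openai.tensor.cx/v1/chat/completions', {
//   method: 'POST',
//   headers: buildHeaders('tc_your_api_key'),
//   body: JSON.stringify({ model: 'gpt-4o', messages: [] }),
// });
```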

Base URL

https://openai.tensor.cx/v1

Replace openai with any supported provider: anthropic, google, mistral, xai, etc.
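Since the provider is just the subdomain, the base URL can be derived mechanically; a minimal sketch (the `Provider` type and helper name are assumptions, not SDK exports):

```typescript
// Providers named in this reference; others may be supported.
type Provider = 'openai' | 'anthropic' | 'google' | 'mistral' | 'xai';

// Derive the per-provider base URL from the pattern above.
function baseUrl(provider: Provider): string {
  return `https://${provider}.tensor.cx/v1`;
}
```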

Rate Limits

Developer: 100 requests/minute

Pro: 1,000 requests/minute

Enterprise: custom (unlimited available)

API Endpoints

POST /chat/completions

Generate chat completions using any supported model. OpenAI-compatible format.

Request Body

{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "max_tokens": 1000,
  "temperature": 0.7,
  "stream": false
}

The provider is determined by your base URL (e.g., openai.tensor.cx, anthropic.tensor.cx).

Response

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 10,
    "total_tokens": 30
  },
  "tensorcortex": {
    "cost": 0.00045,
    "latency_ms": 342,
    "provider": "openai"
  }
}

Example (cURL)

curl https://openai.tensor.cx/v1/chat/completions \
  -H "Authorization: Bearer tc_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

Example (TypeScript)

import TensorCortex from '@tensorcortex/sdk'

const tc = new TensorCortex({
  apiKey: 'tc_your_api_key',
  provider: 'openai' // or 'anthropic', 'google', etc.
})

const response = await tc.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }]
})

console.log(response.choices[0].message.content)
console.log('Cost:', response.tensorcortex.cost)

GET /models

List all available models for the selected provider.

Response

{
  "data": [
    {
      "id": "gpt-4o",
      "provider": "openai",
      "context_length": 128000,
      "capabilities": ["chat", "function_calling", "vision"]
    },
    {
      "id": "gpt-4o-mini",
      "provider": "openai",
      "context_length": 128000,
      "capabilities": ["chat", "function_calling", "vision"]
    }
  ]
}
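Given the response shape above, you can filter the model list by capability client-side; the `withCapability` helper below is a sketch, not part of the API:

```typescript
// Shape of one entry in the /models response shown above.
interface ModelInfo {
  id: string;
  provider: string;
  context_length: number;
  capabilities: string[];
}

// Illustrative helper: pick the ids of models that support a capability.
function withCapability(models: ModelInfo[], cap: string): string[] {
  return models.filter((m) => m.capabilities.includes(cap)).map((m) => m.id);
}
```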

POST /embeddings

Generate embeddings for text using various embedding models.

Request Body

{
  "model": "text-embedding-3-large",
  "input": "The quick brown fox jumps over the lazy dog"
}
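Embedding vectors are typically compared with cosine similarity; a minimal sketch of that comparison (not part of the API itself):

```typescript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|). Returns a value in [-1, 1].
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```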

Parameters

model (required)
Model name (e.g., gpt-4o, claude-3-5-sonnet-20241022). Provider is determined by your base URL.

messages (required)
Array of message objects, each with a role and content.

stream (optional)
Enable streaming responses via Server-Sent Events. Default: false.
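With stream: true, each Server-Sent Events line carries a JSON delta and the stream ends with data: [DONE], as in other OpenAI-compatible APIs; a parsing sketch under that assumption (the exact delta shape is not specified in this reference):

```typescript
// Parse one Server-Sent Events chunk from an OpenAI-style stream.
// Each event line looks like `data: {...json...}`; the stream ends
// with `data: [DONE]`. Returns the content deltas found in the chunk.
function parseSseChunk(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break;
    const parsed = JSON.parse(payload);
    const content = parsed.choices?.[0]?.delta?.content;
    if (typeof content === 'string') deltas.push(content);
  }
  return deltas;
}
```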

Error Codes

401 Unauthorized

Invalid or missing API key.

429 Too Many Requests

Rate limit exceeded. Slow down your requests.

502 Bad Gateway

Upstream provider error. Check provider status or try again.

503 Service Unavailable

Service temporarily unavailable. Retry with exponential backoff.
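The retryable statuses (429, 502, 503) plus exponential backoff can be combined into a small wrapper; the base delay and cap below are illustrative choices, not documented values, and official SDKs may already retry for you:

```typescript
// Exponential backoff delay: 500ms, 1s, 2s, 4s, ... capped at 30s.
// Base and cap are illustrative, not documented values.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Generic retry loop: `request` performs the call and returns the HTTP
// status; retry on 429/502/503, give up after `maxRetries` attempts.
async function withRetry(
  request: () => Promise<number>,
  maxRetries = 5,
  baseMs = 500,
): Promise<number> {
  for (let attempt = 0; ; attempt++) {
    const status = await request();
    if (![429, 502, 503].includes(status) || attempt >= maxRetries) return status;
    await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt, baseMs)));
  }
}
```

Keeping the backoff calculation separate from the retry loop makes the delay schedule easy to test and swap (e.g., to add jitter).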

Official SDKs

TypeScript/JavaScript

OpenAI-compatible SDK for Node.js and browsers.

npm install @tensorcortex/sdk

Python

Python SDK with async support.

pip install tensorcortex

Go

Idiomatic Go client library.

go get github.com/tensorcortex/go-sdk

Need Help?

Our developer support team is here to help you integrate TensorCortex.