Build with NAPH using our OpenAI-compatible API. Drop-in replacement for existing integrations with streaming support, function calling, and SDKs for every major language.
Getting Started
Get up and running with the NAPH API in minutes. Our endpoints are compatible with the OpenAI SDK, so your existing code works with minimal changes.
Contact our team to receive API credentials. Enterprise customers receive dedicated API keys with custom rate limits and priority access.
Use our official Python SDK or the OpenAI SDK with a custom base URL. Both approaches work identically.
Send a completion request using the model of your choice. Response format matches OpenAI exactly for seamless migration.
Works with any library or framework built for the OpenAI API.
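For example, the official OpenAI Python SDK can be pointed at NAPH by overriding its base URL. A minimal sketch — the URL below is a placeholder, so substitute the endpoint supplied with your credentials:

```python
import os

# Placeholder endpoint -- substitute the base URL supplied with your credentials.
NAPH_BASE_URL = "https://api.naph.example/v1"

def make_client():
    # The official OpenAI SDK works unchanged once base_url is overridden.
    from openai import OpenAI
    return OpenAI(base_url=NAPH_BASE_URL, api_key=os.environ["NAPH_API_KEY"])
```

From here, every `client.chat.completions.create(...)` call behaves as it would against the OpenAI API.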
Server-sent events for real-time token delivery, with sub-50ms time to first token (TTFT).

Structured tool use with JSON schema validation for agents.
from naph import NAPH

# Initialize client
client = NAPH(
    api_key="your-api-key"
)

# Create a completion
response = client.chat.completions.create(
    model="naph-70b",
    messages=[
        {
            "role": "user",
            "content": "Explain quantum computing in simple terms."
        }
    ],
    max_tokens=1024,
    temperature=0.7
)

print(response.choices[0].message.content)
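Streaming uses the same `create` call with `stream=True`; tokens then arrive as server-sent events. A minimal sketch, assuming an OpenAI-compatible client object like the one above:

```python
def stream_reply(client, prompt, model="naph-70b"):
    """Yield text deltas from a streaming chat completion.

    `client` can be the NAPH SDK client or any OpenAI-compatible client.
    """
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # deliver tokens as server-sent events
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk carries no content
            yield delta

# Usage: print tokens as they arrive
# for token in stream_reply(client, "Tell me a joke"):
#     print(token, end="", flush=True)
```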
Libraries
Native libraries for major programming languages with full type support, automatic retries, and streaming helpers.
pip install naph
npm install @naph/sdk
go get github.com/naph/go-sdk
cargo add naph
Resources
Everything you need to build production applications with NAPH models.
Complete documentation of all endpoints, parameters, and response formats with interactive examples.
View API Reference
Guide to API key management, token rotation, and security best practices for production deployments.
Read Guide
Implement real-time token streaming with Server-Sent Events for responsive user interfaces.
Streaming Guide
Build AI agents with structured tool use. Define functions, validate inputs, and handle responses.
Function Calling Guide
Generate text embeddings for semantic search, clustering, and retrieval-augmented generation.
Embeddings Guide
Understand rate limiting, implement backoff strategies, and optimize for high-throughput applications.
Rate Limits Guide
# Define available tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="naph-70b",
    messages=[{
        "role": "user",
        "content": "What's the weather in Tokyo?"
    }],
    tools=tools,
    tool_choice="auto"
)
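When the model decides to use a tool, the response message carries `tool_calls`, each with a function name and JSON-encoded arguments. A minimal dispatch loop, assuming the OpenAI-compatible response shape and a hypothetical `get_weather` implementation:

```python
import json

def get_weather(location):
    # Hypothetical implementation -- replace with a real weather lookup.
    return f"Sunny in {location}"

TOOL_REGISTRY = {"get_weather": get_weather}

def run_tool_calls(message):
    """Execute each tool call in a model message and collect the results."""
    results = []
    for call in message.tool_calls or []:
        func = TOOL_REGISTRY[call.function.name]
        args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
        results.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": str(func(**args)),
        })
    return results
```

The returned `role: "tool"` messages are appended to the conversation and sent back in a follow-up request so the model can compose its final answer.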
Advanced Features
NAPH models support structured function calling for building AI agents that interact with external systems, APIs, and databases.
Define tool parameters with JSON Schema. The model generates validated, structured outputs.
The model can invoke multiple tools in a single response for efficient multi-step workflows.
Consistent function call formatting with proper error handling and retry support.
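For retries, a standard approach is exponential backoff with jitter. A generic sketch of that pattern (not the SDKs' built-in retry logic):

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=0.5, retry_on=(Exception,)):
    """Call fn(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            # Double the delay each attempt; random jitter avoids
            # synchronized retry storms across many clients.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            time.sleep(delay)
```

In practice `retry_on` would be narrowed to rate-limit and transient network errors rather than all exceptions.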
Integration
NAPH works with every major AI framework and orchestration library out of the box.
Full compatibility with LangChain's chat models, agents, and chains. Use NAPH as a drop-in replacement for any OpenAI-based workflow.
Build RAG applications with LlamaIndex and NAPH models. Native integration for query engines, chat engines, and data agents.
Build AI-powered React applications with the Vercel AI SDK. Streaming UI components work seamlessly with NAPH endpoints.
Use the official OpenAI Python and Node.js SDKs by pointing to NAPH's base URL. Zero code changes required.
Get API access and start integrating NAPH models into your applications today.