
Overview

NovitaLLMService provides access to Novita AI’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with competitive pricing and a wide selection of open-source models.

  • Novita LLM API Reference: Pipecat's API methods for Novita AI integration
  • Example Implementation: Complete example with function calling
  • Novita AI Documentation: Official Novita AI API documentation and features
  • Novita AI Platform: Access models and manage API keys

Installation

To use Novita AI services, install the required dependency:
pip install "pipecat-ai[novita]"

Prerequisites

Novita AI Account Setup

Before using Novita AI LLM services, you need:
  1. Novita AI Account: Sign up at Novita AI
  2. API Key: Generate an API key from your account dashboard
  3. Model Selection: Choose from a wide selection of open-source models

Required Environment Variables

  • NOVITA_API_KEY: Your Novita AI API key for authentication
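
As a minimal sketch (standard library only; the helper name is ours, not part of Pipecat), you can fail fast at startup when the variable is missing rather than at the first request:

```python
import os

def require_novita_key() -> str:
    # Illustrative helper: read the key and raise a clear error if it is unset.
    key = os.getenv("NOVITA_API_KEY")
    if not key:
        raise RuntimeError(
            "NOVITA_API_KEY is not set; generate a key in your Novita AI dashboard."
        )
    return key
```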

Configuration

api_key (str, required)
Novita AI API key for authentication.

base_url (str, default: "https://api.novita.ai/openai")
Base URL for the Novita AI API endpoint.

settings (NovitaLLMService.Settings, default: None)
Runtime-configurable settings. See Settings below.

Settings

Runtime-configurable settings are passed via the settings constructor argument as NovitaLLMService.Settings(...) and can be updated mid-conversation with LLMUpdateSettingsFrame; see Service Settings for details. This service uses the same settings as OpenAILLMService; see OpenAI LLM Settings for the full parameter reference. The default model is "moonshotai/kimi-k2.5".

Usage

Basic Setup

import os
from pipecat.services.novita import NovitaLLMService

llm = NovitaLLMService(
    api_key=os.getenv("NOVITA_API_KEY"),
    settings=NovitaLLMService.Settings(
        model="openai/gpt-oss-120b",
    ),
)

With Custom Settings

llm = NovitaLLMService(
    api_key=os.getenv("NOVITA_API_KEY"),
    settings=NovitaLLMService.Settings(
        model="moonshotai/kimi-k2.5",
        temperature=0.7,
        max_tokens=500,
    ),
)

With Function Calling

from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.processors.aggregators.llm_context import LLMContext
from pipecat.services.llm_service import FunctionCallParams

async def get_weather(params: FunctionCallParams):
    await params.result_callback({"temperature": "75", "conditions": "sunny"})

llm = NovitaLLMService(
    api_key=os.getenv("NOVITA_API_KEY"),
    settings=NovitaLLMService.Settings(
        model="openai/gpt-oss-120b",
    ),
)

llm.register_function("get_weather", get_weather)

weather_function = FunctionSchema(
    name="get_weather",
    description="Get the current weather",
    properties={
        "location": {
            "type": "string",
            "description": "City and state, e.g. San Francisco, CA",
        },
    },
    required=["location"],
)

tools = ToolsSchema(standard_tools=[weather_function])
context = LLMContext(tools=tools)
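
For reference, a FunctionSchema like the one above corresponds to the standard OpenAI-style "tools" JSON that an OpenAI-compatible API receives on the wire. A hedged, standard-library-only sketch of that mapping (the helper is illustrative, not a Pipecat API):

```python
def to_openai_tool(name, description, properties, required):
    # Illustrative: build the OpenAI-compatible tool definition by hand.
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

weather_tool = to_openai_tool(
    name="get_weather",
    description="Get the current weather",
    properties={
        "location": {
            "type": "string",
            "description": "City and state, e.g. San Francisco, CA",
        },
    },
    required=["location"],
)
```

In normal usage you never build this dict yourself; ToolsSchema handles the conversion when the context is sent to the model.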

Notes

  • Novita AI provides an OpenAI-compatible API, so all OpenAI features and patterns work with this service
  • The service supports streaming responses, function calling, and other OpenAI-compatible features
  • Model selection depends on your Novita AI account access and pricing tier
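
Because the API is OpenAI-compatible, a raw chat-completions call is a plain JSON POST to the base URL. A standard-library-only sketch that builds (but does not send) such a request, assuming the default endpoint shown in Configuration:

```python
import json
import urllib.request

def build_chat_request(api_key, model, messages,
                       base_url="https://api.novita.ai/openai"):
    # Illustrative helper: assemble the OpenAI-style request without sending it.
    payload = {"model": model, "messages": messages, "stream": True}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    api_key="sk-...",  # placeholder, not a real key
    model="moonshotai/kimi-k2.5",
    messages=[{"role": "user", "content": "Hello"}],
)
```

In practice NovitaLLMService (via OpenAILLMService) handles all of this for you; the sketch only shows why OpenAI tooling and patterns carry over unchanged.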