Version: v0.7.4

AI/ML API Proxy LLM Configuration

OpenAI-compatible chat completion request schema.

Details can be found at https://api.aimlapi.com/docs-public.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | string | ✅ | ID of the model to use. |
| `messages` | array | ✅ | List of messages comprising the conversation. |
| `max_completion_tokens` | integer | ❌ | Maximum number of tokens to generate for the completion. |
| `max_tokens` | integer | ❌ | Alias for `max_completion_tokens`. |
| `stream` | boolean | ❌ | Whether to stream back partial progress. |
| `stream_options` | object | ❌ | Additional options to control streaming behavior. |
| `tools` | array | ❌ | List of tools (functions or APIs) the model may call. |
| `tool_choice` | object | ❌ | Which tool the model should call, if any. |
| `parallel_tool_calls` | boolean | ❌ | Whether tools can be called in parallel. |
| `n` | integer | ❌ | How many completions to generate for each prompt. |
| `stop` | array \| string | ❌ | Sequences at which the model stops generating further tokens. |
| `logprobs` | boolean | ❌ | Whether to include log probabilities for tokens. |
| `top_logprobs` | integer | ❌ | Number of most likely tokens to return log probabilities for. |
| `logit_bias` | object | ❌ | Modifies the likelihood of specified tokens appearing in the completion. |
| `frequency_penalty` | number | ❌ | Penalizes new tokens based on their frequency in the text so far. |
| `presence_penalty` | number | ❌ | Penalizes new tokens based on whether they already appear in the text so far. |
| `seed` | integer | ❌ | Seed for sampling (for reproducibility). |
| `temperature` | number | ❌ | Sampling temperature (higher = more random). |
| `top_p` | number | ❌ | Nucleus sampling (top-p) cutoff value. |
| `response_format` | object \| string | ❌ | Format to return the completion in, such as `json` or `text`. |
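The schema above can be sketched as a plain request body. The following Python snippet is a minimal sketch that assembles such a payload; the model ID and parameter values are illustrative assumptions, not values from this page, and the exact endpoint and authentication are described in the linked docs:

```python
import json

def build_chat_request(model, messages, **options):
    """Assemble an OpenAI-compatible chat completion payload.

    `model` and `messages` are required; any optional field from the
    table above (temperature, max_completion_tokens, stop, ...) may be
    passed as a keyword argument.
    """
    payload = {"model": model, "messages": messages}
    # Drop unset options so the request only carries explicitly set fields.
    payload.update({k: v for k, v in options.items() if v is not None})
    return payload

# Hypothetical example values; consult the provider's model list.
payload = build_chat_request(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_completion_tokens=256,
)
body = json.dumps(payload)  # JSON body to POST to the chat completions endpoint
```

The filter on `None` keeps the request minimal, so server-side defaults apply to any parameter you do not set explicitly.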