Perplexity AI API: Chat Completions
POST
https://api.lowcodeapi.com/perplexityai/chat/completions
Request Body
Content Type : application/json
Request Parameters
model (string)
Defaults to llama-3-sonar-small-32k-online
messages (array)
A list of messages comprising the conversation so far.
max_tokens (number)
The maximum number of completion tokens returned by the API.
temperature (number)
Defaults to 0.2
top_p (number)
Defaults to 0.9
return_citations (boolean)
Determines whether or not a request to an online model should return citations.
return_images (boolean)
Determines whether or not a request to an online model should return images.
top_k (number)
Specified as an integer between 0 and 2048 inclusive.
stream (boolean)
Determines whether or not to incrementally stream the response.
presence_penalty (number)
Value between -2.0 and 2.0.
frequency_penalty (number)
Defaults to 1
Overview

Generates a model's response for the given chat conversation.

API Reference Link
https://docs.perplexity.ai/reference/post_chat_completions
Snippet
cURL
curl -X POST \
 'https://api.lowcodeapi.com/perplexityai/chat/completions' \
 -H 'Cache-Control: no-cache' \
 -H 'Content-Type: application/json' --data-raw '{
  "model": "llama-3-sonar-small-32k-online",
  "messages": [
    {"role": "user", "content": "How many moons does Mars have?"}
  ],
  "max_tokens": 256,
  "temperature": 0.2,
  "top_p": 0.9,
  "return_citations": false,
  "return_images": false,
  "top_k": 0,
  "stream": false,
  "presence_penalty": 0,
  "frequency_penalty": 1
}'
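The same request can be issued from Python. The sketch below is illustrative, not an official client: the `build_payload` helper and its placeholder values are assumptions (the numeric values simply echo the documented defaults), it requires the third-party `requests` package, and it assumes any authentication LowCodeAPI expects is configured separately.

```python
import json

API_URL = "https://api.lowcodeapi.com/perplexityai/chat/completions"


def build_payload(user_message: str, stream: bool = False) -> dict:
    """Assemble a chat-completions request body using the documented defaults."""
    return {
        "model": "llama-3-sonar-small-32k-online",  # default model per this page
        "messages": [
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 256,      # illustrative cap on completion tokens
        "temperature": 0.2,     # documented default
        "top_p": 0.9,           # documented default
        "stream": stream,
    }


def send_request(payload: dict) -> dict:
    """POST the payload and return the parsed JSON response.

    Requires the `requests` package; raises on HTTP errors.
    """
    import requests

    resp = requests.post(API_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    body = build_payload("How many moons does Mars have?")
    print(json.dumps(body, indent=2))
```

Setting `stream=True` in `build_payload` only flips the flag in the body; consuming a streamed response would additionally require iterating over the HTTP response chunks rather than calling `resp.json()`.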
© 2024 LowCodeAPI

Last Updated : 2024-12-16 14:03 +00:00