{
"model": "",
"messages": [],
"max_tokens": "",
"temperature": "",
"top_p": "",
"return_citations": "",
"return_images": "",
"top_k": "",
"stream": "",
"presence_penalty": "",
"frequency_penalty": ""
}
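For illustration, a populated request body might look like the sketch below. The model name, messages, and sampling values are assumptions chosen to show the expected types (a model string, a message array, numbers, and booleans), not documented defaults; substitute whichever Perplexity model your account supports.
{
"model": "sonar",
"messages": [
{"role": "system", "content": "Be precise and concise."},
{"role": "user", "content": "How many moons does Mars have?"}
],
"max_tokens": 256,
"temperature": 0.2,
"top_p": 0.9,
"return_citations": true,
"return_images": false,
"top_k": 0,
"stream": false,
"presence_penalty": 0,
"frequency_penalty": 1
}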
Generates a model's response for the given chat conversation.
curl -X POST \
'https://api.lowcodeapi.com/perplexityai/chat/completions' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' --data-raw '{
"model": "",
"messages": [],
"max_tokens": "",
"temperature": "",
"top_p": "",
"return_citations": "",
"return_images": "",
"top_k": "",
"stream": "",
"presence_penalty": "",
"frequency_penalty": ""
}'
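As a minimal sketch, the same request with only the required fields filled in might look like the call below. The model name and message content are illustrative assumptions, and any API key or token that LowCodeAPI requires for your account is omitted; attach your credentials however your LowCodeAPI setup specifies.
curl -X POST \
'https://api.lowcodeapi.com/perplexityai/chat/completions' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' --data-raw '{
"model": "sonar",
"messages": [
{"role": "user", "content": "Summarize the latest developments in battery technology."}
],
"max_tokens": 200,
"temperature": 0.3,
"stream": false
}'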
Last Updated: 2024-12-16 14:03 +00:00