Completes chat completion inference endpoint docs #4451

Open
wants to merge 2 commits into base: main
77 changes: 74 additions & 3 deletions specification/inference/_types/CommonTypes.ts
@@ -47,10 +47,47 @@ export class RequestChatCompletion {
temperature?: float
/**
* Controls which tool is called by the model.
* String representation: One of `auto`, `none`, or `required`. `auto` lets the model choose between calling tools and generating a message. `none` prevents the model from calling any tools. `required` forces the model to call one or more tools.
* Example (object representation):
* ```
* {
* "tool_choice": {
* "type": "function",
* "function": {
* "name": "get_current_weather"
* }
* }
* }
* ```
*/
tool_choice?: CompletionToolType
/**
* A list of tools that the model can call.
* Example:
* ```
* {
* "tools": [
* {
* "type": "function",
* "function": {
* "name": "get_price_of_item",
* "description": "Get the current price of an item",
* "parameters": {
* "type": "object",
* "properties": {
"item": {
"type": "string",
"description": "The item identifier, for example `12345`"
},
"unit": {
"type": "string",
"description": "The currency unit for the price"
}
* }
* }
* }
* }
* ]
* }
* ```
*/
tools?: Array<CompletionTool>
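Combining the two fields documented above, a request body that declares a tool and forces the model to call it can be sketched as follows. This is a minimal illustration only: the `get_price_of_item` function and its parameters are hypothetical, and the body is shown as a plain Python dict rather than a full client call.

```python
import json

# Sketch of a chat completion request body using `tools` and `tool_choice`.
# The function name and parameters are made up for illustration.
body = {
    "messages": [
        {"role": "user", "content": "What does item 12345 cost in USD?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_price_of_item",
                "description": "Get the current price of an item",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "item": {"type": "string"},
                        "unit": {"type": "string"},
                    },
                    "required": ["item"],
                },
            },
        }
    ],
    # Object form of tool_choice: force a call to this specific function
    # instead of letting the model decide (`auto`) or answer directly.
    "tool_choice": {
        "type": "function",
        "function": {"name": "get_price_of_item"},
    },
}

payload = json.dumps(body)
```

A body like this would be sent to the chat completion inference endpoint (for example `POST _inference/chat_completion/<inference_id>/_stream`, assuming the path from the inference API docs).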
/**
@@ -140,18 +177,52 @@ export type MessageContent = string | Array<ContentObject>
export interface Message {
/**
* The content of the message.
*
* String example:
* ```
* {
* "content": "Some string"
* }
* ```
*
* Object example:
* ```
* {
* "content": [
* {
* "text": "Some text",
* "type": "text"
* }
* ]
* }
* ```
*/
content?: MessageContent
/**
* The role of the message author.
* The role of the message author. Valid values are `user`, `assistant`, `system`, and `tool`.
*/
role: string
/**
* The tool call that this message is responding to.
* Used only for messages with the `tool` role. The tool call that this message is responding to.
*/
tool_call_id?: Id
/**
* The tool calls generated by the model.
* Used only for messages with the `assistant` role. The tool calls generated by the model. If specified, the `content` field is optional.
* Example:
* ```
* {
* "tool_calls": [
* {
* "id": "call_KcAjWtAww20AihPHphUh46Gd",
* "type": "function",
* "function": {
* "name": "get_current_weather",
* "arguments": "{\"location\":\"Boston, MA\"}"
* }
* }
* ]
* }
* ```
*/
tool_calls?: Array<ToolCall>
}
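The fields above imply a round trip: the model emits an assistant message carrying `tool_calls`, and the client replies with a `tool` role message whose `tool_call_id` echoes the call's `id`. A minimal sketch, with a made-up call id and weather result:

```python
import json

# Assistant message returned by the model, carrying a tool call.
# The id and arguments are illustrative, not real output.
assistant_msg = {
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_KcAjWtAww20AihPHphUh46Gd",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\":\"Boston, MA\"}",
            },
        }
    ],
}

# Run the requested function locally, then report the result back in a
# `tool` role message that references the call id.
args = json.loads(assistant_msg["tool_calls"][0]["function"]["arguments"])
tool_msg = {
    "role": "tool",
    "tool_call_id": assistant_msg["tool_calls"][0]["id"],
    "content": f"The weather in {args['location']} is 22C and sunny",
}
```

Both messages would then be appended to the `messages` array of the follow-up request so the model can produce its final answer.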
@@ -30,7 +30,7 @@ import { Id } from '@_types/common'
/**
* Create an Amazon Bedrock inference endpoint.
*
- * Creates an inference endpoint to perform an inference task with the `amazonbedrock` service.
+ * Create an inference endpoint to perform an inference task with the `amazonbedrock` service.
*
* >info
* > You need to provide the access and secret keys only once, during the inference model creation. The get inference API does not retrieve your access or secret keys. After creating the inference model, you cannot change the associated key pairs. If you want to use a different access and secret key pair, delete the inference model and recreate it with the same name and the updated keys.
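As a hedged sketch of the note above, an `amazonbedrock` endpoint creation body might look like the following. The credentials are placeholders, and the exact `service_settings` field names are assumptions to be checked against the Amazon Bedrock service documentation.

```python
import json

# Sketch of a PUT _inference/<task_type>/<inference_id> body for the
# amazonbedrock service. All credential values are placeholders; field
# names are assumed from the service docs, not verified here.
body = {
    "service": "amazonbedrock",
    "service_settings": {
        "access_key": "<aws-access-key>",
        "secret_key": "<aws-secret-key>",
        "region": "us-east-1",
        "provider": "amazontitan",
        "model": "amazon.titan-text-premier-v1:0",
    },
}
request = json.dumps(body)
```

Per the callout, the keys are supplied only at creation and are never returned by the get inference API; rotating them means deleting the endpoint and recreating it with the new key pair.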
2 changes: 1 addition & 1 deletion specification/inference/put_mistral/PutMistralRequest.ts
@@ -29,7 +29,7 @@ import { Id } from '@_types/common'
/**
* Create a Mistral inference endpoint.
*
- * Creates an inference endpoint to perform an inference task with the `mistral` service.
+ * Create an inference endpoint to perform an inference task with the `mistral` service.
* @rest_spec_name inference.put_mistral
* @availability stack since=8.15.0 stability=stable visibility=public
* @availability serverless stability=stable visibility=public
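For symmetry with the Amazon Bedrock example, a `mistral` endpoint creation body can be sketched as follows. The API key is a placeholder and the `service_settings` field names are assumptions based on the Mistral service settings.

```python
import json

# Sketch of a PUT _inference/text_embedding/<inference_id> body for the
# mistral service (available since stack 8.15.0 per the annotation above).
# The api_key is a placeholder; field names are assumed, not verified.
body = {
    "service": "mistral",
    "service_settings": {
        "api_key": "<mistral-api-key>",
        "model": "mistral-embed",
    },
}
request = json.dumps(body)
```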