Assistants
- OpenAI's Assistants API has its own dedicated endpoint.
- The Assistants API enables the creation of AI assistants, offering functionalities like the code interpreter, knowledge retrieval from files, and function execution.
- Read here for in-depth documentation of the feature, how it works, and what it's capable of.
- As with the regular OpenAI API, go to https://platform.openai.com/account/api-keys to get a key.
- You will need to set the `ASSISTANTS_API_KEY` environment variable to your key, or you can set it to `user_provided` for users to provide their own (see the `.env` sketch after this list).
- You can determine which models you would like to have available with `ASSISTANTS_MODELS`; otherwise, the models list fetched from OpenAI will be used (only Assistants API-compatible models will be shown).
- If necessary, you can also set an alternate base URL instead of the official one with `ASSISTANTS_BASE_URL`, which is similar to its OpenAI counterpart, `OPENAI_REVERSE_PROXY`.
- There is additional, optional configuration, depending on your needs, such as disabling the assistant builder UI, available via the `librechat.yaml` custom config file (see the sketch after this list):
  - Control the visibility and use of the builder interface for assistants. More info
  - Specify the polling interval in milliseconds for checking run updates or changes in assistant run states. More info
  - Set the timeout period in milliseconds for assistant runs. This helps manage system load by limiting total run operation time. More info
  - Specify which assistant IDs are supported or excluded. More info
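A minimal `.env` sketch tying the variables above together. The variable names follow LibreChat's `.env.example`; the key, models list, and proxy URL are placeholders:

```bash
# API key for the Assistants endpoint, or "user_provided" to let each user supply their own
ASSISTANTS_API_KEY=user_provided

# Optional: restrict which models are available (comma-separated)
ASSISTANTS_MODELS=gpt-4-turbo-preview,gpt-3.5-turbo-1106

# Optional: alternate base URL, analogous to OPENAI_REVERSE_PROXY
# ASSISTANTS_BASE_URL=https://your-proxy.example.com/v1
```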
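And a sketch of the corresponding `librechat.yaml` options; the key names follow LibreChat's custom config documentation, and the version and assistant ID are placeholders:

```yaml
version: 1.0.5  # config schema version; adjust to your installation
endpoints:
  assistants:
    # Hide or show the assistant builder interface
    disableBuilder: false
    # Polling interval (ms) for checking run updates
    pollIntervalMs: 3000
    # Timeout (ms) for assistant runs, to limit total run operation time
    timeoutMs: 180000
    # Restrict which assistant IDs are available (use excludedIds for the inverse)
    supportedIds: ["asst_exampleId123"]
```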
Strict function calling
With LibreChat, you can add the `x-strict: true` flag at the operation level in the OpenAPI spec for Actions. This will automatically generate function calls with 'strict' mode enabled. Note that strict mode supports only a subset of JSON Schema; read https://platform.openai.com/docs/guides/structured-outputs/some-type-specific-keywords-are-not-yet-supported for details.
For example:
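The original snippet is not reproduced here, so the following is a hypothetical OpenAPI fragment showing where the operation-level flag goes; the path, operation, and parameter are illustrative:

```yaml
paths:
  /weather:
    get:
      operationId: getCurrentWeather
      summary: Get the current weather for a city
      # Generates the function call for this action with 'strict' mode enabled
      x-strict: true
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
```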
Notes
- At the time of writing, only the following models support the Retrieval capability:
- gpt-3.5-turbo-0125
- gpt-4-0125-preview
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-3.5-turbo-1106
- Vision capability is not yet supported.
- If you have previously set the `ENDPOINTS` value in your `.env` file, you will need to add the value `assistants`, as shown below.
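For instance, assuming you already had the openAI endpoint enabled (the rest of the list is illustrative):

```bash
ENDPOINTS=openAI,assistants
```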