Lightrail provides several configuration options. This page explains how to open the settings page and configure each option.
To navigate to the settings page, perform the following steps:
- Open the Lightrail application.
- Click the gear icon in the upper-right corner of the prompt input.
Here are the configuration settings currently available on Lightrail:
The 'Provider' setting allows you to choose which model provider you want Lightrail to use to generate your responses. Currently, the available options are 'OpenAI' and 'Lightrail'. Choosing 'OpenAI' will use your own OpenAI API key, while choosing 'Lightrail' will route your requests through the OpenAI proxy server that we provide free of charge. The models available through both providers are identical. More providers (with different model options) will be available soon.
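Conceptually, the provider setting only changes where requests are sent and whether your own key is attached. A minimal sketch of that routing logic (the proxy URL and function name here are hypothetical placeholders, not Lightrail's actual endpoints):

```python
# Sketch: how a provider setting might select a request target.
OPENAI_API_URL = "https://api.openai.com/v1/chat/completions"
LIGHTRAIL_PROXY_URL = "https://proxy.example.com/v1/chat/completions"  # hypothetical

def request_target(provider: str, api_key: str = "") -> dict:
    """Return the URL and headers a request would use for each provider."""
    if provider == "openai":
        # Your own key is sent directly to OpenAI.
        if not api_key:
            raise ValueError("An OpenAI API key is required when provider is 'openai'")
        return {"url": OPENAI_API_URL,
                "headers": {"Authorization": f"Bearer {api_key}"}}
    if provider == "lightrail":
        # Requests are routed through the free proxy; no key of your own is needed.
        return {"url": LIGHTRAIL_PROXY_URL, "headers": {}}
    raise ValueError(f"Unknown provider: {provider}")
```

Either way, the same underlying models answer the request; only the route differs.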
The 'Model' setting enables you to select the model you want to use for AI responses. The currently available models are gpt-4 and gpt-3.5-turbo. For most uses, we currently recommend gpt-4, as other models tend to struggle with complex prompts.
The API key setting is only available if 'OpenAI' is selected as the provider. Here, you provide the OpenAI API key that Lightrail will use to generate AI responses.
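Rather than keeping the key in plain sight, a common pattern is to store it in an environment variable and paste it into the settings field from there. A small sketch (the `OPENAI_API_KEY` variable name is a common convention, not something Lightrail requires):

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment, failing loudly if it is unset."""
    key = os.environ.get(env_var, "").strip()
    if not key:
        raise RuntimeError(f"Set {env_var} before configuring Lightrail")
    return key
```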
After you configure the settings, click the 'Save' button to apply your new settings.