Configuration

Lightrail offers several configuration options. This page explains how to open the settings page and adjust each setting.

Accessing Settings

To navigate to the settings page, perform the following steps:

  1. Open the Lightrail application.
  2. Click the gear icon in the upper-right corner of the prompt input.

Configuration Settings

Here are the configuration settings currently available in Lightrail:

Provider

The 'Provider' setting lets you choose which model provider Lightrail uses to generate responses. The available options are currently 'OpenAI' and 'Lightrail'. Choosing 'OpenAI' uses your own OpenAI API key, while choosing 'Lightrail' routes your requests through the OpenAI proxy server that we provide free of charge. The models available through both providers are identical. More providers (with different model options) will be available soon.
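In practice, the provider choice determines which endpoint and credentials are used when your request reaches an OpenAI-compatible API. The following is a minimal, purely illustrative sketch in Python; the proxy URL is a placeholder and this is not Lightrail's actual implementation:

    from openai import OpenAI

    # Provider 'OpenAI': requests go directly to api.openai.com,
    # authenticated with your own key and billed to your account.
    direct_client = OpenAI(api_key="sk-your-own-key")

    # Provider 'Lightrail': requests are routed through Lightrail's free
    # proxy instead. The base_url below is a placeholder, not a real endpoint.
    proxied_client = OpenAI(
        api_key="not-needed",
        base_url="https://lightrail-proxy.example.com/v1",
    )

Either way, the same set of OpenAI models is available.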

Model

The 'Model' setting enables you to select the model you want to use for AI responses. The currently available models are gpt-3.5-turbo-16k, gpt-4, and gpt-3.5-turbo. For most uses, we currently recommend gpt-4, as other models tend to struggle with complex prompts.

OpenAI API Key

This setting is only available when 'OpenAI' is selected as the provider. Here, you provide the API key that Lightrail will use to generate AI responses.
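Taken together, the 'Model' and 'OpenAI API Key' settings describe an ordinary OpenAI chat-completion request. Below is a rough Python equivalent with placeholder values; it is only a sketch of what these settings map to, not Lightrail's internal code:

    from openai import OpenAI

    # The key entered in the 'OpenAI API Key' setting.
    client = OpenAI(api_key="sk-your-api-key")

    # The model chosen in the 'Model' setting.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Explain what this function does."}],
    )
    print(response.choices[0].message.content)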

Saving Configuration Settings

After you configure the settings, click the 'Save' button to apply them.