Configure the Generative AI provider

To start using Copilot, you must have already configured the AI provider integration in the TotalAgility Designer. If a Generative AI integration is not yet configured, configure the AI provider in Tungsten Copilot as follows.

  1. On the TotalAgility Apps Home page, click the Tungsten Copilot card.

    You are prompted to configure a Generative AI provider.

  2. Click Configure.

    The New AI provider configuration dialog box is displayed.

  3. In the Type list, select one of the AI providers and configure it for use as a standard model, a vision model, or both.

    Configure the standard model settings when you use an AI provider without an image as input to generate an item.

    Configure the vision model settings when you use an AI provider with an image as input to generate an item. If the vision settings are not configured, or the specified model does not support images, an error appears.

    • ChatGPT OpenAI (default)

      Configure the following settings:

      ID

      Provide a unique ID for the AI provider.

      Display name

      Enter a name for the AI provider to help identify it in the list of providers.

      Use legacy function syntax

      When selected, TotalAgility uses the legacy "function" syntax to send requests to the AI provider. (Default: Selected and read-only)

      Use provider as

      Copilot for development

      This read-only option is selected by default because at least one AI provider must act as the Copilot.

      You can configure multiple AI providers, but only one (ChatGPT OpenAI or Azure OpenAI) can be used as the Copilot for development.

      Copilot for extraction

      If selected, the AI provider is used as an extraction provider. (Default: Clear)

      When multiple AI providers are configured, only one (ChatGPT OpenAI or Azure OpenAI) can be used as the Copilot for data extraction.

      Standard Model

      API URL

      Enter the API URL for the selected provider.

      API key

      Enter the API key.

      Model

      Enter the AI provider model that suits your requirement, for example, gpt-3.5-turbo.

      Temperature

      Set the temperature for the AI provider. (Default: 0.5, minimum value: 0, and maximum value: 2)

      Temperature is a parameter that controls the level of creativity in the AI-generated text. Lower values produce more focused, consistent text, while higher values make the model take more risks and produce more varied responses.

      Timeout in seconds

      Configure the timeout period in seconds (default: 300, minimum: 120, and maximum: 3600) for each provider. When you send a request to an AI model, such as a text generation or image processing task, the timeout ensures that the request does not hang indefinitely. This is helpful because some providers are slower than others, and adjusting the timeout period accordingly can optimize performance.

      For example, if you set a timeout period of 120 seconds for a text generation request, the system waits up to 120 seconds for the AI to generate the response. If it takes longer than 120 seconds, the request is terminated and an error is returned.

      Retry count

      Set the retry count (default: 5, minimum: 0, and maximum: 100) for each provider. The retry count is the number of times the system attempts to retry a failed operation before giving up and returning a failure response. For example, with a retry count of 3, the system resends a failed request up to 3 times before returning an error if the request continues to fail.

      When you use the AI models, the timeout and retry count settings are applied for Copilot, Copilot for extraction, dashboard insights, Copilot insights, and vision usage. For an illustration of how these request settings typically fit together, see the sketch after the provider list below.

      Vision Model

      API URL

      Enter the API URL for the vision model.

      API key

      Enter the API key.

      Model

      Optional. Enter the AI provider model to use when an image is provided as input, for example, gpt-4-vision-preview. You can change the model to suit your requirement.

      Token limit

      By default, the token limit for the provider is 3000.

      Tokens are the word fragments that the AI provider generates when it processes the input. The token limit restricts the number of tokens sent to the model per request. (See the vision request sketch at the end of this topic.)

      Timeout in seconds

      Configure the timeout period in seconds (default: 300, minimum: 120, and maximum: 3600) for each provider.

      Retry count

      Set the retry count (default: 5, minimum: 0, and maximum: 100) for each provider.

    • Azure OpenAI

      The Azure OpenAI provider type has the same configuration options as ChatGPT OpenAI.
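
    The following sketch is illustrative only and is not part of TotalAgility: it shows how the Standard Model settings (API URL, API key, model, temperature, timeout, and retry count) typically map onto a request to an OpenAI-compatible chat completions endpoint. The endpoint path, payload fields, and back-off behavior are assumptions; once the provider is saved, TotalAgility makes these calls for you.

      import time
      import requests

      # Illustrative values only; in TotalAgility they come from the provider configuration.
      API_URL = "https://api.openai.com/v1/chat/completions"  # API URL setting
      API_KEY = "sk-..."                                       # API key setting
      MODEL = "gpt-3.5-turbo"                                  # Model setting
      TEMPERATURE = 0.5                                        # Temperature setting (0-2)
      TIMEOUT_SECONDS = 300                                    # Timeout in seconds setting
      RETRY_COUNT = 5                                          # Retry count setting

      def generate(prompt: str) -> str:
          payload = {
              "model": MODEL,
              "temperature": TEMPERATURE,
              "messages": [{"role": "user", "content": prompt}],
          }
          headers = {"Authorization": f"Bearer {API_KEY}"}
          last_error = None
          # One initial attempt plus up to RETRY_COUNT retries.
          for attempt in range(RETRY_COUNT + 1):
              try:
                  response = requests.post(API_URL, json=payload, headers=headers,
                                           timeout=TIMEOUT_SECONDS)
                  response.raise_for_status()
                  return response.json()["choices"][0]["message"]["content"]
              except requests.RequestException as exc:
                  last_error = exc
                  time.sleep(2)  # simple fixed back-off between attempts
          raise RuntimeError(f"Request failed after {RETRY_COUNT} retries") from last_error

    With a timeout of 300 seconds and a retry count of 5, for example, each attempt is abandoned if the provider does not respond within 300 seconds, and a failing call is retried up to five more times before an error is returned.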

  4. Click Save, and then click OK.

    The AI provider is configured, and you are redirected to Copilot.

    The AI provider configured here appears on the Integration menu > Generative AI providers page in the TotalAgility Designer. When you open the AI provider integration, the API key for the standard and vision models is encrypted, and only the last five digits of the key are displayed.
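
    For the vision model, the request additionally carries the image. The sketch below is again an illustrative assumption rather than the product's own implementation; it shows how an image and the Token limit setting might appear in a request to an OpenAI-compatible vision model, with the token limit mapped to the max_tokens field.

      import base64
      import requests

      # Illustrative values only; in TotalAgility they come from the Vision Model configuration.
      API_URL = "https://api.openai.com/v1/chat/completions"  # Vision Model API URL
      API_KEY = "sk-..."                                       # Vision Model API key
      MODEL = "gpt-4-vision-preview"                           # Vision Model model
      TOKEN_LIMIT = 3000                                       # Token limit (default 3000)
      TIMEOUT_SECONDS = 300                                    # Timeout in seconds

      def generate_from_image(prompt: str, image_path: str) -> str:
          # Encode the image so it can be embedded in the request as a data URL.
          with open(image_path, "rb") as f:
              image_b64 = base64.b64encode(f.read()).decode("ascii")
          payload = {
              "model": MODEL,
              "max_tokens": TOKEN_LIMIT,  # assumed mapping of the Token limit setting
              "messages": [{
                  "role": "user",
                  "content": [
                      {"type": "text", "text": prompt},
                      {"type": "image_url",
                       "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                  ],
              }],
          }
          headers = {"Authorization": f"Bearer {API_KEY}"}
          response = requests.post(API_URL, json=payload, headers=headers,
                                   timeout=TIMEOUT_SECONDS)
          response.raise_for_status()
          return response.json()["choices"][0]["message"]["content"]

    If the configured model does not accept image input, a request of this shape fails, which is why an error appears when the vision settings are missing or the specified model does not support images.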