
Configure OpenAI via Microsoft Foundry for Cribl Copilot

Use this topic to connect OpenAI models via Microsoft Foundry as a Bring Your Own Model (BYOM) provider for Cribl Copilot. This topic covers:

  • What you need from the Microsoft Foundry/OpenAI side.
  • How to fill in the ID, Description, Deployment URL, and API Key fields in the Cribl UI.

For the generic flow and prerequisites to open AI Settings, start the Custom AI provider modal, and switch back to Cribl-managed large language models (LLMs), see Configure Custom AI Providers.

Prerequisites

In addition to the general prerequisites for configuring your own LLM, you need:

  • A Microsoft Foundry/OpenAI deployment that your organization manages.
  • At least one deployed OpenAI model (for example, gpt-4o, gpt-4.1, or similar) available through that deployment.
  • A Deployment URL for that OpenAI deployment (for example, an HTTPS endpoint exposed by Foundry or your internal gateway).
  • An API key that your organization exposes specifically for OpenAI via Microsoft Foundry access (often via an internal API gateway or service).
  • On-prem only: Network connectivity from the Leader to the endpoint your organization uses for OpenAI via Microsoft Foundry (public endpoint, private link, or corporate proxy).

Step 1: Open the Custom Provider Modal

Navigate to your AI Settings to start the configuration:

  • Select Use Custom AI Providers (or Try it!) to open the configuration modal.

Step 2: Provide ID and Description

These fields identify the provider within your Cribl environment.

  • ID: Enter a short, unique identifier (for example, openai-foundry-prod).
  • Description: Enter a human-readable label (for example, OpenAI via Microsoft Foundry - production (gpt-4o)).

Step 3: Choose the Provider Type

  • Choose OpenAI via Microsoft Foundry as the provider type.

Step 4: Provide the Deployment URL and API Key

These fields tell Cribl where to send requests and how to authenticate.

Deployment URL

Enter the base HTTPS endpoint URL for your deployment. In Microsoft Foundry, the URL format typically follows one of these patterns:

  • Project-specific endpoint: https://<resource-name>.services.ai.azure.com/api/projects/<project-name>
  • Global Inference endpoint: https://<resource-name>.openai.azure.com

Ensure the URL includes the https:// prefix. Do not include trailing slashes or specific paths like /chat/completions, as Cribl appends these automatically.
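The URL rules above can be encoded as a quick pre-check. This is an illustrative helper, not part of Cribl; it only captures the rules stated in this topic (an `https://` scheme, no trailing slash, and no API-specific path such as `/chat/completions`):

```python
from urllib.parse import urlparse

def check_deployment_url(url: str) -> list[str]:
    """Return a list of problems with a candidate Deployment URL.

    Encodes the rules from this topic: https:// is required, trailing
    slashes are not allowed, and API-specific paths such as
    /chat/completions must be omitted (Cribl appends them automatically).
    """
    problems = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        problems.append("URL must start with https://")
    if url.endswith("/"):
        problems.append("Remove the trailing slash")
    if "/chat/completions" in parsed.path:
        problems.append("Remove /chat/completions; Cribl appends it automatically")
    return problems
```

An empty result means the URL passes these checks; note that a project-specific path like `/api/projects/<project-name>` is still allowed.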

API Key

The API Key is the single credential used to authenticate your Copilot requests.

  • API Key: Paste the API key or bearer token provided by your platform team.

Depending on your environment setup, this key will be one of the following:

  • If you are using Azure OpenAI Service directly, this is the key found on the Keys and Endpoint tab in the Azure Portal.
  • If you are using a gateway, this is the token generated by your internal API management layer.

If you are unsure where to get this key, ask the team that manages AI providers and Microsoft Foundry access to issue or confirm the correct key for:

  • Cribl.Cloud (if you are configuring a Cribl.Cloud Workspace), or
  • The on-prem Leader (if you are configuring an on-prem deployment).
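If you want to sanity-check the credential outside Cribl, the two key types above are sent differently: Azure OpenAI Service expects the key in an `api-key` header, while internal gateways commonly expect an `Authorization: Bearer` token. The following is a hypothetical helper for building the right header shape (confirm with your platform team which form your environment uses):

```python
def auth_headers(api_key: str, via_gateway: bool = False) -> dict[str, str]:
    """Build the auth header for a manual sanity check of the key.

    Azure OpenAI Service expects the key in an `api-key` header;
    internal API gateways commonly expect a bearer token instead.
    """
    if via_gateway:
        return {"Authorization": f"Bearer {api_key}"}
    return {"api-key": api_key}
```

For example, `auth_headers("my-key")` yields `{"api-key": "my-key"}` for a direct Azure OpenAI key, and `auth_headers("my-token", via_gateway=True)` yields a bearer header for a gateway-issued token.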

Step 5: Test and Save the Configuration

Select Test Connection in the modal.

  • If the test succeeds, you will see a success indicator.
  • If it fails, verify:
    • The Deployment URL is the inference endpoint (not the management URL) and reachable from your environment.
    • The API Key is correct and active.
    • The key has the correct permissions to access OpenAI models via Microsoft Foundry.
    • On-prem: The Leader can reach the Foundry or gateway endpoint (no firewall, DNS, or proxy issues).
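For on-prem deployments, the connectivity check can be approximated from the Leader host with a short script. This is an illustrative sketch, not a Cribl tool: it distinguishes DNS failures from TCP failures, which point at different root causes. A direct TCP check will not reflect a required corporate proxy, so a "TCP" result can also mean traffic must go through your proxy:

```python
import socket
from urllib.parse import urlparse

def diagnose_reachability(deployment_url: str, timeout: float = 5.0) -> str:
    """Classify basic reachability of the endpoint: DNS failure, TCP failure, or OK."""
    parsed = urlparse(deployment_url)
    host = parsed.hostname
    port = parsed.port or 443  # https default
    try:
        socket.getaddrinfo(host, port)  # DNS resolution only
    except socket.gaierror:
        return f"DNS: cannot resolve {host}"
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK: TCP connection succeeded"
    except OSError as exc:
        return f"TCP: cannot connect to {host}:{port} ({exc})"
```

A "DNS" result suggests a resolver or split-horizon DNS issue; a "TCP" result suggests a firewall rule, a private-link misconfiguration, or a required proxy.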

When all fields are valid and the test passes, select Save.

After you save:

  • A Custom AI provider card appears at the top of AI Settings, showing your ID/Description and OpenAI via Microsoft Foundry as the provider type.
  • Supported Copilot capabilities in that Workspace begin using OpenAI via Microsoft Foundry as their AI backend.

Step 6: Verify Copilot Behavior

To confirm that OpenAI via Microsoft Foundry is correctly configured:

  1. In AI Settings, confirm that the Custom AI provider card lists OpenAI via Microsoft Foundry.
  2. Use a supported Copilot capability in that Workspace (for example, run a Copilot prompt) so that a request is routed through OpenAI via Microsoft Foundry.
  3. Verify that:
    • Requests succeed without provider-related errors.
    • Latency and behavior align with your expectations for the OpenAI models and deployment you configured.

If you see failures or unexpected behavior:

  • Re-run Test Connection in the modal and review any error messages.
  • Confirm with your platform/AI team that:
    • The endpoint URL and key are valid and not rate-limited.
    • Your OpenAI deployments are enabled and accessible behind the Foundry or gateway endpoint.
    • For on-prem deployments: Network rules allow the Leader to reach the Foundry or gateway endpoint.

Change or Stop Using OpenAI via Microsoft Foundry

  • To update the OpenAI provider (for example, after a key rotation or endpoint change), see Edit an Existing Custom AI Provider.
  • To stop using OpenAI via Microsoft Foundry, see Stop Using a Custom AI Provider.
  • To use this custom AI provider again later, use the instructions in this topic to reopen the modal, choose OpenAI via Microsoft Foundry, and enter your ID, Description, Deployment URL, and API Key.