
Configure Worker Groups with the Cribl SDK

Preview Feature

The Cribl SDKs are Preview features that are still being developed. We do not recommend using them in a production environment, because the features might not be fully tested or optimized for performance, and related documentation could be incomplete.

Please continue to submit feedback through normal Cribl support channels, but assistance might be limited while the features remain in Preview.

These code examples demonstrate how to create, scale, replicate, and deploy Worker Groups in Cribl Stream using the Cribl Python SDK for the control plane.

About the Code Examples

The code examples use Bearer token authentication. Read the SDK authentication documentation to learn how to configure authentication. The permissions granted to your Bearer token must include creating and managing Worker Groups.

Replace the variables in the examples with the corresponding information for your Cribl deployment.

For on-prem deployments, you must configure Transport Layer Security (TLS) before you can use https in the URLs.

The configurations in the examples do not include all available body parameters. For a complete list of body parameters for each endpoint, refer to the documentation in the API Reference.

The code examples for creating and scaling a Worker Group in Cribl.Cloud include the estimatedIngestRate property, which allows you to configure Worker Groups for optimal performance. The following table maps each supported estimatedIngestRate value to its corresponding throughput and number of Worker Processes:

| estimatedIngestRate | Throughput | Worker Processes |
| --- | --- | --- |
| 1024 | 12 MB/s | 6 |
| 2048 | 24 MB/s | 9 |
| 3072 | 36 MB/s | 14 |
| 4096 | 48 MB/s | 21 |
| 5120 | 60 MB/s | 30 |
| 7168 | 84 MB/s | 45 |
| 10240 | 120 MB/s | 62 |
| 13312 | 156 MB/s | 93 |
| 15360 | 180 MB/s | 186 |
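As a quick reference, the table can be encoded as a lookup in plain Python. The helper below is not part of the SDK; it simply converts a supported estimatedIngestRate value into its expected throughput and Worker Process count.

```python
# Encodes the estimatedIngestRate table above:
# estimatedIngestRate -> (throughput in MB/s, Worker Processes)
INGEST_RATE_TABLE = {
    1024: (12, 6),
    2048: (24, 9),
    3072: (36, 14),
    4096: (48, 21),
    5120: (60, 30),
    7168: (84, 45),
    10240: (120, 62),
    13312: (156, 93),
    15360: (180, 186),
}

def capacity_for(rate: int) -> tuple:
    """Return (throughput_mb_per_s, worker_processes) for a supported rate."""
    if rate not in INGEST_RATE_TABLE:
        raise ValueError(f"unsupported estimatedIngestRate: {rate}")
    return INGEST_RATE_TABLE[rate]
```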

1. Create a Worker Group

This example creates a new Worker Group in Cribl Stream.

In the Cribl.Cloud example, the estimatedIngestRate is set to 2048, which is equivalent to a maximum of 24 MB/s with nine Worker Processes.

Python SDK (Cribl.Cloud) | Python SDK (On-Prem Deployment)
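A minimal sketch of the Cribl.Cloud create step. The client class, method names, `security` argument, and body field names here are assumptions drawn from the Preview SDK docs; verify them against the API Reference, and replace the URL and token placeholders with values from your deployment.

```python
# Sketch: create a Worker Group in Cribl.Cloud with the Cribl Python SDK.
BASE_URL = "https://your-workspace-your-org.cribl.cloud/api/v1"  # replace with your URL
TOKEN = "YOUR_BEARER_TOKEN"  # replace; see the SDK authentication documentation

def build_create_body(group_id: str, ingest_rate: int = 2048) -> dict:
    """Request body for a new Cribl.Cloud Worker Group.

    estimatedIngestRate=2048 maps to a maximum of 24 MB/s with 9 Worker
    Processes. This is not the full parameter list; see the API Reference.
    """
    return {
        "id": group_id,
        "onPrem": False,  # field names are assumptions; check the API Reference
        "workerRemoteAccess": True,
        "estimatedIngestRate": ingest_rate,
    }

def create_group() -> None:
    # Client/method names are assumptions; requires `pip install cribl-control-plane`.
    from cribl_control_plane import CriblControlPlane

    client = CriblControlPlane(server_url=BASE_URL, security={"bearer_auth": TOKEN})
    client.groups.create(**build_create_body("my-worker-group"))
```

For an on-prem deployment, the body would omit estimatedIngestRate and set onPrem to True.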

2. Scale the Worker Group

The Cribl.Cloud example scales the Worker Group to an estimatedIngestRate of 4096, which is equivalent to a maximum of 48 MB/s with 21 Worker Processes. The example also sets provisioned to True to activate Cribl.Cloud resources.

The on-prem example assumes the Syslog Source load balancer (LB) is enabled on a Cribl Stream system with 6 hyperthreaded physical cores (12 vCPUs). Because the Syslog Source LB reserves an additional core, the example sets the Worker Process count to -3 (down from the default): two cores for system and API overhead plus one for the LB process, so the system spawns nine Worker Processes. For more information, see Optimize a Distributed Deployment or Hybrid Group and Choose a Process Count.

The on-prem example also updates the Worker Group system settings, commits and deploys the changes, and restarts the Worker Group to apply them.

The request bodies for the groups.update (Cribl.Cloud) and system.settings.cribl.update (on-prem) methods require a complete representation of the Worker Group or settings, respectively, that you want to update. These methods do not support partial updates. Cribl removes any omitted fields when updating the Worker Group or settings.

Python SDK (Cribl.Cloud) | Python SDK (On-Prem Deployment)
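Because the update methods require a complete representation, a safe pattern is to fetch the current configuration, override only the scaling fields, and send the whole object back. The sketch below follows that pattern for the Cribl.Cloud case; the client and method names are assumptions, and the placeholders need your deployment's values.

```python
# Sketch: scale a Cribl.Cloud Worker Group to estimatedIngestRate 4096.
BASE_URL = "https://your-workspace-your-org.cribl.cloud/api/v1"  # replace with your URL
TOKEN = "YOUR_BEARER_TOKEN"  # replace

def build_scaled_body(current: dict) -> dict:
    """Full replacement body for the group update.

    The update method does not support partial updates: any omitted field is
    removed, so start from the complete current configuration and override
    only the scaling fields.
    """
    return {
        **current,
        "estimatedIngestRate": 4096,  # maximum of 48 MB/s with 21 Worker Processes
        "provisioned": True,          # activate Cribl.Cloud resources
    }

def scale_group(group_id: str) -> None:
    # Client/method names are assumptions; requires `pip install cribl-control-plane`.
    from cribl_control_plane import CriblControlPlane

    client = CriblControlPlane(server_url=BASE_URL, security={"bearer_auth": TOKEN})
    current = client.groups.get(id=group_id)  # fetch the complete representation
    client.groups.update(id=group_id, **build_scaled_body(dict(current)))
```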

3. Replicate a Worker Group

This example creates a replica Worker Group in Cribl Stream by cloning an existing Worker Group configuration. The request body uses the source_group_id parameter to specify the Worker Group to clone.

The replica Worker Group inherits the configuration from the source Worker Group, including settings and resources like Sources, Destinations, and Pipelines.

To run this example, you must have at least one existing Worker Group named my-worker-group in Cribl Stream to use as the source Worker Group.

Python SDK (Cribl.Cloud) | Python SDK (On-Prem Deployment)
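A sketch of the replication step. The request body uses source_group_id, as described above; the client and method names are assumptions, and the placeholder URL and token must be replaced with your own.

```python
# Sketch: clone an existing Worker Group into a replica.
def build_replica_body(new_id: str, source_group_id: str) -> dict:
    """Body for cloning an existing Worker Group.

    source_group_id names the group whose configuration the replica inherits,
    including settings and resources like Sources, Destinations, and Pipelines.
    """
    return {"id": new_id, "source_group_id": source_group_id}

def replicate_group() -> None:
    # Client/method names are assumptions; requires `pip install cribl-control-plane`.
    from cribl_control_plane import CriblControlPlane

    client = CriblControlPlane(
        server_url="https://your-workspace-your-org.cribl.cloud/api/v1",  # replace
        security={"bearer_auth": "YOUR_BEARER_TOKEN"},  # replace
    )
    # Requires an existing group named my-worker-group to use as the source.
    client.groups.create(**build_replica_body("my-worker-group-replica", "my-worker-group"))
```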

4. Confirm the Worker Group Configuration

Use this example to retrieve a list of all Worker Groups in Cribl Stream so that you can review and confirm their configurations.

Python SDK (Cribl.Cloud) | Python SDK (On-Prem Deployment)
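A sketch of the confirmation step: list all Worker Groups and print a short summary of each. The list call and response shape are assumptions; only the summarizing helper is plain Python.

```python
# Sketch: list Worker Groups and summarize their configured ingest rates.
def summarize_groups(items: list) -> dict:
    """Map each Worker Group id to its estimatedIngestRate (0 when unset)."""
    return {g["id"]: g.get("estimatedIngestRate", 0) for g in items}

def list_groups() -> None:
    # Client/method names are assumptions; requires `pip install cribl-control-plane`.
    from cribl_control_plane import CriblControlPlane

    client = CriblControlPlane(
        server_url="https://your-workspace-your-org.cribl.cloud/api/v1",  # replace
        security={"bearer_auth": "YOUR_BEARER_TOKEN"},  # replace
    )
    for group_id, rate in summarize_groups(client.groups.list()).items():
        print(group_id, rate)
```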

5. Commit and Deploy the Worker Group

This example demonstrates how to commit and deploy the Worker Group configuration, then commit to the Leader to keep it in sync with the Worker Group. You can commit and deploy immediately after a single create or update request or after multiple requests.

Committing and deploying the Worker Group configuration requires three requests, which the Python SDK example chains together:

  1. Commit pending changes to the Worker Group. This request commits only the configuration changes for Worker Groups by specifying the file local/cribl/groups.yml.
  2. Deploy the committed changes to the Worker Group. This request includes the version body parameter, which uses the value of commit from the response body for the commit request.
  3. Commit the changes to the Leader to keep the Leader in sync with the Worker Group.

Python SDK (Cribl.Cloud) | Python SDK (On-Prem Deployment)
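The three chained requests above can be sketched as follows. The client and method names, and the shape of the commit response, are assumptions to verify against the API Reference; the helpers show how the commit hash is threaded into the deploy request as the version parameter.

```python
# Sketch: commit Worker Group changes, deploy them, then commit to the Leader.
GROUP_ID = "my-worker-group"

def build_commit_body(message: str) -> dict:
    """Commit only the Worker Group configuration changes by scoping the
    commit to the file local/cribl/groups.yml."""
    return {
        "message": message,
        "group": GROUP_ID,
        "files": ["local/cribl/groups.yml"],
    }

def deploy_version_from(commit_response: dict) -> str:
    """Extract the commit hash to pass as the deploy `version` body
    parameter. The response shape here is an assumption."""
    return commit_response["items"][0]["commit"]

def commit_and_deploy() -> None:
    # Client/method names are assumptions; requires `pip install cribl-control-plane`.
    from cribl_control_plane import CriblControlPlane

    client = CriblControlPlane(
        server_url="https://your-workspace-your-org.cribl.cloud/api/v1",  # replace
        security={"bearer_auth": "YOUR_BEARER_TOKEN"},  # replace
    )
    # 1. Commit pending changes scoped to the Worker Group config file.
    commit = client.versions.commit(**build_commit_body("scale my-worker-group"))
    # 2. Deploy the committed version to the Worker Group.
    client.groups.deploy(id=GROUP_ID, version=deploy_version_from(commit))
    # 3. Commit to the Leader to keep it in sync with the Worker Group.
    client.versions.commit(message="sync leader after deploy")
```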