Configuration

Admin Console

The Admin Console is used to install and configure your DeepSource Enterprise installation.

The Admin Console contains the following sections:

  • Application

    • Dashboard
    • Version History
    • Config
    • Troubleshoot
    • License
    • View files
    • Registry settings
  • GitOps

  • Cluster Management

  • Snapshots

Application

The Application tab is used to install, upgrade, and configure the DeepSource Enterprise application.

Dashboard

The Dashboard section shows you a quick preview of the state of your application.

Version History

The Version History tab shows you all versions of the DeepSource application that are available to you. Clicking on the Deploy button will deploy the selected version of the app to your cluster.

Some releases are marked as Required. Before upgrading to the latest version, you must deploy every Required version between your current version and the latest one, in order.

You can press the Check for update button to check for new updates that may be available.

Config

This tab is used to configure your DeepSource app.

Application hostname: Provide the primary hostname/IP through which you will access the DeepSource application. Please note that your version control system should be able to send webhooks to the application using this hostname/IP.

Custom Allowed Hosts: Allows you to provide additional hostnames or IP addresses that must be whitelisted by the application. This can be useful when internal services within your network use a different hostname to access the DeepSource application.

Version Control Provider: Select and configure a Version Control Provider to integrate with your DeepSource application. DeepSource Enterprise supports multiple Version Control Providers.

Selecting any Version Control Provider will show additional configuration options to configure. A detailed guide to integrating with your Version Control Provider is available in the Setup section of the docs.

Enable SAML SSO: Configure SAML SSO for your application. A guide to integrating your SSO provider is available in the SSO section of the docs.

Deploy embedded database: When enabled, the application automatically provisions an embedded database within the cluster. This is recommended for pilot installations but not for production deployments. A detailed guide on setting up an external database is available here.

Access key for object storage: Random value to be used as an access key for Minio object storage. If left empty, a default value is used.

Application database username: PostgreSQL Database username.

Application database password: PostgreSQL Database password.

Upload TLS certificate and private key?: This option is used to configure the TLS settings. A detailed guide on setting up TLS is available here.

Node selectors: DeepSource runs two classes of workloads: application workloads and analysis workloads. Separating the two is highly recommended, as it prevents resource contention surges in analysis from affecting your application. You can attach labels to your nodes using the kubectl label command.

  • Node selector label for application workloads: Set the labels to identify nodes that will run the application workloads. Eg: deepsource: application
  • Node selector label for analysis workloads: Set the labels to identify nodes that will run the analysis workloads. Eg: deepsource: analysis
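The labeling step above can be sketched with the standard kubectl label command; the node names below are placeholders for your actual nodes:

```shell
# node-app-1 and node-analysis-1 are hypothetical node names;
# list your real nodes with: kubectl get nodes

# Label the nodes that should run application workloads.
kubectl label nodes node-app-1 deepsource=application

# Label the nodes that should run analysis workloads.
kubectl label nodes node-analysis-1 deepsource=analysis
```

After labeling, enter the matching label (e.g. deepsource: application) in the corresponding node selector field.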

DeepSource Enterprise Admins: Enter a comma-separated list of users who will have access to the Enterprise Control Panel. You can add these e-mail addresses pre-emptively; once a user with a matching e-mail address onboards, they will have access to the Enterprise Control Panel.

Cluster Management

This section is visible only if you are using the Standalone installation method. It provides information about the nodes in your cluster and allows you to:

  • Drain node: This lets you safely drain a node. This can be used when you need to restart or delete a node.
  • Add a Node: This will generate a command you can execute to add a newly provisioned node to your cluster.

Snapshots (Cluster metadata backups)

You can back up your cluster metadata through the Admin Console. Advanced documentation for this is available on the Replicated documentation page.

Bring your own Database

DeepSource Enterprise uses a PostgreSQL database to store persistent data. By default, an embedded database is provisioned within the Kubernetes cluster. However, we highly recommend bringing your own PostgreSQL database for production deployments. This document walks through the steps to set up an external database.

Recommended version: PostgreSQL 12

Setting up your database

On a fresh PostgreSQL instance, you must create a new database and set up a user with sufficient privileges before configuring it for use with the DeepSource app.

Following are sample steps to do this on a generic PostgreSQL database. Your steps may vary if you use a managed database like AWS RDS or Google CloudSQL.

Log into your database with the postgres user:

psql postgres

Create a new database:

CREATE DATABASE <db-name>;

Create a new PostgreSQL user for the application to use:

CREATE USER <user-name> WITH ENCRYPTED PASSWORD '<password>';

Grant privileges to the database user:

GRANT ALL PRIVILEGES ON DATABASE <db-name> TO <user-name>;

Important: Ensure that the max_connections parameter for your PostgreSQL instance is set to at least 500. You should also set a shared_buffers value suitable for this connection count.
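As a sketch, you can verify connectivity and the max_connections setting from any machine that can reach the database; the angle-bracket placeholders match the ones used above:

```shell
# Confirm the application user can connect to the new database.
psql -h <hostname> -U <user-name> -d <db-name> -c 'SELECT 1;'

# Confirm max_connections is at least 500.
psql -h <hostname> -U <user-name> -d <db-name> -c 'SHOW max_connections;'
```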

Configuring your app to use the external database

  1. Log into the Admin Console.

  2. Navigate to the Config section in the admin console.

  3. Select No for the field Deploy embedded database?

  4. Enter the following values for the corresponding fields:

    1. Application database name: Enter the name of the database you created for DeepSource.
    2. Application database hostname: Enter the hostname/IP of your PostgreSQL server.
    3. Application database port: Enter the port on which your PostgreSQL instance is configured. The default value for this field is 5432.
    4. Application database username: Enter the database username that you created for the database.
    5. Application database password: Enter the database password for the database user you used in the previous step.
  5. Click on Save Config to persist this configuration.

  6. Navigate to the Version History page.

  7. Click Deploy.

  8. The application will be redeployed with the new database.

Moving from an embedded database to an external database

If you have been using DeepSource Enterprise for a while and intend to switch to an external database, switching directly will result in data loss. Use the following steps to take a backup of the embedded database and restore it onto the new database.

  • Create a backup using the pg_dump utility.

    pg_dump -h <hostname> -U <username> -d <db_name> -v > dump.sql
  • To get the hostname, password, database name and database user, run the following command:

    kubectl exec -it deploy/asgard-main -- cat /secrets/.env | grep ASGARD_DB_
  • The above command will output the info in the following format:

    ASGARD_DB_NAME='asgard'
    ASGARD_DB_USER=asgard-enterprise
    ASGARD_DB_PASSWORD=********
    ASGARD_DB_HOST='postgresql-ha-pgpool.default'
    ASGARD_DB_PORT='5432'
  • To get the hostname IP, run kubectl get svc | grep pgpool and use the IP of the service as the hostname.
  • Since the dump created above is plain SQL, apply it to your new database with psql -h <hostname> -U <username> -d <db_name> -f dump.sql. (pg_restore works only with non-plain-text dump formats, such as those produced by pg_dump -Fc.)

Database Backups

DeepSource Enterprise Server does not create database backups. Make sure to enable backups through your database hosting provider.

Setup TLS

We highly recommend using TLS with your DeepSource Enterprise installation. There are multiple ways to enable TLS on DeepSource Enterprise Server. You can configure these options on the Admin Console.

Upload your own TLS certificate and private key

Setting this option to Yes lets you upload your TLS certificate and private key. Upload both, click Save Config, and deploy the latest version to begin serving your site over TLS.
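Before uploading, it can help to confirm that the certificate and private key actually belong together. A minimal sketch using standard openssl commands; the self-signed pair is generated only for demonstration, so substitute your real cert.pem and key.pem:

```shell
# Demo only: generate a throwaway self-signed certificate and key pair.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=deepsource.example.com" \
  -keyout key.pem -out cert.pem -days 1 2>/dev/null

# The pair matches when the two modulus digests are identical.
cert_mod=$(openssl x509 -noout -modulus -in cert.pem | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in key.pem | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```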

Let DeepSource provision a TLS certificate

If you select No for the above option, DeepSource will use Let's Encrypt to automatically provision a TLS certificate for your site. Note that Let's Encrypt performs a domain validation challenge, so your site must be reachable from Let's Encrypt's servers for the certificate to be issued.

No TLS

If this option is selected, no certificates will be installed by DeepSource. This option is useful if you choose to terminate TLS upstream.

Bring Your Own Key (BYOK)

BYOK requires Enterprise Server v5.0.0 or later.

Bring Your Own Key (BYOK) lets you run AI-powered features using your own model provider credentials. With BYOK, inference calls route directly from your Enterprise Server to your chosen provider. Your code never leaves your infrastructure to reach third-party AI services through DeepSource.

BYOK is useful when you:

  • Have existing cloud commitments with negotiated rates for AI services
  • Need to meet data residency or compliance requirements
  • Want full control over which models power your AI features

Supported providers

  • Google Vertex AI: GCP service account or Workload Identity
  • AWS Bedrock: AWS bearer token / Inference profile
  • Azure OpenAI: Azure API key
  • OpenAI: API key (also supports OpenAI-compatible endpoints via custom base URL)
  • Google AI (Gemini): API key
  • Anthropic: API key

Model tiers

DeepSource uses two model tiers for different tasks:

  • Flagship: configured via the Model/deployment name for flagship tier field; powers primary AI features.
  • Versatile: configured via the Model/deployment name for versatile tier field; powers auxiliary AI tasks.

Supported models

Ensure the following models are enabled and accessible in your provider account. DeepSource automatically selects the appropriate model for each task.

  • Google Vertex AI: flagship gemini-2.5-pro, gemini-3.1-pro-preview; versatile gemini-2.5-flash, gemini-3.1-flash-lite-preview
  • AWS Bedrock: flagship us.anthropic.claude-sonnet-4-6; versatile us.anthropic.claude-haiku-4-5-20251001-v1:0
  • Azure OpenAI: flagship gpt-5.3-codex; versatile gpt-5.1-codex-mini
  • OpenAI: flagship gpt-5.3-codex; versatile gpt-5.1-codex-mini
  • Google AI (Gemini): flagship gemini-3.1-pro-preview; versatile gemini-3.1-flash-lite-preview
  • Anthropic: flagship claude-sonnet-4-6; versatile claude-haiku-4-5-20251001

For Google Vertex AI, the exact models used may vary by region. Setting GCP Model Location to global allows you to use newer preview models.

Configuring BYOK

  1. Log into the Admin Console.
  2. Navigate to the Config section.
  3. Under AI Model Provider, select your provider.
  4. Fill in the provider-specific fields described below.
  5. Set the Model/deployment name for flagship tier and Model/deployment name for versatile tier.
  6. Click Save Config and deploy the latest version from the Version History page.

Provider configuration

Select Google Vertex AI as the AI Model Provider.

  • GCP Project ID (required): Your GCP project ID.
  • GCP Model Location (required): GCP region (e.g., us-central1). See supported regions.
  • GCP Authentication Type (required): Choose between Workload Identity and Service Account Key. If using Service Account Key, upload your service account JSON key file.

If you select Workload Identity, the application uses the default credentials available to the pod; no additional key file is needed.

Select AWS Bedrock as the AI Model Provider.

  • AWS Region (required): AWS region where your Bedrock models are hosted (e.g., us-east-1). See supported regions.
  • AWS Bearer Token (required): Bearer token to authenticate with Bedrock.
  • Use Bedrock Inference Profile (optional): Enable this to use a Bedrock inference profile.
  • AWS Access Key ID (required only when Use Bedrock Inference Profile is enabled): Access key ID for authenticating with Bedrock via an inference profile.
  • AWS Secret Access Key (required only when Use Bedrock Inference Profile is enabled): Secret access key for authenticating with Bedrock via an inference profile.

Select Azure OpenAI as the AI Model Provider.

  • Azure OpenAI Endpoint (required): Your Azure OpenAI endpoint URL.
  • Azure OpenAI API Key (required): Azure OpenAI API key.
  • Azure OpenAI API Version (required): API version (e.g., 2024-12-01-preview).

Use the Model/deployment name for flagship tier and Model/deployment name for versatile tier fields to specify your Azure deployment names.

Select OpenAI as the AI Model Provider.

  • API Key (required): OpenAI API key.
  • Base URL (optional): Custom base URL.

The Base URL field is useful if you use an OpenAI-compatible proxy or endpoint such as OpenRouter or LiteLLM. Leave it empty to use the default OpenAI endpoint.
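As a sketch, you can sanity-check an OpenAI-compatible endpoint before saving the config; BASE_URL and OPENAI_API_KEY below are placeholders for your own values:

```shell
# List available models; a JSON response indicates the endpoint and key work.
# Falls back to the official OpenAI endpoint when BASE_URL is unset.
curl -sf "${BASE_URL:-https://api.openai.com/v1}/models" \
  -H "Authorization: Bearer ${OPENAI_API_KEY}"
```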

Select Google AI as the AI Model Provider.

  • API Key (required): Google AI API key.

Select Anthropic as the AI Model Provider.

  • API Key (required): Anthropic API key.

Switching providers

Changing your model provider is a configuration-only operation. Update the AI Model Provider selection and the corresponding credentials in the Admin Console, save, and deploy. The next analysis run will use the new provider with no migration or downtime.

Use Proxy for External Connectivity

DeepSource Enterprise (v4.4.0+) supports routing outbound traffic through an HTTP/HTTPS proxy server.

Configuring the Proxy

  1. Log into the Admin Console.
  2. Navigate to the Config section.
  3. Set Use a proxy server to access the internet? to Yes.
  4. Fill in the following fields:

HTTP Proxy: The URL of your HTTP proxy server, e.g. http://proxy.example.com:3128

HTTPS Proxy: The URL of your HTTPS proxy server, e.g. http://proxy.example.com:3128

No Proxy: A comma-separated list of hostnames or IP addresses that should bypass the proxy. Add your VCS host, internal registries, or any other internal addresses alongside the pre-populated defaults:

localhost,127.0.0.1,cluster.local,kubernetes.default,deepsource-application.svc,.deepsource-application,.deepsource,.deepsource.svc,.default,.default.svc

In addition to the hostname kubernetes.default, also add its cluster IP to the No Proxy list. To find it, run:

kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'

Then append the output to the No Proxy field, e.g. ...,10.96.0.1.
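The resulting value can be assembled as follows; the VCS host and cluster IP below are placeholders for your own values:

```shell
# Pre-populated defaults from the Admin Console.
DEFAULTS="localhost,127.0.0.1,cluster.local,kubernetes.default,deepsource-application.svc,.deepsource-application,.deepsource,.deepsource.svc,.default,.default.svc"

# Placeholders: your VCS host plus the cluster IP from the command above.
EXTRA="vcs.internal.example.com,10.96.0.1"

# Final value to paste into the No Proxy field.
NO_PROXY="${DEFAULTS},${EXTRA}"
echo "$NO_PROXY"
```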

  5. Click Save Config and deploy the latest Config Change version.
