Example request

curl --request POST \
  --url https://api.usedatabrain.com/api/v2/datasource \
  --header 'Authorization: Bearer dbn_live_abc123...' \
  --header 'Content-Type: application/json' \
  --data '{
    "datasourceType": "postgres",
    "credentials": {
      "name": "production-postgres",
      "host": "db.example.com",
      "port": 5432,
      "username": "dbuser",
      "password": "securepassword",
      "database": "analytics",
      "schema": "public"
    }
  }'

Example response

{
  "name": "production-postgres"
}
Create a new datasource connection to integrate your database or data warehouse with DataBrain. The API validates credentials, tests the connection, and automatically caches the schema for immediate use.
Before creating a datasource, ensure you have valid credentials for your database or data warehouse. The API will test the connection before creating the datasource. Supported datasource types include Snowflake, PostgreSQL, MySQL, BigQuery, Databricks, and many more.

Endpoint

POST https://api.usedatabrain.com/api/v2/datasource

Self-hosted Databrain Endpoint

POST <SELF_HOSTED_URL>/api/v2/datasource

Authentication

This endpoint requires a service token in the Authorization header. Service tokens differ from data app API keys and provide organization-level permissions. To access your service token:
  1. Go to your Databrain dashboard and open Settings.
  2. Find the Service Tokens section.
  3. Click the “Generate Token” button to generate a new service token if you don’t have one already.
  4. Use this token as the Bearer value in your Authorization header.
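For example, you can export the service token once and reuse it across requests. This is a minimal sketch: the environment variable name is arbitrary, the token value is the placeholder used elsewhere on this page, and datasource.json is a hypothetical local file containing the request body described under Request Body.

export DATABRAIN_SERVICE_TOKEN="dbn_live_abc123..."

curl --request POST \
  --url https://api.usedatabrain.com/api/v2/datasource \
  --header "Authorization: Bearer $DATABRAIN_SERVICE_TOKEN" \
  --header 'Content-Type: application/json' \
  --data @datasource.json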

Headers

Authorization
string
required
Bearer token for API authentication. Use the service token generated from the Service Tokens section in Settings.
Authorization: Bearer dbn_live_abc123...
Content-Type
string
required
Must be set to application/json for all requests.
Content-Type: application/json

Request Body

datasourceType
string
required
The type of datasource to create. Must be one of the supported datasource types listed below:
  • snowflake - Snowflake data warehouse
  • postgres - PostgreSQL database
  • redshift - Amazon Redshift
  • mysql - MySQL database
  • mongodb - MongoDB database
  • clickhouse - ClickHouse database
  • singlestore - SingleStore database
  • bigquery - Google BigQuery
  • databricks - Databricks
  • elasticsearch - Elasticsearch
  • opensearch - OpenSearch
  • mssql - Microsoft SQL Server
  • awss3 - Amazon S3
  • csv - CSV files
  • firebolt - Firebolt
  • athena - Amazon Athena
  • trino - Trino
credentials
object
required
Connection credentials for the datasource. The structure varies by datasource type, but all types require a name field.
credentials.name
string
required
Unique name for the datasource. This name will be used to reference the datasource in other APIs and configurations.
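Every request body follows the same overall shape: a datasourceType plus a credentials object containing at least name, together with the type-specific connection fields described in the next section. A minimal sketch with placeholder values:

{
  "datasourceType": "<supported datasource type>",
  "credentials": {
    "name": "my-datasource"
  }
}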

Datasource-Specific Credentials

The credentials object structure depends on the datasourceType. Below are examples for common datasource types.

Snowflake

credentials.host
string
required
Snowflake account hostname (e.g., your-account.snowflakecomputing.com)
credentials.username
string
required
Snowflake username
credentials.role
string
required
Snowflake role to use
credentials.warehouse
string
required
Snowflake warehouse name
credentials.database
string
required
Snowflake database name
credentials.schema
string
required
Snowflake schema name
credentials.credentials
string
required
Authentication method: "username/password" or "Key-pair authentication"
credentials.password
string
Password (required if credentials is "username/password")
credentials.privateKey
string
Private key (required if credentials is "Key-pair authentication")
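As a sketch, a Snowflake request body using username/password authentication might look like the following (all values are placeholders):

{
  "datasourceType": "snowflake",
  "credentials": {
    "name": "analytics-snowflake",
    "host": "your-account.snowflakecomputing.com",
    "username": "dbuser",
    "role": "ANALYST",
    "warehouse": "COMPUTE_WH",
    "database": "ANALYTICS",
    "schema": "PUBLIC",
    "credentials": "username/password",
    "password": "securepassword"
  }
}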
PostgreSQL

credentials.host
string
required
Database hostname or IP address
credentials.port
number
required
Database port number (1-65535)
credentials.username
string
required
Database username
credentials.password
string
required
Database password
credentials.database
string
required
Database name
credentials.schema
string
required
Schema name
credentials.sslMode
boolean
Enable SSL mode (optional)
credentials.sshTunnel
string
SSH tunnel setting: "enable" or "disable" (optional)
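The PostgreSQL body matches the example at the top of this page; the optional sslMode and sshTunnel flags can be added alongside the required fields (placeholder values):

{
  "datasourceType": "postgres",
  "credentials": {
    "name": "production-postgres",
    "host": "db.example.com",
    "port": 5432,
    "username": "dbuser",
    "password": "securepassword",
    "database": "analytics",
    "schema": "public",
    "sslMode": true,
    "sshTunnel": "disable"
  }
}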
BigQuery

credentials.credentials_json
string
required
JSON string containing Google Cloud service account credentials
credentials.project_id
string
required
Google Cloud project ID
credentials.dataset_location
string
required
BigQuery dataset location (e.g., "US", "EU")
credentials.dataset_id
string
BigQuery dataset ID (optional)
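A BigQuery request body might look like the following sketch. Note that credentials_json is the service account key passed as a single escaped JSON string (truncated placeholder shown):

{
  "datasourceType": "bigquery",
  "credentials": {
    "name": "analytics-bigquery",
    "credentials_json": "{\"type\":\"service_account\",\"project_id\":\"my-project\", ...}",
    "project_id": "my-project",
    "dataset_location": "US",
    "dataset_id": "analytics"
  }
}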
Other databases (user-based credentials)

credentials.host
string
required
Database hostname or IP address
credentials.port
number
required
Database port number (1-65535)
credentials.user
string
required
Database username. Note: Uses user not username for these datasource types.
credentials.password
string
required
Database password
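A request body for these types follows the same pattern, with user in place of username. A minimal sketch (all values are placeholders; substitute the appropriate datasourceType for your database):

{
  "datasourceType": "<datasource type>",
  "credentials": {
    "name": "my-datasource",
    "host": "db.example.com",
    "port": 3306,
    "user": "dbuser",
    "password": "securepassword"
  }
}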
Microsoft SQL Server (MSSQL)

credentials.server
string
required
SQL Server hostname or IP address. Note: Uses server not host for MSSQL.
credentials.port
number
required
Database port number (1-65535)
credentials.user
string
required
Database username. Note: Uses user not username for MSSQL.
credentials.password
string
required
Database password
credentials.database
string
Database name (optional)
credentials.isDisableDatabase
boolean
Disable database selection (optional)
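An MSSQL request body might look like this sketch (placeholder values; note server rather than host):

{
  "datasourceType": "mssql",
  "credentials": {
    "name": "reporting-mssql",
    "server": "sqlserver.example.com",
    "port": 1433,
    "user": "dbuser",
    "password": "securepassword",
    "database": "reporting"
  }
}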
Other databases (user-based credentials with database name)

credentials.host
string
required
Database hostname or IP address
credentials.port
number
required
Database port number (1-65535)
credentials.user
string
required
Database username
credentials.password
string
required
Database password
credentials.database
string
required
Database name
Databricks

credentials.serverHostname
string
required
Databricks server hostname
credentials.httpPath
string
required
Databricks HTTP path
credentials.token
string
required
Databricks access token
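A Databricks request body might look like this sketch (placeholder values):

{
  "datasourceType": "databricks",
  "credentials": {
    "name": "lakehouse-databricks",
    "serverHostname": "dbc-12345678-9abc.cloud.databricks.com",
    "httpPath": "/sql/1.0/warehouses/abc123",
    "token": "dapi1234567890abcdef"
  }
}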
Elasticsearch / OpenSearch

credentials.server_type
string
required
Server type: "elastic-cloud", "open-cloud", or "self-managed"
credentials.cloud_id
string
Cloud ID (required if server_type is "elastic-cloud" or "open-cloud")
credentials.server_url
string
Server URL (required if server_type is "self-managed")
credentials.username
string
required
Username
credentials.password
string
required
Password
credentials.isIgnoreVerifyCert
boolean
Ignore certificate verification (optional)
credentials.isIgnoreSsl
boolean
Ignore SSL (optional)
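For a self-managed cluster, the body might look like this sketch (placeholder values); for the elastic-cloud or open-cloud server types, supply cloud_id instead of server_url:

{
  "datasourceType": "elasticsearch",
  "credentials": {
    "name": "logs-elasticsearch",
    "server_type": "self-managed",
    "server_url": "https://es.example.com:9200",
    "username": "elastic",
    "password": "securepassword",
    "isIgnoreVerifyCert": false
  }
}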
Firebolt

credentials.client_id
string
required
Firebolt client ID
credentials.client_secret
string
required
Firebolt client secret
credentials.account
string
required
Firebolt account name
credentials.database
string
required
Database name
credentials.engineName
string
required
Engine name
credentials.schema
string
Schema name (optional)
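A Firebolt request body might look like this sketch (placeholder values):

{
  "datasourceType": "firebolt",
  "credentials": {
    "name": "analytics-firebolt",
    "client_id": "xxxxxxxx",
    "client_secret": "xxxxxxxx",
    "account": "my-account",
    "database": "analytics",
    "engineName": "analytics_engine",
    "schema": "public"
  }
}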
Amazon Athena

credentials.database
string
required
Athena database name
credentials.outputBucket
string
required
S3 output bucket for query results
credentials.s3AccessKeyId
string
required
AWS access key ID
credentials.s3Region
string
required
AWS region
credentials.s3SecretAccessKey
string
required
AWS secret access key
credentials.datasourceId
string
Datasource ID (optional)
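An Athena request body might look like this sketch (all values, including the bucket and key identifiers, are placeholders):

{
  "datasourceType": "athena",
  "credentials": {
    "name": "datalake-athena",
    "database": "analytics",
    "outputBucket": "my-athena-query-results",
    "s3AccessKeyId": "AKIAXXXXXXXX",
    "s3Region": "us-east-1",
    "s3SecretAccessKey": "xxxxxxxx"
  }
}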
Trino

credentials.host
string
required
Trino hostname or IP address
credentials.port
number
required
Database port number (1-65535)
credentials.catalog
string
required
Trino catalog name
credentials.schema
string
required
Schema name
credentials.username
string
required
Username
credentials.password
string
required
Password
credentials.sshTunnel
string
SSH tunnel setting: "enable" or "disable" (optional)
credentials.sshHost
string
SSH host (required if sshTunnel is "enable")
credentials.sshPort
number
SSH port (required if sshTunnel is "enable")
credentials.sshUsername
string
SSH username (required if sshTunnel is "enable")
credentials.sshPrivateKey
string
SSH private key (required if sshTunnel is "enable")
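A Trino request body might look like this sketch (placeholder values). If sshTunnel is set to "enable", also supply sshHost, sshPort, sshUsername, and sshPrivateKey:

{
  "datasourceType": "trino",
  "credentials": {
    "name": "federated-trino",
    "host": "trino.example.com",
    "port": 8080,
    "catalog": "hive",
    "schema": "default",
    "username": "trino_user",
    "password": "securepassword",
    "sshTunnel": "disable"
  }
}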
CSV (Amazon S3)

credentials.name
string
required
Name for the CSV datasource
credentials.bucketName
string
required
S3 bucket name
credentials.datasetPath
string
Path within the bucket (optional, can be empty string)
credentials.s3Region
string
required
AWS region (e.g., "us-east-1")
credentials.tableLevel
string
Table level: "File" or "Folder" (optional)
credentials.s3AccessKeyId
string
required
AWS access key ID
credentials.s3SecretAccessKey
string
required
AWS secret access key
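A CSV datasource body pointing at a folder of files in S3 might look like this sketch (placeholder bucket, path, and key values):

{
  "datasourceType": "csv",
  "credentials": {
    "name": "marketing-csv",
    "bucketName": "my-data-bucket",
    "datasetPath": "exports/marketing/",
    "s3Region": "us-east-1",
    "tableLevel": "Folder",
    "s3AccessKeyId": "AKIAXXXXXXXX",
    "s3SecretAccessKey": "xxxxxxxx"
  }
}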

Response

name
string
The name of the created datasource (same as credentials.name).
error
null
Error field. Omitted or null in successful responses; see Error Codes below for failure cases.

Examples
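As an end-to-end sketch (all values are placeholders), the following request creates a Snowflake datasource using key-pair authentication. A successful response echoes the datasource name.

curl --request POST \
  --url https://api.usedatabrain.com/api/v2/datasource \
  --header 'Authorization: Bearer dbn_live_abc123...' \
  --header 'Content-Type: application/json' \
  --data '{
    "datasourceType": "snowflake",
    "credentials": {
      "name": "warehouse-snowflake",
      "host": "your-account.snowflakecomputing.com",
      "username": "dbuser",
      "role": "ANALYST",
      "warehouse": "COMPUTE_WH",
      "database": "ANALYTICS",
      "schema": "PUBLIC",
      "credentials": "Key-pair authentication",
      "privateKey": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
    }
  }'

Example response:

{
  "name": "warehouse-snowflake"
}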

Error Codes

Error Code                | HTTP Status | Description
INVALID_REQUEST_BODY      | 400         | Missing required fields or invalid credential structure
DATASOURCE_NAME_ERROR     | 400         | Datasource name already exists
CREDENTIAL_TEST_FAILED    | 400         | Connection test failed
AUTHENTICATION_ERROR      | 401         | Invalid or missing service token
SCHEMA_CACHE_FAILED       | 500         | Schema caching failed
CREATE_DATASOURCE_FAILED  | 500         | Internal error during datasource creation
INTERNAL_SERVER_ERROR     | 500         | Server error occurred

Next Steps