Databrain simplifies data integration by offering a wide range of connectors, allowing businesses to centralize their data from multiple platforms. Below, we cover the data connectors available in Databrain and how they can enhance your data strategy.

Getting Started with Database IP Whitelisting for Databrain

When integrating Databrain with your database, IP whitelisting is crucial for secure and seamless connectivity. This guide outlines the steps to set up IP whitelisting, allowing Databrain access while keeping your database secure. Databrain’s IP addresses can be found in the IP Whitelisting Guide.
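For example, if your database sits behind an AWS security group, the inbound rule can be added with a short script. The sketch below is illustrative only and assumes the boto3 package with AWS credentials already configured; the security group ID and CIDR are placeholders, not actual Databrain addresses (always use the addresses from the IP Whitelisting Guide).

```python
import boto3

# Hypothetical values: replace with your own security group ID and the
# address published in the IP Whitelisting Guide (this CIDR is a placeholder).
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
DATABRAIN_CIDR = "203.0.113.10/32"

ec2 = boto3.client("ec2")

# Allow inbound PostgreSQL traffic (port 5432) from the whitelisted address only.
ec2.authorize_security_group_ingress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            "IpRanges": [{"CidrIp": DATABRAIN_CIDR, "Description": "Databrain access"}],
        }
    ],
)
```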

Available Data Source Connectors

Databrain supports a comprehensive range of data source connectors to meet your business intelligence and analytics needs. Below is a complete list of all available connectors:

Relational Databases

  • PostgreSQL - Connect to PostgreSQL databases for powerful relational data analytics
  • MySQL - Integrate MySQL databases seamlessly with Databrain
  • Microsoft SQL Server (MSSQL) - Connect to SQL Server databases for enterprise data analytics
  • SingleStore - High-performance SQL database connector for real-time analytics

Cloud Data Warehouses

  • Amazon Redshift - Connect to AWS Redshift for scalable data warehousing
  • Google BigQuery - Integrate with BigQuery for serverless data analytics
  • Snowflake - Connect to Snowflake’s cloud data platform
  • Databricks - Integrate with Databricks for unified analytics
  • Firebolt - Connect to Firebolt for fast cloud data warehouse analytics

Query Engines & Analytics Platforms

  • Amazon Athena - Query data directly from S3 using SQL
  • Trino - Distributed SQL query engine for big data analytics
  • ClickHouse - Connect to ClickHouse for real-time analytics

Search & NoSQL Databases

  • Elasticsearch - Connect to Elasticsearch for search and analytics
  • OpenSearch - Integrate with OpenSearch for search and observability
  • MongoDB - Connect to MongoDB for document-based data analytics

File & Object Storage

  • Amazon S3 - Connect to S3 buckets for file-based data analytics
  • CSV - Import and analyze CSV file data directly

Frequently Asked Questions (FAQ)

How do I connect a new data source to Databrain?

To connect a new data source to Databrain:
  1. Navigate to the Data Sources section in your Databrain dashboard
  2. Click on “+ Data Source”
  3. Select your desired connector from the available options
  4. Provide the necessary connection credentials (host, port, database name, username, password)
  5. Configure IP whitelisting if required for your database
  6. Test the connection and save
For detailed step-by-step instructions for each connector, refer to the specific connector documentation.
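Before entering credentials in the dashboard, it can be useful to confirm they work from a machine that can reach the database. The sketch below is illustrative and assumes a PostgreSQL source and the psycopg2 package; the host and credentials are placeholders.

```python
import psycopg2

# Placeholder credentials: use the same values you plan to enter in Databrain.
conn = psycopg2.connect(
    host="db.example.com",
    port=5432,
    dbname="analytics",
    user="databrain_reader",
    password="your-password",
    connect_timeout=10,
)

with conn.cursor() as cur:
    cur.execute("SELECT 1;")  # trivial query to confirm the connection works
    print(cur.fetchone())

conn.close()
```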

What credentials do I need to connect a data source?

The credentials required vary by data source, but typically include:
  • Host/Endpoint: The server address or hostname
  • Port: The port number for the database connection
  • Database Name: The specific database you want to connect
  • Username: A user with appropriate read permissions
  • Password: The password for the database user
  • Additional Parameters: Some databases may require SSL certificates, regions, project IDs, or other specific configuration
For cloud data warehouses, you may also need API keys, service account credentials, or OAuth tokens.
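As an example of the cloud warehouse case, a BigQuery connection is typically authenticated with a service account key file rather than a username and password. The sketch below is illustrative and assumes the google-cloud-bigquery package and a hypothetical key file; use the service account you intend to give Databrain.

```python
from google.cloud import bigquery

# Hypothetical service account key file with read access to your project.
client = bigquery.Client.from_service_account_json("databrain-reader-key.json")

# Confirm the credentials can list datasets in the project.
for dataset in client.list_datasets():
    print(dataset.dataset_id)
```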

Do I need to whitelist Databrain’s IP addresses?

Yes, if your database has firewall restrictions or network security rules in place, you’ll need to whitelist Databrain’s IP addresses to allow secure connectivity. This is especially important for:
  • Self-hosted databases
  • Cloud databases with VPC or security group restrictions
  • Databases behind corporate firewalls
You can find Databrain’s IP addresses and detailed whitelisting instructions in our IP Whitelisting Guide.

Is my data secure with Databrain?

Yes, Databrain takes data security seriously:
  • All connections use encrypted protocols (SSL/TLS)
  • Credentials are stored securely and encrypted at rest
  • Databrain follows industry-standard security practices
  • You can configure read-only database users to limit access
  • IP whitelisting provides an additional layer of security
  • Data is never stored permanently unless explicitly configured
For self-hosted deployments, you have full control over your data and infrastructure.
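As an illustration of the encrypted-connection point, most database drivers also let you require SSL/TLS from the client side. A minimal sketch for PostgreSQL with psycopg2 (placeholder host and credentials):

```python
import psycopg2

# sslmode="require" refuses to connect unless the server supports TLS;
# "verify-full" additionally validates the server certificate and hostname.
conn = psycopg2.connect(
    host="db.example.com",          # placeholder host and credentials
    port=5432,
    dbname="analytics",
    user="databrain_reader",
    password="your-password",
    sslmode="require",
)

with conn.cursor() as cur:
    cur.execute("SELECT 1;")

conn.close()
```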

Can I connect multiple data sources to one workspace?

Each Databrain workspace is limited to one data source connection. However, you have several options to work with multiple data sources:
  • Use Trino: Connect to Trino as your data source, which acts as a distributed SQL query engine that can query multiple databases simultaneously
  • Multiple Workspaces: Create separate workspaces for different data sources, then combine the dashboards when embedding them downstream in your application
  • Semantic Layer: Use Databrain’s Semantic Layer to create unified data models that can bridge data across different sources
  • Data Integration: Consolidate your data in a single data warehouse (like Snowflake, BigQuery, or Redshift) before connecting to Databrain
These approaches ensure optimal performance and clear data source management while still enabling multi-source analytics through strategic architecture.
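To illustrate the Trino option: Trino exposes each underlying database as a catalog, so a single SQL statement can join data across sources. The sketch below uses the trino Python client; the coordinator host, catalog names, and tables are hypothetical.

```python
import trino

# Connect to a Trino coordinator (placeholder host and user).
conn = trino.dbapi.connect(
    host="trino.example.com",
    port=8080,
    user="databrain",
)

cur = conn.cursor()
# Join tables that live in two different underlying databases, exposed by
# Trino as separate catalogs (catalog, schema, and table names are hypothetical).
cur.execute(
    """
    SELECT o.order_id, c.customer_name
    FROM postgresql.public.orders AS o
    JOIN mysql.shop.customers AS c
      ON o.customer_id = c.id
    LIMIT 10
    """
)
print(cur.fetchall())
```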

What happens if a data source connection fails?

If a data source connection fails, Databrain will:
  • Display an error message with connection details
  • Provide troubleshooting suggestions
  • Allow you to test and reconfigure the connection
  • Log the connection attempts for debugging
Common connection issues include:
  • Incorrect credentials
  • Network/firewall restrictions
  • IP address not whitelisted
  • Database server downtime
  • Incorrect host or port configuration
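When troubleshooting outside of Databrain, a quick scripted check can help distinguish an unreachable host from bad credentials. A minimal sketch for a PostgreSQL source using psycopg2 (placeholder connection details):

```python
import psycopg2
from psycopg2 import OperationalError

params = {
    "host": "db.example.com",   # placeholder connection details
    "port": 5432,
    "dbname": "analytics",
    "user": "databrain_reader",
    "password": "your-password",
    "connect_timeout": 5,
}

try:
    conn = psycopg2.connect(**params)
except OperationalError as exc:
    # Timeouts usually point to firewall or whitelisting issues;
    # authentication failures usually point to wrong credentials.
    print(f"Connection failed: {exc}")
else:
    print("Connection succeeded")
    conn.close()
```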

How does data syncing work?

Data syncing in Databrain can be configured based on your needs:
  • Real-time: Queries are executed on-demand when viewing dashboards
  • Scheduled Sync: You can configure periodic data refreshes
  • Manual Sync: Trigger data refresh manually when needed
  • Caching: Configure caching policies to balance freshness and performance
The sync frequency depends on your data source type and configuration. Refer to the Data Source Sync Guide for more details.

Does Databrain work with self-hosted databases?

Yes, Databrain works with both cloud-hosted and self-hosted databases. For self-hosted databases:
  • Ensure your database is accessible over the network
  • Configure firewall rules to allow Databrain’s IP addresses
  • Set up appropriate user permissions for read access
  • Consider using VPN or SSH tunneling for enhanced security
Databrain also offers self-hosted deployment options if you prefer to keep all infrastructure on-premises.
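If a self-hosted database is only reachable through a bastion host, an SSH tunnel can forward a local port to it. The sketch below uses the sshtunnel and psycopg2 packages; all hosts, users, and key paths are placeholders.

```python
from sshtunnel import SSHTunnelForwarder
import psycopg2

# Forward a local port through a bastion host to the private database
# (placeholder hosts, user, and key path).
with SSHTunnelForwarder(
    ("bastion.example.com", 22),
    ssh_username="tunnel-user",
    ssh_pkey="/path/to/private_key",
    remote_bind_address=("10.0.0.12", 5432),
) as tunnel:
    conn = psycopg2.connect(
        host="127.0.0.1",
        port=tunnel.local_bind_port,
        dbname="analytics",
        user="databrain_reader",
        password="your-password",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1;")
    conn.close()
```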

What database permissions does Databrain require?

Databrain requires minimal permissions to function:
  • Read Permissions: SELECT access to tables and views you want to analyze
  • Schema Access: Permission to view database schema and table structures
  • Optional: Some features may require additional permissions like creating temporary tables or views
It’s recommended to create a dedicated read-only user for Databrain to minimize security risks. Databrain does not require write, update, or delete permissions on your data.
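As an example of the read-only recommendation, a dedicated PostgreSQL user with SELECT-only access could be provisioned as follows. This is a minimal sketch run by an administrator via psycopg2; the role name, password, and schema are assumptions, not values Databrain requires.

```python
import psycopg2

# Connect as an administrator (placeholder credentials).
conn = psycopg2.connect(
    host="db.example.com",
    dbname="analytics",
    user="admin",
    password="admin-password",
)
conn.autocommit = True

with conn.cursor() as cur:
    # Create a dedicated read-only role for Databrain (hypothetical name).
    cur.execute("CREATE ROLE databrain_reader LOGIN PASSWORD 'choose-a-strong-password'")
    # Allow it to see the schema and read existing tables.
    cur.execute("GRANT USAGE ON SCHEMA public TO databrain_reader")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO databrain_reader")
    # Make sure tables created in the future are readable too.
    cur.execute(
        "ALTER DEFAULT PRIVILEGES IN SCHEMA public "
        "GRANT SELECT ON TABLES TO databrain_reader"
    )

conn.close()
```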