
BigQuery



Getting Started with BigQuery Destination Configuration

Requirements:

  • Active Google Cloud Platform (GCP) project with BigQuery enabled.

  • Allow connections from Databrain to your BigQuery dataset.

    • For details on setting up IP whitelisting and ensuring secure connectivity, refer to our guide on Allow Access to our IP.

  • Choose the Databrain Workspace to which you wish to connect the data.

Setup Guide:

  1. Ensure Project Accessibility:

    • Ensure your GCP project with BigQuery is active and accessible from the machine running Databrain.

    • Accessibility depends on your GCP permissions and IAM settings. The easiest way to verify that Databrain can connect to your BigQuery project is via the Add a Data Source UI.

  2. Grant Necessary Permissions:

    • Read Access on Datasets, Tables, and information_schema: Grant read access permissions to the datasets, tables, and the information_schema dataset within BigQuery. This allows Databrain to retrieve necessary information and replicate data accurately. You can assign the predefined role roles/bigquery.dataViewer to provide read access to datasets and tables.

    • Grant jobs.create Permission: Additionally, grant permission to create jobs in BigQuery by assigning the roles/bigquery.jobUser role. This allows Databrain to create and manage jobs for tasks such as running queries against your BigQuery data.

    • Note on Project & Dataset IDs:

      • Project IDs must be 6-63 characters long and may contain only lowercase letters, digits, or hyphens. Some project IDs also include a domain name separated by a colon. IDs must start with a letter and may not end with a hyphen.

      • Dataset IDs follow similar length and character restrictions. Keeping your IDs compliant with these rules ensures they are accepted by BigQuery and work consistently across tools.

  3. Fill Up Connection Info:

    • Provide the necessary information to connect to your BigQuery:

      • Destination Name: [Pick a name to help you identify this destination in Databrain]

      • Project ID: [The GCP project ID for the project containing the target BigQuery dataset]

      • Default Dataset ID: [The default BigQuery Dataset ID that tables are replicated to if the source doesn't specify a namespace]

      • Service Account Key JSON: [The JSON value of the Service Account Key to authenticate into your Service Account. This is mandatory for Cloud and optional for Open-Source versions of Databrain]

    • Note on Dataset Configuration:

      • Ensure the default dataset ID is set correctly for proper data synchronization. This is where tables will be replicated if the source does not specify a namespace.
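The connection fields and ID rules above can be sketched as a small pre-flight check. This is a minimal illustration using only the Python standard library; the field names mirror the form labels above and are not part of any Databrain API (domain-scoped project IDs containing a colon are not covered by this simple pattern):

```python
import re

# Project ID rule from the note above: 6-63 characters, lowercase letters,
# digits, or hyphens; must start with a letter and may not end with a hyphen.
# (Domain-scoped IDs like "example.com:project" are not handled here.)
PROJECT_ID_RE = re.compile(r"^[a-z][a-z0-9-]{4,61}[a-z0-9]$")

def validate_connection_info(info: dict) -> list[str]:
    """Return a list of problems found in the connection form values."""
    problems = []
    if not info.get("destination_name"):
        problems.append("Destination Name is required")
    if not PROJECT_ID_RE.match(info.get("project_id", "")):
        problems.append("Project ID does not follow the ID rules")
    if not info.get("default_dataset_id"):
        problems.append("Default Dataset ID is required")
    return problems

# Example usage with placeholder values:
ok = validate_connection_info({
    "destination_name": "analytics-bq",
    "project_id": "my-gcp-project",
    "default_dataset_id": "databrain_sync",
})
bad = validate_connection_info({"destination_name": "analytics-bq",
                                "project_id": "Bad-Project-"})
```

`ok` comes back empty, while `bad` flags both the malformed Project ID and the missing Default Dataset ID before you ever submit the form.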

Encryption:

  • All BigQuery connections via Databrain are secure, leveraging GCP's built-in security features.

Permissions:

  • Permission to read information_schema.

  • Whitelist Databrain's IP address.

  • Allow job creation and provide read access to datasets, ensuring dataset IDs adhere to standard rules.
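The two predefined roles named above can be double-checked against what is actually bound to your service account (listed, for example, with `gcloud projects get-iam-policy`). A hypothetical helper, not a Databrain API:

```python
# Roles this guide asks you to grant to the Databrain service account.
REQUIRED_ROLES = {
    "roles/bigquery.dataViewer",  # read access to datasets and tables
    "roles/bigquery.jobUser",     # permission to create jobs (jobs.create)
}

def missing_roles(granted: set[str]) -> set[str]:
    """Return required roles that are not in the granted set."""
    return REQUIRED_ROLES - granted

# Example: a service account that only has dataViewer so far.
gap = missing_roles({"roles/bigquery.dataViewer"})
```

Here `gap` contains only `roles/bigquery.jobUser`, telling you which binding is still missing.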

Replace the placeholders inside the square brackets with actual values when filling in the details.

Locating the Configuration Details in BigQuery

  1. Destination Name:

    • This is a custom name you decide for identification within Databrain. Choose a name that is relevant and descriptive of your BigQuery setup.

  2. Project ID:

    • In the Google Cloud Console, the current project name is displayed in the top navigation bar. Clicking it opens a dropdown listing all your projects.

    • Beside each project name is an ID; this is the Project ID.

  3. Default Dataset ID:

    • In the Google Cloud Console, navigate to BigQuery.

    • In the left sidebar, under the project name, you'll see a list of datasets. The Dataset ID is the name of these datasets.

  4. Service Account Key JSON:

    • In the Google Cloud Console, navigate to "IAM & Admin" > "Service Accounts".

    • Find the service account you want to use or create a new one.

    • Once you have the service account, click on the three dots (options) for that account, and select "Manage keys".

    • Click on "Add Key" and choose "JSON" as the key type.

    • Once created, the JSON key will be downloaded to your computer. This is the JSON value you need for the Service Account Key JSON.
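Before pasting the downloaded key into Databrain, you can sanity-check its structure. A sketch using only the standard library; the fields checked are those typically present in a Google service account key file, and the example key below is a dummy:

```python
import json

# Fields typically present in a downloaded service account key file.
EXPECTED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_key(raw: str) -> set[str]:
    """Parse the key JSON and return any expected fields that are missing."""
    key = json.loads(raw)
    missing = EXPECTED_FIELDS - key.keys()
    if key.get("type") != "service_account":
        missing.add("type=service_account")
    return missing

# Example with a dummy (non-functional) key:
dummy = json.dumps({
    "type": "service_account",
    "project_id": "my-gcp-project",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...redacted...\n-----END PRIVATE KEY-----\n",
    "client_email": "databrain@my-gcp-project.iam.gserviceaccount.com",
})
problems = check_key(dummy)
```

An empty `problems` set means the file has the expected shape; anything else suggests you downloaded the wrong key type (e.g. P12 instead of JSON) or a truncated file.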

Navigate to the Google Cloud Console.
