
Integrating Databrain with New Relic

This guide explains how to send OpenTelemetry traces, metrics, and logs from your self-hosted Databrain instance to New Relic.

Prerequisites

  • Databrain self-hosted version with OpenTelemetry support
  • New Relic account (free tier available)
  • New Relic License Key or Ingest Key

Configuration

1. Get Your New Relic Ingest Key

  1. Log into New Relic
  2. Click on your name → API Keys
  3. Copy your Ingest - License key
  4. Note your account’s data center (US or EU)

2. Determine Your OTLP Endpoint

Data Center | OTLP HTTP Endpoint
------------|------------------------------------
US          | https://otlp.nr-data.net:4318
EU          | https://otlp.eu01.nr-data.net:4318
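If you script your deployment, the endpoint can be derived from a region setting rather than hard-coded. A minimal sketch (the REGION variable is a hypothetical convention, not a Databrain or New Relic setting):

```shell
# Select the New Relic OTLP endpoint from a region setting (US assumed here).
REGION="US"
if [ "$REGION" = "EU" ]; then
  OTLP_ENDPOINT="https://otlp.eu01.nr-data.net:4318"
else
  OTLP_ENDPOINT="https://otlp.nr-data.net:4318"
fi
echo "$OTLP_ENDPOINT"
```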

3. Configure Databrain Environment Variables

Add these environment variables to your Databrain backend:
# Enable OpenTelemetry
OTEL_ENABLED=true

# New Relic OTLP endpoint (US data center)
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4318

# Service name (appears in New Relic)
OTEL_SERVICE_NAME=databrain-api

# New Relic License Key (required)
OTEL_EXPORTER_OTLP_HEADERS=api-key=YOUR_LICENSE_KEY_HERE

# Optional: Set environment
NEW_RELIC_ENVIRONMENT=production

# Optional: log level (set to "debug" for verbose telemetry logging)
LOG_LEVEL=info
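A misspelled or unset variable fails silently (telemetry simply never arrives), so a pre-flight check before starting the backend can save a debugging round-trip. A sketch using the variable names from this guide (the check itself is not part of Databrain):

```shell
# Sketch: fail fast if a required telemetry variable is unset.
OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4318
OTEL_SERVICE_NAME=databrain-api
OTEL_EXPORTER_OTLP_HEADERS=api-key=YOUR_LICENSE_KEY_HERE

missing=0
for var in OTEL_ENABLED OTEL_EXPORTER_OTLP_ENDPOINT OTEL_SERVICE_NAME OTEL_EXPORTER_OTLP_HEADERS; do
  eval "val=\$$var"
  if [ -z "$val" ]; then
    echo "missing: $var"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "telemetry environment complete"
```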

4. Docker Compose Configuration

Update your docker-compose.yml:
services:
  databrainbackend:
    environment:
      OTEL_ENABLED: "true"
      OTEL_EXPORTER_OTLP_ENDPOINT: "https://otlp.nr-data.net:4318"
      OTEL_SERVICE_NAME: "databrain-api"
      OTEL_EXPORTER_OTLP_HEADERS: "api-key=${NEW_RELIC_LICENSE_KEY}"
      NEW_RELIC_ENVIRONMENT: "production"
      LOG_LEVEL: "info"
Security: Store your NEW_RELIC_LICENSE_KEY in a .env file:
# .env
NEW_RELIC_LICENSE_KEY=your_license_key_here

5. Kubernetes Configuration

For Kubernetes deployments:
apiVersion: v1
kind: Secret
metadata:
  name: newrelic-secret
type: Opaque
stringData:
  license-key: your_license_key_here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: databrain-backend
spec:
  template:
    spec:
      containers:
      - name: backend
        env:
          - name: OTEL_ENABLED
            value: "true"
          - name: OTEL_EXPORTER_OTLP_ENDPOINT
            value: "https://otlp.nr-data.net:4318"
          - name: OTEL_SERVICE_NAME
            value: "databrain-api"
          - name: NEW_RELIC_LICENSE_KEY
            valueFrom:
              secretKeyRef:
                name: newrelic-secret
                key: license-key
          - name: OTEL_EXPORTER_OTLP_HEADERS
            value: "api-key=$(NEW_RELIC_LICENSE_KEY)"

Advanced: Using an OpenTelemetry Collector

For finer control over batching, sampling, and attribute enrichment, deploy an OpenTelemetry Collector (contrib distribution) between Databrain and New Relic:

Docker Compose with Collector

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"  # OTLP gRPC
      - "4318:4318"  # OTLP HTTP
    environment:
      NEW_RELIC_LICENSE_KEY: "${NEW_RELIC_LICENSE_KEY}"
    networks:
      - databrain

  databrainbackend:
    environment:
      OTEL_ENABLED: "true"
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel-collector:4318"
      OTEL_SERVICE_NAME: "databrain-api"
    depends_on:
      - otel-collector

Collector Configuration

Create otel-collector-config.yaml:
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024
  
  # Add resource attributes
  resource:
    attributes:
    - key: service.instance.id
      from_attribute: host.name
      action: upsert
  
  # Add environment attribute
  attributes:
    actions:
    - key: environment
      value: production
      action: upsert

exporters:
  otlphttp:
    endpoint: https://otlp.nr-data.net:4318
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, resource, attributes]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch, resource, attributes]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch, resource, attributes]
      exporters: [otlphttp]
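While validating the pipeline, it can help to mirror traffic to the collector's own stdout alongside New Relic. A sketch using the contrib distribution's debug exporter (exporter name and verbosity options may vary with your collector version):

```yaml
exporters:
  debug:
    verbosity: basic

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp, debug]
```

Remove the debug exporter once data is confirmed in New Relic; it adds log volume with no production value.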

What Gets Sent to New Relic

Once configured, Databrain automatically sends:
Telemetry Type | New Relic Product   | Description
---------------|---------------------|--------------------------------------------------------
Traces         | Distributed Tracing | API request spans with timing, status codes, and errors
Metrics        | Metrics & Events    | Request latency histograms, error rates, throughput
Logs           | Logs                | Correlated logs with trace context (trace_id, span_id)

Verification

1. Restart Databrain

docker compose restart databrainbackend
# or
kubectl rollout restart deployment/databrain-backend

2. Generate Test Traffic

# Health check
curl -X GET "https://your-databrain-instance.com/api/health"

# Sample API request
curl -X POST "https://your-databrain-instance.com/api/v2/metric/execute" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{"metricId": "test-123"}'

3. Check New Relic UI

  1. Distributed Tracing:
    • Navigate to APM & Services → Select databrain-api
    • Click Distributed tracing
    • You should see traces within 1-2 minutes
  2. Service Map:
    • Go to APM & Services → databrain-api → Service map
    • View dependencies and relationships
  3. Metrics:
    • Navigate to Metrics & events
    • Query: FROM Metric SELECT * WHERE service.name = 'databrain-api'
  4. Logs:
    • Go to Logs
    • Filter: service.name = databrain-api
    • Click any log to see correlated traces

4. Check Backend Logs

Look for the initialization message:
{
  "level": "info",
  "message": "[Telemetry] OpenTelemetry initialized - service: databrain-api, endpoint: https://otlp.nr-data.net:4318"
}

Custom Attributes and Tags

Add custom attributes to all telemetry:
# docker-compose.yml
environment:
  OTEL_RESOURCE_ATTRIBUTES: "service.namespace=databrain,deployment.environment=production,team=backend"
These appear in New Relic as filterable attributes.
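The value is a single comma-separated list of key=value pairs; a quick sketch of the format:

```shell
# OTEL_RESOURCE_ATTRIBUTES is a comma-separated list of key=value pairs.
OTEL_RESOURCE_ATTRIBUTES="service.namespace=databrain,deployment.environment=production,team=backend"

# Print one attribute per line to eyeball the list.
echo "$OTEL_RESOURCE_ATTRIBUTES" | tr ',' '\n'
```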

New Relic Query Language (NRQL)

Use NRQL to create custom dashboards and alerts:

Example Queries

Average Response Time:
FROM Span SELECT average(duration.ms) 
WHERE service.name = 'databrain-api' 
AND span.kind = 'server'
FACET name SINCE 1 hour ago
Error Rate:
FROM Span SELECT percentage(count(*), WHERE error.message IS NOT NULL)
WHERE service.name = 'databrain-api'
TIMESERIES SINCE 1 day ago
Slowest Endpoints:
FROM Span SELECT percentile(duration.ms, 95) 
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
FACET name SINCE 1 hour ago
LIMIT 10
Throughput:
FROM Span SELECT rate(count(*), 1 minute)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
TIMESERIES SINCE 1 hour ago
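Requests can also be broken down by response status. A sketch (the http.status_code attribute name depends on which semantic-convention version your instrumentation emits; newer conventions use http.response.status_code):

```
FROM Span SELECT count(*)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
FACET http.status_code SINCE 1 hour ago
```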

Create Alerts

Set up alerts in New Relic:

High Error Rate Alert

  1. Go to Alerts & AI → Alert conditions (policies)
  2. Create a new alert condition
  3. Use NRQL query:
FROM Span SELECT percentage(count(*), WHERE error.message IS NOT NULL)
WHERE service.name = 'databrain-api'
  4. Set threshold: Critical when query returns value > 5 for at least 5 minutes
  5. Add notification channel (email, Slack, PagerDuty, etc.)

High Latency Alert

FROM Span SELECT percentile(duration.ms, 95)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
Threshold: Critical when p95 latency > 2000ms for at least 5 minutes

Troubleshooting

Issue                  | Solution
-----------------------|---------------------------------------------------------------
No data in New Relic   | Verify OTEL_ENABLED=true and the License Key is correct
403 Forbidden          | Check that the License Key has ingest permissions
Connection refused     | Verify the OTLP endpoint URL matches your data center (US/EU)
Missing traces         | Wait 2-3 minutes; check backend logs for errors
High data ingest costs | Implement sampling in the collector configuration

Debug Mode

Enable detailed logging:
LOG_LEVEL=debug
OTEL_LOG_LEVEL=debug
Check logs for:
  • [Telemetry] OpenTelemetry initialized
  • Connection errors to New Relic endpoint
  • Trace export confirmations

Verify Collector (if using)

Check collector logs:
docker logs otel-collector | grep -i "error\|failed"
Successful export logs:
Traces exported successfully to New Relic

Dashboard Templates

New Relic provides pre-built dashboard templates for OpenTelemetry:
  1. Go to Dashboards → Import dashboard
  2. Search for “OpenTelemetry” templates
  3. Import the “Service Performance” template
  4. Customize filters to show service.name = databrain-api

Best Practices

1. Use Sampling for High Traffic

Configure head-based sampling in the collector:
processors:
  probabilistic_sampler:
    sampling_percentage: 10  # Sample 10% of traces

service:
  pipelines:
    traces:
      processors: [probabilistic_sampler, batch]
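Head-based sampling decides before the outcome is known, so at 10% it also drops roughly 90% of error traces. Tail-based sampling can keep every error trace while sampling the rest; a sketch assuming the contrib distribution's tail_sampling processor (policy names are illustrative):

```yaml
processors:
  tail_sampling:
    decision_wait: 5s
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: sample-rest
        type: probabilistic
        probabilistic:
          sampling_percentage: 10
```

Tail sampling buffers whole traces in collector memory for the decision_wait window, so budget memory accordingly on high-traffic deployments.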

2. Add Business Context

Include business-relevant attributes:
import logger from 'utils/logger';

logger.info('Order processed', {
  orderId: '12345',
  userId: 'user-789',
  amount: 99.99,
  currency: 'USD'
});
These appear in New Relic logs and can be queried.

3. Use Service Levels (SLIs/SLOs)

Create SLIs in New Relic:
  1. Go to Service levels
  2. Define SLI: “95% of requests complete in < 1s”
  3. Track SLO compliance over time
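The SLI defined above can be expressed directly as NRQL; a sketch (assumes server spans carrying duration.ms, as in the earlier queries):

```
FROM Span SELECT percentage(count(*), WHERE duration.ms < 1000)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
SINCE 1 day ago
```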

Pricing Considerations

New Relic pricing is based on:
  • Data Ingest: GB of data ingested per month
  • User Seats: Number of full platform users
Free Tier: 100 GB/month of data ingest and 1 full platform user.

Cost Optimization:
  1. Use trace sampling (metrics are exported separately, so their accuracy is unaffected)
  2. Set appropriate data retention periods
  3. Filter out low-value spans (health checks)
  4. Use the collector for local aggregation
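Health-check spans rarely justify their ingest cost. A sketch of dropping them in the collector, assuming the contrib filter processor and that the instrumentation records the route in http.route (your attribute name may differ):

```yaml
processors:
  filter/drop-health:
    error_mode: ignore
    traces:
      span:
        - 'attributes["http.route"] == "/api/health"'
```

Add filter/drop-health to the traces pipeline's processors list, before batch.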

Support

For Databrain configuration issues, contact your Databrain support team.