Integrating Databrain with New Relic
This guide explains how to send OpenTelemetry traces, metrics, and logs from your self-hosted Databrain instance to New Relic.
Prerequisites
- Databrain self-hosted version with OpenTelemetry support
- New Relic account (free tier available)
- New Relic License Key or Ingest Key
Configuration
1. Get Your New Relic Ingest Key
- Log into New Relic
- Click on your name → API Keys
- Copy your Ingest - License key
- Note your account’s data center (US or EU)
2. Determine Your OTLP Endpoint
| Data Center | OTLP HTTP Endpoint |
|---|---|
| US | https://otlp.nr-data.net:4318 |
| EU | https://otlp.eu01.nr-data.net:4318 |
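Optionally, you can confirm the endpoint is reachable and your key is accepted before wiring up Databrain. This is a sketch that assumes the US endpoint and that your license key is exported as NEW_RELIC_LICENSE_KEY in your shell:
# Optional connectivity check: post an empty OTLP payload
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "https://otlp.nr-data.net:4318/v1/traces" \
  -H "Content-Type: application/json" \
  -H "api-key: $NEW_RELIC_LICENSE_KEY" \
  -d '{}'
# A 200 suggests the key was accepted; a 403 usually indicates an invalid or wrong-region key.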
3. Configure Environment Variables
Add these environment variables to your Databrain backend:
# Enable OpenTelemetry
OTEL_ENABLED=true
# New Relic OTLP endpoint (US data center)
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.nr-data.net:4318
# Service name (appears in New Relic)
OTEL_SERVICE_NAME=databrain-api
# New Relic License Key (required)
OTEL_EXPORTER_OTLP_HEADERS=api-key=YOUR_LICENSE_KEY_HERE
# Optional: Set environment
NEW_RELIC_ENVIRONMENT=production
# Optional: Log level (set to debug for verbose telemetry logging)
LOG_LEVEL=info
4. Docker Compose Configuration
Update your docker-compose.yml:
services:
  databrainbackend:
    environment:
      OTEL_ENABLED: "true"
      OTEL_EXPORTER_OTLP_ENDPOINT: "https://otlp.nr-data.net:4318"
      OTEL_SERVICE_NAME: "databrain-api"
      OTEL_EXPORTER_OTLP_HEADERS: "api-key=${NEW_RELIC_LICENSE_KEY}"
      NEW_RELIC_ENVIRONMENT: "production"
      LOG_LEVEL: "info"
Security: Store your NEW_RELIC_LICENSE_KEY in a .env file:
# .env
NEW_RELIC_LICENSE_KEY=your_license_key_here
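Docker Compose automatically loads a .env file from the project directory when resolving ${...} references in docker-compose.yml. To double-check the substitution (service and variable names follow the snippet above):
# Render the resolved configuration; the header should contain your key, not the literal ${...}
docker compose config | grep OTEL_EXPORTER_OTLP_HEADERS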
5. Kubernetes Configuration
For Kubernetes deployments:
apiVersion: v1
kind: Secret
metadata:
  name: newrelic-secret
type: Opaque
stringData:
  license-key: your_license_key_here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: databrain-backend
spec:
  template:
    spec:
      containers:
        - name: backend
          env:
            - name: OTEL_ENABLED
              value: "true"
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "https://otlp.nr-data.net:4318"
            - name: OTEL_SERVICE_NAME
              value: "databrain-api"
            - name: NEW_RELIC_LICENSE_KEY
              valueFrom:
                secretKeyRef:
                  name: newrelic-secret
                  key: license-key
            - name: OTEL_EXPORTER_OTLP_HEADERS
              value: "api-key=$(NEW_RELIC_LICENSE_KEY)"
Advanced: Using an OpenTelemetry Collector
For better control and additional features (batching, attribute enrichment, sampling), deploy an OpenTelemetry Collector between Databrain and New Relic:
Docker Compose with Collector
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"   # OTLP gRPC
      - "4318:4318"   # OTLP HTTP
    environment:
      NEW_RELIC_LICENSE_KEY: "${NEW_RELIC_LICENSE_KEY}"
    networks:
      - databrain

  databrainbackend:
    environment:
      OTEL_ENABLED: "true"
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel-collector:4318"
      OTEL_SERVICE_NAME: "databrain-api"
    depends_on:
      - otel-collector
Collector Configuration
Create otel-collector-config.yaml:
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024

  # Add resource attributes
  resource:
    attributes:
      - key: service.instance.id
        from_attribute: host.name
        action: upsert

  # Add environment attribute
  attributes:
    actions:
      - key: environment
        value: production
        action: upsert

exporters:
  otlphttp:
    endpoint: https://otlp.nr-data.net:4318
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, resource, attributes]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch, resource, attributes]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch, resource, attributes]
      exporters: [otlphttp]
What Gets Sent to New Relic
Once configured, Databrain automatically sends:
| Telemetry Type | New Relic Product | Description |
|---|---|---|
| Traces | Distributed Tracing | API request spans with timing, status codes, and errors |
| Metrics | Metrics & Events | Request latency histograms, error rates, throughput |
| Logs | Logs | Correlated logs with trace context (trace_id, span_id) |
Verification
1. Restart Databrain
docker compose restart databrainbackend
# or
kubectl rollout restart deployment/databrain-backend
2. Generate Test Traffic
# Health check
curl -X GET "https://your-databrain-instance.com/api/health"
# Sample API request
curl -X POST "https://your-databrain-instance.com/api/v2/metric/execute" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN" \
-d '{"metricId": "test-123"}'
3. Check New Relic UI
- Distributed Tracing:
  - Navigate to APM & Services → Select databrain-api
  - Click Distributed tracing
  - You should see traces within 1-2 minutes
- Service Map:
  - Go to APM & Services → databrain-api → Service map
  - View dependencies and relationships
- Metrics:
  - Navigate to Metrics & events
  - Query: FROM Metric SELECT * WHERE service.name = 'databrain-api'
- Logs:
  - Go to Logs
  - Filter: service.name = databrain-api
  - Click any log to see correlated traces
4. Check Backend Logs
Look for the initialization message:
{
"level": "info",
"message": "[Telemetry] OpenTelemetry initialized - service: databrain-api, endpoint: https://otlp.nr-data.net:4318"
}
Custom Resource Attributes
Add custom attributes to all telemetry:
# docker-compose.yml
environment:
  OTEL_RESOURCE_ATTRIBUTES: "service.namespace=databrain,deployment.environment=production,team=backend"
These appear in New Relic as filterable attributes.
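For example, once those attributes are flowing you can slice telemetry by them in NRQL (the attribute values here mirror the snippet above):
FROM Span SELECT count(*)
WHERE service.namespace = 'databrain'
FACET deployment.environment, team
SINCE 1 hour ago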
New Relic Query Language (NRQL)
Use NRQL to create custom dashboards and alerts:
Example Queries
Average Response Time:
FROM Span SELECT average(duration.ms)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
FACET name SINCE 1 hour ago
Error Rate:
FROM Span SELECT percentage(count(*), WHERE error.message IS NOT NULL)
WHERE service.name = 'databrain-api'
TIMESERIES SINCE 1 day ago
Slowest Endpoints:
FROM Span SELECT percentile(duration.ms, 95)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
FACET name SINCE 1 hour ago
LIMIT 10
Throughput:
FROM Span SELECT rate(count(*), 1 minute)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
TIMESERIES SINCE 1 hour ago
Create Alerts
Set up alerts in New Relic:
High Error Rate Alert
- Go to Alerts & AI → Alert conditions (policies)
- Create a new alert condition
- Use NRQL query:
FROM Span SELECT percentage(count(*), WHERE error.message IS NOT NULL)
WHERE service.name = 'databrain-api'
- Set threshold: Critical when query returns value > 5 for at least 5 minutes
- Add notification channel (email, Slack, PagerDuty, etc.)
High Latency Alert
FROM Span SELECT percentile(duration.ms, 95)
WHERE service.name = 'databrain-api'
AND span.kind = 'server'
Threshold: Critical when p95 latency > 2000ms for at least 5 minutes
Troubleshooting
| Issue | Solution |
|---|---|
| No data in New Relic | Verify OTEL_ENABLED=true and License Key is correct |
| 403 Forbidden | Check License Key has ingest permissions |
| Connection refused | Verify OTLP endpoint URL matches your data center (US/EU) |
| Missing traces | Wait 2-3 minutes; check backend logs for errors |
| High data ingest costs | Implement sampling in collector configuration |
Debug Mode
Enable detailed logging:
LOG_LEVEL=debug
OTEL_LOG_LEVEL=debug
Check logs for:
- [Telemetry] OpenTelemetry initialized
- Connection errors to New Relic endpoint
- Trace export confirmations
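If Databrain runs under Docker Compose, a quick filter over the backend logs (service name as in the earlier snippets) surfaces those lines:
# Show telemetry initialization and any export errors from the backend
docker compose logs databrainbackend | grep -iE "telemetry|otel|export"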
Verify Collector (if using)
Check collector logs:
docker logs otel-collector | grep -i "error\|failed"
Successful export logs:
Traces exported successfully to New Relic
Dashboard Templates
New Relic provides pre-built dashboard templates for OpenTelemetry:
- Go to Dashboards → Import dashboard
- Search for “OpenTelemetry” templates
- Import the “Service Performance” template
- Customize filters to show service.name = databrain-api
Best Practices
1. Use Sampling for High Traffic
Configure head-based sampling in the collector:
processors:
  probabilistic_sampler:
    sampling_percentage: 10  # Sample 10% of traces

service:
  pipelines:
    traces:
      processors: [probabilistic_sampler, batch]
2. Add Business Context
Include business-relevant attributes:
import logger from 'utils/logger';
logger.info('Order processed', {
orderId: '12345',
userId: 'user-789',
amount: 99.99,
currency: 'USD'
});
These appear in New Relic logs and can be queried.
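Those fields can then be queried directly; for instance, assuming the attribute names from the hypothetical log call above:
FROM Log SELECT message, orderId, amount
WHERE service.name = 'databrain-api' AND orderId = '12345'
SINCE 1 hour ago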
3. Use Service Levels (SLIs/SLOs)
Create SLIs in New Relic:
- Go to Service levels
- Define SLI: “95% of requests complete in < 1s”
- Track SLO compliance over time
Pricing Considerations
New Relic pricing is based on:
- Data Ingest: GB of data ingested per month
- User Seats: Number of full platform users
Free Tier: 100 GB/month data ingest, 1 full platform user
Cost Optimization:
- Use sampling (doesn’t affect metrics accuracy)
- Set appropriate data retention periods
- Filter out low-value spans such as health checks (see the collector sketch below)
- Use the collector for local aggregation
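As a sketch of the span-filtering idea, the collector's filter processor can drop health-check spans before they reach New Relic; the attribute name (url.path) and the endpoint path are assumptions, so adjust them to match your spans:
processors:
  # Drop spans whose URL path is the health endpoint (path and attribute name are assumptions)
  filter/healthchecks:
    error_mode: ignore
    traces:
      span:
        - 'attributes["url.path"] == "/api/health"'

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter/healthchecks, batch]
      exporters: [otlphttp]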
Support
For Databrain configuration issues, contact your Databrain support team.