Integrating Databrain with Kloudfuse
This guide explains how to send OpenTelemetry traces and metrics from your self-hosted Databrain instance to Kloudfuse.

Prerequisites
- Databrain self-hosted version with OpenTelemetry support
- Kloudfuse account with OTLP ingestion enabled
- Your Kloudfuse OTLP endpoint URL
Configuration
1. Get Your Kloudfuse OTLP Endpoint
In your Kloudfuse console, locate your OTLP ingestion endpoint URL; you will need it in the next step.

2. Configure Databrain Environment Variables
Add the OpenTelemetry environment variables (including OTEL_ENABLED=true and your Kloudfuse OTLP endpoint) to your Databrain backend service; the next two steps show where to set them for Docker Compose and Kubernetes.

3. Docker Compose Configuration
Update your docker-compose.yml:
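The exact layout depends on your existing compose file; a minimal sketch, assuming the backend service is named databrain-backend (hypothetical) and that Databrain honors the standard OpenTelemetry SDK variables alongside its own OTEL_ENABLED flag:

```yaml
services:
  databrain-backend:   # hypothetical service name; match your compose file
    environment:
      # Databrain flag referenced in the Troubleshooting table below.
      - OTEL_ENABLED=true
      # Matches the service.name used when filtering traces in Kloudfuse.
      - OTEL_SERVICE_NAME=databrain-api
      # Placeholder; substitute the OTLP endpoint from your Kloudfuse console.
      - OTEL_EXPORTER_OTLP_ENDPOINT=https://YOUR-KLOUDFUSE-ENDPOINT
```

After editing, recreate the service (docker compose up -d) so the new variables take effect.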
4. Kubernetes Configuration
If deploying on Kubernetes, set the same environment variables from step 2 in your deployment's container spec.

What Gets Sent to Kloudfuse
Once configured, Databrain automatically sends:

| Telemetry Type | Description |
|---|---|
| Traces | API request spans with timing, status codes, and errors |
| Metrics | Request latency histograms, error rates, throughput |
| Logs | Correlated logs with trace context (trace_id, span_id) |
Verification
- Restart Databrain after configuration changes
- Make a few API requests to generate telemetry
- Check Kloudfuse UI:
  - Navigate to Traces → filter by service.name = databrain-api
  - Check Metrics for http.server.duration histograms
  - View Logs correlated with trace IDs
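To generate a burst of telemetry for step 2 above, a small request loop is enough. The URL below is a placeholder for your Databrain API; adjust host, port, and path to match your deployment:

```shell
# Placeholder URL: point this at a real endpoint of your Databrain API.
DATABRAIN_URL="${DATABRAIN_URL:-http://localhost:8080/api/health}"

SENT=0
for i in 1 2 3 4 5; do
  # --max-time keeps the loop fast if the service is down; failures are
  # tolerated because the goal is only to exercise the request path.
  curl -s -o /dev/null --max-time 2 "$DATABRAIN_URL" || true
  SENT=$((SENT + 1))
done
echo "sent $SENT test requests"
```

Per the Troubleshooting table, allow roughly 30 seconds before expecting the resulting traces to appear.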
Troubleshooting
| Issue | Solution |
|---|---|
| No data in Kloudfuse | Verify OTEL_ENABLED=true and that the endpoint is reachable |
| Connection refused | Check network connectivity and firewall rules |
| 401/403 errors | Verify Kloudfuse API credentials if authentication is required |
| Missing traces | Ensure requests are being made; traces appear after ~30 seconds |
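For the connectivity rows above, a quick probe from the host running Databrain can separate DNS and firewall problems from application misconfiguration. The URL below is a placeholder; use the endpoint from your Kloudfuse console:

```shell
# Placeholder endpoint; substitute your real Kloudfuse OTLP URL.
OTLP_ENDPOINT="${OTLP_ENDPOINT:-https://your-stack.kloudfuse.example/otlp}"

# Any HTTP response (even 4xx/5xx) proves the host is reachable; only DNS,
# routing, or firewall failures make curl itself exit non-zero here.
if curl -s -o /dev/null --max-time 5 "$OTLP_ENDPOINT"; then
  STATUS=reachable
else
  STATUS=unreachable
fi
echo "OTLP endpoint is $STATUS"
```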

