- RisingWave Webhook connector (`connector = 'webhook'`)
- OpenTelemetry Collector OTLP/HTTP exporter (`otlphttp`) with `encoding: json`
If you want a standalone HTTP ingestion service (JSON/NDJSON) and SQL-over-HTTP, see Events API.
How it works
Data flow:

- Your apps/agents send telemetry to the OpenTelemetry Collector.
- The Collector exports telemetry as JSON over HTTP (OTLP/HTTP JSON) to RisingWave.
- RisingWave receives the HTTP POST requests on the webhook listener (default port `4560`) and stores each request body as a row in a webhook table (`data JSONB`).
- You build materialized views to parse, filter, and aggregate the telemetry in real time.
Prerequisites
- A running RisingWave cluster with the webhook listener enabled.
  - Default webhook port: `4560`
  - Webhook endpoint format: `http://<HOST>:4560/webhook/<database>/<schema>/<table>`
  - See: Ingest data from webhook
- An OpenTelemetry Collector that can reach the RisingWave webhook endpoint.
1. Create webhook tables in RisingWave
Create one table per signal type. Each table stores the incoming request body in a `JSONB` column.
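A minimal sketch of those tables, assuming the table names used by the endpoints in step 2 (`otel_traces`, `otel_metrics`, `otel_logs`) and omitting request validation, which the note below treats as optional:

```sql
-- One JSONB-payload webhook table per signal type.
CREATE TABLE otel_traces (data JSONB) WITH (connector = 'webhook');
CREATE TABLE otel_metrics (data JSONB) WITH (connector = 'webhook');
CREATE TABLE otel_logs (data JSONB) WITH (connector = 'webhook');
```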
Webhook tables currently support `JSONB` payload columns. See: Ingest data from webhook.

Optional: request validation
For production, validate incoming requests so only authenticated senders can write into your tables. RisingWave supports `VALIDATE ... secure_compare(...)`. See: Request validation.
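As a rough sketch of what that can look like (the secret name, secret value, and header are placeholders; check the Request validation page for the exact syntax in your RisingWave version):

```sql
-- Store a shared secret in RisingWave (placeholder value).
CREATE SECRET otel_webhook_secret WITH (backend = 'meta')
AS 'replace-with-a-long-random-string';

-- Only accept requests whose Authorization header matches the secret.
CREATE TABLE otel_logs (data JSONB) WITH (connector = 'webhook')
VALIDATE SECRET otel_webhook_secret
AS secure_compare(headers ->> 'authorization', otel_webhook_secret);
```

The Collector then has to send the matching header; the `otlphttp` exporter's `headers` setting can attach it to every request.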
2. Configure OpenTelemetry Collector to export to RisingWave (OTLP/HTTP JSON)
Configure an `otlphttp` exporter with `encoding: json` and point its endpoints to your RisingWave webhook tables:

- `http://<rw-host>:4560/webhook/<db>/<schema>/otel_traces`
- `http://<rw-host>:4560/webhook/<db>/<schema>/otel_metrics`
- `http://<rw-host>:4560/webhook/<db>/<schema>/otel_logs`
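A sketch of the exporter section under these assumptions: the host, database (`dev`), and schema (`public`) are placeholders, and the table names match step 1. The signal-specific endpoints and `encoding: json` are standard `otlphttp` exporter settings:

```yaml
exporters:
  otlphttp/risingwave:
    # Send OTLP payloads as JSON instead of protobuf.
    encoding: json
    # One endpoint per signal, pointing at the webhook tables from step 1.
    traces_endpoint: http://rw-host:4560/webhook/dev/public/otel_traces
    metrics_endpoint: http://rw-host:4560/webhook/dev/public/otel_metrics
    logs_endpoint: http://rw-host:4560/webhook/dev/public/otel_logs
```

Remember to reference the exporter in the corresponding pipelines under `service:`.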
Example: collector config from the demo (metrics)
The following example is taken from the `09-otel-demos` demo setup. It scrapes Prometheus metrics and exports them to RisingWave over HTTP.
Source:
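The demo file itself is not reproduced here; the sketch below only illustrates its shape, with a placeholder scrape target and endpoint:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: demo-app               # placeholder scrape job
          scrape_interval: 15s
          static_configs:
            - targets: ["localhost:8889"]  # placeholder Prometheus endpoint

exporters:
  otlphttp/risingwave:
    encoding: json
    metrics_endpoint: http://rw-host:4560/webhook/dev/public/otel_metrics

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp/risingwave]
```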
3. Verify ingestion
Once your Collector is running and exporting to RisingWave, query the tables to confirm rows are arriving.
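For example, a couple of sanity checks against the tables from step 1:

```sql
-- Confirm that POSTed payloads are landing as rows.
SELECT count(*) FROM otel_metrics;

-- Inspect a few raw OTLP/HTTP JSON payloads.
SELECT data FROM otel_logs LIMIT 5;
```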
4. Analyze telemetry with materialized views

OTLP/HTTP JSON payloads are nested. A common workflow is:

- Start by inspecting a few rows in `otel_traces`/`otel_metrics`/`otel_logs`.
- Extract the fields you care about using JSON operators (`->`, `->>`) into a materialized view (see the sketch after this list).
- Build dashboards/alerts by querying the materialized views.
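For illustration only, a sketch that lifts the nested pieces of a log payload into columns. The field paths follow the OTLP JSON structure (`resourceLogs` -> `scopeLogs` -> `logRecords`); real payloads carry arrays that you would typically unnest with RisingWave's JSON functions or flatten in the Collector:

```sql
-- `-> 0` takes the first array element for brevity; adapt for multi-resource payloads.
CREATE MATERIALIZED VIEW otel_log_records AS
SELECT
  data -> 'resourceLogs' -> 0 -> 'resource' -> 'attributes'       AS resource_attributes,
  data -> 'resourceLogs' -> 0 -> 'scopeLogs' -> 0 -> 'logRecords' AS log_records
FROM otel_logs;
```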
For high-volume pipelines, consider transforming telemetry in the Collector (for example, to flatten fields) before exporting to RisingWave. This reduces JSON parsing work inside RisingWave.