This quickstart creates an internal Iceberg table (RisingWave-managed) backed by AWS S3, using RisingWave’s built-in catalog (no separate catalog service to deploy).

Prerequisites

  • A running RisingWave cluster (self-hosted or RisingWave Cloud) and access to run SQL.
  • AWS CLI configured with credentials (AK/SK).
  • An S3 bucket for your Iceberg warehouse.
The built-in catalog stores Iceberg metadata in RisingWave’s metastore database. The metastore backend must be PostgreSQL or MySQL (SQLite is not supported).
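If you are running RisingWave locally with default settings, a quick way to confirm SQL access is to connect with psql; the host, port, database, and user below are the local-deployment defaults and may differ in your environment:

psql -h localhost -p 4566 -d dev -U root -c 'SELECT version();'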

Step 1: Create an S3 bucket for the Iceberg warehouse

export AWS_REGION=us-west-2
export ICEBERG_BUCKET=my-iceberg-demo-bucket

# If AWS_REGION is us-east-1, omit --create-bucket-configuration.
aws s3api create-bucket \
  --bucket "${ICEBERG_BUCKET}" \
  --region "${AWS_REGION}" \
  --create-bucket-configuration "LocationConstraint=${AWS_REGION}"
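To verify that the bucket exists and your credentials can reach it, run a head-bucket check (it prints nothing on success and an error otherwise):

aws s3api head-bucket --bucket "${ICEBERG_BUCKET}"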

Step 2: Create an Iceberg CONNECTION (built-in catalog)

Run the following in RisingWave (replace placeholders):
CREATE CONNECTION iceberg_builtin_conn WITH (
  type = 'iceberg',
  hosted_catalog = true,

  -- S3 warehouse (Iceberg data files)
  warehouse.path = 's3://my-iceberg-demo-bucket/warehouse/',
  s3.region = 'us-west-2',
  s3.access.key = 'YOUR_AWS_ACCESS_KEY_ID',
  s3.secret.key = 'YOUR_AWS_SECRET_ACCESS_KEY',
  -- Use the explicit credentials above instead of loading S3 config from the environment
  enable_config_load = false
);
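To confirm the connection was created, list the connections visible in the current database (the exact output columns depend on your RisingWave version):

SHOW CONNECTIONS;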

Step 3: Create and write to an internal Iceberg table

-- Use the connection as the default for this session
SET iceberg_engine_connection = 'public.iceberg_builtin_conn';

CREATE TABLE user_events (
  user_id INT,
  event_type VARCHAR,
  event_ts TIMESTAMPTZ,
  PRIMARY KEY (user_id, event_ts)
)
WITH (
  -- Commit to Iceberg on every checkpoint (about 1 second with default settings) for quick visibility in this demo
  commit_checkpoint_interval = 1
)
ENGINE = iceberg;
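Before writing any data, you can confirm the table exists and inspect its schema (output formats vary slightly across RisingWave versions):

SHOW TABLES;
DESCRIBE user_events;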
Insert some rows and query them:
INSERT INTO user_events VALUES
  (1, 'login',  '2026-01-01 10:00:00Z'),
  (1, 'click',  '2026-01-01 10:01:00Z'),
  (2, 'login',  '2026-01-01 11:00:00Z');

SELECT * FROM user_events ORDER BY event_ts;
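Writes to an Iceberg-engine table become visible once RisingWave commits a checkpoint, so with commit_checkpoint_interval = 1 the rows should appear within about a second. If the SELECT returns nothing at first, forcing a checkpoint and retrying should help:

FLUSH;
SELECT * FROM user_events ORDER BY event_ts;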

What you just built

  • RisingWave now manages an Iceberg table (ENGINE = iceberg) whose data is stored under warehouse.path on S3; see the query sketch after this list for inspecting its snapshots from SQL.
  • You can point external Iceberg engines at the same catalog and warehouse to read the table.
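To inspect the Iceberg commits from inside RisingWave, recent versions expose Iceberg metadata through system views; the view names below are taken from current documentation and may differ or be absent in older releases:

SELECT * FROM rw_catalog.rw_iceberg_snapshots;
SELECT * FROM rw_catalog.rw_iceberg_files;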
For next steps, see Create and manage internal Iceberg tables.