Create an internal Iceberg table
Creating and using an internal Iceberg table is a two-step process: first, you define the storage and catalog details in a `CONNECTION` object, and then you create the table itself.
Step 1: Create an Iceberg connection
An Iceberg `CONNECTION` defines the catalog and object storage configuration.
You must specify the `type` and `warehouse.path` parameters, along with the required parameters for your catalog and object storage. To use the JDBC-based built-in catalog, set `hosted_catalog` to `true`.
You can also set the optional `commit_checkpoint_interval` parameter to control how often RisingWave commits to Iceberg. The default is 60, measured in checkpoints; with RisingWave’s default checkpoint interval of one second, this means a commit roughly every 60 seconds. You can adjust this value based on your freshness and performance requirements.
When you create a `CONNECTION`, you specify the object storage backend where the table data will be stored. You also specify the catalog that will manage the table’s metadata.
For S3 credentials (applies to all catalogs):
- If `enable_config_load = false`: you must provide `s3.access.key` and `s3.secret.key` (you may also set `s3.iam_role_arn`).
- If `enable_config_load = true`: don’t provide `s3.access.key`/`s3.secret.key` (you may set `s3.iam_role_arn`, or rely on the role already available in your environment/config).
RisingWave supports the following catalog types:

- Built-in catalog
- JDBC catalog
- Glue catalog
- REST catalog
- S3 Tables catalog
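As a minimal sketch, a connection using the built-in (hosted) catalog might look like the following; the connection name, bucket path, region, and credential values are placeholders, and `s3.region` is assumed to be accepted alongside the other S3 parameters:

```sql
CREATE CONNECTION my_iceberg_conn WITH (
    type = 'iceberg',
    warehouse.path = 's3://my-bucket/iceberg-warehouse', -- placeholder path
    s3.region = 'us-east-1',                             -- placeholder region
    s3.access.key = 'my-access-key',                     -- required when enable_config_load = false
    s3.secret.key = 'my-secret-key',
    hosted_catalog = true,                               -- use the JDBC-based built-in catalog
    commit_checkpoint_interval = 60                      -- optional; ~60 s commits by default
);
```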
Step 2: Create an internal Iceberg table
Create an internal Iceberg table using the `ENGINE = iceberg` clause.
To create Iceberg tables, RisingWave needs to know which Iceberg `CONNECTION` to use; this connection carries both the object storage settings and the catalog settings. You can set it through the `iceberg_engine_connection` parameter, at either the session or the system level, as shown in the sketch below.

Optionally, specify `partition_by` in the `WITH` clause to optimize query performance. Supported formats include a plain column name, `bucket(n, column)`, and `truncate(n, column)`. The partition key must be a prefix of the primary key.
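A minimal sketch, assuming the connection `public.my_iceberg_conn` from Step 1 (table and column names are placeholders):

```sql
-- Point the Iceberg table engine at the connection for this session
-- (use ALTER SYSTEM SET to make it the cluster-wide default instead).
SET iceberg_engine_connection = 'public.my_iceberg_conn';

-- A regular RisingWave table whose data is stored as an Iceberg table.
CREATE TABLE user_events (
    user_id INT,
    event_time TIMESTAMPTZ,
    action VARCHAR,
    PRIMARY KEY (user_id, event_time)
) ENGINE = iceberg;
```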
Work with internal tables
Once created, you can work with an internal Iceberg table using familiar SQL (insert, query, materialized views). One important difference: new writes become queryable only after an Iceberg commit. By default, Iceberg commits happen about every 60 seconds (controlled by `commit_checkpoint_interval`).
Ingest data
You can ingest data using standard `INSERT` statements or by streaming data from a source using `CREATE SINK ... INTO`.
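For example, using the hypothetical `user_events` table from above (the upstream source `raw_events_source` and the sink name are also placeholders):

```sql
-- One-off insert, as with any RisingWave table.
INSERT INTO user_events VALUES (1, now(), 'login');

-- Continuously stream rows from an existing source into the table.
CREATE SINK events_sink INTO user_events AS
    SELECT user_id, event_time, action
    FROM raw_events_source;
```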
Query data
Query the table directly with `SELECT` or use it as a source for a materialized view.
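For example, against the hypothetical `user_events` table:

```sql
-- Ad-hoc batch query.
SELECT action, count(*) FROM user_events GROUP BY action;

-- Incrementally maintained materialized view over the Iceberg table.
CREATE MATERIALIZED VIEW action_counts AS
    SELECT action, count(*) AS cnt
    FROM user_events
    GROUP BY action;
```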
Time travel
Time travel queries work on committed Iceberg snapshots. Make sure at least one Iceberg commit has happened before using these queries.
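A sketch of the time travel syntax, assuming it matches the `FOR SYSTEM_TIME AS OF` / `FOR SYSTEM_VERSION AS OF` forms RisingWave uses for Iceberg sources; the timestamp and snapshot ID are placeholders:

```sql
-- Read the table as of a point in time.
SELECT * FROM user_events
FOR SYSTEM_TIME AS OF TIMESTAMPTZ '2025-01-01 00:00:00+00:00';

-- Read a specific committed Iceberg snapshot.
SELECT * FROM user_events
FOR SYSTEM_VERSION AS OF 1234567890123456789;
```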
Partition strategy

RisingWave’s Iceberg table engine supports table partitioning using the `partition_by` option. Partitioning helps organize data for efficient storage and query performance. You can partition by one or more columns, separated by commas, and optionally apply a transform function to each column to customize partitioning.

Supported transforms include `identity`, `truncate(n)`, `bucket(n)`, `year`, `month`, `day`, `hour`, and `void`. For more details on Iceberg partitioning, see Partition transforms.
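For instance, sticking to the `bucket(n, column)` form shown earlier (table and columns are placeholders; note that the partition key must be a prefix of the primary key):

```sql
CREATE TABLE sensor_readings (
    device_id INT,
    reading_time TIMESTAMPTZ,
    value DOUBLE PRECISION,
    PRIMARY KEY (device_id, reading_time)
) WITH (
    -- Hash device_id into 16 buckets; device_id is a prefix of the primary key.
    partition_by = 'bucket(16, device_id)'
) ENGINE = iceberg;
```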
Table maintenance
To maintain good performance and manage storage costs, internal Iceberg tables require periodic maintenance, including compaction and snapshot expiration. RisingWave provides both automatic and manual maintenance options. For complete details, see the Iceberg table maintenance guide.

External access
Because internal tables are standard Iceberg tables, they can be read by external query engines such as Spark or Trino using the same catalog and storage configuration.
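Spark example (a sketch: the catalog name `rw_catalog` is a placeholder, and the Spark session must be configured with the same catalog and warehouse settings as the RisingWave `CONNECTION`):

```sql
-- Run in spark-sql with the Iceberg catalog registered, e.g. via
--   --conf spark.sql.catalog.rw_catalog=org.apache.iceberg.spark.SparkCatalog
-- plus the catalog type, URI, and warehouse settings matching the connection.
SELECT * FROM rw_catalog.public.user_events LIMIT 10;
```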
Limitations

- Advanced schema evolution operations are not yet supported.
- To ensure data consistency, only RisingWave should write to internal Iceberg tables.