This article provides a step-by-step guide for defining and running external Python UDFs, and calling them from RisingWave.
RisingWave uses arrow-udf as its remote UDF framework. The framework provides a Python SDK for defining and running UDFs outside of the RisingWave process.
Run the following command to install arrow-udf:
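Assuming the SDK is published on PyPI under the same name, the install command is:

```bash
pip install arrow-udf
```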
The minimum version of RisingWave that supports arrow-udf Python UDFs is 1.10. If you are using an older version of RisingWave, please refer to the historical version of the documentation. If you have used an older version of the RisingWave UDF SDK (risingwave 0.1), we strongly encourage you to upgrade to the latest version. You can refer to the migration guide for instructions.
To define UDFs in Python, you need to create a Python file and define your functions using the `udf` (for scalar functions) and `udtf` (for set-returning/table functions) decorators provided by the arrow-udf module.

As an example, let's define some simple UDFs in a Python file named `udf.py`:
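The listing below is a minimal sketch of such a file. The decorator arguments (`input_types`, `result_type`/`result_types`, `io_threads`, `batch`), the struct type string, and the OpenAI model name are assumptions based on common arrow-udf examples; adjust them to match your SDK version.

```python
import time
from typing import List

from arrow_udf import udf, udtf, UdfServer
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@udf(input_types=['INT', 'INT'], result_type='INT')
def gcd(x: int, y: int) -> int:
    # Greatest common divisor of two integers.
    while y != 0:
        x, y = y, x % y
    return x


@udf(input_types=['INT'], result_type='INT', io_threads=32)
def blocking(x: int) -> int:
    # Simulates an IO-bound call; io_threads lets the server run
    # multiple invocations of this function concurrently.
    time.sleep(0.01)
    return x


@udf(input_types=['VARCHAR'], result_type='STRUCT<key VARCHAR, value VARCHAR>')
def key_value(pair: str):
    # Splits 'k=v' into a struct with fields 'key' and 'value'.
    # The exact struct type string syntax may differ across SDK versions.
    key, value = pair.split('=', 1)
    return {'key': key, 'value': value}


@udtf(input_types='INT', result_types='INT')
def series(n: int):
    # Table function: yields 0, 1, ..., n-1.
    for i in range(n):
        yield i


@udf(input_types=['VARCHAR'], result_type='REAL[]', batch=True)
def text_embedding(texts: List[str]) -> List[List[float]]:
    # Batch mode: receives a list of input strings and must return a
    # list of embeddings in the same order.
    response = client.embeddings.create(model='text-embedding-3-small', input=texts)
    return [item.embedding for item in response.data]


if __name__ == '__main__':
    server = UdfServer(location='0.0.0.0:8815')
    server.add_function(gcd)
    server.add_function(blocking)
    server.add_function(key_value)
    server.add_function(series)
    server.add_function(text_embedding)
    server.serve()
```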
Code explanation:

- The scalar function `gcd`, decorated with `@udf`, takes two integer inputs and returns the greatest common divisor of the two integers.
- The scalar function `blocking`, decorated with `@udf`, uses the `io_threads` parameter, which specifies the number of threads the Python UDF will use during execution to improve the performance of IO-intensive functions. Note that multithreading cannot speed up compute-intensive functions due to the GIL.
- The scalar function `key_value`, decorated with `@udf`, takes a single string input and returns a structured output.
- The table function `series`, decorated with `@udtf`, takes an integer input and yields a sequence of integers from 0 to n-1.
- The scalar function `text_embedding`, decorated with `@udf`, calls the OpenAI API to generate text embeddings for input texts. The `batch=True` parameter indicates that the function accepts batch input and returns batch output. Each embedding vector in the returned list should correspond to the input text at the same index.
- Finally, the script starts a UDF server using `UdfServer` and listens for incoming requests on address `0.0.0.0:8815`. All defined functions are registered to the server using `server.add_function` before the server is started with the `serve()` method. The `if __name__ == '__main__':` conditional ensures that the server only starts when the script is run directly, rather than when it is imported as a module.
For more examples of UDFs, such as functions handling complex data types like JSONB, see this test file in RisingWave source code.
Simply run the Python file to start the UDF server.
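For example, assuming the file from step 2 is named `udf.py`:

```bash
python3 udf.py
```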
The UDF server will start serving requests, allowing you to call the defined UDFs from RisingWave.
In RisingWave, use the CREATE FUNCTION command to declare the functions you defined.
Here are the SQL statements for declaring the functions defined in step 2.
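The statements below are a sketch; verify each signature against your Python decorators (the table function's output column name `x` here is an assumption):

```sql
CREATE FUNCTION gcd(int, int) RETURNS int
AS gcd USING LINK 'http://localhost:8815';

CREATE FUNCTION blocking(int) RETURNS int
AS blocking USING LINK 'http://localhost:8815';

CREATE FUNCTION key_value(varchar) RETURNS struct<key varchar, value varchar>
AS key_value USING LINK 'http://localhost:8815';

CREATE FUNCTION series(int) RETURNS TABLE (x int)
AS series USING LINK 'http://localhost:8815';

CREATE FUNCTION text_embedding(varchar) RETURNS real[]
AS text_embedding USING LINK 'http://localhost:8815';
```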
The function signature in the `CREATE FUNCTION` statement must match the signature defined in the Python function decorator. The field names in the `STRUCT` type must exactly match the ones defined in the Python decorator.

If you are running RisingWave using Docker, you may need to replace the host `localhost` with `host.docker.internal` in the `USING LINK` clause.
Once the UDFs are created in RisingWave, you can use them in SQL queries just like any built-in function.
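For example, assuming the declarations sketched above:

```sql
SELECT gcd(25, 15);                  -- returns 5
SELECT key_value('config=enabled');  -- returns a struct with fields key and value
SELECT * FROM series(5);             -- returns 0, 1, 2, 3, 4
SELECT text_embedding('RisingWave is a streaming database');
```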
Due to the limitations of the Python interpreter’s Global Interpreter Lock (GIL), the UDF server can only utilize a single CPU core when processing requests. If you find that the throughput of the UDF server is insufficient, consider scaling out the UDF server.
How to determine if the UDF server needs scaling?
You can use tools like `top` to monitor the CPU usage of the UDF server. If the CPU usage is close to 100%, it indicates that the CPU resources of the UDF server are insufficient and scaling is necessary.
To scale the UDF server, you can launch multiple UDF server instances on different ports and use a load balancer to distribute requests among them.
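A minimal sketch, assuming the same `arrow_udf` server API as above: let `udf.py` take its port from the command line so that several instances can be started (the argument handling here is an assumption, not part of the original listing):

```python
# At the bottom of udf.py, replace the fixed port with a command-line argument:
import sys

if __name__ == '__main__':
    port = sys.argv[1] if len(sys.argv) > 1 else '8815'
    server = UdfServer(location=f'0.0.0.0:{port}')
    server.add_function(gcd)
    server.add_function(blocking)
    server.add_function(key_value)
    server.add_function(series)
    server.add_function(text_embedding)
    server.serve()
```

You can then start, for example, four instances with `python3 udf.py 8816` through `python3 udf.py 8819`.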
Then, you can start a load balancer, such as Nginx. It listens on port 8815 and forwards requests to UDF servers on ports 8816-8819.
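As a sketch only: the remote UDF protocol is based on Arrow Flight (gRPC), so Nginx needs gRPC proxying; the upstream ports below assume the four instances started above.

```nginx
# nginx.conf (sketch)
events {}

http {
    upstream udf_backend {
        server localhost:8816;
        server localhost:8817;
        server localhost:8818;
        server localhost:8819;
    }

    server {
        listen 8815 http2;

        location / {
            grpc_pass grpc://udf_backend;
        }
    }
}
```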
The RisingWave Python UDF SDK supports the following data types:
| SQL Type | Python Type | Notes |
|---|---|---|
| BOOLEAN | `bool` | |
| SMALLINT | `int` | |
| INT | `int` | |
| BIGINT | `int` | |
| REAL | `float` | |
| DOUBLE PRECISION | `float` | |
| DECIMAL | `decimal.Decimal` | |
| DATE | `datetime.date` | |
| TIME | `datetime.time` | |
| TIMESTAMP | `datetime.datetime` | |
| INTERVAL | `pyarrow.MonthDayNano` | Fields can be obtained by months(), days() and nanoseconds() from MonthDayNano |
| VARCHAR | `str` | |
| BYTEA | `bytes` | |
| JSONB | `Any` | Parsed / serialized by `json.loads` / `json.dumps` |
| T[] | `List[T]` | |
| STRUCT<> | `Dict[str, Any]` | |
| …others | Not supported yet. | |
If you have used the Python UDF SDK in RisingWave 1.9 or earlier versions, please refer to the following steps for upgrading.
- Import the `arrow_udf` package instead of `risingwave.udf`.
- The type aliases `FLOAT4` and `FLOAT8` are removed and replaced by `REAL` and `DOUBLE PRECISION`.
- The `STRUCT` type now requires field names. The field names must exactly match the ones defined in `CREATE FUNCTION`. A function that returns a struct type now returns a dictionary instead of a tuple, and the field names of the dictionary must match the definition in the signature.
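For example, a struct-returning function would now be written roughly as follows (a sketch; the exact struct type string is an assumption and must match your `CREATE FUNCTION` definition):

```python
from arrow_udf import udf

# Old SDK (risingwave.udf): anonymous struct fields and a tuple return value.
# New SDK (arrow_udf): field names are required and a dict is returned.
@udf(input_types=['VARCHAR'], result_type='STRUCT<key VARCHAR, value VARCHAR>')
def key_value(pair: str):
    key, value = pair.split('=', 1)
    return {'key': key, 'value': value}
```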