The Redshift connector allows querying and creating tables in an external Amazon Redshift cluster. This can be used to join data between different systems like Redshift and Hive, or between two different Redshift clusters.
To connect to Redshift, you need:
Network access from the Trino coordinator and workers to Redshift. Port 5439 is the default port.
To configure the Redshift connector, create a catalog properties file in
etc/catalog named, for example, example.properties, to mount the
Redshift connector as the example catalog. Create the file with the
following contents, replacing the connection properties as appropriate for your
setup:
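The following sketch shows the typical contents; connector.name must be redshift, while the endpoint, database name, and credentials are placeholders to replace with your own values:

connector.name=redshift
connection-url=jdbc:redshift://example.net:5439/database
connection-user=root
connection-password=secret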
The connection-user and connection-password properties are typically required and
determine the user credentials for the connection, often a service user. You can
use secrets to avoid actual values in the catalog properties files.
If you have TLS configured with a globally-trusted certificate installed on your
data source, you can enable TLS between your cluster and the data
source by appending a parameter to the JDBC connection string set in the
connection-url catalog configuration property.
For example, on version 2.1 of the Redshift JDBC driver, TLS/SSL is enabled by
default with the
SSL parameter. You can disable or further configure TLS
by appending parameters to the
connection-url configuration property:
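As a sketch, a connection-url that keeps TLS explicitly enabled could look like the following; the endpoint and database name are placeholders, and further driver parameters can be appended in the same semicolon-separated style:

connection-url=jdbc:redshift://example.net:5439/database;SSL=TRUE;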
For more information on TLS configuration options, see the Redshift JDBC driver documentation.
Data source authentication#
The connector can provide credentials for the data source connection in multiple ways:
inline, in the connector configuration file
in a separate properties file
in a key store file
as extra credentials set when connecting to Trino
You can use secrets to avoid storing sensitive values in the catalog properties files.
The following table describes configuration properties for connection credentials:
credential-provider.type: Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE. Defaults to INLINE.
connection-user: Connection user name.
connection-password: Connection password.
user-credential-name: Name of the extra credentials property, whose value to use as the user name.
password-credential-name: Name of the extra credentials property, whose value to use as the password.
connection-credential-file: Location of the properties file where credentials are present. It must contain the connection-user and connection-password properties.
keystore-file-path: The location of the Java Keystore file, from which to read credentials.
keystore-type: File format of the keystore file, for example JKS or PEM.
keystore-password: Password for the key store.
keystore-user-credential-name: Name of the key store entity to use as the user name.
keystore-user-credential-password: Password for the user name key store entity.
keystore-password-credential-name: Name of the key store entity to use as the password.
keystore-password-credential-password: Password for the password key store entity.
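As an illustrative sketch, a catalog using the FILE provider could point at a separate credentials file; the path is a placeholder, and the referenced file must contain the connection-user and connection-password properties:

credential-provider.type=FILE
connection-credential-file=etc/redshift-credentials.properties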
Multiple Redshift databases or clusters#
The Redshift connector can only access a single database within a Redshift cluster. Thus, if you have multiple Redshift databases, or want to connect to multiple Redshift clusters, you must configure multiple instances of the Redshift connector.
To add another catalog, add another properties file to etc/catalog
with a different name, making sure it ends in .properties. For example,
if you name the property file sales.properties, Trino creates a
catalog named sales using the configured connector.
General configuration properties#
The following table describes general catalog configuration properties for the connector:
case-insensitive-name-matching: Support case insensitive schema and table names. Defaults to false.
case-insensitive-name-matching.cache-ttl: Duration for which case insensitive schema and table names are cached. Defaults to 1m.
case-insensitive-name-matching.config-file: Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. Defaults to null.
case-insensitive-name-matching.config-file.refresh-period: Frequency with which Trino checks the name matching configuration file for changes. The duration value defaults to 0s, which disables refresh.
metadata.cache-ttl: Duration for which metadata, including table and column statistics, is cached. Defaults to 0s, which disables caching.
metadata.cache-missing: Cache the fact that metadata, including table and column statistics, is not available. Defaults to false.
metadata.schemas.cache-ttl: Duration for which schema metadata is cached. Defaults to the value of metadata.cache-ttl.
metadata.tables.cache-ttl: Duration for which table metadata is cached. Defaults to the value of metadata.cache-ttl.
metadata.statistics.cache-ttl: Duration for which tables statistics are cached. Defaults to the value of metadata.cache-ttl.
metadata.cache-maximum-size: Maximum number of objects stored in the metadata cache. Defaults to 10000.
write.batch-size: Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. Defaults to 1000.
dynamic-filtering.enabled: Push down dynamic filters into JDBC queries. Defaults to true.
dynamic-filtering.wait-timeout: Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries. Defaults to 20s.
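For example, a catalog could enable metadata caching with entries such as the following; the values are illustrative, not recommendations:

metadata.cache-ttl=10m
metadata.cache-missing=true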
Appending query metadata#
The optional parameter
query.comment-format allows you to configure a SQL
comment that is sent to the datasource with each query. The format of this
comment can contain any characters and the following metadata:
$QUERY_ID: The identifier of the query.
$USER: The name of the user who submits the query to Trino.
$SOURCE: The identifier of the client tool used to submit the query, for example trino-cli.
$TRACE_TOKEN: The trace token configured with the client tool.
The comment can provide more context about the query. This additional
information is available in the logs of the datasource. To include environment
variables from the Trino cluster with the comment, use the ${ENV:VARIABLE-NAME} syntax.
The following example sets a simple comment that identifies each query sent by Trino:
query.comment-format=Query sent by Trino.
With this configuration, a query such as
SELECT * FROM example_table; is
sent to the datasource with the comment appended:
SELECT * FROM example_table; /*Query sent by Trino.*/
The following example improves on the preceding example by using metadata:
query.comment-format=Query $QUERY_ID sent by user $USER from Trino.
If, for example, the user Jane sent the query with the query identifier
20230622_180528_00000_bkizg, the following comment string is sent to the datasource:
SELECT * FROM example_table; /*Query 20230622_180528_00000_bkizg sent by user Jane from Trino.*/
Certain JDBC driver settings and logging configurations might cause the comment to be removed.
Domain compaction threshold#
Pushing down a large list of predicates to the data source can compromise
performance. Trino compacts large predicates into a simpler range predicate
by default to ensure a balance between performance and predicate pushdown.
If necessary, the threshold for this compaction can be increased to improve
performance when the data source is capable of taking advantage of large
predicates. Increasing this threshold may improve pushdown of large
dynamic filters. The domain-compaction-threshold catalog configuration property or the
domain_compaction_threshold catalog session property can be used to adjust the default value of
32 for this threshold.
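For example, to raise the threshold for a single session in a catalog named example (the value 1024 is arbitrary):

SET SESSION example.domain_compaction_threshold = 1024;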
Procedures#
system.flush_metadata_cache()
Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

USE example.example_schema;
CALL system.flush_metadata_cache();
Case insensitive matching#
When case-insensitive-name-matching is set to true, Trino
is able to query non-lowercase schemas and tables by maintaining a mapping of
the lowercase name to the actual name in the remote system. However, if two
schemas and/or tables have names that differ only in case (such as “customers”
and “Customers”) then Trino fails to query them due to ambiguity.
In these cases, use the case-insensitive-name-matching.config-file catalog
configuration property to specify a configuration file that maps these remote
schemas/tables to their respective Trino schemas/tables:
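A mapping file along the following lines matches the forwarding behavior described next; the remote names are examples:

{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }
  ],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }
  ]
}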
Queries against one of the tables or schemas defined in the mapping
attributes are run against the corresponding remote entity. For example, a query
against tables in the case_insensitive_1 schema is forwarded to the
CaseSensitiveName schema and a query against case_insensitive_2 is forwarded
to cASEsENSITIVEnAME.
At the table mapping level, a query on case_insensitive_1.table_1 as
configured above is forwarded to CaseSensitiveName.tablex, and a query on
case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.
By default, when a change is made to the mapping configuration file, Trino must
be restarted to load the changes. Optionally, you can set the
case-insensitive-name-mapping.refresh-period to have Trino refresh the
properties without requiring a restart:
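For example, to re-read the mapping file every 30 seconds (an illustrative interval):

case-insensitive-name-mapping.refresh-period=30s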
Non-transactional INSERT#
The connector supports adding rows using INSERT statements.
By default, data insertion is performed by writing data to a temporary table.
You can skip this step to improve performance and write directly to the target
table. Set the insert.non-transactional-insert.enabled catalog property
or the corresponding non_transactional_insert catalog session property to
true.
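For example, to enable non-transactional inserts for a single session in a catalog named example:

SET SESSION example.non_transactional_insert = true;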
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
Querying Redshift#
The Redshift connector provides a schema for every Redshift schema.
You can see the available Redshift schemas by running
SHOW SCHEMAS FROM example;
If you have a Redshift schema named
web, you can view the tables
in this schema by running
SHOW TABLES FROM example.web;
You can see a list of the columns in the clicks table in the web schema
using either of the following:

DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;
Finally, you can access the clicks table in the web schema:
SELECT * FROM example.web.clicks;
If you used a different name for your catalog properties file, use that catalog
name instead of
example in the above examples.
Type mapping configuration properties#
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
unsupported-type-handling: Configure how unsupported column data types are handled. IGNORE means the column is not accessible; CONVERT_TO_VARCHAR means the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. Defaults to IGNORE.
jdbc-types-mapped-to-varchar: Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR.
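For example, to expose unsupported columns as VARCHAR instead of hiding them:

unsupported-type-handling=CONVERT_TO_VARCHAR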
SQL support#
The connector provides read access and write access to data and metadata in Redshift. In addition to the globally available and read operation statements, the connector supports the following features:
UPDATE#
UPDATE statements with constant assignments and predicates are
supported. For example, the following statement is supported because the values
assigned are constants:
UPDATE table SET col1 = 1 WHERE col3 = 1
Arithmetic expressions, function calls, and other non-constant UPDATE
statements are not supported. For example, the following statement is not
supported because arithmetic expressions cannot be used with the SET command:
UPDATE table SET col1 = col2 + 2 WHERE col3 = 1
The =, !=, >, <, >=, <=, IN, and NOT IN operators are supported in
predicates. The following statement is not supported because the AND operator
cannot be used in predicates:
UPDATE table SET col1 = 1 WHERE col3 = 1 AND col2 = 3
All column values of a table row cannot be updated simultaneously. For a three column table, the following statement is not supported:
UPDATE table SET col1 = 1, col2 = 2, col3 = 3 WHERE col3 = 1
DELETE#
If a WHERE clause is specified, the DELETE operation only works if the
predicate in the clause can be fully pushed down to the data source.
ALTER TABLE RENAME TO#
The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two
The following statement attempts to rename a table across schemas, and therefore is not supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two
ALTER SCHEMA#
The connector supports renaming a schema with the ALTER SCHEMA RENAME
statement. ALTER SCHEMA SET AUTHORIZATION is not supported.
Fault-tolerant execution support#
The connector supports Fault-tolerant execution of query processing. Read and write operations are both supported with any retry policy.
Table functions#
The connector provides specific table functions to access Redshift.
query(varchar) -> table#
The query function allows you to query the underlying database directly. It
requires syntax native to Redshift, because the full query is pushed down and
processed in Redshift. This can be useful for accessing native features which
are not implemented in Trino or for improving query performance in situations
where running a query natively may be faster.
The native query passed to the underlying data source is required to return a table as a result set. Only the data source performs validation or security checks for these queries using its own configuration. Trino does not perform these tasks. Only use passthrough queries to read data.
For example, query the example catalog and select the top 10 nations by population:

SELECT * FROM TABLE(
  example.system.query(
    query => 'SELECT TOP 10 * FROM tpch.nation ORDER BY population DESC'
  )
);
The query engine does not preserve the order of the results of this
function. If the passed query contains an
ORDER BY clause, the
function result may not be ordered as expected.