Snowflake connector#
The Snowflake connector allows querying and creating tables in an external Snowflake account. This can be used to join data between different systems like Snowflake and Hive, or between two different Snowflake accounts.
Configuration#
To configure the Snowflake connector, create a catalog properties file in `etc/catalog` named, for example, `example.properties`, to mount the Snowflake connector as the `example` catalog. Create the file with the following contents, replacing the connection properties as appropriate for your setup:

```text
connector.name=snowflake
connection-url=jdbc:snowflake://<account>.snowflakecomputing.com
connection-user=root
connection-password=secret
snowflake.account=account
snowflake.database=database
snowflake.role=role
snowflake.warehouse=warehouse
```
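After adding the catalog file, restart the coordinator and workers to pick it up. You can then confirm the catalog is reachable; a minimal check, assuming the file is named `example.properties` as above:

```sql
-- List the schemas of the configured Snowflake database through the new catalog
SHOW SCHEMAS FROM example;
```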
Arrow serialization support#
This is an experimental feature which introduces support for using Apache Arrow as the serialization format when reading from Snowflake. Note the following caveat: Apache Arrow serialization is disabled by default. In order to enable it, add `--add-opens=java.base/java.nio=ALL-UNNAMED` to the Trino JVM config.
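For example, an `etc/jvm.config` with the option added might look like the following sketch; the other options shown are illustrative placeholders and vary per installation:

```text
# etc/jvm.config (excerpt, illustrative values)
-server
-Xmx16G
--add-opens=java.base/java.nio=ALL-UNNAMED
```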
Multiple Snowflake databases or accounts#
The Snowflake connector can only access a single database within a Snowflake account. Thus, if you have multiple Snowflake databases, or want to connect to multiple Snowflake accounts, you must configure multiple instances of the Snowflake connector.
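For example, a hypothetical setup exposing two databases of the same account as separate catalogs could use two property files, `etc/catalog/sales.properties` and `etc/catalog/marketing.properties`, that differ only in the configured database:

```text
# etc/catalog/sales.properties (illustrative)
connector.name=snowflake
connection-url=jdbc:snowflake://<account>.snowflakecomputing.com
connection-user=root
connection-password=secret
snowflake.database=sales
snowflake.warehouse=warehouse

# etc/catalog/marketing.properties (illustrative)
connector.name=snowflake
connection-url=jdbc:snowflake://<account>.snowflakecomputing.com
connection-user=root
connection-password=secret
snowflake.database=marketing
snowflake.warehouse=warehouse
```

Tables are then addressable as `sales.<schema>.<table>` and `marketing.<schema>.<table>`.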
Type mapping#
Because Trino and Snowflake each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.
See the Snowflake documentation for a list of Snowflake data types.
Snowflake type to Trino type mapping#
The connector maps Snowflake types to the corresponding Trino types following this table:
| Snowflake type | Trino type | Notes |
|---|---|---|
| `BOOLEAN` | `BOOLEAN` | |
| `INT`, `INTEGER`, `BIGINT`, `SMALLINT`, `TINYINT`, `BYTEINT` | `DECIMAL(38,0)` | Synonymous with `NUMBER(38,0)`. |
| `FLOAT`, `FLOAT4`, `FLOAT8` | `DOUBLE` | The names `FLOAT`, `FLOAT4`, and `FLOAT8` are for compatibility with other systems; Snowflake treats all three as 64-bit floating-point numbers. |
| `DOUBLE`, `DOUBLE PRECISION`, `REAL` | `DOUBLE` | Synonymous with `FLOAT`. |
| `NUMBER` | `DECIMAL` | Default precision and scale are (38,0). |
| `DECIMAL`, `NUMERIC` | `DECIMAL` | Synonymous with `NUMBER`. |
| `VARCHAR` | `VARCHAR` | |
| `CHAR`, `CHARACTER` | `VARCHAR` | Synonymous with `VARCHAR` except default length is `VARCHAR(1)`. |
| `STRING`, `TEXT` | `VARCHAR` | Synonymous with `VARCHAR`. |
| `BINARY` | `VARBINARY` | |
| `VARBINARY` | `VARBINARY` | Synonymous with `BINARY`. |
| `DATE` | `DATE` | |
| `TIME` | `TIME` | |
| `TIMESTAMP_NTZ` | `TIMESTAMP` | `TIMESTAMP` with no time zone; time zone, if provided, is not stored. See Snowflake Date & Time Data Types for more information. |
| `DATETIME` | `TIMESTAMP` | Alias for `TIMESTAMP_NTZ`. |
| `TIMESTAMP` | `TIMESTAMP` | Alias for one of the other `TIMESTAMP` variants (`TIMESTAMP_NTZ` by default). |
| `TIMESTAMP_TZ` | `TIMESTAMP WITH TIME ZONE` | `TIMESTAMP` with time zone. |
No other types are supported.
Trino type to Snowflake type mapping#
The connector maps Trino types to the corresponding Snowflake types following this table:
| Trino type | Snowflake type | Notes |
|---|---|---|
| `BOOLEAN` | `BOOLEAN` | |
| `TINYINT` | `NUMBER(3, 0)` | |
| `SMALLINT` | `NUMBER(5, 0)` | |
| `INTEGER` | `NUMBER(10, 0)` | |
| `BIGINT` | `NUMBER(19, 0)` | |
| `REAL` | `DOUBLE` | |
| `DOUBLE` | `DOUBLE` | |
| `DECIMAL` | `NUMBER` | |
| `VARCHAR` | `VARCHAR` | |
| `CHAR` | `VARCHAR` | |
| `VARBINARY` | `BINARY` | |
| `DATE` | `DATE` | |
| `TIME` | `TIME` | |
| `TIMESTAMP` | `TIMESTAMP_NTZ` | |
| `TIMESTAMP WITH TIME ZONE` | `TIMESTAMP_TZ` | |
No other types are supported.
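To illustrate the write mapping, a table created through Trino with the types above results in the corresponding Snowflake column types. A minimal sketch; the table and column names are hypothetical:

```sql
CREATE TABLE example.example_schema.orders (
    id      BIGINT,         -- stored in Snowflake as NUMBER(19, 0)
    total   DECIMAL(12, 2), -- stored as NUMBER with the given precision and scale
    note    VARCHAR,        -- stored as VARCHAR
    created TIMESTAMP       -- stored as TIMESTAMP_NTZ
);
```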
Type mapping configuration properties#
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
| Property name | Description | Default value |
|---|---|---|
| `unsupported-type-handling` | Configure how unsupported column data types are handled: `IGNORE`, the column is not accessible; `CONVERT_TO_VARCHAR`, the column is converted to unbounded `VARCHAR`. The respective catalog session property is `unsupported_type_handling`. | `IGNORE` |
| `jdbc-types-mapped-to-varchar` | Allow forced mapping of comma separated lists of data types to convert to unbounded `VARCHAR`. | |
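For example, to surface otherwise unsupported Snowflake columns as `VARCHAR` instead of hiding them, add the following to the catalog properties file; this is a sketch, and how cleanly a given type converts depends on the type:

```text
unsupported-type-handling=CONVERT_TO_VARCHAR
```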
SQL support#
The connector provides read access and write access to data and metadata in a Snowflake database. In addition to the globally available and read operation statements, the connector supports the features described in the following sections.
Procedures#
system.flush_metadata_cache()#
Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the `example` catalog:

```sql
USE example.example_schema;
CALL system.flush_metadata_cache();
```
system.execute('query')#
The `execute` procedure allows you to execute a query in the underlying data source directly. The query must use supported syntax of the connected data source. Use the procedure to access features which are not available in Trino or to execute queries that return no result set and therefore can not be used with the `query` or `raw_query` pass-through table function. Typical use cases are statements that create or alter objects, and require native features such as constraints, default values, automatic identifier creation, or indexes. Queries can also invoke statements that insert, update, or delete data, and do not return any data as a result.
The query text is not parsed by Trino, only passed through, and therefore only subject to any security or access control of the underlying data source.
The following example sets the current database to the `example_schema` of the `example` catalog. Then it calls the procedure in that schema to drop the default value from `your_column` on the `your_table` table using the standard SQL syntax in the parameter value assigned for `query`:

```sql
USE example.example_schema;
CALL system.execute(query => 'ALTER TABLE your_table ALTER COLUMN your_column DROP DEFAULT');
```
Verify that the specific database supports this syntax, and adapt as necessary based on the documentation for the specific connected database and database version.
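As another sketch, Snowflake-specific clauses that have no Trino equivalent, such as a clustering key, can be applied through the procedure. The table and column names here are hypothetical:

```sql
USE example.example_schema;
-- CLUSTER BY is Snowflake-native syntax with no Trino equivalent
CALL system.execute(query => 'ALTER TABLE your_table CLUSTER BY (your_column)');
```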
Table functions#
The connector provides specific table functions to access Snowflake.
query(varchar) -> table#
The `query` function allows you to query the underlying database directly. It requires syntax native to Snowflake, because the full query is pushed down and processed in Snowflake. This can be useful for accessing native features which are not available in Trino or for improving query performance in situations where running a query natively may be faster.
Find details about the SQL support of Snowflake that you can use in the query in the Snowflake SQL Command Reference, including PIVOT, lateral joins and other statements and functions.
The native query passed to the underlying data source is required to return a table as a result set. Only the data source performs validation or security checks for these queries using its own configuration. Trino does not perform these tasks. Only use passthrough queries to read data.
As a simple example, query the `example` catalog and select an entire table:

```sql
SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT
                  *
                FROM
                  tpch.nation'
    )
  );
```
As a practical example, you can use the Snowflake SQL support for PIVOT to pivot on all distinct column values automatically with a dynamic pivot:

```sql
SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => '
        SELECT *
        FROM quarterly_sales
        PIVOT(SUM(amount) FOR quarter IN (ANY ORDER BY quarter))
        ORDER BY empid;
      '
    )
  );
```
Note
The query engine does not preserve the order of the results of this function. If the passed query contains an `ORDER BY` clause, the function result may not be ordered as expected.
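If a specific ordering is required, apply it in the enclosing Trino query instead. A minimal sketch, reusing the `tpch.nation` example from above:

```sql
-- Ordering applied by Trino, outside the pass-through query
SELECT *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT * FROM tpch.nation'
    )
  )
ORDER BY 1;
```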
Performance#
The connector includes a number of performance improvements, detailed in the following sections.
Pushdown#
The connector supports pushdown for a number of operations, including aggregate pushdown for common aggregation functions such as `count()`, `sum()`, `avg()`, `min()`, and `max()`.
Note
The connector performs pushdown where performance may be improved, but in order to preserve correctness an operation may not be pushed down. When pushdown of an operation may result in better performance but risks correctness, the connector prioritizes correctness.
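To check whether an operation was pushed down, inspect the query plan with EXPLAIN; when an aggregation is pushed down, it no longer appears as a separate Aggregate operator above the table scan. A minimal sketch, assuming the hypothetical `orders` table used earlier:

```sql
EXPLAIN SELECT count(*) FROM example.example_schema.orders;
```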