
A cache refresh for Trino


Thinking about our recent work on caching in Trino reminds me of the famous saying, “There are only two hard things in computer science: cache invalidation and naming things.” Well, in the Trino community we know all about caching and naming. With the recent Trino 439 release, caching for object storage file systems got a refresh. Catalogs using the Delta Lake, Hive, Iceberg, and soon Hudi connectors now benefit from the new Alluxio-powered file system caching.

In the past #

So how did we get here? A long, long time ago, Qubole open-sourced a lightweight data caching framework called RubiX. The library was integrated into the Trino Hive connector, and it enabled storage caching for the Hive connector. But over time, any open source project without active maintenance becomes stale. And like a stale cache, a stale open source project can cause issues, or become outdated and unsuitable for modern use. Though RubiX had once served Trino well, it was time to clear out the dust, and RubiX had to go.

Making progress #

Catching back up to 2024, Trino now includes powerful connectors for the modern lakehouse formats Delta Lake, Hudi, and Iceberg.

Hive is still around, just like HDFS, but we consider them both close to legacy status. Yet all four connectors could benefit from caching. Good news came at Trino Summit 2022, when Hope Wang and Beinan Wang from Alluxio presented their integration with Trino and the Hive connector in the talk Trino optimization with distributed caching on data lake. They mentioned plans to open source their implementation, and an initial pull request (PR) was created.

Collaboration #

The initial presentation and PR planted a seed in the community. The Trino project had been moving fast in terms of deprecating the old dependencies from the Hadoop and Hive ecosystem, so the initial Alluxio PR was no longer up to date or compatible with the latest Trino version. Discussions with David Phillips laid out the path to adjust to the new file system support and get the work ready for reviews towards a merge.

In the end it was Florent Delannoy who started another PR for file system caching support, specifically for the Delta Lake connector. His teammate Jonas Irgens Kylling, also a presenter at Trino Fest 2023, took over the work on the PR. The collaboration on it was an epic effort. After many months, over 300 comments directly on GitHub, and countless hours of coding, reviewing, testing, and discussion on Slack and elsewhere, the work finally resulted in a successful merge, and therefore inclusion in the next release.

Special props for helping Florent and Jonas go out to David Phillips, Raunaq Morarka, Piotr Findeisen, Mateusz Gajewski, Beinan Wang, Amogh Margoor, Manish Malhorta, and Marton Bod.

Finishing #

In parallel to the work on the initial PR for Delta Lake, yours truly ended up working on the documentation, and pulled together an issue and various conversations to streamline the rollout.

Mateusz Gajewski had already put together a PR to remove the old RubiX integration. With the merge of the initial caching PR we were off to the races: we merged the removal of RubiX and the addition of the docs. Mateusz also added support for OpenTelemetry.

Manish Malhorta and Amogh Margoor sent a PR for Iceberg support. They were also about to add Hive support, when Raunaq Morarka beat them to it and submitted that PR.

After some final cleanup, Cole Bowden and Martin Traverso got the release notes together and shipped Trino 439! Now you can use it, too.

Using file system caching #

There are only a few relatively simple steps to add file system caching to your catalogs that use the Delta Lake, Hive, or Iceberg connectors:

  • Provision fast local file system storage on all your Trino cluster nodes. How you do that depends on your cluster provisioning.
  • Enable file system caching and configure the cache location, for example at /tmp/trino-cache on the nodes, in your catalog properties files:
fs.cache.enabled=true
fs.cache.directories=/tmp/trino-cache

After a cluster restart, file system caching is active for the configured catalogs, and you can tweak it with further, optional configuration properties.
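
As an illustration, a complete catalog properties file with caching enabled could look like the following minimal sketch for a Delta Lake catalog, for example in etc/catalog/lakehouse.properties. The catalog name and the metastore URI are assumptions for this example, and a real catalog typically contains additional properties for the object storage in use.

connector.name=delta_lake
hive.metastore.uri=thrift://metastore.example.net:9083
fs.cache.enabled=true
fs.cache.directories=/tmp/trino-cache

Refer to the Trino documentation on object storage file system caching for details on the optional properties before tuning the cache further.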

What’s next #

What a success! It took many members from the global Trino village to get this feature added. Now our users across the globe can enjoy even more benefits of using Trino, and also participate in our next steps:

  • Further improvements to the current implementation, maybe adding worker-to-worker connections for exchanging cached files.
  • Preparation to add file system caching to the Hudi connector is in progress with Sagar Sumit and Y Ethan Guo, with the implementation to follow.
  • Adjustments based on any learnings from production usage.

Our thanks, and those from all current and future users, go out to everyone involved in this effort. What are we going to do next?

Manfred

PS: If you want to share your use of Trino or connect with other Trino users, join us for the free Trino Fest 2024 as a speaker or attendee, live in Boston or virtually from your home.