Process event and change data capture data with the SQL you already know. Decodable supports everything from simple transformations and filtering to multi-stream stateful joins and even complex event processing. Native integration with dbt makes it easy for data engineers to ingest and cleanse data so it’s immediately useful in the warehouse. Backend developers can use Decodable to process and transform data between microservices as part of an event-driven architecture.
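As a sketch of what a transform-and-filter pipeline can look like, here is a standard streaming SQL statement (the stream names `http_events` and `http_errors` and their columns are hypothetical, not part of any real schema):

```sql
-- Cleanse fields and keep only server errors from a hypothetical
-- `http_events` stream, writing the results to `http_errors`.
INSERT INTO http_errors
SELECT
  LOWER(TRIM(host))  AS host,    -- normalize the hostname
  request_path,
  status_code,
  event_time
FROM http_events
WHERE status_code >= 500         -- filter: server errors only
```

A statement like this runs continuously: every record arriving on the input stream is cleansed, filtered, and emitted to the output stream with no batch scheduling involved.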
Have a preexisting Flink job, or just prefer to build your pipelines in code? No problem. Upload your existing jar file to Decodable, pick a task size, tell us how many tasks you want, and you’re in production. Use the open source Decodable SDK for Flink to tap into Decodable’s streams and fully managed connectors.
Stateful stream processing allows pipelines to maintain data across multiple records. Pipelines written in either SQL or code can use that state to maintain materialized views for joins, detect complex patterns across records, aggregate data, and more. It’s even possible to maintain custom state for advanced use cases. State does not need to fit in memory, and is properly restored when a pipeline is restarted.
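As one illustration of state in plain SQL, a windowed aggregation holds partial sums per key until each window closes (the `orders` and `order_totals` streams, their columns, and the one-hour window are all assumptions for the sketch):

```sql
-- Hourly spend per customer over a hypothetical `orders` stream.
-- The engine keeps the running SUM for each (customer, window) pair
-- as managed state and emits a result when the window closes.
INSERT INTO order_totals
SELECT
  customer_id,
  window_start,
  SUM(amount) AS total_spend
FROM TABLE(
  TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '1' HOUR)
)
GROUP BY customer_id, window_start
```

Because that per-key state lives in the processing engine rather than in application memory, it can grow beyond RAM and survives pipeline restarts.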
Decodable pipelines process data exactly once, by default. That means no missing or duplicate records within a pipeline. If you stop a pipeline or there’s a failure, it picks up exactly where it left off.
Need more processing power or throughput? Pick a task size and count for your pipeline and Decodable scales it automatically. Each pipeline and connection can use different settings, so you don’t have to over-provision or overpay. It’s all built on Apache Flink, which powers some of the largest services in the world.