Roadmap
What's coming up next in Obsrv
The following table captures the high-level roadmap planned for the next few releases of Obsrv.
Capability | Description |
---|---|
Connectors Management APIs | Ability to add and manage connectors in Obsrv via APIs (see the sketch after this table) |
Job Framework | Ability to create and drop in a stream (Flink) or batch (Spark) job into Obsrv |
Job Management APIs | Ability to add and manage custom jobs in Obsrv via APIs |
Obsrv Exporter | Export OpenTelemetry-compliant monitoring data so that it can be integrated with any external monitoring system |
Support additional data formats | Add capability to support Parquet, Avro, ORC and XML data formats out of the box |
Auto-Schema Detection | Add capability to auto-detect the schema (for both the input data and the storage table) based on the data format and connector type (see the sketch after this table) |
Schema Evolution | Add capability to ensure that schema evolution is handled automatically at the processing and storage layer |
Masking & Encryption | Add capability to mask or encrypt data as it flows in, to address data privacy concerns (see the sketch after this table) |
JSONata and SQL Transformations | Add capability to provide custom transformation scripts in JSONata or SQL that perform transformations in real time |
LakeHouse | Add lakehouse capability. Hudi is already tested and in experimental mode; it is being battle-hardened before being added to the open-source release |
Right to be forgotten | Add scripts to provide the ability to safely delete a dataset from all storage systems |
Data Aliases | Add aliases to tables or datasources (similar to Elasticsearch) so that data replays and migrations are simplified and can be performed without any downtime |
Query Access Control | Add capability to control access to data via OPA rules, along with an API to manage all access policies across any dataset (see the sketch after this table) |
Sink Connectors | Add capability to reverse-ETL the processed and enriched data |
Auto Scaling | Add ability to auto-scale the infrastructure based on processing speed, data lag/back-pressure and query response times |
API Management | Add ability to configure consumer tokens for the APIs so that integrations with end-user systems are seamless |
New Connectors | Add new database connectors: Oracle, SQL Server, MongoDB, Cassandra, Elasticsearch.<br>Add new stream connectors: PostgreSQL Debezium, MySQL Debezium, DB2 Debezium, Oracle Debezium, SQL Server Debezium, MongoDB Debezium, Cassandra Debezium.<br>Add new file connectors: Azure Blob Storage, MinIO, Google Cloud Storage |
Simplified Archival and Retention policies | Ability to apply archival and retention policies on datasets and tables via a unified API, without any knowledge of the underlying storage (whether the data is stored in Hudi or Druid) (see the sketch after this table) |
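To make the planned Connectors Management APIs concrete, here is a minimal sketch of registering a connector over REST. The endpoint path, base URL and payload fields are illustrative assumptions, not the final Obsrv API contract.

```python
import requests

# Hypothetical sketch of the planned Connectors Management API.
# The endpoint path and payload fields are assumptions for illustration.
OBSRV_BASE_URL = "http://localhost:4000"  # assumed local Obsrv API host

connector_request = {
    "id": "orders-kafka-source",       # assumed connector identifier
    "connector_type": "kafka",         # assumed type identifier
    "dataset_id": "orders",
    "config": {
        "brokers": "localhost:9092",
        "topic": "orders-events",
    },
}

response = requests.post(
    f"{OBSRV_BASE_URL}/v1/connectors", json=connector_request, timeout=30
)
response.raise_for_status()
print(response.json())
```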
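The auto-schema detection item boils down to inferring a schema from sample data. The sketch below shows only the core idea in plain Python; Obsrv's planned detection would additionally take the data format and connector type into account.

```python
import json

def infer_schema(value):
    """Recursively infer a JSON-Schema-style type for a sample value."""
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer_schema(v) for k, v in value.items()}}
    if isinstance(value, list):
        return {"type": "array",
                "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, bool):  # check bool before int: bool subclasses int
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    return {"type": "string"}

sample_event = {"order_id": "o-1", "amount": 42.5, "items": [{"sku": "a1"}]}
print(json.dumps(infer_schema(sample_event), indent=2))
```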
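For Masking & Encryption, one possible in-flight masking strategy is deterministic hashing of PII fields. The field names and the choice of SHA-256 hashing (rather than, say, format-preserving encryption) are assumptions for this sketch, not Obsrv's final design.

```python
import hashlib

PII_FIELDS = {"email", "phone"}  # assumed PII fields for this example

def mask_event(event: dict) -> dict:
    """Return a copy of the event with PII fields replaced by hashes."""
    masked = dict(event)
    for field in PII_FIELDS & masked.keys():
        digest = hashlib.sha256(str(masked[field]).encode("utf-8")).hexdigest()
        masked[field] = f"sha256:{digest[:16]}"  # truncated for readability
    return masked

print(mask_event({"order_id": "o-1", "email": "user@example.com"}))
```

Deterministic hashing keeps masked values joinable across datasets, which plain redaction would break.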
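The Query Access Control item can be pictured as a policy check against Open Policy Agent before a query runs. OPA's data API (POST /v1/data/&lt;policy-path&gt;) is real; the policy package path ("obsrv/query/allow") and the input shape are assumed for illustration.

```python
import requests

# Ask OPA for an allow/deny decision before executing a dataset query.
OPA_URL = "http://localhost:8181/v1/data/obsrv/query/allow"  # assumed policy path

decision = requests.post(
    OPA_URL,
    json={
        "input": {
            "user": {"id": "analyst-1", "roles": ["viewer"]},
            "dataset": "orders",
            "action": "query",
        }
    },
    timeout=10,
).json()

# Deny by default if the policy is missing or returns no result.
if decision.get("result") is True:
    print("query allowed")
else:
    print("query denied")
```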
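Finally, a sketch of what the unified archival and retention API could look like from a caller's perspective. The endpoint and payload fields are assumptions; the point is that the caller declares a policy per dataset without knowing whether the data lives in Hudi or Druid.

```python
import requests

OBSRV_BASE_URL = "http://localhost:4000"  # assumed local Obsrv API host

retention_request = {
    "dataset_id": "orders",
    "retention": {"max_age_days": 90},               # drop data older than 90 days
    "archival": {"tier": "cold", "after_days": 30},  # move to cold storage after 30 days
}

response = requests.post(
    f"{OBSRV_BASE_URL}/v1/datasets/orders/retention",  # hypothetical endpoint
    json=retention_request,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```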