Performance Benchmarks
Proof of the pudding for the scalability of Obsrv
Note: This is a work-in-progress page. The following results are from initial benchmarks; detailed benchmarks will be added once the benchmark exercise is completed.
Cluster Size
Processing Benchmarks
The processing benchmark is independent of the number of datasets created, so the strategy is to test at volume with all configurations enabled; disabling any configuration will only improve throughput. A load-generation sketch is provided after the configuration list below.
Configuration 1
Dedup turned on
De-normalization configured on 2 master datasets
Transformations configured on 2 fields
Event size of 1 KB
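For reference, the sketch below shows one way to generate steady load of roughly 1 KB events. It is a minimal illustration, not the benchmark harness used for these results: the broker address, topic name, and event fields are placeholder assumptions and should be replaced with the values used in your deployment.

```python
import json
import time
import uuid

from kafka import KafkaProducer  # pip install kafka-python

# Assumed values -- replace with your actual broker and ingest topic.
BOOTSTRAP_SERVERS = "localhost:9092"
INGEST_TOPIC = "ingest"           # hypothetical topic name
TARGET_EVENT_SIZE = 1024          # ~1 KB per event, matching the benchmark

producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP_SERVERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def make_event() -> dict:
    """Build a synthetic event and pad it to roughly 1 KB."""
    event = {
        "id": str(uuid.uuid4()),        # unique id so dedup has something to check
        "ts": int(time.time() * 1000),
        "payload": "",
    }
    padding = TARGET_EVENT_SIZE - len(json.dumps(event))
    event["payload"] = "x" * max(padding, 0)
    return event

if __name__ == "__main__":
    sent = 0
    start = time.time()
    while time.time() - start < 60:     # generate load for one minute
        producer.send(INGEST_TOPIC, make_event())
        sent += 1
    producer.flush()
    print(f"Sent {sent} events (~{sent / 60:.0f} events/sec)")
```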
Results
Note: Many other scenarios with varying Flink configurations are being benchmarked and results will be updated post completion.
Secor Backups Benchmark
To ensure there is no data loss across the Obsrv pipeline, all data is backed up to an object store (S3) using Secor. The following are the benchmark results of real-time Secor backups.
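A rough way to confirm that backups are keeping pace with ingestion is to tally the objects Secor has written to the backup bucket, as sketched below. The bucket name and key prefix are assumptions; substitute the values from your Secor configuration.

```python
import boto3  # pip install boto3

# Assumed values -- replace with the bucket/prefix from your Secor configuration.
BACKUP_BUCKET = "obsrv-backups"   # hypothetical bucket name
BACKUP_PREFIX = "raw/"            # hypothetical key prefix

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

object_count = 0
total_bytes = 0
for page in paginator.paginate(Bucket=BACKUP_BUCKET, Prefix=BACKUP_PREFIX):
    for obj in page.get("Contents", []):
        object_count += 1
        total_bytes += obj["Size"]

print(f"{object_count} objects, {total_bytes / 1e9:.2f} GB backed up under "
      f"s3://{BACKUP_BUCKET}/{BACKUP_PREFIX}")
```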
Configuration 1
Total Secor processes - 7
Total CPU allocated - 1.5 CPUs
Event size of 1 KB
Results
Note: In DIKSHA, we have observed that each Secor process with 1 CPU was able to upload 200 million events (200 GB) to Azure Blob Storage.
Druid Indexing Benchmark
The Druid indexing benchmark depends on the number of datasets created and the number of aggregate tables. This benchmark was done with the minimal configuration only; indexing can scale linearly with the number of CPUs provided.
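One way to observe indexing behaviour during such a benchmark is to query Druid's sys.tasks system table through the SQL API, as sketched below. The router URL is an assumption for your deployment; point it at your Druid router or broker.

```python
import requests  # pip install requests

# Assumed endpoint -- replace with your Druid router or broker URL.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

# Recent ingestion tasks with their status and duration (milliseconds).
query = """
SELECT datasource, status, duration, created_time
FROM sys.tasks
ORDER BY created_time DESC
LIMIT 20
"""

response = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
response.raise_for_status()

for task in response.json():
    # duration may be absent for tasks that are still running
    duration = task["duration"] if task["duration"] is not None else "-"
    print(f"{task['datasource']:<30} {task['status']:<10} "
          f"{duration:>10}  {task['created_time']}")
```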
Minimum Configuration
Results
Note: Results for how indexing scales when more CPU resources are provided will be added once the benchmark is complete.
Query Benchmark
Similar to processing, the query benchmark depends on the volume of data and not on the number of datasets (or tables) created. Query performance increases linearly with the amount of CPU/memory assigned to the Druid Historical process.
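A simple way to reproduce a query-latency measurement is to time a representative SQL query against the Druid SQL endpoint, as sketched below. The endpoint URL, datasource name, and time interval are assumptions; substitute the table and filters used in your benchmark.

```python
import statistics
import time

import requests  # pip install requests

# Assumed endpoint and table -- replace with your router/broker URL and datasource.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"
QUERY = """
SELECT COUNT(*) AS events
FROM "telemetry-events"
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
"""

# Run the query repeatedly and record wall-clock latency in milliseconds.
latencies = []
for _ in range(20):
    start = time.perf_counter()
    response = requests.post(DRUID_SQL_URL, json={"query": QUERY}, timeout=60)
    response.raise_for_status()
    latencies.append((time.perf_counter() - start) * 1000)

latencies.sort()
print(f"p50: {statistics.median(latencies):.1f} ms")
print(f"p95: {latencies[int(len(latencies) * 0.95) - 1]:.1f} ms")
print(f"max: {max(latencies):.1f} ms")
```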
Minimum Configuration
RAW Table Results
Aggregate (Rollup) Table Results
Note: Multiple query types with varying interval and Historical configuration combinations are being actively benchmarked; results will be updated once the activity is completed.