Sunbird Obsrv is a high-performance, cost-effective data stack with several components such as ingestion, querying, processing, backup, visualisation and monitoring. Obsrv 2.0 can be installed either using Terraform (an Infrastructure as Code tool) or using Helm (the Kubernetes package manager).
Prerequisites
Obsrv runs entirely on a Kubernetes cluster. A fully functional Kubernetes cluster is expected for a seamless Obsrv installation.
Hardware
Obsrv can support a volume of 5 million events per day, with an average event size of around 5 KB, on a cluster with the following specifications:
Kubernetes version of 1.25 or greater
Minimum of 16 cores of CPU
Minimum of 64 GB of RAM
PersistentVolume support in the Kubernetes cluster
Support for the LoadBalancer service type to externally expose some of the Obsrv services. Popular implementations such as MetalLB or Traefik can be used to expose the services using external IPs (see the sketch after this list).
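For bare-metal or on-premise clusters without a built-in LoadBalancer implementation, MetalLB can be installed via Helm. This is a minimal sketch, not part of the official Obsrv instructions; the namespace is an assumption, and an IPAddressPool with your own address range still needs to be configured after installation.
# Add the MetalLB Helm repository and install it into its own namespace
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm install metallb metallb/metallb --namespace metallb-system --create-namespace
# After installation, configure an IPAddressPool and L2Advertisement with an
# address range from your network before exposing the Obsrv services.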
Please note that the required resources will differ across cloud service providers.
Common
The following buckets/containers need to be created for the different services to store their data (a creation sketch follows this list). This also applies to object storage such as MinIO/Ceph.
flink-checkpoints
velero-backup
obsrv
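As an illustration, the buckets can be created with the AWS CLI for S3 or the MinIO client (mc) for MinIO/Ceph. The region and the mc alias below are assumptions; adjust them to your environment.
# AWS S3: create the buckets used by the Obsrv services (region is an example).
# S3 bucket names are globally unique, so a unique prefix may be required.
aws s3 mb s3://flink-checkpoints --region us-east-2
aws s3 mb s3://velero-backup --region us-east-2
aws s3 mb s3://obsrv --region us-east-2

# MinIO: create the same buckets using the mc client
# (the "minio" alias is assumed to be configured already via `mc alias set`)
mc mb minio/flink-checkpoints
mc mb minio/velero-backup
mc mb minio/obsrv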
AWS
An IAM role with the AmazonS3FullAccess policy. Services such as the Dataset API, Druid, Flink and Secor need read and write access to the S3 buckets.
Velero is a service that backs up the entire Obsrv cluster state through snapshots. The Velero backup service needs a restricted user with access to upload the snapshot state to S3. The following IAM policy needs to be attached to the user created for the Velero backup, and access keys need to be generated for this user as well.
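A minimal sketch of creating such a restricted user and generating its access keys with the AWS CLI is shown below. The user name, policy name and policy file path are examples only; the policy document itself should be the restricted backup policy referred to above.
# Create a dedicated IAM user for Velero backups (user name is an example)
aws iam create-user --user-name velero

# Attach the restricted backup policy
# (velero-policy.json is a placeholder for the policy document)
aws iam put-user-policy \
  --user-name velero \
  --policy-name velero-backup \
  --policy-document file://velero-policy.json

# Generate access keys for the Velero backup user
aws iam create-access-key --user-name velero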
Service Accounts: Service accounts enable access to the S3 object storage without the need for access keys. If you prefer to use keys instead, you can skip the creation of service accounts. The following service accounts are needed (a creation sketch follows this list):
Dataset API with the name dataset-api-sa
Druid with the name druid-raw-sa
Flink with the name flink-sa
Secor with the name secor-sa
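On AWS EKS, these service accounts are typically mapped to an IAM role with S3 access via IRSA. The sketch below is an assumption about how this could be done; the namespace and role ARN are placeholders, and the Obsrv Helm charts may create the service accounts themselves, in which case only the role annotation is relevant.
# Create the service accounts in the namespace used by the respective charts
# (the "obsrv" namespace is an assumption)
kubectl create serviceaccount dataset-api-sa -n obsrv
kubectl create serviceaccount druid-raw-sa -n obsrv
kubectl create serviceaccount flink-sa -n obsrv
kubectl create serviceaccount secor-sa -n obsrv

# Annotate each service account with the IAM role that has S3 access (IRSA)
kubectl annotate serviceaccount dataset-api-sa -n obsrv \
  eks.amazonaws.com/role-arn=arn:aws:iam::<account_id>:role/<s3_access_role>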
Deployment Instructions
The Helm package manager provides an easy way to install specific components using a generic command (see the example after the kubeconfig export below). Configurations can be overridden by updating the values.yaml file in the respective Helm charts.
Helm needs access to the Kubernetes cluster. The path to the kubeconfig file needs to be exported as an environment variable, either in the current shell or in an environment configuration file such as .bashrc:
export KUBECONFIG=<path_to_kubeconfig file>
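A hedged sketch of the generic install command is shown below; the chart path, release name and namespace are examples and should be adjusted to the actual Obsrv Helm charts being installed.
# Generic pattern: install or upgrade a single Obsrv component from its chart,
# overriding defaults with the edited values.yaml
helm upgrade --install <component> ./helm-charts/<component> \
  --namespace obsrv --create-namespace \
  -f ./helm-charts/<component>/values.yaml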
Postgres
Postgres is an RDBMS which is used as the metadata store.
Kafka
Kafka is a distributed event streaming platform used as the message broker for the Obsrv data processing pipeline. The following Kafka topics are created by default. If you would like to add more topics, you can do so by adding them to the provisioning.topics configuration in the values.yaml file (topics can also be created on a running broker, as shown in the sketch after this list).
dev.ingest
masterdata.ingest
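If a topic needs to be added after installation, it can also be created directly on the broker with the Kafka CLI. This is an illustrative sketch only; the pod name, namespace and topic name are placeholders.
# Create an additional topic on the running Kafka broker
kubectl exec -it kafka-0 -n obsrv -- \
  kafka-topics.sh --create \
  --topic custom.ingest \
  --partitions 1 --replication-factor 1 \
  --bootstrap-server localhost:9092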
Druid
Druid is a high-performance, real-time analytics database that delivers sub-second queries on streaming and batch data at scale.
Druid requires the following configurations for the deep storage system in use, such as AWS S3, Azure Blob Storage, GCP Storage, MinIO/Ceph or HDFS:
AWS S3 / MinIO/Ceph
druid_deepstorage_type: s3
druid.extensions.loadList: ["druid-s3-extensions"]
# S3 Access keys
s3_access_key: ""
s3_secret_key: ""
# Use the ClusterIP of the MinIO service instead of the Kubernetes service name
# We have noticed that the service names don't resolve properly
druid_s3_endpoint_url: http://172.20.126.232:9000/
s3_bucket: "obsrv"
druid_s3_endpoint_signingRegion: "us-east-2"
druid.extensions.loadList: ["druid-google-extensions"]druid_deepstorage_type:google# Google cloud credentials json file where the access_token and credentials are stored.google_application_credentials:gcs_bucket:"obsrv"
druid_deepstorage_type:"hdfs"# Include the "druid-hdfs-storage" extension as part of the existing the extensions listdruid.extensions.loadList: ["druid-hdfs-storage"]druid.indexer.logs.directory:"/druid/indexing-logs"druid.storage.storageDirectory:"/druid/segments"
Dataset API
This service provides metadata APIs related to various resources such as datasets/datasources in Obsrv. The following configurations need to be specified in the values.yaml file.
Flink
Flink jobs are used to process and enrich the data ingested into Obsrv in near real-time.
Configuration Overrides
AWS
checkpoint_store_type: s3
# S3 Access keys
s3_access_key: ""
s3_secret_key: ""
# Under base_config in the values.yaml
base.url: s3://flink-checkpoints
MinIO/Ceph
checkpoint_store_type: s3
# S3 Access keys
s3_access_key: ""
s3_secret_key: ""
# Use the ClusterIP of the MinIO service instead of the Kubernetes service name
# We have noticed that the service names don't resolve properly
s3_endpoint: http://172.20.126.232:9000/
# Under base_config in the values.yaml
base.url: s3://flink-checkpoints
Azure
checkpoint_store_type: azure
azure_account: ""
azure_secret: ""
# Under base_config in the values.yaml
base.url: blob://flink-bucket
GCP
checkpoint_store_type: gcp
# Google cloud credentials json file where the access_token and credentials are stored.
google_application_credentials: ""
base.url: blob://flink-bucket
Secor
Secor backs up the data from the Kafka topics in the processing pipeline into the configured deep storage.
AWS
# S3 upload manager which is responsible to upload backup to deepstorage.
upload_manager: com.pinterest.secor.uploader.S3UploadManager
cloud_store_provider: S3
aws_access_key: ""
aws_secret_key: ""
aws_region: us-east-2
MinIO/Ceph
# S3 upload manager which is responsible to upload backup to deepstorage.
upload_manager: com.pinterest.secor.uploader.S3UploadManager
cloud_store_provider: S3
aws_access_key: ""
aws_secret_key: ""
# Use the ClusterIP of the MinIO service instead of the Kubernetes service name
# We have noticed that the service names don't resolve properly
aws_endpoint: http://172.20.126.232:9000/
aws_region: us-east-2
GCP
upload_manager: com.pinterest.secor.uploader.GsUploadManager
# Credentials path where access token and secrets are stored.
gs_credentials_path: google_app_credentials.json
Hadoop
upload_manager: com.pinterest.secor.uploader.HadoopS3UploadManager
# Ensure the secor.s3.filesystem property is updated with the `hdfs` value
cloud_store_provider=hdfs
cloud_storage_bucket=namenode-host:8020/dir_path
# For more details please check here - https://github.com/pinterest/secor/issues/129
Secor backups are performed from various Kafka topics that are part of the data processing pipeline. The following list of backup names needs to be substituted into the command mentioned below.
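As an illustration only, each backup is typically installed as a separate Helm release of the Secor chart. The chart path, namespace, release-name pattern and the backup.name value key in this sketch are assumptions and must be replaced with the actual names and parameters from the Obsrv Secor chart.
# Hypothetical sketch: install one Secor release per backup name
for backup_name in <backup_name_1> <backup_name_2>; do
  helm upgrade --install "secor-${backup_name}" ./helm-charts/secor \
    --namespace secor --create-namespace \
    --set backup.name="${backup_name}"
done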