Cloud-Native Database GitOps on OpenShift
Problem
One of our teams decided to run a centralised cloud-native database on OpenShift clusters and onboard all projects onto it, to avoid the problem of multiple operation points. Admittedly this is not aligned with the DevOps mantra "You build it, you own it", but for large organisations "X-as-a-Service" matters a great deal from an Ops point of view. Usually teams deploy a database per project and maintain it as part of their application, but developers are not DBAs, and they tend to assume the DB just works. Unfortunately, when something goes wrong, devs have to escalate to DBAs, who then spend time understanding the setup and configuration. That is how the idea of a centralised cloud-native DB maintained by DevOps DBAs came about. The idea is fresh and should be considered a prototype, so please take it with a grain of salt.
Simplified architecture
The database sits in one namespace, and access to it is controlled by roles and network policies, as this picture illustrates:
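As a minimal sketch of the network-policy side (the labels and names here are assumptions for illustration, not taken from the repository), a NetworkPolicy in the database namespace could admit MySQL traffic only from namespaces labelled as database consumers:

```yaml
# Hypothetical example: allow MySQL traffic into the "percona" namespace
# only from namespaces labelled db-consumer=true. Names are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-consumers
  namespace: percona
spec:
  podSelector: {}            # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              db-consumer: "true"
      ports:
        - protocol: TCP
          port: 3306         # MySQL
```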
The Percona operator is used to operate and manage the MySQL database on the OpenShift cluster. Several worker nodes run MySQL instances with fast attached persistent storage for redundancy. The DBAs enable some optimisations to make this work well in cloud environments. For simplicity, this article proposes a simplified version with default settings, just to illustrate the idea.
Tekton
Tekton was selected to manage the database and control schemas, as it is cloud-native and part of OpenShift 4.6 (TechPreview). If you are interested in Tekton, please have a look at the details here. It is pretty cool, and many teams within our organisation like it for its customisation and template-driven operations. As you will see later, all steps can be defined as YAML files, therefore the pipeline can be part of your (or your organisation's) GitOps.
DB schemas, functions, and data are stored in Git. When devs want to modify a schema or create a new one, they commit the change to Git and raise a pull request to the DBA DevOps team. Once the pull request is accepted, Git triggers a webhook to Tekton to modify the schema. This example uses Ansible Runner and simple playbooks, but in reality more complex methods, such as the MySQL client, can be used.
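A playbook for this purpose might look roughly like the sketch below. This is not the playbook from the repository; the paths, host, and environment variable are assumptions, and it relies on the `community.mysql` collection being installed in the runner image:

```yaml
# Hypothetical playbook: apply every SQL file in ./sql to the database.
# Host, credentials, and paths are illustrative; real values would come
# from the mounted OpenShift secret.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Find SQL files checked out from Git
      find:
        paths: ./sql
        patterns: "*.sql"
      register: sql_files

    - name: Import each SQL file into MySQL
      community.mysql.mysql_db:
        login_host: mysql-cluster-proxysql
        login_user: root
        login_password: "{{ lookup('env', 'MYSQL_ROOT_PASSWORD') }}"
        name: all
        state: import
        target: "{{ item.path }}"
      loop: "{{ sql_files.files }}"
```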
Hands-on!
At the time of writing, ansible-runner has some issues with dependencies, therefore I switched the GitHub repository from the "main" branch to "no-ansible-build". The main branch builds ansible-runner from source; the "no-ansible-build" branch uses an image from Docker Hub.
Before we start, we need to review the test scenario. We have a Git repository with the following resources:
1. Ansible playbook to apply SQL schema and files
2. SQL Files
3. Tekton pipelines
The Tekton pipeline has 3 steps:
1. build-runner-image — downloads ansible-runner and alters it to install the MySQL client, so it can work with the MySQL database
2. fetch-repository — checks out the code, with the Ansible playbook and SQL files, from Git
3. ansible-runner — runs the Ansible playbook against the database (with the secret mounted)
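Wired together, the pipeline could look roughly like the sketch below. This is not the exact pipeline.yaml from the repository; the task references, parameter names, and workspace names are assumptions based on the three steps above:

```yaml
# Hypothetical sketch of the three-step pipeline. The real definition
# lives in tekton/pipeline.yaml in the repository.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: migrate-db
spec:
  params:
    - name: git-url
    - name: git-revision
      default: main
  workspaces:
    - name: shared-workspace
  tasks:
    - name: build-runner-image
      taskRef:
        name: buildah
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: fetch-repository
      taskRef:
        name: git-clone
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: ansible-runner
      runAfter:
        - build-runner-image
        - fetch-repository
      taskRef:
        name: ansible-runner
      workspaces:
        - name: runner-dir
          workspace: shared-workspace
```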
I use OpenShift 4.6, so all manifests are designed for this platform. Before starting, you might need to install the Percona Operator in the percona namespace and the Pipelines Operator on the OpenShift cluster. This is easy to do via the UI or manifest files.
Assuming you have completed the prerequisites, let's create a new namespace (project) percona, which will be our working namespace:
oc new-project percona
Now let’s install a basic Percona cluster.
# Ensure you are in "percona" namespace
oc project percona

# deploy Percona Secret
oc apply -f https://raw.githubusercontent.com/mancubus77/tekton-db-ansible/main/base/percona-secret.yaml

# initiate percona cluster
oc apply -f https://raw.githubusercontent.com/mancubus77/tekton-db-ansible/main/base/percona.yaml
Please note that my cluster has a Storage Class, so PVCs will be created automatically. If your cluster doesn't have an SC, you might need to add a PV and PVC manually.
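For clusters without a StorageClass, a manually created PV/PVC pair might look like the sketch below. Capacity, path, and the claim name are assumptions; the exact PVC name the Percona operator expects depends on your cluster spec:

```yaml
# Hypothetical manual volume for one MySQL instance. hostPath is only
# suitable for test clusters; use a real storage backend in production.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-data-pv
spec:
  capacity:
    storage: 6Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/mysql-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-mysql-cluster-pxc-0   # must match the name the operator expects
  namespace: percona
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
```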
If you see this, the cluster is installed.
Let's have a look at what databases we have:
oc rsh mysql-cluster-proxysql-0 mysql -h mysql-cluster-proxysql -uroot -p$(oc get secrets/my-cluster-secrets -o jsonpath='{.data.root}'|base64 -D) -e 'show databases;'
Note that base64 -D is the macOS flag; on Linux use base64 -d.
Now let's deal with Tekton. To use Tekton from the CLI, you might need to install the tkn utility; have a look here for details.
Tekton has 2 kinds of tasks. The first is the cluster-wide "clustertask":
$ tkn clustertask list
NAME                       DESCRIPTION              AGE
buildah Buildah task builds... 1 day ago
buildah-pr Buildah task builds... 1 day ago
buildah-pr-v0-16-3 Buildah task builds... 1 day ago
buildah-v0-16-3 Buildah task builds... 1 day ago
git-cli This task can be us... 1 day ago
git-clone These Tasks are Git... 1 day ago
git-clone-v0-16-3 These Tasks are Git... 1 day ago
helm-upgrade-from-repo These tasks will in... 1 day ago
helm-upgrade-from-source These tasks will in... 1 day ago
jib-maven This Task builds Ja... 1 day ago
kn This Task performs ... 1 day ago
kn-v0-16-3 This Task performs ... 1 day ago
The second kind is user-defined tasks. We will come back to them shortly.
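As a sketch of what a user-defined task looks like (this is a generic illustrative example, not the repository's ansible-runner task; the image, secret name, and host are assumptions):

```yaml
# Minimal illustrative Task: prints the databases reachable from the cluster.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: show-databases
spec:
  steps:
    - name: query
      image: mysql:8.0
      env:
        - name: MYSQL_PWD           # mysql client reads the password from here
          valueFrom:
            secretKeyRef:
              name: my-cluster-secrets
              key: root
      script: |
        mysql -h mysql-cluster-proxysql -uroot -e 'show databases;'
```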
Let’s install our tasks and pipeline:
# install buildah task
oc apply -f https://raw.githubusercontent.com/mancubus77/tekton-db-ansible/no-ansible-build/tekton/buildah.yaml

# install ansible-runner task
oc apply -f https://raw.githubusercontent.com/mancubus77/tekton-db-ansible/no-ansible-build/tekton/task-ansible-runner.yaml

# install a tekton pipeline
oc apply -f https://raw.githubusercontent.com/mancubus77/tekton-db-ansible/no-ansible-build/tekton/pipeline.yaml
If everything was successful, you will see this output:
bash-3.2$ tkn task list
NAME DESCRIPTION AGE
ansible-runner Task to run Ansible... 8 hours ago
buildah          Buildah task builds...   8 hours ago

bash-3.2$ tkn pipeline list
NAME AGE LAST RUN STARTED DURATION STATUS
migrate-db 8 hours ago migrate-db-quf0ha 59 minutes ago 1 minute Succeeded
Now we are ready to kick off our first task. Let's do it with tkn:
tkn pipeline start migrate-db \
-w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/03_persistent_volume_claim.yaml \
-p git-url=https://github.com/mancubus77/tekton-db-ansible.git \
-p git-revision=main
Please note that I use a PVC template in this example; define your own if you do not have a StorageClass.
After execution, you will be able to see a new table and entries from SQL file.
Of course, we can go further and configure Tekton webhooks on Git commits, but I will leave that for you to play with.
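With Tekton Triggers, the webhook side could be sketched roughly as below. The template name and parameters are assumptions, not part of the repository; a real setup would also need an EventListener and a TriggerBinding to map the Git payload onto these parameters:

```yaml
# Hypothetical trigger template: start the migrate-db pipeline on a Git push.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: migrate-db-template
spec:
  params:
    - name: git-url
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: migrate-db-    # one PipelineRun per webhook event
      spec:
        pipelineRef:
          name: migrate-db
        params:
          - name: git-url
            value: $(tt.params.git-url)
          - name: git-revision
            value: $(tt.params.git-revision)
```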
Conclusion
Tekton is a very powerful cloud-native CI/CD tool; it is fully template-driven and can be stored in Git as source code, which makes it a GitOps-ready solution. A GitOps database is one of the simplest use cases to illustrate how it can be used in your projects. From my point of view, it is a great replacement for Jenkins and can be fully integrated with Kubernetes/OpenShift. This moves you one step closer to a NoOps model.