Cloud-Native Database GitOps on OpenShift


One of our teams decided to run a centralised cloud-native database on OpenShift clusters and onboard all projects onto it, to avoid the problem of multiple operation points. Granted, this is not aligned with the DevOps mantra “You build it, you own it”, but for big organisations “X-as-a-Service” is much more important from an Ops point of view. Usually, teams deploy a database per project and maintain it as part of their application — but developers are not DBAs, and they assume that the DB just works. Unfortunately, when something goes wrong, devs have to escalate to DBAs, who then spend time understanding the setup and configuration. That is how the idea of a centralised cloud-native DB maintained by DevOps DBAs came about. The idea is fresh and should be considered a prototype, so please take it with a grain of salt.

Simplified architecture

The database sits in one namespace, and access to it is controlled by roles and network policies. The picture below illustrates this:

Simplified architecture

The Percona operator is used to operate and manage the MySQL database on the OpenShift cluster. There are several worker nodes hosting MySQL instances with fast attached persistent storage for redundancy. The DBAs have enabled some optimisations to make it work well in cloud environments. For simplicity, in this article I will present a simplified version with default settings, just to illustrate the idea.


Tekton was selected to manage the database and control schemas, as it is cloud-native and part of OpenShift 4.6 (TechPreview). If you are interested in Tekton, please have a look at the details here. It is pretty cool, and many teams within our organisation like it for its customisation and template-driven operations. As you will see later, all steps can be defined as YAML files, so the pipeline can be part of your (or your organisation’s) GitOps.

Delivery pipeline

DB schemas, functions, and data are stored in Git. When devs want to modify a schema or create a new one, they commit the new schema to Git and raise a pull request to the DevOps DBAs. After the pull request is merged, Git triggers a webhook to Tekton to modify the schema. This example uses Ansible Runner and simple playbooks, but in reality more complex methods, such as driving the MySQL client directly, can be used.
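To make the playbook step concrete, here is a minimal sketch of what such a schema-applying playbook could look like. The `sql/` directory, the `mysql-cluster-proxysql` service name, and the `MYSQL_ROOT_PASSWORD` environment variable are assumptions for illustration, not the actual repository contents:

```yaml
# Illustrative playbook: applies every .sql file found in the
# checked-out repo to the MySQL service via the mysql client.
- name: Apply SQL schemas
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Find SQL files in the checked-out repo
      find:
        paths: "{{ playbook_dir }}/sql"   # directory name assumed
        patterns: "*.sql"
      register: sql_files

    - name: Apply each SQL file via the mysql client
      shell: >
        mysql -h mysql-cluster-proxysql -uroot
        -p"{{ lookup('env', 'MYSQL_ROOT_PASSWORD') }}" < "{{ item.path }}"
      loop: "{{ sql_files.files | sort(attribute='path') }}"
```

Sorting the files by path keeps migrations applied in a predictable order.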


At the moment of writing, ansible-runner has some issues with dependencies, therefore I switched the GitHub repository from the “main” branch to “no-ansible-build”. The main branch builds ansible-runner from source; the “no-ansible-build” branch uses an image from Docker Hub.

Before we start, we need to review the test scenario. We have a git repository with the following resources:
1. Ansible playbook to apply SQL schema and files
2. SQL Files
3. Tekton pipelines

The Tekton pipeline has 3 steps:

build-runner-image — downloads ansible-runner and adds the MySQL client so it can work with the MySQL database
fetch-repository — checks out code from git with the ansible playbook and SQL files
ansible-runner — runs the ansible playbook against the database (with a mounted secret)
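Sketched as a Tekton Pipeline, the three steps above could look roughly like this. The `git-clone` catalog ClusterTask and its `url`/`revision` parameters are real; the `buildah` and `ansible-runner` task wiring, workspace names, and image reference are assumptions for illustration:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: migrate-db
spec:
  workspaces:
    - name: shared-workspace
  params:
    - name: git-url
      type: string
    - name: git-revision
      type: string
      default: main
  tasks:
    - name: fetch-repository          # checks out playbook and SQL files
      taskRef:
        name: git-clone
        kind: ClusterTask
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
    - name: build-runner-image        # buildah task; target image name assumed
      taskRef:
        name: buildah
      runAfter: [fetch-repository]
      workspaces:
        - name: source
          workspace: shared-workspace
      params:
        - name: IMAGE
          value: image-registry.openshift-image-registry.svc:5000/percona/ansible-runner
    - name: ansible-runner            # runs the playbook with the mounted secret
      taskRef:
        name: ansible-runner
      runAfter: [build-runner-image]
      workspaces:
        - name: source
          workspace: shared-workspace
```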

I use OpenShift 4.6, so all manifests are designed for this platform. Before starting, you might need to install the Percona Operator in the percona namespace and the Pipeline operator for the OpenShift cluster. This is easy to do via the UI or manifest files.

Assuming you have completed the prerequisites, let’s create a new namespace (project) percona, which will be our working namespace:

oc new-project percona

Now let’s install a basic Percona cluster.

# Ensure you are in "percona" namespace
oc project percona
# deploy Percona Secret
oc apply -f
# initiate percona cluster
oc apply -f

Please note that my cluster has a StorageClass, therefore PVCs will be created automatically. If your cluster doesn’t have an SC, you might need to add a PV and PVC manually.
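For clusters without a StorageClass, a hand-made PV/PVC pair could look like the sketch below. The PVC name follows the `datadir-<cluster>-pxc-<n>` pattern that the operator’s StatefulSet typically expects, but verify it against your own cluster — the name pattern, sizes, and hostPath here are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: percona-pv-0
spec:
  capacity:
    storage: 6Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /mnt/data/percona-0         # local-disk placeholder, test clusters only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datadir-mysql-cluster-pxc-0   # name pattern assumed; check your StatefulSet
  namespace: percona
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 6Gi
```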

If you see this, the cluster is installed:

Percona namespace pods

Let’s have a look at what databases we have (note that base64 -D is the macOS flag; use base64 -d on Linux):

oc rsh mysql-cluster-proxysql-0 mysql -h mysql-cluster-proxysql -uroot -p$(oc get secrets/my-cluster-secrets -o jsonpath='{.data.root}'|base64 -D) -e 'show databases;'

Let’s deal with Tekton. To use Tekton from the CLI, you might need to install the tkn utility; have a look here for details.
Tekton has 2 kinds of tasks. The first kind is cluster-wide ClusterTasks:

$ tkn clustertask list
NAME                       DESCRIPTION              AGE
buildah                    Buildah task builds...   1 day ago
buildah-pr                 Buildah task builds...   1 day ago
buildah-pr-v0-16-3         Buildah task builds...   1 day ago
buildah-v0-16-3            Buildah task builds...   1 day ago
git-cli                    This task can be us...   1 day ago
git-clone                  These Tasks are Git...   1 day ago
git-clone-v0-16-3          These Tasks are Git...   1 day ago
helm-upgrade-from-repo     These tasks will in...   1 day ago
helm-upgrade-from-source   These tasks will in...   1 day ago
jib-maven                  This Task builds Ja...   1 day ago
kn                         This Task performs ...   1 day ago
kn-v0-16-3                 This Task performs ...   1 day ago

The second kind is user-defined Tasks. We will come back to them shortly.

Let’s install our tasks and pipeline:

# install buildah task
oc apply -f
# install ansible-runner task
oc apply -f
# install a tekton pipeline
oc apply -f
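For reference, a user-defined ansible-runner Task could be sketched like this — the image reference, workspace name, and playbook path are assumptions based on the setup above, with the secret mounted as described earlier:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ansible-runner
spec:
  workspaces:
    - name: source                     # holds the checked-out repo
  steps:
    - name: run-playbook
      image: quay.io/ansible/ansible-runner:latest   # image reference assumed
      workingDir: $(workspaces.source.path)
      script: |
        ansible-runner run . -p playbook.yml
      volumeMounts:
        - name: db-secret
          mountPath: /secrets          # DB credentials for the playbook
  volumes:
    - name: db-secret
      secret:
        secretName: my-cluster-secrets
```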

If all was successful, you will see this output:

bash-3.2$ tkn task list
NAME             DESCRIPTION              AGE
ansible-runner   Task to run Ansible...   8 hours ago
buildah          Buildah task builds...   8 hours ago
bash-3.2$ tkn pipeline list
NAME         AGE           LAST RUN            STARTED          DURATION   STATUS
migrate-db   8 hours ago   migrate-db-quf0ha   59 minutes ago   1 minute   Succeeded

Now we are ready to kick off our first pipeline run. Let’s do it with tkn:

tkn pipeline start migrate-db \
-w name=shared-workspace,volumeClaimTemplateFile= \
-p git-url= \
-p git-revision=main

Please note that I use a PVC created from a volume claim template in my example; define yours manually if you do not have a StorageClass.
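The file passed via volumeClaimTemplateFile contains just the PVC spec that tkn turns into a per-run volume. A minimal example could look like this (the size is a placeholder):

```yaml
# pvc.yaml — passed as volumeClaimTemplateFile=pvc.yaml
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```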

Pipeline execution

After execution, you will be able to see a new table and entries from SQL file.

New entries in the MySQL database

Of course, we can go further and configure Tekton webhooks on Git commits, but I will leave that for you to play with.
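As a starting point, a Tekton Triggers TriggerTemplate that turns a Git push into a PipelineRun could look like the sketch below. You would still need an EventListener and a TriggerBinding wired to your Git provider’s webhook; the names here are illustrative:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: migrate-db-template
spec:
  params:
    - name: git-revision
      default: main
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: migrate-db-     # one PipelineRun per webhook event
      spec:
        pipelineRef:
          name: migrate-db
        params:
          - name: git-revision
            value: $(tt.params.git-revision)
```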


Tekton is a very powerful cloud-native CI/CD tool; it is fully template-driven and can be stored in git as source code, which makes it a GitOps-ready solution. A GitOps database is one of the simplest use cases to illustrate how it can be used in your projects. From my point of view, it is a great replacement for Jenkins and can be fully integrated with Kubernetes/OpenShift. This moves you one step closer to the NoOps model.


