Using MinIO as an object store backend for Zercurity on Kubernetes
Presently, when deploying Zercurity on Kubernetes, Zercurity requires a shared disk in order to build and distribute packages, store uploaded content and process various other objects.
Whilst you can configure Zercurity to make use of an NFS share or an AWS S3 bucket, you can also keep things on premise by making use of MinIO, which provides an AWS S3 compatible API that you can deploy within your own Kubernetes cluster. In this post we’re going to deploy and configure Zercurity to use MinIO as our S3 bucket.
Prerequisites
- Kubernetes 1.19 or higher, configured with at least 4 nodes.
- Kubernetes certificate API.
Before we continue, let’s verify that Kubernetes has the required certificate API.
kubectl -n kube-system get po | grep kube-controller-manager
kubectl get pod kube-controller-manager-prod-control-plane-xyz \
-n kube-system -o yaml
In the output of the kubectl command, you are looking for the following lines:
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
--cluster-signing-key-file=/etc/kubernetes/pki/ca.key
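To check for these flags without reading through the whole manifest, you can filter the output (the pod name is the one returned by the first command and will differ in your cluster):
kubectl get pod kube-controller-manager-prod-control-plane-xyz \
-n kube-system -o yaml | grep cluster-signing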
Next, you can install MinIO’s kubectl plugin. Please check the MinIO website for the latest release of MinIO.
wget https://github.com/minio/operator/releases/download/v4.4.4/kubectl-minio_4.4.4_linux_amd64 -O kubectl-minio
chmod +x kubectl-minio
mv kubectl-minio /usr/local/bin/
Verify the installation with:
kubectl minio version
Deploying MinIO
With the plugin installed, you can now deploy the MinIO Operator. This will deploy the Operator into the default minio-operator namespace. You can pass the --namespace argument to kubectl minio init to deploy the Operator into a different namespace, as shown after the default command below.
kubectl minio init
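For example, to deploy the Operator into a dedicated minio namespace instead:
kubectl minio init --namespace minio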
After a few minutes you can validate the installation, ensuring that all the pods have been successfully deployed:
kubectl get all --namespace minio-operator
Lastly, you can connect to the MinIO console using the kubectl minio proxy command.
kubectl minio proxy
When you run this command it will print a “Current JWT to login” value. This is your login token. Navigate your browser to the provided URL and enter the login token.
Creating our Tenant
In the very top right you’ll see a button to create a new Tenant. A Tenant will be used to create the S3-alike service. This will be your Bucket. Once created, an AWS-alike IAM access key and secret will be generated for you automatically.
On the Create New Tenant screen we’re going to set up a Tenant for the Zercurity namespace. As soon as you enter the namespace you want to provision the Tenant in, the Storage Class field will be populated with the available storage classes. MinIO recommends provisioning a locally attached volume for better performance; its strict read-after-write and list-after-write consistency model requires local disk filesystems. However, we’re just going to use our default storage class.
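As an aside, the operator plugin can also create a Tenant from the command line rather than the console. The sketch below is illustrative only; the server, volume and capacity values are placeholders, and the exact flags may vary between operator releases:
kubectl minio tenant create zercurity \
--servers 4 --volumes 8 --capacity 100Gi \
--namespace zercurity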
Once you click Create you’ll immediately be provided with your IAM bucket keys. Either download or store these keys securely. They'll be used in your Zercurity deployment.
It’ll take a few minutes to deploy your Tenant. You can view the progress by clicking on the Tenant and viewing the status under the State header.
You’ll see the health move from gray to yellow and then finally green once all the pods have been deployed and are running.
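You can also watch the Tenant’s pods come up from the command line (assuming the zercurity namespace chosen above):
kubectl -n zercurity get pods -w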
In the meantime, should you wish to configure a trusted third-party certificate, you can provision one under the Security tab. This will remove any warnings about self-signed certificates and avoid the use of the --no-verify-ssl flag.
Once all the nodes are running and the Tenant is in a green state you can connect to your S3-alike bucket using an AWS S3 client tool and providing the custom endpoint URL as shown on the summary tab.
Configuring Zercurity
To configure Zercurity you’ll need to provide these additional environment variables in either your production.env or your Kubernetes configuration file:
AWS_ENDPOINT_URL=https://10.72.32.16
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_S3_BUCKET=zercurity
If you don’t specify your tenant (bucket) name then the hostname of your deployment will be used, prefixed with the download subdomain. By default this will be download.zercurity.local. Once configured you can apply the new configuration file and restart the pods.
kubectl -n zercurity apply -f config.yml
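For reference, in a Kubernetes deployment these values could be carried in a Secret referenced from config.yml. This is only a minimal sketch, assuming a hypothetical Secret name and that your Zercurity pods load it (for example via envFrom); adapt it to your own manifests:
apiVersion: v1
kind: Secret
metadata:
  name: zercurity-object-store # hypothetical name
  namespace: zercurity
type: Opaque
stringData:
  # Values from the MinIO Tenant created earlier
  AWS_ENDPOINT_URL: "https://10.72.32.16"
  AWS_ACCESS_KEY_ID: "Drdyg7kKlyQmKkgl"
  AWS_SECRET_ACCESS_KEY: "8TDrdyg7kKlyQmKkglDrdyg7kKlyQmKkgl"
  AWS_S3_BUCKET: "zercurity"
After applying the configuration, something like kubectl -n zercurity rollout restart deployment should restart the pods so they pick up the new values, depending on how the variables are mounted.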
If you’ve deployed Zercurity with docker-compose you need to restart the containers with systemctl restart zercurity.
Migrating the existing data
If you’ve been using Zercurity via docker-compose you can synchronize your existing files with the aws s3 sync command. First, edit or create the file ~/.aws/credentials. The brackets in the file specify the profile name.
[zercurity]
aws_access_key_id=Drdyg7kKlyQmKkgl
aws_secret_access_key=8TDrdyg7kKlyQmKkglDrdyg7kKlyQmKkgl
You can then use the aws command to synchronize the existing data. If you get a “command not found” error you can install the awscli Python package with sudo pip install awscli.
aws --profile zercurity --endpoint-url https://10.72.32.16 --no-verify-ssl s3 sync /var/lib/zercurity/data/ s3://zercurity/
Once the command completes you can validate that the new files are present by using the aws command or a GUI client to list the bucket files.
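For example, to list the bucket contents using the same profile and endpoint as above:
aws --profile zercurity --endpoint-url https://10.72.32.16 --no-verify-ssl s3 ls s3://zercurity/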
It’s all over!
We hope you found this helpful. Please feel free to get in touch if you have any questions.