Troubleshooting and migrating your secured AWS DocumentDB inside Kubernetes

In this article, we explain a few tools to manage your AWS DocumentDB inside Kubernetes.

1) Launching a MongoDB client pod to connect to AWS DocumentDB

Run the following command:

kubectl run -i --rm --tty mongo-client --image=mvertes/alpine-mongo --restart=Never --command -- /bin/bash

You will get shell access to a pod inside the k8s environment from which you can connect to the AWS DocumentDB database.

As the database requires an SSL connection, we need to download the CA certificate bundle from AWS using the following command:

wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

This will download the CA bundle into the current folder. Next, run the following command from the same folder:

mongo --ssl \
    --host <hostname> \
    --sslCAFile rds-combined-ca-bundle.pem \
    --username <mongo-user> \
    --password <mongo-password>

After running the previous command we should be connected to the AWS DocumentDB and see a prompt like this:

MongoDB shell version v3.4.2
connecting to: mongodb://<server>:27017/
MongoDB server version: 3.6.0
WARNING: shell and server versions do not match
rs0:PRIMARY> 

Then, we can just switch to the desired db using the following command:

rs0:PRIMARY> use <db-name>;
switched to db <db-name>

At this point we can run any mongo shell command we need to inspect or troubleshoot data issues.
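For example, a typical troubleshooting session might look like this (the collection and field names are placeholders, following the same convention as the commands above):

```
rs0:PRIMARY> show collections
rs0:PRIMARY> db.<collection>.count()
rs0:PRIMARY> db.<collection>.findOne()
rs0:PRIMARY> db.<collection>.find({"<field>": "<value>"}).limit(5)
```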

2) Launching a Mongo Dump client pod to create a backup from DocumentDB

Run the following command:

kubectl run -i --rm --tty mongodump-client --image=bigtruedata/mongodump --restart=Never --command -- /bin/sh

You will get shell access to a pod inside the k8s environment that will allow you to execute mongodump against the AWS DocumentDB database.

As the database requires an SSL connection, we need to download the CA certificate bundle from AWS using the following command:

wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

We are ready now to dump the database using the following command:

mongodump --ssl \
    --host <host> \
    --sslCAFile rds-combined-ca-bundle.pem \
    --username <db-user> \
    --password <db-password> \
    --db <db-name>

At the end of the dump we will see a confirmation message like:
2019-08-14T20:22:46.547+0000 done dumping <db-name>.collectionX (12775 documents)

This should have created a new folder named dump/ with all the dump data. This means we are ready to create a tar/gz file with that folder using the following command:
tar -zcvf db-dump.tar.gz dump

Now we have the tar/gz file with all the information:

ls -ltrh *.tar.gz
-rw-r--r--    1 root     root        9.9M Aug 14 20:37 db-dump.tar.gz
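Before copying the archive anywhere, it is worth listing its contents as a sanity check. The snippet below sketches that pack-and-verify round trip using a stand-in dump/ folder (exampledb and collectionX.bson are placeholder names; the real files are produced by mongodump):

```shell
# Stand-in for the folder mongodump creates (placeholder names)
mkdir -p dump/exampledb
printf 'stub' > dump/exampledb/collectionX.bson

# Pack the dump folder as above, then list the archive to confirm its contents
tar -zcvf db-dump.tar.gz dump
tar -tzf db-dump.tar.gz
```

If a file you expect is missing from the listing, re-run mongodump before copying the archive off the pod.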

Now, without closing the current bash shell, open a new terminal tab and run the following command:

kubectl cp default/mongodump-client:/dump/db-dump.tar.gz /tmp/db-dump.tar.gz

This will copy the backup db-dump.tar.gz file into the /tmp folder of the computer you are running kubectl from (hopefully still inside a secure network).

At this point we could copy the backup into the second k8s environment with the same cp command, but first we need to launch a pod there with the mongodump image so that we can run mongorestore against it.

Finally, we can exit the mongodump pod and it will be automatically deleted.

NOTE: At this point you should switch kubectl to point to the destination k8s cluster where you want your backup to be restored.
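Assuming your kubeconfig already contains an entry for the destination cluster (the context name below is a placeholder), the switch looks like:

```
kubectl config get-contexts
kubectl config use-context <destination-cluster-context>
```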

3) Launching a mongorestore client pod to restore the backup into DocumentDB

Using the following command we will launch a pod that will be used to restore the database.

kubectl run -i --rm --tty mongorestore-client --image=bigtruedata/mongodump --restart=Never --command -- /bin/sh

You will get shell access to a pod inside the k8s environment that will allow you to execute mongorestore against the AWS DocumentDB database.

As the database requires an SSL connection, we need to download the CA certificate bundle from AWS using the following command:

wget https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem

Open a new terminal tab to copy the backup file into the mongorestore pod by running:

kubectl cp /tmp/db-dump.tar.gz default/mongorestore-client:/db-dump.tar.gz

After the upload into the pod finishes, we can delete the local copy of the backup.

Now, we need to extract the tar/gz file using the following command:

tar xvzf db-dump.tar.gz

This will extract all the contents of the tar/gz file into a dump/ folder. We can check by running:

 ls -ltrh
total 10120
-rw-r--r--    1 501      wheel       9.9M Aug 14 20:44 db-dump.tar.gz
drwxr-xr-x    3 root     root          29 Aug 14 21:00 dump

Finally we can run the restore:

mongorestore --ssl \
    --host <host> \
    --sslCAFile rds-combined-ca-bundle.pem \
    --username <db-user> \
    --password <db-password> \
    --db <db-name> dump/<db-name>

At this point our database should have been fully restored into our new k8s environment. You can refer to the first section of this document to connect to your AWS DocumentDB and see your data.
