Backing up Google Cloud Platform instances


I've used Google Cloud for a long time now, and using it often has made me mighty fond of Google's gcloud command-line tool. It lets me back up the 200 instances I look after.
If you want to back up GCE instances, all you need is a Windows or Linux machine, in the cloud or outside it, with the gcloud SDK installed.

I won't go over installing it here, as there are plenty of writeups around, including Google's own documentation, that cover all operating systems. Being a 100% Linux user, I'm not very windowsey, so this writeup is written with a Linux user in mind, but feel free to ask if you have any questions and I'll try to help!

I’ll explain things as I go.

Backing up GCE instances is super easy!

I'm in this for business, and I want to make sure my customers' data is safe. I keep 30 days of daily backups, 2 months of weekly backups, and 6 months of monthly backups of all my machines.

daily

[code]
#!/bin/bash

dt=$(date +%Y-%m-%d)
echo $dt
#Delimiter to exclude is "|" (extended regex), add more names to skip more disks
exclude_list="NAME"
#list every disk as name=zone, minus the header row and anything excluded
all_disks=$(gcloud compute disks list | awk '{print $1"="$2}' | grep -Ev "$exclude_list")
for line in $all_disks; do
zone=$(echo $line | awk -F"=" '{print $2}')
disks=$(echo $line | awk -F"=" '{print $1}')
gcloud compute disks snapshot $disks --zone $zone --snapshot-names "internal-daily-$disks-$dt"
done
[/code]

weekly

[code]
#!/bin/bash

dt=$(date +%Y-%m-%d)
echo $dt
#Delimiter to exclude is "|" (extended regex), add more names to skip more disks
exclude_list="NAME"
#list every disk as name=zone, minus the header row and anything excluded
all_disks=$(gcloud compute disks list | awk '{print $1"="$2}' | grep -Ev "$exclude_list")
for line in $all_disks; do
zone=$(echo $line | awk -F"=" '{print $2}')
disks=$(echo $line | awk -F"=" '{print $1}')
gcloud compute disks snapshot $disks --zone $zone --snapshot-names "internal-weekly-$disks-$dt"
done
[/code]

monthly

[code]
#!/bin/bash

dt=$(date +%Y-%m-%d)
echo $dt
#Delimiter to exclude is "|" (extended regex), add more names to skip more disks
exclude_list="NAME"
#list every disk as name=zone, minus the header row and anything excluded
all_disks=$(gcloud compute disks list | awk '{print $1"="$2}' | grep -Ev "$exclude_list")
for line in $all_disks; do
zone=$(echo $line | awk -F"=" '{print $2}')
disks=$(echo $line | awk -F"=" '{print $1}')
gcloud compute disks snapshot $disks --zone $zone --snapshot-names "internal-monthly-$disks-$dt"
done
[/code]
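
To keep the snapshots running on schedule, the three scripts can go into cron. A minimal sketch, assuming they are saved as daily.sh, weekly.sh and monthly.sh under /root/snapshots (the paths and times here are mine, adjust to yours):

[code]
#m h dom mon dow  command
#daily snapshots at 01:00
0 1 * * * /root/snapshots/daily.sh >> /var/log/gce-snapshots.log 2>&1
#weekly snapshots on Sunday at 02:00
0 2 * * 0 /root/snapshots/weekly.sh >> /var/log/gce-snapshots.log 2>&1
#monthly snapshots on the 1st at 03:00
0 3 1 * * /root/snapshots/monthly.sh >> /var/log/gce-snapshots.log 2>&1
[/code]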

The cleanup script

My cleanups are manual, but this could be scripted.

When I clean up I specify the dates I want to delete, for example:
./vm_snapshot_deleter.sh 2018-02 2018-01 2017-12
would delete all machine snapshots whose names include those date strings. The first argument matches daily snapshots, the second weekly, and the third monthly.

This could be automated with a little date math. I'll update this post if I feel froggy.

[code]
#!/bin/bash

#gcloud snapshot cleanup script by Justin Reiners
#cleans up the google cloud snapshots created by the scripts above.
#takes the date strings to delete as command line options.

echo "Usage: ./vm_snapshot_delete.sh (dates in YYYY-MM format)"
echo "Daily delete is assumed, but weekly and monthly dates can also be specified."

SEARCHSTRINGD=$1
SEARCHSTRINGW=$2
SEARCHSTRINGM=$3

echo "deleting: daily=$SEARCHSTRINGD weekly=$SEARCHSTRINGW monthly=$SEARCHSTRINGM"

#start daily cleanup
gcloud compute snapshots list | grep internal-daily | grep $SEARCHSTRINGD | cut -f 1 -d " " | while read line; do gcloud compute snapshots delete $line --quiet; done

#start weekly cleanup
gcloud compute snapshots list | grep internal-weekly | grep $SEARCHSTRINGW | cut -f 1 -d " " | while read line; do gcloud compute snapshots delete $line --quiet; done

#start monthly cleanup
gcloud compute snapshots list | grep internal-monthly | grep $SEARCHSTRINGM | cut -f 1 -d " " | while read line; do gcloud compute snapshots delete $line --quiet; done
[/code]
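
As a sketch of the date math mentioned above, assuming GNU date and the retention policy I described (the month offsets are my own guess, adjust to taste), the arguments could be generated instead of typed:

[code]
#!/bin/bash
#hypothetical wrapper around vm_snapshot_delete.sh
daily_month=$(date -d "2 months ago" +%Y-%m)   #dailies older than ~30 days
weekly_month=$(date -d "3 months ago" +%Y-%m)  #weeklies older than ~2 months
monthly_month=$(date -d "7 months ago" +%Y-%m) #monthlies older than ~6 months
./vm_snapshot_delete.sh $daily_month $weekly_month $monthly_month
[/code]

Run from a monthly cron job this clears exactly one month per tier; if a run is missed, the older months would need to be passed by hand.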

Backing up SQL servers (hosted MySQL)

Google makes and automatically deletes about 5-6 backups of hosted SQL, which didn't really work for us, as we like keeping longer retention times for our data.

The following script backs up 3 instances (databasedb, databasedb2, and databasedb3), and these backups will not be auto-deleted. The delete script follows below:

[code]
#!/bin/bash

#date string used in the backup descriptions
datedfile=$(date +%Y-%m-%d)

echo "backing up databasedb..."
gcloud beta sql backups create --instance databasedb --description "production dbdb db $datedfile" --async
echo "done."

sleep 2

echo "backing up databasedb2..."
gcloud beta sql backups create --instance databasedb2 --description "production dbdb db $datedfile" --async
echo "done."

sleep 2

echo "backing up databasedb3..."
gcloud beta sql backups create --instance databasedb3 --description "production dbdb db $datedfile" --async
echo "done."
[/code]
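
To double-check the backups landed, you can list them per instance (shown for databasedb; the other two work the same way):

[code]
gcloud beta sql backups list --instance databasedb
[/code]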

Trimming SQL backups

[code]
#!/bin/bash
#trim cron script for cleaning google cloud SQL backups.
#this file can be run daily or weekly, it just keeps the bill down.
#pass the sql instance name as the first parameter and the number of
#backups to keep (as a tail offset) as the second.
#
#example:
# ./trimsql.sh development +60
#will keep 60 days of backups for the "development" instance.

#assign variables
backups=$2 # how many backups to keep, as a tail -n offset (e.g. +60).
outfile="/tmp/sql-trim-output-file.tmp" # temp file for process use.
sql_instance=$1 # this is the master instance name on cloud SQL.
gcloud_location="/usr/bin/gcloud" # gcloud executable location.

echo ""
echo "Cleaning up Google Cloud SQL instance named " $1
echo ""
echo "trimming snapshots to: " $2 " snapshots, including autosnaps."

#gcloud needs to be installed and in a path accessible by the user,
#and that user or account must have the appropriate permissions.
#
#"gcloud beta sql backups list" lists the backups of $sql_instance, and grep -v
#skips deleted backups because you can't delete them twice. tail skips the
#newest backups we want to keep. The cleansed output goes to a temp file we
#ingest in the next step, where cut pulls the first space-delimited column
#(the backup id).
#
#$gcloud_location beta sql backups list --instance $sql_instance | grep -v DELETED | tail -n +$dayskip | cut -f1 -d " " > $outfile

$gcloud_location beta sql backups list --instance $sql_instance | grep -v DELETED | grep -v UNKNOWN_STATUS | grep -v OVERDUE | tail -n $backups > $outfile

echo "--"
echo ""
echo "You will be deleting the following snapshots in 30 seconds:"
echo "if this is not what you want, please exit now."

cat $outfile && sleep 30

#loop over the temp file line by line (tac feeds it in reverse order); --quiet
#skips the per-backup prompt, and --async does not wait for each delete to finish.
tac $outfile | cut -f1 -d " " | while read line; do $gcloud_location sql backups delete $line --instance $sql_instance --quiet --async; done

#delete the temp file
rm -f $outfile && echo "" && echo "temp file deleted successfully"
[/code]

Downscaling 4K video with ffmpeg


Just about every camera I have records in 4K. I record in full quality and down-convert when needed:

single file:

[code]
ffmpeg -i DJI_0029.MP4 -vf scale=1920:1080 -c:v libx264 -crf 35 DJI_0029-smaller.mp4
[/code]

bash script:

save this as convert.sh and chmod +x convert.sh

[code]
#!/bin/bash
ffmpeg -i "$1" -vf scale=1920:1080 -c:v libx264 -crf 35 "$1-1080P.mp4"
[/code]

usage:

[code]
./convert.sh filename.mp4
[/code]

the output files will have a name like filename.mp4-1080P.mp4, but for my use it's fine.
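
If I need to convert a whole folder at once, a quick sketch that just loops convert.sh over every .MP4 in the current directory (assuming uppercase extensions, as my cameras write them):

[code]
#!/bin/bash
#downscale every .MP4 in the current directory with convert.sh
for f in *.MP4; do
./convert.sh "$f"
done
[/code]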

Migrate DNS from Dyn to Google Cloud


My day job had used Dynect Managed DNS for years, but as our queries per second climbed to around 30, the bill became a problem. At one time we used their traffic manager, but we no longer balance at the DNS level, which dropped the bill by quite a bit.

I've been hosting in DigitalOcean for years and also have machines in Google Cloud, a very large buildout between the two. I did the math using the Google Cloud pricing calculator, realized Google's Cloud DNS was way cheaper for our access level, and took the plunge.

Log in to manage.dynect.net.
Under "Manage DNS", select your zone.
Under the zone, find "Zone Reports" and click it.
Click "Download" under the zone file.

to create a zone within Google DNS:

[code]
gcloud dns managed-zones create reinersio --description="mydomain" --dns-name="reiners.io"
[/code]

to import your zone file:

[code]
gcloud dns record-sets import Downloads/reiners.io.zonefile.txt --zone=reinersio --zone-file-format --delete-all-existing
pending
[/code]

'pending' here just means the import request was sent to Google DNS and it's still processing it.
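
Once it finishes processing, you can list the record sets to confirm everything imported (using my zone name as the example):

[code]
gcloud dns record-sets list --zone=reinersio
[/code]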

Now we need to test it, but first, we need our DNS servers:

[code]
gcloud dns managed-zones describe reinersio
creationTime: '2016-11-01T02:06:09.863Z'
description: ''
dnsName: reiners.io.
id: '415624439522711xxxx'
kind: dns#managedZone
name: reinersio
nameServers:
- ns-cloud-xx.googledomains.com.
- ns-cloud-xx.googledomains.com.
- ns-cloud-xx.googledomains.com.
- ns-cloud-xx.googledomains.com.
[/code]

Now take note of the [code]ns-cloud-xx.googledomains.com.[/code] lines; we can use dig with them to test that the new servers have your records loaded:

[code]
watch dig reiners.io @ns-cloud-xx.googledomains.com
[/code]

watch will loop the dig command, letting you see when one of your records has loaded. You can check your other records with the same command, and once they are all there, we should be good to edit our domain registrar's DNS settings to point at the nameServers we got from the describe command.
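
A small sketch for spot-checking a few record types at once against one of the new nameservers (the record types and the xx placeholder here are just examples):

[code]
#!/bin/bash
#spot-check a handful of record types against one Google nameserver
for type in A MX TXT NS; do
dig +short reiners.io $type @ns-cloud-xx.googledomains.com
done
[/code]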

I moved 6 domains with very little work. Moving other zone files should work the same way; you just need to make sure all of the domain records use the trailing dot:

[code]
example: 'reiners.io.' not 'reiners.io'
[/code]

Moving a docker container to a new host without dockerhub


I'm getting started using Docker for some things. I needed to move a container from a test machine to its new production home. This is how I did it; if there is a better way, please let me know!

I ran this on the machine where the container was in development (replace 'container' with your container's name or ID):

[code]sudo docker export container --output outfile.tar[/code]

on the destination machine I did the following:

[code]
sudo yum install docker
mkdir ~/containers && cd ~/containers
scp justin@reiners.io:/path/to/outfile.tar .
#import
sudo docker import outfile.tar reiners.io/test
#run
sudo docker run --name imagename -it -p 9000:9000 --restart always reiners.io/test bash
[/code]

once you are done with that, you should be able to see your imported image listed:

[code]
[justin@development images]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
reiners.io/test latest 92e2e1eab9fe 58 seconds ago 1.13 GB
[/code]
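
For what it's worth, one alternative I've read about (not what I did above) is docker save and docker load, which move the image with its tags and layers intact instead of flattening a container; a sketch using my image name as the example:

[code]
#on the source machine, save the image (not a container) to a tarball
sudo docker save reiners.io/test -o test-image.tar
#copy the tarball over, then on the destination machine:
sudo docker load -i test-image.tar
[/code]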