Category: rclone

Jul 12

Restic scripting plus jq and minio client

I am jotting down some recent work on scripting restic, and on using restic's JSON output with jq and mc (the minio client).

NOTE: these are examples, not production code. Use at your own risk. They were hand-edited from real working scripts, so they probably contain typos. Again, just examples!

Example backup script, which also uploads the JSON output to an object storage bucket for later analysis.

# cat restic-backup.sh
#!/bin/bash
source /root/.restic-keys
resticprog=/usr/local/bin/restic-custom
#rcloneargs="serve restic --stdio --b2-hard-delete --cache-workers 64 --transfers 64 --retries 21"
region="s3_phx"
rundate=$(date +"%Y-%m-%d-%H%M")
logtop=/reports
logyear=$(date +"%Y")
logmonth=$(date +"%m")
logname=$logtop/$logyear/$logmonth/restic/$rundate-restic-backup
jsonspool=/tmp/restic-fss-jobs

## Backing up some OCI FSS (same as AWS EFS) NFS folders
FSS=(
"fs-oracle-apps|fs-oracle-apps|.snapshot"           ## backup all exclude .snapshot tree
"fs-app1|fs-app1|.snapshot"                         ## backup all exclude .snapshot tree
"fs-sw|fs-sw/oracle_sw,fs-sw/restic_pkg|.snapshot"  ## backup two folders exclude .snapshot tree
"fs-tifs|fs-tifs|.snapshot,.tif"                  ## backup all exclude .snapshot tree and *.tif files
)

## test commands especially before kicking off large backups
function verify_cmds
{
  f=$1
  restic_cmd=$2
  printf "\n$rundate and cmd: $restic_cmd\n"
}

function backup
{
 f=$1
 restic_cmd=$2

 jobstart=$(date +"%Y-%m-%d-%H%M")

 mkdir -p $jsonspool/$f
 jsonfile=$jsonspool/$f/$jobstart-restic-backup.json
 printf "$jobstart with cmd: $restic_cmd\n"

 mkdir -p /mnt/$f
 mount -o ro xx.xx.xx.xx:/$f /mnt/$f

 ## TODO: shell issue with passing exclude from variable. verify exclude .snapshot is working
 ## TODO: not passing *.tif exclude fail?  howto pass *?
 $restic_cmd > $jsonfile

 #cat $jsonfile >> $logname-$f.log
 umount /mnt/$f
 rmdir /mnt/$f

## Using rclone to copy to OCI object storage bucket.
## Note the extra level folder so rclone can simulate 
## a server/20190711-restic.log style.
## Very useful with using minio client to analyze logs.
 rclone copy $jsonspool s3_ash:restic-backup-logs

 rm $jsonfile
 rmdir $jsonspool/$f

 jobfinish=$(date +"%Y-%m-%d-%H%M")
 printf "jobfinish $jobfinish\n"
}

for fss in "${FSS[@]}"; do
 arrFSS=(${fss//|/ })

 folders=""
 f=${arrFSS[0]}
 IFS=',' read -ra folderarr <<< ${arrFSS[1]}
 for folder in ${folderarr[@]};do folders+="/mnt/${folder} "; done

 excludearg=""
 IFS=',' read -ra excludearr <<< ${arrFSS[2]}
 for exclude in ${excludearr[@]};do excludearg+=" --exclude ${exclude}"; done

 backup_cmd="$resticprog -r rclone:$region:restic-$f backup ${folders} $excludearg --json"

## play with verify_cmds first before actual backups
 verify_cmds "$f" "$backup_cmd"
 #backup "$f" "$backup_cmd"
done
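The two TODOs above (passing `--exclude` patterns like `*.tif` through a variable) come down to word splitting and globbing: a pattern kept in a flat string either gets expanded by the shell or mangled by quoting. A minimal sketch, with a hypothetical mount point, that builds the restic arguments as a bash array so patterns survive unexpanded:

```shell
#!/bin/bash
# Build restic arguments as an array instead of a flat string; quoting the
# expansion keeps the shell from globbing '*.tif' before restic sees it.
excludes=(".snapshot" "*.tif")        # exclude patterns, including a glob
args=(backup /mnt/fs-tifs --json)     # hypothetical mount point
for e in "${excludes[@]}"; do args+=(--exclude "$e"); done
# In the real script this would be: $resticprog -r "rclone:$region:restic-$f" "${args[@]}"
echo restic "${args[@]}"
```

With the quoted `"${args[@]}"` expansion, `*.tif` is passed through literally instead of matching files in the current directory.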

Since we have JSON logs in object storage, let's check some of them with the minio client.

# cat restic-check-logs.sh
#!/bin/bash

fss=(
 fs-oracle-apps
)

#checkdate="2019-07-11"
checkdate=$(date +"%Y-%m-%d")

for f in ${fss[@]}; do
  echo
  echo
  printf "$f:  "
  name=$(mc find s3-ash/restic-backup-logs/$f -name "*$checkdate*" | head -1)
  if [ -n "$name" ]
  then
    echo $name
    # play with sql --query later
    #mc sql --query "select * from S3Object"  --json-input .message_type=summary s3-ash/restic-backup-logs/$f/2019-07-09-1827-restic-backup.json
    mc cat $name  | jq -r 'select(.message_type=="summary")'
  else
    echo "Fail - no file found"
  fi
done

Example run of the minio client against the JSON logs

# ./restic-check-logs.sh

fs-oracle-apps:  s3-ash/restic-backup-logs/fs-oracle-apps/2019-07-12-0928-restic-backup.json
{
  "message_type": "summary",
  "files_new": 291,
  "files_changed": 1,
  "files_unmodified": 678976,
  "dirs_new": 0,
  "dirs_changed": 1,
  "dirs_unmodified": 0,
  "data_blobs": 171,
  "tree_blobs": 2,
  "data_added": 2244824,
  "total_files_processed": 679268,
  "total_bytes_processed": 38808398197,
  "total_duration": 1708.162522559,
  "snapshot_id": "f3e4dc06"
}
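Just to sanity-check that summary, the ratio of data_added to total_bytes_processed shows how little new data this incremental run actually uploaded (numbers copied from the output above):

```shell
# data_added / total_bytes_processed from the restic summary JSON above.
awk 'BEGIN { printf "%.4f%% of the processed bytes were new\n", 2244824 / 38808398197 * 100 }'
```

So roughly 2.2 MB added out of ~36 GiB scanned, which is what you would hope for on a mostly-unchanged filesystem.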

Note all of this was done with Oracle Cloud Infrastructure (OCI) object storage. Here are some observations about the OCI S3-compatible object storage.

  1. restic cannot reach both the us-ashburn-1 and us-phoenix-1 regions natively: s3:<tenant>.compat.objectstorage.us-ashburn-1.oraclecloud.com works, but s3:<tenant>.compat.objectstorage.us-phoenix-1.oraclecloud.com does NOT. Since restic can use rclone as a backend, I am using rclone to access OCI object storage, and rclone can reach both regions.
  2. rclone can reach both regions.
  3. The minio command line client (mc) has the same issue as restic: it can reach us-ashburn-1 but not us-phoenix-1.
  4. minio python API can connect to us-ashburn-1 but shows an empty bucket list.
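The endpoint pattern behind those observations is the same for both regions; a hypothetical shell helper (the tenancy namespace is a placeholder) just to show the shape of the hostnames used throughout these posts:

```shell
# Build the OCI S3-compatible endpoint host for a tenancy namespace and region.
endpoint() { printf '%s.compat.objectstorage.%s.oraclecloud.com\n' "$1" "$2"; }
endpoint mytenancy us-ashburn-1
endpoint mytenancy us-phoenix-1
```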


Jan 04

Object Storage Listing with Rclone and jq

Some examples of using rclone and jq to see the object listing in a bucket. These examples were run against Oracle (OCI) Object Storage, but since this is rclone, the target should not matter.

Rclone retrieving JSON object listing of a bucket:

$ rclone lsjson -R s3_ashburn:APPS 
[
{"Path":"config","Name":"config","Size":155,"MimeType":"application/octet-stream","ModTime":"2018-11-02T18:01:31.028653533Z","IsDir":false},
{"Path":"data","Name":"data","Size":0,"MimeType":"inode/directory","ModTime":"2019-01-04T15:31:54.533157179Z","IsDir":true},
{"Path":"index","Name":"index","Size":0,"MimeType":"inode/directory","ModTime":"2019-01-04T15:31:54.533226556Z","IsDir":true},
{"Path":"keys","Name":"keys","Size":0,"MimeType":"inode/directory","ModTime":"2019-01-04T15:31:54.533246534Z","IsDir":true},
{"Path":"snapshots","Name":"snapshots","Size":0,"MimeType":"inode/directory","ModTime":"2019-01-04T15:31:54.533266804Z","IsDir":true},
{"Path":"index/6f0870dc3d699c0e550f62c535f11a3e52396f45d9c3439760a5f648ee2f1533","Name":"6f0870dc3d699c0e550f62c535f11a3e52396f45d9c3439760a5f648ee2f1533","Size":37828350,"MimeType":"application/octet-stream","ModTime":"2019-01-03T21:27:05Z","IsDir":false},
{"Path":"index/b20a6e07f25d834739e3c3fd82cf3b7ade3e7f1f0f286aab61006532621220ae","Name":"b20a6e07f25d834739e3c3fd82cf3b7ade3e7f1f0f286aab61006532621220ae","Size":36726493,"MimeType":"application/octet-stream","ModTime":"2019-01-03T21:27:02Z","IsDir":false},

Use jq's select to grab entries older than a certain date:

$ rclone lsjson -R s3_ashburn:APPS | jq -r '.[] | select (."ModTime" < "2018-12-01")|.Name'
ffea09b644533ddcde68a93095bc512646fd0ac0557d39e6e06e004bf73b6bed
ffef7980ade85ea2d9b436c40df46384bbbe8e7e6e71219aff0757ad90f1652f
fff3f56e384ab055c3aa4b6e2dd527c368bf2280863d357e577402460fe9d41a

Use jq's @csv filter with specific fields:

$ rclone lsjson -R s3_ashburn:APPS | jq -r '.[] | [.Name,.Size] | @csv'

Use jq select for entries older than a certain date, with specific fields and CSV output:

$ rclone lsjson -R s3_ashburn:APPS | jq -r '.[] | select (."ModTime" < "2018-11-01") | [.Name,.Size,.ModTime] | @csv'
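Instead of hard-coding the cutoff date, it can be computed and handed to jq as a variable. A sketch assuming GNU date and the same bucket as above (the rclone line is shown commented since it needs the remote configured):

```shell
# Compute an "older than 30 days" cutoff (GNU date) and pass it to jq via --arg.
cutoff=$(date -d '30 days ago' +%Y-%m-%d)
echo "cutoff=$cutoff"
# rclone lsjson -R s3_ashburn:APPS | jq -r --arg c "$cutoff" '.[] | select(.ModTime < $c) | .Path'
```

String comparison works here because both the cutoff and ModTime are ISO-8601 formatted, which sorts lexicographically in date order.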

Rclone size

$ rclone size s3_ashburn:APPS --json 
{"count":8088,"bytes":38670955795}

Rclone size and jq csv filter

$ rclone size s3_ashburn:APPS --json | jq -r '[.count,.bytes] | @csv'
8088,38670955795
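The same CSV pair is easy to humanize with awk (count and bytes taken from the run above):

```shell
# Convert the rclone size CSV (count,bytes) into a readable one-liner.
echo '8088,38670955795' | awk -F, '{ printf "%d objects, %.1f GiB\n", $1, $2 / 1024 / 1024 / 1024 }'
```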


Nov 10

Restic and Oracle OCI Object Storage

It seems that after some time went by, the OCI S3-compatible object storage interface can now work with restic directly, with no need to use rclone. When I tested a few months ago, this did not work.

Using S3 directly means we may avoid this issue seen when using restic + rclone:
rclone: 2018/11/02 20:04:16 ERROR : data/fa/fadbb4f1d9172a4ecb591ddf5677b0889c16a8b98e5e3329d63aa152e235602e: Didn't finish writing GET request (wrote 9086/15280 bytes): http2: stream closed

This shows how I set up restic against Oracle OCI object storage (no rclone required).

Current restic env pointing to rclone.conf
##########################################

# more /root/.restic-env 
export RESTIC_REPOSITORY="rclone:s3_servers_ashburn:bucket1"
export RESTIC_PASSWORD="blahblah"

# more /root/.config/rclone/rclone.conf 
[s3_servers_phoenix]
type = s3
env_auth = false
access_key_id =  
secret_access_key =  
region = us-phoenix-1
endpoint = <client-id>.compat.objectstorage.us-phoenix-1.oraclecloud.com
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
[s3_servers_ashburn]
type = s3
env_auth = false
access_key_id =  
secret_access_key = 
region = us-ashburn-1
endpoint = <client-id>.compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint =
acl = private
server_side_encryption =

New restic env pointing to S3 style
###################################

# more /root/.restic-env 
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export RESTIC_REPOSITORY="s3:<client-id>.compat.objectstorage.us-ashburn-1.oraclecloud.com/bucket1"
export RESTIC_PASSWORD="blahblah"

# . /root/.restic-env

# /usr/local/bin/restic snapshots
repository 26e5f447 opened successfully, password is correct
ID        Date                 Host             Tags        Directory
----------------------------------------------------------------------
dc9827fd  2018-08-31 21:20:02  server1                      /etc
cb311517  2018-08-31 21:20:04  server1                      /home
f65a3bb5  2018-08-31 21:20:06  server1                      /var
{...}
----------------------------------------------------------------------
36 snapshots


Aug 07

Object Storage with Duplicity and Rclone

At this point I prefer restic for my object storage backup needs, but since I did a POC with duplicity, specifically using rclone with duplicity, I am writing down my notes. A good description of duplicity and restic is here:

Backing Up Linux to Backblaze B2 with Duplicity and Restic


We’re highlighting Duplicity and Restic because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

Since I am doing my tests with Oracle Cloud Infrastructure (OCI) Object Storage, and so far its Amazon S3 Compatibility Interface does not work out of the box with most tools except rclone, I am using rclone as a backend. With restic, using rclone as a backend worked pretty smoothly, but duplicity does not have good rclone support, so I used a Python backend written by Francesco Magno and hosted here: https://github.com/GilGalaad/duplicity-rclone/blob/master/README.md

I had a couple of issues getting duplicity to work with this backend, so I will show how to get around them.

First:
1. Make sure rclone is working with your rclone config and can at least "ls" your bucket.
2. Setup a gpg key.
3. Copy rclonebackend.py to duplicity backends folder. In my case /usr/lib64/python2.7/site-packages/duplicity/backends

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 /tmp rclone://mycompany-POC-phoenix:dr01-duplicity
InvalidBackendURL: Syntax error (port) in: rclone://mycompany-POC-phoenix:dr01-duplicity AFalse BNone Cmycompany-POC-phoenix:dr01-duplicity

## Hack backend.py

# diff /usr/lib64/python2.7/site-packages/duplicity/backend.py /tmp/backend.py 
303c303
<             if not (self.scheme in ['rsync'] and re.search('::[^:]*$', self.url_string) or (self.scheme in ['rclone'])):
---
>             if not (self.scheme in ['rsync'] and re.search('::[^:]*$', self.url_string)):

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 /tmp rclone://mycompany-POC-phoenix:dr01-duplicity
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
No signatures found, switching to full backup.
--------------[ Backup Statistics ]--------------
StartTime 1533652997.49 (Tue Aug  7 14:43:17 2018)
EndTime 1533653022.35 (Tue Aug  7 14:43:42 2018)
ElapsedTime 24.86 (24.86 seconds)
SourceFiles 50
SourceFileSize 293736179 (280 MB)
NewFiles 50
NewFileSize 136467418 (130 MB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 50
RawDeltaSize 293723433 (280 MB)
TotalDestinationSizeChange 279406571 (266 MB)
Errors 0
-------------------------------------------------

# rclone ls mycompany-POC-phoenix:dr01-duplicity
  1773668 duplicity-full-signatures.20180807T144317Z.sigtar.gpg
      485 duplicity-full.20180807T144317Z.manifest.gpg
209763240 duplicity-full.20180807T144317Z.vol1.difftar.gpg
 69643331 duplicity-full.20180807T144317Z.vol2.difftar.gpg

# PASSPHRASE="mypassphrase" duplicity --encrypt-key 094CA414 collection-status rclone://mycompany-POC-phoenix:dr01-duplicity
Last full backup date: Tue Aug  7 14:43:17 2018
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /root/.cache/duplicity/df529824ba5d10f9e31329e440c5efa6

Found 0 secondary backup chains.

Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Tue Aug  7 14:43:17 2018
Chain end time: Tue Aug  7 14:50:12 2018
Number of contained backup sets: 2
Total number of contained volumes: 3
 Type of backup set:                            Time:      Num volumes:
                Full         Tue Aug  7 14:43:17 2018                 2
         Incremental         Tue Aug  7 14:50:12 2018                 1
-------------------------
No orphaned or incomplete backup sets found.


Aug 03

Object Storage with Restic and Rclone

I have been playing around with some options for utilizing Object Storage for backups. Since I am working on Oracle Cloud Infrastructure (OCI), I am doing my POC with OCI Object Storage, which has both Swift and S3 Compatibility APIs to interface with. Of course, if you want commercial backup software, many products can use object storage as a backend now, so that would be the correct answer. If your needs do not warrant a commercial backup solution, you can try several things. A few options I played with:

1. Bareos server/client with the object storage droplet. Not working reliably. Too experimental with droplet?
2. Rclone and using tar to pipe with rclone's rcat feature. This works well but is not a backup solution as in incrementals etc.
3. Duplicati. In my case using rclone as connection since S3 interface on OCI did not work.
4. Duplicity. Could not get this one to work against the S3 interface on OCI.
5. Restic. In my case using rclone as connection since S3 interface on OCI did not work.

So far duplicati was not bad but had some bugs; it is beta software, so problems should probably be expected. Restic is doing a good job so far, and I show a recipe from my POC below:

Out of scope is setting up rclone and rclone.conf. Make sure you test that rclone can access your bucket first.

Restic binary

# wget https://github.com/restic/restic/releases/download/v0.9.1/restic_0.9.1_linux_amd64.bz2
2018-08-03 10:25:10 (3.22 MB/s) - ‘restic_0.9.1_linux_amd64.bz2’ saved [3786622/3786622]
# bunzip2 restic_0.9.1_linux_amd64.bz2 
# mv restic_0.9.1_linux_amd64 /usr/local/bin/
# chmod +x /usr/local/bin/restic_0.9.1_linux_amd64 
# mv /usr/local/bin/restic_0.9.1_linux_amd64 /usr/local/bin/restic
# /usr/local/bin/restic version
restic 0.9.1 compiled with go1.10.3 on linux/amd64

Initialize repo

# rclone ls s3_servers_phoenix:oci02a
# export RESTIC_PASSWORD="WRHYEjblahblah0VWq5qM"
# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a init
created restic repository 2bcf4f5864 at rclone:s3_servers_phoenix:oci02a

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

# rclone ls s3_servers_phoenix:oci02a
      155 config
      458 keys/530a67c4674b9abf6dcc9e7b75c6b319187cb8c3ed91e6db992a3e2cb862af63

Run a backup

# time /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:       1200934 new,     0 changed,     0 unmodified
Dirs:            2 new,     0 changed,     0 unmodified
Added:      37.334 GiB

processed 1200934 files, 86.311 GiB in 1:31:40
snapshot af4d5598 saved

real	91m40.824s
user	23m4.072s
sys	7m23.715s

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a snapshots
repository 2bcf4f58 opened successfully, password is correct
ID        Date                 Host              Tags        Directory
----------------------------------------------------------------------
af4d5598  2018-08-03 10:35:45  oci02a              /opt/applmgr/12.2
----------------------------------------------------------------------
1 snapshots

Run second backup

# /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup /opt/applmgr/12.2
repository 2bcf4f58 opened successfully, password is correct

Files:           0 new,     0 changed, 1200934 unmodified
Dirs:            0 new,     0 changed,     2 unmodified
Added:      0 B  

processed 1200934 files, 86.311 GiB in 47:46
snapshot a158688a saved

Example cron entry

# crontab -l
05 * * * * /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a backup -q /usr; /usr/local/bin/restic -r rclone:s3_servers_phoenix:oci02a forget -q --prune --keep-hourly 2 --keep-daily 7
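One caveat with that crontab entry: the `;` runs `forget --prune` even if the backup failed. Chaining with `&&` prunes only after a successful backup. A tiny sketch with stand-in functions (not the real restic binary) to show the behavior:

```shell
# Stand-ins for the two restic invocations in the cron entry above.
backup() { return 1; }           # simulate a failed backup run
forget() { echo "pruned"; }      # would be: restic forget -q --prune ...
backup && forget || echo "backup failed, skipping prune"
```

With `&&`, a failed backup never triggers the prune, so a transient outage cannot shrink your retention window.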


Jul 18

Tar to Object Storage Using rclone

Sometimes uploading/downloading with curl against an object storage back end works just fine, but in this case I wanted to tar straight into object storage. One option is rclone's rcat command. Some examples below.

This test was done using Oracle Cloud Infrastructure Object Storage with an Amazon S3 Compatibility API key. The test consists of:
- 2 196 914 files
- size per df -h: 122G
- local tar/gzip file for comparison: 52G
- a correct rclone.conf setup for the API key, plus OCI policies if required for this user

# rclone ls s3_servers_ashburn:SERVERS
 10738097 oci01-20180717_/etc.tgz
  2132252 oci01-20180718_/home/opc.tgz
   286946 oci01-20180717_/home/opc/terraform.tgz

# time tar zcpf - /opt/app2/12.2 | rclone rcat s3_servers_ashburn:SERVERS/oci01-20180718_/opt/app2/12.2.tgz
tar: Removing leading `/' from member names
real	149m48.812s
user	78m13.544s
sys	11m42.817s

# rclone ls s3_servers_ashburn:SERVERS
 10738097 oci01-20180717_/etc.tgz
  2132252 oci01-20180718_/home/opc.tgz
40476682243 oci01-20180718_/opt/app2/12.2.tgz
   286946 oci01-20180717_/home/opc/terraform.tgz
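One thing to watch with the `tar ... | rclone rcat` pipeline: the shell reports only the exit status of the last command, so a tar failure can be masked by a successful upload. Bash's PIPESTATUS array exposes both sides. A sketch against a temp directory, with `cat` standing in for `rclone rcat`:

```shell
# Check both exit codes of a tar-into-upload pipeline via PIPESTATUS (bash).
src=$(mktemp -d)
echo hello > "$src/file"
tar zcpf - -C "$src" . 2>/dev/null | cat > /dev/null   # cat stands in for rclone rcat
echo "tar=${PIPESTATUS[0]} upload=${PIPESTATUS[1]}"
```

PIPESTATUS must be read immediately after the pipeline, before any other command overwrites it.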


Mar 31

Borg Backup and Rclone to Object Storage

I recently used Borg to protect some critical files and am jotting down some notes here.

Borg exists in many distribution repos, so it is easy to install. Where it is not in a repo, the project provides pre-compiled binaries that can easily be added to your Linux OS.

Pick a server to act as your backup server (repository): pretty much any Linux server you can direct your clients to send their backups to. You want to make the backup folder big enough, of course.

Using Borg backup across SSH with sshkeys
https://opensource.com/article/17/10/backing-your-machines-borg

# yum install borgbackup
# useradd borg
# passwd borg
# sudo su - borg 
$ mkdir /mnt/backups
$ cat /home/borg/.ssh/authorized_keys
ssh-rsa AAAAB3N[..]6N/Yw== root@server01
$ borg init /mnt/backups/repo1 -e none

 **** CLIENT server01 with single binary(no repo for borgbackup on this server)

$ sudo su - root
# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): /root/.ssh/borg_key
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/borg_key.
Your public key has been saved in /root/.ssh/borg_key.pub.

# ./backup.sh 
Warning: Attempting to access a previously unknown unencrypted repository!
Do you want to continue? [yN] y
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0.
Done.
------------------------------------------------------------------------------
Archive name: server01-2018-03-29
Archive fingerprint: 79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3
Time (start): Thu, 2018-03-29 19:32:45
Time (end):   Thu, 2018-03-29 19:32:47
Duration: 1.36 seconds
Number of files: 1069
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               42.29 MB             15.41 MB             11.84 MB
All archives:               42.29 MB             15.41 MB             11.84 MB

                       Unique chunks         Total chunks
Chunk index:                    1023                 1059
------------------------------------------------------------------------------
Keeping archive: server01-2018-03-29                     Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]

*** RECOVER test. Done on the BORG server directly, but I will also test from the client; that may need the BORG_RSH variable.

$ borg list repo1
server01-2018-03-29                     Thu, 2018-03-29 19:32:45 [79f91d82291db36be7de90c421c082d7ee4333d11ac77cd5d543a4fe568431e3]

$ borg list repo1::server01-2018-03-29 | less

$ cd /tmp
$ borg extract /mnt/backups/repo1::server01-2018-03-29  etc/hosts

$ ls -l etc/hosts 
-rw-r--r--. 1 borg borg 389 Mar 26 15:50 etc/hosts

APPENDIX: client backup.sh cron and source

# crontab -l
0 0 * * * /root/scripts/backup.sh > /dev/null 2>&1

# sudo su - root
# cd scripts/
# cat backup.sh 
#!/usr/bin/env bash

##
## Set environment variables
##

## if you don't use the standard SSH key,
## you have to specify the path to the key like this
export BORG_RSH='ssh -i /root/.ssh/borg_key'

## You can save your borg passphrase in an environment
## variable, so you don't need to type it in when using borg
# export BORG_PASSPHRASE="top_secret_passphrase"

##
## Set some variables
##

LOG="/var/log/borg/backup.log"
BACKUP_USER="borg"
REPOSITORY="ssh://${BACKUP_USER}@10.1.1.2/mnt/backups/repo1"

#export BORG_PASSCOMMAND=''

#Bail if borg is already running, maybe previous run didn't finish
if pidof -x borg >/dev/null; then
    echo "Backup already running"
    exit
fi

##
## Output to a logfile
##

exec > >(tee -i ${LOG})
exec 2>&1

echo "###### Backup started: $(date) ######"

##
## At this place you could perform different tasks
## that will take place before the backup, e.g.
##
## - Create a list of installed software
## - Create a database dump
##

##
## Transfer the files into the repository.
## In this example the folders root, etc,
## var/www and home will be saved.
## In addition you find a list of excludes that should not
## be in a backup and are excluded by default.
##

echo "Transfer files ..."
/usr/local/bin/borg create -v --stats                   \
    $REPOSITORY::'{hostname}-{now:%Y-%m-%d}'    \
    /root                                \
    /etc                                 \
    /u01                                 \
    /home                                \
    --exclude /dev                       \
    --exclude /proc                      \
    --exclude /sys                       \
    --exclude /var/run                   \
    --exclude /run                       \
    --exclude /lost+found                \
    --exclude /mnt                       \
    --exclude /var/lib/lxcfs


# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machine's archives also.
/usr/local/bin/borg prune -v --list $REPOSITORY --prefix '{hostname}-' \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6

echo "###### Backup ended: $(date) ######"

In addition to using Borg, this test was also about pushing backups to Oracle OCI object storage, so below are the steps I followed. I had to use the newest rclone because v1.36 had weird issues with the Oracle OCI S3 interface.

# curl https://rclone.org/install.sh | sudo bash

# df -h | grep borg
/dev/mapper/vg01-vg01--lv01  980G  7.3G  973G   1% /mnt/backups

# sudo su - borg

$ cat ~/.config/rclone/rclone.conf 
[s3_backups]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1..aaaa[snipped]
secret_access_key = KJFevw6s=
region = us-ashburn-1
endpoint = [snipped].compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

$ rclone  lsd s3_backups: 
          -1 2018-03-27 21:07:11        -1 backups
          -1 2018-03-29 13:39:42        -1 repo1
          -1 2018-03-26 22:23:35        -1 terraform
          -1 2018-03-27 14:34:55        -1 terraform-src

Initial sync. Note I am using sync here, but be warned: decide whether you want copy or sync. sync makes the destination mirror the source, so it will delete files on the destination that no longer exist on the source (it does not touch the source itself); copy never deletes anything.

$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:37:00 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:37:00 INFO  : README: Copied (replaced existing)
2018/03/29 22:37:00 INFO  : hints.38: Copied (new)
2018/03/29 22:37:00 INFO  : integrity.38: Copied (new)
2018/03/29 22:37:00 INFO  : data/0/17: Copied (new)
2018/03/29 22:37:00 INFO  : config: Copied (replaced existing)
2018/03/29 22:37:00 INFO  : data/0/18: Copied (new)
2018/03/29 22:37:00 INFO  : index.38: Copied (new)
2018/03/29 22:37:59 INFO  : data/0/24: Copied (new)
2018/03/29 22:38:00 INFO  : 
Transferred:   1.955 GBytes (33.361 MBytes/s)
Errors:                 0
Checks:                 2
Transferred:            8
Elapsed time:        1m0s
Transferring:
 *                                     data/0/21: 100% /501.284M, 16.383M/s, 0s
 *                                     data/0/22: 98% /500.855M, 18.072M/s, 0s
 *                                     data/0/23: 100% /500.951M, 14.231M/s, 0s
 *                                     data/0/25:  0% /501.379M, 0/s, -

2018/03/29 22:38:00 INFO  : data/0/22: Copied (new)
2018/03/29 22:38:00 INFO  : data/0/23: Copied (new)
2018/03/29 22:38:01 INFO  : data/0/21: Copied (new)
2018/03/29 22:38:57 INFO  : data/0/25: Copied (new)
2018/03/29 22:38:58 INFO  : data/0/27: Copied (new)
2018/03/29 22:38:59 INFO  : data/0/26: Copied (new)
2018/03/29 22:38:59 INFO  : data/0/28: Copied (new)
2018/03/29 22:39:00 INFO  : 
Transferred:   3.919 GBytes (33.438 MBytes/s)
Errors:                 0
Checks:                 2
Transferred:           15
Elapsed time:        2m0s
Transferring:
 *                                     data/0/29:  0% /500.335M, 0/s, -
 *                                     data/0/30:  0% /500.294M, 0/s, -
 *                                     data/0/31:  0% /500.393M, 0/s, -
 *                                     data/0/32:  0% /500.264M, 0/s, -

2018/03/29 22:39:45 INFO  : data/0/29: Copied (new)
2018/03/29 22:39:52 INFO  : data/0/30: Copied (new)
2018/03/29 22:39:52 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:39:55 INFO  : data/0/32: Copied (new)
2018/03/29 22:39:55 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:39:56 INFO  : data/0/31: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/36: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/37: Copied (new)
2018/03/29 22:39:57 INFO  : data/0/38: Copied (new)
2018/03/29 22:39:58 INFO  : data/0/1: Copied (replaced existing)
2018/03/29 22:40:00 INFO  : 
Transferred:   5.874 GBytes (33.413 MBytes/s)
Errors:                 0
Checks:                 3
Transferred:           23
Elapsed time:        3m0s
Transferring:
 *                                     data/0/33:  0% /500.895M, 0/s, -
 *                                     data/0/34:  0% /501.276M, 0/s, -
 *                                     data/0/35:  0% /346.645M, 0/s, -

2018/03/29 22:40:25 INFO  : data/0/35: Copied (new)
2018/03/29 22:40:28 INFO  : data/0/33: Copied (new)
2018/03/29 22:40:30 INFO  : data/0/34: Copied (new)
2018/03/29 22:40:30 INFO  : Waiting for deletions to finish
2018/03/29 22:40:30 INFO  : data/0/3: Deleted
2018/03/29 22:40:30 INFO  : index.3: Deleted
2018/03/29 22:40:30 INFO  : hints.3: Deleted
2018/03/29 22:40:30 INFO  : 
Transferred:   7.191 GBytes (34.943 MBytes/s)
Errors:                 0
Checks:                 6
Transferred:           26
Elapsed time:     3m30.7s

Run another sync showing nothing to do.

$ /usr/bin/rclone -v sync /mnt/borg/repo1 s3_backups:repo1
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:13 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:13 INFO  : Waiting for deletions to finish
2018/03/29 22:43:13 INFO  : 
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                26
Transferred:            0
Elapsed time:       100ms

Test script and check log

$ cd scripts/
$ ./s3_backup.sh 
$ more ../s3_backups.log 
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Modify window is 1ns
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Waiting for checks to finish
2018/03/29 22:43:56 INFO  : S3 bucket repo1: Waiting for transfers to finish
2018/03/29 22:43:56 INFO  : Waiting for deletions to finish
2018/03/29 22:43:56 INFO  : 
Transferred:      0 Bytes (0 Bytes/s)
Errors:                 0
Checks:                26
Transferred:            0
Elapsed time:       100ms

Check size used on object storage.

$ rclone size s3_backups:repo1
Total objects: 26
Total size: 7.191 GBytes (7721115523 Bytes)

APPENDIX: s3_backup.sh crontab and source

$ crontab -l
50 23 * * * /home/borg/scripts/s3_backup.sh

$ cat s3_backup.sh 
#!/bin/bash
set -e

#repos=( repo1 repo2 repo3 )
repos=( repo1 )

#Bail if rclone is already running, maybe previous run didn't finish
if pidof -x rclone >/dev/null; then
    echo "Process already running"
    exit
fi

for i in "${repos[@]}"
do
    #Lets see how much space is used by directory to back up
    #if directory is gone, or has gotten small, we will exit
    space=`du -s /mnt/backups/$i|awk '{print $1}'`

    if (( $space < 3450000 )); then
        echo "EXITING - not enough space used in $i"
        exit
    fi

    /usr/bin/rclone -v sync /mnt/backups/$i s3_backups:$i >> /home/borg/s3_backups.log 2>&1
done
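That `du` guard can be exercised locally. A sketch with a throwaway directory and a lowered threshold (10 KB instead of 3450000), just to show the mechanics:

```shell
# Recreate the size guard from s3_backup.sh against a temp directory.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/blob" bs=1024 count=64 2>/dev/null   # ~64 KB of data
space=$(du -s "$dir" | awk '{print $1}')
if (( space < 10 )); then
    echo "EXITING - not enough space used in $dir"
else
    echo "guard passed"
fi
```

The point of the guard: if the borg repo directory has vanished or shrunk drastically, bail out before `rclone sync` mirrors that damage to the bucket.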


Mar 29

Rclone and OCI S3 Interface

I am testing rclone against Oracle Cloud Infrastructure (OCI) object storage and recording what worked for me.

Note I could not get the Swift interface to work with rclone, duplicity, or swiftclient yet, although straightforward curl does work against the Swift interface.

rclone configuration generated with rclone config

# cat /root/.config/rclone/rclone.conf
[s3_backups]
type = s3
env_auth = false
access_key_id = ocid1.credential.oc1..a<redacted>ta
secret_access_key = K<redacted>6s=
region = us-ashburn-1
endpoint = <tenancy>.compat.objectstorage.us-ashburn-1.oraclecloud.com
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 

Issue with max-keys. This problem, although very difficult to track down, was also preventing copy/sync of folders even though a single file worked. rclone v1.36 had been installed from the Ubuntu repos, and the issue was resolved with a newer version.

# rclone ls s3_backups:repo1
2018/03/29 08:55:44 Failed to ls: InvalidArgument: The 'max-keys' parameter must be between 1 and 1000 (it was 1024) status code: 400, request id: fa704a55-44a8-1146-1b62-688df0366f63

Update and try again.

# curl https://rclone.org/install.sh | sudo bash
[..]
rclone v1.40 has successfully installed.

# rclone -V
rclone v1.40
- os/arch: linux/amd64
- go version: go1.10

# rclone ls s3_backups:repo1
      655 config
       38 hints.3

# rclone copy /root/backup/repo1 s3_backups:repo1

# rclone sync /root/backup/repo1 s3_backups:repo1

# rclone ls s3_backups:repo1
       26 README
      655 config
       38 hints.3
    82138 index.3
  5245384 data/0/1
  3067202 data/0/3

# rclone lsd s3_backups:
          -1 2018-03-27 21:07:11        -1 backups
          -1 2018-03-29 13:39:42        -1 repo1
          -1 2018-03-26 22:23:35        -1 terraform
          -1 2018-03-27 14:34:55        -1 terraform-src

References:
https://rclone.org/docs/
https://docs.us-phoenix-1.oraclecloud.com/api/#/en/s3objectstorage/20160918/

Rclone: Rsync for Cloud Storage

In a future article I will add my testing around BorgBackup + rclone + OCI objectstorage from this interesting idea: https://opensource.com/article/17/10/backing-your-machines-borg
