
May 20

Powerline In Visual Studio Code

There are some examples here:

I chose to follow a comment suggestion and use Meslo.

Download the Meslo font:

https://github.com/ryanoasis/nerd-fonts/releases/tag/v2.1.0

rrosso  ~  Downloads  sudo -i
root@pop-os:~# cd /usr/share/fonts/truetype
root@pop-os:/usr/share/fonts/truetype# mkdir Meslo
root@pop-os:/usr/share/fonts/truetype# cd Meslo/

root@pop-os:/usr/share/fonts/truetype/Meslo# unzip /home/rrosso/Downloads/Meslo.zip 
Archive:  /home/rrosso/Downloads/Meslo.zip
  inflating: Meslo LG M Bold Nerd Font Complete Mono.ttf  

root@pop-os:/usr/share/fonts/truetype/Meslo#  fc-cache -vf /usr/share/fonts/
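
To verify the cache picked up the new font, list it with fc-list (the exact family name shown may differ slightly):

root@pop-os:/usr/share/fonts/truetype/Meslo# fc-list | grep -i meslo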

Update the VS Code settings:

rrosso  ~  .config  Code  User  pwd
/home/rrosso/.config/Code/User

rrosso  ~  .config  Code  User  cat settings.json 
{
    "editor.fontSize": 12,
    "editor.fontFamily": "MesloLGM Nerd Font",
    "terminal.integrated.fontSize": 11,
    "terminal.integrated.fontFamily": "MesloLGM Nerd Font",
    "editor.minimap.enabled": false
}


Apr 19

htmly flat-file blog

Test Htmly on Ubuntu 19.10

# apt install apache2 php php-zip php-xml

# cat /etc/apache2/sites-available/000-default.conf 
...
<VirtualHost *:80>
...
    DocumentRoot /var/www/html

    <Directory "/var/www/html/">
          Options FollowSymLinks Indexes
          AllowOverride All
          Order Allow,Deny
          Allow from all
          DirectoryIndex index.php
    </Directory>
...

# systemctl enable apache2
# systemctl start apache2
# systemctl status apache2
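
If htmly's clean URLs rely on .htaccess rewrites (which is what the AllowOverride All above suggests), the rewrite module may still need enabling on a stock Ubuntu Apache install; this step is an assumption, not part of the original test:

# a2enmod rewrite
# systemctl restart apache2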

# cd /var/www/html
# wget https://github.com/danpros/htmly/releases/latest
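
Note that the releases/latest URL as written saves an HTML page rather than the archive itself; what actually has to land in the web root is the content of the release zip linked from that page. A hedged sketch of that step (the zip filename is a placeholder for whatever you downloaded):

# apt install unzip
# unzip /root/htmly.zip -d /var/www/html    # htmly.zip is a hypothetical name for the downloaded release archive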

Visit http://localhost/installer.php and run through the initial setup steps.

# cd /var/www
# chown -R www-data html/
# cd html/

# ls -l backup/
total 5
-rw-r--r-- 1 www-data www-data 1773 Apr 19 12:07 htmly_2020-04-19-12-07-28.zip

# tree content/
content/
├── admin
│   └── blog
│       ├── general
│       │   ├── draft
│       │   └── post
│       │       └── 2020-04-19-12-05-14_general_post-1.md
│       └── uncategorized
│           └── post
└── data
    ├── category
    │   └── general.md
    └── tags.lang

9 directories, 3 files

# cat content/admin/blog/general/post/2020-04-19-12-05-14_general_post-1.md 
<!--t post 1 t-->
<!--d this is a test post #1 d-->
<!--tag general tag-->


Dec 14

Bash Read Json Config File

A couple of things here:

  • I wanted to do some restic scripts.
  • At the same time I wanted to use a configuration file. The restic developers are working on this functionality for restic itself, possibly using TOML.

Meanwhile I tried JSON, since I can definitely use bash/JSON for other applications. As you know, bash is not great at this kind of thing, specifically arrays and the like. So this example reads a configuration file and processes the JSON. To complicate things further, my JSON typically needs arrays (lists of values), as you can see in the restic example below for folders, excludes and tags.

You will also note a peculiar problem with bash. When you pipe into a while loop, the loop runs in a subshell, so variables set inside it are not visible in your main shell; appending to a variable inside the while loop therefore produces nothing afterwards. From bash 4.2 you can use shopt -s lastpipe to get around this (a small illustration follows). Apparently this is not a problem with ksh.
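
A minimal illustration of that subshell behavior, separate from the restic script below:

#!/bin/bash
# Piping into while runs the loop in a subshell: the append is lost.
total=""
printf 'a\nb\n' | while read -r line; do total+="$line "; done
echo "without lastpipe: '$total'"   # prints ''

# With lastpipe (bash 4.2+, effective when job control is off, as in scripts)
# the last stage of the pipeline runs in the current shell instead.
shopt -s lastpipe
printf 'a\nb\n' | while read -r line; do total+="$line "; done
echo "with lastpipe: '$total'"      # prints 'a b '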

This is not a working restic script; it is a script that reads a configuration file. It just happens to be for something I am going to do with restic.

Example JSON config file.

$ cat restic-jobs.json 
{ "Jobs":
  [
   {
    "jobname": "aws-s3",
    "repo": "sftp:myuser@192.168.1.112:/TANK/RESTIC-REPO",
    "sets":
      [
       {
        "folders": [ "/DATA" ],
        "excludes": [ ".snapshots","temp" ],
        "tags": [ "data","biz" ]
       },
       {
        "folders": [ "/ARCHIVE" ],
        "excludes": [ ".snapshots","temp" ],
        "tags": [ "archive","biz" ]
       }
      ],
      "quiet": true
    },
    {
     "jobname": "azure-onedrive",
     "repo": "rclone:azure-onedrive:restic-backups",
     "sets":
       [
        {
         "folders": [ "/DATA" ],
         "excludes": [ ".snapshots","temp" ],
         "tags": [ "data","biz" ]
        },
        {
         "folders": [ "/ARCHIVE" ],
         "excludes": [ ".snapshots","temp" ],
         "tags": [ "archive","biz" ]
        }
       ],
     "quiet": true
    }
  ]
}
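
Before the script consumes it, the file can be sanity-checked with jq; this simply lists the job names:

$ jq -r '.Jobs[].jobname' restic-jobs.json
aws-s3
azure-onedrive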

Script details.

$ cat restic-jobs.sh 
#!/bin/bash
#v0.9.1

JOB=aws-s3
eval $(jq --arg JOB ${JOB} -r '.Jobs[] | select(.jobname==$JOB) | del(.sets) | to_entries[] | .key + "=\"" + (.value|tostring) + "\""' restic-jobs.json)
if [[ "$jobname" == "" ]]; then
  echo "no job found in config: $JOB"
  exit
fi

echo "found: $jobname"

#sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB) | .sets | .[]' restic-jobs.json )

echo

sets=$(jq --arg JOB ${JOB} -r '.Jobs[] | select (.jobname==$JOB)' restic-jobs.json)

backup_jobs=()
## need this for bash issue with variables and pipe subshell
shopt -s lastpipe

echo "$sets" | jq -rc '.sets[]' | while IFS='' read -r set; do
    cmd_line="restic backup -q --json"

    folders=$(echo "$set" | jq -r '.folders | .[]')
    for st in $folders; do cmd_line+=" $st"; done
    excludes=$(echo "$set" | jq -r '.excludes | .[]')
    for st in $excludes; do cmd_line+=" --exclude $st"; done
    tags=$(echo "$set" | jq -r '.tags | .[]')
    for st in $tags; do cmd_line+=" --tag $st"; done

    backup_jobs+=("$cmd_line")
done

for i in "${backup_jobs[@]}"; do
  echo "cmd_line: $i"
done

Script run example. Note I am not passing the job name; it is just hard-coded at the top for my test.

$ ./restic-jobs.sh 
found: iqonda-aws-s3

cmd_line: restic backup -q --json  /DATA --exclude .snapshots --exclude temp --tag iqonda --tag biz
cmd_line: restic backup -q --json  /ARCHIVE --exclude .snapshots --exclude temp --tag iqonda --tag biz


Oct 25

Restic snapshot detail json to csv

Restic shows details of a snapshot. Sometimes you want that as CSV, but the JSON output for paths, excludes and tags contains lists, which will choke the @csv jq filter. Furthermore, not all snapshots have the excludes key. Here are some snippets that solve the above: use join to collapse the lists, and use if to test whether the key exists.

# restic -r $REPO snapshots --last --json | jq -r '.[] | [.hostname,.short_id,.time,(.paths|join(",")),if (.excludes) then (.excludes|join(",")) else empty end]'
[
  "bkupserver.domain.com",
  "c56d3e2e",
  "2019-10-25T00:10:01.767408581-05:00",
  "/etc,/home,/root,/u01/backuplogs,/var/log,/var/spool/cron",
  "**/diag/**,/var/spool/lastlog"
]

And using the @csv filter:

# restic -r $REPO snapshots --last --json | jq -r '.[] | [.hostname,.short_id,.time,(.paths|join(",")),if (.excludes) then (.excludes|join(",")) else empty end] | @csv'
"bkupserver.domain.com","c56d3e2e","2019-10-25T00:10:01.767408581-05:00","/etc,/home,/root,/u01/backuplogs,/var/log,/var/spool/cron","**/diag/**,/var/spool/lastlog"


Jul 21

zfsbackup-go test with minio server

Recording my test with zfsbackup-go. While playing around with backup/DR/object storage, I also compared the concept here with a previous test around restic/rclone/object storage.

In general, ZFS snapshots and replication should work much better for file systems containing huge numbers of files. Most solutions struggle with millions of files: rsync at the file level, and restic/rclone at the object storage level. Walking the tree is just never efficient. This test works well but has not been scaled yet; I plan to work on that, as well as on seeing how well the bucket can be synced to different regions.

Minio server

Tip: minio server has a nice browser interface

# docker run -p 9000:9000 --name minio1 -e "MINIO_ACCESS_KEY=AKIAIOSFODNN7EXAMPLE" -e "MINIO_SECRET_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" -v /DATA/minio-repos/:/minio-repos minio/minio server /minio-repos

 You are running an older version of MinIO released 1 week ago 
 Update: docker pull minio/minio:RELEASE.2019-07-17T22-54-12Z 


Endpoint:  http://172.17.0.2:9000  http://127.0.0.1:9000

Browser Access:
   http://172.17.0.2:9000  http://127.0.0.1:9000

Object API (Amazon S3 compatible):
   Go:         https://docs.min.io/docs/golang-client-quickstart-guide
   Java:       https://docs.min.io/docs/java-client-quickstart-guide
   Python:     https://docs.min.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
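
The target bucket is not created in the transcript; one way to do it is with the MinIO client, using the same demo credentials (a hedged sketch, or simply create the bucket in the browser interface mentioned above):

# mc config host add minio1 http://192.168.1.112:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# mc mb minio1/zfs-poc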

server 1:

This server simulates our "prod" server. We create an initial data set in /DATA, take a snapshot and back it up to object storage.
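
The transcript assumes a ZFS pool/dataset named DATA already exists and is mounted at /DATA; a hedged sketch of creating one (the disk device is a placeholder):

# zpool create DATA /dev/sdb    # /dev/sdb is a hypothetical disk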

# rsync -a /media/sf_DATA/MyWorkDocs /DATA/

# du -sh /DATA/MyWorkDocs/
1.5G	/DATA/MyWorkDocs/

# export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
# export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# export AWS_S3_CUSTOM_ENDPOINT=http://192.168.1.112:9000
# export AWS_REGION=us-east-1

# zfs snapshot DATA@20190721-0752

# /usr/local/bin/zfsbackup-go send --full DATA s3://zfs-poc
2019/07/21 07:53:12 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Done.
	Total ZFS Stream Bytes: 1514016976 (1.4 GiB)
	Total Bytes Written: 1176757570 (1.1 GiB)
	Elapsed Time: 1m17.522630438s
	Total Files Uploaded: 7

# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 07:56:57 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 1 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)


There are 4 manifests found locally that are not on the target destination.

server 2:

This server is a possible DR or replacement server; the idea is that it lives somewhere else, preferably another cloud region or data center.

# export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
# export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# export AWS_S3_CUSTOM_ENDPOINT=http://192.168.1.112:9000
# export AWS_REGION=us-east-1
# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 07:59:16 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 1 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
DATA  2.70M  96.4G    26K  /DATA
# zfs list -t snapshot
no datasets available
# ls /DATA/

** using -F. This is a cold-DR-style test with no existing infrastructure/ZFS datasets on the target system.
# /usr/local/bin/zfsbackup-go receive --auto DATA s3://zfs-poc DATA -F
2019/07/21 08:05:28 Ignoring user provided number of cores (2) and using the number of detected cores (1).
2019/07/21 08:06:42 Done. Elapsed Time: 1m13.968871681s
2019/07/21 08:06:42 Done.
# ls /DATA/
MyWorkDocs
# du -sh /DATA/MyWorkDocs/
1.5G	/DATA/MyWorkDocs/
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
DATA  1.41G  95.0G  1.40G  /DATA
# zfs list -t snapshot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
DATA@20190721-0752   247K      -  1.40G  -

That concludes one test. In theory that is a cold DR situation where you have nothing really ready until you need it: think build a server, then recover /DATA from the ZFS backup in object storage. The initial restore will be very long, depending on your data size.

Read on if you are thinking you want to go more towards pilot-light or warm DR: we can run incremental backups, then on the target server keep receiving snapshots periodically into our target ZFS file system DATA. You may ask why not just do real ZFS send/receive with no object storage in between. There is no good answer, except that there are many ways you could solve DR and this is one of them. In this case I could argue that object storage is cheap and has some very good redundancy/availability features. Also, your replication between regions may use a very fast/cheap backhaul channel, whereas your VPN or FastConnect WAN between regions may be slow and/or expensive.

You could also decide that something between cold and warm DR is where you want to be, and therefore only apply the full DATA receive when you are ready. That could mean a lot of snapshots to apply afterwards. Or maybe not; I have not checked on that aspect of the recovery process.

Regardless, I like the idea of leveraging ZFS with object storage. You may not have a use for this, but I definitely will.

Incremental snapshots:

server 1:

Add more data to the source, take a snapshot, and back up to object storage.

# rsync -a /media/sf_DATA/MySrc /DATA/
# du -sh /DATA/MySrc/
1.1M	/DATA/MySrc/

# zfs snapshot DATA@20190721-0809
# zfs list -t snapshot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
DATA@20190721-0752    31K      -  1.40G  -
DATA@20190721-0809     0B      -  1.41G  -

# /usr/local/bin/zfsbackup-go send --increment DATA s3://zfs-poc
2019/07/21 08:10:49 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Done.
	Total ZFS Stream Bytes: 1202792 (1.1 MiB)
	Total Bytes Written: 254909 (249 KiB)
	Elapsed Time: 228.123591ms
	Total Files Uploaded: 2

# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 08:11:17 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 2 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)


Volume: DATA
	Snapshot: 20190721-0809 (2019-07-21 08:09:47 -0500 CDT)
	Incremental From Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Intermediary: false
	Replication: false
	Archives: 1 - 254909 bytes (249 KiB)
	Volume Size (Raw): 1202792 bytes (1.1 MiB)
	Uploaded: 2019-07-21 08:10:49.3280703 -0500 CDT (took 214.139056ms)

There are 4 manifests found locally that are not on the target destination.

server 2:

# /usr/local/bin/zfsbackup-go list s3://zfs-poc
2019/07/21 08:11:44 Ignoring user provided number of cores (2) and using the number of detected cores (1).
Found 2 backup sets:

Volume: DATA
	Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Replication: false
	Archives: 6 - 1176757570 bytes (1.1 GiB)
	Volume Size (Raw): 1514016976 bytes (1.4 GiB)
	Uploaded: 2019-07-21 07:53:12.42972167 -0500 CDT (took 1m16.313538867s)


Volume: DATA
	Snapshot: 20190721-0809 (2019-07-21 08:09:47 -0500 CDT)
	Incremental From Snapshot: 20190721-0752 (2019-07-21 07:52:31 -0500 CDT)
	Intermediary: false
	Replication: false
	Archives: 1 - 254909 bytes (249 KiB)
	Volume Size (Raw): 1202792 bytes (1.1 MiB)
	Uploaded: 2019-07-21 08:10:49.3280703 -0500 CDT (took 214.139056ms)

** not sure why I need to force (-F); maybe because the dataset is mounted? The message looks like this:
** cannot receive incremental stream: destination DATA has been modified since most recent snapshot
*** 2019/07/21 08:12:25 Error while trying to read from volume DATA|20190721-0752|to|20190721-0809.zstream.gz.vol1 - io: read/write on closed pipe

# /usr/local/bin/zfsbackup-go receive --auto DATA s3://zfs-poc DATA -F
2019/07/21 08:12:53 Ignoring user provided number of cores (2) and using the number of detected cores (1).
2019/07/21 08:12:54 Done. Elapsed Time: 379.712693ms
2019/07/21 08:12:54 Done.

# ls /DATA/
MySrc  MyWorkDocs
# du -sh /DATA/MySrc/
1.1M	/DATA/MySrc/
# zfs list -t snapshot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
DATA@20190721-0752    30K      -  1.40G  -
DATA@20190721-0809    34K      -  1.41G  -
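
For the pilot-light idea mentioned earlier, the same two commands could simply run on a schedule; a hedged sketch of what the periodic wrapper scripts might contain (the schedule and naming are assumptions):

# server 1, run from cron: take a new snapshot and push an incremental to the bucket
zfs snapshot "DATA@$(date +%Y%m%d-%H%M)"
/usr/local/bin/zfsbackup-go send --increment DATA s3://zfs-poc

# server 2, run from cron: pull anything new into the local DATA dataset
/usr/local/bin/zfsbackup-go receive --auto DATA s3://zfs-poc DATA -F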

LINK: https://github.com/someone1/zfsbackup-go


Jul 19

OCI Bucket Delete Fail

If you have trouble deleting an object storage bucket in Oracle Cloud Infrastructure you may have to clear old multipart uploads. The message may look something like this: Bucket named 'DR-Validation' has pending multipart uploads. Stop all multipart uploads first.

At the time, the only way I could do this was through the API; it did not appear that the CLI or Console could clear out the uploads. Below is a little Python that may help, just to show the idea. Of course, if you have thousands of multipart uploads (yes, it's possible) you will need to change it; this was written for only one or two.

#!/usr/bin/python
#: Script Name  : lobjectparts.py
#: Author       : Riaan Rossouw
#: Date Created : June 13, 2019
#: Date Updated : July 18, 2019
#: Description  : Python Script to list multipart uploads
#: Examples     : lobjectparts.py -t tenancy -r region -b bucket
#:              : lobjectparts.py --tenancy <ocid> --region  <region> --bucket <bucket>

## Will need the api modules
## new: https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/
## old: https://oracle-bare-metal-cloud-services-python-sdk.readthedocs.io/en/latest/installation.html#install
## https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/api/object_storage/client/oci.object_storage.ObjectStorageClient.html

from __future__ import print_function
import os, optparse, sys, time, datetime
import oci

__version__ = '0.9.1'
optdesc = 'This script is used to list multipart uploads in a bucket'

parser = optparse.OptionParser(version='%prog version ' + __version__)
parser.formatter.max_help_position = 50
parser.add_option('-t', '--tenancy', help='Specify Tenancy ocid', dest='tenancy', action='append')
parser.add_option('-r', '--region', help='region', dest='region', action='append')
parser.add_option('-b', '--bucket', help='bucket', dest='bucket', action='append')

opts, args = parser.parse_args()

def showMultipartUploads(identity, bucket_name):
  object_storage = oci.object_storage.ObjectStorageClient(config)
  namespace_name = object_storage.get_namespace().data
  uploads = object_storage.list_multipart_uploads(namespace_name, bucket_name, limit = 1000).data
  print(' {:35}  | {:15} | {:30} | {:35} | {:20}'.format('bucket','namespace','object','time_created','upload_id'))
  for o in uploads:
    print(' {:35}  | {:15} | {:30} | {:35} | {:20}'.format(o.bucket, o.namespace, o.object, str(o.time_created), o.upload_id))
    confirm = input("Confirm if you want to abort this multipart upload (Y/N): ")
    if confirm == "Y":
      response = object_storage.abort_multipart_upload(o.namespace, o.bucket, o.object, o.upload_id).data
    else:
      print ("Chose to not do the abort action on this multipart upload at this time...")

def main():
  mandatories = ['tenancy','region','bucket']
  for m in mandatories:
    if not opts.__dict__[m]:
      print ("mandatory option is missing\n")
      parser.print_help()
      exit(-1)

  print ('Multipart Uploads')
  config['region'] = opts.region[0]
  identity = oci.identity.IdentityClient(config)
  showMultipartUploads(identity, opts.bucket[0])  # bucket is collected with action='append', so use the first value

if __name__ == '__main__':
  config = oci.config.from_file("/root/.oci/config","oci.api")
  main()


Jun 18

Linux Screen Utility Buffer Scrolling

If you use the Linux screen utility a lot for long-running jobs, you may have experienced scrolling issues. The quickest fix is to press Ctrl-a and then Escape to enter scrollback (copy) mode. You should now be able to use the Up/Down keys or even PgUp/PgDn. Press Escape again to exit scrolling.

In my case the terminal is usually running in a VirtualBox guest, so you may also have to take VirtualBox key grabbing/assignment into account.
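
If the scrollback history itself is too short, it can be raised in ~/.screenrc (the value below is just an example):

# ~/.screenrc
defscrollback 10000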


Jun 14

Bash Array Dynamic Name

Sometimes you want to have dynamic array names to simplify code. Below is one way of making the array name dynamic in a loop.

#!/bin/bash

section1=(
 fs-01
 fs-02
)
section2=(
 fs-03
)

function snap() {
  tag=$1
  echo
  echo "TAG: $tag"
  x=$tag
  var=$x[@]
  for f in "${!var}"
  do
    echo "fss: $f"
  done
}

snap "section1"
snap "section2"

And the output looks like this.

# ./i.sh

TAG: section1
fss: fs-01
fss: fs-02

TAG: section2
fss: fs-03
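
On bash 4.3 and later, a nameref is an alternative to the ${!var} indirection; a minimal sketch reusing the arrays above:

function snap_nameref() {
  local -n fss=$1            # nameref: fss now refers to the array named by $1
  echo
  echo "TAG: $1"
  for f in "${fss[@]}"; do
    echo "fss: $f"
  done
}

snap_nameref "section1"
snap_nameref "section2"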


Apr 24

OCI Cli Query

If you want to manipulate the output of Oracle Cloud Infrastructure CLI commands, you can pipe the output through jq. I have examples of jq elsewhere. You can also use the --query option, as follows.

$ oci network vcn list --compartment-id <> --config-file <> --profile <> --cli-rc-file <> --output table --query 'data[*].{"display-name":"display-name", "vcn-domain-name":"vcn-domain-name", "cidr-block":"cidr-block", "lifecycle-state":"lifecycle-state"}'
+--------------+-----------------+-----------------+-----------------------------+
| cidr-block   | display-name    | lifecycle-state | vcn-domain-name             |
+--------------+-----------------+-----------------+-----------------------------+
| 10.35.0.0/17 | My Primary VCN  | AVAILABLE       | myprimaryvcn.oraclevcn.com  |
+--------------+-----------------+-----------------+-----------------------------+

And for good measure also a jq example, plus the @csv filter.

$ oci os object list --config-file /root/.oci/config --profile oci-backup --bucket-name "commvault-backup" | jq -r '.data[] | [.name,.size] | @csv'
"SILTFS_04.23.2019_19.21/CV_MAGNETIC/_DIRECTORY_HOLDER_",0
"SILTFS_04.23.2019_19.21/_DIRECTORY_HOLDER_",0


Apr 18

Azure AD SSO Login to AWS CLI

Note that setting up the services themselves is out of scope here. This article is about using a Node application to log in to Azure on a client and then being able to use the AWS CLI. Specifically, this information applies to a Linux desktop.

Setting up the services are documented here: https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial

We are following this tutorial: https://github.com/dtjohnson/aws-azure-login, focused on one account having an administrative role and then switching to different accounts that allow the original role to administer resources.

Linux Lite 4.4 OS Setup

# cat /etc/issue
Linux Lite 4.2 LTS \n \l
# apt install nodejs npm
# npm install -g aws-azure-login --unsafe-perm
# chmod -R go+rx $(npm root -g)
# apt install awscli 

Configure Named Profile (First Time)

$ aws-azure-login --profile awsaccount1 --configure
Configuring profile ‘awsaccount1’
? Azure Tenant ID: domain1.com
? Azure App ID URI: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
? Default Username: myaccount@domain1.com
? Default Role ARN (if multiple): 
arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role
? Default Session Duration Hours (up to 12): 12
Profile saved.

Login with Named Profile

$ aws-azure-login --profile awsaccount1
Logging in with profile ‘awsaccount1’...
? Username: myaccount1@mydomain1.com
? Password: [hidden]
We texted your phone +X XXXXXXXXXX. Please enter the code to sign in.
? Verification Code: 213194
? Role: arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role
? Session Duration Hours (up to 12): 12
Assuming role arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin-Role

Update Credentials File For Different Accounts to Switch Roles To

$ cat .aws/credentials 
[awsaccount2]
region=us-east-1
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin
source_profile=awsaccount1

[awsaccount3]
region=us-east-1
role_arn=arn:aws:iam::xxxxxxxxxxxx:role/awsaccount1-Admin
source_profile=awsaccount1

[awsaccount1]
aws_access_key_id=XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
aws_session_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=="
aws_session_expiration=2019-04-18T10:22:06.000Z

Test Access

$ aws iam list-account-aliases --profile awsaccount2
{
    "AccountAliases": [
        "awsaccount2"
    ]
}
$ aws iam list-account-aliases --profile awsaccount3
{
    "AccountAliases": [
        "awsaccount3"
    ]
}
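
Any other CLI call can then target the switched roles the same way, for example:

$ aws sts get-caller-identity --profile awsaccount2
$ aws s3 ls --profile awsaccount3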

So next time, just log in with the named profile awsaccount1 and you have AWS CLI access to the other accounts. Note that you will need to make sure the ARNs, roles, etc. are 100% accurate; it gets a bit confusing.

Also, this is informational only; you carry your own risk of accessing the wrong account.
