Oct 29

Linux Broadcom Wireless Issue 5.x kernel

Broadcom Wireless Issue

A recent update caused wireless to stop working. It seems Pop!_OS (the 20.04 flavor, and likely other distros) does not have a new enough bcmwl-kernel-source for the 5.6 or 5.8 kernels.

NOTE: I tried installing a newer kernel to see if that would work. It did not, but it at least showed me the root cause.

root cause

# apt install linux-image-5.8.0-23-generic
...
make -j4 KERNELRELEASE=5.8.0-23-generic -C /lib/modules/5.8.0-23-generic/build M=/var/lib/dkms/bcmwl/6.30.223.271+bdcom/build....(bad exit status: 2)
ERROR (dkms apport): kernel package linux-headers-5.8.0-23-generic is not supported
Error! Bad return status for module build on kernel: 5.8.0-23-generic (x86_64)
Consult /var/lib/dkms/bcmwl/6.30.223.271+bdcom/build/make.log for more information.

hold a kernel that works just for safety

# dpkg -l | grep linux-image-
ii  linux-image-5.4.0-7642-generic                   5.4.0-7642.46~1598628707~20.04~040157c               amd64        Linux kernel image for version 5.4.0 on 64 bit x86 SMP
ii  linux-image-5.8.0-23-generic                     5.8.0-23.24~20.04.1                                  amd64        Signed kernel image generic
ii  linux-image-5.8.0-7625-generic                   5.8.0-7625.26~1603389471~20.04~f6b125f               amd64        Linux kernel image for version 5.8.0 on 64 bit x86 SMP
ii  linux-image-generic                              5.8.0.7625.26~1603389471~20.04~f6b125f               amd64        Generic Linux kernel image

# echo linux-image-5.4.0-7642-generic hold | dpkg --set-selections

# dpkg -l | grep linux-image-
hi  linux-image-5.4.0-7642-generic                   5.4.0-7642.46~1598628707~20.04~040157c               amd64        Linux kernel image for version 5.4.0 on 64 bit x86 SMP
ii  linux-image-5.8.0-23-generic                     5.8.0-23.24~20.04.1                                  amd64        Signed kernel image generic
ii  linux-image-5.8.0-7625-generic                   5.8.0-7625.26~1603389471~20.04~f6b125f               amd64        Linux kernel image for version 5.8.0 on 64 bit x86 SMP
ii  linux-image-generic                              5.8.0.7625.26~1603389471~20.04~f6b125f               amd64        Generic Linux kernel image

NOTE: set the boot loader to a longer timeout, show the boot menu, and save the last booted item. A sketch follows.
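
If your install boots via GRUB (Pop!_OS installs may use systemd-boot instead, where the equivalent settings live in /boot/efi/loader/loader.conf), the relevant /etc/default/grub entries would look something like this sketch:

# grep -E "GRUB_DEFAULT|GRUB_SAVEDEFAULT|GRUB_TIMEOUT" /etc/default/grub
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
GRUB_TIMEOUT=10
GRUB_TIMEOUT_STYLE=menu

# update-grub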

patches

Looking at the patches, the series stops at 0027 for Linux 5.1, so it appears we would need a 0028 patch (or similar) for kernels newer than 5.1.

# ls /usr/src/bcmwl-6.30.223.271+bdcom/patches/
0001-MODULE_LICENSE.patch                  0008-add-support-for-linux-3.9.0.patch                           0015-add-support-for-Linux-3.18.patch                       0022-add-support-for-Linux-4.8.patch
0002-Makefile.patch                        0009-add-support-for-linux-3.10.0.patch                          0016-repair-make-warnings.patch                             0023-add-support-for-Linux-4.11.patch
0003-Make-up-for-missing-init_MUTEX.patch  0010-change-the-network-interface-name-from-eth-to-wlan.patch    0017-add-support-for-Linux-4.0.patch                        0024-add-support-for-Linux-4.12.patch
0004-Add-support-for-Linux-3.2.patch       0011-do-not-define-__devinit-as-__init-in-linux-3.8-as-__.patch  0018-cfg80211_disconnected.patch                            0025-add-support-for-Linux-4.14.patch
0005-add-support-for-linux-3.4.0.patch     0012-add-support-for-Linux-3.15.patch                            0019-broadcom-sta-6.30.223.248-3.18-null-pointer-fix.patch  0026-add-support-for-Linux-4.15.patch
0006-add-support-for-linux-3.8.0.patch     0013-gcc.patch                                                   0020-add-support-for-linux-4.3.patch                        0027-add-support-for-linux-5.1.patch
0007-nl80211-move-scan-API-to-wdev.patch   0014-add-support-for-Linux-3.17.patch                            0021-add-support-for-Linux-4.7.patch

Install Ubuntu 20.10 (groovy) package

Looking at the file list in the newer Ubuntu 20.10 source, I see at least a 5.6 patch, although I need 5.8. Worth a try.

# wget http://mirrors.kernel.org/ubuntu/pool/restricted/b/bcmwl/bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb
...
2020-10-29 08:14:10 (656 KB/s) - ‘bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb’ saved [1545816/1545816]

# dpkg -i bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb 
(Reading database ... 283701 files and directories currently installed.)
Preparing to unpack bcmwl-kernel-source_6.30.223.271+bdcom-0ubuntu7_amd64.deb ...
Removing all DKMS Modules
Done.
Unpacking bcmwl-kernel-source (6.30.223.271+bdcom-0ubuntu7) over (6.30.223.271+bdcom-0ubuntu5) ...
Setting up bcmwl-kernel-source (6.30.223.271+bdcom-0ubuntu7) ...
Loading new bcmwl-6.30.223.271+bdcom DKMS files...
Building for 5.4.0-7642-generic 5.8.0-7625-generic
Building for architecture x86_64
Building initial module for 5.4.0-7642-generic
Done.

wl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.4.0-7642-generic/updates/

depmod...

DKMS: install completed.
Building initial module for 5.8.0-7625-generic
Done.

wl.ko:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/5.8.0-7625-generic/updates/

depmod........

DKMS: install completed.
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools (0.136ubuntu6.3) ...
update-initramfs: Generating /boot/initrd.img-5.8.0-7625-generic
cryptsetup: WARNING: Resume target cryptswap uses a key file

Looks like the wl.ko rebuild went OK.

# ls /lib/modules/5.8.0-7625-generic/updates/
dkms  wl.ko

# find /lib/modules/5.4.0-7642-generic/ -name wl.ko
/lib/modules/5.4.0-7642-generic/updates/wl.ko

# find /lib/modules/5.8.0- -name wl.ko
5.8.0-23-generic/   5.8.0-7625-generic/ 

# find /lib/modules/5.8.0-23-generic/ -name wl.ko

# find /lib/modules/5.8.0-7625-generic/ -name wl.ko
/lib/modules/5.8.0-7625-generic/updates/wl.ko

clean up the 5.8.0-23 kernel I tried

# apt purge linux-image-5.8.0-23-generic
...
rmdir: failed to remove '/lib/modules/5.8.0-23-generic': Directory not empty

NOTE: Pop!_OS may not be cleaning up /lib/modules because of the additional DKMS-built module.

# rm -rf /lib/modules/5.8.0-23-generic

# apt purge linux-headers-5.8.0-23-generic
# apt purge linux-modules-5.8.0-23-generic

# ls /boot
config-5.4.0-7642-generic  grub        initrd.img-5.4.0-7642-generic  initrd.img.old                 System.map-5.8.0-7625-generic  vmlinuz-5.4.0-7642-generic  vmlinuz.old
config-5.8.0-7625-generic  initrd.img  initrd.img-5.8.0-7625-generic  System.map-5.4.0-7642-generic  vmlinuz                        vmlinuz-5.8.0-7625-generic

check

Rebooted into the 5.8 kernel and wireless works.

# dkms status
bcmwl, 6.30.223.271+bdcom, 5.4.0-7642-generic, x86_64: installed
bcmwl, 6.30.223.271+bdcom, 5.8.0-7625-generic, x86_64: installed
nvidia-340, 340.108, 5.4.0-7642-generic, x86_64: installed
system76, 1.0.9~1597073326~20.04~5b01933, 5.4.0-7642-generic, x86_64: installed
system76, 1.0.9~1597073326~20.04~5b01933, 5.8.0-7625-generic, x86_64: installed


Oct 28

Wireguard VPN between Azure and OCI hosts

Wireguard test between Azure and Oracle OCI hosts

REF: https://www.wireguard.com/

Azure VM setup

Ubuntu 18.04.5 LTS

root@wireguard-az:~# dig +short myip.opendns.com @resolver1.opendns.com
*IPAddress*
root@wireguard-az:~# apt install wireguard

root@wireguard-az:~# wg version
wireguard-tools v1.0.20200513 - https://git.zx2c4.com/wireguard-tools/

root@wireguard-az:~# umask 077
root@wireguard-az:~# wg genkey > privatekey
root@wireguard-az:~# wg pubkey < privatekey > publickey
root@wireguard-az:~# ip link add wg0 type wireguard
root@wireguard-az:~# ip addr add 10.0.0.1/24 dev wg0
root@wireguard-az:~# wg set wg0 private-key ./privatekey
root@wireguard-az:~# ip link set wg0 up

root@wireguard-az:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0d:3a:5d:89:a7 brd ff:ff:ff:ff:ff:ff
    inet 10.1.1.4/24 brd 10.1.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20d:3aff:fe5d:89a7/64 scope link 
       valid_lft forever preferred_lft forever
3: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.0.0.1/24 scope global wg0
       valid_lft forever preferred_lft forever

root@wireguard-az:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 43971

root@wireguard-az:~# wg set wg0 peer *redacted* allowed-ips 10.0.0.2/32 endpoint *IPAddress*:40181

root@wireguard-az:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 43971

peer: *redacted*
  endpoint: *IPAddress*:40181
  allowed ips: 10.0.0.2/32
  transfer: 0 B received, 3.32 KiB sent

NOTE: iptables on this server does not need adjustment; it is already open.

root@wireguard-az:~# ping 10.0.0.2 -c 1
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=10 ttl=64 time=31.7 ms

NOTE: open an Azure Security Rule for the port we are running on:
310 wg 43971 Any IPAddress/32 Any

Oracle OCI

Ubuntu 20.04.1 LTS

root@usph-vmli-do01:~# dig +short myip.opendns.com @resolver1.opendns.com
*IPAddress*
root@usph-vmli-do01:~# apt install wireguard

root@usph-vmli-do01:~# wg version
wireguard-tools v1.0.20200513 - https://git.zx2c4.com/wireguard-tools/

NOTE: open the OCI Security List ingress rule for the port we are running on:
No IPAddress/32 TCP All 40181 TCP traffic for ports: 40181

root@usph-vmli-do01:~# umask 077
root@usph-vmli-do01:~# wg genkey > privatekey
root@usph-vmli-do01:~# wg pubkey < privatekey > publickey
root@usph-vmli-do01:~# ip link add wg0 type wireguard
root@usph-vmli-do01:~# ip addr add 10.0.0.2/24 dev wg0
root@usph-vmli-do01:~# wg set wg0 private-key ./privatekey
root@usph-vmli-do01:~# ip link set wg0 up

root@usph-vmli-do01:~# ip addr
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:00:17:02:8f:09 brd ff:ff:ff:ff:ff:ff
    inet 10.3.1.8/24 brd 10.3.1.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 fe80::200:17ff:fe02:8f09/64 scope link 
       valid_lft forever preferred_lft forever
...
20: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 10.0.0.2/24 scope global wg0
       valid_lft forever preferred_lft forever

root@usph-vmli-do01:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 40181

root@usph-vmli-do01:~# wg set wg0 peer *redacted* allowed-ips 10.0.0.1/32 endpoint *IPAddress*:43971

root@usph-vmli-do01:~# wg show
interface: wg0
  public key: *redacted*
  private key: (hidden)
  listening port: 40181

peer: *redacted*
  endpoint: *IPAddress*:43971
  allowed ips: 10.0.0.1/32

NOTE: iptables needs adjustment here; the port is not open.

root@usph-vmli-do01:~# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere             anywhere            
3    ACCEPT     all  --  anywhere             anywhere            
4    ACCEPT     udp  --  anywhere             anywhere             udp spt:ntp
5    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http-alt state NEW,ESTABLISHED
6    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https state NEW,ESTABLISHED
7    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http state NEW,ESTABLISHED
8    ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
9    REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
...

root@usph-vmli-do01:~# iptables -I INPUT 5 -p tcp -m tcp --dport 40181 -m state --state NEW,ESTABLISHED -j ACCEPT

root@usph-vmli-do01:~# iptables -L --line-numbers
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  anywhere             anywhere             state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere             anywhere            
3    ACCEPT     all  --  anywhere             anywhere            
4    ACCEPT     udp  --  anywhere             anywhere             udp spt:ntp
5    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:40181 state NEW,ESTABLISHED
6    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http-alt state NEW,ESTABLISHED
7    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https state NEW,ESTABLISHED
8    ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http state NEW,ESTABLISHED
9    ACCEPT     tcp  --  anywhere             anywhere             state NEW tcp dpt:ssh
10   REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited
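
NOTE: WireGuard traffic is UDP, so strictly the TCP rule above should not matter; the tunnel most likely works because this side initiates the handshake outbound and rule 1 (RELATED,ESTABLISHED) accepts the replies. A sketch of the UDP rule that would cover inbound-initiated traffic:

root@usph-vmli-do01:~# iptables -I INPUT 5 -p udp -m udp --dport 40181 -j ACCEPT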

root@usph-vmli-do01:~# ping 10.0.0.1 -c 1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=31.9 ms

ubuntu@usph-vmli-do01:~/.ssh$ ssh ubuntu@10.0.0.1
...
Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1031-azure x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Oct 28 17:35:39 UTC 2020
...

Permanent steps

For routing/NAT of hosts behind these peers, creating /etc/wireguard/ config files, starting via systemd, etc., read more here: https://linuxize.com/post/how-to-set-up-wireguard-vpn-on-ubuntu-20-04/
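
For reference, a minimal sketch of what /etc/wireguard/wg0.conf could look like on the Azure side, using the addresses and ports from this test (the keys and peer IP are placeholders):

root@wireguard-az:~# cat /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/24
ListenPort = 43971
PrivateKey = <contents of privatekey>

[Peer]
PublicKey = <OCI side publickey>
AllowedIPs = 10.0.0.2/32
Endpoint = <OCI public IP>:40181

root@wireguard-az:~# systemctl enable --now wg-quick@wg0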


Oct 28

Logger Socket Issue

I use logger to inject test messages into an rsyslog relay setup. After I split my configuration out of rsyslog.conf into a file under /etc/rsyslog.d, I started seeing this logger issue writing to /dev/log. It was not happening when everything was contained in /etc/rsyslog.conf.

error

[root@ip-10-200-2-41 ~]# logger foobar
logger: socket /dev/log: No such file or directory

fix

[root@ip-10-200-2-41 ~]# ls -l /dev/log
ls: cannot access /dev/log: No such file or directory

[root@ip-10-200-2-41 ~]# systemctl restart systemd-journald.socket

[root@ip-10-200-2-41 ~]# ls -l /dev/log
srw-rw-rw- 1 root root 0 Oct 28 08:07 /dev/log

[root@ip-10-200-2-41 ~]# logger foo
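
To sanity-check whether journald is holding the socket again (assuming a systemd host; commands only, output omitted):

[root@ip-10-200-2-41 ~]# systemctl status systemd-journald.socket
[root@ip-10-200-2-41 ~]# ss -xl | grep /dev/log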

NOTE: Not sure if the above fixes it permanently. It could have been a one-off race condition that is now resolved, but reading the reference material it is confusing what exactly the root cause is.


Jun 23

AWS VPN to Libreswan

AWS VPN to Azure VM with Libreswan

NOTE: As of this article the AWS Site-to-Site VPN gateway can generate an Openswan configuration but not a Libreswan one. This is a test using Libreswan.

Using an Azure virtual machine on the left and an AWS VPN gateway on the right, but of course you could also use the Azure VPN service.

For reference: OCI to Libreswan, from a while back.

Setup right side in AWS Console

  • Create Customer Gateway > azure-gw01 using Static Routing and specify the Azure VM IP Address
  • Create Virtual Private Gateway > az-vpg01 with the Amazon default ASN
  • Attach the VPG to the VPC
    For Site-to-Site VPN:
  • Create VPN Connection > iqonda-aws-azure; pick the VPG and CG, Routing Static, leave all defaults for now and no Static IP Prefixes for the moment
  • Record the Tunnel1 IP Address

Setup left side in Azure

Create a Centos VM in Azure

  • Virtual machines > Add
    | test01 | CentOS-based 8.1 | Standard_B1ls 1 vcpu, 0.5 GiB memory ($3.80/month) | AzureUser
    * I used a password for AzureUser and sorted out SSH keys after logging in.

  • I used | Standard HDD | myVnet | mySubnet(10.0.0.0/24)

  • record public IP

  • Network: add inbound rules for IPsec. I did an allow-all for the AWS endpoint IP address, but you will want to be more specific about the IPsec ports; see the sketch below.
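
IPsec here needs UDP 500 (IKE) and UDP 4500 (NAT-T; the tunnel config below uses encapsulation=yes). A sketch of a tighter inbound rule via the azure cli (the resource group, NSG, and rule names are made up):

$ az network nsg rule create -g myResourceGroup --nsg-name test01NSG -n allow-ipsec \
    --priority 310 --direction Inbound --access Allow --protocol Udp \
    --source-address-prefixes [AWS VPN Tunnel IP]/32 --destination-port-ranges 500 4500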

software

# cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core) 

# yum install libreswan

# echo "net.ipv4.ip_forward=1" > /usr/lib/sysctl.d/60-ipsec.conf
# sysctl -p /usr/lib/sysctl.d/60-ipsec.conf
net.ipv4.ip_forward = 1

# for s in /proc/sys/net/ipv4/conf/*; do echo 0 > $s/send_redirects; echo 0 > $s/accept_redirects; done

# echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
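
Note the echo loops above only change runtime values; to persist them across reboots you could extend the same sysctl.d file (a sketch):

# cat >> /usr/lib/sysctl.d/60-ipsec.conf <<'EOF'
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.rp_filter = 0
EOF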

# ipsec verify
Verifying installed system and configuration files

Version check and ipsec on-path                     [OK]
Libreswan 3.29 (netkey) on 4.18.0-147.8.1.el8_1.x86_64
Checking for IPsec support in kernel                [OK]
 NETKEY: Testing XFRM related proc values
         ICMP default/send_redirects                [OK]
         ICMP default/accept_redirects              [OK]
         XFRM larval drop                           [OK]
Pluto ipsec.conf syntax                             [OK]
Checking rp_filter                                  [OK]
Checking that pluto is running                      [OK]
 Pluto listening for IKE on udp 500                 [OK]
 Pluto listening for IKE/NAT-T on udp 4500          [OK]
 Pluto ipsec.secret syntax                          [OK]
Checking 'ip' command                               [OK]
Checking 'iptables' command                         [OK]
Checking 'prelink' command does not interfere with FIPS [OK]
Checking for obsolete ipsec.conf options            [OK]

NOTE: skipping firewalld rules; this instance did not have firewalld enabled and iptables -L shows it is open.

Download the Openswan config from the AWS console to see the PSKs.

I had issues bringing the tunnel up, but after a reboot it works.

post tunnel UP

  • add static route(s) to VPN
  • check route table for subnet
  • enable subnet association to route table
  • enable route propagation

Ping tests work both ways.

source

[root@test01 ipsec.d]# cat aws-az-vpn.conf 
conn Tunnel1
        authby=secret
        auto=start
        encapsulation=yes
        left=%defaultroute
        leftid=[Azure VM IP]
        right=[AWS VPN Tunnel 1 IP]
        type=tunnel
        phase2alg=aes128-sha1;modp1024
        ike=aes128-sha1;modp1024
        leftsubnet=10.0.1.0/16
        rightsubnet=172.31.0.0/16

conn Tunnel2
        authby=secret
        auto=add
        encapsulation=yes
        left=%defaultroute
        leftid=[Azure VM IP]
        right=[AWS VPN Tunnel 2 IP]
        type=tunnel
        phase2alg=aes128-sha1;modp1024
        ike=aes128-sha1;modp1024
        leftsubnet=10.0.1.0/16
        rightsubnet=172.31.0.0/16

[root@test01 ipsec.d]# cat aws-az-vpn.secrets 
52.188.118.56 18.214.218.99: PSK "Qgn...............mn"
52.188.118.56 52.3.140.122: PSK "cWu..................87"

Tunnel switch

Although Libreswan can't manage two tunnels to the same right side without something like Quagga, I at least wrote a very quick and dirty switchover script. It works, and very few pings are missed.

[root@test01 ~]# cat switch-aws-tunnel.sh 
#!/bin/bash
echo "Current Tunnel Status"
ipsec status | grep routed

active=$(ipsec status | grep erouted | cut -d \" -f2)
inactive=$(ipsec status | grep unrouted | cut -d \" -f2)

echo "Showing active and inactive in tunnels"
echo "active: $active"
echo "inactive: $inactive"

echo "down tunnels...."
ipsec auto --down $active
ipsec auto --down $inactive

echo "adding tunnels...."
ipsec auto --add Tunnel1
ipsec auto --add Tunnel2

echo "up the tunnel that was inactive before...."
ipsec auto --up $inactive

echo "Current Tunnel Status"
ipsec status | grep routed


May 27

Kubernetes Development with MicroK8s

Using Ubuntu's MicroK8s Kubernetes environment to test a Nginx container with a NodePort and also Ingress so we can access from another machine.

install

$ sudo snap install microk8s --classic
microk8s v1.18.2 from Canonical✓ installed

$ sudo usermod -a -G microk8s rrosso

$ microk8s.kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   2m29s

$ microk8s.kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
server1   Ready    <none>   3m    v1.18.2-41+b5cdb79a4060a3

$ microk8s.enable dns dashboard
...

$ watch microk8s.kubectl get all --all-namespaces

NOTE: alias the command

$ sudo snap alias microk8s.kubectl kubectl
Added:
  - microk8s.kubectl as kubectl

nginx first attempt

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           9s

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-jnlng   1/1     Running   0          15s

$ kubectl get all --all-namespaces

NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
default       pod/nginx-f89759699-jnlng                             1/1     Running   0          31s
...
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP                  94m
...
NAMESPACE     NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx                            1/1     1            1           31s
kube-system   deployment.apps/coredns                          1/1     1            1           90m
...

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-f89759699                             1         1         1       31s
kube-system   replicaset.apps/coredns-588fd544bf                          1         1         1       90m
...

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                                  READY   STATUS    RESTARTS   AGE
default       pod/nginx-f89759699-jnlng                             1/1     Running   0          2m38s
...
NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
...
NAMESPACE     NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
default       deployment.apps/nginx                            1/1     1            1           2m38s

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY   AGE
default       replicaset.apps/nginx-f89759699                             1         1         1       2m38s
...

$ wget 10.152.183.151
--2020-05-25 14:26:14--  http://10.152.183.151/
Connecting to 10.152.183.151:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2020-05-25 14:26:14 ERROR 404: Not Found.

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           3m40s

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-f89759699-jnlng   1/1     Running   0          3m46s

$ microk8s.kubectl expose deployment nginx --port 80 --target-port 80 --type ClusterIP --selector=run=nginx --name nginx
service/nginx exposed

$ microk8s.kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-jnlng   1/1     Running   0          9m29s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP   103m
service/nginx        ClusterIP   10.152.183.55   <none>        80/TCP    3m55s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           9m29s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-f89759699   1         1         1       9m29s

$ wget 10.152.183.55
--2020-05-25 14:33:02--  http://10.152.183.55/
Connecting to 10.152.183.55:80... failed: Connection refused.
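
In hindsight, the connection refused may also be a selector problem rather than anything LB-related: kubectl create deployment labels its pods app=nginx, so the --selector=run=nginx used above would match no pods and leave the service without endpoints. A quick way to check (my assumption, not verified at the time):

$ kubectl get endpoints nginx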

NOTE: Kubernetes itself does not provide a load balancer; load balancers are assumed to be an external component. MicroK8s does not ship one either, and even if it did there is only one node here to balance load over, so there is no way to assign an (LB-provided) external IP to a service. To expose a service on the host, use the NodePort service type.

nginx attempt 2

$ kubectl delete services nginx-service
service "nginx-service" deleted

$ kubectl delete deployment nginx
deployment.apps "nginx" deleted

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

$ kubectl expose deployment nginx --type NodePort --port=80 --name nginx-service
service/nginx-service exposed

$ kubectl get all
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-jr4gz   1/1     Running   0          23s

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes      ClusterIP   10.152.183.1     <none>        443/TCP        19h
service/nginx-service   NodePort    10.152.183.229   <none>        80:30856/TCP   10s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           23s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-f89759699   1         1         1       23s

$ wget 10.152.183.229
Connecting to 10.152.183.229:80... connected.
HTTP request sent, awaiting response... 200 OK
2020-05-26 08:05:22 (150 MB/s) - ‘index.html’ saved [612/612]

ingress

$ cat ingress-nginx.yaml 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: http-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80

$ kubectl apply -f ingress-nginx.yaml 
ingress.networking.k8s.io/http-ingress created
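
NOTE: for the Ingress to actually answer, the MicroK8s ingress addon has to be on (enabled earlier off-screen in my case, or do it now):

$ microk8s.enable ingress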

NOTE: https://192.168.1.112/ pulls up Nginx homepage

next

  • persistent storage test


May 20

Systemctl With Docker and ZFS

I previously wrote about Ubuntu 20.04 as an rpool (boot volume) on OCI (Oracle Cloud Infrastructure). If you are using a ZFS rpool you probably won't have the silly race condition I am writing about here.

So for this POC I was using Docker with an iSCSI-mounted disk for the Docker root folder. Unfortunately there are a couple of issues. The first is not related to Docker: at boot the zpool is simply not imported; Fix A is for that. The second issue is that Docker may not wait for the zpool to be ready before it starts, and will just lay down the docker folder you specified in daemon.json on the unmounted path. And of course ZFS will then refuse to mount, even if the pool was imported by Fix A.

Fix A

If you don't know this yet: create your zpool with the by-id device name, not, for example, /dev/sdb. If the zpool was already created you can fix it after the fact with an export and an import by id, updating the cache.
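
A sketch of that fixup, using this post's pool name:

# zpool export tank01
# zpool import -d /dev/disk/by-id tank01
# zpool set cachefile=/etc/zfs/zpool.cache tank01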

You can look at systemctl status zfs-import-cache.service to see what happened at boot with this zpool. There are many opinions on how to fix this; suffice to say this is what I used and it works reliably for me so far.

Create service

# cat /etc/systemd/system/tank01-pool.service
[Unit]
Description=Zpool start service
After=dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407.device

[Service]
Type=simple
ExecStart=/usr/sbin/zpool import tank01
ExecStartPost=/usr/bin/logger "started ZFS pool tank01"

[Install]
WantedBy=dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407.device

# systemctl daemon-reload
# systemctl enable tank01-pool.service

# systemctl status tank01-pool.service
● tank01-pool.service - Zpool start service
     Loaded: loaded (/etc/systemd/system/tank01-pool.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Tue 2020-05-19 02:18:05 UTC; 5min ago
   Main PID: 1018 (code=exited, status=0/SUCCESS)

May 19 02:18:01 usph-vmli-do01 systemd[1]: Starting Zpool start service...
May 19 02:18:01 usph-vmli-do01 root[1019]: started ZFS pool tank01
May 19 02:18:01 usph-vmli-do01 systemd[1]: Started Zpool start service.
May 19 02:18:05 usph-vmli-do01 systemd[1]: tank01-pool.service: Succeeded.

To find your exact device

# systemctl list-units --all --full | grep disk | grep tank01
      dev-disk-by\x2did-scsi\x2d36081a22b818449d287b13b59a47bc407\x2dpart1.device                                                                                       loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2did-wwn\x2d0x6081a22b818449d287b13b59a47bc407\x2dpart1.device                                                                                       loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dlabel-tank01.device                                                                                                                                loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpartlabel-zfs\x2d9eb05ecca4da97f6.device                                                                                                           loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpartuuid-d7d69ee0\x2d4e45\x2d3148\x2daa7a\x2d7cf375782813.device                                                                                   loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2dpath-ip\x2d169.254.2.2:3260\x2discsi\x2diqn.2015\x2d12.com.oracleiaas:16bca793\x2dc861\x2d49e8\x2da903\x2dd6b3809fe694\x2dlun\x2d1\x2dpart1.device loaded    active   plugged   BlockVolume tank01                                                           
      dev-disk-by\x2duuid-9554707573611221628.device                                                                                                                    loaded    active   plugged   BlockVolume tank01                                                           

# ls -l /dev/disk/by-id/ | grep sdb
    lrwxrwxrwx 1 root root  9 May 18 22:32 scsi-36081a22b818449d287b13b59a47bc407 -> ../../sdb
    lrwxrwxrwx 1 root root 10 May 18 22:32 scsi-36081a22b818449d287b13b59a47bc407-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 May 18 22:33 scsi-36081a22b818449d287b13b59a47bc407-part9 -> ../../sdb9
    lrwxrwxrwx 1 root root  9 May 18 22:32 wwn-0x6081a22b818449d287b13b59a47bc407 -> ../../sdb
    lrwxrwxrwx 1 root root 10 May 18 22:32 wwn-0x6081a22b818449d287b13b59a47bc407-part1 -> ../../sdb1
    lrwxrwxrwx 1 root root 10 May 18 22:33 wwn-0x6081a22b818449d287b13b59a47bc407-part9 -> ../../sdb9

Fix B

This was done earlier and is just shown for reference: this is how you enable the Docker ZFS storage driver.

# cat /etc/docker/daemon.json
{ 
  "storage-driver": "zfs",
  "data-root": "/tank01/docker"
}

For the timing issue you have many options in systemd, probably better than this. For me, just delaying a little until iSCSI and the zpool import/mount are done works OK.

# grep sleep /etc/systemd/system/multi-user.target.wants/docker.service 
ExecStartPre=/bin/sleep 60

# systemctl daemon-reload
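
A cleaner option than a fixed sleep, sketched here, is a drop-in that orders docker after the pool service from Fix A:

# mkdir -p /etc/systemd/system/docker.service.d
# cat > /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Unit]
Requires=tank01-pool.service
After=tank01-pool.service
EOF

# systemctl daemon-reload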


May 20

Test OCI (Oracle Cloud Infrastructure) Vault Secret

assume oci cli working

test an old cli script to list buckets

$ ./list_buckets.sh

{
      "data": [
        {
          "compartment-id": "*masked*",
          "created-by": "*masked*",
          "defined-tags": null,
          "etag": "*masked*",
          "freeform-tags": null,
          "name": "bucket-20200217-1256",
          "namespace": "*masked*",
          "time-created": "2020-02-17T18:56:07.773000+00:00"
        }
      ]
}

test old python script

$ python3 show_user.py 
{
      "capabilities": {
        "can_use_api_keys": true,
        "can_use_auth_tokens": true,
        "can_use_console_password": true,
        "can_use_customer_secret_keys": true,
        "can_use_o_auth2_client_credentials": true,
        "can_use_smtp_credentials": true
      },
      "compartment_id": "*masked*",
      "defined_tags": {},
      "description": "*masked*",
      "email": "*masked*",
      "external_identifier": null,
      "freeform_tags": {},
      "id": "*masked*",
      "identity_provider_id": null,
      "inactive_status": null,
      "is_mfa_activated": false,
      "lifecycle_state": "ACTIVE",
      "name": "*masked*",
      "time_created": "2020-02-11T18:24:37.809000+00:00"
}

create secret in console

  • Security > Vault > testvault
  • Create key rr
  • Create secret rr

test python code

$ python3 check-secret.py *masked*
    Reading vaule of secret_id *masked*.
    Decoded content of the secret is: blah.

test cli

$ oci vault secret list --compartment-id *masked*

     "data": [
       {
         "compartment-id": "*masked*",
         "defined-tags": {
           "Oracle-Tags": {
             "CreatedBy": "*masked*",
             "CreatedOn": "2020-05-19T19:13:52.028Z"
           }
         },
         "description": "test",
         "freeform-tags": {},
         "id": "*masked*",
         "key-id": "*masked*",
         "lifecycle-details": null,
         "lifecycle-state": "ACTIVE",
         "secret-name": "rr",
         "time-created": "2020-05-19T19:13:51.804000+00:00",
         "time-of-current-version-expiry": null,
         "time-of-deletion": null,
         "vault-id": "*masked*"
       }
     ]
    }

$ oci vault secret get --secret-id *masked*
    {
      "data": {
        "compartment-id": "*masked*",
        "current-version-number": 1,
        "defined-tags": {
          "Oracle-Tags": {
            "CreatedBy": "*masked*",
            "CreatedOn": "2020-05-19T19:13:52.028Z"
          }
        },
        "description": "test",
        "freeform-tags": {},
        "id": "*masked*",
        "key-id": "*masked*",
        "lifecycle-details": null,
        "lifecycle-state": "ACTIVE",
        "metadata": null,
        "secret-name": "rr",
        "secret-rules": [],
        "time-created": "2020-05-19T19:13:51.804000+00:00",
        "time-of-current-version-expiry": null,
        "time-of-deletion": null,
        "vault-id": "*masked*"
      },
      "etag": "*masked*"
    }

$ oci secrets secret-bundle get --secret-id *masked*
    {
      "data": {
        "metadata": null,
        "secret-bundle-content": {
          "content": "YmxhaA==",
          "content-type": "BASE64"
        },
        "secret-id": "*masked*",
        "stages": [
          "CURRENT",
          "LATEST"
        ],
        "time-created": "2020-05-19T19:13:51.804000+00:00",
        "time-of-deletion": null,
        "time-of-expiry": null,
        "version-name": null,
        "version-number": 1
      },
      "etag": "*masked*--gzip"
    }

$ echo YmxhaA== | base64 --decode
    blah

one liner

$ oci secrets secret-bundle get --secret-id ocid1.vaultsecret.oc1.phx.*masked* --query "data .{s:\"secret-bundle-content\"}" | jq -r '.s.content' | base64 --decode
blah
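
Equivalently, skipping the server-side JMESPath query and letting jq do all the work:

$ oci secrets secret-bundle get --secret-id ocid1.vaultsecret.oc1.phx.*masked* | jq -r '.data."secret-bundle-content".content' | base64 --decode
blah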


May 20

Powerline In Visual Studio Code

There are some examples here:

I chose to follow a comment suggestion and use Meslo.

download the Meslo font

https://github.com/ryanoasis/nerd-fonts/releases/tag/v2.1.0

rrosso  ~  Downloads  sudo -i
root@pop-os:~# cd /usr/share/fonts/truetype
root@pop-os:/usr/share/fonts/truetype# mkdir Meslo
root@pop-os:/usr/share/fonts/truetype# cd Meslo/

root@pop-os:/usr/share/fonts/truetype/Meslo# unzip /home/rrosso/Downloads/Meslo.zip 
Archive:  /home/rrosso/Downloads/Meslo.zip
  inflating: Meslo LG M Bold Nerd Font Complete Mono.ttf  

root@pop-os:/usr/share/fonts/truetype/Meslo#  fc-cache -vf /usr/share/fonts/

update the vscode settings

rrosso  ~  .config  Code  User  pwd
/home/rrosso/.config/Code/User

rrosso  ~  .config  Code  User  cat settings.json 
{
    "editor.fontSize": 12,
    "editor.fontFamily": "MesloLGM Nerd Font",
    "terminal.integrated.fontSize": 11,
    "terminal.integrated.fontFamily": "MesloLGM Nerd Font",
    "editor.minimap.enabled": false
}


May 20

Traefik Wildcard Certificate using Azure DNS

dns challenge letsencrypt Azure DNS

Using Traefik as an edge router (reverse proxy) in front of HTTP sites, and enabling a Let's Encrypt ACME v2 wildcard certificate on the Traefik docker container. We verify ownership via DNS, specifically the dns-01 method, because DNS verification doesn't interrupt your web server and works even if your server is unreachable from the outside world. Our DNS provider is Azure DNS.

Azure Configuration

Pre-req

  • azure cli setup
  • Wildcard DNS entry *.my.domain

Get subscription id

$ az account list | jq '.[] | .id'
"masked..."

Create role

$ az role definition create --role-definition role.json 
  {
    "assignableScopes": [
      "/subscriptions/masked..."
    ],
    "description": "Can manage DNS TXT records only.",
    "id": "/subscriptions/masked.../providers/Microsoft.Authorization/roleDefinitions/masked...",
    "name": "masked...",
    "permissions": [
      {
        "actions": [
          "Microsoft.Network/dnsZones/TXT/*",
          "Microsoft.Network/dnsZones/read",
          "Microsoft.Authorization/*/read",
          "Microsoft.Insights/alertRules/*",
          "Microsoft.ResourceHealth/availabilityStatuses/read",
          "Microsoft.Resources/deployments/read",
          "Microsoft.Resources/subscriptions/resourceGroups/read"
        ],
        "dataActions": [],
        "notActions": [],
        "notDataActions": []
      }
    ],
    "roleName": "DNS TXT Contributor",
    "roleType": "CustomRole",
    "type": "Microsoft.Authorization/roleDefinitions"
  }

NOTE: If you screwed up and need to delete do like like this:
az role definition delete --name "DNS TXT Contributor"

Create json file with correct subscription and create role definition

$ cat role.json
  {
    "Name":"DNS TXT Contributor",
    "Id":"",
    "IsCustom":true,
    "Description":"Can manage DNS TXT records only.",
    "Actions":[
      "Microsoft.Network/dnsZones/TXT/*",
      "Microsoft.Network/dnsZones/read",
      "Microsoft.Authorization/*/read",
      "Microsoft.Insights/alertRules/*",
      "Microsoft.ResourceHealth/availabilityStatuses/read",
      "Microsoft.Resources/deployments/read",
      "Microsoft.Resources/subscriptions/resourceGroups/read"
    ],
    "NotActions":[

    ],
    "AssignableScopes":[
      "/subscriptions/masked..."
    ]
  }

  $ az role definition create --role-definition role.json 
  {
    "assignableScopes": [
      "/subscriptions/masked..."
    ],
    "description": "Can manage DNS TXT records only.",
    "id": "/subscriptions/masked.../providers/Microsoft.Authorization/roleDefinitions/masked...",
    "name": "masked...",
    "permissions": [
      {
        "actions": [
          "Microsoft.Network/dnsZones/TXT/*",
          "Microsoft.Network/dnsZones/read",
          "Microsoft.Authorization/*/read",
          "Microsoft.Insights/alertRules/*",
          "Microsoft.ResourceHealth/availabilityStatuses/read",
          "Microsoft.Resources/deployments/read",
          "Microsoft.Resources/subscriptions/resourceGroups/read"
        ],
        "dataActions": [],
        "notActions": [],
        "notDataActions": []
      }
    ],
    "roleName": "DNS TXT Contributor",
    "roleType": "CustomRole",
    "type": "Microsoft.Authorization/roleDefinitions"
  }

Checking DNS and resource group

$ az network dns zone list
  [
    {
      "etag": "masked...",
      "id": "/subscriptions/masked.../resourceGroups/sites/providers/Microsoft.Network/dnszones/iqonda.net",
      "location": "global",
      "maxNumberOfRecordSets": 10000,
      "name": "masked...",
      "nameServers": [
        "ns1-09.azure-dns.com.",
        "ns2-09.azure-dns.net.",
        "ns3-09.azure-dns.org.",
        "ns4-09.azure-dns.info."
      ],
      "numberOfRecordSets": 14,
      "registrationVirtualNetworks": null,
      "resolutionVirtualNetworks": null,
      "resourceGroup": "masked...",
      "tags": {},
      "type": "Microsoft.Network/dnszones",
      "zoneType": "Public"
    }
  ]

$ az network dns zone list --output table
  ZoneName    ResourceGroup    RecordSets    MaxRecordSets
  ----------  ---------------  ------------  ---------------
  masked...  masked...            14            10000

$ az group list --output table
  Name                                Location        Status
  ----------------------------------  --------------  ---------
  cloud-shell-storage-southcentralus  southcentralus  Succeeded
  masked...                    eastus          Succeeded
  masked...                    eastus          Succeeded
  masked...                    eastus          Succeeded

role assign

  $ az ad sp create-for-rbac --name "Acme2DnsValidator" --role "DNS TXT Contributor" --scopes "/subscriptions/masked.../resourceGroups/sites/providers/Microsoft.Network/dnszones/masked..."
  Changing "Acme2DnsValidator" to a valid URI of "http://Acme2DnsValidator", which is the required format used for service principal names
  Found an existing application instance of "masked...". We will patch it
  Creating a role assignment under the scope of "/subscriptions/masked.../resourceGroups/sites/providers/Microsoft.Network/dnszones/masked..."
  {
    "appId": "masked...",
    "displayName": "Acme2DnsValidator",
    "name": "http://Acme2DnsValidator",
    "password": "masked...",
    "tenant": "masked..."
  }

  $ az ad sp create-for-rbac --name "Acme2DnsValidator" --role "DNS TXT Contributor" --scopes "/subscriptions/masked.../resourceGroups/masked..."
  Changing "Acme2DnsValidator" to a valid URI of "http://Acme2DnsValidator", which is the required format used for service principal names
  Found an existing application instance of "masked...". We will patch it
  Creating a role assignment under the scope of "/subscriptions/masked.../resourceGroups/masked..."
  {
    "appId": "masked...",
    "displayName": "Acme2DnsValidator",
    "name": "http://Acme2DnsValidator",
    "password": "masked...",
    "tenant": "masked..."
  }

  $ az role assignment list --all | jq -r '.[] | [.principalName,.roleDefinitionName,.scope]'
  [
    "http://Acme2DnsValidator",
    "DNS TXT Contributor",
    "/subscriptions/masked.../resourceGroups/masked..."
  ]
  [
    "masked...",
    "Owner",
    "/subscriptions/masked.../resourcegroups/masked.../providers/Microsoft.Storage/storageAccounts/masked..."
  ]
  [
    "http://Acme2DnsValidator",
    "DNS TXT Contributor",
    "/subscriptions/masked.../resourceGroups/masked.../providers/Microsoft.Network/dnszones/masked..."
  ]

$ az ad sp list | jq -r '.[] | [.displayName,.appId]'
  The result is not complete. You can still use '--all' to get all of them with long latency expected, or provide a filter through command arguments
...

  [
    "AzureDnsFrontendApp",
    "masked..."
  ]

  [
    "Azure DNS",
    "masked..."
  ]

Traefik Configuration

Azure Credentials in environment file

$ cat .env
    AZURE_CLIENT_ID=masked...
    AZURE_CLIENT_SECRET=masked...
    AZURE_SUBSCRIPTION_ID=masked...
    AZURE_TENANT_ID=masked...
    AZURE_RESOURCE_GROUP=masked...
    #AZURE_METADATA_ENDPOINT=

Traefik Files

    $ cat traefik.yml 
    ## STATIC CONFIGURATION
    log:
      level: INFO

    api:
      insecure: true
      dashboard: true

    entryPoints:
      web:
        address: ":80"
      websecure:
        address: ":443"

    providers:
      docker:
        endpoint: "unix:///var/run/docker.sock"
        exposedByDefault: false

    certificatesResolvers:
      lets-encr:
        acme:
          #caServer: https://acme-staging-v02.api.letsencrypt.org/directory
          storage: acme.json
          email: admin@my.domain
          dnsChallenge:
            provider: azure
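
One gotcha worth noting: the compose file below mounts ./acme.json, and Traefik requires that file to already exist with 0600 permissions before it will store certificates in it (also delete it when you switch from the staging caServer to production):

$ touch acme.json
$ chmod 600 acme.json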

    $ cat docker-compose.yml 
    version: "3.3"

    services:

      traefik:
        image: "traefik:v2.2"
        container_name: "traefik"
        restart: always
        env_file:
          - .env
        command:
          #- "--log.level=DEBUG"
          - "--api.insecure=true"
          - "--providers.docker=true"
          - "--providers.docker.exposedbydefault=false"
        labels:
          ## DNS CHALLENGE
          - "traefik.http.routers.traefik.tls.certresolver=lets-encr"
          - "traefik.http.routers.traefik.tls.domains[0].main=*.iqonda.net"
          - "traefik.http.routers.traefik.tls.domains[0].sans=iqonda.net"
          ## HTTP REDIRECT
          #- "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
          #- "traefik.http.routers.redirect-https.rule=HostRegexp(`{host:.+}`)"
          #- "traefik.http.routers.redirect-https.entrypoints=web"
          #- "traefik.http.routers.redirect-https.middlewares=redirect-to-https"
        ports:
          - "80:80"
          - "8080:8080" #Web UI
          - "443:443"
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock:ro"
          - "./traefik.yml:/traefik.yml:ro"
          - "./acme.json:/acme.json"
        networks:
          - external_network

      whoami:
        image: "containous/whoami"
        container_name: "whoami"
        restart: always
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.whoami.entrypoints=web"
          - "traefik.http.routers.whoami.rule=Host(`whoami.iqonda.net`)"
          #- "traefik.http.routers.whoami.tls.certresolver=lets-encr"
          #- "traefik.http.routers.whoami.tls=true"
        networks:
          - external_network

      db:
        image: mariadb
        container_name: "db"
        volumes:
          - db_data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: somewordpress
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: wordpress
        networks:
          - internal_network

      wpsites:
        depends_on:
          - db
        ports:
          - 8002:80
        image: wordpress:latest
        container_name: "wpsites"
        volumes:
          - /d01/html/wpsites.my.domain:/var/www/html
        restart: always
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wpsites
          WORDPRESS_DB_NAME: wpsites
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.wpsites.rule=Host(`wpsites.my.domain`)"
          - "traefik.http.routers.wpsites.entrypoints=websecure"
          - "traefik.http.routers.wpsites.tls.certresolver=lets-encr"
          - "traefik.http.routers.wpsites.service=wpsites-svc"
          - "traefik.http.services.wpsites-svc.loadbalancer.server.port=80"
        networks:
          - external_network
          - internal_network

    volumes:
      db_data: {}

    networks:
      external_network:
      internal_network:
        internal: true

WARNING: If you are not using the Let's Encrypt staging endpoint, strongly reconsider and use it while working on this. You can get blocked for a week.

Start Containers

$ docker-compose up -d --build
whoami is up-to-date
Recreating traefik ... 
db is up-to-date
...
Recreating traefik ... done

Showing some log issues you may see

$ docker logs traefik -f
    ...
    time="2020-05-17T21:17:40Z" level=info msg="Testing certificate renew..." providerName=lets-encr.acme
    ...
    time="2020-05-17T21:17:51Z" level=error msg="Unable to obtain ACME certificate for domains
    ..."AADSTS7000215: Invalid client secret is provided.

$ docker logs traefik -f
    ...
    \"keyType\":\"RSA4096\",\"dnsChallenge\":{\"provider\":\"azure\"},\"ResolverName\":\"lets-encr\",\"store\":{},\"ChallengeStore\":{}}"
     acme: error presenting token: azure: dns.ZonesClient#Get: Invalid input:     autorest/validation: validation failed: parameter=resourceGroupName constraint=Pattern value=\"\\\"sites\\\"\" details: value 

$ docker logs traefik -f
    ...
    time="2020-05-17T22:23:38Z" level=info msg="Starting provider *acme.Provider {\"email\":\"admin@iqonda.com\",\"caServer\":\"https://acme-staging-v02.api.letsencrypt.org/   directory\",\"storage\":\"acme.json\",\"keyType\":\"RSA4096\",\"dnsChallenge\":{\"provider\":\"azure\"},\"ResolverName\":\"lets-encr\",\"store\":{},\"ChallengeStore\":{}}"
    time="2020-05-17T22:23:38Z" level=info msg="Testing certificate renew..." providerName=lets-encr.acme
    time="2020-05-17T22:23:38Z" level=info msg="Starting provider *traefik.Provider {}"
    time="2020-05-17T22:23:38Z" level=info msg="Starting provider *docker.Provider {\"watch\":true,\"endpoint\":\"unix:///var/run/docker.sock\",\"defaultRule\":\"Host({{  normalize .Name }})\",\"swarmModeRefreshSeconds\":15000000000}"
    time="2020-05-17T22:23:48Z" level=info msg=Register... providerName=lets-encr.acme

In a browser, looking at the cert, this means it is working but still on the staging URL: CN=Fake LE Intermediate X1

NOTE: In the Azure DNS activity log I can see the TXT record was created and deleted. The record will be something like this: _acme-challenge.my.domain

Browser still not showing the lock? Test with https://www.whynopadlock.com; in my case it was just a hardcoded http image on the page making it insecure.


May 08

Using tar and AWS S3


Example of tar straight to object storage and untar back.

$ tar -cP /ARCHIVE/temp/ | gzip | aws s3 cp - s3://sites2-ziparchives.ls-al.com/temp.tgz

$ aws s3 ls s3://sites2-ziparchives.ls-al.com | grep temp.tgz
2020-05-07 15:40:28    7344192 temp.tgz

$ aws s3 cp s3://sites2-ziparchives.ls-al.com/temp.tgz - | tar zxvp
tar: Removing leading `/' from member names
/ARCHIVE/temp/
...

$ ls ARCHIVE/temp/
'March 30-April 3 Kinder Lesson Plans.pdf'   RCAT

Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.

When using "aws s3 cp" command you need to specify the --expected-size flag.
