FreeBSD 13 on the BeagleBone Black

For load balancing reasons a second device is needed in my setup, and the BeagleBone Black makes a good candidate because of its built-in eMMC flash memory.

Serial Console

The first thing we want to do is connect a serial console. This makes life easier, as the complete boot process can be seen (good for debugging) and no network or attached peripherals are needed to get to a console.
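
Depending on the USB-to-serial adapter, the console can then be opened from a workstation with a terminal program; a minimal sketch (the device nodes are assumptions, the BeagleBone's console runs at 115200 baud, 8N1):

# FreeBSD workstation (check dmesg for the actual device node after plugging in the adapter)
cu -l /dev/cuaU0 -s 115200
# Linux workstation (device node may differ)
screen /dev/ttyUSB0 115200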

BeagleBone Black

Booting from SD Card

We download the FreeBSD 13 release for the arm (armv7) architecture from the FTP site and put it onto an SD card (use the dd tool or balenaEtcher). Getting the BeagleBone to boot from the SD card is done by holding down the boot button while connecting power (see picture above).
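
For reference, a rough sketch of fetching and writing the image from a FreeBSD workstation; the download URL and the SD card device node (da0) are assumptions, so double-check the target device before running dd:

fetch https://download.freebsd.org/ftp/releases/arm/armv7/ISO-IMAGES/13.0/FreeBSD-13.0-RELEASE-arm-armv7-GENERICSD.img.xz
xz -d FreeBSD-13.0-RELEASE-arm-armv7-GENERICSD.img.xz
# this overwrites everything on da0 - make sure it really is the SD card
dd if=FreeBSD-13.0-RELEASE-arm-armv7-GENERICSD.img of=/dev/da0 bs=1M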

Copying the image to eMMC Flash

Once we have successfully booted from the SD card, we can log in with the default username and password (root/root) and take a look at the block device partitions.

Note: the boot device dictates the order for block devices!

Because we booted from SD Card, the device mmcsd0 represents the SD card and mmcsd1 is the eMMC.

root@generic:~ # gpart show
=>      63  31586241  mmcsd0  MBR  (15G)
        63      2016          - free -  (1.0M)
      2079    102312       1  fat32lba  [active]  (50M)
    104391  31475769       2  freebsd  (15G)
  31580160      6144          - free -  (3.0M)

=>     63  7471041  mmcsd1  MBR  (3.6G)
       63  7471041          - free -  (3.6G)

Now we can use the dd tool to copy over the image (in my case I copied the image into the home directory of the freebsd user beforehand).

root@generic:~ # dd if=/home/freebsd/FreeBSD-13.0-RELEASE-arm-armv7-GENERICSD.img of=/dev/mmcsd1 bs=1M
3072+0 records in
3072+0 records out
3221225472 bytes transferred in 332.301124 secs (9693694 bytes/sec)

After the copy process is complete, we will see a changed layout of the eMMC flash memory.

root@generic:~ # gpart show mmcsd1
=>     63  7471041  mmcsd1  MBR  (3.6G)
       63     2016          - free -  (1.0M)
     2079   102312       1  fat32lba  [active]  (50M)
   104391  6187041       2  freebsd  (3.0G)
  6291432  1179672          - free -  (576M)

Booting from eMMC Flash

All that is left to do is to remove the SD card and re-connect power to the BeagleBone. The first boot process will grow the root partition to fill the device. After logging into the console, the final flash layout will look as follows:

root@generic:~ # gpart show mmcsd0
=>     63  7471041  mmcsd0  MBR  (3.6G)
       63     2016          - free -  (1.0M)
     2079   102312       1  fat32lba  [active]  (50M)
   104391  7366713       2  freebsd  (3.5G)

Again, please note that the boot device dictates the numbering of block devices. Now our eMMC Flash is represented by the mmcsd0 device.
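
By the way, the automatic grow is handled by FreeBSD's growfs rc.d script, which the GENERICSD image enables via growfs_enable in /etc/rc.conf. Should it not run for some reason, it can be triggered by hand; a sketch, assuming the stock image:

# grow partition and filesystem manually (normally done automatically on first boot)
service growfs onestart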

Employing SSH host certificates

Most of us have been in a situation at least once where we need to follow the TOFU (trust on first use) pattern. A popular example would be a new host that we try to log in to for the first time via SSH. We might see something like this:

andreas@AnDeSu16 ➜  ~ ssh andreas@192.168.1.1
The authenticity of host '192.168.1.1 (192.168.1.1)' can't be established.
ECDSA key fingerprint is SHA256:fTGKcndGlhvoHoNI8HGu6ErQpKer495rMRJrrQok+ok.
Are you sure you want to continue connecting (yes/no/[fingerprint])? 

Just to be clear: this type of warning describes exactly the same situation as when our web browser asks us whether we want to continue because it cannot verify the authenticity of the website’s certificate.

Solution Approach

The ‘fix’ is actually much easier than one might think. Similar to TLS certificates, we can use SSH certificates. The idea is simple: we create ourselves a certificate authority for SSH and sign public host keys with it. The result is a host certificate that we send back to the host. From then on the host presents that certificate to clients. All that clients need to do in order to verify the authenticity of such a certificate is to trust our global (internal) SSH Certificate Authority.

While TLS certificates use a standardized format (X.509), SSH certificates follow OpenSSH’s own, widely used format. However, the principles behind the certification process are the same. Let’s briefly summarize the process in plain steps…

  • Certificate Authority has a private key and a public key
  • Client trusts the public key of Certificate Authority
  • Host has a key-pair consisting of one private and one public key
  • Host keeps its private key secret but submits its public key to Certificate Authority
  • Certificate Authority uses its private key to sign the host’s public key
  • Certificate Authority’s signature has yielded a host certificate
  • Host imports the host certificate
  • Client is presented with host’s certificate upon login
  • Because Client trusts the Certificate Authority’s public key, it implicitly trusts Host’s certificate

Certificate Authority Layout

Let’s have a look at the directory structure we have in place for our SSH Certificate Authority. Our master keypair is stored in the private and public folder. The certs folder is meant for the certificates we issue.

/
└── ca
	└── SSH
		└── host
			├── certs
			├── private
			│   ├── tinkivity_host_ecdsa_key
			│   ├── tinkivity_host_ed25519_key
			│   └── tinkivity_host_rsa_key
			└── public
				├── tinkivity_host_ecdsa_key.pub
				├── tinkivity_host_ed25519_key.pub
				└── tinkivity_host_rsa_key.pub
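
For completeness: the CA key pairs themselves can be created with plain ssh-keygen calls. A sketch, assuming the file names from the tree above (protect the CA keys with a strong passphrase when prompted):

cd /ca/SSH/host/private
ssh-keygen -t ecdsa -b 521 -C "tinkivity ssh ca host key" -f tinkivity_host_ecdsa_key
ssh-keygen -t ed25519 -C "tinkivity ssh ca host key" -f tinkivity_host_ed25519_key
ssh-keygen -t rsa -b 4096 -C "tinkivity ssh ca host key" -f tinkivity_host_rsa_key
# ssh-keygen writes the .pub next to each private key; keep those in the public folder
mv ./*.pub /ca/SSH/host/public/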

Import Host’s public key(s)

Our practical example is creating a certificate for a new host (newhost.tinkivity.home), which we want to be valid for 5 years. The first thing we have to do is to create a directory under the certs folder and to change into it.

andreas@rootca ➜  SSH mkdir /ca/SSH/host/certs/newhost.tinkivity.home
andreas@rootca ➜  SSH cd /ca/SSH/host/certs/newhost.tinkivity.home

In order to issue a certificate we need the public keys from the new host. Technically one public key would be enough, but there are 3 common (and considered safe) algorithms out there (ECDSA, RSA and ED25519), each of which has its own key-pair and can have a corresponding certificate.

NEVER EVER should private keys leave the machine or host at which they have been generated! Please do not export private keys from your host ever!
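
Getting the public keys over to the CA can be as simple as copying the .pub files with scp; a sketch, assuming OpenSSH's default key locations on the new host:

# copy only the *public* host keys from the new host into the current certs directory
scp root@newhost.tinkivity.home:"/etc/ssh/ssh_host_*_key.pub" .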

Let’s have a quick look into the directory structure after import of the public host keys.

andreas@rootca ➜  newhost.tinkivity.home ls -lah
total 23
drwxr-xr-x  2 root  wheel     5B Jan  4 18:54 .
drwxr-xr-x  7 root  wheel     7B Jan  4 18:42 ..
-r--r-----  1 root  wheel   188B Jan  4 18:54 ssh_host_ecdsa_key.pub
-r--r-----  1 root  wheel   108B Jan  4 18:54 ssh_host_ed25519_key.pub
-r--r-----  1 root  wheel   408B Jan  4 18:54 ssh_host_rsa_key.pub

Issue Host certificate(s)

For each of the 3 keys we can issue a certificate. There are some parameters we will supply to the ssh-keygen command. Let’s go through those parameters one by one:

  • -s: says we want to sign a certificate, and the next argument must be the private key of the certificate authority
  • -h: indicates that we are about to sign a host certificate (takes no value)
  • -I: says we want to give the certificate an ID, and the next argument must be the string representing the certificate ID
  • -n: says we want to set principal name(s), and the next argument must be a comma-separated list of principal names (no white spaces please)
  • -V: says we want to set the validity interval, and the next argument must be a validity interval (please read the ssh-keygen man page for format instructions)
  • the final positional argument is our host’s public key that is to be signed

Having understood the meaning of those parameters, we can get to work and issue our first certificate. We go alphabetically and start with the ECDSA certificate.

andreas@rootca ➜  newhost.tinkivity.home ssh-keygen -s ../../private/tinkivity_host_ecdsa_key -h -I newhost_v01 -n newhost,newhost.tinkivity.home -V 'always:20260131' ssh_host_ecdsa_key.pub
Enter passphrase: 
Signed host key ssh_host_ecdsa_key-cert.pub: id "newhost_v01" serial 0 for newhost,newhost.tinkivity.home valid before 2026-01-31T00:00:00

We repeat the same procedure for ED25519 and RSA.

andreas@rootca ➜  newhost.tinkivity.home ssh-keygen -s ../../private/tinkivity_host_ed25519_key -h -I newhost_v01 -n newhost,newhost.tinkivity.home -V 'always:20260131' ssh_host_ed25519_key.pub
andreas@rootca ➜  newhost.tinkivity.home ssh-keygen -s ../../private/tinkivity_host_rsa_key -h -I newhost_v01 -n newhost,newhost.tinkivity.home -V 'always:20260131' ssh_host_rsa_key.pub

When done, we should find 3 certificates in our folder. The certificate names are generated automatically: xxx_key.pub is expanded to xxx_key-cert.pub.

andreas@rootca ➜  newhost.tinkivity.home ls -lah *-cert.pub
-r--r-----  1 root  wheel   873B Jan  4 18:21 ssh_host_ecdsa_key-cert.pub
-r--r-----  1 root  wheel   521B Jan  4 18:22 ssh_host_ed25519_key-cert.pub
-r--r-----  1 root  wheel   2.0K Jan  4 18:22 ssh_host_rsa_key-cert.pub

Verifying host certificates

We can use the ssh-keygen command to check the content of the certificate.

andreas@rootca ➜  newhost.tinkivity.home ssh-keygen -Lf ssh_host_ecdsa_key-cert.pub 
ssh_host_ecdsa_key-cert.pub:
        Type: ecdsa-sha2-nistp521-cert-v01@openssh.com host certificate
        Public key: ECDSA-CERT SHA256:b17eQwm1UGqUIISx1rulZt7yKypRa8zBuuBBsf7EtwU
        Signing CA: ECDSA SHA256:RElqZXAHlXvULMiwDK1OaYgQtTyxY9iLlhbctQgKRic
        Key ID: "newhost_v01"
        Serial: 0
        Valid: before 2026-01-31T00:00:00
        Principals: 
                newhost
                newhost.tinkivity.home
        Critical Options: (none)
        Extensions: (none)

Installing host certificates

On our host we need to edit /etc/ssh/sshd_config and insert the following lines.

HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
HostCertificate /etc/ssh/ssh_host_ecdsa_key-cert.pub
HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

Now the SSH Daemon will present the listed SSH certificates to clients. Don’t forget to restart the SSH Daemon after the configuration change.
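
Getting the certificates onto the host is again a simple copy job; a sketch, assuming the host runs FreeBSD and the certificates live next to the host keys in /etc/ssh:

# from the CA's certs directory
scp ssh_host_*_key-cert.pub root@newhost.tinkivity.home:/etc/ssh/
# then, on the host, after the sshd_config change shown above
service sshd restart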

Client Trust

As mentioned before, clients no longer have to trust every host key individually; they only have to trust our Certificate Authority (once). For that to happen we add the following lines to our ~/.ssh/known_hosts file.

andreas@AnDeSu16 ➜  ~ head -n 3 ~/.ssh/known_hosts 
@cert-authority *.tinkivity.home ecdsa-sha2-nistp521 AAAAE2VjZHNhLXNoYTItbmlzdHA1MjEAAAAIbmlzdHA1MjEAAACFBABmiuAXjy7orTZPxVrzSRe73/Cbd32skx7ESOr/pDx7Sf56uimPrjpj3/iwkx7qdjSOLVNgwyfYftlJl+GOSz/teQFleLuvNOq134YJEYX7dFh5osZTGtzndRQbFOGZ/R4zGgY1I499PdQxzN0r3pWBgR1Ch9fj6PFmu8QaeqjOWXe9Yw== tinkivity ssh ca host key
@cert-authority *.tinkivity.home ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBTLDMY7h06Hcw2O7dyh9jCN+V+g17ZXSE14aSDR25nR tinkivity ssh ca host key
@cert-authority *.tinkivity.home ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCwc0SM2eEDSPE5tV5A/H8ImtTfuZupnN9EObetGvSyTyo8K/qHLm8qpdy0AJbssm5O+Sxcy0TWjV3fEHeVABnn0FS7KGVDu6RcgJSfjszmLe1L+nhYF1jtLm1tco1EMir2iyUwLNQGsBn89auSyYF/K8109ILo06a4DErKcI3hSp/itB55dws2p2XLWtvWPhFH5tp8gCSCc90DRlRiWyyrlMYxQWnJfJbNVxc5g8D9R5JT7Kj+Yj1KyTolrF75N9x53TeOrhbJu4cBkr/Inpp+uI4bz0+ZNJE5KTrtW1NvxEicFw3R2aqjtpBaIY6ZDFW3SM7zHT49MrI4Q8Vd9y4i+q5KruPMSjCUhIPB05yyWJk9vaOyNdyCgluIWOUM81Bey9b13gvSfNMAx+8O29jD5dCBaAHn6lHu+/i67tHxvR5kAgA/XXZfjNNXrAb4PcWlasOXRLNsgJCEb/DDxzNA4dVuhhdV4+KC2qZheGn6YROpBqCHCrL8ITE2hPK+j30DkFTH4jw69tThQjZZZDo9jqoI0kVpDroFskUI8fPZBZY+k7/lTUEPxH+JaDi80fNFWuROYiAwF44NCp1I7GdtqyVdU+WNUaz6NPAaKvFZqGdcwmCJqpi7yCeS4w7vERGTzQ1V4ZJZzgDUPOrthrVP4XBoJtvsBZ3l5KdAO96Ciw== tinkivity ssh ca host key

With the amendment to your ~/.ssh/known_hosts file above you should be done. If you now connect to a host matching the domain filter (*.tinkivity.home in the case above) and still get told the authenticity cannot be verified, you should be concerned for a reason 😉

Exception: create private keys for future host

In exceptional cases we can store the private key of a host in our Certificate Authority. One of those rare exceptions is when we want to build an appliance that we want to set up via an image in a one-stop-shop approach (i.e. using Crochet-FreeBSD or YOCTO). In such cases it makes sense to generate the private key(s) upfront, because the first time such a device actually boots might be somewhere in the field (where we don’t have access). What we want are 3 keys: one for each of the commonly accepted and considered-safe cryptographic algorithms.

Do not set a passphrase if you want the host to be able to operate unattended!

andreas@rootca ➜  image123.tinkivity.home ssh-keygen -t ecdsa -b 521 -C "image123 host key" -f ssh_host_ecdsa_key
andreas@rootca ➜  image123.tinkivity.home ssh-keygen -t ed25519 -C "image123 host key" -f ssh_host_ed25519_key
andreas@rootca ➜  image123.tinkivity.home ssh-keygen -t rsa -b 4096 -C "image123 host key" -f ssh_host_rsa_key

SSH config for public key authentication with OSX

Rather than using a username/password based SSH login, it is much safer to use public key authentication, ideally combined with SSH certificates, as those have an (ideally near) expiration date. The first step towards public key authentication is to generate a keypair.

andreas@laptop ➜  ~ ssh-keygen -t ecdsa -f ~/.ssh/id_ecdsa

Above command will generate a keypair using an elliptic curve digital signature algorithm. You will be asked to type a passphrase for protection of your private key. You should definitely use a passphrase. Do not leave your key unprotected!

andreas@laptop ➜  ~ ls -l ~/.ssh/
total 144
-rw-------  1 andreas  staff    578 Dec 10 10:47 id_ecdsa
-rw-r--r--  1 andreas  staff    193 Dec 10 10:47 id_ecdsa.pub

As a next step you can submit id_ecdsa.pub (the public part of the key) to your SSH CA to obtain a signed certificate. This step is optional, though. What you will need to do in any case is create a config file for ssh that dictates when and how to use the key.

andreas@laptop ➜  ~ vim ~/.ssh/config

Now add the following content to ~/.ssh/config and save it.

Match Host *.local
  UseKeychain yes
  AddKeysToAgent yes
  PreferredAuthentications publickey
  IdentityFile ~/.ssh/id_ecdsa
# User andreas

Here is what the configuration does on a line by line basis.

  1. a host filter that applies the block of settings below to every host name that ends with .local (e.g. server1.local, server23.local, …)
  2. advise ssh to use OSX’s keychain
  3. advise ssh to add private keys to the agent and OSX’s keychain once they have been unlocked
  4. use public key authentication
  5. use the private key stored in ~/.ssh/id_ecdsa for public key authentication to matching hosts
  6. optional: always use andreas as the username, so rather than ‘ssh andreas@host1.local’ you only have to type ‘ssh host1.local’

Finally you need to perform an initial upload of your key into OSX’s keychain (this is a one time thing!).

andreas@laptop ➜  ~ ssh-add -K ~/.ssh/id_ecdsa 

After you have done this, you can log in to any host that trusts you without unlocking your private key with your passphrase, as long as you don’t reboot your machine.

x509 certificate templates with step ca

Rather than having your step ca issue certificates that only reflect what is in the CSR, you can use certificate templates to dynamically add content to the x509 certificates being issued.

CA Configuration

In your /usr/local/etc/step/ca/config/ca.json configuration file you need to add some options to your provisioner. Let’s assume you have an existing ACME provisioner that looks as follows:

                        {
                                "type": "ACME",
                                "name": "24h",
                                "claims": {
                                        "maxTLSCertDuration": "24h0m0s",
                                        "defaultTLSCertDuration": "24h0m0s"
                                }
                        },

After the claims section you need to insert an options block. As usual, don’t forget the comma after the curly brace that closes the claims section.

                        {
                                "type": "ACME",
                                "name": "24h",
                                "claims": {
                                        "maxTLSCertDuration": "24h0m0s",
                                        "defaultTLSCertDuration": "24h0m0s"
                                },
                                "options": {
                                        "x509": {
                                                "templateFile": "/usr/local/etc/step/ca/templates/certs/x509/acme.tpl",
                                                "templateData": {
                                                        "TDCountry": "DE",
                                                        "TDStateOrProvince": "Saxony",
                                                        "TDLocality": "Dresden",
                                                        "TDStreetAddress": "Musterstrasse 1, 01234 Dresden, Germany",
                                                        "TDOrganization": "Tinkivity",
                                                        "TDOrganizationalUnit": "web server team"
                                                }
                                        }
                                }
                        },

The templateFile entry points to a template file that we will look at in just a few seconds. The templateData entry introduces a data section that we use to inject some dynamic data. The data items (TDCountry through TDOrganizationalUnit) make more sense when looking at the actual acme.tpl template file referenced by templateFile.

Template file

Below is our acme.tpl template file. I will not explain the complete template, just some of the most important aspects.

{
    "subject": {
    {{- if .Insecure.CR.Subject.CommonName }}
        "commonName": "{{ .Insecure.CR.Subject.CommonName }}",
    {{- else }}
        "commonName": "{{ (index .SANs 0).Value }}",
    {{- end }}
        "country": "{{ .TDCountry }}",
        "province": "{{ .TDStateOrProvince }}",
        "locality": "{{ .TDLocality }}",
        "streetAddress": "{{ .TDStreetAddress }}",
        "organization": "{{ .TDOrganization }}",
        "organizationalUnit": "{{ .TDOrganizationalUnit }}"
    },
    "sans": {{ toJson .SANs }},
{{- if typeIs "*rsa.PublicKey" .Insecure.CR.PublicKey }}
    "keyUsage": ["keyEncipherment", "digitalSignature"],
{{- else }}
    "keyUsage": ["digitalSignature"],
{{- end }}
    "extKeyUsage": ["serverAuth", "clientAuth"]
}

The if/else block around commonName checks whether the CSR contains a common name in the subject. If a common name exists it is applied, otherwise the first subject alternative name is used as the common name in the certificate subject. The reason for that logic is that certbot in my case sometimes seems to omit the common name ;-(

The country through organizationalUnit entries reference the injected template data and set the corresponding subject fields for the certificate that is being issued.
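
One practical note: step ca reads ca.json at startup, so the new options block and the template will typically only take effect after the service has been restarted (the same command as used later when adding provisioners):

sudo service step-ca restart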

Certificate issued with applied template

Let’s check what a certificate obtained via certbot looks like after the template has been applied.

andreas@testserver ➜  ~ sudo openssl x509 -noout -text -in /usr/local/etc/letsencrypt/live/testserver/fullchain.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            9f:95:31:ed:1f:0b:b8:99:39:e1:64:02:73:89:d1:db
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C = DE, ST = Saxony, O = Tinkivity, OU = Tinkivity Intermediate Certificate Authority, CN = Smallstep Intermediate CA, emailAddress = xxx@xxx.com
        Validity
            Not Before: Dec  7 18:18:07 2020 GMT
            Not After : Dec  8 18:19:07 2020 GMT
        Subject: C = DE, ST = Saxony, L = Dresden, street = "Musterstrasse 1, 01234 Dresden, Germany", O = Tinkivity, OU = web server team, CN = testserver
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:c4:74:01:28:bb:26:20:2f:a1:6b:30:44:9e:9b:
                    ...
                    << REDACTED >>
                    ...
                    04:57
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                81:7C:BB:71:7F:02:52:06:71:CF:E2:87:3B:CA:8A:09:6C:81:65:46
            X509v3 Authority Key Identifier: 
                keyid:87:32:28:49:63:29:06:79:96:13:DE:47:14:9F:EF:C0:DD:EC:4D:C3

            X509v3 Subject Alternative Name: 
                DNS:testserver, DNS:testserver.local
            1.3.6.1.4.1.37476.9000.64.1: 
                0
.....24h..
    Signature Algorithm: sha256WithRSAEncryption
         b7:74:16:a4:a1:5a:fb:df:a0:ea:42:5c:cd:70:fc:16:2d:8b:
         ...
         << REDACTED >>
         ...
         c9:9e:db:00:40:56:61:a1

Client Certificates with NGINX

If you want to entirely restrict access to a web server to only those folks that you deem authorized, using client certificates is the way to go. The common term for this pattern is mutual TLS, or mTLS for short.

Outline

There are 3 components you need for this recipe:

  1. a web server that supports mTLS
  2. a certificate authority that issues a client certificate
  3. a client that will submit such client certificate to the web server as part of a request

In scope of this blog post

The following steps are being explained as part of this blog post:

  • generate a certificate request (CSR) with openssl
  • issue a client certificate with step ca
  • configure NGINX to require mTLS
  • issue an HTTPS request with curl

Not in scope of this blog post

There is a lot of background knowledge required to fully comprehend how mTLS works in detail. The following topics are not addressed in this blog post and are assumed to be understood at least to a minimal extent.

  • Core concepts of a Public Key Infrastructure (PKI)
  • x509 Certificates
  • OpenSSL
  • step ca
  • Import of x509 client certificates into the operating system you use

Generate a CSR with OpenSSL

For a Certificate Authority (CA) to issue a certificate to a client, a Certificate Signing Request (CSR) from the client is needed if that client wants to keep its private key to itself. An easy way to generate one is to use OpenSSL with a configuration file. The configuration file below (assumed name: john-csr.cnf) shows the bare minimum for a CSR that a CA can use to issue a client certificate. The only two pieces of net information contained in the CSR are the user’s name and the user’s email address.

[req]
prompt             = no
distinguished_name = req_dn
req_extensions     = req_ext

[req_dn]
CN                 = John Doe

[req_ext]
subjectAltName     = @alt_names

[alt_names]
email.1            = john.doe@examplemail.com

While we could have put the email address into the subject (req_dn section), it is important to understand that we deliberately do not do this and instead place the email address in the subject alternative name extension.

To generate the CSR we use openssl’s req command. We use the existing private key that we have generated beforehand and stored in the key.pem file.
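
In case that key does not exist yet, it can be created with OpenSSL as well; a one-line sketch producing a 2048-bit RSA key to match the CSR output shown below:

openssl genrsa -out key.pem 2048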

andreas@laptop ➜  ~ openssl req -new -config john-csr.cnf -key key.pem -out johndoe.csr

When inspecting the generated CSR, we can see the two pieces of information reflected in it.

andreas@laptop ➜  ~ openssl req -noout -text -in johndoe.csr                          
Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: CN = John Doe
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:e4:10:b6:3d:82:fa:ca:4b:b7:61:20:a0:33:ed:
                    ...
                    <<REDACTED>>
                    ...
                    67:ed
                Exponent: 65537 (0x10001)
        Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name: 
                email:john.doe@examplemail.com
    Signature Algorithm: sha256WithRSAEncryption
         0a:13:8b:4a:16:ee:c4:4f:23:86:f6:d8:b2:d3:7c:d6:70:d1:
         ...
         <<REDACTED>>
         ...
         3f:61:4d:cf

Issue a certificate with step ca

We will copy the CSR onto our step ca server (maybe under the incoming folder) and issue a step ca sign command. If we have multiple provisioners, the sign command will present a list of all provisioners and ask us to select one interactively. After selecting the provisioner, we need to input the password that decrypts the provisioner key.

andreas@acme ➜  ~ step ca sign --ca-url https://acme.local:8443 --root /etc/ssl/tinkivity.pem incoming/johndoe.csr issued/johndoe.pem
✔ Provisioner: 1year (JWK) [kid: SlOHMD00B8-WIyUqa1zQxP9xwG4UQCvOorMU02xThUc]
✔ Please enter the password to decrypt the provisioner key: 
✔ CA: https://acme.local:8443
✔ Certificate: issued/johndoe.pem

As an alternative, we can pre-select the provisioner along with the sign command.

andreas@acme ➜  ~ step ca sign --ca-url https://acme.local:8443 --root /etc/ssl/tinkivity.pem --provisioner 1year incoming/johndoe.csr issued/johndoe.pem
✔ Provisioner: 1year (JWK) [kid: SlOHMD00B8-WIyUqa1zQxP9xwG4UQCvOorMU02xThUc]
✔ Please enter the password to decrypt the provisioner key: 
✔ CA: https://acme.local:8443
✔ Certificate: issued/johndoe.pem

Another alternative is to generate a token upfront, as that allows us to pass in the password file and thus make the command completely interaction-free (not shown here, but it can be seen in this previous blog post).

Either way, we can inspect the generated certificate with OpenSSL.

andreas@acme ➜  ~ openssl x509 -noout -text -in issued/johndoe.pem 
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3f:ed:1f:01:b1:ce:90:66:0f:33:b7:31:fa:ce:b9:8a
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=DE, ST=Saxony, O=Tinkivity, OU=Tinkivity Intermediate Certificate Authority, CN=Smallstep Intermediate CA/emailAddress=xxx@xxx.com
        Validity
            Not Before: Dec  6 12:58:16 2020 GMT
            Not After : Dec  6 12:59:16 2021 GMT
        Subject: CN=John Doe
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:e4:10:b6:3d:82:fa:ca:4b:b7:61:20:a0:33:ed:
                    ...
                    <<REDACTED>>
                    ...
                    67:ed
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Subject Key Identifier: 
                7D:A9:C5:44:49:EC:CC:54:37:64:46:CF:A9:99:85:D6:23:18:9D:F8
            X509v3 Authority Key Identifier: 
                keyid:87:32:28:49:63:29:06:79:96:13:DE:47:14:9F:EF:C0:DD:EC:4D:C3

            X509v3 Subject Alternative Name: 
                email:john.doe@examplemail.com
            1.3.6.1.4.1.37476.9000.64.1: 
                07.....1year.+SlOHMD00B8-WIyUqa1zQxP9xwG4UQCvOorMU02xThUc
    Signature Algorithm: sha256WithRSAEncryption
         76:46:dc:d0:c7:81:ab:f3:c0:3c:0f:5c:99:d1:12:ca:97:a1:
         ...
         <<REDACTED>>
         ...
         a7:e7:56:13:79:3d:3c:b0

Setup NGINX for mTLS

Assuming we re-use the NGINX setup from this previous blog post, we only have to add a few lines to the site configuration at /usr/local/etc/nginx/sites/testclient.conf on our NGINX server.

server {
#       listen       80;
        server_name  testclient;

        access_log /var/log/nginx/testclient.access.log;
        error_log /var/log/nginx/testclient.error.log;

        # location of our own root certificate
        ssl_client_certificate  /etc/ssl/certs/97efb5b5.0;
        ssl_verify_client       optional;

        location / {
            root   /usr/local/www/sites/testclient/html;
            index  index.html;

            if ($ssl_client_verify != SUCCESS) {
                    return 403;
            }
        }

        listen 443 ssl;
        ssl_certificate /usr/local/etc/letsencrypt/live/testclient/fullchain.pem;
        ssl_certificate_key /usr/local/etc/letsencrypt/live/testclient/privkey.pem;
        include /usr/local/etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /usr/local/etc/letsencrypt/ssl-dhparams.pem;
}

server {
        if ($host = testclient) {
            return 301 https://$host$request_uri;
        }

        listen       80;
        server_name  testclient;
        return 404;
}

Now we only have to reload the NGINX configuration.

andreas@testclient ➜  ~ sudo service nginx reload
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

Issue a request with client authentication

Let’s start by trying to issue a regular request from our laptop. We use a simple curl command to call out to https://testclient. We tell curl where to find our root certificate via the --cacert option, so it will not complain about an unknown root CA.

According to the NGINX configuration, we should immediately receive a 403 error from the web server if we lack client authentication…

andreas@laptop ➜  ~ curl --cacert /etc/ssl/certs/97efb5b5.0 https://testclient
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.18.0</center>
</body>
</html>

Now, let’s apply our user certificate and the private key we’ve used to generate the CSR.

andreas@laptop ➜  ~ curl --cacert /etc/ssl/certs/97efb5b5.0 --cert johndoe.pem --key key.pem https://testclient
<html>
    <head>
        <title>TESTCLIENT</title>
    </head>
    <body>
        <h1>Hello World!</h1>
    </body>
</html>

Manually provisioning x509 certificates with step ca

Based on the step ca setup as described in Running your own ACME Server, we can add another provisioner that allows us to manually sign CSRs from web servers that do not support Certbot. One example of such a use case would be system solutions like TrueNAS or Proxmox, which by now have ACME support but do not allow easy customization or overriding of the ACME server URL. In fact, many system solutions with ACME support assume that only Let’s Encrypt is used when obtaining certificates via the ACME protocol.

In this blog post we look at how to add further provisioners for smallstep’s step ca.

Adding a simple provisioner via command line

More or less only one command is needed to add a provisioner. We need to pass in the name of the provisioner (4weeks), the location of the CA config file, the location of the password file and the --create flag.

andreas@acme ➜  ~ sudo step ca provisioner add 4weeks --ca-config /usr/local/etc/step/ca/config/ca.json --password-file /usr/local/etc/step/password.txt --create

Looking at the /usr/local/etc/step/ca/config/ca.json configuration file we can find the following new block next to our existing ACME provisioner.

                        {
                                "type": "JWK",
                                "name": "4weeks",
                                "key": {
                                        "use": "sig",
                                        "kty": "EC",
                                        "kid": "WsxEssolEVj1TpF-nfXpSuY2jL8pLQgpCtgVj5Qq3Ls",
                                        "crv": "P-256",
                                        "alg": "ES256",
                                        "x": "5b9f1pk6VVM5CCIHUOpbw6SV8lC-rAxEQtiScRZUopE",
                                        "y": "hxRrUPm7M6S7HBm9LZV5JUbBLP7l2aG4CKr1vY20csw"
                                },
                                "encryptedKey": "eyJh... <<REDACTED>> ...no1w"
                        }

We called our provisioner 4weeks for a reason: we want certificates to be valid for 4 weeks (672 hours). To that end, we need to add a claims section to the provisioner that specifies the validity. When adding the claims section, do not forget to keep the JSON valid and make sure to append a comma to the last line before the new section.

                        {
                                "type": "JWK",
                                "name": "4weeks",
                                "key": {
                                        "use": "sig",
                                        "kty": "EC",
                                        "kid": "WsxEssolEVj1TpF-nfXpSuY2jL8pLQgpCtgVj5Qq3Ls",
                                        "crv": "P-256",
                                        "alg": "ES256",
                                        "x": "5b9f1pk6VVM5CCIHUOpbw6SV8lC-rAxEQtiScRZUopE",
                                        "y": "hxRrUPm7M6S7HBm9LZV5JUbBLP7l2aG4CKr1vY20csw"
                                },
                                "encryptedKey": "eyJh... <<REDACTED>> ...no1w",
                                "claims": {
                                        "minTLSCertDuration": "24h0m0s",
                                        "maxTLSCertDuration": "672h0m0s",
                                        "defaultTLSCertDuration": "672h0m0s",
                                        "disableRenewal": false
                                }
                        }

To make the change effective, the service needs to be restarted.

andreas@acme ➜  ~ sudo service step-ca restart                  
Stopping step_ca.
Starting step_ca.
step_ca is running as pid 96773.

Import a CSR

While this blog post will not cover how to create a CSR, we start by copying a CSR onto our step ca server. For the remainder of this blog post we assume request.csr to be the name of that CSR.
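
Copying the CSR over is a plain scp; a sketch, where the host name and the incoming folder are taken from the surrounding commands and assumed to exist:

scp request.csr andreas@acme.local:incoming/request.csr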

Issue a Certificate

The step command line tools allow us to issue a certificate with a token. The first step is to create such a token, which we export into an environment variable for later use.

andreas@acme ➜  ~ export TOKEN=`step ca token 'newserver.local' --ca-url https://acme.local:8443 --root /etc/ssl/tinkivity.pem`
✔ Provisioner: 4weeks (JWK) [kid: WsxEssolEVj1TpF-nfXpSuY2jL8pLQgpCtgVj5Qq3Ls]
✔ Please enter the password to decrypt the provisioner key: 

The command is interactive and will first ask us to select one provisioner from the list of all available provisioners. After we select our provisioner (4weeks), we are asked for the password to decrypt the provisioner key.

As an alternative to the interactive password input, we could add the --password-file option to the command. That way we don’t have to type our password, but we would need to run the command with sudo in order to get read access to the password file.

andreas@acme ➜  ~ export TOKEN=`step ca token 'newserver.local' --ca-url https://acme.local:8443 --root /etc/ssl/tinkivity.pem --password-file /usr/local/etc/step/password.txt`

Either way the command should complete without error and our token should be available.

andreas@acme ➜  ~ echo $TOKEN                                                                                                           
eyJhbGciOiJFUzI1NiIsImtpZCI6IldzeEVzc29sRVZqMVRwRi1uZlhwU3VZMmpMOHBMUWdwQ3RnVmo1UXEzTHMiLCJ0eXAiOiJKV1QifQ.eyJhdWQiOiJodHRwczovL2FjbWUudGlua2l2aXR5LmhvbWU6ODQ0My8xLjAvc2lnbiIsImV4cCI6MTYwNzAxNTA1NywiaWF0IjoxNjA3MDE0NzU3LCJpc3MiOiI0d2Vla3MiLCJqdGkiOiI3ZDRlZjViZWI3YWQ2NzA1ZTkzNDY3NmMwMjhjMmY3OWFjMmRmNzMxODIwZDg3NDc0NWJmYzMyNzIwMmIyOTNjIiwibmJmIjoxNjA3MDE0NzU3LCJzYW5zIjpbIkFuZHJlYXMgU3RyYXVjaCJdLCJzaGEiOiIwMzUxMWI1YzRjZmNlZWJlNjI5YzJjMjQ2YTIzMjMwYjhhYzQxNDQyOTI0MjliOGMzN2ZhM2FjMGE3MmUwZmM5Iiwic3ViIjoiQW5kcmVhcyBTdHJhdWNoIn0.gaOwV7nEVq8cOL4uVvp1Y4-c3NUMs0YKMri0N9Q9MQRAWnvCg8BKuntSxThIeywvM0gMO2QND_9iz9VObFRULg

All that’s left to do now is to sign the CSR using our token.

andreas@acme ➜  ~ step ca sign --token $TOKEN incoming/request.csr issued/certificate.csr       
✔ CA: https://acme.local:8443/1.0/sign
✔ Certificate: issued/certificate.csr
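
If you want to double-check the result before shipping it, the step CLI can print the certificate contents; a quick sketch (plain openssl x509 -noout -text works just as well):

step certificate inspect issued/certificate.csr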

That’s it. The certificate is ready to be deployed.

Using Certbot with your own ACME server

In the last blog post Running your own ACME Server we successfully installed our own PKI with an ACME provisioner. In this blog post we want to look at the client side and automatically obtain and renew a certificate for a web server.

NGINX

From an ACME point of view the type of web server doesn’t matter at all. In this example we will use NGINX as a web server, because it is lightweight and popular.

andreas@testclient ➜  ~ sudo pkg install nginx

As we want NGINX to run as a service we will append one line to our /etc/rc.conf and then start the service.

andreas@testclient ➜  ~ sudo sh -c 'echo nginx_enable=\"YES\" >> /etc/rc.conf'
andreas@testclient ➜  ~ sudo service nginx start                              
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.

Now we apply a pattern that extracts web server configuration and contents into separate file system locations. Each web server (or virtual domain) will get its own content folder. In our case we want to put all content into a testclient folder.

andreas@testclient ➜  ~ sudo mkdir -p /usr/local/www/sites/testclient/html

We will create a simple html file at /usr/local/www/sites/testclient/html/index.html with the following content.

<html>
    <head>
        <title>TESTCLIENT</title>
    </head>
    <body>
        <h1>Hello World!</h1>
    </body>
</html>

The web server configuration will be put into a .conf file that follows the same naming. Although we only need one web server in our example, we will still create a sites subfolder for good housekeeping.

andreas@testclient ➜  ~ sudo mkdir /usr/local/etc/nginx/sites
andreas@testclient ➜  ~ sudo touch /usr/local/etc/nginx/sites/testclient.conf

Our site configuration at /usr/local/etc/nginx/sites/testclient.conf will have the following content.

server {
        listen       80;
        server_name  testclient;

        access_log /var/log/nginx/testclient.access.log;
        error_log /var/log/nginx/testclient.error.log;

        location / {
            root   /usr/local/www/sites/testclient/html;
            index  index.html;
        }
}

Finally we clean up /usr/local/etc/nginx/nginx.conf by removing the complete server section, as we don’t need it anymore. Instead we add an include statement just before the closing } of the http section. That makes sure our /usr/local/etc/nginx/sites/testclient.conf configuration file is parsed.

Based on a fresh installation the config file would most likely look like the following.

# This default error log path is compiled-in to make sure configuration parsing
# errors are logged somewhere, especially during unattended boot when stderr
# isn't normally logged anywhere. This path will be touched on every nginx
# start regardless of error log location configured here. See
# https://trac.nginx.org/nginx/ticket/147 for more info. 
#
#error_log  /var/log/nginx/error.log;
#

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    include "sites/*.conf";
}

Now we check the configuration and reload the config.

andreas@testclient ➜  ~ sudo nginx -t                                      
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
andreas@testclient ➜  ~ sudo service nginx reload                          
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

Adding our own Root Certificate to the trust store

To make things easier we will add our own root certificate to the trust store of our client. We copy the certificate into the /usr/share/certs/trusted folder and then apply a rehash operation that is limited to just the one certificate we copied.
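
The copy itself could look like this; a sketch, where the source path follows the step ca layout shown in the ACME server post further down and the target file name matches the hash command below:

scp andreas@acme.local:/usr/local/etc/step/ca/certs/root_ca.crt /tmp/tinkivity.pem
sudo cp /tmp/tinkivity.pem /usr/share/certs/trusted/tinkivity.pem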

andreas@testclient ➜  ~ openssl x509 -hash -noout -in /usr/share/certs/trusted/tinkivity.pem 
97efb5b5
andreas@testclient ➜  ~ sudo ln -s /usr/share/certs/trusted/tinkivity.pem /etc/ssl/certs/97efb5b5.0

We can tell whether we have been successful by checking if openssl’s s_client command can verify the certificate from our ACME server.

andreas@testclient ➜  ~ openssl s_client -connect acme.local:8443 --quiet      
depth=1 C = DE, ST = Saxony, O = Tinkivity, OU = Tinkivity Intermediate Certificate Authority, CN = Smallstep Intermediate CA, emailAddress = xxx@xxx.com
verify return:1
depth=0 CN = Step Online CA
verify return:1

Certbot

Now that we have set up a new web server, we can install Certbot and have it obtain a certificate from our ACME server. The first step is installing the packages for Certbot itself and its NGINX plugin.

andreas@testclient ➜  ~ sudo pkg install py37-certbot py37-certbot-nginx

Before we move on to the next step, registering our domain at the ACME server, we need to find out whether Python can successfully use the trust store. We issue a simple Python command to check SSL verification.

andreas@testclient ➜  ~ python3.7 -c "import requests; print(requests.get('https://acme.local:8443').text)"
404 page not found

If we receive an actual response from the server (the 404 page not found above counts as success), we are good for ‘regular’ Certbot usage. If we receive a lengthy exception that somewhere contains a line like the one below, our Python installation doesn’t pick up the trust store correctly and we will need to operate Certbot with the --no-verify-ssl option for further requests.

requests.exceptions.SSLError: HTTPSConnectionPool(host='acme.local', port=8443): Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))

The above error happens on FreeBSD 12.2-RC3 with python3.7 and seems to be a deeper issue, because Python claims to look at the correct trust store location:

andreas@testclient ➜  ~ python3.7 -c "import ssl; print(ssl.get_default_verify_paths())"                                                         
DefaultVerifyPaths(cafile='/etc/ssl/cert.pem', capath='/etc/ssl/certs', openssl_cafile_env='SSL_CERT_FILE', openssl_cafile='/etc/ssl/cert.pem', openssl_capath_env='SSL_CERT_DIR', openssl_capath='/etc/ssl/certs')

The next step is the registration of our domain at the ACME server. We use the following command:

andreas@testclient ➜  ~ sudo certbot --nginx --agree-tos --non-interactive --no-verify-ssl --email xxx@xxx.com --server https://acme.local:8443/acme/acme-smallstep/directory --domain testclient
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
Obtaining a new certificate
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
Performing the following challenges:
http-01 challenge for testclient
Using default address 80 for authentication.
nginx: [warn] conflicting server name "testclient" on 0.0.0.0:80, ignored
Waiting for verification...
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
Cleaning up challenges
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py:988: InsecureRequestWarning: Unverified HTTPS request is being made to host 'acme.local'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning,
Could not automatically find a matching server block for testclient. Set the `server_name` directive to use the Nginx installer.

IMPORTANT NOTES:
 - Unable to install the certificate
 - Congratulations! Your certificate and chain have been saved at:
   /usr/local/etc/letsencrypt/live/testclient/fullchain.pem
   Your key file has been saved at:
   /usr/local/etc/letsencrypt/live/testclient/privkey.pem
   Your cert will expire on 2020-12-01. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /usr/local/etc/letsencrypt. You should
   make a secure backup of this folder now. This configuration
   directory will also contain certificates and private keys obtained
   by Certbot so making regular backups of this folder is ideal.

Although the ACME part worked completely fine, we still get an error from Certbot’s NGINX plugin. It turns out that the plugin cannot locate the server_name directive in our NGINX configuration. That is driven by the fact that we have extracted parts of the NGINX configuration into a separate configuration file (/usr/local/etc/nginx/sites/testclient.conf). We have two options now:

  1. go back to a single NGINX configuration file
  2. manually enter the Certbot configuration snippets into our separate NGINX configuration file

We will go with the latter option and put in the Certbot configuration snippets ourselves. The configuration at /usr/local/etc/nginx/sites/testclient.conf will now look as follows.

server {
#       listen       80;
        listen       443 ssl;
        server_name  testclient;

        access_log /var/log/nginx/testclient.access.log;
        error_log /var/log/nginx/testclient.error.log;

        location / {
            root   /usr/local/www/sites/testclient/html;
            index  index.html;
        }

        ssl_certificate /usr/local/etc/letsencrypt/live/testclient/fullchain.pem;
        ssl_certificate_key /usr/local/etc/letsencrypt/live/testclient/privkey.pem;
        include /usr/local/etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /usr/local/etc/letsencrypt/ssl-dhparams.pem;
}

server {
        if ($host = testclient) {
            return 301 https://$host$request_uri;
        }

        listen       80;
        server_name  testclient;
        return 404;
}

In a nutshell, the changes are:

  1. our existing server block has been updated to no longer listen on port 80, but on port 443 via SSL instead
  2. the locations of the certificate obtained from the ACME server, the private key, the SSL options (cipher suite etc.) as well as the Diffie-Hellman parameters have been included in the configuration
  3. a new server block has been added that listens on port 80 and redirects to port 443 (SSL)

In order to apply the configuration changes, we have to reload the NGINX configuration.

andreas@testclient ➜  ~ sudo service nginx reload                                 
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful

Auto Renew

Last but not least we need to insert a cron task so that Certbot will automatically renew the certificate on a regular schedule.

andreas@testclient ➜  ~ echo "0       0,12    *       *       *       root    python3.7 -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew --no-verify-ssl --quiet" | sudo tee -a /etc/crontab > /dev/null

The above entry will run Certbot’s renew command at midnight and at noon, after a random delay of up to an hour. Without further parameters (e.g. a domain), the command will renew all certificates managed by Certbot. If you want to see which certificates are managed by Certbot, you can run the following command.

andreas@testclient ➜  ~ sudo certbot certificates
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Cannot extract OCSP URI from /usr/local/etc/letsencrypt/live/testclient/cert.pem

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Found the following certs:
  Certificate Name: testclient
    Serial Number: 8a98c881442ed7de1d460dee5a97fb6
    Domains: testclient
    Expiry Date: 2020-12-02 18:51:33+00:00 (VALID: 23 hour(s))
    Certificate Path: /usr/local/etc/letsencrypt/live/testclient/fullchain.pem
    Private Key Path: /usr/local/etc/letsencrypt/live/testclient/privkey.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
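
If you want to test the renewal path without waiting for the cron schedule, you can run the renew command from the crontab entry by hand (without --quiet, so you can see the output):

sudo certbot renew --no-verify-ssl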

That’s it. If you put your web browser onto http://testclient in your local network, you should see a Hello World page with a valid certificate.

Running your own ACME Server

While some of us might have heard of Let’s Encrypt and how it uses ACME for complete automation of certificate management, a few of us might even ask themselves: ‘Can I also run my own private ACME server in my home network?’. The basic answer is yes, because ACME is a standardized and open protocol. As in many ‘make vs. buy’ decisions, a more detailed look reveals that writing your own implementation of ACME is a lot of effort and thus not the right approach for a home project. Luckily there is smallstep, a company from the Bay Area that provides an open-source certificate authority & PKI toolkit that we can use.

Installing step-certificates

There are two packages you need to install in order to start working: the step-certificates package provides the certificate authority (server) and the step-cli package provides a command line client.

andreas@acme ➜  ~ sudo pkg install step-certificates step-cli

After installation there will be a service script available.

andreas@acme ➜  ~ ls -lah /usr/local/etc/rc.d/step-ca
-rwxr-xr-x  1 root  wheel   2.5K Oct  5 10:56 /usr/local/etc/rc.d/step-ca

Looking into the service script reveals a number of interesting findings:

  1. the rcvar we need to add to our /etc/rc.conf for service management has a value of step_ca_enable
  2. the directory that will contain all configuration (including the password) defaults to /usr/local/etc/step and after fresh installation this directory is completely empty
  3. the actual configuration file defining our step ca will be /usr/local/etc/step/config/ca.json
  4. the master password will be stored in plain text under /usr/local/etc/step/password.txt
  5. the service script implements a start_precmd that will interact with the command line in order to initialize a template config and password upon service start

First Time (Auto) Setup

We will append the step_ca_enable rcvar into our /etc/rc.conf so that we can use the service command to start and stop the step-ca service.

# Enable Step CA
step_ca_enable="YES"

Now, what we need to understand is that the start_precmd section of the service script (see the last finding in the list above) will simply call the step ca init command and then interactively collect a password to store in the password.txt file. Having said that, we will make use of that mechanism and let the command line guide us through the creation of our PKI.

andreas@acme ➜  ~ sudo service step-ca start
No configured Step CA found.
Creating new one....
✔ What would you like to name your new PKI? (e.g. Smallstep): acme
✔ What DNS names or IP addresses would you like to add to your new CA? (e.g. ca.smallstep.com[,1.1.1.1,etc.]): acme.local,192.168.1.2
✔ What address will your new CA listen at? (e.g. :443): :8443
✔ What would you like to name the first provisioner for your new CA? (e.g. you@smallstep.com): firstprovisioner
✔ What do you want your password to be? [leave empty and we'll generate one]: 

Generating root certificate... 
all done!

Generating intermediate certificate... 
all done!

✔ Root certificate: /usr/local/etc/step/ca/certs/root_ca.crt
✔ Root private key: /usr/local/etc/step/ca/secrets/root_ca_key
✔ Root fingerprint: 97f4728d915d001e51ceaab3e7343a60807625ca5d5d588c52b739b202fb0164
✔ Intermediate certificate: /usr/local/etc/step/ca/certs/intermediate_ca.crt
✔ Intermediate private key: /usr/local/etc/step/ca/secrets/intermediate_ca_key
✔ Database folder: /usr/local/etc/step/ca/db
✔ Default configuration: /usr/local/etc/step/ca/config/defaults.json
✔ Certificate Authority configuration: /usr/local/etc/step/ca/config/ca.json

Your PKI is ready to go. To generate certificates for individual services see 'step help ca'.

FEEDBACK 😍 🍻
      The step utility is not instrumented for usage statistics. It does not
      phone home. But your feedback is extremely valuable. Any information you
      can provide regarding how you’re using `step` helps. Please send us a
      sentence or two, good or bad: feedback@smallstep.com or join
      https://gitter.im/smallstep/community.
Step CA Password file for auto-start not found
Creating it....
Please enter the Step CA Password:

Starting step_ca.
step_ca is running as pid 58450.

Obviously a ready-to-go template config has been created and the service has already been started. Let’s have a look at the directory structure in place, so we can better understand what has been done here.

andreas@acme ➜  ~ sudo tree /usr/local/etc/step
/usr/local/etc/step
├── ca
│   ├── certs
│   │   ├── intermediate_ca.crt
│   │   └── root_ca.crt
│   ├── config
│   │   ├── ca.json
│   │   └── defaults.json
│   ├── db
│   │   ├── 000000.vlog
│   │   ├── LOCK
│   │   └── MANIFEST
│   ├── secrets
│   │   ├── intermediate_ca_key
│   │   └── root_ca_key
│   └── templates
└── password.txt

6 directories, 10 files

The certs subfolder contains a root certificate as well as an intermediate certificate; the keys for both are stored in the secrets subfolder. Both keys are encrypted with the same password that we provided interactively at the command line during the initial service start. That password has been stored as plain text in the password.txt file.
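
If you are curious about what exactly was generated, standard openssl tooling can inspect the root certificate (only the command is shown here, output omitted):

# show subject, issuer and validity of the auto-generated root certificate
sudo openssl x509 -in /usr/local/etc/step/ca/certs/root_ca.crt -noout -subject -issuer -dates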

The config subfolder contains two json files. One file (ca.json) contains a list of all provisioners and the other file (defaults.json) contains some general information as to where the step ca can be reached and where the root certificate is located.

The db folder contains a NoSQL database with meta information on issued certificates.

The secrets folder contains the private keys; at minimum, the key for the intermediate certificate lives here (in the auto-generated setup the root key is stored here as well).

The templates folder is empty after the initial setup but can be filled with certificate templates, which will become very useful later on.

Running a quick test

Of course we want to find out if our PKI is really running and visible from the outside. On a local command line (not the actual server running the PKI) we use openssl’s s_client command to check things out.

andreas@laptop ➜  ~ openssl s_client -connect acme.local:8443 -showcerts
CONNECTED(00000005)
depth=1 CN = myownlittleca Intermediate CA
verify error:num=20:unable to get local issuer certificate
verify return:0
---
Certificate chain
 0 s:/CN=Step Online CA
   i:/CN=myownlittleca Intermediate CA
-----BEGIN CERTIFICATE-----
MIIB2DCCAX+gAwIBAgIRAP9nSxkc+5TzPw9R3mUwtfIwCgYIKoZIzj0EAwIwKDEm
MCQGA1UEAxMdbXlvd25saXR0bGVjYSBJbnRlcm1lZGlhdGUgQ0EwHhcNMjAxMTI2
MTAzNzQzWhcNMjAxMTI3MTAzODQzWjAZMRcwFQYDVQQDEw5TdGVwIE9ubGluZSBD
QTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABO7yVcVv1KLZ7e1QntLaSqPuFtGf
8aDuvYuoeP3KAsmcSGYbuukdIcXdL5VhRn10lXOIwGDnAxv+EzirHa94X46jgZgw
gZUwDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcD
AjAdBgNVHQ4EFgQUtiU+/65AZJE7CAgRDK4QK/F6YgowHwYDVR0jBBgwFoAUQyq5
oSctWu9k7KSnAz2P5rtKz9UwJAYDVR0RBB0wG4ITYWNtZS50aW5raXZpdHkuaG9t
ZYcEwKgcDzAKBggqhkjOPQQDAgNHADBEAiABBBGCV2x2zKm/6ja3inn9/u8QKx+G
BTuCkGcj1XZzEwIgTO+r7KTh2nuaN+uQsJOb51ASqLD2GDfH47CKBfd03Wo=
-----END CERTIFICATE-----
 1 s:/CN=myownlittleca Intermediate CA
   i:/CN=myownlittleca Root CA
-----BEGIN CERTIFICATE-----
MIIBrTCCAVOgAwIBAgIRAKn1KuHAPtPlKVmfI0G8NQMwCgYIKoZIzj0EAwIwIDEe
MBwGA1UEAxMVbXlvd25saXR0bGVjYSBSb290IENBMB4XDTIwMTEyNjEwMzgzMVoX
DTMwMTEyNDEwMzgzMVowKDEmMCQGA1UEAxMdbXlvd25saXR0bGVjYSBJbnRlcm1l
ZGlhdGUgQ0EwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARYANusH97/11XzMIYf
7pgI1LEY8UpWVBiVF4/1m5rsaFg//kvkFklI7FjZ4nR4Ard7mqlrCDc16lseVMKl
mFNPo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBADAdBgNV
HQ4EFgQUQyq5oSctWu9k7KSnAz2P5rtKz9UwHwYDVR0jBBgwFoAUhArGpAX7JUjc
tn/PGaEkJkJ1tOMwCgYIKoZIzj0EAwIDSAAwRQIgbF/kVS7j+TFTZYpIoA3El+ty
rxRsD61qcT/UHEQSNSgCIQDFhRXerzwvQYz4BbpST2NfCdMvJaFVxrU99wTf4eUQ
bA==
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=Step Online CA
issuer=/CN=myownlittleca Intermediate CA
---

...

Next, we could install a server somewhere and use acme.sh or certbot or similar to automatically retrieve SSL certificates. However, at this point we don't want to do this because the auto-generated setup is not exactly what we want (or need).

Custom Setup

As stated above, we do not want to use the auto-generated certificate authorities. We already have our own CA in place that we'd like to use. Also, we will issue a dedicated intermediate CA for our PKI out of band and import it. In addition, we want to have multiple provisioners with different policies as to how long issued certificates are valid.

In this article I will not describe what a Root CA is and how it is created, but just assume that we have already set one up that is ready for import. Still, if you want to learn more about how to set up a CA, please read here.

Importing our own Root CA

What we need to do is import our existing root certificate; the same holds true for the Intermediate CA. We can either put the certificates into the certs folder or have our configuration point to a central location.

In either case we will not need the private key of our Root CA!

In this example we will copy our root certificate into a central location under /etc/ssl and make it readable for everybody via a quick chmod 444 command.
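
The exact commands depend on how the certificate reaches the machine; assuming it was transferred to our home directory first (a hypothetical path), the copy and permission change might look like this:

sudo cp ~/tinkivity.pem /etc/ssl/tinkivity.pem   # source path is an assumption
sudo chmod 444 /etc/ssl/tinkivity.pem            # world-readable, which is fine for a public certificate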

andreas@acme ➜  ~ sudo ls -lah /etc/ssl/
total 45
drwxr-xr-x   2 root  wheel     5B Nov 26 19:26 .
drwxr-xr-x  27 root  wheel   109B Nov 26 11:35 ..
lrwxr-xr-x   1 root  wheel    43B Oct 17 03:09 cert.pem -> ../../usr/local/share/certs/ca-root-nss.crt
-rw-r--r--   1 root  wheel    11K Jun 12 20:29 openssl.cnf
-r--r--r--   1 root  wheel   2.2K Nov 26 19:26 tinkivity.pem

For the next step, we need the SHA-256 fingerprint of our certificate. Obviously the fingerprint below is redacted and you will not get any of the xx values as a reply on your command line.

andreas@acme ➜  ~ openssl x509 -fingerprint -sha256 -noout -in /etc/ssl/tinkivity.pem                       
SHA256 Fingerprint=00:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:99

We need to update the /usr/local/etc/step/ca/config/defaults.json configuration file to reflect the fingerprint of our new root certificate. Please make sure to remove all colons (":") from the fingerprint in your defaults.json config. Again, the fingerprint below is redacted; instead of the 30 pairs of xx you need to put the middle 30 bytes of your actual fingerprint. Also, make sure to update the location of the root certificate accordingly.

{
   "ca-url": "https://acme.local:8443",
   "ca-config": "/usr/local/etc/step/ca/config/ca.json",
   "fingerprint": "01xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx99",
   "root": "/etc/ssl/tinkivity.pem"
}
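
To produce that colon-free value directly instead of editing it by hand, a small pipeline over standard tools should do; lower-casing mirrors the format of the auto-generated fingerprint:

openssl x509 -fingerprint -sha256 -noout -in /etc/ssl/tinkivity.pem | cut -d= -f2 | tr -d ':' | tr '[:upper:]' '[:lower:]'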

The other configuration we need to update is /usr/local/etc/step/ca/config/ca.json as it also needs to know where our root certificate lives. The attribute for the root certificate location is most likely the first attribute at the top of the json configuration.

{
   "root": "/etc/ssl/tinkivity.pem",
   "federatedRoots": [],
...

Importing our Intermediate CA

Again, we have created our Intermediate CA out of band and only import it into our ACME server environment in this step. As our Intermediate CA will actually be used to issue certificates, we need both the x509 certificate and the RSA private key of the Intermediate CA. We will delete any existing certificates and keys from the certs and secrets folders and import our Intermediate CA instead.

andreas@acme ➜  ~ sudo tree /usr/local/etc/step            
/usr/local/etc/step
├── ca
│   ├── certs
│   │   └── intermediate.cert.pem
│   ├── config
│   │   ├── ca.json
│   │   └── defaults.json
│   ├── db
│   │   ├── 000000.vlog
│   │   ├── LOCK
│   │   └── MANIFEST
│   ├── secrets
│   │   └── intermediate.key.pem
│   └── templates
└── password.txt

6 directories, 8 files

The x509 certificate (the public part) should only be readable, but access to it does not need to be restricted. Thus, it is ok if everybody can read the file.

andreas@acme ➜  ~ sudo ls -lah /usr/local/etc/step/ca/certs/intermediate.cert.pem
-r--r--r--  1 step  step   2.2K Nov 28 14:39 /usr/local/etc/step/ca/certs/intermediate.cert.pem

The RSA private key, on the other hand, should be restricted. Nobody other than our step ca service user should be allowed to read its contents.

andreas@acme ➜  ~ sudo ls -lah /usr/local/etc/step/ca/secrets/intermediate.key.pem
-r--------  1 step  step   3.2K Nov 28 14:37 /usr/local/etc/step/ca/secrets/intermediate.key.pem
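
If the certificate and key were copied in as root, ownership and mode can be brought in line with the listings shown here; a sketch, assuming the service runs as the step user set up by the package:

sudo chown step:step /usr/local/etc/step/ca/certs/intermediate.cert.pem \
                     /usr/local/etc/step/ca/secrets/intermediate.key.pem
sudo chmod 444 /usr/local/etc/step/ca/certs/intermediate.cert.pem   # public part: readable by everybody
sudo chmod 400 /usr/local/etc/step/ca/secrets/intermediate.key.pem  # private key: readable by the step user only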

Another and even more important line of defense is the passphrase that encrypts the RSA key. Even if somebody came into possession of the RSA key file, it could not be decrypted without the proper passphrase. At the same time, the step ca service user needs to know that passphrase in order to sign new certificates. We have two options for providing the passphrase to the step ca service:

  1. interactive command line prompt upon service start
  2. persistence in a text file

Obviously only the latter option allows unattended service starts (e.g. after a reboot), so we will use that option. The location of the password.txt file is defined in the service script and by default points to the step ca root folder. In any case we must make sure that nobody but the step ca service user can read the contents of that file.

andreas@acme ➜  ~ sudo ls -lah /usr/local/etc/step/password.txt
-rw-------  1 step  step    12B Nov 29 12:56 /usr/local/etc/step/password.txt
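
The same treatment applies to the password file if it was recreated by hand:

sudo chown step:step /usr/local/etc/step/password.txt
sudo chmod 600 /usr/local/etc/step/password.txt   # matches the listing above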

The last step for setup of our Intermediate CA is to configure its location in the /usr/local/etc/step/ca/config/ca.json configuration.

{
   "root": "/etc/ssl/tinkivity.pem",
   "federatedRoots": [],
   "crt": "/usr/local/etc/step/ca/certs/intermediate.cert.pem",
   "key": "/usr/local/etc/step/ca/secrets/intermediate.key.pem",
...

Deleting existing provisioners

When running the automatically guided setup in the beginning, we also created a provisioner named firstprovisioner, which we actually don't want to keep. There is a step command that allows us to manage provisioners, including listing them.

andreas@acme ➜  ~ sudo step ca provisioner list --ca-url https://acme.local:8443 --root /etc/ssl/tinkivity.pem
[
   {
      "type": "JWK",
      "name": "firstprovisioner",
      "key": {
         "use": "sig",
         "kty": "EC",
         "kid": "TRmwwSxlqIBSPDj6K5pAYrbcbCbkKPIWvPwDhuuqeWI",
         "crv": "P-256",
         "alg": "ES256",
         "x": "EgXHqunMX0k3GbPkbCcrCN44wKcYgHaIKx6TZvGwAXk",
         "y": "iGb2ToEVDC6yBgRxZoNa1MG1RAZUDrFokvim8Ugj9fg"
      },
      "encryptedKey": "eyJhbGciOiJQQkVTMi1IUzI1NitBMTI4S1ciLCJjdHkiOiJqd2sranNvbiIsImVuYyI6IkEyNTZHQ00iLCJwMmMiOjEwMDAwMCwicDJzIjoiWTdTU2kxaTJJRGpMQkY2cF9lNkFrQSJ9.6BhnTrakC_yUC1AMwIJ0pVW_spZode1Np8mba3ONk9NwCTErGb8upQ.tBP0pRs8ha6lijLz.pKHgHq6VChULDNvNWvHBYQMBeeGEJSOrVDU-9gA-soETOf4eLqjqy8OATp3pP3_TQ6y00E2ZziEnfJk58f3cbLT1lldas1yP0XYkc3gHitEwTfbFxppyp9ptjRzIPGby5ucVOzj0j9O8QiIetOc6Cri7rq9bpuTMyazAQlKJ84x1CeZz_hqBf3vxwHZHYODPaxG3u2nsWmjhFA8uJXPSHyic_sgZBi-sc5JGPVa2_4rG8EzM1yx2l0mUZLdVprAFZ0ciWvKRdqObXcbO_DiLn3p6aECFnLfEnvi0T8deoHhU0t5F28T4GNV_E9aq9h46A0O4rcLrXi9kgqs2g_k.eItQ0VITv702y3bFFkNnFQ"
   }
]

More or less the command only dumps out the provisioner section of the configuration at /usr/local/etc/step/ca/config/ca.json, which doesn't seem very helpful for listing existing provisioners. However, the command becomes more helpful when modifying provisioners. First we will delete our existing provisioner, which we can do with the step ca provisioner command.

andreas@acme ➜  ~ sudo step ca provisioner remove firstprovisioner --ca-config /usr/local/etc/step/ca/config/ca.json
Success! Your `step-ca` config has been updated. To pick up the new configuration SIGHUP (kill -1 <pid>) or restart the step-ca process.

As an alternative to the above command we can directly edit the configuration file at /usr/local/etc/step/ca/config/ca.json and replace the provisioners section with a null value.

Below is the complete /usr/local/etc/step/ca/config/ca.json file matching our current progress.

{
        "root": "/etc/ssl/tinkivity.pem",
        "federatedRoots": [],
        "crt": "/usr/local/etc/step/ca/certs/intermediate.cert.pem",
        "key": "/usr/local/etc/step/ca/secrets/intermediate.key.pem",
        "address": ":8443",
        "dnsNames": [
                "acme.local",
                "192.168.1.2"
        ],
        "logger": {
                "format": "text"
        },
        "db": {
                "type": "badger",
                "dataSource": "/usr/local/etc/step/ca/db",
                "badgerFileLoadingMode": ""
        },
        "authority": {
                "provisioners": null
        },
        "tls": {
                "cipherSuites": [
                        "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
                        "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
                ],
                "minVersion": 1.2,
                "maxVersion": 1.2,
                "renegotiation": false
        }
}

Configuring a separate log facility

Let's configure a separate log facility that logs to /var/log/step.log so that we have an easier time following the log activity (instead of filtering /var/log/messages all the time). We start by inserting the following two lines into the /etc/syslog.conf configuration.

...
# !devd
# *.>=notice                                    /var/log/devd.log
!step_ca
*.*                                             /var/log/step.log
!ppp
*.*                                             /var/log/ppp.log
!*
include                                         /etc/syslog.d
include                                         /usr/local/etc/syslog.d

Next we create an empty log file at /var/log/step.log and make sure it has the same ownership and permissions as the other log files under /var/log.

andreas@acme ➜  ~ sudo ls -lah /var/log/messages
-rw-r--r--  1 root  wheel    14K Nov 29 14:35 /var/log/messages
andreas@acme ➜  ~ sudo touch /var/log/step.log
andreas@acme ➜  ~ sudo ls -lah /var/log/step.log
-rw-r--r--  1 root  wheel     0B Nov 29 15:46 /var/log/step.log

Now, we restart the syslog daemon so that the new configuration is applied.

andreas@acme ➜  ~ sudo service syslogd restart
Stopping syslogd.
Waiting for PIDS: 38133.
Starting syslogd.

Finally, we can (re)start the step ca service and make sure the newly configured log file is being used. Assuming we have not made any errors in our configuration so far, our step ca should start without errors and already be responsive on port 8443.

andreas@acme ➜  ~ sudo service step-ca restart
Stopping step_ca.
Starting step_ca.
step_ca is running as pid 39809.
andreas@acme ➜  ~ cat /var/log/step.log 
Nov 29 15:48:34 acme step_ca[39809]: 2020/11/29 15:48:34 Serving HTTPS on :8443 ...

Running a quick smoke test

We could now run openssl's s_client command again (see above) from a remote host or simply point a web browser at https://acme.local:8443. In both cases we should receive a reply that includes a correctly set up certificate chain.

andreas@acme ➜  ~ cat /var/log/step.log
Nov 29 15:48:34 acme step_ca[39809]: 2020/11/29 15:48:34 Serving HTTPS on :8443 ...
Nov 29 15:53:13 acme step_ca[39809]: time="2020-11-29T15:53:13+01:00" level=warning duration="38.366µs" duration-ns=38366 fields.time="2020-11-29T15:53:13+01:00" method=GET name=ca path=/ protocol=HTTP/2.0 referer= remote-address=192.168.1.205 request-id=bv1rbmajnji9n0kqlm10 size=19 status=404 user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.1 Safari/605.1.15" user-id=

Looking at /var/log/step.log again shows our step ca being responsive. Although the client only receives a 404 error in return, the metadata around that HTTPS request is proof that our setup works. It becomes even clearer when looking at the reply from openssl's s_client command that we can run from our local laptop.

andreas@testclient ➜  ~ openssl s_client -connect acme.local:8443 --quiet      
depth=1 C = DE, ST = Saxony, O = Tinkivity, OU = Tinkivity Intermediate Certificate Authority, CN = Smallstep Intermediate CA, emailAddress = xxx@xxx.com
verify return:1
depth=0 CN = Step Online CA
verify return:1

Of course, this is only a somewhat synthetic test, but it shows us that we're well on track.

Adding a new ACME provisioner

This is a rather easy step because only two commands are involved. The first command adds a new provisioner of type ACME and the second command restarts the service.

andreas@acme ➜  ~ sudo step ca provisioner add acme-smallstep --type acme --ca-config /usr/local/etc/step/ca/config/ca.json
Success! Your `step-ca` config has been updated. To pick up the new configuration SIGHUP (kill -1 <pid>) or restart the step-ca process.
andreas@acme ➜  ~ sudo service step-ca restart
Stopping step_ca.
Starting step_ca.
step_ca is running as pid 41017.

Looking at the provisioners section in /usr/local/etc/step/ca/config/ca.json we can see that not much has actually been added.

...
                "provisioners": [
                        {
                                "type": "ACME",
                                "name": "acme-smallstep"
                        }
                ]
...

Such a default configuration would start handing out certificates that adhere to smallstep's default settings. One setting that we want to change is the validity of the certificates being issued. We like certificates to be valid for as short a period as possible while not adding too much stress to the infrastructure. We will thus settle on certificates being valid for 24 hours.

...
                "provisioners": [
                        {
                                "type": "ACME",
                                "name": "acme-smallstep",
                                "claims": {
                                        "maxTLSCertDuration": "24h0m0s",
                                        "defaultTLSCertDuration": "24h0m0s"
                                }
                        }
                ]
...
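
Just as an orientation for later: after another service restart, a client such as certbot would talk to this provisioner via its ACME directory URL. The sketch below is only that, a sketch: it assumes smallstep's usual /acme/<provisioner>/directory URL scheme, a hypothetical host name to request a certificate for, and that certbot picks up our private root via the REQUESTS_CA_BUNDLE environment variable.

# hypothetical client-side request against our new ACME provisioner
sudo REQUESTS_CA_BUNDLE=/etc/ssl/tinkivity.pem certbot certonly --standalone \
    --server https://acme.local:8443/acme/acme-smallstep/directory \
    -d webserver.acme.local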

ZFS Send & Receive – Part 2

Receiving data from another host

After successfully importing a dataset from a USB disk, we now want to import a dataset from another host via the network. Let's assume you're on the source server and there is a dataset that you would like to send to a remote server. There is a specific snapshot that you would like to send, and after a while you might even want to update the dataset on the remote server with a further (more recent) snapshot. Assuming that we don't control the network and would rather not spill the beans on what we're sending, we will use SSH as the channel.

ZFS dataset on the receiving host (remote)

On the receiving end there is a ZFS pool that we want to send our dataset into. We should make sure that there is enough free space on the receiving pool.

root@nas[~]# zfs list tank       
NAME   USED  AVAIL     REFER  MOUNTPOINT
tank  11.1T  9.70T      170K  /mnt/tank

At this point we need to be aware of the fact that receiving a dataset will overwrite an existing dataset with an identical name (if such a dataset already exists). So let's be really sure and check that no dataset by the name of media exists already.

root@nas[~]# zfs list -rt all tank/media
cannot open 'tank/media': dataset does not exist

That is more or less all we need to check from a ZFS point of view. Of course we need to make sure that the firewall will let us through, but given that we will send the data via SSH and have probably already logged in via SSH, we should be good to go.

O.K. – “just one more thing…” we need to be able to access our remote host via SSH without a password. The authorized_keys file on the remote host should thus contain the sending host’s public key.

root@nas[~]# ls -lah /root/.ssh 
total 5
drwxr-xr-x  2 root  wheel     4B Jul 18 14:33 .
drwxr-xr-x  5 root  wheel    16B Nov 12 22:26 ..
-rw-------  1 root  wheel   805B Jul 13 20:00 authorized_keys
-rw-r--r--  1 root  wheel   179B Jul 18 14:33 known_hosts
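
If the key is not in place yet, it can be appended from the sending host; a minimal sketch, assuming an ed25519 key pair already exists there (adjust the file name to whatever you actually use):

# run on the sending host; the key file name is an assumption
cat ~/.ssh/id_ed25519.pub | ssh root@nas 'cat >> ~/.ssh/authorized_keys'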

ZFS dataset on the sending host (local)

On the sender side there is a ZFS dataset that we would like to send. To be more precise there is a snapshot that belongs to a dataset we want to send.

root@jeojang[~]# zfs list -rt all tank/incoming/media
NAME                                             USED  AVAIL     REFER  MOUNTPOINT
tank/incoming/media                             1.31T  6.21T     1.31T  /mnt/tank/incoming/media
tank/incoming/media@manual_2020-07-04_10-45-00  3.01M      -     1.31T  -

Very similar to sending/receiving a dataset between the local host and an attached USB disk, we use the same command but add SSH into the command pipeline.

As the command will run for a while, it makes sense to use a screen or tmux session to protect the command from breaking when closing your SSH session.
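
For example, with tmux installed, starting a named session and re-attaching to it later could look like this:

tmux new-session -s zfs-send   # start a named session and run the transfer inside it
tmux attach -t zfs-send        # re-attach after reconnecting via SSH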

root@jeojang[~]# zfs send tank/incoming/media@manual_2020-07-04_10-45-00 | pv | ssh root@nas zfs receive tank/media
1.32TiB 4:13:09 [91.3MiB/s] [                                            <=>                                                  ]

While the above command runs, let’s take some time to dissect the command. Left of the pipe we have:

zfs send tank/incoming/media@manual_2020-07-04_10-45-00

What it means is that we are sending the snapshot named manual_2020-07-04_10-45-00 of the media dataset, which is located inside the incoming dataset, which in turn sits underneath the pool called tank.

Between the pipes we have the pv command which gives us some progress indication.

Right of the pipe we have:

ssh root@nas zfs receive tank/media

What happens here is that we log in to the host nas using the root user. Because the ssh command can accept arguments that in turn will be executed as a command on the remote host, we append zfs receive tank/media as the command. Basically, whatever is sent from ZFS on our local host through the pipe will be received by ZFS on the other (remote) side. The received dataset will be placed under the tank pool on the remote host and stored as a new dataset by the name of media. Again, if the receiving host already has a media dataset under the tank pool, that dataset will be overwritten by our receive command.
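
Updating the remote copy later with a more recent snapshot works the same way, only as an incremental send (-i) between the snapshot that both sides already have and the new one; the newer snapshot name below is made up for illustration:

# hypothetical follow-up snapshot
zfs send -i tank/incoming/media@manual_2020-07-04_10-45-00 \
    tank/incoming/media@manual_2020-08-01_09-00-00 | pv | ssh root@nas zfs receive tank/media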

Checking the result and cleanup on the receiving host (remote)

After the command has finished, we should see both the dataset and its snapshot in the receiving pool.

root@nas[~]# zfs list -rt all tank/media
NAME                                    USED  AVAIL     REFER  MOUNTPOINT
tank/media                             1.31T  8.60T     1.31T  /mnt/tank/media
tank/media@manual_2020-07-04_10-45-00     0B      -     1.31T  -

If we don't have any further use for the snapshot, we can clean it up via the zfs destroy command. Deleting the one and only snapshot of a dataset will not lead to any data loss. If anything depended on the snapshot (e.g. a clone), ZFS would not allow the snapshot to be deleted and would indicate the situation with an appropriate message.

root@nas[~]# zfs destroy tank/media@manual_2020-07-04_10-45-00

If desired we can check the dataset and its sub-contents recursively again…

root@nas[~]# zfs list -rt all tank/media                      
NAME         USED  AVAIL     REFER  MOUNTPOINT
tank/media  1.31T  8.60T     1.31T  /mnt/tank/media

All done.

ZFS Send & Receive – Part 1

Receiving data from a USB disk

Think of a scenario where you have stored a ZFS dataset on a USB disk for safekeeping and now want to reimport the dataset back to your server. Let's further assume that you don't remember many details from back when you exported the dataset, and all you know is that it had previously been exported to that USB disk you found in your desk drawer.

Determining USB device and ZFS pool details

The first thing you should do is have a look at your USB devices before you connect the disk. We can use the usbconfig, camcontrol, and zpool commands for that. Let's start with the USB configuration.

root@jeojang[~]# usbconfig         
ugen0.1: <Intel EHCI root HUB> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.1: <Intel EHCI root HUB> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.2: <vendor 0x8087 product 0x0024> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.2: <vendor 0x8087 product 0x0024> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.3: <vendor 0x05e3 USB Storage> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)

Now let’s have a look at the list of devices known to the FreeBSD CAM subsystem.

root@jeojang[~]# camcontrol devlist
<ST3000DM001-9YN166 CC4C>          at scbus0 target 0 lun 0 (pass0,ada0)
<ST3000DM001-1CH166 CC27>          at scbus1 target 0 lun 0 (pass1,ada1)
<ST3000DM001-1ER166 CC25>          at scbus2 target 0 lun 0 (pass2,ada2)
<ST3000DM001-1ER166 CC25>          at scbus3 target 0 lun 0 (pass3,ada3)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus4 target 0 lun 0 (pass4,ses0)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 0 (pass5,da0)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 1 (pass6,da1)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 2 (pass7,da2)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 3 (pass8,da3)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 4 (pass9,da4)

Last but not least let’s see which ZFS pools we already have.

root@jeojang[~]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:59 with 0 errors on Tue Jul 14 03:45:59 2020
config:

	NAME        STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  da4p2     ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: resilvered 6.21M in 0 days 00:00:04 with 0 errors on Tue Nov 10 11:11:55 2020
config:

	NAME                                            STATE     READ WRITE CKSUM
	tank                                            ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/0130909f-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0
	    gptid/017b7353-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0
	    gptid/01a6574e-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0
	    gptid/01b57eb4-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0

errors: No known data errors

Plugging in the USB disk

Time to connect the USB disk and to see what happens.

SPOILER ALERT: looking at the dmesg output already tells us a lot, but still, let's go through usbconfig, camcontrol, and zpool step by step.

root@jeojang[~]# usbconfig         
ugen0.1: <Intel EHCI root HUB> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.1: <Intel EHCI root HUB> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.2: <vendor 0x8087 product 0x0024> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen1.2: <vendor 0x8087 product 0x0024> at usbus1, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=SAVE (0mA)
ugen0.3: <vendor 0x05e3 USB Storage> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)
ugen0.4: <Western Digital My Passport 0748> at usbus0, cfg=0 md=HOST spd=HIGH (480Mbps) pwr=ON (500mA)

As can be seen above, the output of usbconfig has grown by one more entry and ugen0.4 shows a Western Digital My Passport USB device introduced to the kernel. Let’s look at the CAM subsystem to find out more about device mapping.

root@jeojang[~]# camcontrol devlist
<ST3000DM001-9YN166 CC4C>          at scbus0 target 0 lun 0 (pass0,ada0)
<ST3000DM001-1CH166 CC27>          at scbus1 target 0 lun 0 (pass1,ada1)
<ST3000DM001-1ER166 CC25>          at scbus2 target 0 lun 0 (pass2,ada2)
<ST3000DM001-1ER166 CC25>          at scbus3 target 0 lun 0 (pass3,ada3)
<AHCI SGPIO Enclosure 2.00 0001>   at scbus4 target 0 lun 0 (pass4,ses0)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 0 (pass5,da0)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 1 (pass6,da1)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 2 (pass7,da2)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 3 (pass8,da3)
<Generic STORAGE DEVICE 9744>      at scbus6 target 0 lun 4 (pass9,da4)
<WD My Passport 0748 1019>         at scbus7 target 0 lun 0 (da5,pass10)
<WD SES Device 1019>               at scbus7 target 0 lun 1 (ses1,pass11)

The USB disk has been attached to the kernel as device node da5, together with a corresponding SCSI Enclosure Services device (ses1).

I am not showing the output of the zpool status command because nothing has changed. This is actually expected because the kernel doesn’t trigger the ZFS file system to start importing pools from newly connected USB mass storage devices on its own. We need to do that ourselves.

ZFS pool discovery and import

Actually, ZFS pool discovery is fairly easy. The zpool import command allows for both discovery and import of ZFS pools.

root@jeojang[~]# zpool import
   pool: wdpool
     id: 6303543710831443128
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	wdpool      ONLINE
	  da5       ONLINE

As can be read in the action field above, we can go ahead and import the pool wdpool, which we do with the following command:

root@jeojang[~]# zpool import wdpool

No output is good news in this case and we can double-check the success by looking at the zpool status command again.

root@jeojang[~]# zpool status                     
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:59 with 0 errors on Tue Jul 14 03:45:59 2020
config:

	NAME        STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  da4p2     ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
  scan: resilvered 6.21M in 0 days 00:00:04 with 0 errors on Tue Nov 10 11:11:55 2020
config:

	NAME                                            STATE     READ WRITE CKSUM
	tank                                            ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/0130909f-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0
	    gptid/017b7353-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0
	    gptid/01a6574e-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0
	    gptid/01b57eb4-ba99-11ea-8702-f46d04d37d65  ONLINE       0     0     0

errors: No known data errors

  pool: wdpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	wdpool      ONLINE       0     0     0
	  da5       ONLINE       0     0     0

errors: No known data errors

Sure enough, our pool is online and appears free of errors. Finally we should have a quick look at the datasets in the freshly imported pool.

root@jeojang[~]# zfs list -rt all wdpool
NAME                                                                                        USED  AVAIL  REFER  MOUNTPOINT
wdpool                                                                                     1.45T   317G    88K  /wdpool
wdpool/andreas                                                                              112G   317G   112G  /wdpool/andreas
wdpool/andreas@manual_2020-07-04_10-11-00                                                  63.6M      -   112G  -
wdpool/jails                                                                               16.9G   317G   288K  /wdpool/jails
wdpool/jails@manual_2020-07-04_12:58:00                                                        0      -   288K  -
wdpool/jails/.warden-template-stable-11                                                    3.02G   317G  3.00G  /bigpool/jailset/.warden-template-stable-11
wdpool/jails/.warden-template-stable-11@clean                                              13.5M      -  3.00G  -
wdpool/jails/.warden-template-stable-11@manual_2020-07-04_12:58:00                             0      -  3.00G  -
wdpool/jails/.warden-template-standard-11.0-x64                                            3.00G   317G  3.00G  /bigpool/jailset/.warden-template-standard-11.0-x64
wdpool/jails/.warden-template-standard-11.0-x64@clean                                       104K      -  3.00G  -
wdpool/jails/.warden-template-standard-11.0-x64@manual_2020-07-04_12:58:00                     0      -  3.00G  -
wdpool/jails/.warden-template-standard-11.0-x64-20180406194538                             3.00G   317G  3.00G  /bigpool/jailset/.warden-template-standard-11.0-x64-20180406194538
wdpool/jails/.warden-template-standard-11.0-x64-20180406194538@clean                        104K      -  3.00G  -
wdpool/jails/.warden-template-standard-11.0-x64-20180406194538@manual_2020-07-04_12:58:00      0      -  3.00G  -
wdpool/jails/.warden-template-standard-11.0-x64-20190107155553                             3.00G   317G  3.00G  /bigpool/jailset/.warden-template-standard-11.0-x64-20190107155553
wdpool/jails/.warden-template-standard-11.0-x64-20190107155553@clean                        104K      -  3.00G  -
wdpool/jails/.warden-template-standard-11.0-x64-20190107155553@manual_2020-07-04_12:58:00      0      -  3.00G  -
wdpool/jails/ca                                                                            1.30G   317G  4.19G  /wdpool/jails/ca
wdpool/jails/ca@manual_2020-07-04_12:58:00                                                     0      -  4.19G  -
wdpool/jails/ldap                                                                          1.55G   317G  4.22G  /wdpool/jails/ldap
wdpool/jails/ldap@manual_2020-07-04_12:58:00                                                176K      -  4.22G  -
wdpool/jails/wiki                                                                          2.03G   317G  4.66G  /wdpool/jails/wiki
wdpool/jails/wiki@manual_2020-07-04_12:58:00                                                200K      -  4.66G  -
wdpool/media                                                                               1.32T   317G  1.32T  /wdpool/media
wdpool/media@manual_2020-07-04_10-45-00                                                     104K      -  1.32T  -
wdpool/rsynch                                                                               260M   317G   260M  /wdpool/rsynch
wdpool/rsynch@manual_2020-07-04_12-52-00                                                       0      -   260M  -

At this point we could already access the data via the mount points displayed in the rightmost column (beware of line breaks in the text box above!). However, what we want is to receive the complete dataset, which allows us to receive snapshots in their entirety or even incrementally.

ZFS Receive

We use a piped communication with zfs send on one side and zfs receive on the other side. Because we want to see progress, we pipe everything through the pv command in the middle.

ATTENTION: depending on the size of the dataset the command will run for a long time (as in hours) and you should execute it from a screen or tmux session.

root@jeojang[~]# zfs send wdpool/media@manual_2020-07-04_10-45-00 | pv | zfs receive tank/incoming/media
 438MiB 0:00:15 [35.7MiB/s] [                                                            <=>            ]

For the next hours you can keep an eye on the progress via zfs list or zpool iostat.

root@jeojang[~]# zpool iostat tank 10
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         196G  10.7T     10     37  85.1K   606K
tank         196G  10.7T      0    332      0  35.9M
tank         197G  10.7T      0    331      0  35.3M
tank         197G  10.7T      0    333  5.20K  35.8M
tank         198G  10.7T     16    358   146K  36.3M
tank         198G  10.7T     23    359   143K  34.8M
tank         199G  10.7T     31    377   178K  35.5M

ZFS pool export and USB disk ejection

After the zfs receive command has finished and everything has been received without errors, you should export the ZFS pool using the zpool export command. This makes sure that any mounted file systems are unmounted before continuing.

root@jeojang[~]# zpool export wdpool

As far as the zpool export command is concerned, no news is good news; if there is no output from the command you can assume that no errors have occurred. To double-check, you can issue a zpool status command to see for yourself that the pool is gone.
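
A quick check might look like this:

zpool status wdpool   # should now fail, since the pool is no longer imported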

Ejecting the USB disk can be done using the camcontrol eject command. Make sure you eject the correct device as very bad things can happen if you eject the wrong device.

root@jeojang[~]# camcontrol eject /dev/da5
Unit stopped successfully, Media ejected