Thoughts on Security

Key Takeaways

  • SSH certificates can be used with the Apple T2 chip on macOS as an alternative to external smart cards, authenticated with a fingerprint per session.
  • The hardware-based Apple T2 chip serves as an extra security layer by creating private keys in the secure enclave.
  • The CA can be stored on an external smartcard and only sign access for a limited period, which limits exposure.

Introduction

Over the past few days I have been going down a deep, deep rabbit hole of SSH proxy jumping and SSH certificates combined with smartcards.

After playing around with smart cards for SSH, I realized that external smart cards such as the Yubikey or Nitrokey are not the only lane to go down.

New Apple devices come with a built-in security chip called the T2. This chip hosts what Apple has named the Secure Enclave [1]. In the Secure Enclave you can store secret keys.

The Secure Enclave will not be as secure a solution as external smart cards, but it strikes a better balance with usability.

The T2 is permanently bound to the hardware of a single host, so access needs to be signed on a per-host basis. As such, I would say the T2 and external smart cards complement each other depending on the situation.

Always having the key available will bring two additional vulnerabilities:

  • If the host is compromised, the key will be logically available to the attacker
  • Separating equipment and key is not possible, e.g. in a travel situation

With a central, automated pubkey directory tied to an identity, the T2 can be even more useful in an enterprise setup.
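
As a sketch of how such a directory could be wired in on the server side, sshd can fetch public keys from an external source with AuthorizedKeysCommand. The lookup script and its path below are hypothetical placeholders:

# /etc/ssh/sshd_config
AuthorizedKeysCommand /usr/local/bin/lookup-pubkeys %u
AuthorizedKeysCommandUser nobody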

Setting up a Private Key in Secure Enclave

While fiddling around I found Secretive on Github [2].

The short and easy setup is:

$ brew cask install secretive
$ echo "export SSH_AUTH_SOCK=/Users/[USERNAME]/Library/Containers/com.maxgoedjen.Secretive.SecretAgent/Data/socket.ssh" >> ~/.zshrc
$ source ~/.zshrc

A keypair can now be generated in the Secure Enclave by opening the Secretive app and pressing + (perhaps there is some way to use ssh-keygen as well?).

The public key of the curve generated on-chip is available in a container directory on disk. Check out the Public Key Path in Secretive to find where.
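
Since Secretive exposes the key through a standard SSH agent socket, the public key can also be listed from the agent itself. A quick check, assuming SSH_AUTH_SOCK is exported as above (the key comment shown is illustrative):

$ ssh-add -L
> ecdsa-sha2-nistp256 AAAA... my-enclave-key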

Using the trick we found in our recent venture into smart cards, we can use PKCS#11 for signing the key without compromising security [3]. In this case I use a Nitrokey:

$ brew cask install opensc
$ PKCS11_MODULE_PATH=/usr/local/lib/opensc-pkcs11.so
$ ssh-keygen -D $PKCS11_MODULE_PATH -e > ca.pub
$ ssh-keygen -D $PKCS11_MODULE_PATH -s ca.pub -I example -n zone-web -V +1h -z 1 id_ecdsa.pub
> Enter PIN for 'OpenPGP card (User PIN)': *****
> Signed user key id_ecdsa-cert.pub: id "example" serial 1 for zone-web valid from 2020-10-14T20:26:00 to 2020-10-14T21:27:51

$ cp id_ecdsa-cert.pub ~/.ssh/

If you now try to ssh into a server that trusts the given certificate authority, as shown in the SSH-CA post [3], access should be granted with a fingerprint.
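
Should ssh not pick up the certificate by itself, it can be pointed at it explicitly. Since the private key never exists as a file on disk, a CertificateFile directive pairs the certificate with the key held by the Secretive agent. A sketch for ~/.ssh/config, with an example host name:

Host server.example.com
HostName server.example.com
CertificateFile ~/.ssh/id_ecdsa-cert.pub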

A Word of Caution

The T2 has had some vulnerabilities disclosed recently. Make sure to include these in your risk assessment before relying on it. If you won't go down the smart card route, it is still better than storing the key on disk.

[1] https://support.apple.com/guide/security/secure-enclave-overview-sec59b0b31ff/web
[2] https://github.com/maxgoedjen/secretive
[3] https://secdiary.com/2020-10-13-ssh-ca-proxyjump.html

Key Takeaways

  • SSH has a key-signing concept that, in combination with a smartcard, provides a lean, off-disk process
  • An SSH-CA makes it possible to manage access without a central point of failure
  • An SSH jump host is an easy way to tunnel sessions end-to-end encrypted, while still maintaining visibility and control through a central point

Introduction

This post is an all-in-one capture of my recent discoveries with SSH. It is an introduction for a technical audience.

It turns out that SSH is ready for a zero trust and microsegmentation approach, which is important for management of servers. Everything described in this post is available as open source software, but some parts require a smartcard or two, such as a Yubikey (or a Nitrokey if you prefer open source; I describe both).

I also go into detail on how to configure the CA key without letting the key touch the computer, which is an important principle.

The end result should be an architecture providing a better overview of the infrastructure and a second logon factor independent of phones and OATH.

SSH-CA

My exploration started when I read a 2016 article by Facebook Engineering [1]. Surprised, but concerned about the configuration overhead and reliability, I set out to test the SSH-CA concept. Two days later all my servers were on a new architecture.

SSH-CA works as follows:

                                           [ User generates key on Yubikey ]
                                                            |
                                                            |
                                                            v
    [ ssh-keygen generates CA key ] --------> [ signs pubkey of Yubikey ]
                    |                           - for a set of security zones
                    |                           - for users
                    |                                       |
                    |                                       |
                    |                                       v
                    v                         pubkey cert is distributed to user
    [ CA cert and zones pushed to servers ]     - id_rsa-cert.pub
      - auth_principals/root (root-everywhere)
      - auth_principals/web (zone-web)

The commands required in a nutshell:

# on client
$ ssh-keygen -t rsa

# on server
$ ssh-keygen -C CA -f ca
$ ssh-keygen -s ca -I <id-for-logs> -n zone-web -V +1w -z 1 id_rsa.pub

# on client
$ cp id_rsa-cert.pub ~/.ssh/

Please refer to the next section for best-practice storage of your private key.

On the SSH server, add the following to the SSHD config:

TrustedUserCAKeys /etc/ssh/ca.pub
AuthorizedPrincipalsFile /etc/ssh/auth_principals/%u

What was conceptually new for me was principals and authorization files per server. This is how it works:

  1. Add a security zone, like zone-web, during certificate signing – “ssh-keygen … -n zone-web …”. The local username does not matter
  2. Add a file per user on the SSH server, where zone-web is added where applicable – e.g. “/etc/ssh/auth_principals/some-user” contains “zone-web” (see the sketch after this list)
  3. Log in with the same user as given in the zone file – “ssh some-user@server”
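
To make step 2 concrete, the zone files on the server could be created along these lines (example usernames, matching the zones from the diagram above):

# on server
$ echo "root-everywhere" > /etc/ssh/auth_principals/root
$ echo "zone-web" > /etc/ssh/auth_principals/some-user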

This is the same as applying a role instead of a name in the authorization system, while something that identifies the user is added to the certificate and logged when used.

This leaves us with a far better authorization and authentication scheme than the authorized_keys file everyone uses. Read on to get the details for generating the CA key securely.

Keeping Private Keys Off-disk

An important principle I have about private keys is to rather cross-sign and encrypt two keys than store one on disk. This was challenged by the SSH-CA design. Luckily I found an article describing the details of PKCS#11 with ssh-keygen [2]:

“If you're using pkcs11 tokens to hold your ssh key, you may need to run ssh-keygen -D $PKCS11_MODULE_PATH > ~/.ssh/id_rsa.pub so that you have a public key to sign. If your CA private key is being held in a pkcs11 token, you can use the -D parameter; in this case the -s parameter has to point to the public key of the CA.”

Yubikeys on macOS 11 (Big Sur) require the yubico-piv-tool to provide PKCS#11 drivers. It can be installed using Homebrew:

$ brew install yubico-piv-tool
$ PKCS11_MODULE_PATH=/usr/local/lib/libykcs11.dylib

Similarly, the procedure for the Nitrokey is:

$ brew cask install opensc
$ PKCS11_MODULE_PATH=/usr/local/lib/opensc-pkcs11.so

Generating a key on-card for Yubikey:

$ yubico-piv-tool -s 9a -a generate -o public.pem

For the Nitrokey:

$ pkcs11-tool -l --login-type so --keypairgen --key-type RSA:2048

Using the exported CA pubkey and the private key on-card, a certificate may now be signed and distributed to the user.

$ ssh-keygen -D $PKCS11_MODULE_PATH -e > ca.pub

$ ssh-keygen -D $PKCS11_MODULE_PATH -s ca.pub -I example -n zone-web -V +1w -z 1 id_rsa.pub
> Enter PIN for 'OpenPGP card (User PIN)':
> Signed user key .ssh/id_rsa-cert.pub: id "example" serial 1 for zone-web valid from 2020-10-13T15:09:00 to 2020-10-20T15:10:40

The same concept goes for a user smart card, except that it is plug and play as long as you have the gpg-agent running. When id_rsa-cert.pub (the signed certificate of e.g. a Yubikey) is located in ~/.ssh, SSH will find the corresponding private key automatically. The workflow will be something along these lines:

    [ User smartcard ] -----------> [ CA smartcard ]
             ^          id_rsa.pub          |
             |                              | signs
             |------------------------------|
               sends back id_rsa-cert.pub
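
For completeness, the gpg-agent part usually only requires enabling SSH support so the card key shows up in the agent. A sketch, assuming GnuPG is already installed and default paths are used:

$ echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
$ gpgconf --kill gpg-agent
$ gpgconf --launch gpg-agent
$ export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
$ ssh-add -L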

A Simple Bastion Host Setup

The other thing I wanted to mention was the -J option of ssh, ProxyJump.

ProxyJump allows a user to tunnel the session end-to-end encrypted through a central bastion host, confidentially and without risk of a man-in-the-middle (MitM) attack.

Having end-to-end encryption through an SSH proxy may seem counter-intuitive, since the proxy cannot inspect the content. I still believe it is the better option:

  • It is a usability compromise, but terminating the encryption at the bastion would be a security compromise in case the bastion host itself is compromised.
  • Network access and application authentication (and even authorization) goes through a hardened point.
  • In addition, the endpoint should log what happens on the server to a central syslog server.
  • A bastion host should always be positioned in front of the server segments, not on the infrastructure perimeter.

A simple setup looks like the following:

[ client ] ---> [ bastion host ] ---> [ server ]

Practically speaking, a standalone command looks as follows:

$ ssh -J jump.example.com dest.example.com

An equivalent .ssh/config will look like:

Host jump.example.com
HostName jump.example.com
User sshjump
Port 22

Host dest.example.com
HostName dest.example.com
ProxyJump jump.example.com
User some-user
Port 22

With the above configuration the user can compress the ProxyJump SSH-command to “ssh dest.example.com”.
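
Since the bastion host only needs to forward connections, the sshjump account can likely be locked down further. A sketch of an sshd_config fragment on the jump host, assuming the account name from the configuration above (the nologin path varies between distributions):

Match User sshjump
AllowTcpForwarding yes
PermitTTY no
X11Forwarding no
ForceCommand /usr/sbin/nologin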

Further Work

The basic design shown above has one requirement which is probably not acceptable in larger companies: someone needs to manually sign and rotate certificates. There are some options mentioned in open sources, where the norm is to avoid storing certificates on clients and instead use an authorization gateway with SSO. This does, however, introduce a weakness in the chain.

I am also interested in using SSH certificates on iOS, but that has turned out to be unsupported in all apps I have tested so far. It is however on the roadmap of Termius, hopefully in the near future. Follow updates on this subject in my Honk thread about it [4].

For a smaller infrastructure like mine, I have found the manual approach to be sufficient so far.
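
To keep the manual routine consistent, the signing itself can be wrapped in a small helper. A hypothetical sketch reusing the Nitrokey PKCS#11 setup from earlier, with a one-day validity and the Unix timestamp as serial:

#!/bin/sh
# sign-user-key.sh <identity> <zone> <pubkey> -- hypothetical helper
PKCS11_MODULE_PATH=/usr/local/lib/opensc-pkcs11.so
ssh-keygen -D "$PKCS11_MODULE_PATH" -s ca.pub -I "$1" -n "$2" -V +1d -z "$(date +%s)" "$3"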

[1] Scalable and secure access with SSH: https://engineering.fb.com/security/scalable-and-secure-access-with-ssh/
[2] Using a CA with SSH: https://www.lorier.net/docs/ssh-ca.html
[3] Using PIV for SSH through PKCS #11: https://developers.yubico.com/PIV/Guides/SSH_with_PIV_and_PKCS11.html
[4] https://cybsec.network/u/tommy/h/q1g4YC31q45CT4SPK4

This is a post about workflow and what I think about distractions in modern computing.

As I write this piece it is on a small screen, the background is solarized light and I see plain text. There are no push notifications and no tempting elements to click. My mind is generally all about the issue at hand. I have a top-notch Macbook Pro right beside me, but still I am here on this Lenovo x395, which represents something else.

While I in some ways envy the worry-free computing our growing generation encounters, I also feel compassion for the conventional computing they will never be exposed to in the way that I have been. Just as my generation never experienced the transition from transistors to modern CPUs in the 70s and 80s, and can with few exceptions only partly understand it, most of the upcoming generation will never experience “slow computing”. The reason I think this is in some ways a shame is that slow computing leaves space to think: about why you interact with the system, what to do, and how and when to do it.

Combined with people indoctrinated on Microsoft Office, modern computing has left workers as robots whose primary mission is to answer emails and manage calendar appointments. In some ways the art of communicating and interacting with each other without a digital reference has been lost. The freedom to define your digital workspace should not and cannot be different from the freedom to organise your physical space.

Modern computing comes with bells and whistles, and I think that companies seeking cost-efficient and standardised computing environments are to blame. I think pragmatism is to blame for those environments. I also think “the cloud” is an attempt at creating a walled garden. Bureaucracy is also to blame: an attempt at cost efficiency where the end user is underestimated and the systems are dumbed down. We have built a digital world that imprints the use of products, generic in themselves, with little to no options for automation beyond what the author intended, all of it depending on a few global companies.

When I open my laptop lid, I log in and see a terminal, or a crashed screen as some like to describe it. It is like a blank canvas with no output, just waiting for a command about what I would like to do next. At this point I might navigate to a blog directory and open a document with my text editor of choice: emacs [1]. When done writing this post I will add it to git, my text versioning system. After this I do whatever I please with the text file. I might push it to my central blog repository, where a static HTML file is generated and published, or I may pipe it to some other program. This is the Unix philosophy [2].

After writing this post I may choose to check my mailbox for new messages that I am expecting. My electronic mail system runs decentralised and works for me, and me only. The reason is that I like to control my own data. I do not want my letters read by others, neither by prying commercial nor government eyes. For this I use neomutt, notmuch and muchsync.

Occasionally I like to communicate remotely with others, and for this purpose I use Riot, based on the distributed Matrix protocol.

Even though much is best formulated in words, The Multitasking Mind by Salvucci and Taatgen, and the work of Edward Tufte, taught me about the power of visualisation and automation [3,4]. This is also why I program in Python and Nim, and sometimes design illustrations to get my message across.

I may have been an avid macOS user once, but in the future I will seek to come as close as possible to using my computing platform as a tool, rather than becoming a tool for those who seek to profit off others in cyberspace. I know it won't be easy, because computing also has a social component, and that is the real challenge.

I will leave you with a link to Make Time [5], which has in some ways helped my journey.

[1]

[2] https://en.wikipedia.org/wiki/Unix_philosophy

[3] The Multitasking Mind, Salvucci and Taatgen, 2010, ISBN: 9780199733569

[4] https://www.edwardtufte.com/tufte/

[5] https://maketime.blog/articles/