I’ve been playing with LetsEncrypt for a bit now, and on the whole, I like it. However, it’s still early days, and I wanted to share my experiences.

For the examples I use, I’m looking at my own set-up (or, more precisely, NO2ID’s, where I’m Tech Director) — I find real examples are useful.

Debian & Python

I started using wheezy but took this as an opportunity to upgrade to jessie; I wanted Apache 2.4, as well as a more modern version of Python.

(You might want to upgrade first; it’ll make your life easier.)

Python versions are quite important, too: if you start on an older version, it’s probably advisable to zap (rm) your virtualenv (/root/.local/…) after upgrading — it saves a lot of time and frustration.

You don’t want to run into the urllib3/OpenSSL issues that are loosely documented, but not always obvious; use Python 2.7.9 (or more recent) and save yourself some ballache.
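A quick pre-flight check along these lines can save a wasted run; the py_ok helper and its version cut-offs are my own invention, not part of the letsencrypt tooling:

```shell
# Sketch: is this Python new enough (2.7.9+, or any 3.x) to dodge the
# urllib3/OpenSSL mess? py_ok is a made-up helper, not letsencrypt's.
py_ok() {
    case "$1" in
        3.*) return 0 ;;                   # any Python 3
        2.7.9|2.7.[1-9][0-9]) return 0 ;;  # 2.7.9 through 2.7.99
        *) return 1 ;;                     # too old: upgrade first
    esac
}

if py_ok "$(python -V 2>&1 | awk '{print $2}')"; then
    echo "Python looks fine"
else
    echo "Upgrade Python before fighting letsencrypt-auto"
fi
```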

A REQUIREMENTS file might be useful (and even better, say, a REQUIREMENTS.{debian,centos,freebsd,macos,ubuntu} file).
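For Debian, say, such a file might just list what letsencrypt-auto’s bootstrap wants to install; the list below is my best recollection of the usual suspects, so verify it against what the bootstrap actually pulls in:

```
# REQUIREMENTS.debian (hypothetical; check against letsencrypt-auto's
# own bootstrap before trusting it)
ca-certificates
gcc
python
python-dev
python-virtualenv
libaugeas0
libssl-dev
libffi-dev
dialog
```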

Rate limits

There are rate limits.

However, it seems that the documentation (and the recommended tool) doesn’t deign to mention this. Nor does letsencrypt-auto default to using the staging infra (use letsencrypt-auto --server https://acme-staging.api.letsencrypt.org/directory).

In the real world, we should accept that people will go for the easy option, and that no one reads long documents like terms and conditions. Especially not when one needs to view a PDF and is working entirely in a console.

The rate limits really hit me, and delayed my adoption: I created too many certificates in live, not staging, especially at first, and while I had a configuration that was “broken” (in the sense that letsencrypt couldn’t parse it).

Error messages

Aren’t always helpful or even useful, and aren’t helped by information being scattered around in:

  • the RTD
  • the GitHub README
  • Python module docs (see below)
  • Source code comments (to work out which wretched part’s causing this, so I can dig more)
  • Your letsencrypt log file, even with --verbose passed as an argument
  • IRC
  • A few StackOverflow questions + answers
  • The forum (with the issues that raises: how up to date is it? is it canonical? and, gah, the searching; it’s not Stack Overflow)

— or not existing at all.

This (a lack of documentation, being left to fend for oneself) represents, I think, a real problem: especially if ordinary users are going to use these things, including the sort who screen-grab terminal output rather than just fucking copying and pasting, or even piping to pastebin/gists.

At least there’s not a mailing list as well as the forum, or, worse, a mailing list archive syndicated to seventeen different forums.

Apache

In this case, I still use Apache. At some point I’ll switch over to nginx, which I prefer and find faster, but ho-hum. At the time of writing, nginx support isn’t as fully fledged as Apache’s, at least according to the docs (I have a hatred of forums).

There are some gotchas to be aware of: loosely documented, with atrocious error messages that are quite plainly misleading.

Ports

You may need to bind Apache to a specific IPv4 address on your interface to get things working.

	<IfModule ssl_module>
		Listen 443
	</IfModule>

	<IfModule mod_gnutls.c>
		Listen 443
	</IfModule>

	<IfModule mod_ssl.c>
		Listen 443
	</IfModule>

becomes

	<IfModule ssl_module>
		Listen 93.93.131.141:443
	</IfModule>

	<IfModule mod_gnutls.c>
		Listen 93.93.131.141:443
	</IfModule>

	<IfModule mod_ssl.c>
		Listen 127.0.0.1:443
	</IfModule>

(a plain Listen 443 in ports.conf didn’t help)

Config files and quoting

If you have Rewrites in your configs, make sure they’re quoted.

e.g., apachectl configtest will let you have something like

    RewriteRule ^index\.php$ - [L]
    RewriteRule ^wp-admin$ wp-admin/ [R=301,L]
    RewriteCond %{REQUEST_FILENAME} -f [OR]
    RewriteCond %{REQUEST_FILENAME} -d
    RewriteRule ^ - [L]
    RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
    RewriteRule ^(.*\.php)$ $1 [L]
    RewriteRule . index.php [L]

but to get letsencrypt-auto (or letsencrypt --apache) to run, you’ll need

    RewriteRule "^index\.php$" "-" [L]
    RewriteRule "^wp-admin$" "wp-admin/" [R=301,L]
    RewriteCond "%{REQUEST_FILENAME}" "-f" [OR]
    RewriteCond "%{REQUEST_FILENAME}" "-d"
    RewriteRule "^" "-" [L]
    RewriteRule "^(wp-(content|admin|includes).*)" "$1" [L]
    RewriteRule "^(.*\.php)$" "$1" [L]
    RewriteRule "." "index.php" [L]

(for WordPress — why on earth one would put these in a .htaccess file when one can chuck ‘em in the vhost config…)

The error message here is something along the lines of ‘Apache not found’.

Rewrites

There’s an optional setting to define a rewrite to take insecure visitors to the secure version of the site.

This balks if you already have some, and isn’t particularly good: it leaves things in what I’d say is not an ideal state, especially if you have a lot of Rewrites (and you’ll want to check in your -le-ssl.conf that these now point to https:// not http://).

In an ideal world, something like

<VirtualHost *:80>
	ServerName foo.example.com
	[…]
	Redirect		/foo	/baa
	Redirect		/one	/two
	Redirect		/join	http://www.no2id.net/get-involved/join
	RewritePermanent	/baz	/downloads/baz
</VirtualHost>

becoming

<VirtualHost *:443>
	ServerName foo.example.com
	[…]		
	Redirect		/foo	/baa
	Redirect		/one	/two
	Redirect		/join	https://www.no2id.net/get-involved/join
	RewritePermanent	/baz	/downloads/baz
</VirtualHost>

<VirtualHost *:80>
	ServerName foo.example.com
	
	RewriteEngine on
	RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [L,QSA,R=permanent]
</VirtualHost>

would be awesome.

Gotchas

Constant update

On every run. So annoying. I wish this would timestamp, and just check once an hour/day/week…
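A stamp-file wrapper would scratch that itch; the stamp file and its location are my invention, and the echos stand in for the real letsencrypt-auto call:

```shell
# Sketch: only let letsencrypt-auto (and so its self-updater) run if it
# hasn't within the last day. Stamp-file approach is mine, not upstream's.
STAMP=${STAMP:-/tmp/letsencrypt-auto.stamp}
if [ -n "$(find "$STAMP" -mmin -1440 2>/dev/null)" ]; then
    echo "ran within the last day; skipping"
else
    echo "stale (or first run); would run ./letsencrypt-auto here"
    touch "$STAMP"
fi
```

Upstream could do much the same internally: record when it last checked, and compare before phoning home.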

Revoking certificates

There’s very little documentation on this, and what there is I didn’t find particularly useful or, indeed, working.

Which cert file should you pass as your parameter? (discovered via IRC)

./letsencrypt-auto revoke --cert-path /etc/letsencrypt/archive/pressreleases.wp.no2id.net/fullchain1.pem

seemed to revoke, but the certificates were still offered when I tried to create again. A --delete option might be useful in future versions.

You may want to do something involving find and xargs (in /etc/letsencrypt/), but y’know, YMMV.
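For what it’s worth, the sort of thing I mean, with echo left in as a dry run (the paths and pem names match my box; the revoke invocation is the one from above):

```shell
# Dry run: feed every archived fullchain cert to the revoke command.
# The echo keeps this harmless; remove it at your own peril.
LE_DIR=${LE_DIR:-/etc/letsencrypt/archive}
find "$LE_DIR" -name 'fullchain*.pem' -print0 2>/dev/null \
    | xargs -0 -r -n1 echo ./letsencrypt-auto revoke --cert-path
```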

In testing, I wasn’t too bothered, so I did the LazyThing™ and issued an rm -r /etc/letsencrypt, but you don’t just copy and paste things you read on the intertubes, do you?

It’d be nice if there were a curses interface for revocation.

No wildcard support (yet)

But see below…

Defining the ‘master’ certificate

Let’s say you’re mitigating the lack of wildcard support, and you want various sites on the same cert. No problem, except you might want to use the root domain, not a subdomain, as the main name. I found a two-step approach works here:

  1. Create using apex and www (e.g. no2id.net and www.no2id.net)
  2. When that’s done, re-run and extend the certificate to include the others (newsblog.wp.no2id.net, wp.no2id.net, pressreleases.wp.no2id.net). Note that if you don’t want punters to see you have a cert for admin.no2id.net, don’t include it; of course, security through obscurity is not especially useful; YMMV. Create an extra cert (but note the rate limit) for, say, dev and administrative things.
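Spelled out as commands (echoed here as a dry run; the -d flag is real, but treat the exact flow as what worked for me rather than documented behaviour):

```shell
# Step 1: apex + www only; step 2: re-run with the full list to extend.
# build_args is a made-up helper that turns domains into -d flags.
build_args() {
    ARGS=""
    for D in "$@"; do
        ARGS="$ARGS -d $D"
    done
    echo "$ARGS"
}

STEP1=$(build_args no2id.net www.no2id.net)
STEP2=$(build_args no2id.net www.no2id.net newsblog.wp.no2id.net \
                   wp.no2id.net pressreleases.wp.no2id.net)
echo "./letsencrypt-auto --apache$STEP1"   # run first
echo "./letsencrypt-auto --apache$STEP2"   # then re-run to extend
```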

I wanted to move typefaces which had been installed to my ‘user’
collection over to the ‘system’ collection. There didn’t seem to be a
quick and simple top-google-juice rated article for this.

How did I do it? Well… drag and drop.

https://support.apple.com/kb/PH5955?locale=en_US details the locations
of the fonts directories, so a simple Finder/Go to Folder for
~/Library/Fonts/ and a new window for /Library/Fonts and a
drag and drop later, bingo.

One catch: if you’ve previously run the ‘Look for enabled duplicates’
option, you may need to re-enable those fonts.

Nice and simple. And it works. Even on Yosemite.

On the off-chance you’ve come here via my Twitter link, I’m dreadfully sorry to say that Twitter doesn’t (often) let me follow new people: I’m at a limit, and it takes about 20 new followers before I can follow one new person.

Feel free to at-me, though.

I failed to find a good example of something that worked to pull a spreadsheet from Google Docs using cURL. Everything I found was broken in one way or another.

A bit of playing, and quite a bit of reading, got me this:

#!/bin/bash
PASS=`cat /path/to/0600/google-password-file`
SHEET="https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=addyourownsheetIDhere&exportFormat=csv&gid="

AUTH_TOKE=`curl --silent https://www.google.com/accounts/ClientLogin -d \
    Email=foo@example.org -d \
    Passwd=${PASS} -d \
    accountType=HOSTED -d \
    source=cURL-SpreadPull -d \
    service=wise | grep Auth\= | sed 's/Auth/auth/'`

curl --silent --output /path/to/file --header "GData-Version: 3.0" --header "Authorization: GoogleLogin ${AUTH_TOKE}" "${SHEET}${TAB}"

seemed to do the trick

Thinking about it, the export key could be defined in the script as a variable. You’ll need to supply your own; I typically grab it from the web-based URI, but there is a warning in the docs about that:

*To determine the URL of a cell-based feed for a given worksheet, get the worksheets metafeed and examine the link element in which rel is http://schemas.google.com/spreadsheets/2006#cellsfeed. The href value in that element is the cell feed’s URI.*

YMMV.

I’ve added in &exportFormat=csv&gid= because I wanted CSV outputs; gid’s value is provided via a for loop and case statement.
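That for/case looks roughly like this; the tab names and gid numbers are placeholders for illustration, not NO2ID’s real ones:

```shell
# Map friendly tab names to gid values (gids invented; grab the real
# ones from the sheet's web URL), then fetch each in turn.
for TAB_NAME in summary members; do
    case "$TAB_NAME" in
        summary) TAB=0 ;;
        members) TAB=123456789 ;;
    esac
    echo "${TAB_NAME}: would fetch with gid=${TAB}"
done
```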

--header "GData-Version: 3.0" was needed to avoid the redirection.

Hopefully, this might be of benefit — as a working (when written) example of using curl and google docs/google spreadsheets.

Having finally got fed up with logging in, individually, to upgrade each of the no2id machines and jails, a bit ago, I decided to write a script to do the ‘hard work’ for me.

This worked fine, until today, when I noticed apt-listbugs complaining, and causing the script to fail to dist-upgrade.

Not a problem, thought I. I’m sure others have had this issue too. Being lazy, I thought the first port of call would be the internets. I’d have thought something like:

"DEBIAN_FRONTEND=noninteractive" "apt-listbugs"

might have done the trick. It didn’t (that I could find).

So I went back to doing what a lot of the new breed of ‘devops’ fail to do, and what I’m quite hypocritical about: looking at the manpage.

The manpage provides us with this gem:

    ENVIRONMENT VARIABLES
        APT_LISTBUGS_FRONTEND
            If this variable is set to “none” apt-listbugs will not
            execute at all; this might be useful if you would like to
            script the use of a program that calls apt-listbugs.

So there we go.

 for M in $MACHINES
 do
     echo "Connecting to ${M}.no2id.net"
-    ssh root@${M}.no2id.net 'export TERM=xterm; export DEBIAN_FRONTEND=noninteractive; apt-get update && echo "" && echo "" && echo "This is "'${M}'".no2id.net" && echo "" && echo "" && apt-get dist-upgrade'
+    ssh root@${M}.no2id.net 'export TERM=xterm; export DEBIAN_FRONTEND=noninteractive; export APT_LISTBUGS_FRONTEND=none; apt-get update && echo "" && echo "" && echo "This is "'${M}'".no2id.net" && echo "" && echo "" && apt-get dist-upgrade'
done

Hopefully, this will help others whose first port of call is the internets, and not manpages.

You may, however, be sensible, and have had the time to roll out Puppet (ugh, when did they change their website! Why‽) or Chef.