Shell By Mail

28 Aug 2019
Tags: mail virtualization

What if the only way to interact with a remote server were via SMTP?

Here’s an attempt at implementing such a system. Keep in mind this is intended as a proof of concept, not for serious usage.

Setting up a mail server

The main functionality of this server is routing and delivering mail, which is provided by a Mail Transfer Agent (MTA), e.g. postfix. To simplify its configuration, I picked a docker container. While there are far more comprehensive solutions1, I preferred to build upon a simpler base, to avoid dealing with unneeded interacting components.
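For reference, starting such a container might look like this. This is a sketch, not the project's actual invocation: the image name comes from the catatnight/postfix base referenced later, and the maildomain/smtp_user values are assumptions matching the test credentials used below.

```shell
# Hypothetical invocation; image and env values are assumptions.
# catatnight/postfix expects `maildomain` and `smtp_user` (user:password).
image=catatnight/postfix
container=mailsh

command -v docker >/dev/null && docker run -d \
  --name "$container" \
  -h mailsh.localdomain \
  -e maildomain=mailsh.localdomain \
  -e smtp_user=test:test \
  "$image"
```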

Sending and evaluating shell commands

Initially I thought of having the body of the message be the command, while attachments could be files passed as input. To simplify, I figured that commands taking files could just as well take a process substitution such as <(printf 'foo'). In the end, the request was entirely contained in the body.
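As a refresher, process substitution hands a command a file-like path (/dev/fd/N) whose contents come from another command, so no real file needs to be attached:

```shell
#!/usr/bin/env bash
# `wc -c` reads from a pipe backed by printf's output instead of a file.
wc -c < <(printf 'foo')        # 3 bytes

# A command taking a path argument works too: the substitution
# expands to something like /dev/fd/63.
grep -c o <(printf 'foo\n')    # 1 matching line
```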

To which address is it sent? This container is running as root, and the hostname is mailsh.localdomain, so the sender needs to associate that name with the docker container’s IP:

mailsh_ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mailsh)
echo "$mailsh_ip mailsh.localdomain" >> /etc/hosts

Now we can send a mail with mailx:

echo 'To: root <root@mailsh.localdomain>
From: foo <foo@localhost>
Subject: Test '"$(date +%s)"'

This is a test email message' | \
    mailx \
        -v \
        -S 'smtp=smtp://mailsh.localdomain' \
        -S 'smtp-auth-user=test' \
        -S 'smtp-auth-password=test' \
        -S 'from=foo@localhost' \
        -t

We can confirm it hits postfix with docker exec -it mailsh tail -f /var/log/syslog:

mailsh postfix/qmgr[151]: EAA3D207C96: from=<foo@localhost>, size=563, nrcpt=1 (queue active)

To deal with binary contents, we can encode the command with uuencode, which takes a file and outputs ASCII text:

(
echo 'To: root <root@mailsh.localdomain>
From: foo <foo@localhost>
Subject: Test '"$(date +%s)"'

Please run me :)'

uuencode "$request_command_file" "$request_attachment_name"
) | \
    mailx \
    # ...

On the server side, we want to react to each incoming mail: decode the request, evaluate it, and send back a response.

Since postfix stores this user’s mailbox as a file at /var/mail/root, we simply need to keep track of filesystem events, in this case file writes.

A common solution is inotifywait (from inotify-tools), but I prefer to use entr. It handles events more robustly than the former, for instance interpreting a file delete followed by a new file as a file save.

Decoding and evaluating will be done by our script watch.sh:

# Temporary storage for decoded commands
tmp_mail_dir=$(mktemp -d)
tmp_mail_name=$(mktemp --tmpdir="$tmp_mail_dir")
cleanup() {
  err=$?
  sudo rm -rf "$tmp_mail_dir"
  trap '' EXIT
  exit $err
}
trap cleanup EXIT INT QUIT TERM

(
  cd "$tmp_mail_dir"

  # Retrieve request (i.e. most recent mail)
  echo "w $ $tmp_mail_name" | mailx
  uudecode "$tmp_mail_name"

  # Evaluate request
  bash request.txt > response-stdout.txt

  # Send response
  printf '%s\n' \
    'replysender $' \
    "$(cat response-stdout.txt)" | mailx
)

Which will be activated like this:

# `entr` exits if file doesn't exist
touch /var/mail/root

echo /var/mail/root | entr /opt/watch.sh

Given that our base image already uses supervisord to launch and monitor processes, we might as well make use of it:

supervisor_program=watch
cat > "/etc/supervisor/conf.d/$supervisor_program.conf" <<EOF
[program:$supervisor_program]
command=/bin/bash -c 'echo /var/mail/root | entr /opt/watch.sh'
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
EOF

Validating the user making a request

Right now anyone can send a command and have it be evaluated by the superuser. We need some way of making sure that the user making the request is trusted.

I settled on using gpg: the user cryptographically signs their request with their own private key, and the server checks the signature against its stored public keys. If the signature verifies, the request is evaluated; otherwise, an error response is returned.

The sender can generate their gpg key pair with the following script:

. ./request.env

tmp_parameters_file=$(mktemp)
cleanup() {
  err=$?
  sudo rm -f "$tmp_parameters_file"
  trap '' EXIT
  exit $err
}
trap cleanup EXIT INT QUIT TERM

cat >"$tmp_parameters_file" <<EOF
Key-Type: RSA
Key-Length: 4096
Subkey-Type: ELG-E
Subkey-Length: 4096
Name-Real: $REQUEST_USER
Name-Comment: Test
Name-Email: $REQUEST_MAIL
Expire-Date: 0
Passphrase: test
EOF

gpg --batch --yes --gen-key "$tmp_parameters_file"
gpg --output test.gpg --armor --export "$REQUEST_MAIL"

The public key is copied over to the container during build and added with gpg --import during execution.

In the request script, the command is signed:

request_name=$(basename "$request_command_file")
request_attachment_name="$request_name.asc"

rm -f "$request_attachment_name"
gpg \
  --output "$request_attachment_name" \
  --local-user "$REQUEST_USER" \
  --armor \
  --sign "$request_name"

# ...

uuencode "$request_attachment_name" "$request_attachment_name"

Finally, on watch.sh, the signature is verified:

# Validate and evaluate request
if gpg --output script.sh --decrypt request.txt.asc; then
    bash script.sh > response-stdout.txt
else
    echo "[ERROR] Invalid signature in request." > response-stdout.txt
fi

Enforcing email authentication to pass spam filtering

This is a cross-cutting concern that has to be accounted for in mail servers; otherwise, our outgoing mail will be blocked or end up in the spam folder.

Usually this means configuring the following: SMTP over TLS, SPF, DKIM, DMARC…
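Of these, SPF and DKIM are covered below. DMARC is not set up in this project, but a minimal policy would be just one more TXT record (a sketch, with a placeholder report address):

```
_dmarc.mailsh.duckdns.org.  TXT  "v=DMARC1; p=none; rua=mailto:root@mailsh.duckdns.org"
```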

Unfortunately, this is impractical to accomplish in a “free as in free beer” manner on your local system.

These requirements effectively call for your own Virtual Private Server (VPS). Nevertheless, our docker image has everything set up so that most of the remaining configuration is confined to DNS records.

TLS

Covered by Let’s Encrypt; we used dehydrated to generate the certificates. The DNS challenge is preferred because it can be completed from your local system: all you need is a dynamic DNS service that allows setting a TXT record.
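With DuckDNS, for example, publishing the challenge boils down to a single HTTP request in the hook's deploy_challenge step. A sketch, where the subdomain, `TOKEN`, and the challenge value are placeholders:

```shell
# Hypothetical deploy_challenge step for a DuckDNS-backed dns-01 hook.
domain=mailsh                          # DuckDNS subdomain (placeholder)
token=${TOKEN:-placeholder}            # DuckDNS account token (placeholder)
challenge=acme-challenge-placeholder   # value provided by dehydrated

url="https://www.duckdns.org/update?domains=$domain&token=$token&txt=$challenge&verbose=true"
command -v curl >/dev/null && curl -s "$url"
```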

Generating the SSL certificates (and renaming them with suffixes expected by postfix) was automated as part of a Makefile:

ssl-generated-dir := dehydrated/certs/$(MAILSH_DOMAIN)
ssl-dir := $(shell readlink -f assets/ssl)
ssl-obj := \
	$(ssl-dir)/$(MAILSH_DOMAIN).key \
	$(ssl-dir)/$(MAILSH_DOMAIN).fullchain.crt
$(ssl-obj):
	rm -rf dehydrated
	git clone --depth=1 https://github.com/lukas2511/dehydrated
	echo "$(MAILSH_DOMAIN)" > assets/dehydrated/domains.txt
	cp assets/dehydrated/* dehydrated/
	# `|| true`: Ignoring unknown hook errors
	cd dehydrated && \
		chmod 755 hook.sh && \
		chmod +x dehydrated && \
		./dehydrated --register  --accept-terms && \
		./dehydrated -c || true
	mkdir -p $(ssl-dir)
	cp $(ssl-generated-dir)/privkey.pem $(ssl-dir)/$(MAILSH_DOMAIN).key
	cp $(ssl-generated-dir)/fullchain.pem $(ssl-dir)/$(MAILSH_DOMAIN).fullchain.crt

For SSL verification, we need to serve HTTPS with a web server on port 443. We used caddy, configured to serve TLS with our previously generated certificates:

tls /etc/postfix/certs/mailsh.duckdns.org.fullchain.crt /etc/postfix/certs/mailsh.duckdns.org.key

caddy is also managed by supervisord:

supervisor_program=caddy
cat > "/etc/supervisor/conf.d/$supervisor_program.conf" <<EOF
[program:$supervisor_program]
command=/opt/caddy -agree=true -conf /opt/Caddyfile -log stdout -port 443
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
EOF

We can test if our certificates have been successfully applied with openssl s_client -connect mailsh.duckdns.org:443 -servername mailsh.duckdns.org:

CONNECTED(00000003)
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = mailsh.duckdns.org
verify return:1
---
Certificate chain
 0 s:CN = mailsh.duckdns.org
   i:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
 1 s:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
   i:O = Digital Signature Trust Co., CN = DST Root CA X3

SPF

Applied with the following TXT record:

v=spf1 a include:_spf.google.com ~all

Verified in the following reply mail header field:

Authentication-Results: mx.google.com;
   spf=pass (google.com: domain of root@mailsh.duckdns.org designates 148.69.37.212 as permitted sender) smtp.mailfrom=root@mailsh.duckdns.org

DKIM

Handled by opendkim, which was already set up in our base image. The only difference is that generating and copying the domain key is done as part of the container execution:

# Instead of passing a DKIM private key,
# generate it in the container and copy it
# to the target directory checked by
# `install.sh` from `catatnight/postfix`
opendkim-genkey -s mail -d "$MAILSH_DOMAIN"
mkdir -p /etc/opendkim/domainkeys
mv mail.private /etc/opendkim/domainkeys
mv mail.txt /opt/

We can retrieve the value of the DKIM DNS record from the container (in this example, it is set in DuckDNS):

txt=$(docker exec -it mailsh cat /opt/mail.txt | \
    sed 's/.*"\([a-z]=\)/\1/; s/".*//' | \
    tr -d '\r\n' | \
    node -p 'encodeURIComponent(require("fs").readFileSync(0))') && \
    curl "https://www.duckdns.org/update?domains=mail._domainkey.mailsh&token=$TOKEN&txt=$txt&verbose=true"

When the message is signed successfully, postfix logs:

Aug 28 19:23:39 mailsh opendkim[141]: 1ACD52069AA: DKIM-Signature field added (s=mail, d=mailsh.duckdns.org)

Verified in the following reply mail header field:

Authentication-Results: mx.google.com;
   dkim=pass header.i=@mailsh.duckdns.org header.s=mail header.b="J/N1GMIX";

Source code

Available in a git repository.

Further work

References

  1. All-in-one containers for mail servers:


  2. Some are even deliberately ambiguous in their support, forcing you to register an account only to then inform you that you need a paid account to create those records.