Adds support for managing certificates manually and for
having the playbook generate self-signed certificates for you.
With this, Let's Encrypt usage is no longer required.
Fixes GitHub issue #50.
This is prompted by GitHub issue #46.
No client had made use of the well-known mechanism
so far, so the setup performed by this playbook had gone untested
and turned out to be a little deficient.
Even though `/.well-known/matrix/client` is usually requested with a
simple request (no preflight), it's still considered cross-origin
and [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS)
applies. Thus, the file always needs to be served with the appropriate
`Access-Control-Allow-Origin` header.
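In nginx terms, it boils down to serving the file with that extra header. A minimal sketch, with an illustrative file path (not necessarily the playbook's actual template):

```nginx
# Serve the static well-known file with a permissive CORS header,
# so that web-based Matrix clients on other origins can read it.
# The root path below is just an example location for the file.
location /.well-known/matrix/client {
    root /matrix/static-files;
    default_type application/json;
    add_header Access-Control-Allow-Origin *;
}
```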
GitHub issue #46 attempts to fix it at the "reverse-proxying" layer,
which may work, but would need to be repeated for every server.
It's better if it's done "upstream", so that all reverse-proxy
configurations can benefit.
Moving away from using the default bridge network to using our own.
This isolates our services from other Docker containers running
on the default network on the same host.
The benefits are that:
- isolation is a little better - we no longer share a default
bridge network with any other containers that might be running on the host
- there are no longer hard dependencies - we do service discovery
by DNS name, and not via explicit `--link` usage during container start,
so containers can start out of order and fail without bringing down others
with them
(`matrix-nginx-proxy` can continue running, even if one of the other services dies; see the sketch below)
In the future, when other services get introduced,
the increased resilience and simplicity will help as well.
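To illustrate the DNS-based service discovery mentioned above, here's a rough sketch of what reverse-proxying by container name looks like on a user-defined Docker network (the container name, port and exact directives are illustrative, not a copy of the playbook's templates):

```nginx
# On a user-defined Docker network, containers resolve each other by name
# through Docker's embedded DNS server (127.0.0.11), so no --link is needed.
resolver 127.0.0.11 valid=5s;

location /_matrix {
    # Using a variable forces nginx to re-resolve the name at request time,
    # so the proxy keeps running even if this backend container is down.
    set $backend "matrix-synapse:8008";
    proxy_pass http://$backend;
}
```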
When using `matrix-nginx-proxy`, the file permissions are organized
in a way that lets `matrix-nginx-proxy` read the challenge files
produced by acmetool.
However, when one's own/external webserver was used (like nginx
with our generated sample configuration), this could not work.
From now on, we proxy the HTTP challenge requests to port 402 in such a case,
which fixes the problem.
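For such a setup, the sample webserver configuration can forward challenge requests instead of reading files from disk. A minimal sketch (the upstream address is an assumption based on this entry; the actual sample configuration may differ):

```nginx
# Forward ACME HTTP-01 challenge requests to the port acmetool listens on,
# instead of trying to read challenge files from the local filesystem.
location /.well-known/acme-challenge {
    proxy_pass http://127.0.0.1:402;
}
```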