This allows people to try out the new Element X clients, which need to
run against the sliding-sync proxy (https://github.com/matrix-org/sliding-sync).
Supersedes https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/2515
The code is based on the existing PR (#2515), but heavily reworked. Major changes:
- lots of internal refactoring and variable renaming
- fixed self-building to support non-amd64 architectures
- changed to talk to the homeserver locally, over the container network (not
publicly)
- no more matrix-nginx-proxy support due to complexity (see below)
- no more `matrix_server_fqn_sliding_sync_proxy` in favor of
`matrix_sliding_sync_hostname` and `matrix_sliding_sync_path_prefix`
- runs on `matrix.DOMAIN/sliding-sync` by default, so it can be tried
easily without having to create new DNS records (see the sketch below)
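A minimal `vars.yml` sketch for trying it out might look like this. This is an illustration under stated assumptions: `matrix_sliding_sync_enabled` is presumed to follow the playbook's usual `_enabled` naming convention, and the hostname/path-prefix values shown are just the defaults described above:

```yaml
# Hypothetical sketch - the enable flag is assumed, not confirmed here.
matrix_sliding_sync_enabled: true

# Defaults described above, shown for illustration only:
matrix_sliding_sync_hostname: "{{ matrix_server_fqn_matrix }}"
matrix_sliding_sync_path_prefix: /sliding-sync
```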
The variable was necessary when multiple playbooks could have
potentially tried to manage a shared `devture-traefik.service` systemd service
and shared `/devture-traefik` directory.
Since adcc6d9723, we use our own `/matrix/traefik`
(`matrix-traefik.service`) installation and no conflicts can arise.
It's safe to always enable the role, just like we do with all the other roles.
The migration is automatic. Existing users should experience a bit of
downtime until the playbook runs to completion, but don't need to do
anything manually.
This change is provoked by https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/2535
While my statements there ("Traefik is a shared component among
sibling/related playbooks and should retain its global
non-matrix-prefixed name and path") do make sense, there's another point
of view as well.
With the addition of docker-socket-proxy support in bf2b540807,
we potentially introduced another non-`matrix-`-prefixed systemd service
and global path (`/devture-container-socket-proxy`). It would have
started to become messy.
Traefik always being called `devture-traefik.service` and using the `/devture-traefik` path
has the following downsides:
- different playbooks may unintentionally write to the same place before
you disable the Traefik role in some of them.
If each playbook manages its own installation, no such file conflicts
arise, and you'll learn about any overlap when one of them starts its
Traefik service and fails because the ports are already in use
- the data is scattered - backing up `/matrix` is no longer enough when
some stuff lives in `/devture-traefik` or `/devture-container-socket-proxy` as well;
similarly, deleting `/matrix` is no longer enough to clean up
For this reason, the Traefik instance managed by this playbook
will now be called `matrix-traefik` and live under `/matrix/traefik`.
This also makes it obvious to users running multiple playbooks which
Traefik instance (powered by which playbook) is the active one.
Previously, you'd look at `devture-traefik.service` and wonder which
role was managing it.
* healthchecks.io integration
* mutex on forwarding messages into thread
* fix in prefixes handling
* send error messages as thread reply when possible
We don't need these 2 nearly identical settings related to the
traefik-certs-dumper role.
For Traefik, such a setting makes sense, because it's a component shared by
the various related playbooks, and they could step on each other's toes
if the role is enabled but Traefik itself is disabled (in that case, uninstall
tasks will run).
As for Traefik certs dumper, the other related playbooks don't have it,
so there's no conflict. Even if they used it, each one would use its own
instance (different `devture_traefik_certs_dumper_identifier`), so there
wouldn't be a conflict and uninstall tasks can run without any danger.
This makes it consistent with the rest of the playbook:
- there's a default config which has various variables controlling
settings
- there's also an `_extension_yaml` variable, which lets you override it (see the sketch below)
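To illustrate the pattern, here's a hypothetical `vars.yml` sketch; the actual variable prefix depends on the role in question, and the setting shown is just an example:

```yaml
# Hypothetical illustration of the `_extension_yaml` override pattern.
matrix_ntfy_configuration_extension_yaml: |
  # Anything here gets merged into the role's default configuration.
  log-level: debug
```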
The newly extracted role also has native Traefik support,
so we no longer need to rely on `matrix-nginx-proxy` for
reverse-proxying to Ntfy.
The new role uses port `80` inside the container (not `8080`, like
before), because that's the default assumption of the officially
published container image. Using a custom port (like `8080`) means the
default healthcheck command (which hardcodes port `80`) doesn't work.
Instead of fiddling to override the healthcheck command, we've decided
to stick to the default port instead. This only affects the
inside-the-container port, not any external ports.
The new role also supports adding the network ranges of the container's
multiple additional networks as "exempt hosts". Previously, only one
network's address range was added to "exempt hosts".
Previously, it had to go through matrix-nginx-proxy.
It's exposed to Traefik directly via container labels now.
Serving at a path other than `/` doesn't work well yet.
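For reference, the container labels involved look roughly like this. This is a sketch using Traefik's standard Docker-provider label names; the router name, rule, and port are illustrative, not the role's exact values:

```yaml
# Illustrative Traefik Docker-provider labels (names/values are hypothetical):
traefik.enable: "true"
traefik.http.routers.matrix-example.rule: "Host(`example.com`)"
traefik.http.routers.matrix-example.tls.certresolver: "default"
traefik.http.services.matrix-example.loadbalancer.server.port: "80"
```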
We were mounting our own configuration to
`/usr/share/nginx/html/config.json`, which is a symlink to
`/tmp/config.json`. So we effectively mount our file to
`/tmp/config.json`.
When starting:
- if Hydrogen sees a `CONFIG_OVERRIDE` environment variable,
it will try to save it into our read-only config file and fail.
- if Hydrogen doesn't see a `CONFIG_OVERRIDE` environment variable (the
path we go through, because we don't pass such a variable),
it will try to copy its bundled configuration (`/config.json.bundled`)
to `/tmp/config.json`. Because our configuration is mounted as read-only, it will
fail.
In both cases, it will fail with:
> cp: can't create '/tmp/config.json': File exists
Source: 3720de36bb/docker/dynamic-config.sh
We work around this by mounting our configuration on top of the bundled
one (`/config.json.bundled`). We then let Hydrogen's startup script copy
it to `/tmp/config.json` (a tmpfs we've mounted into the container) and use it from there.
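A compose-style illustration of the workaround (not the role's actual systemd unit; the image reference is hypothetical):

```yaml
# Illustration only - the role actually runs the container via systemd.
services:
  hydrogen:
    image: vectorim/hydrogen-web  # hypothetical image reference
    read_only: true
    tmpfs:
      - /tmp  # gives the startup script a writable place for config.json
    volumes:
      # Mount our config over the *bundled* one; Hydrogen's startup script
      # then copies it to /tmp/config.json and uses it from there.
      - ./config.json:/config.json.bundled:ro
```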
* fix: only add element related entries to client well-known if element is enabled
* Fix matrix-base/defaults/main.yml syntax
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
This gets us started on adding a Traefik role and hooking Traefik:
- directly to services which support Traefik - we only have a few of
these right now, but the list will grow
- to matrix-nginx-proxy for most services that integrate with
matrix-nginx-proxy right now
Traefik usage should be disabled by default for now and nothing should
change for people just yet.
Enabling these experiments requires additional configuration like this:
```yaml
devture_traefik_ssl_email_address: '.....'
matrix_playbook_traefik_role_enabled: true
matrix_playbook_traefik_labels_enabled: true
matrix_ssl_retrieval_method: none
matrix_nginx_proxy_https_enabled: false
matrix_nginx_proxy_container_http_host_bind_port: ''
matrix_nginx_proxy_container_federation_host_bind_port: ''
matrix_nginx_proxy_trust_forwarded_proto: true
matrix_nginx_proxy_x_forwarded_for: '$proxy_add_x_forwarded_for'
matrix_coturn_enabled: false
```
What currently works is:
reverse-proxying for all nginx-proxy based services **except** for the Matrix homeserver
(neither Client-Server nor Federation traffic for the homeserver works yet)
Switching from doing "post-start" loop hacks to running the container
in 3 steps: `create` + potentially connect to additional networks + `start`.
This way, the container would be connected to all its networks even at
the very beginning of its life.
Related to https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/2427
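A hypothetical Ansible sketch of the three-step pattern (container and network names are illustrative, not the playbook's actual tasks):

```yaml
# Step 1: create the container without starting it.
- name: Create the container
  command: docker create --name=matrix-example --network=matrix example-image:latest

# Step 2: attach any additional networks while the container is still stopped.
- name: Connect the container to an additional network
  command: docker network connect traefik matrix-example

# Step 3: start it - all networks are attached from the very beginning.
- name: Start the container
  command: docker start matrix-example
```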
`metrics_enabled` should only expose the metrics locally, on the
container network, so that a local Prometheus can consume them.
Exposing them publicly should be done via a separate toggle (`metrics_proxying_enabled`).
This is how all other roles work, so this makes these mautrix roles consistent with the rest.
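A hypothetical `vars.yml` sketch of the convention (substitute the actual bridge's variable prefix):

```yaml
# Metrics are exposed on the container network only, for a local Prometheus:
matrix_mautrix_facebook_metrics_enabled: true
# Public exposure stays behind a separate toggle:
matrix_mautrix_facebook_metrics_proxying_enabled: false
```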
This helps large deployments which need to open up thousands of ports
(`matrix_coturn_turn_udp_min_port` to `matrix_coturn_turn_udp_max_port`).
On a test VM, opening 1k ports takes Docker 17 seconds to "publish"
all of them (setting up forwarding rules with the firewall, etc.),
so service startup and shutdown take a long time.
If host-networking is used, there's no need to open any ports at all
and startup/shutdown can be quick.
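A hypothetical `vars.yml` sketch; the exact variable controlling coturn's container networking depends on the role's implementation:

```yaml
# Assumed variable name - with host-networking, no ports need to be published.
matrix_coturn_container_network: host
```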
This container needs a writable $HOME, and will fail at startup if
there isn't one.
Provide one by pointing HOME to a path under the mounted /data
directory.
* Allow the mautrix whatsapp relaybot to be enabled with a variable
This allows a user to enable the relaybot by setting a variable in
`vars.yml` in the same way that the mautrix signal relaybot is
configured (see the sketch after this list).
* Correct default values for mautrix whatsapp relaybot variables
* Add documentation for using the relaybot with mautrix whatsapp
* Adjust variable names to better reflect what they do
* Set default variables properly and use to_json in template
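A hypothetical `vars.yml` sketch of enabling the relaybot; the exact variable name is defined by the role's defaults and may differ:

```yaml
# Assumed variable name, shown for illustration only.
matrix_mautrix_whatsapp_bridge_relaybot_enabled: true
```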
This extends the collection with support for seamless authentication at the Jitsi server using Matrix OpenID.
1. New role for installing the [Matrix User Verification Service](https://github.com/matrix-org/matrix-user-verification-service)
2. Changes to Jitsi role: Installing Jitsi Prosody Mods and configuring Jitsi Auth
3. Changes to Jitsi and nginx-proxy roles: Serving .well-known/element/jitsi from jitsi.DOMAIN
4. Updated the Jitsi documentation on authentication and added documentation for the user verification service (see the sketch below).
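A hypothetical `vars.yml` sketch of what enabling this might look like; the variable names are illustrative rather than the roles' exact ones:

```yaml
# Assumed variable names for Matrix OpenID authentication at Jitsi:
matrix_jitsi_enable_auth: true
matrix_jitsi_auth_type: matrix
matrix_user_verification_service_enabled: true
```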
Please note that this Mjolnir version bump is technically missing some extra steps that Mjolnir claims we should perform. They didn't work when I tried them, and my own Mjolnir deployment has been running this version almost since release day without problems - no unexpected errors in the log. (For anyone wondering: Mjolnir does log errors for various things that are actually fine - for example, a protection being turned off is reported as an error. This is due to how matrix-bot-lib works.)
Without these:
- `--tags=install-synapse` and `--tags=install-all` would be incomplete
and would not contain Synapse worker configuration
- `--tags=install-synapse-reverse-proxy-companion` and
`--tags=setup-synapse-reverse-proxy-companion` would not contain
Synapse worker configuration
* Changes to allow a user to set the max participants on a jitsi
conference
* changed var name from jitsi_max_participants to matrix_prosody_jitsi_max_participants (example below)
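Example `vars.yml` usage of the renamed variable (the limit value is illustrative):

```yaml
matrix_prosody_jitsi_max_participants: 10
```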
* add prometheus-nginxlog-exporter role
* Rename matrix_prometheus_nginxlog_exporter_container_url to matrix_prometheus_nginxlog_exporter_container_hostname
* avoid referencing variables from other roles, handover info using group_vars/matrix_servers
* fix: stop service when uninstalling
* fix: typo
* move available arches into a var
* fix: text
* fix: prometheus enabled condition
Co-authored-by: ikkemaniac <ikkemaniac@localhost>
Running with a user (like `matrix:matrix`) fails if Etherpad is enabled,
because `/matrix/etherpad` is owned by `matrix_etherpad_user_uid`/`matrix_etherpad_user_gid` (`5001:5001`).
The `matrix` user can't access the Etherpad directory for this reason
and Borgmatic fails when trying to make a backup.
There may be other things under `/matrix` which similarly use
non-`matrix:matrix` permissions.
Another workaround might have been to add `/matrix/etherpad` (and
potentially other things) to `matrix_backup_borg_location_exclude_patterns`, but:
- that means Etherpad won't be backed up - not great
- only excluding Etherpad may not be enough. There may be other files we
need to exclude as well
---
Running with `root` is still not enough though.
We need at least the `CAP_DAC_OVERRIDE` capability, or we won't be able to read the
`/etc/borgmatic.d/config.yaml` configuration file (owned by
`matrix:matrix` with `0640` permissions).
---
Additionally, it seems like the backup process tries to write to at least a few directories:
- `/root/.borgmatic`
- `/root/.ssh`
- `/root/.config`
> [Errno 30] Read-only file system: '/root/.borgmatic'
> Error while creating a backup.
> /etc/borgmatic.d/config.yaml: Error running configuration file
We either need to stop mounting the container filesystem as readonly
(remove `--read-only`) or to allow writing via a `tmpfs`.
I've gone the `tmpfs` route which seems to work.
In any case, the mounted source directories (`matrix_backup_borg_location_source_directories`)
are read-only regardless, so our actual source files are protected from unintentional changes.
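A sketch of the `tmpfs` idea (the variable name and mount details are assumptions, not the role's exact argument list):

```yaml
# Keep --read-only, but give borgmatic writable scratch space via tmpfs:
matrix_backup_borg_container_extra_arguments:
  - "--tmpfs=/root:rw,noexec,nosuid,size=64m"
```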
Without this, it's a string and borg says:
> At 'hooks.postgresql_databases[INDEX_HERE].port': '5432' is not of type 'integer'
> /etc/borgmatic/config.yaml /etc/borgmatic.d /tmp/.config/borgmatic/config.yaml /tmp/.config/borgmatic.d: No valid configuration files found
.. and fails to do anything.
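The fix is to render the value as a number rather than a quoted string when templating the configuration. An illustrative template snippet (the variable name is hypothetical):

```yaml
# `to_json` renders an integer value as a bare number,
# which satisfies borgmatic's schema validation.
postgresql_databases:
  - name: all
    hostname: matrix-postgres
    port: {{ matrix_backup_borg_postgresql_port | to_json }}
```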
This role is usable on its own and it's not tied to Matrix, so
extracting it out into an independent role that we install via
ansible-galaxy makes sense.
This also fixes the confusion from the other day, where
`matrix_postgres_*` had to be renamed to `devture_postgres_*`
(unless it was about `matrix_postgres_backup_*`).
We can now safely say that ALL `matrix_postgres_*` variables need to be
renamed.
Fixes https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/2305
More details about the new key type can be found here:
https://eff-certbot.readthedocs.io/en/stable/using.html#rsa-and-ecdsa-keys
Existing RSA-based keys will continue to renew as RSA until manual
action is taken. Example from the documentation above:
> certbot renew --key-type ecdsa --cert-name example.com --force-renewal
In the future, we may add a command which does this automatically for
all domains.
* added dendrite captcha options
* added hcaptcha doc
* proper url
* Apply suggestions from code review
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update main.yml
* renamed captcha vars to new naming scheme
* change vars to new format
* Rename back some incorrect renamed variables
These variables are either not just part of the `client_api` subsection,
or are not even part of that section at all. They shouldn't have been
renamed in baaef2ed616e2645550d9
* Fix up naming inconsistencies
Some of these variables had been renamed in one place,
but not in other places, so it couldn't have worked that way.
* Add validation/deprecation for renamed Dendrite variables
Related to 4097898f885cf4c73, baaef2ed616e2645550, 68f4418092fa8ad
and a0b4a0ae6b2f1f18
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
- forego removing Docker images - it's not effective anyway, because it
only removes the last version, which is usually a drop in the bucket
- do not reload systemd - it's none of our business. `--tags=start`,
etc., handle this
- combine all uninstall tasks under a single block, which only runs if
we detect traces (a leftover systemd .service file) of the component
(see the sketch after this list). If no such .service file is detected,
we skip them all. This may lead to incorrect cleanup in rare cases,
but is good enough for the most part.
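A hypothetical sketch of the pattern (the component name is illustrative):

```yaml
- name: Check for a leftover systemd .service file
  stat:
    path: /etc/systemd/system/matrix-example.service
  register: matrix_example_service_stat

- name: Run uninstall tasks only if traces of the component exist
  when: matrix_example_service_stat.stat.exists | bool
  block:
    - name: Ensure the service is stopped and disabled
      service:
        name: matrix-example
        state: stopped
        enabled: false

    - name: Remove the .service file
      file:
        path: /etc/systemd/system/matrix-example.service
        state: absent
```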
* Exempt Matrix server from ntfy rate limit
Add the matrix fqdn and localhost to ntfy's exemption list.
Also allow all ntfy rate limits to be configured through Ansible
variables (see the sketch after this list).
* Fix names and formatting
* fixes
* tabs not spaces
* Lint
* Use raw tags instead of bracket soup
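A hypothetical `vars.yml` sketch; ntfy's own option is `visitor-request-limit-exempt-hosts`, and the extension-variable name is assumed to follow the playbook's usual pattern:

```yaml
# Assumed variable name; the exempt hosts shown are examples.
matrix_ntfy_configuration_extension_yaml: |
  visitor-request-limit-exempt-hosts: "matrix.example.com, localhost"
```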
We had checks to avoid stopping/deleting systemd services for workers
that used to exist and will continue to exist, but we were deleting
config files for workers each time, only to recreate them again later.
This led to:
- too many misleading "changed" tasks
- too much unnecessary work
- potential failures during playbook execution possibly leaving the
system in a bad state (no worker config files)
We need this to control whether `('matrix-' + matrix_homeserver_implementation + '.service')`
would get injected into `devture_systemd_service_manager_services_list_auto`
This was useful when the order of these roles in relation to Synapse
mattered (when we were injecting stuff into Synapse variables during
runtime). This is no longer the case since 0ea7cb5d18, so all of
this can be removed.
These `init.yml` (now `inject_into_nginx_proxy.yml`) tasks do not need
to `always` run. They only need to run for `setup-all` and
`setup-nginx-proxy`. Unless we're dealing with these 2 tags, we can
spare ourselves a lot of work.
This patch also moves the `when` statement from `init.yml` into
`main.yml` in an effort to further optimize things by potentially
avoiding the extra file include.
These errors are not even caused by Archlinux, but by running buggy Ansible on old Ubuntu
while targeting modern servers (like Archlinux, but also others).
We shouldn't employ ugly workarounds like this. We should tell people to
avoid running buggy Ansible versions, or even to avoid problematic distros like Ubuntu.
* Enable location sharing in Element
* Update roles/custom/matrix-client-element/tasks/validate_config.yml
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update roles/custom/matrix-client-element/tasks/setup_install.yml
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Rename location sharing vars to be consistent with other vars
* Rename style.json to map_style.json
* Add m.tile_server section to /.well-known/matrix/client (illustrated below)
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
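For illustration, the resulting addition to `/.well-known/matrix/client` looks roughly like this (shown as YAML for consistency; the actual file is JSON, and the URL is hypothetical):

```yaml
m.tile_server:
  map_style_url: https://matrix.example.com/map_style.json
```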
* Add task to configure a standalone JVB on a different server (see the sketch after this list)
* add missing file
* set nginx config
* update prosody file and expose port 5222
* change variable name to server id
* formatting change
* use server id of jvb-1 for the main server
* adding documentation
* adding more jvbs
* rename variable
* revert file
* fix yaml error
* minor doc fixes
* renaming tags and introducing a common tag
* remove duplicates
* add mapping for jvb to hostname/ip
* missed a jvb_server
* Update roles/matrix-nginx-proxy/templates/nginx/conf.d/matrix-jitsi.conf.j2
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* PR review comments and additional documentation
* iterate on dict items
* Update docs/configuring-playbook-jitsi.md
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update docs/configuring-playbook-jitsi.md
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update docs/configuring-playbook-jitsi.md
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update docs/configuring-playbook-jitsi.md
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update docs/configuring-playbook-jitsi.md
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update docs/configuring-playbook-jitsi.md
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Update docs/configuring-playbook-jitsi.md
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* adding documentation around the xmpp setting
* add common after
* reduce the number of services during init of the additional jvb
* remove rogue i
* revert change to jitsi init as it's needed
* only run the jvb service on the additional jvb host
* updating docs
* reset default and add documentation about the websocket port
* fix issue rather merge with master
* add missing role introduced in master
* this role is required too
* Adding new jitsi jvb playbook, moving setup.yml to matrix.yml and creating soft link
* updating documentation
* revert accidental change to file
* add symlink back to roles to aid running of the jitsi playbook
* Remove extra space
* Delete useless playbooks/roles symlink
* Remove blank lines
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
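A hypothetical `host_vars` sketch for an additional JVB machine; the exact variable name comes from the role and may differ:

```yaml
# Assumed variable name - identifies this machine's JVB to the main server.
matrix_jitsi_jvb_server_id: jvb-2
```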
* Jitsi: control Sentry DSN using vars
* renaming variables
* Revert "renaming variables"
This reverts commit 4146c48f6a.
* set to connection string or 0 to disable
* Update comments
* Use empty string for default Sentry DSN variables
Both should work identically, but an empty string seems better
Co-authored-by: Slavi Pantaleev <slavi@devture.com>