4.95-r0-1 is the first container image tag that is available as a multi-arch image
with support for linux/amd64, linux/arm64/v8 (arm64) and linux/arm/v7 (arm32),
so self-building is no longer necessary on any of these platforms.
2.2.3 is the first container image tag that is available as a multi-arch image
with support for linux/amd64, linux/arm64/v8 (arm64) and linux/arm/v7 (arm32),
so self-building is no longer necessary on any of these platforms.
The goal is to have a single variable which tells us which homeserver
software is in use. This is much simpler than having if/elif checks for
variables (like `matrix_synapse_enabled`, `matrix_dendrite_enabled`, etc.)
everywhere.
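A minimal sketch of the idea (the variable name here is an assumption, not necessarily what the playbook ends up using):

```yaml
# One variable identifies the homeserver implementation in use...
matrix_homeserver_implementation: synapse  # or: dendrite

# ...so task conditions become a single comparison, e.g.:
#   when: matrix_homeserver_implementation == 'synapse'
```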
This change forces Ansible to decrypt the variable (if it's ansible-vault-encrypted), to avoid this error:
> {"msg": "Unexpected templating type error occurred on ({{ matrix_synapse_macaroon_secret_key | password_hash('sha512') }}): secret must be unicode or bytes, not ansible.parsing.yaml.objects.AnsibleVaultEncryptedUnicode"}
Every other variable in the playbook was found to have no problems with encryption.
The change has no negative impact on non-encrypted matrix_synapse_macaroon_secret_key.
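One common way to handle this is piping the value through the `string` filter before hashing, which yields the plain decrypted text. A hedged sketch (the left-hand variable is illustrative):

```yaml
matrix_synapse_password_hash_example: "{{ matrix_synapse_macaroon_secret_key | string | password_hash('sha512') }}"
```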
* Fix string concatenation for `matrix_synapse_account_threepid_delegates_email` and `matrix_synapse_account_threepid_delegates_msisdn`
* .editorconfig should not be ignored
* Restore .gitignore
Co-authored-by: b <b@b>
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
This bridge doesn't support SQLite anyway, so it's not necessary
to carry around configuration fields and code for migration from SQLite
to Postgres. There's nothing to migrate.
This commit introduces a new role that downloads and installs the
Prometheus community Postgres exporter (https://github.com/prometheus-community/postgres_exporter).
A new credential is added to matrix_postgres_additional_databases that
allows the exporter access to the database to gather statistics.
A new dashboard was added to the grafana role, with some refactoring
to enable the dashboard only if the new role is enabled.
I've included some basic instructions for how to enable the role in
the Docs section.
In terms of testing, I've tested enabling the role, and then disabling
it, to make sure it cleans up the container and the systemd service.
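For reference, the shape of such an entry (the names and the password variable here are illustrative, not necessarily the exact ones the role uses):

```yaml
matrix_postgres_additional_databases:
  - name: matrix_prometheus_postgres_exporter
    username: matrix_prometheus_postgres_exporter
    password: "{{ matrix_prometheus_postgres_exporter_database_password }}"
```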
Expected to have regressed after https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/1008
This patch comes with its own downsides (as described in the comments
for matrix_prometheus_node_exporter_container_http_host_bind_port),
but at least there's:
- no security issue
- metrics remain readable from matrix-prometheus (even if the network metrics are inaccurate)
A better patch is certainly welcome.
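For example, the port can be exposed to the host along these lines (9100 is just the conventional node-exporter port; the actual default may differ):

```yaml
matrix_prometheus_node_exporter_container_http_host_bind_port: '127.0.0.1:9100'
```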
Not sure why this had been done in the first place.
It doesn't make any sense.
There's no relation between matrix-nginx-proxy and
prometheus-node-exporter.
This variable was previously undefined in the role and was only getting
defined via `group_vars/matrix_servers`.
We now properly initialize it (with a good default value) in the role
itself.
- add `matrix_postgres_backup_databases`, built on top of `matrix_postgres_additional_databases`
- `POSTGRES_DB` is now set directly from `matrix_postgres_backup_databases` when rendering `templates/env-postgres-backup.j2`
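A hedged sketch of the relationship (the `homeserver` entry and the exact expression are assumptions):

```yaml
matrix_postgres_backup_databases: "{{ ['homeserver'] + (matrix_postgres_additional_databases | map(attribute='name') | list) }}"
# templates/env-postgres-backup.j2 can then render, roughly:
#   POSTGRES_DB={{ matrix_postgres_backup_databases | join(',') }}
```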
People who were disabling matrix-nginx-proxy (in favor of their own
nginx webserver) and also overriding `matrix_federation_public_port`,
found that the generated nginx configuration still hardcoded `8448`,
which forced their nginx server to use that, regardless of the fact
that `matrix_federation_public_port` was pointing elsewhere.
We now allow for the in-container federation port to be configurable,
and also automatically wire things properly.
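For example, a setup with its own webserver can now do something like the following, and the generated configuration will follow the custom port instead of hardcoding 8448:

```yaml
matrix_nginx_proxy_enabled: false
matrix_federation_public_port: 8449
```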
Also includes the dashboards for Synapse and for Node Exporter.
Again, this has only been tested on Debian amd64 so far, but the Grafana Docker image is available for arm64 and arm32. Nice.
Basic system stats, to show stuff the Synapse metrics
can't show, such as resource usage by bridges, etc.
Seems to work fine as well.
This too has only been tested on Debian amd64 so far.
I felt that adding another variable was probably going to be the easiest way to do this. I may end up adding another variable to enable this feature, for consistency with some of the other things.
We do this by creating one more layer of indirection.
First, we reach a generic vhost handling matrix.DOMAIN.
A bunch of override rules are added there (capturing traffic to send to
ma1sd, etc). nginx-status and similar generic things also live there.
We then proxy to the homeserver on some other vhost (only Synapse being
available right now, but repointing this to Dendrite or other will be
possible in the future).
Then that homeserver-specific vhost does its thing to proxy to the
homeserver. It may or may not use workers, etc.
Without matrix-corporal, the flow is now:
1. matrix.DOMAIN (matrix-nginx-proxy/matrix-domain.conf)
2. matrix-nginx-proxy/matrix-synapse.conf
3. matrix-synapse
With matrix-corporal enabled, it becomes:
1. matrix.DOMAIN (matrix-nginx-proxy/matrix-domain.conf)
2. matrix-corporal
3. matrix-nginx-proxy/matrix-synapse.conf
4. matrix-synapse
(matrix-corporal gets injected at step 2).
There was a `matrix_nginx_proxy_enabled|default(False)` check, but:
- it didn't seem to work reliably for some reason (hmm)
- referring to a `matrix_nginx_proxy_*` variable from within the
`matrix-synapse` role is not ideal
- exposing always happened on `127.0.0.1`, which may not be good enough
for some rarer setups (where one's own webserver is external to the host) -- see the sketch below
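A hedged sketch of what configurable exposure can look like (the variable name is an assumption):

```yaml
# Default: expose the Client-Server API only locally...
matrix_synapse_container_client_api_host_bind_port: '127.0.0.1:8008'
# ...while a setup whose webserver lives on another host could use e.g. '0.0.0.0:8008'.
```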
I guess it didn't hurt to do it until now, but it's not great serving
federation APIs on the client-server API port, etc.
matrix-corporal doesn't work yet (still something to be solved in the
future), but its firewalling operations will also be sabotaged
by Client-Server APIs being served on the federation port (it's a way to get around its firewalling).
In short, this makes Synapse a 2nd class citizen,
preparing for a future where it's just one-of-many homeserver software
options.
We also no longer have a default Postgres superuser password,
which improves security.
The changelog explains more as to why this was done
and how to proceed from here.
I had intentionally held it back in 39ea3496a4
until:
- it received more testing (there were a few bugs during the
migration, but now it seems OK)
- this migration guide was written
The answer to these is: it's good to have them in both places.
The role defines the obvious things it depends on (not knowing
what setup it will find itself into), and then
`group_vars/matrix_servers` "extends" it based on everything else it
knows (the homeserver being Synapse, whether or not the internal
Postgres server is being used, etc.)
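A hedged illustration of the two layers (the role and variable are hypothetical; `matrix_postgres_enabled` is assumed to exist):

```yaml
# roles/matrix-example/defaults/main.yml -- the obvious, self-contained default:
#   matrix_example_database_hostname: ''

# group_vars/matrix_servers -- "extends" it based on what it knows about the rest of the setup:
matrix_example_database_hostname: "{{ 'matrix-postgres' if matrix_postgres_enabled | default(false) else '' }}"
```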
Otherwise the postgres upgrade fails with the following error:
> Unexpected templating type error occurred on ({{ [matrix_postgres_connection_username] + matrix_postgres_additional_databases|map(attribute='username') }}): can only concatenate list (not "generator") to list
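The usual remedy for this class of Jinja2 error is piping `map()` through the `list` filter before concatenating. A hedged sketch (the left-hand variable name is made up):

```yaml
matrix_postgres_import_usernames_example: "{{ [matrix_postgres_connection_username] + (matrix_postgres_additional_databases | map(attribute='username') | list) }}"
```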
Now that 0.7.2 is out, the Docker image supports Postgres
and we can do the (SQLite -> Postgres) migration.
I've also found out that we needed to fix up the `tokens.ex_date` column
data type a bit to prevent matrix-registration from raising exceptions
when comparing `datetime.now()` with `ex_date` coming from the database.
Example:
> File "/usr/local/lib/python3.8/site-packages/matrix_registration/tokens.py", line 58, in valid
> expired = self.ex_date < datetime.now()
> TypeError: can't compare offset-naive and offset-aware datetimes
We were running into conflicts, because having initialized
the roles (users) and databases, trying to import leads to
errors (role XXX already exists, etc.).
We were previously ignoring the Synapse database (`homeserver`)
when upgrading/importing, because that one gets created by default
whenever the container starts.
For our additional databases, it's a similar situation now.
It's not created by default as soon as Postgres starts with an empty
database, but rather we create it as part of running the playbook.
So we either need to skip those role/database creation statements
while upgrading/importing, or to avoid creating the additional database
and rely on the import for that. I've gone for the former, because
it's already similar to what we were doing and it's simpler
(it lets `setup_postgres.yml` be the same in all scenarios).
Auto-migration and everything seems to work. It's just that
matrix-registration cannot load the Python modules required
for talking to a Postgres database.
Tracked here: https://github.com/ZerataX/matrix-registration/issues/44
Until this gets fixed, we'll continue defaulting to 'sqlite'.
I was thinking that it makes sense to be more specific,
and using `_postgres_` also separated these variables
from the `_database_` variables that ended up in bridge configuration.
However, @jdreichmann makes a good point
(https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/740#discussion_r542281102)
that we don't need to be so specific and can allow for other engines (like MySQL) to use these variables.
The only one that remains is `matrix_synapse_database_password`, but
that's something old and should be dealt with separately in the future
(unless it remains as it is).
Using the result of `password_hash` works for creating them,
but authentication seems to be failing with some tools like pgloader.
It's possible that we're not escaping things properly somewhere.
Ideally, it'd be nice to solve that. But the easier (and still
relatively safe/good) solution is to just turn that password hash
into a UUID that's safe for passing around without worrying about
escaping.
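A hedged sketch of the approach (variable names and the salt are illustrative):

```yaml
# `to_uuid` derives a stable UUID from the hash, giving a password that can be
# passed to other tools (pgloader, etc.) without any escaping concerns.
matrix_example_database_password: "{{ matrix_example_secret | password_hash('sha512', 'examplesalt') | to_uuid }}"
```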
People can toggle between them now. The playbook also defaults
to using SQLite if an external Postgres server is used.
Ideally, we'd be able to create databases/users in external Postgres
servers as well, but our initialization logic (and `docker run` command,
etc.) hardcode too many things right now.
If a service is enabled, a database for it is created in Postgres with a unique password. The service can then use this database for data storage, instead of relying on SQLite.
ma1sd requires the openid endpoints for certain functionality.
Example: 90b2b5301c/src/main/java/io/kamax/mxisd/auth/AccountManager.java (L67-L99)
If federation is disabled, we still need to expose these openid APIs on the
federation port.
Previously, we were doing similar magic for Dimension.
As per its documentation, when running unfederated, one is to enable
the openid listener as well. As per their recommendation, people
are advised to enable it on the Client-Server API port
and use the `federationUrl` variable to override where the federation
port is (making federation requests go to the Client-Server API).
Because ma1sd always uses the federation port (unless you do some
DNS overwriting magic using its configuration -- which we'd rather not
do), it's better if we just default to putting the `openid` listener
where it belongs - on the federation port.
With this commit, we retain the "automatically enable openid APIs" thing
we've been doing for Dimension, but move it to the federation port instead.
We also now do the same thing when ma1sd is enabled.
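A simplified sketch of the resulting Synapse listener (the playbook-generated homeserver.yaml differs in detail; 8048 is assumed to be the in-container federation port):

```yaml
listeners:
  - port: 8048
    type: http
    tls: false
    x_forwarded: true
    resources:
      # `openid` is served here even when federation itself is disabled,
      # so ma1sd can keep talking to the federation port.
      - names: [openid]
```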
We'd like the roles to be self-contained (as much as possible).
Thus, the `matrix-nginx-proxy` shouldn't reference any variables from
other roles. Instead, we rely on injection via
`group_vars/matrix_servers`.
Related to #681 (Github Pull Request)
While v1.22.0 supposedly has multi-arch Docker images
(thanks to https://github.com/matrix-org/synapse/pull/7921),
I can't find them on Docker Hub yet, so I'm backing out of this change
for now and letting people fall back to self-building there.
Hopefully fixes an error like this (which I haven't been able to
reproduce, but..):
> [modules/xmpp/strophe.util.js] <Object.i.Strophe.log>: Strophe: Error: Failed to construct 'RTCPeerConnection': 'matrix.DOMAIN' is not one of the supported URL schemes 'stun', 'turn' or 'turns'.
We do this for 2 reasons:
- so we can control things which are not controllable using environment
variables (for example `stunServers` in jitsi/web, since we don't wish
to use the hardcoded Google STUN servers if our own Coturn is enabled)
- so playbook variable changes will properly rebuild the configuration.
When using Jitsi environment variables, the configuration is only built
once (the first time) and never rebuilt again. This is not
consistent with the rest of the playbook and with how Ansible operates.
We're not perfect at it (yet), because we still let the Jitsi containers
generate some files on their own, but we are closer and it should be
good enough for most things.
Related to #415 (Github Pull Request).
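For instance, when our own Coturn is enabled, the generated Jitsi web configuration can point `stunServers` at it instead of at Google's servers (the variable name below is an assumption):

```yaml
matrix_jitsi_web_stun_servers: ['stun:matrix.DOMAIN:3478']
```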
This keeps the roles cleaner and more independent of matrix-base,
which may be important for people building their own playbook
out of the individual roles and not using the matrix-base role.
Can you double check that the way I have this set only exposes it locally? It is important that the manhole is not available to the outside world since it is quite powerful and the password is hard coded.
Well, `config.yaml` has been playbook-managed for a long time.
It's now extended to match the default sample config of the Discord
bridge.
With this patch, we also make `registration.yaml` playbook-managed,
which leads us to consistency with all other bridges.
Along with that, we introduce `./config` and `./data` separation,
like we do for the other bridges.
According to
https://passlib.readthedocs.io/en/stable/lib/passlib.hash.sha512_crypt.html:
> salt (str) – Optional salt string. If not specified, one will be autogenerated (this is recommended).
> If specified, it must be 0-16 characters, drawn from the regexp range [./0-9A-Za-z].
Until now, we were using invalid characters (like `-`). We were also
going over the requested length limit of 16 characters.
This is most likely what was causing `ValueError` exceptions for some people,
as reported in #209 (Github Issue).
Ansible's source code (`lib/ansible/utils/encrypt.py`) shows that Ansible tries
to use passlib if available and falls back to Python's `crypt` module if not.
On macOS, `crypt.crypt` doesn't seem to work, so Ansible always requires passlib.
Looks like crypt is forgiving when length or character requirements are
not obeyed. It would auto-trim a salt string to make it work, which means
that we could end up with the same hash if we call it with salts which are only
different after their 16th character.
For these reasons (crypt auto-trimming and passlib downright complaining),
we're now using shorter and more diverse salts.
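A hedged sketch of producing a compliant salt (names are illustrative): deriving it from a hex digest keeps it within `[./0-9A-Za-z]` and exactly 16 characters long.

```yaml
example_salt: "{{ (example_secret | hash('sha512'))[0:16] }}"
example_password_hash: "{{ example_secret | password_hash('sha512', example_salt) }}"
```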
Related to #193, but for the Facebook bridge.
(other bridges can be changed to do the same later).
This patch makes the bridge configuration entirely managed by the
Ansible playbook. The bridge's `config.yaml` and `registration.yaml`
configuration files are regenerated every time the playbook runs.
This allows us to apply updates to those files and to avoid
people having to manage the configuration files manually on the server.
-------------------------------------------------------------
A deficiency of the current approach to dumping YAML configuration in
`config.yaml` is that we strip all comments from it.
Later on, when the bridge actually starts, it will load and re-dump it
(this time with comments), which will make the `config.yaml` file
change.
Subsequent playbook runs will report "changed" for the
"Ensure mautrix-facebook config.yaml installed" task, which is a little
strange.
We might wish to improve this in the future, if possible.
Still, it's better to have a (usually) somewhat meaningless "changed"
task than what we had -- never rebuilding the configuration.