`auto_join_mxid_localpart` defines the local part of the user ID which is used to create auto-join rooms. The variable needs to be set so that new users can be invited to any auto-join rooms which are set to invite-only.
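For context, a minimal `homeserver.yaml` sketch of the related upstream Synapse settings (the room alias and localpart below are only examples, not the playbook's defaults):

```yaml
# Illustrative values only
auto_join_rooms:
  - "#announcements:example.com"
autocreate_auto_join_rooms: true
# The localpart of the user that creates the (invite-only) auto-join rooms
# and invites new users into them:
auto_join_mxid_localpart: "admin"
```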
* feat: auto-accept-invite module and docs
* fix: naming typos and some forgotten variable adjustments
* fix: accepting only direct messages should work now; better wording
* changed: only_direct_messages variable naming
* feat: add logger, add synapse workers config
* Fix typo and add details about synapse-auto-accept-invite
* Add newline at end of file
* Fix alignment
* Fix logger name for synapse_auto_accept_invite
The name of the logger needs to match the name of the Python module (see the sketch after this list).
Ref: d673c67678/synapse_auto_accept_invite/__init__.py (L20)
* Add missing document start YAML annotation
* Remove trailing spaces
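For reference, a sketch of how such a logger entry could look in Synapse's logging configuration (fragment only; the log level is illustrative):

```yaml
# The logger name must match the Python module name, i.e. `synapse_auto_accept_invite`,
# for its log lines to be picked up and routed correctly.
loggers:
  synapse_auto_accept_invite:
    level: INFO
```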
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
We're casting everything to `int`, but since Jinja templates are
involved, these values end up as strings anyway.
Doing `| int | to_json` is good, but we should only cast numbers to
integer, not empty strings, as that (0) may be interpreted differently
by Synapse.
To turn off auto-tuning, one is possibly supposed to pass empty strings:
> This option defaults to off, enable it by providing values for the sub-options listed below.
It could be that `0` is also considered "no value provided", but I
haven't verified that.
Related to https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/3017
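A minimal sketch of that idea (the `matrix_synapse_example_setting` variable name is made up): cast to integer only when a value was actually provided, and otherwise pass the empty string through unchanged.

```jinja
{# homeserver.yaml.j2 sketch - `matrix_synapse_example_setting` is a hypothetical variable #}
example_setting: {{ (matrix_synapse_example_setting | int if matrix_synapse_example_setting != '' else matrix_synapse_example_setting) | to_json }}
```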
* Modify Synapse Cache Factor to use Auto Tune
Synapse has the ability to, as its config calls it, auto-tune caches.
This ability lets us set very high cache factors and then limit our resource use instead.
The cache-factor default in this commit is 1/10th of what Element apparently runs for EMS and matrix.org, and the auto-tune defaults follow the upstream documentation (see the sketch after this list).
* Add vars to Synapse main.yml to control cache related config
This commit adds various cache related vars to main.yml for Synapse.
Some are for auto-tune, and some just add explicit ways to control upstream vars.
* Updated Auto Tune figures
Auto-tune figures have been bumped, in consultation with other community members, to a reasonable level. Please note these defaults lean more towards a one-of-each-worker setup than towards a monolith setup.
* Fix YML Error
The playbook is not happy with the previous state of this patch, so this commit hopefully fixes it.
* Add to_json to various Synapse tuning related configs
* Fix incorrect indentation in homeserver.yaml.j2
* Minor cleanups
* Synapse Cache Autotuning Documentation
* Upgrade Synapse Cache Autotune to auto configure memory use
* Update Synapse Tuning docs to reflect automatic memory use configuration
* Fix linting errors in Synapse's main.yml
* Rename variables for consistency (matrix_synapse_caches_autotuning_* -> matrix_synapse_cache_autotuning_*)
* Remove FIX ME comment about Synapse's `cache_autotuning`
`docs/maintenance-synapse.md` and `roles/custom/matrix-synapse/defaults/main.yml`
already contain documentation about these variables and the default values we set.
* Improve "Tuning caches and cache autotuning" documentation for Synapse
* Announce larger Synapse caches and cache auto-tuning
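For reference, a hedged sketch of the kind of `homeserver.yaml` configuration this enables. The `cache_autotuning` values below are the upstream documentation defaults, while the `global_factor` value is merely illustrative, not the playbook's actual default:

```yaml
caches:
  global_factor: 10
  cache_autotuning:
    max_cache_memory_usage: 1024M
    target_cache_memory_usage: 758M
    min_cache_ttl: 5m
```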
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* Fix s3-storage-provider migrate and shell: the container also needs attachment to the Postgres network
* Connect the s3-storage-provider migrate container to multiple networks in multiple steps (see the sketch after this list)
Multiple `--network` calls lead to:
> docker: Error response from daemon: Container cannot be connected to network endpoints: NETWORK_1 NETWORK_2.
* Connect the s3-storage-provider shell container to multiple networks in multiple steps
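A minimal sketch of the multi-step approach, assuming hypothetical container, network and image names (the playbook's actual tasks differ): create the container attached to one network first, then attach the additional network with a separate `docker network connect` call.

```yaml
- name: Create the migrate container attached to the Traefik network
  ansible.builtin.command:
    cmd: docker create --name=matrix-s3-migrate --network=traefik example/image:latest

- name: Attach the migrate container to the Postgres network in a separate step
  ansible.builtin.command:
    cmd: docker network connect matrix-postgres matrix-s3-migrate

- name: Start the migrate container
  ansible.builtin.command:
    cmd: docker start --attach matrix-s3-migrate
```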
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
After some checking, it seems like there's `/_synapse/client/oidc`,
but no such thing as `/_synapse/oidc`.
I'm not sure why we've been reverse-proxying these paths for so long
(even in as far back as the `matrix-nginx-proxy` days), but it's time we
put a stop to it.
The OIDC docs have been simplified. There's no need to ask people to
expose the useless `/_synapse/oidc` endpoint. OIDC requires
`/_synapse/client/oidc` and `/_synapse/client` is exposed by default
already.
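For illustration, a hedged sketch of a Traefik router rule in label-file style (the router name and exact rule are assumptions, not the playbook's verbatim output):

```
# `/_synapse/client/oidc` is covered by the `/_synapse/client` prefix,
# so no separate `/_synapse/oidc` rule is needed.
traefik.http.routers.matrix-synapse.rule=PathPrefix(`/_matrix`) || PathPrefix(`/_synapse/client`)
```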
Issues and Pull Requests were not migrated to the new
organization/repository, so `matrix-org/synapse/pull` and
`matrix-org/synapse/issues` references were kept as-is.
`matrix-org/synapse-s3-storage-provider` references were also kept,
as that module continues to live under the `matrix-org` organization.
This patch mainly aims to change documentation-related things, not actual
usage in full yet. To polish that, another, more comprehensive patch is coming later.
This moves the comments from being just in Jinja,
to actually ending up in the generated `labels` file,
which makes inspection of the final result easier.
Also, some new lines were added here and there to make labels
more legible.
The generated file may still include weird new-lines due to
various `if` statements yielding content or not, but that's not so ugly
anymore - now that we have proper start/end sections that are visible in
the final `labels` file.
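A tiny sketch of the difference, with an illustrative label (not the playbook's actual template content):

```jinja
{# This Jinja comment never reaches the generated file. #}
# This plain comment survives rendering and ends up in the final `labels` file,
# which is what makes start/end sections visible there.
# matrix-synapse labels - start
traefik.enable=true
# matrix-synapse labels - end
```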
We'd be adding integration with an internal Traefik entrypoint
(`matrix_playbook_internal_matrix_client_api_traefik_entrypoint`),
so renaming helps disambiguate things.
There's no need for deprecation tasks, because the old names
have only been part of this `bye-bye-nginx-proxy` branch and not used by
anyone publicly.
This is an attempt at optimizing service startup.
The effect is most pronounced when many services are restarted one by one.
The systemd service manager role sometimes does this - for example when `just install-service synapse` runs.
In such cases, a 5-second delay for each Synapse worker service
(or other bridge/bot service that waits on the homeserver) quickly adds up to a lot.
When services are all stopped fully and then started, the effect is not so pronounced, because
`matrix-synapse.service` starts first and pulls all worker services (defined as `Wants=` for it).
Later on, when the systemd service manager role "starts" these worker services, they're started already.
Even if they had a 5-second wait each, it would have happened in parallel.
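To illustrate the relationship described above, a hedged sketch of the relevant unit-file pieces (the unit names and the exact delay mechanism are assumptions, not verbatim playbook content):

```ini
# matrix-synapse.service - pulls in the worker units,
# so a full stop/start brings them all up together
[Unit]
Wants=matrix-synapse-worker-generic-worker-0.service

# matrix-synapse-worker-generic-worker-0.service - a per-service wait like this
# is what adds up when many services are restarted one by one
[Service]
ExecStartPre=/bin/sh -c 'sleep 5'
```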