Reference: https://ansible-lint.readthedocs.io/en/latest/default_rules/#git-latest
Our variable naming is not necessarily consistent across roles.
I've tried to follow the naming conventions of each individual role.
All new variables are suffixed with `_version`, but the prefixes
differ somewhat.
Details here: https://docs.ansible.com/ansible/devel/porting_guides/porting_guide_6.html#id36
Basically:
```yaml
- name: Prior to 2.13
  debug:
    msg: '[1] + {{ [2] }}'

- name: 2.13 and forward
  debug:
    msg: '{{ [1] + [2] }}'
```
Interestingly, we had already been using the new/safe syntax in lots
of places. We were still using the broken one in many others, though.
Hopefully all instances were fixed by this patch.
People often report and ask about these "failures".
More so previously, when the `docker kill/rm` output was collected,
but it still happens now when people run `systemctl status
matrix-something` and notice that it says "FAILURE".
Suppressing this output to avoid further time being wasted on
explaining that it's expected.
The `to_nice_yaml` helper will, by default, wrap string YAML values on
the first space after column 80. In the worst case, this can yield
invalid YAML syntax. More details in Ansible's documentation here:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#formatting-data-yaml-and-json
In short, you need to explicitly pass a `width` argument with some
large value to avoid the line wrapping.
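In practice, the fix looks something like this (a minimal sketch; the
variable name and destination path are placeholders):
```yaml
- name: Write configuration without wrapping long string values
  ansible.builtin.copy:
    content: "{{ matrix_something_configuration | to_nice_yaml(indent=2, width=999999) }}"
    dest: /path/to/config.yaml
```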
Reverts b1b4ba501f, 90c9801c56, a3c84f78ca, ..
I haven't really traced it (yet), but on some servers, I'm observing
`ansible-playbook ... --tags=start` completing very slowly while
waiting for services to stop. I can't reproduce this on all Matrix
servers I manage. I suspect that either the systemd version is to
blame or that some specific service doesn't respond well to some
`docker kill/rm` command.
`ExecStop` seems to work great in all cases and it's what we've been
using for a very long time, so I'm reverting to that.
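For reference, the `ExecStop`-based teardown pattern looks roughly
like this (a sketch only; the unit fragment and container name are
placeholders, not the actual unit file):
```ini
[Service]
ExecStart=/usr/bin/docker run --name matrix-something ...
# The "-" prefix tells systemd to tolerate a non-zero exit code.
ExecStop=-/usr/bin/docker kill matrix-something
ExecStop=-/usr/bin/docker rm matrix-something
```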
Until now, we were leaving services "enabled"
(symlinks in /etc/systemd/system/multi-user.target.wants/).
We clean these up now. Broken symlinks may still exist in older
installations that enabled/disabled services. We're not taking care
to fix these up. It's just a cosmetic defect anyway.
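The cleanup amounts to something like this (a minimal sketch with a
hypothetical service name; `enabled: false` is what removes the
`multi-user.target.wants/` symlink):
```yaml
- name: Ensure matrix-something.service is disabled
  ansible.builtin.systemd:
    name: matrix-something.service
    enabled: false
```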
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having
to answer with "no, this is to be expected", we'd rather eliminate it
now and make the logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and cleanup fails, and we fail to report it because of error
suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
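The suppression itself is just a standard shell redirect applied to
those defensive cleanup commands. A sketch with a placeholder
container name (the "-" prefix makes systemd tolerate the non-zero
exit code):
```ini
ExecStartPre=-/bin/sh -c 'docker kill matrix-something 2>/dev/null'
ExecStartPre=-/bin/sh -c 'docker rm matrix-something 2>/dev/null'
```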