While writing the Draupnir-for-all role documentation, it was forgotten that Draupnir needs the ability to write to the policy list in the main management room, the list that controls who can access the bot. This flaw was overlooked during development, because the bot naturally already had these powers there.
The upstream docs have this exact bug as well; the author of this commit will have to go and fix the upstream docs too in order to fully resolve it.
* Draupnir for all Role
* Draupnir for all Documentation
* Pin D4A to Develop until D4A patches are in a release.
* Update D4A Docs to mention pros and cons of D4A mode compared to normal mode
* Change Documentation to mention a fixed, simpler provisioning flow.
Use of /plain allows us to bypass the bugs encountered during the development of this role, where clients attempting to escape our wildcards caused the grief that led to using curl.
This reworded commit still explains that you can automatically inject content into the room if you want to.
* Emphasise the State of D4A mode
* Link to Draupnir-for-all docs and tweak the docs some
* Link to Draupnir-for-all from Draupnir documentation page
* Announce Draupnir-for-all
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
* feat: auto-accept-invite module and docs
* fix: name typos and adjust some variables that had been forgotten
* fix: accepting only direct messages should work now; also improve wording
* changed: only_direct_messages variable naming
* feat: add logger, add synapse workers config
* Fix typo and add details about synapse-auto-accept-invite
* Add newline at end of file
* Fix alignment
* Fix logger name for synapse_auto_accept_invite
The name of the logger needs to match the name of the Python module (see the configuration sketch after this commit message).
Ref: d673c67678/synapse_auto_accept_invite/__init__.py (L20)
* Add missing document start YAML annotation
* Remove trailing spaces
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
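For reference, below is a minimal sketch of the two Synapse configuration pieces this series touches. Only the module name (`synapse_auto_accept_invite`) comes from the commits above; the module class name and the direct-messages option are assumptions based on the module's upstream README, and the `modules` and `loggers` sections live in the homeserver config and the log config respectively.

```yaml
# Homeserver config: registering the module (class and option names are assumptions).
modules:
  - module: synapse_auto_accept_invite.InviteAutoAccepter
    config:
      accept_invites_only_for_direct_messages: true
```

```yaml
# Log config: the logger key must match the Python module name exactly.
loggers:
  synapse_auto_accept_invite:
    level: INFO
```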
I've just tested Rocky Linux v9 and it seems to work.
I suppose the Docker situation
(https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/300)
on RHEL v8 has improved, so it probably works too.
I see no reason AlmaLinux and other RHEL derivatives wouldn't work,
but I have neither tested them, nor have confirmation from others about
it.
It's mostly a matter of us being able to install:
- Docker, via https://github.com/geerlingguy/ansible-role-docker which
seems to support various distros
- a few other packages (systemd-timesyncd, etc).
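As a rough illustration (a hypothetical entry; the playbook's actual requirements file and version pinning may differ), pulling in that role via ansible-galaxy could look like this:

```yaml
# Hypothetical requirements.yml entry for the Docker role.
roles:
  - src: https://github.com/geerlingguy/ansible-role-docker.git
    scm: git
    name: docker
    version: master
```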
The list of supported distros has been reordered alphabetically.
I've heard reports of SUSE Linux working well too, so it may also be added
if confirmed again.
Closes https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/300
Fixup for https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/3017
This reverts 1cd82cf068 and also multiplies results by `1024`
so as to pass bytes to Synapse, not KB (as done before).
1cd82cf068 correctly documented what we were doing (passing KB values),
but what we were doing was incorrect.
Synapse's Config Conventions
(https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#config-conventions)
are supposed to clear it up, but they don't currently state what happens when you pass a plain number (without a unit suffix).
Thankfully, the source code tells us:
bc1db16086/synapse/config/_base.py (L181-L206)
> If an integer is provided it is treated as bytes and is unchanged.
>
> String byte sizes can have a suffix of ...
> No suffix is understood as a plain byte count.
We were previously passing strings, but that has been improved in 3d73ec887a.
Regardless, non-suffixed values seem to be treated as bytes by Synapse,
so this patch changes the variables to use bytes.
Moreover, we're moving from `matrix_synapse_memtotal_kb` to
`matrix_synapse_cache_size_calculations_memtotal_bytes` as working with
the base unit everywhere is preferable.
Here, we also introduce 2 new variables to allow for the caps to be
tweaked:
- `matrix_synapse_cache_size_calculations_max_cache_memory_usage_cap_bytes`
- `matrix_synapse_cache_size_calculations_target_cache_memory_usage_cap_bytes`
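A minimal sketch of the idea, assuming the `ansible_memtotal_mb` fact as the memory source and illustrative fractions for the caps (the playbook's actual expressions may differ):

```yaml
# Everything is expressed in bytes - a plain, unit-less number passed to Synapse
# is interpreted as a byte count.
matrix_synapse_cache_size_calculations_memtotal_bytes: "{{ ansible_memtotal_mb | int * 1024 * 1024 }}"

# Illustrative cap derivations only; the real default fractions may differ.
matrix_synapse_cache_size_calculations_max_cache_memory_usage_cap_bytes: "{{ (matrix_synapse_cache_size_calculations_memtotal_bytes | int / 8) | int }}"
matrix_synapse_cache_size_calculations_target_cache_memory_usage_cap_bytes: "{{ (matrix_synapse_cache_size_calculations_memtotal_bytes | int / 16) | int }}"
```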
* Modify Synapse Cache Factor to use Auto Tune
Synapse has the ability to, as its config calls it, auto-tune its caches.
This lets us set very high cache factors and then limit our actual resource use instead.
The defaults in this commit are 1/10th of the cache factor that Element apparently runs for EMS and matrix.org, combined with the upstream documentation defaults for auto-tuning (a configuration sketch follows this commit message).
* Add vars to Synapse main.yml to control cache related config
This commit adds various cache related vars to main.yml for Synapse.
Some are auto tune and some are just adding explicit ways to control upstream vars.
* Updated Auto Tune figures
The auto-tune figures have been bumped to a reasonable level in consultation with other community members. Please note that these defaults lean more towards a one-of-each-worker setup than towards a monolith setup.
* Fix YAML error
The playbook was not happy with the previous state of this patch, so this commit hopefully fixes that.
* Add to_json to various Synapse tuning related configs
* Fix incorrect indentation in homeserver.yaml.j2
* Minor cleanups
* Synapse Cache Autotuning Documentation
* Upgrade Synapse Cache Autotune to auto configure memory use
* Update Synapse Tuning docs to reflect automatic memory use configuration
* Fix linting errors in Synapse's main.yml
* Rename variables for consistency (matrix_synapse_caches_autotuning_* -> matrix_synapse_cache_autotuning_*)
* Remove FIX ME comment about Synapse's `cache_autotuning`
`docs/maintenance-synapse.md` and `roles/custom/matrix-synapse/defaults/main.yml`
already contain documentation about these variables and the default values we set.
* Improve "Tuning caches and cache autotuning" documentation for Synapse
* Announce larger Synapse caches and cache auto-tuning
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
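A minimal sketch of how these settings might end up rendered in `homeserver.yaml.j2`. The `matrix_synapse_cache_autotuning_*` prefix and the `to_json` usage come from the commits above; the exact variable suffixes and the `matrix_synapse_caches_global_factor` name are assumptions based on Synapse's `caches`/`cache_autotuning` options:

```yaml
caches:
  # A very high global factor is safe here, because auto-tuning caps the actual memory use.
  global_factor: {{ matrix_synapse_caches_global_factor | to_json }}
  cache_autotuning:
    max_cache_memory_usage: {{ matrix_synapse_cache_autotuning_max_cache_memory_usage | to_json }}
    target_cache_memory_usage: {{ matrix_synapse_cache_autotuning_target_cache_memory_usage | to_json }}
    min_cache_ttl: {{ matrix_synapse_cache_autotuning_min_cache_ttl | to_json }}
```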
Adds a Draupnir mention to the list. As for why we pull from Gnuxie: that is the official source of Docker images, as Draupnir used to be Gnuxie/Draupnir before it moved to The Draupnir Project.
The path rule was not working, because federation needs several endpoints to work.
Two of them are not under /_matrix/federation:
- /_matrix/key
- /_matrix/media
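A minimal sketch (hypothetical router name and label layout, not the playbook's actual labels) of a rule covering all three prefixes:

```yaml
traefik.http.routers.matrix-federation.rule: "PathPrefix(`/_matrix/federation`) || PathPrefix(`/_matrix/key`) || PathPrefix(`/_matrix/media`)"
```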
* Update configuring-playbook-traefik.md
Added documentation on how to host another server behind Traefik.
* Added MASH and docker options
Added the link to MASH and the compatibility adjustments.
Mentioned the preferred method with Docker containers.
Some rephrasing to make clear that the guide is intended for reverse-proxying non-Docker services.
* Improve wording in configuring-playbook-traefik.md
---------
Co-authored-by: Slavi Pantaleev <slavi@devture.com>
After some checking, it seems like there's `/_synapse/client/oidc`,
but no such thing as `/_synapse/oidc`.
I'm not sure why we've been reverse-proxying these paths for so long
(even in as far back as the `matrix-nginx-proxy` days), but it's time we
put a stop to it.
The OIDC docs have been simplified. There's no need to ask people to
expose the useless `/_synapse/oidc` endpoint. OIDC requires
`/_synapse/client/oidc` and `/_synapse/client` is exposed by default
already.
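To illustrate (hypothetical router name and label layout, not the playbook's actual labels), the default client routing already covers everything OIDC needs:

```yaml
# /_synapse/client (which contains /_synapse/client/oidc) is already matched here,
# so no separate /_synapse/oidc rule is necessary.
traefik.http.routers.matrix-synapse-client.rule: "PathPrefix(`/_matrix`) || PathPrefix(`/_synapse/client`)"
```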
Issues and Pull Requests were not migrated to the new
organization/repository, so `matrix-org/synapse/pull` and
`matrix-org/synapse/issues` references were kept as-is.
`matrix-org/synapse-s3-storage-provider` references were also kept,
as that module still continues living under the `matrix-org` organization.
This patch mainly aims to change documentation-related things, not actual
usage in full yet. To polish that off, another, more comprehensive patch is coming later.
The old variables still work. The global lets us avoid
auto-detection logic like we're currently doing for
`matrix_nginx_proxy_proxy_matrix_federation_api_enabled`.
In the future, we'd just be able to reference
`matrix_homeserver_federation_enabled` and know the up-to-date value
regardless of homeserver.
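A minimal sketch (hypothetical wiring, not the playbook's actual defaults) of how the global flag removes the need for auto-detection:

```yaml
# A single homeserver-agnostic flag...
matrix_homeserver_federation_enabled: true

# ...which homeserver roles can default to...
matrix_synapse_federation_enabled: "{{ matrix_homeserver_federation_enabled }}"

# ...and which consumers can reference directly, instead of auto-detecting the value
# from whichever homeserver implementation happens to be enabled.
matrix_nginx_proxy_proxy_matrix_federation_api_enabled: "{{ matrix_homeserver_federation_enabled }}"
```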
This was meant to serve as an intermediary for services needing to reach
the homeserver. It was used like that for a while in this
`bye-bye-nginx-proxy` branch, but was never actually public.
It has recently been superseded by homeserver-like services injecting
themselves into a new internal Traefik entrypoint
(see `matrix_playbook_internal_matrix_client_api_traefik_entrypoint_*`),
so `matrix-homeserver-proxy` is no longer necessary.
---
This is probably a good moment to share some benchmarks and reasons
for going with the internal Traefik entrypoint as opposed to this nginx
service.
1. (1400 rps) Directly to Synapse (`ab -n 1000 -c 100 http://matrix-synapse:8008/_matrix/client/versions`)
2. (~900 rps) Via `matrix-homeserver-proxy` (nginx) proxying to Synapse (`ab -n 1000 -c 100 http://matrix-homeserver-proxy:8008/_matrix/client/versions`)
3. (~1200 rps) Via the new internal entrypoint of Traefik (`matrix-internal-matrix-client-api`) proxying to Synapse (`ab -n 1000 -c 100 http://matrix-traefik:8008/_matrix/client/versions`)
Besides Traefik being quicker for some reason, there are also other
benefits to not having this `matrix-homeserver-proxy` component:
- we can reuse what we have in terms of labels. Services can register a few extra labels to also appear on the new Traefik entrypoint (see the sketch after this list)
- we don't need services (like `matrix-media-repo`) to inject custom nginx configs into `matrix-homeserver-proxy`. They just need to register labels, like they do already.
- Traefik seems faster than nginx on this benchmark for some reason, which is a nice bonus
- no need to run one extra container (`matrix-homeserver-proxy`) and execute one extra Ansible role
- no need to maintain a setup where some people run the `matrix-homeserver-proxy` component (because they have route-stealing services like `matrix-media-repo` enabled) and others run an optimized setup without this component, where everything needs to be rewired to talk to the homeserver directly. Now, everyone can go through Traefik and we can all run an identical setup
Downsides of the new Traefik entrypoint setup are that:
- all addon services that need to talk to the homeserver now depend on Traefik
- people running their own Traefik setup will be inconvenienced - they
need to manage one additional entrypoint
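A minimal sketch (hypothetical service name, port and label values) of an addon registering itself on the internal entrypoint, instead of injecting custom nginx configuration:

```yaml
traefik.enable: "true"
traefik.http.routers.matrix-media-repo-internal.rule: "PathPrefix(`/_matrix/media`)"
traefik.http.routers.matrix-media-repo-internal.entrypoints: "matrix-internal-matrix-client-api"
traefik.http.services.matrix-media-repo-internal.loadbalancer.server.port: "8008"
```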
We'd be adding integration with an internal Traefik entrypoint
(`matrix_playbook_internal_matrix_client_api_traefik_entrypoint`),
so renaming helps disambiguate things.
There's no need for deprecation tasks, because the old names
have only been part of this `bye-bye-nginx-proxy` branch and not used by
anyone publicly.