Merge remote-tracking branch 'upstream/master'

commit 0b1ccda267 by josiah, 3 years ago (branch: master)

@ -130,7 +130,7 @@ When updating the playbook, refer to [the changelog](CHANGELOG.md) to catch up w
- Matrix room: [#matrix-docker-ansible-deploy:devture.com](https://matrix.to/#/#matrix-docker-ansible-deploy:devture.com)
- IRC channel: `#matrix-docker-ansible-deploy` on the [Freenode](https://freenode.net/) IRC network (irc.freenode.net)
- IRC channel: `#matrix-docker-ansible-deploy` on the [Libera Chat](https://libera.chat/) IRC network (irc.libera.chat:6697)
- GitHub issues: [spantaleev/matrix-docker-ansible-deploy/issues](https://github.com/spantaleev/matrix-docker-ansible-deploy/issues)

@ -8,9 +8,7 @@ Members can be assigned a server from Digitalocean, or they can connect their ow
The AWX system is arranged into 'members', each with their own 'subscriptions'. After creating a subscription the user enters the 'provision stage', where they define the URLs they will use, the server's location and whether or not there's already a website at the base domain. They then proceed to the 'deploy stage', where they can configure their Matrix server.
Ideally this system can manage the updates, configuration, backups and monitoring on it's own. It is an extension of the popular deploy script [spantaleev/matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy).
Warning: This project is currently alpha quality and should only be run by the brave.
This system can manage the updates, configuration, import and export, backups and monitoring on its own. It is an extension of the popular deploy script [spantaleev/matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy).
## Other Required Playbooks
@ -23,6 +21,7 @@ The following repositories allow you to copy and use this setup:
[Ansible Provision Server](https://gitlab.com/GoMatrixHosting/ansible-provision-server) - Used by AWX members to perform initial configuration of their DigitalOcean or On-Premises server.
## Testing Fork For This Playbook
Updates to this section are trialled here:

@ -4,8 +4,6 @@ The playbook can install and configure the [Mjolnir](https://github.com/matrix-o
See the project's [documentation](https://github.com/matrix-org/mjolnir) to learn what it does and why it might be useful to you.
Note: the playbook does not currently support the Mjolnir Synapse module. The playbook does support another antispam module, see [Setting up Synapse Simple Antispam](configuring-playbook-synapse-simple-antispam.md).
## 1. Register the bot account
@ -90,8 +88,21 @@ matrix_bot_mjolnir_access_token: "ACCESS_TOKEN_FROM_STEP_2_GOES_HERE"
matrix_bot_mjolnir_management_room: "ROOM_ID_FROM_STEP_4_GOES_HERE"
```
## 6. Adding mjolnir synapse antispam module (optional)
Add the following configuration to your `inventory/host_vars/matrix.DOMAIN/vars.yml` file (adapt to your needs):
```yaml
matrix_synapse_ext_spam_checker_mjolnir_antispam_enabled: true
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_invites: true
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_messages: false
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_usernames: false
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_ban_lists: []
```
## 6. Installing
## 7. Installing
After configuring the playbook, run the [installation](installing.md) command:

@ -0,0 +1,29 @@
# Enabling metrics and graphs for Postgres (optional)
Expanding on the metrics exposed by the [synapse exporter and the node exporter](configuring-playbook-prometheus-grafana.md), the playbook can enable the [postgres exporter](https://github.com/prometheus-community/postgres_exporter), which exposes more detailed information about what's happening on your Postgres database.
You can enable this with the following settings in your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`):
```yaml
matrix_prometheus_postgres_exporter_enabled: true
# The role creates a Postgres user as the exporter's credential. You can configure these if required:
matrix_prometheus_postgres_exporter_database_username: 'matrix_prometheus_postgres_exporter'
matrix_prometheus_postgres_exporter_database_password: 'some-password'
```
## What does it do?
Name | Description
-----|----------
`matrix_prometheus_postgres_exporter_enabled`|Enable the Postgres Prometheus exporter. This sets up the Docker container, connects it to the database and adds a 'job' to the Prometheus config which tells Prometheus about this new exporter. The default is `false`.
`matrix_prometheus_postgres_exporter_database_username`|The username for the user that the exporter uses to connect to the database. The default is `matrix_prometheus_postgres_exporter`.
`matrix_prometheus_postgres_exporter_database_password`|The password for the user that the exporter uses to connect to the database.
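For reference, this is roughly the scrape job that ends up in Prometheus' configuration when the exporter is enabled (a sketch based on this playbook's Prometheus template; `9187` is the role's default `matrix_prometheus_postgres_exporter_port`):
```yaml
- job_name: postgres
  static_configs:
    - targets: ["matrix-prometheus-postgres-exporter:9187"]
```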
## More information
- [The PostgreSQL dashboard](https://grafana.com/grafana/dashboards/9628) (generic Postgres dashboard)

@ -43,6 +43,7 @@ With such a configuration, the playbook would expect you to drop the SSL certifi
- `<matrix_ssl_config_dir_path>/live/<domain>/fullchain.pem`
- `<matrix_ssl_config_dir_path>/live/<domain>/privkey.pem`
- `<matrix_ssl_config_dir_path>/live/<domain>/chain.pem`
where `<domain>` refers to each of the domains that you need (usually `matrix.<your-domain>` and `element.<your-domain>`).

@ -6,8 +6,6 @@ It's a web UI tool you can use to **administrate users and rooms on your Matrix
See the project's [documentation](https://github.com/Awesome-Technologies/synapse-admin) to learn what it does and why it might be useful to you.
**Warning**: Synapse Admin will likely not work with Synapse v1.32 for now. See [this issue](https://github.com/Awesome-Technologies/synapse-admin/issues/132). If you insist on using Synapse Admin before there's a solution to this issue, you may wish to downgrade Synapse (adding `matrix_synapse_version: v1.31.0` or `matrix_synapse_version_arm64: v1.31.0` to your `inventory/host_vars/matrix.DOMAIN/vars.yml` file).
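If you do decide to pin an older Synapse version as described above, the addition to your `inventory/host_vars/matrix.DOMAIN/vars.yml` file would look roughly like this (a sketch of the workaround, not a recommendation):
```yaml
# For amd64 hosts:
matrix_synapse_version: v1.31.0
# Or, for arm64 hosts:
# matrix_synapse_version_arm64: v1.31.0
```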
## Adjusting the playbook configuration

@ -1,3 +1,7 @@
> **Note**: This migration guide is applicable if you migrate from one server to another server having the same CPU architecture (e.g. both servers being `amd64`).
>
> If you're trying to migrate between different architectures (e.g. `amd64` --> `arm64`), simply copying the complete `/matrix` directory is not possible as it would move the raw PostgreSQL data between different architectures. In this specific case, you can use the guide below as a reference, but you would also need to dump the database on your current server and import it properly on the new server. See our [Backing up PostgreSQL](maintenance-postgres.md#backing-up-postgresql) docs for help with PostgreSQL backup/restore.
# Migrating to new server
1. Prepare by lowering DNS TTL for your domains (`matrix.DOMAIN`, etc.), so that DNS record changes (step 4 below) would happen faster, leading to less downtime

@ -99,6 +99,8 @@ Example: `--extra-vars="postgres_dump_name=matrix-postgres-dump.sql"`
PostgreSQL can be tuned to make it run faster. This is done by passing extra arguments to Postgres with the `matrix_postgres_process_extra_arguments` variable. You should use a website like https://pgtune.leopard.in.ua/ or information from https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server to determine what Postgres settings you should change.
**Note**: the configuration generator at https://pgtune.leopard.in.ua/ adds spaces around the `=` sign, which is invalid. You'll need to remove them manually (`max_connections = 300` -> `max_connections=300`)
### Here are some examples:
These are not recommended values and they may not work well for you. This is just to give you an idea of some of the options that can be set. If you are an experienced PostgreSQL admin, feel free to update this documentation with better examples.
@ -106,11 +108,33 @@ These are not recommended values and they may not work well for you. This is jus
Here is an example config for a small 2 core server with 4GB of RAM and SSD storage:
```
matrix_postgres_process_extra_arguments: [
"-c 'shared_buffers=128MB'",
"-c 'effective_cache_size=2304MB'",
"-c 'effective_io_concurrency=100'",
"-c 'random_page_cost=2.0'",
"-c 'min_wal_size=500MB'",
"-c shared_buffers=128MB",
"-c effective_cache_size=2304MB",
"-c effective_io_concurrency=100",
"-c random_page_cost=2.0",
"-c min_wal_size=500MB",
]
```
Here is an example config for a 4 core server with 8GB of RAM on a Virtual Private Server (VPS); the parameters have been configured using https://pgtune.leopard.in.ua with the following setup: PostgreSQL version 12, OS Type: Linux, DB Type: Mixed type of application, Data Storage: SSD storage:
```
matrix_postgres_process_extra_arguments: [
"-c max_connections=100",
"-c shared_buffers=2GB",
"-c effective_cache_size=6GB",
"-c maintenance_work_mem=512MB",
"-c checkpoint_completion_target=0.9",
"-c wal_buffers=16MB",
"-c default_statistics_target=100",
"-c random_page_cost=1.1",
"-c effective_io_concurrency=200",
"-c work_mem=5242kB",
"-c min_wal_size=1GB",
"-c max_wal_size=4GB",
"-c max_worker_processes=4",
"-c max_parallel_workers_per_gather=2",
"-c max_parallel_workers=4",
"-c max_parallel_maintenance_workers=2",
]
```

@ -1053,6 +1053,8 @@ matrix_jitsi_web_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_ena
matrix_jitsi_jvb_container_colibri_ws_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:13090' }}"
matrix_jitsi_prosody_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:5280' }}"
matrix_jitsi_jibri_xmpp_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'jibri') | to_uuid }}"
matrix_jitsi_jicofo_auth_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'jicofo') | to_uuid }}"
matrix_jitsi_jvb_auth_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'jvb') | to_uuid }}"
@ -1464,6 +1466,13 @@ matrix_postgres_additional_databases: |
'username': matrix_sygnal_database_username,
'password': matrix_sygnal_database_password,
}] if (matrix_sygnal_enabled and matrix_sygnal_database_engine == 'postgres' and matrix_sygnal_database_hostname == 'matrix-postgres') else [])
+
([{
'name': matrix_prometheus_postgres_exporter_database_name,
'username': matrix_prometheus_postgres_exporter_database_username,
'password': matrix_prometheus_postgres_exporter_database_password,
}] if (matrix_prometheus_postgres_exporter_enabled and matrix_prometheus_postgres_exporter_database_hostname == 'matrix-postgres') else [])
}}
matrix_postgres_import_roles_to_ignore: |
@ -1764,6 +1773,10 @@ matrix_prometheus_scraper_synapse_rules_synapse_tag: "{{ matrix_synapse_docker_i
matrix_prometheus_scraper_node_enabled: "{{ matrix_prometheus_node_exporter_enabled }}"
matrix_prometheus_scraper_node_targets: "{{ ['matrix-prometheus-node-exporter:9100'] if matrix_prometheus_node_exporter_enabled else [] }}"
matrix_prometheus_scraper_postgres_enabled: "{{ matrix_prometheus_postgres_exporter_enabled }}"
matrix_prometheus_scraper_postgres_targets: "{{ ['matrix-prometheus-postgres-exporter:'+ matrix_prometheus_postgres_exporter_port|string] if matrix_prometheus_scraper_postgres_enabled else [] }}"
######################################################################
#
# /matrix-prometheus
@ -1771,6 +1784,27 @@ matrix_prometheus_scraper_node_targets: "{{ ['matrix-prometheus-node-exporter:91
######################################################################
######################################################################
#
# matrix-prometheus-postgres-exporter
#
######################################################################
matrix_prometheus_postgres_exporter_enabled: false
matrix_prometheus_postgres_exporter_database_password: "{{ matrix_synapse_macaroon_secret_key | password_hash('sha512', 'prometheus.pg.db') | to_uuid }}"
matrix_prometheus_postgres_exporter_systemd_required_services_list: |
{{
['docker.service']
+
(['matrix-postgres.service'] if matrix_postgres_enabled else [])
}}
######################################################################
#
# /matrix-prometheus-postgres-exporter
#
######################################################################
######################################################################
#
@ -1785,6 +1819,14 @@ matrix_grafana_enabled: false
# Grafana's HTTP port to the local host.
matrix_grafana_container_http_host_bind_port: "{{ '' if matrix_nginx_proxy_enabled else '127.0.0.1:3000' }}"
matrix_grafana_dashboard_download_urls_all: |
{{
matrix_grafana_dashboard_download_urls
+
(matrix_prometheus_postgres_exporter_dashboard_urls if matrix_prometheus_postgres_exporter_enabled else [])
}}
######################################################################
#
# /matrix-grafana

@ -0,0 +1,28 @@
import sys
import requests
import json
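# Usage (as invoked by the playbook's "Run build_room_list.py script" task):
#   python3 matrix_build_room_list.py <janitor_access_token> <synapse_container_ip>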
janitor_token = sys.argv[1]
synapse_container_ip = sys.argv[2]
# collect total amount of rooms
rooms_raw_url = 'http://' + synapse_container_ip + ':8008/_synapse/admin/v1/rooms'
rooms_raw_header = {'Authorization': 'Bearer ' + janitor_token}
rooms_raw = requests.get(rooms_raw_url, headers=rooms_raw_header)
rooms_raw_python = json.loads(rooms_raw.text)
total_rooms = rooms_raw_python["total_rooms"]
# build complete room list file
room_list_file = open("/tmp/room_list_complete.json", "w")
for i in range(0, total_rooms, 100):
rooms_inc_url = 'http://' + synapse_container_ip + ':8008/_synapse/admin/v1/rooms?from=' + str(i)
rooms_inc = requests.get(rooms_inc_url, headers=rooms_raw_header)
room_list_file.write(rooms_inc.text)
room_list_file.close()
print(total_rooms)

@ -17,136 +17,132 @@
file: '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/matrix_vars.yml'
no_log: True
- name: Collect size of Synapse database
- name: Collect before shrink size of Synapse database
shell: du -sh /matrix/postgres/data
register: db_size_before_stat
when: (purge_mode.find("Perform final shrink") != -1)
no_log: True
- name: Print before size of Synapse database
debug:
msg: "{{ db_size_before_stat.stdout.split('\n') }}"
when: db_size_before_stat is defined
- name: Collect the internal IP of the matrix-synapse container
shell: "/usr/bin/docker inspect --format '{''{range.NetworkSettings.Networks}''}{''{.IPAddress}''}{''{end}''}' matrix-synapse"
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
register: synapse_container_ip
- name: Collect access token for janitor user
shell: |
curl -X POST -d '{"type":"m.login.password", "user":"janitor", "password":"{{ matrix_awx_janitor_user_password }}"}' "{{ synapse_container_ip.stdout }}:8008/_matrix/client/r0/login" | jq '.access_token'
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
register: janitors_token
no_log: True
- name: Collect total number of rooms
- name: Copy build_room_list.py script to target machine
copy:
src: ./roles/matrix-awx/scripts/matrix_build_room_list.py
dest: /usr/local/bin/matrix_build_room_list.py
owner: matrix
group: matrix
mode: '0755'
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Run build_room_list.py script
shell: |
curl -X GET --header "Authorization: Bearer {{ janitors_token.stdout[1:-1] }}" '{{ synapse_container_ip.stdout }}:8008/_synapse/admin/v1/rooms' | jq '.total_rooms'
when: purge_rooms|bool
runuser -u matrix -- python3 /usr/local/bin/matrix_build_room_list.py {{ janitors_token.stdout[1:-1] }} {{ synapse_container_ip.stdout }}
register: rooms_total
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Print total number of rooms
debug:
msg: '{{ rooms_total.stdout }}'
when: purge_rooms|bool
- name: Calculate every 100 values for total number of rooms
delegate_to: 127.0.0.1
shell: |
seq 0 100 {{ rooms_total.stdout }}
when: purge_rooms|bool
register: every_100_rooms
- name: Fetch complete room list from target machine
fetch:
src: /tmp/room_list_complete.json
dest: "/tmp/{{ subscription_id }}_room_list_complete.json"
flat: yes
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Ensure room_list_complete.json file exists
delegate_to: 127.0.0.1
- name: Remove complete room list from target machine
file:
path: /tmp/{{ subscription_id }}_room_list_complete.json
state: touch
when: purge_rooms|bool
- name: Build file with total room list
include_tasks: purge_database_build_list.yml
loop: "{{ every_100_rooms.stdout_lines | flatten(levels=1) }}"
when: purge_rooms|bool
path: /tmp/room_list_complete.json
state: absent
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Generate list of rooms with no local users
delegate_to: 127.0.0.1
shell: |
jq 'try .rooms[] | select(.joined_local_members == 0) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_no_local_users.txt
when: purge_rooms|bool
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Count number of rooms with no local users
delegate_to: 127.0.0.1
shell: |
wc -l /tmp/{{ subscription_id }}_room_list_no_local_users.txt | awk '{ print $1 }'
register: rooms_no_local_total
when: purge_rooms|bool
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Setting host fact room_list_no_local_users
set_fact:
room_list_no_local_users: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_no_local_users.txt') }}"
no_log: True
when: purge_rooms|bool
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Purge all rooms with no local users
include_tasks: purge_database_no_local.yml
loop: "{{ room_list_no_local_users.splitlines() | flatten(levels=1) }}"
when: purge_rooms|bool
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Collect epoch time from date
delegate_to: 127.0.0.1
shell: |
date -d '{{ purge_date }}' +"%s"
when: purge_rooms|bool
when: (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
register: purge_epoche_time
- name: Generate list of rooms with more than N users
delegate_to: 127.0.0.1
shell: |
jq 'try .rooms[] | select(.joined_members > {{ purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_joined_members.txt
when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of users [slower]") != -1
- name: Count number of rooms with more than N users
delegate_to: 127.0.0.1
shell: |
wc -l /tmp/{{ subscription_id }}_room_list_joined_members.txt | awk '{ print $1 }'
register: rooms_join_members_total
when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of users [slower]") != -1
- name: Setting host fact room_list_joined_members
delegate_to: 127.0.0.1
set_fact:
room_list_joined_members: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_joined_members.txt') }}"
when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of users [slower]") != -1
no_log: True
- name: Purge all rooms with more than N users
include_tasks: purge_database_users.yml
loop: "{{ room_list_joined_members.splitlines() | flatten(levels=1) }}"
when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of users [slower]") != -1
- name: Generate list of rooms with more than N events
delegate_to: 127.0.0.1
shell: |
jq 'try .rooms[] | select(.state_events > {{ purge_metric_value }}) | .room_id' < /tmp/{{ subscription_id }}_room_list_complete.json > /tmp/{{ subscription_id }}_room_list_state_events.txt
when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of events [slower]") != -1
- name: Count number of rooms with more than N users
- name: Count number of rooms with more than N events
delegate_to: 127.0.0.1
shell: |
wc -l /tmp/{{ subscription_id }}_room_list_state_events.txt | awk '{ print $1 }'
register: rooms_state_events_total
when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of events [slower]") != -1
- name: Setting host fact room_list_state_events
delegate_to: 127.0.0.1
set_fact:
room_list_state_events: "{{ lookup('file', '/tmp/{{ subscription_id }}_room_list_state_events.txt') }}"
when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of events [slower]") != -1
no_log: True
- name: Purge all rooms with more than N events
include_tasks: purge_database_events.yml
loop: "{{ room_list_state_events.splitlines() | flatten(levels=1) }}"
when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of events [slower]") != -1
- name: Collect AWX admin token the hard way!
delegate_to: 127.0.0.1
@ -155,75 +151,162 @@
register: tower_token
no_log: True
- name: Adjust 'Deploy/Update a Server' job template
delegate_to: 127.0.0.1
awx.awx.tower_job_template:
name: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
description: "Creates a new matrix service with Spantaleev's playbooks"
extra_vars: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/extra_vars.json') }}"
job_type: run
job_tags: "rust-synapse-compress-state"
inventory: "{{ member_id }}"
project: "{{ member_id }} - Matrix Docker Ansible Deploy"
playbook: setup.yml
credential: "{{ member_id }} - AWX SSH Key"
state: present
verbosity: 1
tower_host: "https://{{ tower_host }}"
tower_oauthtoken: "{{ tower_token.stdout }}"
validate_certs: yes
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
- name: Execute rust-synapse-compress-state job template
delegate_to: 127.0.0.1
awx.awx.tower_job_launch:
job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
tags: "rust-synapse-compress-state"
wait: yes
tower_host: "https://{{ tower_host }}"
tower_oauthtoken: "{{ tower_token.stdout }}"
validate_certs: yes
register: job
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
- name: Stop Synapse service
shell: systemctl stop matrix-synapse.service
- name: Revert 'Deploy/Update a Server' job template
delegate_to: 127.0.0.1
awx.awx.tower_job_template:
name: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
description: "Creates a new matrix service with Spantaleev's playbooks"
extra_vars: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/extra_vars.json') }}"
job_type: run
job_tags: "setup-all,start"
inventory: "{{ member_id }}"
project: "{{ member_id }} - Matrix Docker Ansible Deploy"
playbook: setup.yml
credential: "{{ member_id }} - AWX SSH Key"
state: present
verbosity: 1
tower_host: "https://{{ tower_host }}"
tower_oauthtoken: "{{ tower_token.stdout }}"
validate_certs: yes
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1) or (purge_mode.find("Skip purging rooms [faster]") != -1)
- name: Ensure matrix-synapse is stopped
service:
name: matrix-synapse
state: stopped
daemon_reload: yes
when: (purge_mode.find("Perform final shrink") != -1)
- name: Re-index Synapse database
shell: docker exec -i matrix-postgres psql "host=127.0.0.1 port=5432 dbname=synapse user=synapse password={{ matrix_synapse_connection_password }}" -c 'REINDEX (VERBOSE) DATABASE synapse'
when: (purge_mode.find("Perform final shrink") != -1)
- name: Ensure matrix-synapse is started
service:
name: matrix-synapse
state: started
daemon_reload: yes
when: (purge_mode.find("Perform final shrink") != -1)
- name: Adjust 'Deploy/Update a Server' job template
delegate_to: 127.0.0.1
awx.awx.tower_job_template:
name: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
description: "Creates a new matrix service with Spantaleev's playbooks"
extra_vars: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/extra_vars.json') }}"
job_type: run
job_tags: "run-postgres-vacuum,start"
inventory: "{{ member_id }}"
project: "{{ member_id }} - Matrix Docker Ansible Deploy"
playbook: setup.yml
credential: "{{ member_id }} - AWX SSH Key"
state: present
verbosity: 1
tower_host: "https://{{ tower_host }}"
tower_oauthtoken: "{{ tower_token.stdout }}"
validate_certs: yes
when: (purge_mode.find("Perform final shrink") != -1)
- name: Execute run-postgres-vacuum job template
delegate_to: 127.0.0.1
awx.awx.tower_job_launch:
job_template: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
tags: "run-postgres-vacuum,start"
wait: yes
tower_host: "https://{{ tower_host }}"
tower_oauthtoken: "{{ tower_token.stdout }}"
validate_certs: yes
register: job
when: (purge_mode.find("Perform final shrink") != -1)
- name: Revert 'Deploy/Update a Server' job template
delegate_to: 127.0.0.1
awx.awx.tower_job_template:
name: "{{ matrix_domain }} - 0 - Deploy/Update a Server"
description: "Creates a new matrix service with Spantaleev's playbooks"
extra_vars: "{{ lookup('file', '/var/lib/awx/projects/clients/{{ member_id }}/{{ subscription_id }}/extra_vars.json') }}"
job_type: run
job_tags: "setup-all,start"
inventory: "{{ member_id }}"
project: "{{ member_id }} - Matrix Docker Ansible Deploy"
playbook: setup.yml
credential: "{{ member_id }} - AWX SSH Key"
state: present
verbosity: 1
tower_host: "https://{{ tower_host }}"
tower_oauthtoken: "{{ tower_token.stdout }}"
validate_certs: yes
when: (purge_mode.find("Perform final shrink") != -1)
- name: Cleanup room_list files
delegate_to: 127.0.0.1
shell: |
rm /tmp/{{ subscription_id }}_room_list*
when: purge_rooms|bool
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
ignore_errors: yes
- name: Collect size of Synapse database
- name: Collect after shrink size of Synapse database
shell: du -sh /matrix/postgres/data
register: db_size_after_stat
when: (purge_mode.find("Perform final shrink") != -1)
no_log: True
- name: Print total number of rooms processed
debug:
msg: '{{ rooms_total.stdout }}'
when: purge_rooms|bool
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Print the number of rooms purged with no local users
debug:
msg: '{{ rooms_no_local_total.stdout }}'
when: purge_rooms|bool
when: (purge_mode.find("No local users [recommended]") != -1) or (purge_mode.find("Number of users [slower]") != -1) or (purge_mode.find("Number of events [slower]") != -1)
- name: Print the number of rooms purged with more than N users
debug:
msg: '{{ rooms_join_members_total.stdout }}'
when: (purge_metric.find("Number of users") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of users") != -1
- name: Print the number of rooms purged with more than N events
debug:
msg: '{{ rooms_state_events_total.stdout }}'
when: (purge_metric.find("Number of events") != -1) and (purge_rooms|bool)
when: purge_mode.find("Number of events") != -1
- name: Print before purge size of Synapse database
debug:
msg: "{{ db_size_before_stat.stdout.split('\n') }}"
when: db_size_before_stat is defined
when: (db_size_before_stat is defined) and (purge_mode.find("Perform final shrink") != -1)
- name: Print after purge size of Synapse database
debug:
msg: "{{ db_size_after_stat.stdout.split('\n') }}"
when: db_size_after_stat is defined
when: (db_size_after_stat is defined) and (purge_mode.find("Perform final shrink") != -1)
- name: Set boolean value to exit playbook
set_fact:

@ -7,7 +7,7 @@ matrix_appservice_irc_container_self_build: false
matrix_appservice_irc_docker_repo: "https://github.com/matrix-org/matrix-appservice-irc.git"
matrix_appservice_irc_docker_src_files_path: "{{ matrix_base_data_path }}/appservice-irc/docker-src"
matrix_appservice_irc_version: release-0.25.0
matrix_appservice_irc_version: release-0.26.0
matrix_appservice_irc_docker_image: "{{ matrix_container_global_registry_prefix }}matrixdotorg/matrix-appservice-irc:{{ matrix_appservice_irc_version }}"
matrix_appservice_irc_docker_image_force_pull: "{{ matrix_appservice_irc_docker_image.endswith(':latest') }}"

@ -26,10 +26,16 @@
become: false
when: "matrix_postgres_service_start_result.changed|bool"
- name: Check existence of matrix-appservice-irc service
stat:
path: "{{ matrix_systemd_path }}/matrix-appservice-irc.service"
register: matrix_appservice_irc_service_stat
- name: Ensure matrix-appservice-irc is stopped
service:
name: matrix-appservice-irc
state: stopped
when: "matrix_appservice_irc_service_stat.stat.exists"
- name: Import appservice-irc NeDB database into Postgres
command:

@ -3,7 +3,7 @@ matrix_client_element_enabled: true
matrix_client_element_container_image_self_build: false
matrix_client_element_container_image_self_build_repo: "https://github.com/vector-im/riot-web.git"
matrix_client_element_version: v1.7.28
matrix_client_element_version: v1.7.29
matrix_client_element_docker_image: "{{ matrix_client_element_docker_image_name_prefix }}vectorim/element-web:{{ matrix_client_element_version }}"
matrix_client_element_docker_image_name_prefix: "{{ 'localhost/' if matrix_client_element_container_image_self_build else matrix_container_global_registry_prefix }}"
matrix_client_element_docker_image_force_pull: "{{ matrix_client_element_docker_image.endswith(':latest') }}"

@ -64,7 +64,7 @@
mode: 0440
owner: "{{ matrix_user_username }}"
group: "{{ matrix_user_groupname }}"
with_items: "{{ matrix_grafana_dashboard_download_urls }}"
with_items: "{{ matrix_grafana_dashboard_download_urls_all }}"
when: matrix_grafana_enabled|bool
- name: Ensure matrix-grafana.service installed

@ -176,6 +176,8 @@ matrix_jitsi_prosody_container_extra_arguments: []
# List of systemd services that matrix-jitsi-prosody.service depends on
matrix_jitsi_prosody_systemd_required_services_list: ['docker.service']
# Necessary port binding for those disabling the integrated nginx proxy
matrix_jitsi_prosody_container_http_host_bind_port: ''
matrix_jitsi_jicofo_docker_image: "{{ matrix_container_global_registry_prefix }}jitsi/jicofo:{{ matrix_jitsi_container_image_tag }}"
matrix_jitsi_jicofo_docker_image_force_pull: "{{ matrix_jitsi_jicofo_docker_image.endswith(':latest') }}"

@ -16,6 +16,9 @@ ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }}
ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-jitsi-prosody \
--log-driver=none \
--network={{ matrix_docker_network }} \
{% if matrix_jitsi_prosody_container_http_host_bind_port %}
-p {{ matrix_jitsi_prosody_container_http_host_bind_port }}:5280 \
{% endif %}
--env-file={{ matrix_jitsi_prosody_base_path }}/env \
--mount type=bind,src={{ matrix_jitsi_prosody_config_path }},dst=/config \
--mount type=bind,src={{ matrix_jitsi_prosody_plugins_path }},dst=/prosody-plugins-custom \

@ -1,5 +1,5 @@
matrix_nginx_proxy_enabled: true
matrix_nginx_proxy_version: 1.20.0-alpine
matrix_nginx_proxy_version: 1.21.0-alpine
# We use an official nginx image, which we fix-up to run unprivileged.
# An alternative would be an `nginxinc/nginx-unprivileged` image, but
@ -287,6 +287,26 @@ matrix_nginx_proxy_proxy_domain_additional_server_configuration_blocks: []
# Of course, a better solution is to just stop using browsers (like Chrome), which participate in such tracking practices.
matrix_nginx_proxy_floc_optout_enabled: true
# HSTS Preloading Enable
#
# In its strongest and recommended form, the [HSTS policy](https://www.chromium.org/hsts) includes all subdomains, and
# indicates a willingness to be “preloaded” into browsers:
# `Strict-Transport-Security: max-age=31536000; includeSubDomains; preload`
# For more information visit:
# - https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
# - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
# - https://hstspreload.org/#opt-in
matrix_nginx_proxy_hsts_preload_enabled: false
# X-XSS-Protection Enable
# Stops pages from loading when they detect reflected cross-site scripting (XSS) attacks.
# Note: Not applicable for grafana
#
# Learn more about it here:
# - https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection
# - https://portswigger.net/web-security/cross-site-scripting/reflected
matrix_nginx_proxy_xss_protection: "1; mode=block"
# Specifies the SSL configuration that should be used for the SSL protocols and ciphers
# This is based on the Mozilla Server Side TLS Recommended configurations.
#
@ -337,6 +357,18 @@ matrix_nginx_proxy_self_check_validate_certificates: true
# so we default to not following redirects as well.
matrix_nginx_proxy_self_check_well_known_matrix_client_follow_redirects: none
# For OCSP purposes, we need to define a resolver at the `server{}` level or `http{}` level (we do the latter).
#
# Otherwise, we get warnings like this:
# > [warn] 22#22: no resolver defined to resolve r3.o.lencr.org while requesting certificate status, responder: r3.o.lencr.org, certificate: "/matrix/ssl/config/live/.../fullchain.pem"
#
# We point it to the internal Docker resolver, which likely delegates to nameservers defined in `/etc/resolv.conf`.
#
# When the nginx proxy is disabled, our configuration is likely used by a non-containerized nginx, so it can't use the internal Docker resolver.
# Pointing `resolver` to some public DNS server might be an option, but for now we'd rather not impose DNS servers on people.
# It might also be that no such warnings occur when not running in a container.
matrix_nginx_proxy_http_level_resolver: "{{ '127.0.0.11' if matrix_nginx_proxy_enabled else '' }}"
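# Example override for a non-containerized nginx setup (an assumption, not something the playbook sets by default):
# matrix_nginx_proxy_http_level_resolver: '1.1.1.1'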
# By default, this playbook automatically retrieves and auto-renews
# free SSL certificates from Let's Encrypt.
#
@ -396,7 +428,7 @@ matrix_ssl_pre_obtaining_required_service_start_wait_time_seconds: 60
# Nginx Optimize SSL Session
#
# ssl_session_cache:
# - Creating a cache of TLS connection parameters reduces the number of handshakes
# and thus can improve the performance of application.
# - Default session cache is not optimal as it can be used by only one worker process
# and can cause memory fragmentation. It is much better to use shared cache.
@ -405,7 +437,7 @@ matrix_ssl_pre_obtaining_required_service_start_wait_time_seconds: 60
# ssl_session_timeout:
# - Nginx by default sets it to 5 minutes, which is very low.
#   It should be more like 4h or 1d, but that will require increasing the size of the cache.
# - Learn More:
# https://github.com/certbot/certbot/issues/6903
# https://github.com/mozilla/server-side-tls/issues/198
#

@ -34,7 +34,7 @@
template:
src: "{{ role_path }}/templates/usr-local-bin/matrix-ssl-lets-encrypt-certificates-renew.j2"
dest: "{{ matrix_local_bin_path }}/matrix-ssl-lets-encrypt-certificates-renew"
mode: 0750
mode: 0755
- name: Ensure SSL renewal systemd units installed
template:

@ -10,6 +10,14 @@
add_header Permissions-Policy interest-cohort=() always;
{% endif %}
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
{% for configuration_block in matrix_nginx_proxy_proxy_domain_additional_server_configuration_blocks %}
{{- configuration_block }}
{% endfor %}
@ -69,13 +77,13 @@ server {
ssl_ciphers {{ matrix_nginx_proxy_ssl_ciphers }};
{% endif %}
ssl_prefer_server_ciphers {{ matrix_nginx_proxy_ssl_prefer_server_ciphers }};
{% if matrix_nginx_proxy_ocsp_stapling_enabled %}
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_base_domain_hostname }}/chain.pem;
{% endif %}
{% if matrix_nginx_proxy_ssl_session_tickets_off %}
ssl_session_tickets off;
{% endif %}

@ -3,8 +3,12 @@
{% macro render_vhost_directives() %}
gzip on;
gzip_types text/plain application/json application/javascript text/css image/x-icon font/ttf image/gif;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
add_header X-Content-Type-Options nosniff;
{% for configuration_block in matrix_nginx_proxy_proxy_bot_go_neb_additional_server_configuration_blocks %}

@ -4,13 +4,20 @@
gzip on;
gzip_types text/plain application/json application/javascript text/css image/x-icon font/ttf image/gif;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
add_header X-Frame-Options SAMEORIGIN;
{% if matrix_nginx_proxy_floc_optout_enabled %}
add_header Permissions-Policy interest-cohort=() always;
{% endif %}
{% for configuration_block in matrix_nginx_proxy_proxy_element_additional_server_configuration_blocks %}
{{- configuration_block }}
{% endfor %}

@ -4,10 +4,14 @@
gzip on;
gzip_types text/plain application/json application/javascript text/css image/x-icon font/ttf image/gif;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options SAMEORIGIN;
add_header X-XSS-Protection "1; mode=block";
add_header Content-Security-Policy "frame-ancestors 'none'";
{% if matrix_nginx_proxy_floc_optout_enabled %}
add_header Permissions-Policy interest-cohort=() always;

@ -3,8 +3,12 @@
{% macro render_vhost_directives() %}
gzip on;
gzip_types text/plain application/json application/javascript text/css image/x-icon font/ttf image/gif;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
add_header X-Content-Type-Options nosniff;
{% if matrix_nginx_proxy_floc_optout_enabled %}
add_header Permissions-Policy interest-cohort=() always;

@ -20,6 +20,14 @@
{% if matrix_nginx_proxy_floc_optout_enabled %}
add_header Permissions-Policy interest-cohort=() always;
{% endif %}
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
location /.well-known/matrix {
root {{ matrix_static_files_base_path }};

@ -4,7 +4,11 @@
gzip on;
gzip_types text/plain application/json application/javascript text/css image/x-icon font/ttf image/gif;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
# duplicate X-Content-Type-Options & X-Frame-Options header
# Enabled by grafana by default
# add_header X-Content-Type-Options nosniff;

@ -3,8 +3,12 @@
{% macro render_vhost_directives() %}
gzip on;
gzip_types text/plain application/json application/javascript text/css image/x-icon font/ttf image/gif;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
add_header X-Content-Type-Options nosniff;
{% if matrix_nginx_proxy_floc_optout_enabled %}
add_header Permissions-Policy interest-cohort=() always;

@ -5,6 +5,14 @@
add_header Permissions-Policy interest-cohort=() always;
{% endif %}
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
{% for configuration_block in matrix_nginx_proxy_proxy_riot_additional_server_configuration_blocks %}
{{- configuration_block }}
{% endfor %}
@ -67,7 +75,7 @@ server {
ssl_stapling_verify on;
ssl_trusted_certificate {{ matrix_ssl_config_dir_path }}/live/{{ matrix_nginx_proxy_proxy_riot_compat_redirect_hostname }}/chain.pem;
{% endif %}
{% if matrix_nginx_proxy_ssl_session_tickets_off %}
ssl_session_tickets off;
{% endif %}

@ -3,8 +3,12 @@
{% macro render_vhost_directives() %}
gzip on;
gzip_types text/plain application/json application/javascript text/css image/x-icon font/ttf image/gif;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% if matrix_nginx_proxy_hsts_preload_enabled %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
{% else %}
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
{% endif %}
add_header X-XSS-Protection "{{ matrix_nginx_proxy_xss_protection }}";
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options DENY;

@ -4,6 +4,11 @@
#
# Thus, we ensure a larger bucket size value is used.
server_names_hash_bucket_size 64;
{% if matrix_nginx_proxy_http_level_resolver %}
resolver {{ matrix_nginx_proxy_http_level_resolver }};
{% endif %}
{% for configuration_block in matrix_nginx_proxy_proxy_http_additional_server_configuration_blocks %}
{{- configuration_block }}
{% endfor %}

@ -77,14 +77,14 @@
template:
src: "{{ role_path }}/templates/usr-local-bin/matrix-postgres-cli.j2"
dest: "{{ matrix_local_bin_path }}/matrix-postgres-cli"
mode: 0750
mode: 0755
when: matrix_postgres_enabled|bool
- name: Ensure matrix-change-user-admin-status script created
template:
src: "{{ role_path }}/templates/usr-local-bin/matrix-change-user-admin-status.j2"
dest: "{{ matrix_local_bin_path }}/matrix-change-user-admin-status"
mode: 0750
mode: 0755
when: matrix_postgres_enabled|bool
- name: (Migration) Ensure old matrix-make-user-admin script deleted
@ -97,7 +97,7 @@
template:
src: "{{ role_path }}/templates/usr-local-bin/matrix-postgres-update-user-password-hash.j2"
dest: "{{ matrix_local_bin_path }}/matrix-postgres-update-user-password-hash"
mode: 0750
mode: 0755
when: matrix_postgres_enabled|bool
- name: Ensure matrix-postgres.service installed

@ -0,0 +1,49 @@
# matrix-prometheus-postgres-exporter is a Prometheus exporter for Postgres metrics
# See: https://github.com/prometheus-community/postgres_exporter
matrix_prometheus_postgres_exporter_enabled: false
matrix_prometheus_postgres_exporter_version: v0.9.0
matrix_prometheus_postgres_exporter_port: 9187
matrix_prometheus_postgres_exporter_docker_image: "quay.io/prometheuscommunity/postgres-exporter:{{ matrix_prometheus_postgres_exporter_version }}"
matrix_prometheus_postgres_exporter_docker_image_force_pull: "{{ matrix_prometheus_postgres_exporter_docker_image.endswith(':latest') }}"
# A list of extra arguments to pass to the container
matrix_prometheus_postgres_exporter_container_extra_arguments: ["-e PG_EXPORTER_AUTO_DISCOVER_DATABASES=true",
"-e PG_EXPORTER_WEB_LISTEN_ADDRESS=\":{{matrix_prometheus_postgres_exporter_port}}\"",
"-e DATA_SOURCE_NAME=\"postgresql://{{matrix_prometheus_postgres_exporter_database_username}}:{{matrix_prometheus_postgres_exporter_database_password}}@{{matrix_prometheus_postgres_exporter_database_hostname}}:5432/{{matrix_prometheus_postgres_exporter_database_name}}?sslmode=disable\"" ]
# List of systemd services that matrix-prometheus-postgres-exporter.service depends on
matrix_prometheus_postgres_exporter_systemd_required_services_list: ['docker.service']
# List of systemd services that matrix-prometheus-postgres-exporter.service wants
matrix_prometheus_postgres_exporter_systemd_wanted_services_list: []
# details for connecting to the database
matrix_prometheus_postgres_exporter_database_username: 'matrix_prometheus_postgres_exporter'
matrix_prometheus_postgres_exporter_database_password: 'some-password'
matrix_prometheus_postgres_exporter_database_hostname: 'matrix-postgres'
matrix_prometheus_postgres_exporter_database_port: 5432
matrix_prometheus_postgres_exporter_database_name: 'matrix_prometheus_postgres_exporter'
# Controls whether the matrix-prometheus-postgres-exporter container exposes its HTTP port (tcp/9187 in the container).
#
# Takes an "<ip>:<port>" value (e.g. "127.0.0.1:9187"), or an empty string to not expose.
#
# Official recommendations are to run this container with `--net=host`,
# but we don't do that, since it:
# - likely exposes the metrics web server way too publicly (before applying https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/1008)
# - or listens on a loopback interface only (--net=host and 127.0.0.1:9100), which is not reachable from another container (like `matrix-prometheus`)
#
# Using `--net=host` and binding to Docker's `matrix` bridge network may be a solution to both,
# but that's trickier to accomplish and won't necessarily work (hasn't been tested).
#
# Not using `--net=host` means that our network statistic reports are likely broken (inaccurate),
# because node-exporter can't see all interfaces, etc.
# For now, we'll live with that, until someone develops a better solution.
matrix_prometheus_postgres_exporter_container_http_host_bind_port: ''
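# Hedged example (an assumption, not a shipped default): expose the exporter's metrics only on the host's loopback interface:
# matrix_prometheus_postgres_exporter_container_http_host_bind_port: '127.0.0.1:9187'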
matrix_prometheus_postgres_exporter_dashboard_urls:
- "https://grafana.com/api/dashboards/9628/revisions/7/download"

@ -0,0 +1,5 @@
- set_fact:
matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-prometheus-postgres-exporter.service'] }}"
when: matrix_prometheus_postgres_exporter_enabled|bool

@ -0,0 +1,8 @@
- import_tasks: "{{ role_path }}/tasks/init.yml"
tags:
- always
- import_tasks: "{{ role_path }}/tasks/setup.yml"
tags:
- setup-all
- setup-prometheus-postgres-exporter

@ -0,0 +1,54 @@
---
#
# Tasks related to setting up matrix-prometheus-postgres-exporter
#
- name: Ensure matrix-prometheus-postgres-exporter image is pulled
docker_image:
name: "{{ matrix_prometheus_postgres_exporter_docker_image }}"
source: "{{ 'pull' if ansible_version.major > 2 or ansible_version.minor > 7 else omit }}"
force_source: "{{ matrix_prometheus_postgres_exporter_docker_image_force_pull if ansible_version.major > 2 or ansible_version.minor >= 8 else omit }}"
force: "{{ omit if ansible_version.major > 2 or ansible_version.minor >= 8 else matrix_prometheus_postgres_exporter_docker_image_force_pull }}"
when: "matrix_prometheus_postgres_exporter_enabled|bool"
- name: Ensure matrix-prometheus-postgres-exporter.service installed
template:
src: "{{ role_path }}/templates/systemd/matrix-prometheus-postgres-exporter.service.j2"
dest: "{{ matrix_systemd_path }}/matrix-prometheus-postgres-exporter.service"
mode: 0644
register: matrix_prometheus_postgres_exporter_systemd_service_result
when: matrix_prometheus_postgres_exporter_enabled|bool
- name: Ensure systemd reloaded after matrix-prometheus-postgres-exporter.service installation
service:
daemon_reload: yes
when: "matrix_prometheus_postgres_exporter_enabled|bool and matrix_prometheus_postgres_exporter_systemd_service_result.changed"
#
# Tasks related to getting rid of matrix-prometheus-postgres-exporter (if it was previously enabled)
#
- name: Check existence of matrix-prometheus-postgres-exporter service
stat:
path: "{{ matrix_systemd_path }}/matrix-prometheus-postgres-exporter.service"
register: matrix_prometheus_postgres_exporter_service_stat
- name: Ensure matrix-prometheus-postgres-exporter is stopped
service:
name: matrix-prometheus-postgres-exporter
state: stopped
daemon_reload: yes
register: stopping_result
when: "not matrix_prometheus_postgres_exporter_enabled|bool and matrix_prometheus_postgres_exporter_service_stat.stat.exists"
- name: Ensure matrix-prometheus-postgres-exporter.service doesn't exist
file:
path: "{{ matrix_systemd_path }}/matrix-prometheus-postgres-exporter.service"
state: absent
when: "not matrix_prometheus_postgres_exporter_enabled|bool and matrix_prometheus_postgres_exporter_service_stat.stat.exists"
- name: Ensure systemd reloaded after matrix-prometheus-postgres-exporter.service removal
service:
daemon_reload: yes
when: "not matrix_prometheus_postgres_exporter_enabled|bool and matrix_prometheus_postgres_exporter_service_stat.stat.exists"

@ -0,0 +1,42 @@
#jinja2: lstrip_blocks: "True"
[Unit]
Description=matrix-prometheus-postgres-exporter
{% for service in matrix_prometheus_postgres_exporter_systemd_required_services_list %}
Requires={{ service }}
After={{ service }}
{% endfor %}
{% for service in matrix_prometheus_postgres_exporter_systemd_wanted_services_list %}
Wants={{ service }}
{% endfor %}
DefaultDependencies=no
[Service]
Type=simple
Environment="HOME={{ matrix_systemd_unit_home_path }}"
ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-prometheus-postgres-exporter 2>/dev/null'
ExecStartPre=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-prometheus-postgres-exporter 2>/dev/null'
ExecStart={{ matrix_host_command_docker }} run --rm --name matrix-prometheus-postgres-exporter \
--log-driver=none \
--user={{ matrix_user_uid }}:{{ matrix_user_gid }} \
--cap-drop=ALL \
--read-only \
{% for arg in matrix_prometheus_postgres_exporter_container_extra_arguments %}
{{ arg }} \
{% endfor %}
--network={{ matrix_docker_network }} \
{% if matrix_prometheus_postgres_exporter_container_http_host_bind_port %}
-p {{ matrix_prometheus_postgres_exporter_container_http_host_bind_port }}:{{matrix_prometheus_postgres_exporter_port}} \
{% endif %}
--pid=host \
{{ matrix_prometheus_postgres_exporter_docker_image }}
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} kill matrix-prometheus-postgres-exporter 2>/dev/null'
ExecStop=-{{ matrix_host_command_sh }} -c '{{ matrix_host_command_docker }} rm matrix-prometheus-postgres-exporter 2>/dev/null'
Restart=always
RestartSec=30
SyslogIdentifier=matrix-prometheus-postgres-exporter
[Install]
WantedBy=multi-user.target

@ -38,3 +38,9 @@ scrape_configs:
static_configs:
- targets: {{ matrix_prometheus_scraper_node_targets|to_json }}
{% endif %}
{% if matrix_prometheus_scraper_postgres_enabled %}
- job_name: postgres
static_configs:
- targets: {{ matrix_prometheus_scraper_postgres_targets|to_json }}
{% endif %}

@ -8,7 +8,7 @@ matrix_synapse_admin_container_self_build_repo: "https://github.com/Awesome-Tech
matrix_synapse_admin_docker_src_files_path: "{{ matrix_base_data_path }}/synapse-admin/docker-src"
matrix_synapse_admin_version: latest
matrix_synapse_admin_version: 0.8.1
matrix_synapse_admin_docker_image: "{{ matrix_synapse_admin_docker_image_name_prefix }}awesometechnologies/synapse-admin:{{ matrix_synapse_admin_version }}"
matrix_synapse_admin_docker_image_name_prefix: "{{ 'localhost/' if matrix_synapse_admin_container_self_build else matrix_container_global_registry_prefix }}"
matrix_synapse_admin_docker_image_force_pull: "{{ matrix_synapse_admin_docker_image.endswith(':latest') }}"

@ -497,6 +497,8 @@ matrix_synapse_ext_password_provider_ldap_attributes_name: "cn"
matrix_synapse_ext_password_provider_ldap_bind_dn: ""
matrix_synapse_ext_password_provider_ldap_bind_password: ""
matrix_synapse_ext_password_provider_ldap_filter: ""
matrix_synapse_ext_password_provider_ldap_active_directory: false
matrix_synapse_ext_password_provider_ldap_default_domain: ""
# Enable this to activate the Synapse Antispam spam-checker module.
# See: https://github.com/t2bot/synapse-simple-antispam
@ -505,6 +507,27 @@ matrix_synapse_ext_spam_checker_synapse_simple_antispam_git_repository_url: "htt
matrix_synapse_ext_spam_checker_synapse_simple_antispam_git_version: "923ca5c85b08f157181721abbae50dd89c31e4b5"
matrix_synapse_ext_spam_checker_synapse_simple_antispam_config_blocked_homeservers: []
# Enable this to activate the Mjolnir Antispam spam-checker module.
# See: https://github.com/matrix-org/mjolnir#synapse-module
matrix_synapse_ext_spam_checker_mjolnir_antispam_enabled: false
matrix_synapse_ext_spam_checker_mjolnir_antispam_git_repository_url: "https://github.com/matrix-org/mjolnir"
matrix_synapse_ext_spam_checker_mjolnir_antispam_git_version: "70f353fbbad0af469b1001080dea194d512b2815"
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_invites: true
# Flag messages sent by servers/users in the ban lists as spam. Currently
# this means that spammy messages will appear as empty to users. Default
# false.
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_messages: false
# Remove users from the user directory search by filtering matrix IDs and
# display names by the entries in the user ban list. Default false.
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_usernames: false
# The room IDs of the ban lists to honour. Unlike other parts of Mjolnir,
# this list cannot be room aliases or permalinks. This server is expected
# to already be joined to the room - Mjolnir will not automatically join
# these rooms.
# ["!roomid:example.org"]
matrix_synapse_ext_spam_checker_mjolnir_antispam_config_ban_lists: []
matrix_s3_media_store_enabled: false
matrix_s3_media_store_custom_endpoint_enabled: false
matrix_s3_goofys_docker_image: "ewoutp/goofys:latest"

@ -0,0 +1,7 @@
---
- import_tasks: "{{ role_path }}/tasks/ext/mjolnir-antispam/setup_install.yml"
when: matrix_synapse_ext_spam_checker_mjolnir_antispam_enabled|bool
- import_tasks: "{{ role_path }}/tasks/ext/mjolnir-antispam/setup_uninstall.yml"
when: "not matrix_synapse_ext_spam_checker_mjolnir_antispam_enabled|bool"

@ -0,0 +1,52 @@
---
- name: Ensure git installed (RedHat)
yum:
name:
- git
state: present
update_cache: no
when: "ansible_os_family == 'RedHat'"
- name: Ensure git installed (Debian)
apt:
name:
- git
state: present
update_cache: no
when: "ansible_os_family == 'Debian'"
- name: Ensure git installed (Archlinux)
pacman:
name:
- git
state: present
update_cache: no
when: "ansible_distribution == 'Archlinux'"
- name: Clone mjolnir-antispam git repository
git:
repo: "{{ matrix_synapse_ext_spam_checker_mjolnir_antispam_git_repository_url }}"
version: "{{ matrix_synapse_ext_spam_checker_mjolnir_antispam_git_version }}"
dest: "{{ matrix_synapse_ext_path }}/mjolnir"
become: true
become_user: "{{ matrix_user_username }}"
- set_fact:
matrix_synapse_spam_checker: >
{{ matrix_synapse_spam_checker }}
+
[{
"module": "mjolnir.AntiSpam",
"config": {
"block_invites": {{ matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_invites }},
"block_messages": {{ matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_messages }},
"block_usernames": {{ matrix_synapse_ext_spam_checker_mjolnir_antispam_config_block_usernames }},
"ban_lists": {{ matrix_synapse_ext_spam_checker_mjolnir_antispam_config_ban_lists }}
}
}]
matrix_synapse_container_extra_arguments: >
{{ matrix_synapse_container_extra_arguments|default([]) }}
+
["--mount type=bind,src={{ matrix_synapse_ext_path }}/mjolnir/synapse_antispam/mjolnir,dst={{ matrix_synapse_in_container_python_packages_path }}/mjolnir,ro"]

@ -0,0 +1,6 @@
---
- name: Ensure mjolnir-antispam doesn't exist
file:
path: "{{ matrix_synapse_ext_path }}/mjolnir"
state: absent

@ -7,3 +7,5 @@
- import_tasks: "{{ role_path }}/tasks/ext/ldap-auth/setup.yml"
- import_tasks: "{{ role_path }}/tasks/ext/synapse-simple-antispam/setup.yml"
- import_tasks: "{{ role_path }}/tasks/ext/mjolnir-antispam/setup.yml"

@ -10,7 +10,7 @@
- name: Set matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time, if not provided
set_fact:
matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time: 180
matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time: 300
when: "matrix_synapse_rust_synapse_compress_state_find_rooms_command_wait_time|default('') == ''"
- name: Set matrix_synapse_rust_synapse_compress_state_compress_room_time, if not provided

@ -106,4 +106,4 @@
template:
src: "{{ role_path }}/templates/synapse/usr-local-bin/matrix-synapse-register-user.j2"
dest: "{{ matrix_local_bin_path }}/matrix-synapse-register-user"
mode: 0750
mode: 0755

@ -2596,6 +2596,8 @@ password_providers:
uri: {{ matrix_synapse_ext_password_provider_ldap_uri|string|to_json }}
start_tls: {{ matrix_synapse_ext_password_provider_ldap_start_tls|to_json }}
base: {{ matrix_synapse_ext_password_provider_ldap_base|string|to_json }}
active_directory: {{ matrix_synapse_ext_password_provider_ldap_active_directory|to_json }}
default_domain: {{ matrix_synapse_ext_password_provider_ldap_default_domain|string|to_json }}
attributes:
uid: {{ matrix_synapse_ext_password_provider_ldap_attributes_uid|string|to_json }}
mail: {{ matrix_synapse_ext_password_provider_ldap_attributes_mail|string|to_json }}

@ -54,5 +54,5 @@
- matrix-coturn
- matrix-aux
- matrix-postgres-backup
- matrix-common-after
- matrix-prometheus-postgres-exporter
- matrix-common-after