Running a MongoDB 3.6 replica set in Docker Swarm

Setting up the Mongo cluster

Create the external overlay network:

```shell
docker network create --subnet 172.22.0.0/16 --driver=overlay --attachable cloud_backend
```
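As an optional sanity check, you can confirm the network was created with the overlay driver and swarm scope:

```shell
# Print the network's driver and scope; for this setup it should be "overlay swarm".
docker network inspect cloud_backend --format '{{.Driver}} {{.Scope}}'
```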

Create the volume on all three servers:

```shell
docker volume create database_mongo_key
```

Create a key file on one of the hosts:

```shell
openssl rand -base64 700 > /var/lib/docker/volumes/database_mongo_key/_data/mongo.key
chmod 400 /var/lib/docker/volumes/database_mongo_key/_data/mongo.key
chown 999:999 /var/lib/docker/volumes/database_mongo_key/_data/mongo.key
```

Then copy it into the same directory on the other hosts.
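One way to do the copy is over scp; the host names docker1 and docker2 below are placeholders for your other Swarm nodes, and the key's permissions have to be restored on each target host:

```shell
# Placeholder host names -- replace docker1/docker2 with your actual nodes.
for host in docker1 docker2; do
  scp /var/lib/docker/volumes/database_mongo_key/_data/mongo.key \
      "root@$host:/var/lib/docker/volumes/database_mongo_key/_data/mongo.key"
  # mongod refuses keyfiles with open permissions; uid/gid 999 is the mongodb
  # user inside the official mongo image.
  ssh "root@$host" 'chmod 400 /var/lib/docker/volumes/database_mongo_key/_data/mongo.key \
    && chown 999:999 /var/lib/docker/volumes/database_mongo_key/_data/mongo.key'
done
```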

Create a mongo.yaml file:

```yaml
version: "3.7"

services:

  mongo1_replica1:
    image: mongo:3.6.23
    hostname: "{{.Task.ID}}.{{.Service.Name}}.local"
    command: --shardsvr --replSet replica1 --keyFile /data/mongo/mongo.key --journal --port 27017 --bind_ip 0.0.0.0 --auth
    volumes:
      - "mongo_key:/data/mongo"
      - "mongo1_replica1_configdb:/data/configdb"
      - "mongo1_replica1_db:/data/db"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - mongo
    logging:
      driver: journald

  mongo2_replica1:
    image: mongo:3.6.23
    hostname: "{{.Task.ID}}.{{.Service.Name}}.local"
    command: --shardsvr --replSet replica1 --keyFile /data/mongo/mongo.key --journal --port 27017 --bind_ip 0.0.0.0 --auth
    volumes:
      - "mongo_key:/data/mongo"
      - "mongo2_replica1_configdb:/data/configdb"
      - "mongo2_replica1_db:/data/db"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - mongo
    logging:
      driver: journald

  mongo3_replica1:
    image: mongo:3.6.23
    hostname: "{{.Task.ID}}.{{.Service.Name}}.local"
    command: --shardsvr --replSet replica1 --keyFile /data/mongo/mongo.key --journal --port 27017 --bind_ip 0.0.0.0 --auth
    volumes:
      - "mongo_key:/data/mongo"
      - "mongo3_replica1_configdb:/data/configdb"
      - "mongo3_replica1_db:/data/db"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - mongo
    logging:
      driver: journald

  mongo1_config:
    image: mongo:3.6.23
    hostname: "{{.Task.ID}}.{{.Service.Name}}.local"
    command: --configsvr --replSet config1 --keyFile /data/mongo/mongo.key --journal --port 27017 --bind_ip 0.0.0.0 --auth
    volumes:
      - "mongo_key:/data/mongo"
      - "mongo1_config_db:/data/configdb"
      - "mongo1_config_data:/data/db"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - mongo
    logging:
      driver: journald

  mongo2_config:
    image: mongo:3.6.23
    hostname: "{{.Task.ID}}.{{.Service.Name}}.local"
    command: --configsvr --replSet config1 --keyFile /data/mongo/mongo.key --journal --port 27017 --bind_ip 0.0.0.0 --auth
    volumes:
      - "mongo_key:/data/mongo"
      - "mongo2_config_db:/data/configdb"
      - "mongo2_config_data:/data/db"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - mongo
    logging:
      driver: journald

  mongo3_config:
    image: mongo:3.6.23
    hostname: "{{.Task.ID}}.{{.Service.Name}}.local"
    command: --configsvr --replSet config1 --keyFile /data/mongo/mongo.key --journal --port 27017 --bind_ip 0.0.0.0 --auth
    volumes:
      - "mongo_key:/data/mongo"
      - "mongo3_config_db:/data/configdb"
      - "mongo3_config_data:/data/db"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - mongo
    logging:
      driver: journald

  mongo1:
    image: mongo:3.6.23
    hostname: "{{.Task.ID}}.{{.Service.Name}}.local"
    command: mongos --keyFile /data/mongo/mongo.key --configdb config1/mongo1_config:27017,mongo2_config:27017,mongo3_config:27017 --bind_ip 0.0.0.0 --port 27017
    volumes:
      - "mongo_key:/data/mongo"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - mongo
      - cloud_backend
    logging:
      driver: journald

volumes:
  mongo_key:
  mongo1_config_db:
  mongo1_config_data:
  mongo1_replica1_configdb:
  mongo1_replica1_db:
  mongo2_config_db:
  mongo2_config_data:
  mongo2_replica1_configdb:
  mongo2_replica1_db:
  mongo3_config_db:
  mongo3_config_data:
  mongo3_replica1_configdb:
  mongo3_replica1_db:

networks:
  mongo:
  cloud_backend:
    external: true
```

Deploy the services:

```shell
docker stack deploy -c mongo.yaml database --with-registry-auth
```
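Once the stack is deployed, a quick way to check that all seven services have converged (a sketch, not required by the setup):

```shell
# All services in the stack should show REPLICAS 1/1.
docker service ls --filter name=database_
# Task placement and state for one of the replica members.
docker service ps database_mongo1_replica1
```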

Connect to the replica:

```shell
docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=database_mongo1_replica1) mongo
```

Run the following commands:

```
use admin
rs.initiate({ _id: "replica1", members: [ { _id: 1, host: "mongo1_replica1:27017", priority: 1 } ] });
db.createUser({ user: 'admin', pwd: 'admin', roles: [ { role: 'root', db: 'admin' } ] });
db.auth({ user: 'admin', pwd: 'admin' })
rs.add({host: "mongo2_replica1:27017", priority: 1})
rs.add({host: "mongo3_replica1:27017", priority: 1})
```
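To check the replica set state non-interactively from the host (assuming the admin/admin user created above), something like:

```shell
# Print each member's name and state via rs.status().
docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=database_mongo1_replica1) \
  mongo -u admin -p admin --authenticationDatabase admin \
  --eval 'rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr); })'
```

Once all three members are added, the output should show one PRIMARY and two SECONDARY entries.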

Connect to the config server:

```shell
docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=database_mongo1_config) mongo
```

Run the following commands:

```
use admin
rs.initiate({ _id: "config1", members: [ { _id: 1, host: "mongo1_config:27017", priority: 1 } ] });
db.createUser({ user: 'admin', pwd: 'admin', roles: [ { role: 'root', db: 'admin' } ] });
db.auth({ user: 'admin', pwd: 'admin' })
rs.add({host: "mongo2_config:27017", priority: 1})
rs.add({host: "mongo3_config:27017", priority: 1})
```

Connect to the mongos proxy:

```shell
docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=database_mongo1) mongo
```

Run the following commands:

```
use admin
db.auth({ user: 'admin', pwd: 'admin' })
sh.addShard("replica1/mongo1_replica1")
```
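You can verify the shard registration through the listShards admin command (a sketch, reusing the same admin credentials):

```shell
# Ask the mongos which shards are registered; "replica1" should appear.
docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=database_mongo1) \
  mongo -u admin -p admin --authenticationDatabase admin \
  --eval 'printjson(db.adminCommand({ listShards: 1 }))'
```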

Create a test database:

```
use test
db.createCollection('test');
sh.enableSharding('test');
```
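Note that sh.enableSharding only marks the database as shardable; each collection still needs a shard key before its data is distributed across shards. A minimal sketch, assuming a hashed _id key is acceptable for this test collection:

```shell
# Shard the test.test collection on a hashed _id key via the mongos.
docker exec -it $(docker ps -qf label=com.docker.swarm.service.name=database_mongo1) \
  mongo -u admin -p admin --authenticationDatabase admin \
  --eval 'sh.shardCollection("test.test", { _id: "hashed" })'
```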

Installing Mongo Adminer

Create a mongo_adminer.yaml file:

```yaml
version: "3.7"

services:

  mongo_adminer:
    image: bayrell/alpine_mongo_mysql_adminer:1.0-1
    hostname: "{{.Service.Name}}.{{.Task.ID}}.local"
    environment:
      MONGO_CONFIG: >
        [
          {
            "mongo_name": "Mongo1",
            "mongo_host": "mongodb://mongo1",
            "mongo_port": "",
            "mongo_timeout": 0,
            "mongo_auth": true
          },
          {
            "mongo_name": "replicaSet1",
            "mongo_options": { "replicaSet": "replica1" },
            "mongo_host": "mongodb://mongo1_replica1,mongo2_replica1,mongo3_replica1",
            "mongo_port": "",
            "mongo_timeout": 0,
            "mongo_auth": true
          },
          {
            "mongo_name": "configServer1",
            "mongo_options": { "replicaSet": "config1" },
            "mongo_host": "mongodb://mongo1_config,mongo2_config,mongo3_config",
            "mongo_port": "",
            "mongo_timeout": 0,
            "mongo_auth": true
          }
        ]
    volumes:
      - "mongo_adminer_php:/data"
    deploy:
      replicas: 1
      endpoint_mode: dnsrr
      update_config:
        parallelism: 1
        failure_action: rollback
        delay: 5s
      restart_policy:
        condition: "on-failure"
        delay: 10s
        window: 120s
      placement:
        constraints:
          - node.labels.name == docker0
    networks:
      - cloud_backend
    logging:
      driver: journald

volumes:
  mongo_adminer_php:

networks:
  cloud_backend:
    external: true
```

Deploy it:

```shell
docker stack deploy -c mongo_adminer.yaml database --with-registry-auth
```
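As with the Mongo stack, you can check that the service started (a quick sketch):

```shell
# Show the adminer task's state and the node it was scheduled on.
docker service ps database_mongo_adminer
```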

Useful commands

Initialize a replica set:

```
rs.initiate()
```

Add a host to the replica set:

```
rs.add("host:27001")
```

Add a host with an explicit priority:

```
rs.add({host: "host:27017", priority: 1})
```

Remove a host from the replica set:

```
rs.remove("host:27001")
```

Add an arbiter:

```
rs.addArb("host:27001")
```

Change member priorities:

```
cfg = rs.conf()
cfg.members[0].priority = 2
cfg.members[1].priority = 3
cfg.members[2].priority = 1
rs.reconfig(cfg, {force: true})
```

The higher the priority value, the more likely that member is to be elected Primary; a priority of 0 means the member can never become Primary.