
Migration plan:

* add zk* to emergency
* copy the data files on each node to a safe place for a DR backup
* make a JSON data backup:
  zk-shell localhost:2181 --run-once 'mirror / json://!tmp!zookeeper-backup.json/'
* manually run a modified playbook to set up the Docker infra without
  starting containers
* rolling restart; for each node (sketched below):
  * stop zk
  * split the data and log files and move them to their new locations
  * remove the zk packages
  * start the zk container
* remove from emergency; land this change

Change-Id: Ic06c9cf9604402aa8eb4bb79238021c14c5d9563
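A minimal sketch of the per-node rolling-restart steps. Everything here is an assumption for illustration, not taken from the actual playbook: a systemd-managed "zookeeper" service, the Debian package layout under /var/lib/zookeeper with config at /etc/zookeeper/conf/zoo.cfg, /srv/zookeeper/{data,datalog} as the new host-side locations, and the official zookeeper image run with host networking:

  # Hypothetical per-node sequence; service name, paths, and image tag
  # are assumptions, not taken from the playbook.
  systemctl stop zookeeper

  # Split data and log files: with a single dataDir, snapshots
  # (snapshot.*) and transaction logs (log.*) both live under the
  # version-2 subdirectory; the myid file sits directly in dataDir.
  mkdir -p /srv/zookeeper/data/version-2 /srv/zookeeper/datalog/version-2
  mv /var/lib/zookeeper/version-2/snapshot.* /srv/zookeeper/data/version-2/
  mv /var/lib/zookeeper/version-2/log.* /srv/zookeeper/datalog/version-2/
  cp /var/lib/zookeeper/myid /srv/zookeeper/data/

  apt-get remove --purge -y zookeeper

  # The official image is configured with volumes at /data and /datalog,
  # matching the dataDir/dataLogDir in the template below; the rendered
  # zoo.cfg is mounted over the image's config in /conf.
  docker run -d --name zookeeper --network host \
    -v /srv/zookeeper/data:/data \
    -v /srv/zookeeper/datalog:/datalog \
    -v /etc/zookeeper/conf/zoo.cfg:/conf/zoo.cfg \
    zookeeper:3.4

For the backup step, zk-shell's mirror should also work in the reverse direction to restore from the JSON file into a running ensemble ('mirror json://!tmp!zookeeper-backup.json/ /'), though that is worth verifying on a scratch cluster before relying on it for DR.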
[Django/Jinja template, 29 lines, 1.1 KiB]
dataDir=/data
dataLogDir=/datalog
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# When enabled, the ZooKeeper auto purge feature retains the
# autopurge.snapRetainCount most recent snapshots and the corresponding
# transaction logs in the dataDir and dataLogDir respectively and
# deletes the rest. Defaults to 3. Minimum value is 3.
autopurge.snapRetainCount=3
# The frequency in hours to look for and purge old snapshots;
# defaults to 0 (disabled). The number of retained snapshots is
# separately controlled through snapRetainCount, which defaults to
# the minimum value of 3. Old snapshots will quickly fill the disk
# in production if purging is not enabled. Works on ZK >= 3.4.
autopurge.purgeInterval=6
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
clientPort=2181
{% for host in groups['zookeeper'] %}
server.{{ loop.index }}={{ hostvars[host].ansible_default_ipv4.address }}:2888:3888
{% endfor %}
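For reference, with a hypothetical three-host zookeeper inventory group the loop above renders to something like this (the IP addresses are made up):

  server.1=10.0.0.11:2888:3888
  server.2=10.0.0.12:2888:3888
  server.3=10.0.0.13:2888:3888

Jinja's loop.index is 1-based, so each node's myid file must contain the N of its own server.N line; 2888 is the quorum port and 3888 the leader-election port.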