Install apparmor when installing podman
The old install-docker upstream.yaml tasks installed apparmor for docker (it was originally a dependency; docker later removed it as an explicit dependency while still depending on it in practice, so we installed it manually). When we started deploying Noble nodes with podman via the install-docker role we didn't get apparmor, because podman doesn't appear to depend on it. However, when we got to production, the production images already come with apparmor, which includes profiles for things like podman and rsyslog that have caused problems for us when deploying services with podman.

Attempt to catch these issues in CI by explicitly installing apparmor. This should be a noop for production because apparmor is already installed there, and it should help us catch problems with podman in CI before we ever get to production. To ensure that apparmor is working properly, we capture apparmor_status output as part of our system-config-run job log collection.

Note we remove the zuul lb test for haproxy.log being present, as current apparmor problems with the rsyslogd profile prevent that file from being created on Noble. The next change will correct that issue and reinstate the test case.

Change-Id: Iea5966dbb2dcfbe1e51d9c00bad67a9d37e1b7e1
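For debugging the profile conflicts described above, a minimal sketch of a task that surfaces recent AppArmor denials (not part of this change; the grep pattern assumes the standard apparmor="DENIED" kernel audit format):

- name: Show recent AppArmor denials
  # Denials land in the kernel ring buffer with apparmor="DENIED";
  # the "|| true" ignores grep's non-zero exit when nothing matched.
  shell: dmesg | grep 'apparmor="DENIED"' || true
  become: yes
  register: apparmor_denials
  changed_when: false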
parent 15e0d6c7df
commit 170c003bc7
@@ -18,6 +18,10 @@
       # TODO do we need these extra tools?
       - buildah
       - skopeo
+      # Production nodes have apparmor but CI nodes don't. List it
+      # explicitly here to resolve the delta. The old docker upstream
+      # install path also installed apparmor.
+      - apparmor
     state: present

 - name: Disable docker daemon service
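For orientation, the hunk above sits inside a package-install task. Reconstructed from the visible context, it reads roughly like this (the task name and the leading podman entry are assumptions, not quoted from the file):

- name: Install podman and related tools  # name assumed
  package:
    name:
      - podman  # assumed from the surrounding role
      # TODO do we need these extra tools?
      - buildah
      - skopeo
      # Production nodes have apparmor but CI nodes don't. List it
      # explicitly here to resolve the delta. The old docker upstream
      # install path also installed apparmor.
      - apparmor
    state: present

Because the package module is idempotent, listing apparmor here is a noop on production images that already ship it.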
@@ -25,6 +25,11 @@
     - docker
     - podman

+- name: Get AppArmor Status
+  shell: 'apparmor_status | tee /var/log/apparmor_status'
+  become: yes
+  failed_when: false
+
 - include_role:
     name: stage-output

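The failed_when: false keeps log collection from aborting the job on hosts where apparmor_status is missing or exits non-zero. A shell-free sketch of the same capture, if piping through tee is ever undesirable (illustrative only, not part of this change):

- name: Get AppArmor Status
  command: apparmor_status
  become: yes
  register: aa_status
  failed_when: false

- name: Write AppArmor status where log collection picks it up
  copy:
    content: "{{ aa_status.stdout | default('') }}\n"
    dest: /var/log/apparmor_status
  become: yes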
@@ -32,10 +32,3 @@ def test_haproxy_statsd_running(host):
     out = json.loads(cmd.stdout)
     assert out[0]["State"]["Status"] == "running"
     assert out[0]["RestartCount"] == 0
-
-def test_haproxy_logging(host):
-    # rsyslog is configured to add a unix socket at this path
-    assert host.file('/var/lib/haproxy/dev/log').is_socket
-    # Haproxy logs to syslog via the above socket which produces
-    # this logfile
-    assert host.file('/var/log/haproxy.log').is_file
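Until the follow-up change lands, one way to confirm that the rsyslogd profile is what blocks /var/log/haproxy.log is to flip that profile into complain mode so denials are logged rather than enforced. A diagnostic sketch (assumes apparmor-utils is installed and the stock Ubuntu profile path; not part of this change):

- name: Put the rsyslogd AppArmor profile into complain mode
  # aa-complain ships in apparmor-utils; the profile path is the
  # standard Ubuntu one and is an assumption here.
  command: aa-complain /etc/apparmor.d/usr.sbin.rsyslogd
  become: yes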
@@ -13,6 +13,7 @@
   zuul_copy_output: "{{ copy_output | combine(host_copy_output | default({})) }}"
   stage_dir: "{{ ansible_user_dir }}/zuul-output"
   copy_output:
+    '/var/log/apparmor_status': logs_txt
     '/var/log/syslog': logs_txt
     '/var/log/messages': logs_txt
     '/var/log/exim4': logs
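The combine() above suggests host_copy_output is a per-host extension point, so individual hosts can stage extra files without touching the shared copy_output list. A hypothetical hostvars entry (illustrative only):

host_copy_output:
  # Hypothetical: stage the haproxy log on the load balancer once
  # the rsyslogd profile fix lets it be written again.
  '/var/log/haproxy.log': logs_txt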