Using kube-bench to Run Security Checks on a Kubernetes Cluster
Background
About the tool
kube-bench is a security auditing tool for Kubernetes. At its core it is an application written in Go that helps researchers run security checks against a deployed Kubernetes cluster, following the CIS Kubernetes Benchmark.
The test rules are defined in YAML configuration files, so the tool's checks are easy to update and extend; a sketch of what such a rule looks like is shown below.
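As an illustration of that rule format, the following sketch mirrors the shape of a check definition as found in the cfg/ directory of the kube-bench repository. The field names follow those bundled files, but the audit command, file path, and remediation text here are simplified placeholders rather than the exact upstream definition.

---
controls:
version: "cis-1.8"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
  - id: 4.1
    text: "Worker Node Configuration Files"
    checks:
      - id: 4.1.1
        text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive"
        # Illustrative audit command; the bundled files resolve the service path through a variable.
        audit: 'stat -c permissions=%a /lib/systemd/system/kubelet.service'
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: "chmod 600 /lib/systemd/system/kubelet.service"
        scored: true

Because the rules are plain YAML, adjusting a threshold or adding a site-specific check is a matter of editing these files in the config directory that kube-bench reads.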
Notes
kube-bench implements the CIS Kubernetes Benchmark (published at https://www.cisecurity.org) as closely as possible. If kube-bench does not carry out a benchmark test correctly, please report the problem as an issue against the project.
There is no one-to-one mapping between Kubernetes versions and CIS Benchmark versions. Consult the CIS Kubernetes Benchmark support matrix to see which Kubernetes versions each benchmark release covers.
kube-bench cannot inspect the control-plane (master) nodes of managed clusters such as GKE, EKS, and AKS, because it has no access to those nodes. It can, however, still check the worker node configuration in these environments; see the example after these notes.
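For that managed-cluster case, a minimal invocation might look like the sketch below. The flag names are taken from the kube-bench documentation for recent releases, so confirm them with kube-bench --help for the image version you actually deploy; in the Job manifest in the next section these arguments would simply be appended to the container's command.

# Run only the worker-node checks and let kube-bench auto-detect the benchmark version:
kube-bench run --targets node

# Or pin an explicit benchmark version instead of relying on auto-detection:
kube-bench run --targets node --benchmark cis-1.8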
Getting started
According to the project documentation, the following steps are enough to get started quickly.
job.yaml (source: https://github.com/aquasecurity/kube-bench/blob/main/job.yaml)
---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      containers:
        - command: ["kube-bench"]
          image: docker.io/aquasec/kube-bench:v0.9.2
          name: kube-bench
          volumeMounts:
            - name: var-lib-cni
              mountPath: /var/lib/cni
              readOnly: true
            - mountPath: /var/lib/etcd
              name: var-lib-etcd
              readOnly: true
            - mountPath: /var/lib/kubelet
              name: var-lib-kubelet
              readOnly: true
            - mountPath: /var/lib/kube-scheduler
              name: var-lib-kube-scheduler
              readOnly: true
            - mountPath: /var/lib/kube-controller-manager
              name: var-lib-kube-controller-manager
              readOnly: true
            - mountPath: /etc/systemd
              name: etc-systemd
              readOnly: true
            - mountPath: /lib/systemd/
              name: lib-systemd
              readOnly: true
            - mountPath: /srv/kubernetes/
              name: srv-kubernetes
              readOnly: true
            - mountPath: /etc/kubernetes
              name: etc-kubernetes
              readOnly: true
            - mountPath: /usr/local/mount-from-host/bin
              name: usr-bin
              readOnly: true
            - mountPath: /etc/cni/net.d/
              name: etc-cni-netd
              readOnly: true
            - mountPath: /opt/cni/bin/
              name: opt-cni-bin
              readOnly: true
      hostPID: true
      restartPolicy: Never
      volumes:
        - name: var-lib-cni
          hostPath:
            path: /var/lib/cni
        - hostPath:
            path: /var/lib/etcd
          name: var-lib-etcd
        - hostPath:
            path: /var/lib/kubelet
          name: var-lib-kubelet
        - hostPath:
            path: /var/lib/kube-scheduler
          name: var-lib-kube-scheduler
        - hostPath:
            path: /var/lib/kube-controller-manager
          name: var-lib-kube-controller-manager
        - hostPath:
            path: /etc/systemd
          name: etc-systemd
        - hostPath:
            path: /lib/systemd
          name: lib-systemd
        - hostPath:
            path: /srv/kubernetes
          name: srv-kubernetes
        - hostPath:
            path: /etc/kubernetes
          name: etc-kubernetes
        - hostPath:
            path: /usr/bin
          name: usr-bin
        - hostPath:
            path: /etc/cni/net.d/
          name: etc-cni-netd
        - hostPath:
            path: /opt/cni/bin/
          name: opt-cni-bin
$ kubectl apply -f job.yaml
job.batch/kube-bench created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-bench-j76s9 0/1 ContainerCreating 0 3s
# Wait for a few seconds for the job to complete
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-bench-j76s9 0/1 Completed 0 11s
# The results are held in the pod's logs
$ kubectl logs kube-bench-j76s9
[INFO] 1 Master Node Security Configuration
[INFO] 1.1 API Server
...
Running the Job against my cluster produced the following output:
[root@master01 ~]# kubectl logs --since=1h kube-bench-ggsjc
[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
[FAIL] 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
[PASS] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
[WARN] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)
[WARN] 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)
[PASS] 4.1.5 Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)
[PASS] 4.1.6 Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)
[WARN] 4.1.7 Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)
[PASS] 4.1.8 Ensure that the client certificate authorities file ownership is set to root:root (Manual)
[FAIL] 4.1.9 If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)
[PASS] 4.1.10 If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)
[INFO] 4.2 Kubelet
[PASS] 4.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
[PASS] 4.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 4.2.3 Ensure that the --client-ca-file argument is set as appropriate (Automated)
[PASS] 4.2.4 Verify that the --read-only-port argument is set to 0 (Manual)
[PASS] 4.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[PASS] 4.2.6 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[PASS] 4.2.7 Ensure that the --hostname-override argument is not set (Manual)
[PASS] 4.2.8 Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)
[WARN] 4.2.9 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)
[PASS] 4.2.10 Ensure that the --rotate-certificates argument is not set to false (Automated)
[PASS] 4.2.11 Verify that the RotateKubeletServerCertificate argument is set to true (Manual)
[WARN] 4.2.12 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)
[WARN] 4.2.13 Ensure that a limit is set on pod PIDs (Manual)
[INFO] 4.3 kube-proxy
[PASS] 4.3.1 Ensure that the kube-proxy metrics service is bound to localhost (Automated)
== Remediations node ==
4.1.1 Run the below command (based on the file location on your system) on the each worker node.
For example, chmod 600 /lib/systemd/system/kubelet.service
4.1.3 Run the below command (based on the file location on your system) on the each worker node.
For example,
chmod 600 /etc/kubernetes/proxy.conf
4.1.4 Run the below command (based on the file location on your system) on the each worker node.
For example, chown root:root /etc/kubernetes/proxy.conf
4.1.7 Run the following command to modify the file permissions of the
--client-ca-file chmod 600 <filename>
4.1.9 Run the following command (using the config file location identified in the Audit step)
chmod 600 /var/lib/kubelet/config.yaml
4.2.9 If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
/lib/systemd/system/kubelet.service on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
4.2.12 If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
/lib/systemd/system/kubelet.service on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
4.2.13 Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting.
== Summary node ==
16 checks PASS
2 checks FAIL
6 checks WARN
0 checks INFO
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[FAIL] 5.1.1 Ensure that the cluster-admin role is only used where required (Automated)
[FAIL] 5.1.2 Minimize access to secrets (Automated)
[FAIL] 5.1.3 Minimize wildcard use in Roles and ClusterRoles (Automated)
[FAIL] 5.1.4 Minimize access to create pods (Automated)
[FAIL] 5.1.5 Ensure that default service accounts are not actively used (Automated)
[FAIL] 5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Automated)
[WARN] 5.1.7 Avoid use of system:masters group (Manual)
[WARN] 5.1.8 Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)
[WARN] 5.1.9 Minimize access to create persistent volumes (Manual)
[WARN] 5.1.10 Minimize access to the proxy sub-resource of nodes (Manual)
[WARN] 5.1.11 Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)
[WARN] 5.1.12 Minimize access to webhook configuration objects (Manual)
[WARN] 5.1.13 Minimize access to the service account token creation (Manual)
[INFO] 5.2 Pod Security Standards
[WARN] 5.2.1 Ensure that the cluster has at least one active policy control mechanism in place (Manual)
[WARN] 5.2.2 Minimize the admission of privileged containers (Manual)
[WARN] 5.2.3 Minimize the admission of containers wishing to share the host process ID namespace (Manual)
[WARN] 5.2.4 Minimize the admission of containers wishing to share the host IPC namespace (Manual)
[WARN] 5.2.5 Minimize the admission of containers wishing to share the host network namespace (Manual)
[WARN] 5.2.6 Minimize the admission of containers with allowPrivilegeEscalation (Manual)
[WARN] 5.2.7 Minimize the admission of root containers (Manual)
[WARN] 5.2.8 Minimize the admission of containers with the NET_RAW capability (Manual)
[WARN] 5.2.9 Minimize the admission of containers with added capabilities (Manual)
[WARN] 5.2.10 Minimize the admission of containers with capabilities assigned (Manual)
[WARN] 5.2.11 Minimize the admission of Windows HostProcess containers (Manual)
[WARN] 5.2.12 Minimize the admission of HostPath volumes (Manual)
[WARN] 5.2.13 Minimize the admission of containers which use HostPorts (Manual)
[INFO] 5.3 Network Policies and CNI
[WARN] 5.3.1 Ensure that the CNI in use supports NetworkPolicies (Manual)
[WARN] 5.3.2 Ensure that all Namespaces have NetworkPolicies defined (Manual)
[INFO] 5.4 Secrets Management
[WARN] 5.4.1 Prefer using Secrets as files over Secrets as environment variables (Manual)
[WARN] 5.4.2 Consider external secret storage (Manual)
[INFO] 5.5 Extensible Admission Control
[WARN] 5.5.1 Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)
[INFO] 5.7 General Policies
[WARN] 5.7.1 Create administrative boundaries between resources using namespaces (Manual)
[WARN] 5.7.2 Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)
[WARN] 5.7.3 Apply SecurityContext to your Pods and Containers (Manual)
[WARN] 5.7.4 The default namespace should not be used (Manual)
== Remediations policies ==
5.1.1 Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role : kubectl delete clusterrolebinding [name]
Condition: is_compliant is false if rolename is not cluster-admin and rolebinding is cluster-admin.
5.1.2 Where possible, remove get, list and watch access to Secret objects in the cluster.
5.1.3 Where possible replace any use of wildcards ["*"] in roles and clusterroles with specific
objects or actions.
Condition: role_is_compliant is false if ["*"] is found in rules.
Condition: clusterrole_is_compliant is false if ["*"] is found in rules.
5.1.4 Where possible, remove create access to pod objects in the cluster.
5.1.5 Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.
5.1.6 Modify the definition of ServiceAccounts and Pods which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Condition: Pod is_compliant to true when
- ServiceAccount is automountServiceAccountToken: false and Pod is automountServiceAccountToken: false or notset
- ServiceAccount is automountServiceAccountToken: true notset and Pod is automountServiceAccountToken: false
5.1.7 Remove the system:masters group from all users in the cluster.
5.1.8 Where possible, remove the impersonate, bind and escalate rights from subjects.
5.1.9 Where possible, remove create access to PersistentVolume objects in the cluster.
5.1.10 Where possible, remove access to the proxy sub-resource of node objects.
5.1.11 Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
5.1.12 Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects
5.1.13 Where possible, remove access to the token sub-resource of serviceaccount objects.
5.2.1 Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
5.2.2 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
5.2.3 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
5.2.4 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
5.2.5 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
5.2.6 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
5.2.7 Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
5.2.8 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
5.2.9 Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
5.2.10 Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
5.2.11 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
5.2.12 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
5.2.13 Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
5.3.1 If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
5.3.2 Follow the documentation and create NetworkPolicy objects as you need them.
5.4.1 If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
5.4.2 Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
5.5.1 Follow the Kubernetes documentation and setup image provenance.
5.7.1 Follow the documentation and create namespaces for objects in your deployment as you need
them.
5.7.2 Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
  seccompProfile:
    type: RuntimeDefault
5.7.3 Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
5.7.4 Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
== Summary policies ==
0 checks PASS
6 checks FAIL
29 checks WARN
0 checks INFO
== Summary total ==
16 checks PASS
8 checks FAIL
35 checks WARN
0 checks INFO
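To act on the two node-level FAIL findings, one straightforward path is to apply the chmod remediations that kube-bench itself printed and then re-run the Job to confirm they now pass. A sketch, with the file paths taken from the remediation output above:

# On each worker node, tighten the two files flagged as FAIL (4.1.1 and 4.1.9):
chmod 600 /lib/systemd/system/kubelet.service
chmod 600 /var/lib/kubelet/config.yaml

# Re-create the Job and read the new report:
kubectl delete -f job.yaml
kubectl apply -f job.yaml
kubectl logs job/kube-bench

The policy findings in section 5 are mostly Manual checks; they call for reviewing ClusterRoleBindings, service account token mounts, NetworkPolicies, and Pod Security admission settings rather than running a single command.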