Gravity pod security policies


#1

Ported from support chat

Question: When Gravity is installing applications into the cluster, it’s doing so via the default service account, right?

I’m currently seeing “StatefulSet failed error: pods “-0” is forbidden: unable to validate against any pod security policy: []”. It’s probably because we moved everything out of kube-system and into our own namespace.


#2

Gravity clusters are locked down by default, so you need to either define your own PodSecurityPolicy or check whether one of the default policies that Gravity ships will work for you: https://gravitational.com/gravity/docs/cluster/#pod-security-policies
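
If your workloads only need permission to use an existing policy, binding the namespace's service accounts to it can be enough. Below is a minimal sketch, assuming a policy named privileged already exists in the cluster (see the defaults in the docs linked above) and using my-namespace as a placeholder for your namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged-user
rules:
- apiGroups:
  - policy
  resourceNames:
  - privileged      # assumed name of an existing PodSecurityPolicy
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
# Grant the role to every service account in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-privileged-user
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:my-namespace
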
If you need a custom policy instead, you will need to define the policy itself plus a service account, a cluster role, and a binding. Please check this example of a deployment of kube-router with a custom PSP:

# Custom policy: permits privileged pods with host networking and NET_ADMIN.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: kube-router
spec:
  fsGroup:
    rule: RunAsAny
  hostNetwork: true
  hostPorts:
  - max: 65535
    min: 1024
  privileged: true
  allowedCapabilities:
  - NET_ADMIN
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
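# Service account the kube-router pods run as.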
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system
---
# Cluster role granting permission to use the policy defined above.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-router
rules:
- apiGroups:
  - policy
  resourceNames:
  - kube-router
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
# Bind the cluster role to the kube-router service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system
---
# The DaemonSet runs under the service account above so its pods pass PSP admission.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: kube-router
    tier: node
  name: kube-router
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-router
      tier: node
  template:
    metadata:
      labels:
        k8s-app: kube-router
        tier: node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      containers:
      - name: kube-router
        securityContext:
          runAsUser: 0
          capabilities:
            add: ["NET_ADMIN"]
        image: cloudnativelabs/kube-router
        imagePullPolicy: Always
        args:
        - --run-router=false
        - --run-firewall=true
        - --run-service-proxy=false
        - --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: kubeconfig
          mountPath: /etc/kubernetes/scheduler.kubeconfig
          readOnly: true
        - name: state
          mountPath: /var/state
          readOnly: true
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - hostPath:
          path: /lib/modules
        name: lib-modules
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/scheduler.kubeconfig
      - name: state
        hostPath:
          path: /var/state
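
After applying the manifests, you can check that the service account is actually allowed to use the policy by impersonating it; kube-router.yaml below is a placeholder file name:

kubectl apply -f kube-router.yaml
kubectl auth can-i use podsecuritypolicies/kube-router --as=system:serviceaccount:kube-system:kube-router

If that returns yes, PSP admission should accept the DaemonSet's pods.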