Problems with general cluster configuration

I cannot seem to change the podCIDR or serviceCIDR as laid out in the docs example here: https://gravitational.com/gravity/docs/config/#general-cluster-configuration

While I’ve successfully updated the runtime environment, I can’t get the cluster configuration changes to take effect. I’ve used the examples from the docs, but the podCIDR and serviceCIDR don’t reflect my config file. Am I doing something wrong?

I’m running sudo ./gravity install --config=cluster-config.yaml

My cluster-config.yaml file:


---
kind: ClusterConfiguration
version: v1
spec:
  global:
    # represents the IP range from which to assign service Cluster IPs
    serviceCIDR: 10.233.0.0/16
    # CIDR range for Pods in Cluster
    podCIDR: 10.234.0.0/16
---
kind: RuntimeEnvironment
version: v1
spec:
  data:
    KUBE_PROXY_FLAGS: "--proxy-mode=ipvs"

I see the apiserver still using --service-cluster-ip-range=10.100.0.0/16 and the kube-controller-manager still using --cluster-cidr=10.244.0.0/16.
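For reference, this is how I’m checking the flags of the running processes on the node (your process listing may look different depending on the environment):

[centos@node01 ~]$ ps auxww | grep -oE -- '--(service-cluster-ip-range|cluster-cidr)=[0-9./]+'
--service-cluster-ip-range=10.100.0.0/16
--cluster-cidr=10.244.0.0/16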

In my associated cluster-configuration configmap I only see the following:

[centos@node01 atom]$ kubectl describe configmap -n kube-system cluster-configuration
Name:         cluster-configuration
Namespace:    kube-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"spec":"{\"kind\":\"RuntimeEnvironment\",\"version\":\"v1\",\"metadata\":{\"name\":\"cluster-configuration\"},\...

Data
====
spec:
----
{"kind":"RuntimeEnvironment","version":"v1","metadata":{"name":"cluster-configuration"},"spec":{}}
Events:  <none>

I think the serviceCIDR / podCIDR settings shown there are a bug in the documentation. Those values shouldn’t be controllable via the ClusterConfiguration resource, and I don’t believe they can be safely changed after install, which is what ClusterConfiguration is also used for. Those keys most likely got included as an artifact of exposing the underlying Kubernetes config.

Instead, the CLI installer accepts two flags for setting the IP ranges, --pod-network-cidr and --service-cidr, which control the flags passed to the Kubernetes services as well as our internal networking state.
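For example, something along these lines should do what you want (reusing the ranges from your config above):

sudo ./gravity install --pod-network-cidr=10.234.0.0/16 --service-cidr=10.233.0.0/16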

Thanks Kevin, I’ll use these instead. Why must both the pod network and service CIDRs be /16 or larger? We were planning to use /18s, for example.

I don’t believe these need to be /16s.

The pod CIDR needs to be large enough to allocate a /24 per node in the cluster.
The service CIDR needs to be large enough to allocate an IP per service within the cluster; it simply defaults to a /16.
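To make that concrete: with a /24 allocated per node, a /18 pod CIDR contains 2^(24-18) = 64 node-sized subnets, so it would cap the cluster at 64 nodes (a /16 allows 256). A /18 service CIDR gives roughly 16,000 service IPs, which should be plenty for most clusters.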

Install will throw an error (I forget if it comes from Planet or elsewhere) if these CIDRs are smaller than a /16. Perhaps this is something that can be looked into and fixed if necessary?

Yeah, we can dig in if you share the specific errors and steps to reproduce.