I was wondering if it makes sense to use Gravity to simply deploy an empty k8s cluster to begin working with?
Absolutely. While a lot of the features are geared toward packaging applications, nothing stops you from using an empty cluster. We also publish an empty Gravity cluster image, so you don't need to touch the build tooling at all. With the OSS tele you should be able to run
tele ls / tele ls --all to list the available images, and
tele pull gravity:<version> to download an installer that's effectively an empty app.
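Putting that together, a minimal session might look like this (a sketch: the exact output and image names depend on what the hub publishes, and the unpack-and-install step at the end follows the standard Gravity install flow rather than anything specific to the thread):

```shell
# List the cluster images published in the Gravitational hub
tele ls
# Include every published version rather than just the latest
tele ls --all

# Pull the empty cluster image; substitute a version from the listing above
tele pull gravity:<version>

# The result is a self-contained installer tarball: unpack it on a target
# node and run the bundled gravity install to stand up the empty cluster
```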
How does upgrading work?
We write all the automation around upgrading Gravity itself as a platform, and expose application hooks for upgrading the deployed software. In an empty cluster the hooks can simply be left out, and Gravity will only upgrade the components that make up Gravity itself: https://gravitational.com/gravity/docs/cluster/#updating-a-cluster
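For reference, an upgrade hook for a packaged app is declared in the cluster image manifest as a Kubernetes Job; a sketch along the lines of the Gravity manifest schema (the image name and script here are hypothetical), which you would omit entirely for an empty cluster:

```yaml
# app.yaml (cluster image manifest) -- hooks section sketch
hooks:
  update:
    job: |
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: app-update
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
              - name: update
                image: quay.io/example/update-hook:1.0.0  # hypothetical image
                command: ["/usr/local/bin/upgrade.sh"]    # hypothetical script
```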
I’m assuming that if I manually deploy pods and services after the cluster is initialised, the upgrade is non-destructive and won’t re-roll the cluster as a new deployment?
Correct. A Gravity cluster upgrade is an online rolling upgrade, so in theory the application should remain available throughout. The exception is etcd, which is only upgraded when required: our current method of etcd upgrades takes etcd and the Kubernetes API offline temporarily, but all pods and services should remain online. Some customers do, for various reasons, choose to scale down all deployments as part of their upgrade process, but that’s up to them.
As Gravity depends on the use of etcd, I was wondering if it can play nicely with lower-powered nodes, e.g. 2 vCPU, 8 GB RAM and a shared SSD?
This often doesn’t work well: etcd tends to struggle when the disk isn’t dedicated or there aren’t enough IOPS available. That doesn’t mean it absolutely won’t work, but it is often a struggle, and it isn’t something we really support. Our recommendations around etcd IO are available here: https://gravitational.com/gravity/docs/requirements/#etcd-disk
Recent releases of Gravity test the etcd disk IO at install time. Note that on a shared disk the installation test may pass, yet other services hitting the same disk can still cause IO latency inside etcd later.
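As a rough sanity check outside the installer, you can probe synchronous write latency yourself. The linked requirements page is the authority on the actual thresholds; dd with oflag=dsync is just a coarse stand-in for a proper fio benchmark, and the block count here is illustrative:

```shell
# Run from a directory on the disk that will back etcd (e.g. its state dir).
# dd writes 512-byte blocks with a synchronous flush after each one -- the
# same pattern as etcd's write-ahead log -- and reports throughput at the end.
dd if=/dev/zero of=./etcd-disk-test bs=512 count=1000 oflag=dsync
rm -f ./etcd-disk-test
```

If the reported rate is very low, or other workloads on the shared disk make it erratic, that is the latency etcd will see.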
Finally, I assume it should be possible to add manifests to the cluster build to make deployment a bit easier, e.g. traefik, postgresql-operator, etc.?
Yep. Even though it’s not a complete application as such, it is common to use the Gravity concept of a cluster image as a base set of services to be available within the cluster. These services are installed at cluster install time, but only provide the base infrastructure. In essence, your app (what we call a cluster image) is just Gravity plus the base services, with anything else deployed later.
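As a sketch, such a base cluster image is just a directory holding a manifest plus the extra resources, built with tele build. The directory names are hypothetical; the manifest fields follow the Gravity cluster image manifest schema, with the name and version illustrative:

```yaml
# Layout (hypothetical names):
#
#   base-cluster/
#     app.yaml      <- the manifest below
#     resources/    <- traefik, postgresql-operator manifests/charts
#
# app.yaml -- cluster image manifest sketch
apiVersion: cluster.gravitational.io/v2
kind: Cluster
metadata:
  name: base-cluster
  resourceVersion: 1.0.0
```

Running tele build against that app.yaml would then produce a single installer containing Gravity plus the base services, ready for anything else to be deployed on top later.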