"Failed to check time drift" error

I have a 2-node cluster and it went degraded with the following error. Time appears to be in sync on both nodes, so what can we look into to fix it? Also, is it possible to fail the installation if NTP is not configured?

```json
{
  "checker": "time-drift",
  "detail": "failed to check time drift",
  "status": "failed",
  "error": "rpc error: code = DeadlineExceeded desc = context deadline exceeded"
}
```

Yes, we generally recommend keeping clocks in sync, as clock drift can cause problems within the cluster that are particularly hard to detect. The root of the issue is in etcd: when a key is created with an expiry (TTL), the expiration is based on the wall clock, so on a node whose clock runs fast the key can expire much earlier than expected. We use expiring etcd keys for things like leader elections, so if a key expires before it is renewed, lots of problems follow.
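To illustrate the failure mode, here is a small Python sketch (purely illustrative, not Gravity or etcd code): a TTL anchored to the local wall clock looks very different to a node whose clock runs ahead.

```python
def expiry_time(now, ttl_seconds):
    """A key expires at its creation wall-clock time plus its TTL."""
    return now + ttl_seconds

# Node A (clock correct) creates a key with a 10s TTL at wall-clock t=1000.
true_now = 1000.0
expires_at = expiry_time(true_now, 10)

# Node B's wall clock runs 8 seconds ahead of node A's.
drift = 8.0
node_b_now = true_now + drift

# From node B's point of view the key has only ~2s of life left, so it may
# treat the key as expired long before the holder gets a chance to renew it.
remaining_from_b = expires_at - node_b_now
print(remaining_from_b)  # 2.0
```

An election key held this way can appear to lapse on a drifted node even though the leader is healthy, which is exactly the kind of hard-to-diagnose failure mentioned above.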

This particular check is hosted within the gravity-agent process running inside planet, so I would start by checking that process on each node for errors or warnings in the systemd journal. "Context deadline exceeded" means the check failed because it took too long to execute; this could also indicate that the peer is unavailable, that the gRPC port (I think it’s port 7575) is blocked between nodes, etc.
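As a quick way to rule out the blocked-port case, a plain TCP connect with a timeout is usually enough; this sketch assumes port 7575 is the agent's gRPC port, as noted above.

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, host unreachable, and timeout.
        return False

# Example: run on one node against the other's address.
# port_reachable("10.0.0.2", 7575)
```

If this returns False from one node to the other while the agent is running, a firewall or security-group rule between the nodes is the likely culprit.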