Live and let Learn
We’ve been able to run Kubeapps in a multi-cluster setup on various Kubernetes clusters for a while now, but this depended on the Kubeapps user being authenticated in a way that all the clusters trust. Until now, this meant configuring all the clusters to trust the same OIDC identity provider, which is not possible in some Kubernetes environments.
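For clusters where you do control the API server, this shared trust is typically established by pointing each cluster at the same OIDC issuer. As a sketch, with a kubeadm-managed cluster this could look like the following (the issuer URL and client ID are placeholders, not values from this post):

```yaml
# ClusterConfiguration fragment passed to kubeadm: each cluster that
# should trust the same identity provider gets the same OIDC flags.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://idp.example.com"   # hypothetical issuer
    oidc-client-id: "kubeapps"                   # hypothetical client ID
    oidc-username-claim: "email"
    oidc-groups-claim: "groups"
```

It is exactly this kind of API server configuration that is unavailable in managed environments, which is the gap described next.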
In particular, this meant we were unable to demonstrate multi-cluster Kubeapps with clusters created by Tanzu Mission Control, since we can’t specify API server options, such as OIDC configuration, when creating a cluster in TMC. But that requirement has now changed thanks to a new project called Pinniped.
This is part two of a series detailing the steps required to run Kubeapps on a VMware TKG management cluster (on AWS), configured to allow users to deploy applications to multiple workload clusters using the new multi-cluster support in Kubeapps. Though the details will differ, a similar configuration works for other, non-TKG multi-cluster setups as well.
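The multi-cluster support referred to above is enabled in the Kubeapps chart by listing the additional clusters in the chart values. A minimal sketch, assuming a second cluster named `team-cluster-1` (the names, URL, and CA data below are placeholders for illustration, not values from this series):

```yaml
# Kubeapps chart values fragment: the `clusters` option lists the
# clusters that users can target from the Kubeapps UI.
clusters:
  - name: default            # the cluster on which Kubeapps itself runs
  - name: team-cluster-1     # hypothetical additional workload cluster
    apiServiceURL: "https://team-cluster-1.example.com:6443"
    certificateAuthorityData: "<base64-encoded-CA-cert>"  # placeholder
```

With a configuration along these lines, the Kubeapps UI presents a cluster selector, and requests are sent to the chosen cluster's API server with the user's credentials.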
Andres and I have been doing quite a bit of feature work in Kubeapps over the past few months at VMware, and one of the key features that I’ve been working on personally is enabling Kubeapps users to deploy applications not only to the cluster on which Kubeapps is installed, but to multiple other clusters as well.
I recently purchased a Dell XPS13 (9365) (thanks to Bitnami, for whom I now work), which comes with Windows 10 preinstalled. I was aware when purchasing that suspend on Linux was not yet working (thanks, David Farrell; as of Aug 2017 suspend is fixed, see below), nor was other functionality (auto-rotate, pen integration, etc.), and so I was keen to have a few options for working on this machine.