Automatic Import of Managed Kubernetes
Guide to configure your source to automatically import your managed Kubernetes clusters
If you host your Kubernetes clusters in AWS, Azure, or GCP, you don't need to generate your config from scratch. If your cluster API is public and your credentials allow access, Hava can generate and import the configuration for you automatically.
Once the cluster configuration has been created, it will be listed as a sub-source of the primary source it's linked to. When the primary source is updated, your cluster resources will also be synced and your diagrams generated. If the cluster is ever removed, the source will automatically be removed as well.
Unfortunately, AWS IAM does not support granting a role or user access to EKS clusters from the parent account, so a configuration change has to be made in each cluster. To allow Hava access to your EKS clusters, you need to make sure that the user or role you use to import into Hava is added to the mapUsers section in the aws-auth ConfigMap within the cluster.
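As a sketch, the mapUsers entry might look like the following. The account ID, user name, and group here are hypothetical placeholders; substitute the IAM user or role your Hava source actually authenticates with, and bind it to whatever RBAC group grants the read access Hava needs.

```yaml
# Fragment of the aws-auth ConfigMap in the kube-system namespace.
# The ARN, username, and group below are placeholders for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/hava-import
      username: hava-import
      groups:
        - hava-readonly   # a group bound to read-only cluster RBAC
```

You can apply a change like this with `kubectl edit configmap aws-auth -n kube-system` against the cluster in question.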
To import your AKS clusters you'll simply need to add a role to the Service Principal you've used to import your data into Hava. This step is in the PowerShell instructions too, but if you created your Service Principal before this support was added, you simply need to log into PowerShell in the Azure Portal and run the following commands:
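A sketch of what such a role assignment looks like with the Az PowerShell module; the application ID, subscription scope, and role name here are assumptions (Hava needs at least enough rights to read AKS cluster credentials), so check Hava's own instructions for the exact role:

```powershell
# Hypothetical example: grant the Service Principal used by your Hava
# source read access to AKS cluster credentials. Replace the application
# ID and subscription ID with your own values.
New-AzRoleAssignment `
  -ApplicationId "00000000-0000-0000-0000-000000000000" `
  -RoleDefinitionName "Azure Kubernetes Service Cluster User Role" `
  -Scope "/subscriptions/<subscription-id>"
```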
Once you've added this role, simply re-sync your source in Hava to see container diagrams of your public AKS clusters.
With the default Project Reader role, your GKE clusters should be ready to import right away! As long as the cluster is public, or you allow access to your control plane via an external IP address, you should begin to see your clusters once your GCP source is imported.
If you have a limited-access service account, you just need to make sure you add the Kubernetes Engine Viewer role to that service account.
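Assuming you use the gcloud CLI, binding that role looks like the following; the project ID and service account name are placeholders for whatever account your Hava GCP source uses:

```shell
# Grant the Kubernetes Engine Viewer role (roles/container.viewer) to the
# service account your Hava GCP source authenticates with. The project ID
# and account name below are placeholders.
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:hava-import@my-project-id.iam.gserviceaccount.com" \
  --role="roles/container.viewer"
```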