clusteradm join fails to add GKE cluster as managed cluster to kind hub and errors in OCM pod logs #958
Comments
To resolve the issue, the managed cluster is joined with a command like:
clusteradm join --force-internal-endpoint-lookup --wait --hub-token <hub-token> --hub-apiserver <hub-apiserver> --cluster-name <cluster-name> --context <managed-cluster-context>
After the OCM agent is running on your managed cluster, it will send a "handshake" to your hub cluster and wait for approval from the hub cluster admin. Follow these steps to accept the join request and verify the setup:
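A minimal sketch of that accept-and-verify flow, assuming the hub kubeconfig context is named kind-hub and the cluster was joined as cluster1 (both are placeholders for your own setup):

```bash
# On the hub: list pending join requests (CSRs created by the registration agent)
kubectl --context kind-hub get csr

# Accept the managed cluster's join request
clusteradm accept --clusters cluster1 --context kind-hub

# Verify the ManagedCluster resource exists and eventually reports JOINED/AVAILABLE
kubectl --context kind-hub get managedcluster
```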
The GKE cluster needs to reach out and connect to the hub API server, so that API address needs to be valid and reachable from the GKE cluster. The documentation probably needs to be fixed; it probably assumes that if you use a kind cluster as the hub, then your managed cluster is a kind cluster as well.
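One quick way to confirm this is to test whether the hub API server address is reachable from inside the managed cluster. A sketch, where <hub-apiserver> and <managed-cluster-context> are placeholders:

```bash
# From a throwaway pod on the GKE (managed) cluster, try to reach the hub API server.
# A kind hub running on a laptop will typically not be reachable from GKE,
# which is the root cause described above.
kubectl --context <managed-cluster-context> run reachability-test \
  --image=curlimages/curl --rm -it --restart=Never -- \
  curl -k --max-time 10 https://<hub-apiserver>/healthz
```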
Thanks! That was an oversight on my part. I was aware from colleagues using OCM that that flag tells the managed cluster how to contact the hub cluster, but I didn't realize that the hub and spoke wouldn't be able to communicate with each other in my setup, since my hub cluster was running on my laptop. I created #959 after running through it again, this time with two GKE clusters (I assumed they would be able to reach each other's control planes since I can reach either via public IP from my computer), and getting a different error message. I believe the way to close out this issue would be, as you proposed, added documentation in the getting started section clarifying the network requirements and what happens if someone uses kind for one or all of the clusters.
Describe the bug
I was following the pages in the docs on setting up a hub (https://open-cluster-management.io/docs/getting-started/installation/start-the-control-plane/) and registering a cluster (https://open-cluster-management.io/docs/getting-started/installation/register-a-cluster/). My goal was a running hub with a GKE cluster joined to it as a managed cluster. I didn't care where the hub was running (only that the managed cluster was GKE), so I used kind for it.
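For context, the hub setup in those docs boils down to roughly the following. A sketch; the kind cluster name hub is my own placeholder:

```bash
# Create a local kind cluster to act as the hub
kind create cluster --name hub

# Initialize OCM on the hub; this prints the clusteradm join command
# (including the hub token) to run against each managed cluster
clusteradm init --wait --context kind-hub
```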
The clusteradm join command ended up making no progress, with the message "Waiting for klusterlet agent to become ready... (UnavailablePods)" displayed, and I saw errors in two OCM pod logs. See the reproduction steps below for details.
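When the join hangs like this, the klusterlet status and agent pods on the managed cluster are the first place to look. A sketch, assuming the default OCM namespaces and the default klusterlet resource name:

```bash
# On the managed (GKE) cluster: check the klusterlet's reported conditions
kubectl --context <managed-cluster-context> get klusterlet klusterlet -o yaml

# List the agent pods; "UnavailablePods" usually means some of these are not Ready
kubectl --context <managed-cluster-context> -n open-cluster-management get pods
kubectl --context <managed-cluster-context> -n open-cluster-management-agent get pods
```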
To Reproduce
Steps to reproduce the behavior:
1. Set up the kind hub per the linked docs and copy the generated clusteradm join command with your token etc. in it.
2. Run the clusteradm command from the previous step, but with --force-internal-endpoint-lookup added to it (as the page says you should do since the hub is a kind cluster). Example command is as follows:
   clusteradm join --force-internal-endpoint-lookup --wait --hub-token <hub-token> --hub-apiserver <hub-apiserver> --cluster-name <cluster-name> --context <managed-cluster-context>
3. Observe the errors in the logs of the pod named klusterlet-registration-agent-… (see the log-check sketch after this list).
4. Observe the errors in the logs of the pod named klusterlet-….
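A sketch of the log checks from steps 3 and 4, assuming the default OCM namespaces and that klusterlet-… refers to the klusterlet operator deployment (pod name suffixes differ per install):

```bash
# Registration agent logs (runs in the open-cluster-management-agent namespace)
kubectl --context <managed-cluster-context> -n open-cluster-management-agent \
  logs deploy/klusterlet-registration-agent

# Klusterlet operator logs (runs in the open-cluster-management namespace)
kubectl --context <managed-cluster-context> -n open-cluster-management \
  logs deploy/klusterlet
```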
Expected behavior
The clusteradm join command to complete, the GKE cluster to be successfully joined to the hub cluster, and at least one managedcluster resource to exist in the hub cluster after these steps finish.
Environment ie: OCM version, Kubernetes version and provider:
Additional context