This is safe to ignore and merely indicates that the etcd bootstrapping is still in progress.
The easiest way to get more debugging information from the installer is to check the log file (`.openshift_install.log`) in the install directory. Regardless of the logging level specified, the installer will write its logs in case they need to be inspected retroactively.
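For example, you can surface error-level entries with `grep` (a sketch; the sample log line below is hypothetical, and the exact logrus-style fields may vary by installer version):

```shell
# Demonstrate the filter on a hypothetical sample line; in practice the
# file to search is ${INSTALL_DIR}/.openshift_install.log.
printf '%s\n' \
  'time="t0" level=info msg="Fetching install config..."' \
  'time="t1" level=error msg="bootstrap did not complete"' > sample_install.log
grep 'level=error' sample_install.log
```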
### Installer Fails to Initialize the Cluster
The installer uses the [cluster-version-operator] to create all the components of an OpenShift cluster. When the installer fails to initialize the cluster, the most important information can be fetched by looking at the [ClusterVersion][clusterversion] and [ClusterOperator][clusteroperator] objects:
1. Inspecting the `ClusterVersion` object.
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get clusterversion -oyaml
```
Some of the most important [conditions][cluster-operator-conditions] to note are `Failing`, `Available` and `Progressing`. You can look at the conditions using:
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get clusterversion version -o=jsonpath='{range .status.conditions[*]}{.type}{" "}{.status}{" "}{.message}{"\n"}{end}'
Available True Done applying 4.0.0-0.alpha-2019-02-26-194020
Failing False
Progressing False Cluster version is 4.0.0-0.alpha-2019-02-26-194020
RetrievedUpdates False Unable to retrieve available updates: unknown version 4.0.0-0.alpha-2019-02-26-194020
```
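If you only want to flag conditions in an unexpected state, the same per-line `TYPE STATUS MESSAGE` output can be filtered, for example with `awk` (a sketch; the sample lines below are hypothetical, with `Failing` flipped to `True` so the filter has something to match):

```shell
# Print conditions that indicate a problem: Failing reported True,
# or Available reported anything other than True.
printf '%s\n' \
  'Available True Done applying 4.0.0-0.alpha-2019-02-26-194020' \
  'Failing True ClusterOperator monitoring has not yet reported success' \
  'Progressing True Working towards 4.0.0-0.alpha-2019-02-26-194020' |
awk '($1 == "Failing" && $2 == "True") || ($1 == "Available" && $2 != "True")'
```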
2. Inspecting the `ClusterOperator` object.
You can get the status of all the cluster operators:
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get clusteroperator
```
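Each row of the table is one operator. To list only the operators that are not yet available, you can filter the table, e.g. (a sketch over hypothetical sample rows; it assumes the `AVAILABLE` status is the third column):

```shell
# Skip the header row, then print the name of every operator whose
# AVAILABLE column (field 3) is not "True".
printf '%s\n' \
  'NAME         VERSION                           AVAILABLE   PROGRESSING   FAILING' \
  'console      4.0.0-0.alpha-2019-02-26-194020   True        False         False' \
  'monitoring   4.0.0-0.alpha-2019-02-26-194020   False       True          True' |
awk 'NR > 1 && $3 != "True" { print $1 }'
```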
To get detailed information on why an individual cluster operator is `Failing` or not yet `Available`, check the status of that operator, for example `monitoring`:
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get clusteroperator monitoring -oyaml
```
Again, the cluster operators publish [conditions][cluster-operator-conditions] like `Failing`, `Available` and `Progressing` that provide information on the current state of the operator:
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get clusteroperator monitoring -o=jsonpath='{range .status.conditions[*]}{.type}{" "}{.status}{" "}{.message}{"\n"}{end}'
Available True Successfully rolled out the stack
Progressing False
Failing False
```
Each `ClusterOperator` also publishes the list of objects it owns. To get that information:
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get clusteroperator kube-apiserver -o=jsonpath='{.status.relatedObjects}'
```
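The command prints a JSON array of object references, each with fields such as `group`, `resource` and `name`. A sketch for pulling the resource types out of such a payload (the sample array below is hypothetical, standing in for real `relatedObjects` output):

```shell
# Extract every "resource" field from a relatedObjects-style JSON array
# using only grep; each match prints on its own line.
printf '%s' '[{"group":"","resource":"namespaces","name":"openshift-kube-apiserver"},{"group":"apps","resource":"deployments","name":"kube-apiserver-operator"}]' |
grep -o '"resource":"[^"]*"'
```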
**NOTE:** Failing to initialize the cluster is usually not fatal to cluster creation: you can use the failures reported by the `ClusterOperator` objects to debug the affected operator and take action that allows the `cluster-version-operator` to make progress.
### Installer Fails to Fetch Console URL
The installer fetches the URL for the OpenShift console using the [route][route-object] in the `openshift-console` namespace. If the installer fails to fetch the URL for the console:
1. Check if the console router is `Available` or `Failing`
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get clusteroperator console -oyaml
```
The installer adds the CA certificate for the router to the list of trusted client certificate authorities in `${INSTALL_DIR}/auth/kubeconfig`. If the installer fails to add the router CA to `kubeconfig`, you can fetch the router CA from the cluster using:
```console
$ oc --config=${INSTALL_DIR}/auth/kubeconfig get configmaps router-ca -n openshift-config-managed -o=jsonpath='{.data.ca-bundle\.crt}'
```
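Redirect that output to a file if other tools need to trust the router. For reference, kubeconfig files carry trusted CAs base64-encoded under `certificate-authority-data`; a minimal sketch of that encoding step, using a placeholder certificate rather than a real CA:

```shell
# Placeholder PEM standing in for the fetched router CA bundle.
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIB...placeholder...' '-----END CERTIFICATE-----' > router-ca.crt
# kubeconfig embeds CAs as a single base64 line under certificate-authority-data.
base64 < router-ca.crt | tr -d '\n'
```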