When you are importing or re-testing a project from a Kubernetes integration in the Snyk UI, you might see several issues.
Some common errors and suggested resolutions are described below.
No workloads showing at all
If this is the case, there will most likely be an issue with your integration setup. Please check the following:
- You have at least the minimum prerequisites set for snyk-monitor
- You are running the latest version of snyk-monitor
- If you are running snyk-monitor v1.x, you need to upgrade. This version was deprecated as of 11 April 2023 and will no longer work. Upgrade to v2.x following the upgrade instructions for your platform
- If you face issues when upgrading to v2.x, ensure that you have removed the old secret, run a force-update, and created a new secret.
- The Service Account used for the integration must have Admin credentials (either Group or Org level), or if using a custom role, must have the Publish Kubernetes Resources permission. Without the correct credentials, the snyk-monitor logs will show 403 Forbidden errors.
- Only set the `integrationApi` environment variable if you are accessing the MT-EU or MT-AU regional data centres. Setting this when trying to access the default US data centre (if you log in to app.snyk.io) will result in 403 Forbidden errors.
- If you are using a private container registry, ensure that you have configured a dockercfg.json and created a secret correctly, according to the private container registry authentication instructions.
- Workloads will only show once the images have been scanned, which can take a short while. The first workload should be visible within an hour of completing setup and deploying your snyk-monitor.
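As a sketch of the private container registry check above: the dockercfg.json must be valid JSON, and the credentials inside it must be base64-encoded with no trailing newline. The registry host, username, and password below are hypothetical placeholders, and the `kubectl create secret` line is only printed as an assumed general shape; follow the private container registry authentication instructions for the exact secret name and flags.

```shell
# Sketch: build a dockercfg.json for a hypothetical private registry.
# registry.example.com, "user", and "pass" are placeholder values.
# printf (rather than echo) guarantees no newline enters the encoding;
# tr strips the line wrap that base64 itself appends.
AUTH=$(printf '%s' 'user:pass' | base64 | tr -d '\n')

cat > dockercfg.json <<EOF
{
  "auths": {
    "registry.example.com": {
      "auth": "${AUTH}"
    }
  }
}
EOF

# Sanity-check: the file must parse as JSON before it goes into the secret.
python3 -m json.tool dockercfg.json > /dev/null && echo "dockercfg.json is valid JSON"

# The secret would then be created along these lines (printed, not executed):
echo "kubectl create secret generic snyk-monitor -n snyk-monitor --from-file=dockercfg.json"
```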
Only some workloads or namespaces are showing
- Depending on how many workloads are running in the cluster and how frequently new images are deployed, it can take hours for a full cluster scan to complete.
- Check that what you are expecting to scan falls within the supported workloads, registries, languages, and operating systems for the integration.
- When using the automatic import/delete option:
  - `job` and `pod` type workloads are excluded by default. See Advanced use of automatic import/deletion
  - Certain namespaces are omitted by default. See Configuring excluded namespaces
Project last retest date is over 7 days ago
In order to maintain the health of the database, any information that relates to a workload that has not been changed or updated for eight (8) days will be removed. This can lead to a failure when retesting the workload (see Kubernetes integration overview).
Updates are not showing
- Have the changes been deployed to the same image tag?
- Does reimporting the image resolve the issue?
The Kubernetes integration defaults to manual import/delete via the Snyk UI. If required, you can configure Automatic import/deletion of Kubernetes workloads.
To troubleshoot issues with the automatic import/delete function:
- Check that you have either `--set policyOrgs={<org-id>}` or `--set workloadPoliciesMap=snyk-monitor-custom-policies` in the install command.
- `job` and `pod` type workloads are excluded from automatic import/delete by default. See Advanced use of automatic import/deletion
- Restarting the snyk-monitor can be helpful to troubleshoot if the custom policy does not appear to be applying.
To obtain logs, run `kubectl logs deployment.apps/snyk-monitor -n snyk-monitor > logs.txt`
You can compare your logs with the following issues:
Authentication is required
unable to retrieve auth token: invalid username/password: unknown: Authentication is required
- If the issues are specific to a private container registry, or you see a message such as the one above in the snyk-monitor logs, check that your dockercfg.json is correctly configured according to the private container registry authentication instructions and contains no formatting or newline errors.
- When Base64-encoding credentials, be careful not to include any newlines in the text, as these break authentication. When encoding base64 auth details, use the `-n` flag to stop whitespace from being included, for example `echo -n <USERNAME>:<PASSWORD> | base64`
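To see concretely why the `-n` flag matters, compare the two encodings below (run under bash); `user:pass` stands in for real credentials.

```shell
# Without -n, echo appends a newline, which gets baked into the encoded value:
WITH_NEWLINE=$(echo 'user:pass' | base64)
# With -n, the credentials are encoded exactly as written:
WITHOUT_NEWLINE=$(echo -n 'user:pass' | base64)

echo "$WITH_NEWLINE"     # dXNlcjpwYXNzCg== (trailing newline included)
echo "$WITHOUT_NEWLINE"  # dXNlcjpwYXNz
```

The registry sees the first value as `user:pass\n`, so authentication fails even though the credentials themselves are correct.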
Manifest unknown
Error response from daemon: manifest for <id> not found: manifest
unknown: manifest unknown
- This may happen especially with public images, where the image SHA is no longer available.
- Check whether the image can be pulled by name and SHA (found in the logs under “id”) using `docker pull <imageName>@<id>`
- If the image cannot be pulled by name and SHA, try pulling the image by name and tag (found in the logs under “imageName”) using `docker pull <imageName>:<tag>`
- If the image cannot be pulled by name@sha, but can be pulled by name:tag:
  - the deployed image may have been deleted from the container registry
  - the image ID property on the workload may have been overwritten with a new SHA, for example when the image is loaded using “nerdctl”, which changes the SHA to something else
Occasionally it is useful to know which of your snyk-monitors are running particular versions. The Kubernetes integration page lists the snyk-monitors configured with the specified integration ID. Normally, each shows a version number, followed by the name of the cluster where it is running.
If the snyk-monitor version shows as unknown in the Kubernetes integration page, restart the pod.
If you need troubleshooting assistance with your Kubernetes integration, please provide the following:
- Snyk org ID (go to the org settings and copy the UUID, or provide the org URL)
- snyk-monitor logs (please do not trim the logs)
  - To obtain logs, run `kubectl logs deployment.apps/snyk-monitor -n snyk-monitor > logs.txt`
- Kubernetes platform and Kubernetes version - for example Amazon EKS, Google GKE, Azure AKS, or Red Hat OpenShift
- Your helm installation command/s, including any optional steps.
- If possible, your helm chart, with any secrets redacted
- If you are using a private registry, a copy of your dockercfg.json (with secrets redacted)
- A copy of your custom policy, if you are using one.
- Your helm chart version. To find it, run `helm search repo snyk-charts`
- Specifics about the issue that you are facing - for example which running workloads are not being detected, or a link to projects which are not retesting.