1. How does the Snyk Controller work and how does it pull images?
The Snyk Controller has read-only access to Kubernetes workloads and container registries.
The Snyk Controller pulls images to disk, scans them, then deletes the images.
The Snyk Controller collects only the minimum relevant information to do vulnerability analysis: the list of dependencies in the image.
Private container registries are accessed using a Docker config file. In certain managed environments (AKS, GKE), credentials for private container registries can be obtained automatically using credential helpers; these helpers can also be configured in the Docker config file.
See the documentation for a more detailed explanation.
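As an illustration, a Docker config file for a private registry typically looks like the following. The registry host, the base64-encoded credentials, and the credential-helper mapping are placeholders, not values specific to Snyk; consult the documentation for the exact format the Controller expects.

```json
{
  "auths": {
    "registry.example.com": {
      "auth": "BASE64_ENCODED_USERNAME:PASSWORD"
    }
  },
  "credHelpers": {
    "gcr.io": "gcloud"
  }
}
```

The `auths` section holds static credentials per registry, while `credHelpers` delegates credential lookup to an external helper binary (here, the hypothetical use of the `gcloud` helper for GCR).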
2. What happens if the image has not changed between deployments? And where is the image metadata stored?
Snyk stores collected data in a database and periodically deletes data that has not been used in its vulnerability analysis. This data is used to detect changes to the workloads the next time the Snyk vulnerability analysis runs. Workloads are auto-imported and Snyk projects are updated immediately as Snyk receives the scan results.
See the documentation for a more detailed explanation of how we handle this.
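Conceptually, this change detection amounts to comparing the image digests recorded at the last scan against the digests currently deployed: an unchanged digest means no rescan is needed. The sketch below illustrates that idea only; the function and field names are invented and this is not Snyk's actual implementation.

```python
def images_to_rescan(stored_digests, running_workloads):
    """Return image references whose digest is new or has changed since the last scan.

    stored_digests:    mapping of image reference -> digest recorded at last scan
    running_workloads: mapping of image reference -> digest currently deployed
    """
    return sorted(
        image
        for image, digest in running_workloads.items()
        if stored_digests.get(image) != digest
    )

# An image with an unchanged digest is skipped; a changed or new one is rescanned.
stored = {"nginx:1.25": "sha256:aaa", "redis:7": "sha256:bbb"}
running = {"nginx:1.25": "sha256:aaa", "redis:7": "sha256:ccc", "app:v2": "sha256:ddd"}
print(images_to_rescan(stored, running))  # -> ['app:v2', 'redis:7']
```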
3. Can the Snyk Controller pull images from a localhost cache (for example, nginx)?
If the "image" property of the container in a workload points to 127.0.0.1, and there is something that acts as a container registry on that address, then the Snyk Controller can pull it.
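For example, a container spec whose image reference points at a registry served on the loopback address might look like this; the port, image name, and tag are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-local
spec:
  containers:
    - name: nginx
      # Pulled from whatever is serving the registry API on 127.0.0.1:5000
      image: 127.0.0.1:5000/nginx:1.25
```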
4. How does the Snyk Controller deal with network errors?
The Snyk Controller retries the connection with the Kubernetes API multiple times every few seconds. Once connectivity is re-established, the scanning and detection of workloads continue to operate as usual.
5. Do I need to install the Snyk Controller in each of my Kubernetes clusters?
Yes. The Snyk Controller communicates with the API server on the cluster in which it is deployed. In order to retrieve all of the workloads and images in your environment, the Controller needs to be installed in each of these clusters. Currently, it is not possible to install the Controller in one cluster and get access to workloads and images in different clusters.
6. What is the resource consumption of the Snyk Controller?
This depends on the size of the environment: the number of workloads, and the number and size of the images in the cluster. Some parameters of the Snyk Controller can be configured, such as allowing more CPU to speed up scans, increasing RAM to handle larger images, or increasing the number of concurrent workers that scan images.
Snyk tries to keep the footprint of the Snyk Controller minimal. The default resource values can be found in the documentation.
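As an illustration, the Controller's CPU and memory can be tuned with a standard Kubernetes resources block like the one below. The values shown are placeholders, not recommendations, and the exact Helm values keys for the Controller should be taken from the documentation.

```yaml
resources:
  requests:
    cpu: 250m
    memory: 400Mi
  limits:
    cpu: "1"        # more CPU can speed up scans
    memory: 2Gi     # more RAM helps with larger images
```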
7. In case of failure in the Snyk Controller, would there be any impact on other services in the cluster?
No. The Snyk Controller runs in an isolated Pod in its own namespace and has only read-only access to resources in the cluster. The impact of a failure is constrained to the Snyk Controller Pod.