The Kubernetes project's work is carried out primarily by SIGs. What does SIG stand for?
Options:
Special Interest Group
Software Installation Guide
Support and Information Group
Strategy Implementation Group
In Kubernetes governance and project structure, SIG stands for Special Interest Group, so A is correct. Kubernetes is a large open source project under the Cloud Native Computing Foundation (CNCF), and its work is organized into groups that focus on specific domains—such as networking, storage, node, scheduling, security, docs, testing, and many more. SIGs provide a scalable way to coordinate contributors, prioritize work, review design proposals (KEPs), triage issues, and manage releases in their area.
Each SIG typically has regular meetings, mailing lists, chat channels, and maintainers who guide the direction of that part of the project. For example, SIG Network focuses on Kubernetes networking architecture and components, SIG Storage on storage APIs and CSI integration, and SIG Scheduling on scheduler behavior and extensibility. This structure helps Kubernetes evolve while maintaining quality, review rigor, and community-driven decision making.
The other options are not part of Kubernetes project terminology. “Software Installation Guide” and the others might sound plausible, but they are not how Kubernetes defines SIGs.
Understanding SIGs matters operationally because many Kubernetes features and design changes originate from SIGs. When you read Kubernetes enhancement proposals, release notes, or documentation, you’ll often see SIG ownership and references. In short, SIGs are the primary organizational units for Kubernetes engineering and stewardship, and SIG = Special Interest Group.
What function does kube-proxy provide to a cluster?
Options:
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
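As a minimal sketch, the kind of Service definition that kube-proxy translates into forwarding rules looks like the following (the name web, the app: web label, and the port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web                # illustrative Service name
spec:
  type: ClusterIP
  selector:
    app: web               # Pods carrying this label become the endpoints
  ports:
    - port: 80             # Service (virtual IP) port that clients call
      targetPort: 8080     # container port on the backend Pods

kube-proxy (or an eBPF-based replacement) is what makes traffic sent to this Service’s ClusterIP on port 80 actually reach one of the Pods labeled app: web on port 8080.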
Option A is incorrect because Ingress is a separate API resource and requires an Ingress controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that make Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
Which component in Kubernetes is responsible for watching newly created Pods with no assigned node and selecting a node for them to run on?
Options:
etcd
kube-controller-manager
kube-proxy
kube-scheduler
The correct answer is D: kube-scheduler. The kube-scheduler is the control plane component responsible for assigning Pods to nodes. It watches for newly created Pods that do not have a spec.nodeName set (i.e., unscheduled Pods). For each such Pod, it evaluates the available nodes against scheduling constraints and chooses the best node, then performs a “bind” operation by setting the Pod’s spec.nodeName.
Scheduling decisions consider many factors: resource requests vs node allocatable capacity, taints/tolerations, node selectors and affinity/anti-affinity, topology spread constraints, and other policy inputs. The scheduler typically runs a two-phase process: filtering (find feasible nodes) and scoring (rank feasible nodes) before selecting one.
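A sketch of the scheduling inputs on a Pod might look like this (the names, image, label, and resource values are illustrative); the scheduler filters out nodes that cannot satisfy the requests or the nodeSelector, then scores the remaining candidates:

apiVersion: v1
kind: Pod
metadata:
  name: api                                # illustrative Pod name
spec:
  nodeSelector:
    disktype: ssd                          # only nodes with this label are feasible
  containers:
    - name: api
      image: registry.example.com/api:1.0  # illustrative image
      resources:
        requests:
          cpu: "500m"                      # counted against node allocatable capacity
          memory: "256Mi"

Once a node passes filtering and wins scoring, the scheduler binds the Pod by setting spec.nodeName; the kubelet on that node then starts the containers.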
Option A (etcd) is the datastore that persists cluster state; it does not make scheduling decisions. Option B (kube-controller-manager) runs controllers (Deployment, Node, Job controllers, etc.) but not scheduling. Option C (kube-proxy) is a node component for Service networking; it doesn’t place Pods.
Understanding this separation is key for troubleshooting. If Pods are stuck Pending with “no nodes available,” the scheduler’s feasibility checks are failing (insufficient CPU/memory, taints not tolerated, affinity mismatch). If Pods schedule but land unexpectedly, it’s often due to scoring preferences or missing constraints. In all cases, the component that performs the node selection is the kube-scheduler.
Therefore, the verified correct answer is D.
=========
Imagine there is a requirement to run a database backup every day. Which Kubernetes resource could be used to achieve that?
Options:
kube-scheduler
CronJob
Task
Job
To run a workload on a repeating schedule (like “every day”), Kubernetes provides CronJob, making B correct. A CronJob creates Jobs according to a cron-formatted schedule, and then each Job creates one or more Pods that run to completion. This is the Kubernetes-native replacement for traditional cron scheduling, but implemented as a declarative resource managed by controllers in the cluster.
For a daily database backup, you’d define a CronJob with a schedule (e.g., "0 2 * * *" for 2:00 AM daily), and specify the Pod template that performs the backup (invokes backup scripts/tools, writes output to durable storage, uploads to object storage, etc.). Kubernetes will then create a Job at each scheduled time. CronJobs also support operational controls like concurrencyPolicy (Allow/Forbid/Replace) to decide what happens if a previous backup is still running, startingDeadlineSeconds to handle missed schedules, and history limits to retain recent successful/failed Job records for debugging.
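A minimal sketch of such a CronJob (the name, image, and backup command are assumptions for illustration):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup                                            # illustrative name
spec:
  schedule: "0 2 * * *"                                      # every day at 02:00
  concurrencyPolicy: Forbid                                  # skip a run if the previous backup is still going
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: registry.example.com/db-backup:1.0      # illustrative backup image
              command: ["/bin/sh", "-c", "backup.sh"]        # illustrative backup command

At each scheduled time, the CronJob controller creates a Job from jobTemplate, and that Job runs the backup Pod to completion.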
Option D (Job) is close but not sufficient for “every day.” A Job runs a workload until completion once; you would need an external scheduler to create a Job every day. Option A (kube-scheduler) is a control plane component responsible for placing Pods onto nodes and does not schedule recurring tasks. Option C (“Task”) is not a standard Kubernetes workload resource.
This question is fundamentally about mapping a recurring operational requirement (backup cadence) to Kubernetes primitives. The correct design is: CronJob triggers Job creation on a schedule; Job runs Pods to completion. Therefore, the correct answer is B.
=========
What is the main purpose of the Ingress in Kubernetes?
Options:
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions “based on their path,” that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
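As a sketch, a path-based Ingress like the one described above could look as follows (the hostname, Service names, and ingressClassName are illustrative, and an Ingress controller must be installed for it to take effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                     # illustrative name
spec:
  ingressClassName: nginx           # which controller should implement these rules
  rules:
    - host: example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1          # illustrative Service name
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2
                port:
                  number: 80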
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D.
=========
What is the main role of the Kubernetes DNS within a cluster?
Options:
Acts as a DNS server for virtual machines that are running outside the cluster.
Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
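For illustration, a headless Service might look like the sketch below (the names db, prod, and the port are illustrative); a regular Service would simply omit clusterIP: None and resolve to a single ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: db                    # resolvable as db.prod.svc.cluster.local
  namespace: prod             # illustrative namespace
spec:
  clusterIP: None             # headless: DNS returns the Pod IPs directly
  selector:
    app: db
  ports:
    - port: 5432

With a StatefulSet that uses this Service as its serviceName, individual Pods also get stable per-Pod names such as db-0.db.prod.svc.cluster.local.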
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs. Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
=========
What is the difference between a Deployment and a ReplicaSet?
Options:
With a Deployment, you can’t control the number of pod replicas.
A ReplicaSet does not guarantee a stable set of replica pods running.
A Deployment is basically the same as a ReplicaSet with annotations.
A Deployment is a higher-level concept that manages ReplicaSets.
A Deployment is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, so D is correct. A ReplicaSet’s primary job is to ensure that a specified number of Pod replicas are running at any time, based on a label selector and Pod template. It’s a fundamental “keep N Pods alive” controller.
Deployments build on that by managing the lifecycle of ReplicaSets over time. When you update a Deployment (for example, changing the container image tag or environment variables), Kubernetes creates a new ReplicaSet for the new Pod template and gradually shifts replicas from the old ReplicaSet to the new one according to the rollout strategy (RollingUpdate by default). Deployments also retain revision history, making it possible to roll back to a previous ReplicaSet if a rollout fails.
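A minimal sketch of a Deployment (the name, label, image, and replica count are illustrative); changing the Pod template, for example the image tag, triggers a new ReplicaSet and a rolling update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                                   # illustrative name
spec:
  replicas: 3                                 # desired Pod count, maintained via a ReplicaSet
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate                       # default rollout strategy
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0 # bump this tag to roll out a new ReplicaSet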
Why the other options are incorrect:
A is false: Deployments absolutely control the number of replicas via spec.replicas, and that count can also be adjusted automatically by a HorizontalPodAutoscaler (HPA).
B is false: ReplicaSets do guarantee that a stable number of replicas is running (that is their core purpose).
C is false: a Deployment is not “a ReplicaSet with annotations.” It is a distinct API resource with additional controller logic for declarative updates, rollouts, and revision tracking.
Operationally, most teams create Deployments rather than ReplicaSets directly because Deployments are safer and more feature-complete for application delivery. ReplicaSets still appear in real clusters because Deployments create them automatically; you’ll commonly see multiple ReplicaSets during rollout transitions. Understanding the hierarchy is crucial for troubleshooting: if Pods aren’t behaving as expected, you often trace from Deployment → ReplicaSet → Pod, checking selectors, events, and rollout status.
So the key difference is: ReplicaSet maintains replica count; Deployment manages ReplicaSets and orchestrates updates. Therefore, D is the verified answer.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Options:
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
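As one hedged example, an Argo CD Application pointing the cluster at a Git path might look roughly like this (the repository URL, path, and namespaces are assumptions for illustration); Flux expresses the same idea with GitRepository and Kustomization objects:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                                             # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-config.git  # illustrative repo
    targetRevision: main
    path: apps/my-app                                      # illustrative path to the manifests
  destination:
    server: https://kubernetes.default.svc                 # deploy into the same cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true       # remove resources that were deleted from Git
      selfHeal: true    # undo manual drift in the cluster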
Option B (“GitOps Toolkit”) is related, since Flux v2 is built from the GitOps Toolkit’s component controllers, but the question asks for a “tool” that keeps clusters in sync; in this list the recognized tools are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
Which of the following are tasks performed by a container orchestration tool?
Options:
Schedule, scale, and manage the health of containers.
Create images, scale, and manage the health of containers.
Debug applications, and manage the health of containers.
Store images, scale, and manage the health of containers.
A container orchestration tool (like Kubernetes) is responsible for scheduling, scaling, and health management of workloads, making A correct. Orchestration sits above individual containers and focuses on running applications reliably across a fleet of machines. Scheduling means deciding which node should run a workload based on resource requests, constraints, affinities, taints/tolerations, and current cluster state. Scaling means changing the number of running instances (replicas) to meet demand (manually or automatically through autoscalers). Health management includes monitoring whether containers and Pods are alive and ready, replacing failed instances, and maintaining the declared desired state.
Options B and D include “create images” and “store images,” which are not orchestration responsibilities. Image creation is a CI/build responsibility (Docker/BuildKit/build systems), and image storage is a container registry responsibility (Harbor, ECR, GCR, Docker Hub, etc.). Kubernetes consumes images from registries but does not build or store them. Option C includes “debug applications,” which is not a core orchestration function. While Kubernetes provides tools that help debugging (logs, exec, events), debugging is a human/operator activity rather than the orchestrator’s fundamental responsibility.
In Kubernetes specifically, these orchestration tasks are implemented through controllers and control loops: Deployments/ReplicaSets manage replica counts and rollouts, kube-scheduler assigns Pods to nodes, kubelet ensures containers run, and probes plus controller logic replace unhealthy replicas. This is exactly what makes Kubernetes valuable at scale: instead of manually starting/stopping containers on individual hosts, you declare your intent and let the orchestration system continually reconcile reality to match. That combination—placement + elasticity + self-healing—is the core of container orchestration, matching option A precisely.
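To make the three tasks concrete, the sketch below marks which Deployment fields feed each one (the names, image, and probe values are illustrative): resource requests inform scheduling, replicas covers scaling, and the liveness probe drives self-healing.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                                      # illustrative name
spec:
  replicas: 2                                       # scaling: desired instance count
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0    # illustrative image
          resources:
            requests:
              cpu: "250m"                            # scheduling: placement input for kube-scheduler
              memory: "128Mi"
          livenessProbe:                             # health management: restart the container if this fails
            httpGet:
              path: /healthz                         # illustrative health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10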
=========
Which resource do you use to attach a volume in a Pod?
Options:
StorageVolume
PersistentVolume
StorageClass
PersistentVolumeClaim
In Kubernetes, Pods typically attach persistent storage by referencing a PersistentVolumeClaim (PVC), making D correct. A PVC is a user’s request for storage with specific requirements (size, access mode, storage class). Kubernetes then binds the PVC to a matching PersistentVolume (PV) (either pre-provisioned statically or created dynamically via a StorageClass and CSI provisioner). The Pod does not directly attach a PV; it references the PVC, and Kubernetes handles the binding and mounting.
This design separates responsibilities: administrators (or CSI drivers) manage PV provisioning and backend storage details, while developers consume storage via PVCs. In a Pod spec, you define a volume of type persistentVolumeClaim and set its claimName to the name of the claim.
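A minimal sketch of that pattern (the claim name data-pvc, the Pod/container names, the StorageClass, and the mount path are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                            # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                         # requested capacity
  storageClassName: standard                # illustrative StorageClass
---
apiVersion: v1
kind: Pod
metadata:
  name: app                                 # illustrative Pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      volumeMounts:
        - name: data
          mountPath: /var/lib/data          # illustrative mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc                 # the Pod references the PVC, not the PV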
Option B (PersistentVolume) is not directly referenced by Pods; PVs are cluster resources that represent actual storage. Pods don’t “pick” PVs; claims do. Option C (StorageClass) defines provisioning parameters (e.g., disk type, replication, binding mode) but is not what a Pod references to mount a volume. Option A is not a Kubernetes resource type.
Operationally, using PVCs enables dynamic provisioning and portability: the same Pod spec can be deployed across clusters where the StorageClass name maps to appropriate backend storage. It also supports lifecycle controls like reclaim policies (Delete/Retain) and snapshot/restore workflows depending on CSI capabilities.
So the Kubernetes resource you use in a Pod to attach a persistent volume is PersistentVolumeClaim, option D.
=========