Pass the IIBA Cybersecurity Analysis IIBA-CCA Questions and answers with Dumpstech
Which scenario is an example of the principle of least privilege being followed?
Options:
An application administrator has full permissions to only the applications they support
All application and database administrators have full permissions to every application in the company
Certain users are granted administrative access to their network account, in case they need to install a web-app
A manager who is conducting performance appraisals is granted access to HR files for all employees
The principle of least privilege requires that users, administrators, services, and applications are granted only the minimum access necessary to perform authorized job functions, and nothing more. Option A follows this principle because the administrator’s elevated permissions are limited in scope to the specific applications they are responsible for supporting. This reduces the attack surface and limits blast radius: if that administrator account is compromised, the attacker’s reach is constrained to only those applications rather than the entire enterprise environment.
Least privilege is typically implemented through role-based access control, separation of duties, and privileged access management practices. These controls ensure privileges are assigned based on defined roles, reviewed regularly, and removed when no longer required. They also promote using standard user accounts for routine tasks and reserving administrative actions for controlled, auditable sessions. In addition, least privilege supports stronger accountability through logging and change tracking, because fewer people have the ability to make high-impact changes across systems.
The other scenarios violate least privilege. Option B grants excessive enterprise-wide permissions, creating unnecessary risk and enabling widespread damage from mistakes or compromise. Option C provides “just in case” administrative access, which cybersecurity guidance explicitly discourages because it increases exposure without a validated business need. Option D is overly broad because access to all HR files exceeds what is required for performance appraisals, which typically should be limited to relevant employee records only.
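The scope-limited model in option A can be sketched in code. The following is a minimal illustration (the role names, application names, and data structure are all hypothetical, not from any specific product): an administrative role grants rights only over the applications it explicitly names, so a compromised account reaches nothing else.

```python
# Hypothetical scoped-role model: each admin role lists only the
# applications it may administer, mirroring least privilege.
SCOPED_ROLES = {
    "payroll_app_admin": {"payroll"},
    "crm_app_admin": {"crm", "crm-reports"},
}

def can_administer(role: str, application: str) -> bool:
    """Grant access only if the role's scope explicitly includes the app."""
    return application in SCOPED_ROLES.get(role, set())

# The payroll admin can manage payroll but nothing enterprise-wide:
assert can_administer("payroll_app_admin", "payroll")
assert not can_administer("payroll_app_admin", "crm")
# An unknown role defaults to no access (deny by default).
assert not can_administer("unknown_role", "payroll")
```

The deny-by-default lookup is the important design choice: absence from the scope list means no access, rather than access "just in case."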
When attackers exploit human emotions and connection to gain access, what technique are they using?
Options:
Social Engineering
Phishing
Tailgating
Malware
Social engineering is the broad technique attackers use when they manipulate human psychology (trust, fear, urgency, curiosity, sympathy, authority, or the desire to be helpful) to persuade someone to take an action that benefits the attacker. The key idea in the question is “exploit human emotions and connection,” which is the defining characteristic of social engineering. Rather than breaking a system through purely technical means, the attacker targets the person as the easiest path to access, credentials, sensitive information, or physical entry.
Phishing is a specific subtype of social engineering that typically uses email, text messages, or fake websites to trick users into clicking links, opening attachments, or entering credentials. Tailgating is another subtype focused on physical access, where an attacker follows an authorized person into a restricted area by leveraging politeness or social pressure. Malware is malicious software used to compromise systems; it can be delivered through social engineering, but malware itself is not the human-manipulation technique.
Cybersecurity control guidance treats social engineering as a major risk because it can bypass technical protections by causing legitimate users to unintentionally grant access. Common defenses include awareness training, verification procedures (call-back and out-of-band confirmation), least privilege, multi-factor authentication, strong email and web filtering, and clear reporting channels so suspicious requests can be escalated quickly.
How does Transport Layer Security ensure the reliability of a connection?
Options:
By ensuring a stateful connection between client and server
By conducting a message integrity check to prevent loss or alteration of the message
By ensuring communications use TCP/IP
By using public and private keys to verify the identities of the parties to the data transfer
Transport Layer Security (TLS) strengthens the trustworthiness of application communications by ensuring that data exchanged over an untrusted network is not silently modified and is coming from the expected endpoint. While TCP provides delivery features such as sequencing and retransmission, TLS contributes to what many cybersecurity documents describe as “reliable” secure communication by adding cryptographic integrity protections. TLS uses integrity checks (such as message authentication codes in older versions and cipher suites, or authenticated encryption modes like AES-GCM and ChaCha20-Poly1305 in modern TLS) so that any alteration of data in transit is detected. If an attacker intercepts traffic and tries to change commands, session data, or application content, the integrity verification fails and the connection is typically terminated, preventing corrupted or manipulated messages from being accepted as valid.
This is distinct from merely being “stateful” (a transport-layer property) or “using TCP/IP” (a networking stack choice). TLS can run over TCP and relies on TCP for delivery reliability, but TLS itself is focused on confidentiality, integrity, and endpoint authentication. Public/private keys and certificates are used during the TLS handshake to authenticate servers (and optionally clients) and to establish shared session keys, but the ongoing protection that prevents undetected tampering is the integrity check on each protected record. Therefore, the best match to how TLS ensures secure, dependable communication is the message integrity mechanism described in option B.
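The per-record tamper detection can be illustrated with a standalone HMAC check. This is only a sketch of the integrity concept, not TLS itself: real TLS derives keys during the handshake and typically uses AEAD cipher suites, whereas this example uses Python's stdlib `hmac` with a made-up key.

```python
import hashlib
import hmac

# Stand-in for a session key negotiated during the TLS handshake.
SESSION_KEY = b"example-key-from-handshake"

def protect(record: bytes) -> bytes:
    """Compute an integrity tag over the record, as a MAC-style check."""
    return hmac.new(SESSION_KEY, record, hashlib.sha256).digest()

def verify(record: bytes, tag: bytes) -> bool:
    """Recompute the tag; constant-time compare defeats timing attacks."""
    expected = hmac.new(SESSION_KEY, record, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

record = b"GET /account HTTP/1.1"
tag = protect(record)
assert verify(record, tag)                      # untampered: accepted
assert not verify(b"GET /admin HTTP/1.1", tag)  # altered in transit: rejected
```

Any single-bit change to the record produces a different tag, so the receiver detects the alteration and can drop the connection rather than accept the manipulated message.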
Recovery Point Objectives and Recovery Time Objectives are based on what system attribute?
Options:
Sensitivity
Vulnerability
Cost
Criticality
Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are continuity and resilience targets that define how quickly a system must be restored and how much data loss is acceptable after an interruption. These objectives are derived primarily from system criticality, meaning how essential the system is to business operations, safety, revenue, legal obligations, and customer commitments. Highly critical systems support mission-essential functions or time-sensitive services, so they require shorter RTOs (restore fast) and smaller RPOs (lose little or no data). Less critical systems can tolerate longer outages and larger data gaps, allowing longer RTOs and RPOs.
Cybersecurity and business continuity documents tie RTO/RPO determination to business impact analysis results. The BIA identifies maximum tolerable downtime, operational dependencies, and the consequences of service disruption and data unavailability. From there, organizations set RTO/RPO targets that align with risk appetite and required service levels. Those targets then drive technical and operational controls such as backup frequency, replication methods, high availability architecture, failover design, disaster recovery procedures, monitoring, and routine recovery testing.
Sensitivity focuses on confidentiality needs and may influence encryption and access controls, but it does not directly define acceptable downtime or data loss. Vulnerability describes weakness exposure and is used for threat/risk management, not recovery objectives. Cost is a constraint when selecting recovery solutions, but RTO/RPO are defined by business need and system importance first—then solutions are chosen to meet those targets within budget.
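The criticality-to-target relationship can be shown as a simple lookup. The tier names and the specific hour and minute values below are purely illustrative assumptions; real targets come from a business impact analysis and vary by organization and risk appetite.

```python
# Hypothetical BIA-derived tiers: higher criticality means a shorter
# RTO (restore faster) and a smaller RPO (tolerate less data loss).
RECOVERY_TARGETS = {
    "mission_critical": {"rto_hours": 1,  "rpo_minutes": 5},
    "business_critical": {"rto_hours": 8,  "rpo_minutes": 60},
    "non_critical":      {"rto_hours": 72, "rpo_minutes": 1440},
}

def targets_for(criticality: str) -> dict:
    """Look up the recovery targets assigned to a criticality tier."""
    return RECOVERY_TARGETS[criticality]

# More critical systems get tighter objectives:
assert targets_for("mission_critical")["rto_hours"] < targets_for("non_critical")["rto_hours"]
assert targets_for("mission_critical")["rpo_minutes"] < targets_for("business_critical")["rpo_minutes"]
```

The targets then drive solution choices (backup frequency, replication, failover design) rather than the other way around, which is why criticality, not cost, is the defining attribute.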
How is a risk score calculated?
Options:
Based on the confidentiality, integrity, and availability characteristics of the system
Based on the combination of probability and impact
Based on past experience regarding the risk
Based on an assessment of threats by the cyber security team
A risk score is commonly calculated by combining two core factors: how likely a risk scenario is to occur and how severe the consequences would be if it did occur. This is often described in cybersecurity risk documentation as likelihood times impact, or as a structured mapping using a risk matrix. Probability, or likelihood, reflects the chance that a threat event will exploit a vulnerability under current conditions. It may consider elements such as threat activity, exposure, ease of exploitation, control strength, and historical incident patterns. Impact reflects the magnitude of harm to the organization, usually measured across business disruption, financial loss, legal or regulatory exposure, reputational damage, and harm to confidentiality, integrity, or availability.
While confidentiality, integrity, and availability are essential for understanding what matters and can influence impact ratings, they are typically inputs into impact determination rather than the full scoring method by themselves. Past experience and expert threat assessment can inform likelihood estimates, but they are not the standard calculation model on their own. The key concept is that risk must reflect both chance and consequence; a highly impactful event with very low likelihood may be scored similarly to a moderate impact event with high likelihood depending on the organization’s methodology.
Therefore, the most accurate description of how a risk score is calculated is the combination of probability and impact, enabling prioritization and consistent risk treatment decisions.
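The likelihood-times-impact model can be captured in a few lines. The 1-to-5 rating scale below is one common convention, not a mandated standard; organizations define their own scales and matrices.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Qualitative risk score = likelihood x impact, each rated 1-5
    (an illustrative scale; real methodologies vary)."""
    for rating in (likelihood, impact):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return likelihood * impact

# A rare but severe event can score the same as a frequent moderate one,
# which is exactly the trade-off the explanation above describes:
assert risk_score(1, 5) == risk_score(5, 1) == 5
assert risk_score(4, 4) == 16   # high likelihood AND high impact dominates
```

Because both factors multiply, a score is only high when chance and consequence are both material, which is what makes the model useful for prioritization.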
Public & Private key pairs are an example of what technology?
Options:
Virtual Private Network
IoT
Encryption
Network Segregation
Public and private key pairs are the foundation of asymmetric encryption, also called public key cryptography. In this model, each entity has two mathematically related keys: a public key that can be shared widely and a private key that must be kept secret. The keys are designed so that what one key does, only the other key can undo. This enables two core security functions used throughout cybersecurity architectures.
First, confidentiality: data encrypted with a recipient’s public key can only be decrypted with the recipient’s private key. This allows secure communication without having to share a secret key in advance, which is especially important on untrusted networks like the internet. Second, digital signatures: a sender can sign data with their private key, and anyone can verify the signature using the sender’s public key. This provides authenticity (proof the sender possessed the private key), integrity (the data was not altered), and supports non-repudiation when combined with proper key custody and audit practices.
These mechanisms underpin widely used security controls such as TLS for secure web connections, secure email standards, code signing, and certificate-based authentication. A VPN may use public key cryptography during key exchange, but the key pair itself is specifically an encryption technology. IoT and network segregation are unrelated categories.
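Both directions of the key-pair relationship can be demonstrated with the classic textbook RSA numbers. This is a toy for intuition only: the primes are absurdly small and raw RSA without padding is insecure, so nothing here resembles production cryptography.

```python
# Textbook RSA toy: p = 61, q = 53 (never use keys this small).
p, q = 61, 53
n = p * q          # public modulus, 3233
e = 17             # public exponent
d = 2753           # private exponent: (e * d) % 3120 == 1, where 3120 = (p-1)*(q-1)

# Confidentiality: encrypt with the PUBLIC key, decrypt with the PRIVATE key.
message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message   # only the private key recovers it

# Signatures: "sign" with the PRIVATE key, verify with the PUBLIC key.
signature = pow(message, d, n)
assert pow(signature, e, n) == message    # anyone with the public key can check
```

The asymmetry is the point: what one key of the pair does, only its partner can undo, which is why publishing the public key does not compromise the private one.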
NIST 800-30 defines cyber risk as a function of the likelihood of a given threat-source exercising a potential vulnerability, and:
Options:
the pre-disposing conditions of the vulnerability.
the probability of detecting damage to the infrastructure.
the effectiveness of the control assurance framework.
the resulting impact of that adverse event on the organization.
NIST SP 800-30 describes risk using a classic risk model: risk is a function of likelihood and impact. In this model, a threat-source may exploit a vulnerability, producing a threat event that results in adverse consequences. The likelihood component reflects how probable it is that a threat event will occur and successfully cause harm, considering factors such as threat capability and intent (or in non-adversarial cases, the frequency of hazards), the existence and severity of vulnerabilities, exposure, and the strength of current safeguards. However, likelihood alone does not define risk; a highly likely event that causes minimal harm may be less important than a less likely event that causes severe harm.
The second required component is the impact: the magnitude of harm to the organization if the adverse event occurs. Impact is commonly evaluated across mission and business outcomes, including financial loss, operational disruption, legal or regulatory consequences, reputational damage, and loss of confidentiality, integrity, or availability. This is why option D is correct: NIST’s definition explicitly ties the risk expression to the resulting impact on the organization.
The other options may influence likelihood assessment or control selection, but they are not the missing definitional element. Detection probability and control assurance relate to monitoring and governance; predisposing conditions can shape likelihood. None replaces the resulting impact on the organization, which is the second required component of NIST’s risk definition.
What things must be identified to define an attack vector?
Options:
The platform, application, and data
The attacker and the vulnerability
The system, transport protocol, and target
The source, processor, and content
An attack vector is the route or method used to compromise an environment, and it is typically described as the way a threat actor exploits a vulnerability to gain unauthorized access, execute code, steal data, or disrupt services. To define an attack vector correctly, cybersecurity documents emphasize that you must identify both parts of that relationship: who or what is attacking and what weakness is being exploited. The “attacker” component represents the threat source or threat actor, including their capability and intent (for example, cybercriminals using phishing, insiders abusing access, or automated botnets scanning the internet). The “vulnerability” component is the specific weakness or exposure that enables success, such as a missing patch, weak authentication, misconfiguration, excessive permissions, an insecure coding flaw, or lack of user awareness.
Without identifying the attacker, you cannot properly characterize the likely techniques, scale, and motivation driving the vector. Without identifying the vulnerability, you cannot define the practical entry point and control gaps that make the vector feasible. Together, attacker plus vulnerability allows defenders to map realistic scenarios, prioritize controls, and select mitigations that reduce likelihood and impact. Those mitigations may include patching, configuration hardening, strong authentication, least privilege, network segmentation, user training, and monitoring. The other options list technology elements that can be involved in an incident, but they do not capture the essential definition of an attack vector as an exploitation path driven by a threat actor leveraging a weakness.
Which of the following challenges to embedded system security can be addressed through ongoing, remote maintenance?
Options:
Processors being overwhelmed by the demands of security processing
Deploying updated firmware as vulnerabilities are discovered and addressed
Resource constraints due to limitations on battery, memory, and other physical components
Physical security attacks that take advantage of vulnerabilities in the hardware
Ongoing, remote maintenance is one of the most effective ways to improve the security posture of embedded systems over time because it enables timely remediation of newly discovered weaknesses. Embedded devices frequently run firmware that includes operating logic, network stacks, and third-party libraries. As vulnerabilities are discovered in these components, organizations must be able to deploy fixes quickly to reduce exposure. Remote maintenance supports this by enabling over-the-air firmware and software updates, configuration changes, certificate and key rotation, and the rollout of compensating controls such as updated security policies or hardened settings.
Option B is correct because remote maintenance directly addresses the challenge of deploying updated firmware as issues are identified. Cybersecurity guidance for embedded and IoT environments emphasizes secure update mechanisms: authenticated update packages, integrity verification (such as digital signatures), secure distribution channels, rollback protection, staged deployment, and audit logging of update actions. These practices reduce the risk of attackers installing malicious firmware and help ensure devices remain supported throughout their operational life.
The other options are not primarily solved by remote maintenance. Limited CPU and memory are inherent design constraints that may require hardware redesign. Battery and component limitations are also physical constraints. Physical security attacks exploit device access and hardware weaknesses, which require tamper resistance, secure boot, and physical protections rather than remote maintenance alone.
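The authenticated-update idea can be sketched as a gate the device applies before flashing. This hypothetical scheme uses a shared-secret HMAC purely for illustration; real secure-update mechanisms typically verify an asymmetric vendor signature anchored in secure boot, and the key, image names, and functions below are invented for the example.

```python
import hashlib
import hmac

# Hypothetical device secret provisioned at manufacture (illustration only;
# production updates are usually verified with the vendor's public signing key).
DEVICE_KEY = b"provisioned-at-manufacture"

def sign_image(image: bytes) -> bytes:
    """Tag a firmware image so the device can authenticate it."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, tag: bytes) -> bool:
    """Refuse any image whose tag does not verify; otherwise accept it."""
    if not hmac.compare_digest(sign_image(image), tag):
        return False  # tampered or unauthorized firmware is rejected
    # ...here the device would flash the image, enforce rollback
    # protection, and log the update action.
    return True

good_image = b"firmware-v2.1"
assert apply_update(good_image, sign_image(good_image))           # authentic update
assert not apply_update(b"malicious-image", sign_image(good_image))  # forged image rejected
```

The verify-before-flash gate is what lets remote maintenance stay safe: updates can arrive over the air, but only images that authenticate are ever installed.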
What is an embedded system?
Options:
A system that is located in a secure underground facility
A system placed in a location and designed so it cannot be easily removed
It provides computing services in a small form factor with limited processing power
It safeguards the cryptographic infrastructure by storing keys inside a tamper-resistant external device
An embedded system is a specialized computing system designed to perform a dedicated function as part of a larger device or physical system. Unlike general-purpose computers, embedded systems are built to support a specific mission such as controlling sensors, actuators, communications, or device logic in products like routers, printers, medical devices, vehicles, industrial controllers, and smart appliances. Cybersecurity documentation commonly highlights that embedded systems tend to operate with constrained resources, which may include limited CPU power, memory, storage, and user interface capabilities. These constraints affect both design and security: patching may be harder, logging may be minimal, and security features must be carefully engineered to fit the platform’s limitations.
Option C best matches this characterization by describing a small form factor and limited processing power, which are typical attributes of many embedded devices. While not every embedded system is “small,” the key idea is that it is purpose-built, resource-constrained, and tightly integrated into a larger product.
The other options describe different concepts. A secure underground facility relates to physical site security, not embedded computing. Being hard to remove is about physical installation or tamper resistance, which can apply to many systems but is not what defines “embedded.” Storing cryptographic keys in a tamper-resistant external device describes a hardware security module or secure element use case, not the general definition of an embedded system.