Reference: https://www.itexams.com/exam/CISA

CHAPTER 4 IT Life Cycle Management

  1. What testing activities should developers perform during the development phase?

    • A. Security testing
    • B. Integration testing
    • C. Unit testing
    • D. Developers should not perform any testing
    • Answer: C. During the development phase, developers should perform only unit testing to verify that the individual sections of code they have written are performing properly.
    • C. Unit testing
    • During the development phase, unit testing is the primary testing activity performed by developers. This involves testing individual components or modules of the code to ensure they function as intended in isolation. Unit testing helps identify bugs early in the development process, reducing the cost and effort required to fix issues later.
    • Other Options:
      • A. Security testing: While important, this is typically conducted by specialized teams during the testing or deployment phases, not primarily by developers during development.
      • B. Integration testing: This is usually performed after unit testing to verify the interaction between modules or components.
      • D. Developers should not perform any testing: This is incorrect, as developers are responsible for performing unit testing during the development phase.
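    • Example: a minimal sketch of the kind of unit test a developer would write during development, using a hypothetical discount_price() function as the unit under test.

```python
# Unit test sketch: verifies one unit of code in isolation.
# discount_price() is a hypothetical unit, not from the text.
import unittest

def discount_price(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 20), 80.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(discount_price(59.99, 0), 59.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```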
  2. The purpose of function point analysis (FPA) is to

    • A. Estimate the effort required to develop a software program.
    • B. Identify risks in a software program.
    • C. Estimate task dependencies in a project plan.
    • D. Inventory inputs and outputs in a software program.
    • Answer: A. Function point analysis (FPA) is used to estimate the effort required to develop a software program.
    • A. Estimate the effort required to develop a software program.
    • Explanation:
      • Function Point Analysis (FPA) is a standardized method used to measure the size and complexity of software by quantifying its functional components, such as inputs, outputs, user interactions, files, and interfaces. The primary purpose of FPA is to estimate the effort, cost, and time required for software development or maintenance based on these measurements.
    • Other Options:
      • B. Identify risks in a software program: FPA is not designed for risk assessment.
      • C. Estimate task dependencies in a project plan: This is typically done through techniques like critical path analysis or project management tools, not FPA.
      • D. Inventory inputs and outputs in a software program: While FPA involves identifying inputs and outputs, its purpose goes beyond inventorying—it focuses on estimating development effort.
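    • Example: a back-of-the-envelope effort estimate in the spirit of FPA, using the commonly cited IFPUG average complexity weights; the component counts and the hours-per-function-point rate are illustrative assumptions.

```python
# FPA sketch: size the software by counting weighted functional
# components, then convert size into an effort estimate.
AVG_WEIGHTS = {                    # IFPUG average complexity weights
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}
counts = {                         # hypothetical counts for one program
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}

unadjusted_fp = sum(AVG_WEIGHTS[k] * counts[k] for k in counts)
hours_per_fp = 8                   # assumed team productivity rate
print(f"Unadjusted function points: {unadjusted_fp}")          # 162
print(f"Estimated effort: {unadjusted_fp * hours_per_fp} person-hours")
```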
  3. A project manager needs to identify the tasks that are responsible for project delays. What approach should the project manager use?

    • A. Function point analysis
    • B. Gantt analysis
    • C. Project evaluation and review technique
    • D. Critical path methodology
    • Answer: D. Critical path methodology helps a project manager determine which activities are on a project’s “critical path.”
    • D. Critical path methodology
    • Explanation:
      • The Critical Path Methodology (CPM) is used to identify the sequence of tasks that determine the minimum project duration. If tasks on the critical path are delayed, the entire project is delayed. By analyzing the critical path, the project manager can identify which tasks are responsible for project delays and take corrective action.
    • Other Options:
      • A. Function point analysis: This is used for estimating the effort required for software development, not for managing project timelines or delays.
      • B. Gantt analysis: While Gantt charts help visualize task schedules, they do not specifically highlight the tasks responsible for project delays.
      • C. Project evaluation and review technique (PERT): PERT is used for estimating project timelines with uncertainty but does not directly focus on identifying tasks causing delays.
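    • Example: a minimal critical-path computation over a hypothetical five-task schedule; the tasks on the printed path are exactly the ones whose delay pushes out the project end date.

```python
# CPM sketch: the critical path is the longest-duration chain of
# dependent tasks, so any slip on it delays the whole project.
durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 1}        # days
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

earliest_finish = {}
def finish(task):
    # Earliest finish = task duration + latest finish among predecessors.
    if task not in earliest_finish:
        start = max((finish(p) for p in predecessors[task]), default=0)
        earliest_finish[task] = start + durations[task]
    return earliest_finish[task]

project_end = max(finish(t) for t in durations)
# Walk backward from the last-finishing task along the latest predecessors.
path, tail = [], max(durations, key=finish)
while True:
    path.append(tail)
    if not predecessors[tail]:
        break
    tail = max(predecessors[tail], key=finish)
print("Project duration:", project_end, "days")              # 13 days
print("Critical path:", " -> ".join(reversed(path)))         # A -> B -> D -> E
```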
  4. A software developer has informed the project manager that a portion of the application development is going to take five additional days to complete. The project manager should

    • A. Inform the other project participants of the schedule change.
    • B. Change the project schedule to reflect the new completion time.
    • C. Create a project change request.
    • D. Adjust the resource budget to account for the schedule change.
    • Answer: C. When any significant change needs to occur in a project plan, a project change request should be created to document the reason for the change.
    • C. Create a project change request.
    • Explanation:
      • When there is a significant change in the project, such as a delay, the project manager should follow the formal change management process. This involves creating a project change request to document the impact of the delay, assess its implications on the timeline, budget, and resources, and seek approval from stakeholders if necessary. This ensures transparency and proper tracking of changes.
    • Other Options:
      • A. Inform the other project participants of the schedule change: While important, this step should occur only after the change request is reviewed and approved.
      • B. Change the project schedule to reflect the new completion time: The project manager should not unilaterally change the schedule without following the formal change management process.
      • D. Adjust the resource budget to account for the schedule change: This step may be necessary but only after the change request is approved and its impact assessed.
  5. The phases and their order in the systems development life cycle are

    • A. Requirements definition, feasibility study, design, development, testing, implementation, post-implementation
    • B. Feasibility study, requirements definition, design, development, testing, implementation, post-implementation
    • C. Feasibility study, requirements definition, design, development, testing, implementation
    • D. Requirements definition, feasibility study, development, testing, implementation, post-implementation
    • Answer: B. The phases of the systems development life cycle are feasibility study, requirements definition, design, development, testing, implementation, and post-implementation.
    • B. Feasibility study, requirements definition, design, development, testing, implementation, post-implementation
    • Explanation:
      • The Systems Development Life Cycle (SDLC) is a structured approach to software and system development. The phases in order are as follows:
      • Feasibility Study: Determine if the project is viable and worth pursuing.
      • Requirements Definition: Identify what the system should do, capturing functional and non-functional requirements.
      • Design: Create detailed system and technical specifications.
      • Development: Write and build the actual software or system.
      • Testing: Validate that the system meets requirements and works as expected.
      • Implementation: Deploy the system into a live environment.
      • Post-Implementation: Monitor and maintain the system after deployment.
  6. What personnel should be involved in the requirements phase of a software development project?

    • A. Systems administrators, network administrators, and software developers
    • B. Developers, analysts, architects, and users
    • C. Security, privacy, and legal analysts
    • D. Representatives from each software vendor
    • Answer: B. Requirements need to be developed by several parties, including developers, analysts, architects, and users.
    • B. Developers, analysts, architects, and users
    • Explanation:
      • The requirements phase of a software development project is crucial for gathering and documenting what the system needs to accomplish. It requires input from multiple stakeholders to ensure all perspectives are considered:
      • Developers: Provide technical feasibility insights.
      • Analysts: Bridge the gap between business needs and technical specifications.
      • Architects: Define the overall structure and ensure alignment with system goals.
      • Users: Offer insights into the functional and usability requirements, as they are the end-users of the system.
    • This combination ensures a comprehensive understanding of what the system needs to achieve.
    • Other Options:
      • A. Systems administrators, network administrators, and software developers: These roles are more relevant during the implementation and deployment phases, not the requirements phase.
      • C. Security, privacy, and legal analysts: While important, these roles focus on compliance and are typically involved in specific stages, such as design or post-implementation.
      • D. Representatives from each software vendor: This applies to vendor selection or procurement processes, not gathering requirements for a specific project.
  7. The primary source for test plans in a software development project is

    • A. Requirements
    • B. Developers
    • C. End users
    • D. Vendors
    • Answer: A. The requirements that are developed for a project should be the primary source for detailed tests.
    • A. Requirements
    • Explanation:
      • Test plans are primarily derived from the requirements of the software development project. Requirements specify what the system is expected to do (functional requirements) and how it should perform (non-functional requirements). These specifications serve as the foundation for designing test cases to ensure the system meets its intended objectives.
      • By basing test plans on requirements, the project team ensures:
        • Comprehensive coverage of system functionalities.
        • Alignment of testing activities with business goals.
        • Detection of deviations from expected outcomes.
    • Other Options:
      • B. Developers: While developers provide technical insights, they are not the primary source for test plans.
      • C. End users: End users offer feedback on usability but are not the main source for formal test plans.
      • D. Vendors: Vendors may provide testing tools or guidelines but do not define the specific test plans for a project.
  8. The primary purpose of a change management process is to

    • A. Record changes made to systems and infrastructure.
    • B. Review and approve proposed changes to systems and infrastructure.
    • C. Review and approve changes to a project schedule.
    • D. Review and approve changes to application source code.
    • Answer: B. The main purpose of change management is to review and approve proposed changes to systems and infrastructure. This helps to reduce the risk of unintended events and unplanned downtime.
    • B. Review and approve proposed changes to systems and infrastructure.
    • Explanation:
      • The primary purpose of a change management process is to ensure that all changes to systems, infrastructure, or processes are properly reviewed, approved, and documented before implementation. This minimizes the risks associated with unauthorized or unplanned changes, such as system downtime, security vulnerabilities, or operational disruptions.
    • The process typically includes:
      • Submitting a change request.
      • Reviewing the request for potential impacts.
      • Approving or rejecting the request based on evaluation.
      • Implementing the approved change with appropriate tracking.
    • Other Options:
      • A. Record changes made to systems and infrastructure: While change management includes documentation, its primary goal is broader and focuses on reviewing and approving changes beforehand.
      • C. Review and approve changes to a project schedule: This is handled under project management, not general change management.
      • D. Review and approve changes to application source code: Code changes are managed through version control systems and code review processes, which are components of development, not overall change management.
  9. What is the purpose of a capability maturity model?

    • A. To assess the experience of software developers
    • B. To assess the experience of project managers
    • C. To assess the integrity of application software
    • D. To assess the maturity of business processes
    • Answer: D. A capability maturity model helps an organization to assess the maturity of its business processes, which is an important first step to any large-scale process improvement efforts.
    • D. To assess the maturity of business processes
    • Explanation:
      • The Capability Maturity Model (CMM) is a framework used to assess and improve the maturity of business processes, particularly in software development and project management. It evaluates processes based on their effectiveness, efficiency, and consistency. The model defines maturity levels that help organizations identify their current state and develop a roadmap for process improvement.
      • The Five Maturity Levels:
        • Initial (Level 1): Processes are ad hoc and chaotic.
        • Repeatable (Level 2): Processes are established but not standardized.
        • Defined (Level 3): Processes are standardized and documented.
        • Managed (Level 4): Processes are measured and controlled.
        • Optimizing (Level 5): Continuous process improvement is implemented.
    • Other Options:
      • A. To assess the experience of software developers: CMM focuses on processes, not individual skills.
      • B. To assess the experience of project managers: Similar to A, this is not the focus of CMM.
      • C. To assess the integrity of application software: Integrity is addressed through specific controls or frameworks, not CMM.
  10. The purpose of input validation checking is to

    • A. Ensure that input values are within acceptable ranges.
    • B. Ensure that input data contains the correct type of characters.
    • C. Ensure that input data is free of hostile or harmful content.
    • D. Ensure all of the above.
    • Answer: D. Input validation checking is used to ensure that input values are within established ranges, of the correct character types, and free of harmful content.
    • D. Ensure all of the above.
    • Explanation:
      • Input validation checking serves multiple purposes to ensure the integrity, security, and correctness of input data. These purposes include:
      • Ensuring input values are within acceptable ranges: This prevents errors and ensures data accuracy (e.g., a user age field should only accept values between 0 and 120).
      • Ensuring input data contains the correct type of characters: This prevents format-related errors (e.g., numeric fields should not contain letters).
      • Ensuring input data is free of hostile or harmful content: This protects against malicious attacks like SQL injection or cross-site scripting (XSS).
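    • Example: a small validation sketch implementing all three checks named above (acceptable range, correct character types, rejection of hostile content); the fields, limits, and patterns are illustrative only.

```python
# Input validation sketch: range check, character-type check, and
# allow-listing to keep hostile metacharacters out of user input.
import re

def validate_age(raw: str) -> int:
    if not raw.isdigit():                      # correct character type
        raise ValueError("age must contain digits only")
    age = int(raw)
    if not 0 <= age <= 120:                    # acceptable range
        raise ValueError("age must be between 0 and 120")
    return age

def validate_username(raw: str) -> str:
    # Allow-listing safe characters rejects SQL injection and XSS
    # metacharacters (quotes, angle brackets, semicolons) wholesale.
    if not re.fullmatch(r"[A-Za-z0-9_.-]{1,32}", raw):
        raise ValueError("username contains disallowed characters")
    return raw

print(validate_age("42"))                       # passes all checks
print(validate_username("v.chen_01"))           # passes the allow-list
# validate_username("'; DROP TABLE users;--")   # would raise ValueError
```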
  11. An organization is considering the acquisition of enterprise software that will be hosted by a cloud services provider. What additional requirements need to be considered for the cloud environment?

    • A. Logging
    • B. Access control
    • C. Data segregation
    • D. Performance
    • Answer: C. In addition to business, functional, security, and privacy requirements, an organization considering cloud-based services needs to understand how the cloud services provider segregates the organization’s data from that of its other customers.
    • C. Data segregation
    • Explanation:
      • When enterprise software is hosted by a cloud services provider, data segregation becomes a critical requirement. Cloud environments often host data from multiple clients, so ensuring that an organization's data is properly isolated from other clients' data is essential for maintaining data confidentiality and security. Data segregation ensures compliance with legal and regulatory requirements and protects against data breaches or unauthorized access.
    • Other Options:
      • A. Logging: Logging is important for monitoring and auditing purposes, but it is not unique to the cloud environment—it applies to both on-premises and cloud solutions.
      • B. Access control: While access control is vital, it is not specific to the cloud; it is a general requirement for securing any system.
      • D. Performance: Performance is a consideration for all environments but does not specifically address the unique challenges of a cloud-hosted solution.
  12. System operators have to make an emergency change in order to keep an application server running. To satisfy change management requirements, the systems operators should

    • A. Document the steps taken.
    • B. Fill out an emergency change request form.
    • C. Seek approval from management before making the change.
    • D. Do all of the above.
    • Answer: D. When making an emergency change, personnel should first seek management approval, document the details of the change, and initiate an emergency change management procedure.
    • D. Do all of the above.
    • Explanation:
      • In an emergency change scenario, it is critical to follow change management requirements even under time constraints. The following steps are necessary to ensure the emergency change is appropriately handled:
      • Document the steps taken: This ensures there is a record of what was done, which is essential for accountability, troubleshooting, and audit purposes.
      • Fill out an emergency change request form: This formalizes the change and aligns it with the organization's change management policies.
      • Seek approval from management before making the change: Whenever possible, even in emergencies, obtaining management approval ensures the change is authorized and complies with governance processes.
      • Following all these steps ensures that emergency changes are controlled, traceable, and compliant with organizational policies.
  13. A global organization is planning the migration of a business process to a new application. What cutover methods can be considered?

    • A. Parallel, geographic, module by module, or all at once
    • B. Parallel, geographic, or module by module
    • C. Parallel, module by module, or all at once
    • D. Parallel, geographic, or all at once
    • Answer: A. The migration to a new application can be done in several ways: parallel (running old and new systems side by side); geographic (migrating users in each geographic region separately); module by module (migrating individual modules of the application); or migrate all users, locations, and modules at the same time.
    • A. Parallel, geographic, module by module, or all at once
    • Explanation:
      • When migrating a business process to a new application, several cutover methods can be considered, depending on the organization's needs, risks, and resources:
        • Parallel: Running the old and new systems simultaneously for a period of time to ensure the new system works correctly before decommissioning the old one.
        • Geographic: Migrating the process region by region or location by location, useful for global organizations.
        • Module by Module: Transitioning specific parts or modules of the process incrementally to manage complexity and risk.
        • All at Once (Big Bang): Switching entirely to the new system in one go, which is faster but riskier if problems arise.
      • This variety of methods provides flexibility to choose the best approach based on the organization's operational, technical, and risk considerations.
      • Other Options:
        • B, C, and D: These omit one or more viable cutover methods, such as geographic or module by module, making them incomplete.
  14. The purpose of developing risk tiers in third-party management is to

    • A. Determine whether to perform penetration tests.
    • B. Satisfy regulatory requirements.
    • C. Determine the appropriate level of due diligence.
    • D. Determine data classification requirements.
    • Answer: C. Developing risk tiers in third-party management helps an organization determine the level of due diligence for third parties at each risk tier. Because the level of risk varies, some third parties warrant extensive due diligence, while a lighter touch is warranted for low-risk parties.
    • C. Determine the appropriate level of due diligence.
    • Explanation:
      • Risk tiers in third-party management are developed to categorize vendors or third parties based on the level of risk they pose to the organization. This helps in determining the appropriate level of due diligence required for each tier. For instance, vendors handling sensitive or critical data may require more rigorous assessment and oversight than those with minimal access to the organization's systems or data.
    • Why the Other Options Are Incorrect:
      • A. Determine whether to perform penetration tests: While risk tiers may indirectly inform this, penetration testing is not the primary purpose of risk tiers.
      • B. Satisfy regulatory requirements: Risk tiers can help in meeting regulatory requirements, but this is not their primary purpose.
      • D. Determine data classification requirements: Data classification is a separate activity and not the main objective of risk tiering in third-party management.
  15. The reason that functional requirements need to be measurable is

    • A. Developers need to know how to test functional requirements
    • B. Functional tests are derived directly from functional requirements
    • C. To verify correct system operation
    • D. To measure system performance
    • Answer: B. Functional requirements should be measurable, because test cases should be developed directly from functional requirements. The same can be said about security and privacy requirements—all must be measurable because all should be tested.
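    • Example: how a measurable functional requirement turns directly into a test. The requirement FR-12 below ("lock an account after 3 consecutive failed logins") is hypothetical; because it states a measurable threshold, the test case follows mechanically.

```python
# A functional test derived one-for-one from a measurable requirement.
class Account:
    def __init__(self):
        self.failed_attempts = 0
        self.locked = False

    def login(self, password_ok: bool) -> str:
        if self.locked:
            return "locked"
        if password_ok:
            self.failed_attempts = 0
            return "ok"
        self.failed_attempts += 1
        if self.failed_attempts >= 3:   # FR-12's measurable threshold
            self.locked = True
        return "failed"

def test_fr12_lockout_after_three_failures():
    acct = Account()
    for _ in range(3):
        acct.login(password_ok=False)
    assert acct.locked, "FR-12: account must lock after 3 failed attempts"

test_fr12_lockout_after_three_failures()
print("FR-12 test passed")
```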

CHAPTER 5 IT Service Management and Continuity

  1. A web application is displaying information incorrectly and many users have contacted the IT service desk. This matter should be considered a(n)

    • A. Incident
    • B. Problem
    • C. Bug
    • D. Outage
    • Answer: B. A problem is defined as a condition that is the result of multiple incidents that exhibit common symptoms. In this example, many users are experiencing the effects of the application error.
    • B. Problem
    • Explanation:
      • Each user contact about the faulty display is an individual incident. When many incidents exhibit the same symptoms and trace back to a single underlying condition, as they do here, the matter as a whole is classified as a problem and should be routed to problem management for root-cause analysis.
    • Why the Other Options Are Incorrect:
      • A. Incident: Each separate user report is an incident, but the question asks about the overall matter. The collective condition behind many related incidents is a problem.
      • C. Bug: A code defect may well turn out to be the root cause, but "bug" is a development term, not the service management classification for this condition.
      • D. Outage: An outage is a complete loss of service. The web application is still operational, only displaying information incorrectly, so it does not qualify as an outage.
  2. An IT organization is experiencing many cases of unexpected downtime that are caused by unauthorized changes to application code and operating system configuration. Which process should the IT organization implement to reduce downtime?

    • A. Configuration management
    • B. Incident management
    • C. Change management
    • D. Problem management
    • Answer: C. Change management is the process of managing change through a life cycle process that consists of request, review, approve, implement, and verify.
    • C. Change management
    • Explanation:
      • The change management process is designed to ensure that all changes to IT systems, such as application code or operating system configurations, are formally reviewed, approved, and documented before implementation. By implementing a structured change management process, the organization can:
      • Reduce unauthorized changes by requiring approval and tracking.
      • Minimize the risk of unexpected downtime caused by poorly planned or unapproved changes.
      • Improve accountability and transparency for all modifications to systems.
    • Other Options:
      • A. Configuration management: While configuration management tracks and maintains information about the system's setup, it does not directly address unauthorized changes.
      • B. Incident management: This focuses on resolving incidents (unexpected disruptions), not preventing downtime caused by unauthorized changes.
      • D. Problem management: This focuses on identifying and addressing the root cause of recurring incidents, but it does not control how changes are made.
  3. An IT organization manages hundreds of servers, databases, and applications, and is having difficulty tracking changes to the configuration of these systems. What process should be implemented to remedy this?

    • A. Configuration management
    • B. Change management
    • C. Problem management
    • D. Incident management
    • Answer: A. Configuration management is the process (often supplemented with automated tools) of tracking configuration changes to systems and system components such as databases and applications.
    • A. Configuration management
    • Explanation:
      • Configuration management is the process of identifying, recording, and managing all components (hardware, software, and settings) in an IT environment. It involves maintaining an up-to-date Configuration Management Database (CMDB) or similar system to track changes, dependencies, and relationships among IT assets. This ensures:
        • Accurate tracking of changes to configurations.
        • Improved visibility and control over the IT environment.
        • Easier troubleshooting and impact analysis.
        • For an organization with hundreds of servers, databases, and applications, configuration management is critical to managing complexity and ensuring consistency.
    • Other Options:
      • B. Change management: While change management ensures changes are reviewed and approved, it does not track the configurations of systems. Configuration management complements change management by documenting those changes.
      • C. Problem management: This focuses on identifying and resolving the root causes of recurring incidents but does not track system configurations.
      • D. Incident management: This deals with resolving disruptions but does not manage system configurations.
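    • Example: a toy sketch of what configuration tracking buys you, comparing a CMDB-style baseline against the observed state of each server to surface untracked changes; hostnames and settings are hypothetical.

```python
# Configuration drift check: any mismatch between the recorded
# baseline and the observed state is an untracked change.
cmdb_baseline = {
    "web-01": {"os": "Ubuntu 22.04", "nginx": "1.24", "tls": "1.3"},
    "db-01":  {"os": "Ubuntu 22.04", "postgres": "15.4"},
}
observed = {
    "web-01": {"os": "Ubuntu 22.04", "nginx": "1.25", "tls": "1.3"},
    "db-01":  {"os": "Ubuntu 22.04", "postgres": "15.4"},
}

for host, baseline in cmdb_baseline.items():
    drift = {key: (want, observed[host].get(key))
             for key, want in baseline.items()
             if observed[host].get(key) != want}
    if drift:
        print(f"{host}: drift detected (baseline, observed) -> {drift}")
    else:
        print(f"{host}: matches CMDB baseline")
```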
  4. A computer’s CPU, memory, and peripherals are connected to each other through a

    • A. Kernel
    • B. FireWire
    • C. Pipeline
    • D. Bus
    • Answer: D. A bus connects all of the computer’s internal components together, including its CPU, main memory, secondary memory, and peripheral devices.
    • D. Bus
    • Explanation:
      • A bus is the communication system that connects the various components of a computer, including the CPU, memory, and peripherals. It allows data to be transmitted between these components. There are different types of buses in a computer system, such as:
        • Data bus: Transfers data between components.
        • Address bus: Specifies the memory locations for data transfers.
        • Control bus: Manages control signals between the CPU and other components.
    • Other Options:
      • A. Kernel: The kernel is the core of an operating system, managing hardware and software interactions, but it does not physically connect hardware components.
      • B. FireWire: This is a high-speed interface used to connect peripherals (like external storage) but is not a system-wide component interconnection.
      • C. Pipeline: Refers to CPU architecture and how instructions are processed, but it does not connect components physically.
  5. A database administrator has been asked to configure a database management system so that it records all changes made by users. What should the database administrator implement?

    • A. Audit logging
    • B. Triggers
    • C. Stored procedures
    • D. Journaling
    • Answer: A. The database administrator should implement audit logging. This will cause the database to record every change that is made to it.
    • A. Audit logging
    • Explanation:
      • Audit logging is a feature in database management systems (DBMS) that records all changes made by users. It is typically used for tracking and monitoring user activities, ensuring accountability, and supporting compliance with security policies and regulations. Audit logs capture details such as:
        • What changes were made.
        • Who made the changes.
        • When the changes were made.
      • This helps in forensic analysis, monitoring unauthorized activities, and maintaining an accurate history of changes.
    • Other Options:
      • B. Triggers: Triggers are database mechanisms that automatically execute actions in response to specific events (e.g., insert, update, delete). While they can log changes, they are not a comprehensive audit logging solution.
      • C. Stored procedures: Stored procedures are precompiled SQL code stored in the database, used to perform repetitive tasks, but they do not inherently record user changes.
      • D. Journaling: Journaling typically refers to maintaining a log of transactions to ensure data integrity during recovery, but it is not specifically for tracking user changes.
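    • Example: a toy illustration of the who/what/when an audit record captures. Production DBMSs provide native audit logging facilities; the SQLite trigger below is only a stand-in to make the recorded fields concrete.

```python
# Record every balance change with who made it, the old and new
# values, and a timestamp -- the core contents of an audit log entry.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL, modified_by TEXT);
CREATE TABLE audit_log (
    event_time  TEXT DEFAULT CURRENT_TIMESTAMP,  -- when
    changed_by  TEXT,                            -- who
    old_balance REAL,                            -- what (before)
    new_balance REAL                             -- what (after)
);
CREATE TRIGGER audit_balance AFTER UPDATE ON accounts
BEGIN
    INSERT INTO audit_log (changed_by, old_balance, new_balance)
    VALUES (NEW.modified_by, OLD.balance, NEW.balance);
END;
""")
con.execute("INSERT INTO accounts VALUES (1, 100.0, 'setup')")
con.execute("UPDATE accounts SET balance = 250.0, modified_by = 'alice' WHERE id = 1")
print(con.execute("SELECT * FROM audit_log").fetchall())
# e.g. [('2025-01-01 12:00:00', 'alice', 100.0, 250.0)]
```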
  6. The layers of the TCP/IP reference model are

    • A. Link, Internet, transport, application
    • B. Physical, link, Internet, transport, application
    • C. Link, transport, Internet, application
    • D. Physical, data link, network, transport, session, presentation, application
    • Answer: A. The layers of the TCP/IP model are (from lowest to highest) link, Internet, transport, and application.
    • A. Link, Internet, Transport, Application
    • Explanation:
      • The TCP/IP reference model has four layers, each corresponding to specific functionality in the communication process:
        • Link Layer: Responsible for handling the physical transmission of data across network devices. It corresponds to the physical and data link layers in the OSI model.
        • Internet Layer: Handles addressing, routing, and delivering packets between devices across different networks (e.g., IP protocol).
        • Transport Layer: Ensures reliable data transmission between devices, providing error checking and flow control (e.g., TCP, UDP).
        • Application Layer: Provides network services directly to applications, enabling end-user communication (e.g., HTTP, SMTP).
    • Other Options:
      • B. Physical, link, Internet, transport, application: The TCP/IP model combines physical and data link layers into the Link layer, so this option is incorrect.
      • C. Link, transport, Internet, application: Incorrect layer sequence; the Internet layer should come before the Transport layer.
      • D. Physical, data link, network, transport, session, presentation, application: This describes the 7-layer OSI model, not the 4-layer TCP/IP model.
  7. The purpose of the Internet layer in the TCP/IP model is

    • A. Encapsulation
    • B. Packet delivery on a local network
    • C. Packet delivery on a local or remote network
    • D. Order of delivery and flow control
    • Answer: C. The purpose of the Internet layer in the TCP/IP model is the delivery of packets from one station to another, on the same network or on a different network.
    • C. Packet delivery on a local or remote network
    • Explanation:
      • The Internet layer in the TCP/IP model is responsible for:
        • Routing and forwarding packets: Ensuring that data is delivered between the source and destination across different networks.
        • Addressing: Using IP addresses to identify devices on a network.
        • Fragmentation and reassembly: Dividing packets if necessary and ensuring they are reassembled at the destination.
        • The Internet layer works across both local and remote networks by providing routing capabilities, enabling data to traverse multiple networks to reach its destination.
    • Other Options:
      • A. Encapsulation: Encapsulation is a general process performed at multiple layers, not specific to the Internet layer.
      • B. Packet delivery on a local network: This is primarily handled by the Link layer.
      • D. Order of delivery and flow control: These are responsibilities of the Transport layer (e.g., TCP).
  8. The purpose of the DHCP protocol is

    • A. Control flow on a congested network.
    • B. Query a station to discover its IP address.
    • C. Assign an IP address to a station.
    • D. Assign an Ethernet MAC address to a station.
    • Answer: C. The DHCP protocol is used to assign IP addresses to computers on a network.
    • C. Assign an IP address to a station.
    • Explanation:
      • The Dynamic Host Configuration Protocol (DHCP) is used to dynamically assign IP addresses to devices (stations) on a network. DHCP automates the configuration of network settings, such as:
        • IP address: Assigning a unique IP address to each device.
        • Subnet mask: Defining the network and host portions of the IP address.
        • Default gateway: Providing the router address for devices to access other networks.
        • DNS servers: Assigning the addresses of servers that resolve domain names.
        • This reduces manual configuration and ensures efficient use of available IP addresses.
    • Other Options:
      • A. Control flow on a congested network: This is not the purpose of DHCP; flow control is handled by protocols like TCP or network QoS mechanisms.
      • B. Query a station to discover its IP address: This is a function of protocols like ARP (Address Resolution Protocol) or ICMP.
      • D. Assign an Ethernet MAC address to a station: MAC addresses are hardware-specific and do not require assignment by DHCP.
  9. An IS auditor is examining a wireless (Wi-Fi) network and has determined that the network uses WEP encryption. What action should the auditor take?

    • A. Recommend that encryption be changed to WPA.
    • B. Recommend that encryption be changed to EAP.
    • C. Request documentation for the key management process.
    • D. Request documentation for the authentication process.
    • Answer: A. The WEP protocol has been seriously compromised and should be replaced with WPA or WPA2 encryption.
    • A. Recommend that encryption be changed to WPA.
    • Explanation:
      • Wired Equivalent Privacy (WEP) is an outdated and insecure encryption protocol for Wi-Fi networks. It is vulnerable to attacks, and its use poses significant security risks. Modern encryption standards, such as Wi-Fi Protected Access (WPA) or preferably WPA2/WPA3, provide stronger encryption and are recommended to protect wireless networks.
      • The IS auditor should recommend upgrading the encryption to a more secure standard to enhance the overall security of the wireless network.
    • Why the Other Options Are Incorrect:
      • B. Recommend that encryption be changed to EAP: Extensible Authentication Protocol (EAP) is an authentication framework, not an encryption standard.
      • C. Request documentation for the key management process: Key management is important but does not address the fundamental issue of WEP being insecure.
      • D. Request documentation for the authentication process: While useful, it does not solve the encryption vulnerability posed by WEP.
  10. 126.0.0.1 is an example of a

    • A. MAC address
    • B. Loopback address
    • C. Class A address
    • D. Subnet mask
    • Answer: C. Class A addresses are in the range 0.0.0.0 to 127.255.255.255. The address 126.0.0.1 falls into this range.
    • C. Class A address
    • Explanation:
      • 126.0.0.1 falls within the classful Class A range (0.0.0.0 to 127.255.255.255), whose assignable network values run from 1.0.0.0 to 126.255.255.255 because network 0 is reserved and 127 is set aside for loopback. This address can be assigned to network devices as part of a public or private network.
    • Why It's Not a Loopback Address:
      • The loopback address range is strictly 127.0.0.0 to 127.255.255.255.
      • Loopback addresses are reserved for testing and local communication within the same device. They are not routable or usable on external networks.
      • Since 126.0.0.1 falls outside the 127.x.x.x range, it is not a loopback address.
    • Other Options:
      • A. MAC address: This refers to a hardware identifier and does not resemble an IPv4 address like 126.0.0.1.
      • B. Loopback address: Incorrect, as explained above.
      • D. Subnet mask: Subnet masks (e.g., 255.255.255.0) are not valid IP addresses but rather define the network and host portions of an IP address.
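    • Example: a small classifier mirroring the ranges above; it flags the reserved 127.0.0.0/8 loopback block separately from the assignable Class A networks (first octet 1 through 126).

```python
# Classful IPv4 classification by first octet.
def classify_ipv4(addr: str) -> str:
    first = int(addr.split(".")[0])
    if first == 127:
        return "Loopback (reserved block inside the classful Class A space)"
    if 1 <= first <= 126:
        return "Class A"
    if 128 <= first <= 191:
        return "Class B"
    if 192 <= first <= 223:
        return "Class C"
    if 224 <= first <= 239:
        return "Class D (multicast)"
    return "Class E (experimental) or otherwise reserved"

for ip in ("126.0.0.1", "127.0.0.1", "172.16.0.5", "192.168.1.1"):
    print(ip, "->", classify_ipv4(ip))
# 126.0.0.1 -> Class A; 127.0.0.1 -> Loopback; 172.16.0.5 -> Class B; ...
```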
  11. What is the most important consideration when selecting a hot site?

    • A. Time zone
    • B. Geographic location in relation to the primary site
    • C. Proximity to major transportation
    • D. Natural hazards
    • Answer: B. An important selection criterion for a hot site is the geographic location in relation to the primary site. If they are too close together, then a single disaster event may involve both locations.
    • B. Geographic location in relation to the primary site
    • Explanation:
      • When selecting a hot site (a fully operational backup site ready to take over in case of a disaster), the geographic location in relation to the primary site is the most critical factor. This ensures:
      • Risk Mitigation: The hot site should be far enough away to avoid being affected by the same disaster (e.g., earthquakes, hurricanes, power outages).
      • Accessibility: It should still be accessible to personnel and resources when needed.
      • The balance between proximity (for ease of access) and distance (to reduce shared risks) is key.
    • Other Options:
      • A. Time zone: While important for global operations, it is secondary to ensuring physical risk separation and accessibility.
      • C. Proximity to major transportation: This can improve logistics but is not as critical as the site's ability to function during a disaster.
      • D. Natural hazards: Avoiding natural hazards is important, but it is part of the overall geographic location consideration.
  12. An organization has established a recovery time objective of 14 days for its most critical business applications. Which recovery strategy would be the best choice?

    • A. Mobile site
    • B. Warm site
    • C. Hot site
    • D. Cold site
    • Answer: D. An organization that has a 14-day recovery time objective (RTO) can use a cold site for its recovery strategy. Fourteen days is enough time for most organizations to acquire hardware and recover applications.
    • D. Cold site
    • Explanation:
      • A recovery time objective (RTO) of 14 days means the organization can tolerate up to 14 days to restore its systems to operation. This indicates a relatively low urgency for immediate restoration, making a cold site the most cost-effective option for such scenarios.
    • Cold site characteristics:
      • Typically includes basic infrastructure (e.g., power, cooling, physical space).
      • Does not include pre-installed hardware or software.
      • Requires significant time to become operational, which still fits comfortably within a 14-day RTO.
    • For an RTO of 14 days, a cold site provides adequate recovery capability without the higher costs of warm or hot sites, which are designed for much shorter RTOs and faster recovery times.
    • Why Other Options Are Incorrect:
      • A. Mobile site: A mobile site (a portable data center) suits shorter RTOs but is generally more expensive than a 14-day RTO requires.
      • B. Warm site: A warm site is partially equipped and can be operational faster than a cold site; it is better suited to shorter RTOs.
      • C. Hot site: A hot site is fully operational and ready for immediate use, ideal for near-zero RTOs. It would be overkill and too expensive for a 14-day RTO.
  13. What technology should an organization use for its application servers to provide continuous service to users?

    • A. Dual power supplies
    • B. Server clustering
    • C. Dual network feeds
    • D. Transaction monitoring
    • Answer: B. An organization that wants its application servers to be continuously available to its users needs to employ server clustering. This enables at least one server to be always available to service user requests.
    • B. Server clustering
    • Explanation:
      • Server clustering is a technology that connects multiple servers to work together as a single system to provide high availability, load balancing, and fault tolerance for application servers. If one server in the cluster fails, another server takes over the workload, ensuring continuous service to users without significant downtime.
    • Why Other Options Are Incorrect:
      • A. Dual power supplies: Dual power supplies provide redundancy for power failures but do not ensure continuous application service if the server itself fails.
      • C. Dual network feeds: Redundant network feeds prevent network outages but do not protect against server failures.
      • D. Transaction monitoring: Transaction monitoring tracks and logs transactions for analysis but does not ensure service continuity.
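    • Example: a minimal client-side failover sketch in which a health check walks the cluster and the first healthy node serves the request; the node names and the is_healthy() probe are hypothetical placeholders for a real health endpoint.

```python
# Failover sketch: the service stays available as long as at least
# one cluster member passes its health check.
cluster = ["app-node-1", "app-node-2", "app-node-3"]

def is_healthy(node: str) -> bool:
    # Placeholder probe; a real check would hit e.g. an HTTP /health URL.
    return node != "app-node-1"          # simulate node 1 being down

def handle_request(payload: str) -> str:
    for node in cluster:
        if is_healthy(node):
            return f"{node} served: {payload}"
    raise RuntimeError("no healthy nodes: cluster-wide outage")

print(handle_request("GET /orders"))     # app-node-2 serves while node 1 is down
```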
  14. An organization currently stores its backup media in a cabinet next to the computers being backed up. A consultant told the organization to store backup media at an off-site storage facility. What risk did the consultant most likely have in mind when he made this recommendation?

    • A. A disaster that damages computer systems can also damage backup media.
    • B. Backup media rotation may result in loss of data backed up several weeks in the past.
    • C. Corruption of online data will require rapid data recovery from off-site storage.
    • D. Physical controls at the data processing site are insufficient.
    • Answer: A. The primary reason for employing off-site backup media storage is to mitigate the effects of a disaster that could otherwise destroy computer systems and their backup media.
    • A. A disaster that damages computer systems can also damage backup media.
    • Explanation:
      • Storing backup media in the same physical location as the computers being backed up creates a single point of failure. If a disaster such as a fire, flood, or earthquake occurs, it can destroy both the computer systems and the backup media, rendering data recovery impossible. By storing backup media at an off-site storage facility, the organization ensures that backups are protected from localized disasters.
    • Why Other Options Are Incorrect:
      • B. Backup media rotation may result in loss of data backed up several weeks in the past: This is related to retention policies and is not the primary risk addressed by off-site storage.
      • C. Corruption of online data will require rapid data recovery from off-site storage: While off-site storage supports recovery, it is typically not the fastest solution for immediate restoration and is not the primary reason for the recommendation.
      • D. Physical controls at the data processing site are insufficient: Although physical controls are important, the main risk the consultant is addressing is disaster recovery, not physical security.
  15. Which of the following statements about virtual server hardening is true?

    • A. The configuration of the host operating system will automatically flow to each guest operating system.
    • B. Each guest virtual machine needs to be hardened separately.
    • C. Guest operating systems do not need to be hardened because they are protected by the hypervisor.
    • D. Virtual servers do not need to be hardened because they do not run directly on computer hardware.
    • Answer: B. In a virtualization environment, each guest operating system needs to be hardened; they are no different from operating systems running directly on server (or workstation) hardware.
    • B. Each guest virtual machine needs to be hardened separately.
    • Explanation:
      • In a virtualized environment, each guest virtual machine (VM) operates as an independent system, even though they share the same physical hardware and hypervisor. Hardening each guest VM is essential to:
        • Reduce vulnerabilities and risks within the guest operating system (OS).
        • Protect against attacks or breaches that could spread to other VMs or the host.
        • Hardening typically involves disabling unnecessary services, applying patches, securing network configurations, and implementing strict access controls.
    • Why Other Options Are Incorrect:
      • A. The configuration of the host operating system will automatically flow to each guest operating system: This is incorrect. Guest VMs operate independently, and their configurations must be managed separately.
      • C. Guest operating systems do not need to be hardened because they are protected by the hypervisor: While the hypervisor provides some isolation, it cannot fully protect a vulnerable guest OS from being exploited.
      • D. Virtual servers do not need to be hardened because they do not run directly on computer hardware: This is incorrect; the virtual environment still requires security measures for each VM.

CHAPTER 6 Information Asset Protection

  1. A fire sprinkler system has water in its pipes, and sprinkler heads emit water only if the ambient temperature reaches 220°F. What type of system is this?
    • A. Deluge
    • B. Post-action
    • C. Wet pipe
    • D. Pre-action
    • Answer: C. A wet pipe fire sprinkler system is charged with water and will discharge water out of any sprinkler head whose fuse has reached a preset temperature.
    • C. Wet pipe
    • Explanation:
      • A wet pipe sprinkler system is the most common type of fire sprinkler system. It has water continuously stored in its pipes, ready to be released immediately when a sprinkler head is activated due to heat. In this case, the sprinkler heads emit water when the ambient temperature reaches 220°F, a characteristic of wet pipe systems.
    • Other Types of Systems:
    • A. Deluge:
      • Pipes are empty until activated, and all sprinkler heads discharge water simultaneously when a fire is detected.
      • Typically used in high-hazard areas, such as chemical storage or aircraft hangars.
    • B. Post-action:
      • This is not a recognized fire sprinkler system type.
    • D. Pre-action:
      • Pipes are empty until an initial fire detection system (e.g., smoke or heat sensors) activates, filling the pipes with water.
      • Water is released only if a second event occurs (e.g., sprinkler head activation).
      • Often used in areas where accidental discharge could cause significant damage, like data centers or museums.
  2. An organization is building a data center in an area frequented by power outages. The organization cannot tolerate power outages. What power system controls should be selected?
    • A. Uninterruptible power supply and electric generator
    • B. Uninterruptible power supply and batteries
    • C. Electric generator
    • D. Electric generator and line conditioning
    • Answer: A. The best solution is an electric generator and an uninterruptible power supply (UPS). A UPS responds to a power outage by providing continuous electric power without interruption. An electric generator provides backup power for extended periods.
    • A. Uninterruptible power supply and electric generator
    • Explanation:
      • To ensure continuous power in a data center located in an area prone to power outages, a combination of an Uninterruptible Power Supply (UPS) and an electric generator is the best choice:
        • Uninterruptible Power Supply (UPS):
          • Provides immediate, short-term power during an outage, preventing disruptions until a backup generator starts.
          • Protects sensitive equipment from power fluctuations.
        • Electric Generator:
          • Provides long-term backup power during extended outages.
          • Starts automatically when utility power fails and takes over from the UPS, ensuring seamless continuity.
        • Together, these address both short-term and long-term power outages, ensuring the organization can maintain critical operations.
    • Why Other Options Are Incorrect:
      • B. Uninterruptible power supply and batteries: While a UPS includes batteries, they alone are insufficient for extended outages, as they only provide power for a short duration.
      • C. Electric generator: A generator takes time to start during an outage, leaving a gap in power supply. A UPS is needed to bridge this gap.
      • D. Electric generator and line conditioning: Line conditioning protects against voltage spikes and dips but does not provide backup power during outages.
  3. An auditor has discovered several errors in user account management: many terminated employees’ computer accounts are still active. What is the best course of action?
    • A. Improve the employee termination process.
    • B. Shift responsibility for employee terminations to another group.
    • C. Audit the process more frequently.
    • D. Improve the employee termination process and audit the process more frequently.
    • Answer: D. The best course of action is to improve the employee termination process to reduce the number of exceptions. For a time, the process should be audited more frequently to make sure that the improvement is effective.
    • D. Improve the employee termination process and audit the process more frequently.
    • Explanation:
      • Improve the employee termination process:
        • The primary issue is that terminated employees' accounts are not being disabled or deleted promptly, which poses a significant security risk.
        • Enhancing the termination process ensures that user account deactivation is integrated into the workflow, reducing the risk of unauthorized access.
      • Audit the process more frequently:
        • Regular audits ensure that the improved process is being followed consistently.
        • This helps identify any lapses or deviations from the policy in a timely manner.
      • Both actions are necessary to address the root cause (a flawed termination process) and to provide ongoing assurance through monitoring and audits.
    • Why Other Options Are Insufficient:
      • A. Improve the employee termination process: While improving the process is essential, it alone may not catch future lapses without regular audits.
      • B. Shift responsibility for employee terminations to another group: Changing responsibility does not address the root cause or ensure proper execution.
      • C. Audit the process more frequently: Auditing alone will not fix the underlying flaws in the termination process.
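    • Example: the kind of cross-check an auditor might script for this finding, comparing an HR termination list against the accounts still active in the directory; both sets are hypothetical sample data.

```python
# Termination audit: any terminated employee with a still-active
# account is an exception to report and remediate.
terminated_employees = {"jdoe", "asmith", "bwayne"}
active_accounts      = {"jdoe", "mjones", "bwayne", "tchalla"}

exceptions = terminated_employees & active_accounts
if exceptions:
    print("Audit exceptions - terminated but still active:")
    for user in sorted(exceptions):
        print(f"  {user}: disable account and review recent logins")
else:
    print("No exceptions: termination process operating effectively")
```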
  4. An auditor has discovered that several administrators in an application share an administrative account. What course of action should the auditor recommend?
    • A. Implement activity logging on the administrative account.
    • B. Use several named administrative accounts that are not shared.
    • C. Implement a host-based intrusion detection system.
    • D. Require each administrator to sign nondisclosure and acceptable-use agreements.
    • Answer: B. Several separate administrative accounts should be used. This will enforce accountability for each administrator’s actions.
    • B. Use several named administrative accounts that are not shared.
    • Explanation:
      • Sharing administrative accounts undermines accountability because it becomes impossible to determine who performed specific actions. To address this, each administrator should have a unique named administrative account. This ensures:
        • Accountability: Actions can be directly traced back to the individual responsible.
        • Auditing: Logs can accurately reflect who made specific changes.
        • Compliance: Many regulations and security frameworks (e.g., ISO 27001, NIST) require unique user accounts for privileged access.
    • Why Other Options Are Insufficient:
      • A. Implement activity logging on the administrative account: Logging is important but does not solve the accountability problem if multiple people are using the same account.
      • C. Implement a host-based intrusion detection system: While intrusion detection can help identify malicious activity, it does not address the root issue of shared accounts.
      • D. Require each administrator to sign nondisclosure and acceptable-use agreements: Agreements are important for legal and policy purposes but do not provide technical accountability.
  5. An organization that has experienced a sudden increase in its long-distance charges has asked an auditor to investigate. What activity is the auditor likely to suspect is responsible for this?
    • A. Employees making more long-distance calls
    • B. Toll fraud
    • C. PBX malfunction
    • D. Malware in the PBX
    • Answer: B. The auditor is most likely to suspect that intruders have discovered a vulnerability in the organization’s PBX and are committing toll fraud.
    • B. Toll fraud
    • Explanation:
      • Toll fraud is a common cause of unexpected increases in long-distance charges. It occurs when unauthorized individuals gain access to a private branch exchange (PBX) or telecommunication system and make unauthorized long-distance or international calls, often at the expense of the organization. Toll fraud is a well-known security risk associated with poorly configured or unsecured telephony systems.
    • Why Other Options Are Less Likely:
      • A. Employees making more long-distance calls: While this could contribute, a sudden and significant increase in charges is more likely due to malicious activity rather than legitimate employee usage.
      • C. PBX malfunction: A malfunction might cause operational issues, but it is unlikely to directly lead to a surge in long-distance charges.
      • D. Malware in the PBX: Although possible, malware affecting PBX systems is less common than toll fraud as the direct cause of increased charges.
  6. An auditor is examining a key management process and has found that the IT department is not following its split-custody procedure. What is the likely result of this failure?
    • A. One or more individuals are in possession of the entire password for an encryption key.
    • B. One or more individuals are in possession of encrypted files.
    • C. Backup tapes are not being stored at an off-site facility.
    • D. Two or more employees are sharing an administrative account.
    • Answer: A. Someone may be in possession of the entire password for an encryption key. For instance, split custody requires that a password be broken into two or more parts, where each part is in possession of a unique individual. This prevents any one individual from having an entire password.
    • A. One or more individuals are in possession of the entire password for an encryption key.
    • Explanation:
      • Split custody is a key management practice where no single individual has full control over an entire encryption key or its associated password. The key or password is divided among multiple parties to ensure that no one person can misuse the encryption key independently (a simple secret-splitting scheme is sketched after this question). If the split-custody procedure is not followed, it means:
        • One or more individuals might have access to the entire password for an encryption key.
        • This introduces a significant security risk, as it violates the principle of separation of duties, potentially leading to unauthorized decryption or misuse of sensitive data.
    • Why Other Options Are Incorrect:
      • B. One or more individuals are in possession of encrypted files: This is not related to split custody, as encrypted files can often be shared securely without splitting keys.
      • C. Backup tapes are not being stored at an off-site facility: This relates to backup and disaster recovery practices, not key management or split custody.
      • D. Two or more employees are sharing an administrative account: This is an account management issue, not related to the failure of split-custody in key management.
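Below is a minimal sketch of the split-custody idea using XOR-based secret splitting; the function names and the example passphrase are illustrative, not from the source. Every share is required to reconstruct the secret, so no single custodian ever holds a usable password.

```python
import os

def split_secret(secret: bytes, parts: int = 2) -> list[bytes]:
    """Split a secret into `parts` shares; ALL shares are needed to rebuild it."""
    shares = [os.urandom(len(secret)) for _ in range(parts - 1)]
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def combine_shares(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original secret."""
    secret = bytes(len(shares[0]))
    for share in shares:
        secret = bytes(a ^ b for a, b in zip(secret, share))
    return secret

# Each custodian receives one share; any subset short of all of them
# is statistically indistinguishable from random bytes.
shares = split_secret(b"key-encryption-passphrase", parts=3)
assert combine_shares(shares) == b"key-encryption-passphrase"
```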
  7. A developer is updating an application that saves passwords in plaintext. What is the best method for securely storing passwords?
    • A. Encrypted with each user’s public key
    • B. Encrypted with a public key
    • C. Encrypted with a private key
    • D. Hashed
    • Answer: D. Passwords should be stored as a hash. This makes it nearly impossible to retrieve the original password, the disclosure of which could lead to account compromise.
    • D. Hashed
    • Explanation:
      • The best practice for securely storing passwords is to use a hashing algorithm. Hashing converts passwords into fixed-length, irreversible representations (hash values). When a user attempts to log in, the entered password is hashed and compared to the stored hash.
      • Key Benefits of Hashing:
        • Irreversibility: Unlike encryption, hashing is a one-way process. The original password cannot be derived from the hash.
        • Security with Salting: Adding a unique "salt" to each password before hashing prevents attacks like precomputed dictionary attacks or rainbow table attacks.
        • Work factor: Modern hashing algorithms (e.g., bcrypt, Argon2, or PBKDF2) are designed to be computationally expensive, making brute-force attacks more difficult (a minimal PBKDF2 sketch follows this question).
    • Why Other Options Are Incorrect:
      • A. Encrypted with each user’s public key:
        • Public key encryption is reversible with the private key and not suited for password storage.
        • It adds unnecessary complexity.
      • B. Encrypted with a public key:
        • Similar to A, encryption is not appropriate because encryption is reversible.
      • C. Encrypted with a private key:
        • Storing passwords encrypted with a private key is risky because if the key is compromised, all passwords are exposed.
        • Passwords should not rely on reversible encryption for storage.
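A minimal sketch of salted, slow password hashing using Python's standard library (PBKDF2-HMAC via hashlib); the iteration count and helper names are illustrative assumptions, not a prescribed configuration.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # assumed work factor; tune to your hardware budget

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt; store both salt and digest."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the login attempt and compare digests in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Because hashing is one-way, even an attacker who steals the stored salt and digest must brute-force each password; the unique salt defeats precomputed rainbow tables.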
  8. An organization experiences frequent malware infections on end-user workstations that are received through e-mail, despite the fact that workstations have anti-malware software. What is the best measure for reducing malware?
    • A. Anti-malware software on web proxy servers
    • B. Firewalls
    • C. Anti-malware software on e-mail servers
    • D. Intrusion prevention systems
    • Answer: C. Implementing anti-malware software on e-mail servers will provide an effective defense-in-depth, which should help to reduce the number of malware attacks on end-user workstations.
    • C. Anti-malware software on e-mail servers
    • Explanation:
      • The best measure to reduce malware infections from email is to deploy anti-malware software on e-mail servers. This approach ensures that emails are scanned for malware before they reach end-user workstations. By stopping malware at the source, the organization can:
        • Prevent malicious attachments or links from reaching users.
        • Reduce the reliance on end-users and endpoint protections as the first line of defense.
    • Why Other Options Are Less Effective:
      • A. Anti-malware software on web proxy servers: This is useful for detecting malware from web traffic but does not address email-based malware.
      • B. Firewalls: Firewalls filter network traffic based on rules but are not designed to detect or remove malware in email messages.
      • D. Intrusion prevention systems: IPS can detect and block malicious network traffic but is not focused on scanning email attachments or links for malware.
  9. An auditor has reviewed the access privileges of some employees and has discovered that employees with longer terms of service have excessive privileges. What can the auditor conclude from this?
    • A. Employee privileges are not being removed when they transfer from one position to another.
    • B. Long-time employees are able to guess other users’ passwords successfully and add to their privileges.
    • C. Long-time employees’ passwords should be set to expire more frequently.
    • D. The organization’s termination process is ineffective.
    • Answer: A. User privileges are not being removed from their old position when they transfer to a new position. This results in employees with excessive privileges.
    • A. Employee privileges are not being removed when they transfer from one position to another.
    • Explanation:
      • When employees accumulate excessive privileges over time, it often indicates that privilege management is not being handled properly, particularly during role transitions. As employees move to new positions, old privileges are not revoked, leading to an accumulation of unnecessary access rights, a condition known as privilege creep.
    • Key Points:
      • This situation often arises due to inadequate monitoring or failure to follow access control policies during employee role changes (a simple access-review check is sketched after this question).
      • Excessive privileges pose a security risk, as they can enable unauthorized access to sensitive data or systems.
    • Why Other Options Are Incorrect:
      • B. Long-time employees are able to guess other users’ passwords successfully and add to their privileges:
        • Privileges are generally controlled by access management systems, not by users being able to guess passwords and modify their own privileges.
      • C. Long-time employees’ passwords should be set to expire more frequently:
        • Password expiration is unrelated to the accumulation of privileges.
      • D. The organization’s termination process is ineffective:
        • The issue here is not about terminated employees but about managing active employees’ access as they change roles.
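A minimal sketch of an access-review check for privilege creep, assuming a hypothetical role-baseline table; all role and entitlement names are illustrative.

```python
# Baseline entitlements per role (illustrative values, not from the source).
ROLE_BASELINE: dict[str, set[str]] = {
    "accounts_payable": {"erp_invoice_entry", "erp_vendor_lookup"},
    "payroll": {"hr_payroll_run", "hr_timesheet_view"},
}

def excess_privileges(role: str, actual: set[str]) -> set[str]:
    """Return entitlements the user holds beyond their current role's baseline."""
    return actual - ROLE_BASELINE.get(role, set())

# A long-tenured employee who transferred from payroll to accounts payable
# but whose old entitlements were never revoked:
held = {"erp_invoice_entry", "erp_vendor_lookup", "hr_payroll_run"}
print(excess_privileges("accounts_payable", held))  # {'hr_payroll_run'}
```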
  10. An organization wants to reduce the number of user IDs and passwords that its employees need to remember. What is the best available solution to this problem?
    • A. Password vaults for storing user IDs and passwords
    • B. Token authentication
    • C. Single sign-on
    • D. Reduced sign-on
    • Answer: D. The most direct solution to the problem of too many user credentials is reduced sign-on. This provides a single authentication service (such as LDAP or Active Directory) that many applications can use for centralized user authentication.
    • C. Single sign-on
    • Explanation:
      • Single sign-on (SSO) is a solution that allows employees to access multiple applications and systems using a single set of credentials (user ID and password). With SSO:
        • Employees only need to remember one password, reducing cognitive load and improving usability.
        • Authentication is centralized, making access management and auditing easier for administrators.
        • Security improves by reducing the likelihood of weak or reused passwords across multiple systems.
      • Key Benefits of SSO:
        • Enhances user experience by simplifying login processes.
        • Reduces password-related IT support issues, such as password resets.
        • Improves security when implemented with multi-factor authentication (MFA).
    • Why Other Options Are Less Effective:
      • A. Password vaults for storing user IDs and passwords:
        • While password managers help store and autofill passwords securely, they don't reduce the number of credentials users need to manage.
      • B. Token authentication:
        • Token-based authentication enhances security but does not address the need to reduce the number of user IDs and passwords.
      • D. Reduced sign-on:
        • Reduced sign-on refers to limiting the number of systems requiring separate authentication, but it is less comprehensive than SSO, which provides seamless access across multiple systems.
  11. An IS auditor has discovered that an employee has installed a Wi-Fi access point in his cube. What action should the IS auditor take?
    • A. The IS auditor should include this in his audit report.
    • B. The IS auditor should immediately report this as a high-risk situation.
    • C. The IS auditor should ask the employee to turn off the Wi-Fi access point when it is not being used.
    • D. The IS auditor should test the Wi-Fi access point to see whether it properly authenticates users.
    • Answer: B. Finding an unauthorized access point is a high-risk situation that the IS auditor should report immediately to management.
    • B. The IS auditor should immediately report this as a high-risk situation.
    • Explanation:
      • An unauthorized Wi-Fi access point installed by an employee is a serious security risk. It creates a potential backdoor for attackers to access the organization's network, bypassing security controls such as firewalls or intrusion detection systems. This is often referred to as a rogue access point, and it can:
        • Compromise network security by providing unauthorized access.
        • Bypass corporate security policies.
        • Expose sensitive data to unauthorized users.
      • The IS auditor must escalate this as a high-risk situation to ensure immediate corrective actions, such as disabling the device, investigating the employee's intent, and reinforcing security policies.
    • Why Other Options Are Incorrect:
      • A. The IS auditor should include this in his audit report:
        • While the issue should be documented in the report, immediate action is required to mitigate the risk before the audit concludes.
      • C. The IS auditor should ask the employee to turn off the Wi-Fi access point when it is not being used:
        • This is insufficient, as the device still poses a security risk even if turned off temporarily. Unauthorized devices must be removed entirely.
      • D. The IS auditor should test the Wi-Fi access point to see whether it properly authenticates users:
        • Testing the device does not address the underlying issue of unauthorized installation. Unauthorized access points must be removed immediately.
  12. An auditor is examining an organization’s data loss prevention (DLP) system. The DLP system is recording instances of sensitive information that is leaving the organization. There are no records of actions taken. What should the IS auditor recommend?
    • A. That management appoint a party responsible for taking action when the DLP system detects that sensitive information is leaving the organization
    • B. That management develop procedures for responding to DLP system alerts
    • C. That management discontinue use of the DLP system since no one is taking action
    • D. That the DLP system be reconfigured to stop issuing alerts
    • Answer: A. An organization using a DLP system should be acting on alerts that the DLP system generates in order to curb employee and system behavior.
    • B. That management develop procedures for responding to DLP system alerts
    • Explanation:
      • A Data Loss Prevention (DLP) system is effective only if there are clear procedures for responding to its alerts. The system’s purpose is to detect and report sensitive data leakage, but the absence of action in response to alerts diminishes its value. The auditor should recommend that management establish formalized procedures to:
        • Define roles and responsibilities for investigating and addressing alerts.
        • Specify actions to be taken when sensitive data leakage is detected.
        • Ensure proper escalation and resolution of incidents.
    • Why Other Options Are Incorrect:
      • A. That management appoint a party responsible for taking action when the DLP system detects that sensitive information is leaving the organization:
        • Assigning responsibility is important, but it must be accompanied by structured procedures to guide the response process effectively.
      • C. That management discontinue use of the DLP system since no one is taking action:
        • Disabling the DLP system removes a critical layer of data protection and exposes the organization to greater risks.
      • D. That the DLP system be reconfigured to stop issuing alerts:
        • Ignoring alerts defeats the purpose of having a DLP system and leaves sensitive data vulnerabilities unaddressed.
    • Recommended Actions:
      • Develop procedures: Create detailed guidelines for responding to DLP alerts.
      • Train staff: Ensure that employees know how to handle DLP alerts.
      • Assign responsibility: Designate individuals or teams to monitor and act on DLP alerts.
      • Monitor compliance: Conduct regular reviews to ensure procedures are followed.
    • Why B Is Better Than A:
      • Procedural Gap:
        • The question explicitly mentions that there are no records of actions taken. This indicates a procedural issue, not just a lack of assigned responsibility.
        • Assigning responsibility alone (as A suggests) is insufficient unless accompanied by clear, formalized procedures outlining what actions should be taken and how.
      • Comprehensive Solution:
        • B emphasizes the creation of procedures for responding to DLP alerts. This is the root solution to ensure both:
          • Actions are taken when alerts occur.
          • Responsibilities are clearly defined within a structured framework.
      • Best Practice Alignment:
        • Security frameworks (e.g., NIST, ISO 27001) prioritize procedural clarity for effective incident response. Procedures ensure consistent, scalable, and measurable responses to security events.
  13. An organization’s remote access requires a user ID and one-time password token. What weakness does this scheme have?
    • A. Someone who finds a one-time password token could log in as the user by guessing the password.
    • B. Someone who finds a one-time password token could log in as the user by guessing the user ID.
    • C. Someone who knows the user ID could derive the password.
    • D. Someone who is able to eavesdrop on the authentication can log in later using a replay attack.
    • Answer: B. Someone who finds a one-time password token, and discovers that the site requests nothing beyond the user ID and the token code, could guess a user ID and possibly log in to the system.
    • D. Someone who is able to eavesdrop on the authentication can log in later using a replay attack.
    • Explanation:
      • A user ID and one-time password (OTP) token scheme provides an extra layer of security for remote access. However, it is vulnerable to replay attacks if the authentication process does not include mechanisms to prevent reuse of the captured credentials.
      • Attackers could intercept a valid user ID and OTP during transmission and reuse them later if replay-prevention mechanisms (e.g., timestamps, nonces, or one-time session validity) are not in place.
    • Replay Attack:
      • An attacker intercepts the user ID and the one-time password during authentication.
      • If the system does not implement protections like session expiration or nonces (unique values for each session), the attacker can reuse the intercepted credentials to gain unauthorized access.
    • Why Other Options Are Incorrect:
      • A. Someone who finds a one-time password token could log in as the user by guessing the password:
        • OTP tokens generate unique codes that are valid for only a short time; because the code changes constantly, guessing the current password is not feasible.
      • B. Someone who finds a one-time password token could log in as the user by guessing the user ID:
        • Knowing the OTP code alone does not grant access unless the attacker also supplies a valid user ID.
        • This is the reasoning behind the official answer: if the system performs no checks beyond the user ID and token code, an attacker holding the token could guess a user ID and log in. However, this presumes a significant implementation flaw that the question does not establish.
      • C. Someone who knows the user ID could derive the password:
        • OTPs are generated by secure one-way algorithms; the password cannot be derived from knowing the user ID alone.
    • Mitigations for Replay Attacks:
      • Time-bound OTPs: Ensure OTPs expire quickly, making captured passwords unusable after a brief period (see the TOTP sketch after this list).
      • Challenge-Response Authentication: Use unique session identifiers or nonces to ensure each authentication session is unique.
      • Encrypted Communication: Use protocols like TLS to secure the transmission of user IDs and OTPs, preventing eavesdropping.
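A minimal sketch of the time-bound OTP mitigation, following the RFC 6238 TOTP construction, with a naive one-use-per-time-step check to illustrate replay prevention. The secret, the in-memory `used_steps` set, and the helper names are illustrative assumptions, not a production design.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    counter = int(time.time() // step)
    key = base64.b32decode(secret_b32)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return f"{code:0{digits}d}"

used_steps: set[int] = set()  # would be per-user, persistent state in a real system

def verify(secret_b32: str, candidate: str, step: int = 30) -> bool:
    """Accept a given time step's code at most once, blocking simple replays."""
    now_step = int(time.time() // step)
    if now_step in used_steps:
        return False  # this window's code was already consumed: treat as replay
    if hmac.compare_digest(candidate, totp(secret_b32, step=step)):
        used_steps.add(now_step)
        return True
    return False

secret = "JBSWY3DPEHPK3PXP"  # illustrative base32 seed shared with the token
code = totp(secret)
assert verify(secret, code)        # first use in this window succeeds
assert not verify(secret, code)    # replay in the same window is rejected
```

Time-bounding alone only narrows the replay window; the single-use check (and TLS on the wire, as the last bullet notes) is what actually defeats an eavesdropper who captures a valid code.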
  14. An organization has configured its applications to utilize an LDAP server for authentication. The organization has set up:
    • A. Automatic sign-on
    • B. LDAP sign-on
    • C. Single sign-on
    • D. Reduced sign-on
    • Answer: D. Reduced sign-on is the term used to describe an environment where many different systems use a centralized authentication server (such as LDAP).
    • D. Reduced sign-on
    • Explanation:
      • In this scenario, the organization uses LDAP (Lightweight Directory Access Protocol) as a centralized authentication mechanism for multiple applications. This setup aligns with the concept of Reduced Sign-On (RSO), where:
        • Reduced Credential Management:
          • Users can access multiple systems using the same set of credentials, thanks to the centralized authentication provided by LDAP (a minimal bind sketch follows this question).
        • Not True SSO:
          • Although the credentials are centralized, users might still need to log in to each system separately unless additional mechanisms (like session tokens or identity federation) are implemented to enable seamless Single Sign-On (SSO).
        • Key Difference Between RSO and SSO:
          • RSO reduces the number of credentials users need to remember by centralizing authentication (as in LDAP).
          • SSO eliminates the need for repeated logins across systems after the initial authentication, requiring additional configurations or protocols like SAML, OAuth, or Kerberos.
    • Why Other Options Are Incorrect:
      • A. Automatic sign-on:
        • Automatic sign-on refers to scenarios where users are logged in automatically without providing credentials (e.g., via stored sessions or local authentication). LDAP does not inherently provide automatic sign-on.
      • B. LDAP sign-on:
        • While LDAP is being used for authentication, "LDAP sign-on" is not a recognized term describing the concept of centralized authentication.
      • C. Single sign-on:
        • LDAP alone does not implement true SSO. Users might still need to authenticate separately for each application.
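A minimal sketch of the centralized-authentication pattern behind reduced sign-on, assuming the third-party `ldap3` package (`pip install ldap3`); the server address and DN layout are hypothetical.

```python
from ldap3 import ALL, Connection, Server

def authenticate(username: str, password: str) -> bool:
    """Attempt a simple bind as the user; a successful bind means the
    directory accepted the credentials."""
    if not password:
        return False  # guard: an empty simple bind can succeed anonymously
    server = Server("ldap://directory.example.com", get_info=ALL)
    user_dn = f"uid={username},ou=people,dc=example,dc=com"
    conn = Connection(server, user=user_dn, password=password)
    try:
        return conn.bind()  # True only if the credentials are valid
    finally:
        conn.unbind()

# Every application calls the same routine, so users keep one credential set
# (reduced sign-on), but each application still performs its own login: there
# is no shared session token, which is what true SSO (SAML, OAuth, Kerberos)
# would add on top.
```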
  15. An organization has hundreds of remote locations containing valuable equipment and needs to enact a secure access control system. The locations do not have electricity. What is the best choice for an access control method that can be implemented at these locations?
    • A. Keycards
    • B. Metal keys
    • C. Cipher locks
    • D. Video surveillance
    • Answer: C. The best choice for an access control system for many remote locations is cipher locks. They do not require a power supply or remote connectivity, but they can be configured with a different combination for each user, and some retain a memory of which persons used them.
    • C. Cipher Locks Are the Best Choice:
      • Electricity Independence:
        • Cipher locks (mechanical combination locks) do not require a power source, making them practical for remote locations without electricity.
      • User-Specific Combinations:
        • Unlike metal keys, cipher locks can be programmed with unique codes for different users. This allows for better tracking and control, as codes can be changed if needed without replacing the physical lock.
      • Security Flexibility:
        • Cipher locks eliminate the risks associated with lost or duplicated keys. If a code is compromised, it can be easily changed.
      • Optional Tracking Capability:
        • Some cipher locks come with the ability to track which combinations were used, adding an audit trail for accountability, which metal keys cannot provide.
    • Why Other Options Are Less Suitable:
      • A. Keycards:
        • Keycards typically require electronic readers and power, making them unsuitable for locations without electricity.
      • B. Metal Keys:
        • While simple and reliable, metal keys cannot provide user-specific access or tracking. Lost or copied keys pose significant risks.
      • D. Video Surveillance:
        • Surveillance systems require power and only monitor access rather than controlling it directly.