Project Check-in
• For each Program outcome, what are the Key Performance Parameter (KPP) measures, leading and lagging, with target dates per fiscal year for the next 5 years? 'Leading indicators' are typically input oriented, hard to measure, and easy to influence. 'Lagging indicators' are typically output oriented, easy to measure, but hard to improve or influence.
• Which Programs (End-State & Retirement) align to an Organization Enterprise Architecture Segment? Which are not aligned to an Organization Enterprise Architecture Segment?
• What Programs are listed in each Organization Enterprise Architecture Segment?
• What is the vision for our mission environment in the Organization's future?
• What mission outcomes does a Component want to achieve?
• What are the gaps in the mission environment to meet our vision?
• For each overarching mission objective, are we meeting our performance measure targets?
• For each mission outcome, what progress has been made in prior fiscal year(s)?
• If a mission outcome didn't meet its target performance measure, why not?
• If a mission outcome no longer applies to this fiscal year, why not?
• For each Program objective, what is the Implementation Date for the applicable Measure/End-State?
• What criteria were used by the Component to manage its portfolio?
• For each Program, what are the performance measures and target dates per fiscal year for the next 5 years?
• If a Program didn't meet its target performance measure, why not?
• If a Program no longer applies to this fiscal year, why not?
• What Programs do not support any Organization goals?
• What overarching strategic mission objectives does the Organization want to achieve? What are the outcomes?
• How many Components align to each strategic mission objective? What is the Component side-by-side view?
• How many Programs align to each strategic mission objective? What is the Program side-by-side view?
• For each Program, what percentage (%) of objectives are aligned to mission outcomes?
• To what extent are the Key Performance Parameters (KPPs) technically achievable and testable?
• How have the Key Performance Parameter (KPP) metrics performed over the life of the service?
• Which (system) monitoring tools are in place?
• Are there gaps in business capability? If so, are Federal government core/priority items, the Organization Strategic Plan, the IT Strategy Plan, etc., linked to identified gaps within the architecture?
• What objective(s) does a Program want to achieve that are currently defined mission objectives? Are the Program's objectives clear?
• What user needs will this Program (service(s)) address? Why does the user want or need this Program (service(s))? Has the Mission Needs Statement (MNS) been updated accordingly? Which research methods were used to determine users' requirements? What were the key findings? How were the findings documented? Where can future team members access the documentation?
• What are the different ways (both online and offline) that people currently accomplish the task(s) today? Where does this Program fit into the larger way people currently obtain the service being offered? What are the different ways (both online and offline) that people will interact with the Program in the future? What metrics will best indicate how well the service is working for its users?
• What is the scope of the Program? What are the key deliverables?
• Are there architectural view(s) depicting the scope increments (phases) in sync with the Program schedule? Is the linkage clear as to how capability is being rolled out? Requirement(s) phasing?
• How is the Program’s scope controlled?
• How is this investment intended to improve Organization business operations? How would the Program objective(s) be faceted? [1] improved efficiencies, [2] reduction of paper-based processes (automation), [3] performance improvement, [4] consolidation/streamlining of redundant activities or technologies, [5] improved quality of data, products, or other deliverable(s), [6] improved customer/citizen experience or service, [7] reduced risk, [8] increased resilience, [9] cost savings or avoidance
• What objective(s) does a Program want to achieve that is currently not a defined mission objective at this time?
• What novel (unique) capabilities does the Program provide? Are there similar capabilities in other Components? Is the Program seeking first to use existing systems, services, infrastructure, and/or platforms? (If not using existing systems, services, infrastructure, or platforms, the Program must demonstrate a unique requirement and be subject to additional governance and oversight.) Do other systems in the Department provide the same or similar functionality? Were these systems considered as a solution? Have these other programs been leveraged as a resource (lessons learned/experts, etc.)?
• What were the previous year's expected end of year mission results/outcomes for the Program?
• What are the major mission results/outcomes bound by a measurable benefit that are expected to be realized by the Program? How are performance metrics managed?
• What are the milestones of the Program? Please describe the milestones. How frequent are they?
• What milestone revisions were made for the Program? Why the change?
• What problems/opportunities are being addressed by a Program?
• How does the Program assert alignment to CIO priorities, e.g., network modernization, data center optimization, cyber strategy?
• How is the Program realizing efficiencies through innovative IT solutions?
• Does the Program align to Organization mission areas, goals, critical business priorities, and requirements found in the Organization Enterprise Architecture, Organization directives, and external mandates?
• Are the requirements clear and validated? Are the requirements documents validated? If not, what is the current status and what specifically is needed to bring them to closure? Where applicable, are the stakeholders actively involved in the requirements process per the appropriate methodology?
• Has the Program performed a human systems integration analysis? To what extent have requirements related to human performance been captured? To what extent have the roles of humans and automation been defined?
• Are efforts being achieved/completed per calendar baseline?
• What are the unique 'Program Requirements'? e.g., must be disposed of, not warehoused
• How is the disposal type determined? e.g., warehousing is allowed, destruction is required. Who makes the determination? What criteria are used to determine the type?
• Has the software system design been specified in sufficient detail that a different contractor could continue development without any additional information from the design team?
• What is the targeted average response time for your service? What percent (%) of requests take more than 1 second, 2 seconds, 4 seconds, and 8 seconds? What is the average response time and percentile (%) breakdown (percent of requests taking more than 1s, 2s, 4s, and 8s) for the top 10 transactions? What is the volume of each of your service's top 10 transactions? What is the percentage (%) of transactions started vs. completed?
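Questions like the response-time breakdown above are easier to answer when the raw request latencies are summarized programmatically. A minimal sketch in Python; the sample latencies are illustrative, not from any real service:

```python
# Summarize request latencies: the average, and the share of requests
# exceeding the 1s/2s/4s/8s thresholds asked about in the checklist.

def latency_breakdown(latencies_s, thresholds=(1, 2, 4, 8)):
    """Return average latency and % of requests slower than each threshold."""
    n = len(latencies_s)
    avg = sum(latencies_s) / n
    over = {t: 100.0 * sum(1 for x in latencies_s if x > t) / n
            for t in thresholds}
    return avg, over

# Illustrative sample: 10 request latencies in seconds
sample = [0.2, 0.4, 0.9, 1.1, 1.5, 2.3, 3.0, 4.5, 6.0, 9.2]
avg, over = latency_breakdown(sample)
print(f"avg = {avg:.2f}s")          # avg = 2.91s
for t, pct in over.items():
    print(f"> {t}s: {pct:.0f}%")    # > 1s: 70% ... > 8s: 10%
```

In practice these counts would come from access logs or an APM tool rather than a hand-built list; the arithmetic is the same.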
• What is your service's monthly uptime target? What is your service's monthly uptime percentage, including scheduled maintenance? Excluding scheduled maintenance?
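The two uptime figures asked for above differ only in whether scheduled-maintenance minutes count as downtime or are removed from the denominator. A small sketch of both calculations; the outage and maintenance numbers are illustrative assumptions:

```python
def monthly_uptime(total_min, unplanned_down_min, scheduled_maint_min):
    """Uptime % including scheduled maintenance (maintenance counts as
    downtime) and excluding it (maintenance removed from the denominator)."""
    up_min = total_min - unplanned_down_min - scheduled_maint_min
    including = 100.0 * up_min / total_min
    excluding = 100.0 * up_min / (total_min - scheduled_maint_min)
    return including, excluding

# Illustrative 30-day month = 43,200 minutes; 90 min unplanned outage;
# 240 min scheduled maintenance window
inc, exc = monthly_uptime(43_200, 90, 240)
print(f"including maintenance: {inc:.3f}%")  # 99.236%
print(f"excluding maintenance: {exc:.3f}%")  # 99.791%
```

Reporting both figures, as the question requires, avoids a common ambiguity in vendor uptime claims.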
• What are the performance metrics defined in the contract(s) (e.g., response time, system uptime, time period to address priority issues)?
• How does your team receive automated alerts when incidents occur? How does your team respond to incidents? What is your post-mortem process?
• Which tools are in place to measure user behavior?
• What tools or technologies are used for A/B testing?
• Are the installation and configuration parameters of all Commercial Off-The-Shelf (COTS) products identified?
• What mechanisms/procedures are in place to assure mission outcomes are successful?
• If a modernization, list each user community (and the size of the community) that may significantly benefit from this modernization. For each of the user communities listed, what benefits have been identified?
• What is the count of Programs citing each risk/challenge? What are the most cited risks/challenges?
• Why did a life-cycle milestone date change? e.g., termination date delayed?
• What systems of the Program will be modernized? What are the milestones?
• For Program modernized systems, what are the anticipated cost savings and cost avoidance?
• For retirement (dispose) Programs, what End-State Program will subsume their 'capability'? In what fiscal year? Have strategies been developed for retiring system(s) that this Program replaces? (i.e., removing h/w, s/w, closing out unneeded interfaces, archiving/subsuming documentation.)
• What Enterprise Architecture activities are migrated in partial Portfolio system migrations?
• When is a partial migration complete?
• What is the detailed description of Full Operational Capability (FOC) for the Program? When did it occur, or when is it planned?
• What Programs receive/require information from a Program at its Termination Date?
• If the Program does not comply with the Organization Enterprise Roadmap (not aligned with the Organization IT Portfolio), why not?
• Does the Program align to Enterprise Roadmap(s)? Mission? Technical? Is the Program aligned with the Organization IT Portfolio?
• For the out years, what does our Organization mission portfolio comprise? When will our End-State environment be realized?
• What are the trends of the Organization mission portfolio, e.g., reduction of Programs?
• Who published the Programs' migration intent and dates?
• Does this investment have a service or services that is or can be used as an enterprise service, a service across more than one component, or a service to an external agency? What Program service(s) is or can be used as an enterprise service, a service across more than one component, or a service to an external agency? Is the enterprise service reflected in the Concept of Operations (CONOPS)? Please describe the service or services and indicate whether it is already in use or can be used.
• What Programs reside in a Component's portfolio this year but not last year? Why?
• What Programs reside in a Component's portfolio last year but not this year? Why not?
• What Programs were retired (disposed) in the previous year?
• For retirement (dispose) Programs, what actions were taken and what was the associated cost avoidance?
• What Programs changed from End-State/Modernization to retirement (dispose)?
• What Programs changed from retirement (dispose) to End-State/Modernization, and why?
• What Programs are listed as retirement (dispose) (replaced after 3 years) yet remain requesting funding? Why?
• What Program schedules have been 're-phased' or slipped? Why?
• Does the proposed solution architecture align with the Enterprise Architecture (EA) direction and requirements? How does the Program understand what this question encompasses?
• Does the Program follow the EA Principle 'Enterprise Perspective' ... to maximize the benefit to the whole of the enterprise?
• Is the Program maximizing the use of shared architecture patterns?
• What supporting function does the Program provide? e.g., ERP, common service, master data, reporting/analytics, tech enabler
• How does the Program map to the Organization EA Functional Portfolio? Organization EA Primary Functions/Segments?
• Is the Program being managed in the context of architecture artifacts?
• Does the Program architecture reflect current direction?
• Was the Organization Enterprise Architecture used to identify gaps/shortfalls in capabilities between the current state and the end state?
• Does the Program align to Organization mission areas, goals, critical business priorities, and requirements found in the Organization Enterprise Architecture, Organization directives, and external mandates?
• How is the Program managing internal and external dependencies?
• Are the Program's data, reference data, and information exchange models aligned to the Organization Enterprise Architecture?
• Are the program interfaces (technical, programmatic, schedule) identified and documented for all dependent systems/programs/organizations?
• Does the proposed Program architecture work across Components to limit duplication? Does the investment currently use or plan to use existing Organization systems/services and/or existing external or industry-provided Federal Government systems/services? If yes, indicate which system/service and describe.
• Does the Program align to a Segment Architecture(s)? Which Architectures?
• How is the Chief Architect involved in decisions?
• Do the architectural artifacts fit the Program? Are any missing?
• Do the architectural artifacts reflect Program planning? Are dependencies and evolutionary paths identified? Is there resolution to the system level?
• How is the architecture communicated with the stakeholders? How is information about stakeholders maintained? Is the stakeholders' viewpoint depicted in architectural artifacts?
• Are EA viewpoints used for analysis of Program challenges?
• Who approves architectural artifact changes? Does the PM sign off? Does a configuration board sign off?
• How is decision rationale pertaining to architectural artifact changes documented?
• Who is responsible for making architectural artifact changes? How is the work partitioned?
• What inputs are provided to the architects? What is the architecture-formation process? What is the process if the input is unsatisfactory or lacking?
• Is the architecture actively maintained and advanced? What is the timing? When were the architectural artifacts last updated?
• How is the architecture shared and communicated? Where is it published?
• How are the architectural artifacts used; only to communicate information, or as a method to govern the Program? How is Program compliance with the architecture assured?
• Does the architecture communicate best practices to Program team members?
• What is the relationship between architecture and testing?
• What are the unresolved issues pertaining to the architecture?
• What architecture authoring tools are used? What architecture convention(s)/framework/standard(s) are used?
• What Programs assert to a Business Process Reengineering (BPR) effort? Requesting funds?
• What was the progress against a Program's Business Process Reengineering (BPR) plan?
• For each Program, has Business Process Reengineering (BPR) been completed?
• Have future opportunities been identified to increase efficiency by using technology and Business Process Reengineering (BPR)?
• To what extent has the Program revised the applicable business and administrative processes to improve the opportunity to integrate the elements of the solution?
• To what extent have analysis and/or experimentation and modeling and simulation been conducted with the Program to identify initial user interface, manpower, skills, training, and safety concepts?
• How well do the Functional Requirements trace to Operational Requirements? Functional = what the Program is supposed to do, e.g., detect contraband, verify identification. Operational = how to run the Program's system, e.g., anomaly detection, logging.
• Will the Program be impacted by an accounting audit (exclusive of Material Weakness criteria)?
• What Laws, Regulations, and Policies (LRP) provide authority for the Program? Has statutory and/or regulatory authority for the mission(s) been properly cited?
• What Laws, Regulations, and Policies (LRP) impact Program efforts? e.g., accounting principles
• How does the Program learn of new or changed laws, regulations, and policies? How are these impacts reviewed and then managed within the Program?
• Has an LRP (Laws, Regulations, Policies) compliance plan been developed?
• Has the complete list of applicable Laws, Regulations, and Policies (LRP) in the Organization Enterprise Architecture been mapped?
• What Certification requests were not recommended by the Program?
• Has applicable Personally Identifiable Information (PII) planning and analysis been performed?
• Does the Program collect personal information from the user? Does it collect more information than necessary? How is the user notified of this collection?
• Could the data be used in ways an average user wouldn't expect? Will any of the personal information stored in the system be shared with other services, people, or partners?
• How does a user access, correct, delete, or remove personal information?
• How can someone from the public report a security issue?
• Does the Program employ Lean Business Agile (LBA) practices? Does the Program adopt an iterative and incremental development methodology?
• Does the Program foster continuous improvement of the IT environment with regard to planning, implementing, and maintaining all layers of the EA (business, application, services, data, technology, and security)?
• Has Lean Business Agile (LBA) been used in developing requirements?
• Does the Program have a requirements backlog? Is the backlog prioritized periodically based on recent release and stakeholder feedback?
• Is a software tool used to gather, track, plan, and manage the requirements backlog? Which version control system is being used? How are bugs and issues tracked and tickets issued? What tool is used? How is the feature backlog managed? What tool is used? How often do you review and reprioritize the feature and bug backlog?
• How does the Program collect user feedback during development? How is that feedback used to improve the Program/service? At each stage of usability testing, which gaps were identified in addressing user needs?
• Have the Lead Business Authorities (LBA) been directly involved in prioritizing the requirements backlog?
• Has the Program established a current-state baseline from which to begin Solution Engineering? Are Function Points used? Are Function Points used correctly? If not using Function Points, what is being used, and how does this compare to Function Points? Are Story Points used?
• How do planned vs. actuals compare using Function Points? Does the story make sense? Is there a sharp rate increase/decrease, and why? Is agile working for the program? (Agile is not always best.)
• Has a process been defined to approve and prioritize sprint content & releases?
• What is the frequency of deployments of code to production?
• How many days or weeks are in each iteration/sprint?
• How long does it take for a production deployment?
• How long does it take for a new team member to start developing?
• Do the Program's contracts support Lean Business Agile (LBA)? Encourage LBA?
• Does the Program plan to scale Agile development from small development teams to several concurrent development teams (using SAFe, DAD, or other techniques)?
• Do the Program's contracts support Agile methodology implementation?
• Has Program characterization been documented based on the business volumetrics? What is the demand or usage pattern for the Program's service? How many simultaneous users could the Program's system(s) handle, per planned/most recent capacity test results?
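Several of the cadence questions above (deployment frequency, sprint length, lead time) reduce to simple date arithmetic over a deployment log. A minimal sketch; the deployment dates are hypothetical:

```python
from datetime import date

def deploys_per_week(deploy_dates):
    """Average production deployments per week over the observed span."""
    span_days = (max(deploy_dates) - min(deploy_dates)).days or 1
    return len(deploy_dates) / (span_days / 7)

# Hypothetical deployment log for one month (weekly releases)
log = [date(2024, 3, 1), date(2024, 3, 8), date(2024, 3, 15),
       date(2024, 3, 22), date(2024, 3, 29)]
print(f"{deploys_per_week(log):.2f} deploys/week")  # 1.25 deploys/week
```

In practice the timestamps would be pulled from the CI/CD tool's release history rather than entered by hand.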
• Does the approach taken by the Program scale to meet mission needs? To what extent is the planned system capable and/or scalable to accommodate current and projected demands in data volume, users, or transactions? How much capacity is available in the hosting environment?
• What is the scaling strategy for the Program when demand increases suddenly? Has the Program been designed to scale based on demand?
• What is the estimated maximum number of concurrent users who will want to use the system? How many simultaneous users could the system handle, according to the most recent capacity test?
• How does the Program's service perform when usage exceeds the expected target usage volume? Does it degrade gracefully or catastrophically?
• Is Program system-level resource utilization monitored in real time? e.g., response time, latency, throughput, and error rates. Do automated alerts based on this monitoring occur? Are concurrent users tracked in real time, with user behaviors monitored in the aggregate to determine how well the Program meets user needs? Are metrics published?
• What is the demand or usage pattern for the Program/service today? What happens to the service when it experiences a surge in traffic or load? How much capacity is available in the hosting environment? How long does it take to provision a new resource, like an application server? How has the service been designed to scale based on demand?
• How has Program system-level resource utilization performed over the life of the service?
• What level of Disaster Recovery Plan (DRP) is currently maintained for the system?
• Is a waiver in place for the Program to accept the risk of a lower level of disaster recovery than required for the system? If yes, explain the reason for the waiver and its expiration date.
• What type of agreement is in place for the Program's primary service provider to provide disaster recovery services?
• What is the level of readiness of an alternate site for recovery of the system? Does it include all capabilities within the system boundary, including minor applications and subsystems?
• What is the month and year of the current system disaster recovery plan for the Program?
• What is the scope of the disaster recovery plan regarding all systems within the system boundary for the Program, including minor applications and subsystems?
• What is the maximum amount of time (in hours) to operational status before the Program's system has a failure impact on supported functions?
• What is the frequency of data backup for full backups? For incremental backups?
• What types of storage media are used for data and source code backups?
• What is the location (city and state) where backup data and system source code are located?
• What are the dependencies on procurement of a location or equipment, or must an existing facility have proprietary equipment or software to reconstitute the Program's system?
• Where is the planned/current hosting or Cloud Service Provider (CSP) for the Program's production systems?
• If the Program's system(s) aren't leveraging a cloud architecture, why not? What are the physical, application, or funding constraints for migrating to a cloud environment? Is cloud appropriate?
• If Program systems are planning to leverage a cloud architecture, what is the approach to migrating to a cloud environment? If using a vendor, what is the name of the provider, and what type of service contract will be used? Is the Continuity of Operations (COOP) defined such that it complies with laws, regulations, and policies (LRPs)?
• What is the alternate hosting or Cloud Service Provider where a Program's system(s) would be reconstituted? If the solution is dependent on procuring space or equipment, provide details.
• Where is the alternate hosting location (also include the location of the Cloud Service Provider (CSP), if applicable); city and state?
• What are the alternate business operations (identify if dependent on teleworking)?
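The real-time utilization monitoring asked about above typically reduces to threshold checks over a sliding window of recent samples. A minimal sketch of such an alert check; the thresholds and sample window are illustrative assumptions, and a real deployment would use a monitoring platform rather than hand-rolled code:

```python
def check_alerts(samples, max_p95_latency_s=2.0, max_error_rate=0.01):
    """Evaluate a window of (latency_s, is_error) samples against
    illustrative alert thresholds; return the list of fired alerts."""
    latencies = sorted(s[0] for s in samples)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]      # 95th percentile
    error_rate = sum(1 for s in samples if s[1]) / len(samples)
    alerts = []
    if p95 > max_p95_latency_s:
        alerts.append(f"p95 latency {p95:.2f}s exceeds {max_p95_latency_s}s")
    if error_rate > max_error_rate:
        alerts.append(f"error rate {error_rate:.1%} exceeds {max_error_rate:.1%}")
    return alerts

# Hypothetical 100-sample window: 90 fast requests, 10 slow, 2 of them errors
window = [(0.3, False)] * 90 + [(5.0, False)] * 8 + [(5.0, True)] * 2
for alert in check_alerts(window):
    print(alert)
```

Running this window fires both alerts, which is the point of the question: the Program should be able to name its thresholds and show the automated checks behind them.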
• What types of network services are in place for accessing the alternate recovery site from the alternate business sites?
• What are the network requirements for reconstituting environments?
• What services outside of the Program's system boundary are required to reconstitute the system? List the services and then describe the reason for each dependency.
• What additional mission systems are required for operational status for the Program? List the systems and then describe the reason for each dependency.
• What are the challenges or obstacles to optimal recovery for the Program's system(s)?
• What is the amount of annual disaster recovery funding costs, including facility, equipment, backup, and communications?
• To what extent has the Program defined what constitutes a system "failure"?
• What is the amount of funding needed to implement the required level of disaster recovery, if needed, that is not in the FY 2019 budget for the Program?
• What are the plans for changes to or implementation of disaster recovery services?
• Are hazard mitigation, enhanced preparedness, ensuring effective emergency response, and recovery included in the needs statement?
• To what extent has disaster recovery been factored into IT strategy? The Concept of Operations (CONOPS)?
• Are roles and responsibilities pertaining to contingency operations clearly defined? Have all participants confirmed commitment to contingency planning? Are responsibilities in onboarding procedures?
• Does the Program have a Contingency Plan? Does it address known and potential threats?
• What scenarios and use cases have been defined? What threat (e.g., natural disaster, illegal activities, cyber and terror) or business activity did the scenarios describe? How did the scenarios enable defining alternative methods to prevent, intervene, or respond to the threat or business activity?
• Have interdependencies been mapped? What dependencies (other programs/systems) does the program have? Are they a risk? Do they support/interface with the chosen technology? How does the mapping influence communications?
• How complex are the planned interfaces? Is this risk captured in the Risk Register (RR)?
• To what extent has disaster recovery been factored into testing? Linked to requirements?
• Is there a linkage between managed risks, architecture, and infrastructure?
• What would be the impact of a prolonged Program downtime window? What would be the impact of a catastrophic data loss?
• Has the Program addressed reliability? What is the Program's service(s) monthly uptime target? What is the monthly uptime percentage, including scheduled maintenance? Excluding scheduled maintenance?
• How is the Program paying for hosting infrastructure (e.g., by the minute, hourly, daily, monthly, fixed)?
• Are the Program's services hosted in multiple regions, availability zones, or data centers?
• Did contingency planning leverage the Organization Enterprise Architecture to assure known dependencies were accounted for?
• Are there Service Level Agreements (SLAs) for provisioning internal services? External services? Do the SLAs include remedies for failure to provide acceptable performance, time frames, and escalation procedures? Do they define a process for monitoring, tracking, and evaluating performance and resolving poor performance? Does the SLA include critical timelines, e.g., how quickly DHD has access to its agreed-upon resources, hot site, network recovery, fail-over, and recovery and restart of a downed system via a recovery service?
• When was the month and year of the latest system disaster recovery test for each system of the Program? How did actual times/data meet the defined requirements? If a requirement was not met, what is the plan?
• What type of disaster recovery test(s) were conducted? If more than one test was conducted, explain.
• What was the result of the latest disaster recovery test? If failed or passed with conditions, explain.
• Does the Program partner with Component and HQ organizations to institute and maintain processes for continuous process improvement?
• Does the Program implement SecDevOps/Continuous Integration/Continuous Delivery?
• What percentage (%) of the code base is covered by automated tests?
• How long does it take to build, test, and deploy a typical bug fix? How long does it take to build, test, and deploy a new feature into production?
• Do operations or test results perform in accordance with operational or functional requirements?
• Is the Program's software tested independently concurrent with development (i.e., not completing the full software release before handoff to Quality Assurance testers)?
• Have all the operational/functional requirements been reviewed by the acceptance test team to ensure that the requirements are clear, meaningful, and testable? Including Program documentation in addition to the Operational Requirements Document (ORD)?
• Are the infrastructure requirements defined for the Program? Have the decomposed system requirements been reviewed by the acceptance test team to ensure the requirements are clear, meaningful, and testable?
• Has user acceptance testing identified any gaps in required capabilities?
• What test tools are used? Which deployment automation or continuous integration tools are used?
• How does the IT Program adhere to the department-wide IT security Program?
• Does the Program comply with the Federal Information Security Management Act (FISMA)? What FISMA application or system has been identified?
• Does the program have a patching plan/methodology in place?
• Are there any outstanding security alerts? Are all patches up to date (preventing zero-day attacks), including interface software?
• Does the Program follow the Enterprise Architecture (EA) Principle 'Security in Depth'?
• How does the Program identify cybersecurity/resilience concerns? Is cybersecurity adequately addressed/planned into the program?
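The automated-test-coverage percentage asked about above is usually enforced as a gate in the CI pipeline. A hypothetical sketch of the underlying arithmetic; in practice the per-file line counts would come from a coverage tool (e.g., pytest-cov or JaCoCo), and the file names and threshold here are illustrative:

```python
def coverage_gate(per_file_counts, threshold_pct=80.0):
    """per_file_counts: {filename: (covered_lines, total_lines)}.
    Return (overall coverage %, whether the gate passes)."""
    covered = sum(c for c, _ in per_file_counts.values())
    total = sum(t for _, t in per_file_counts.values())
    pct = 100.0 * covered / total
    return pct, pct >= threshold_pct

# Hypothetical line counts per module
counts = {"app.py": (180, 200), "api.py": (90, 150), "db.py": (60, 60)}
pct, ok = coverage_gate(counts)
print(f"coverage {pct:.1f}% -> {'pass' if ok else 'FAIL'}")  # coverage 80.5% -> pass
```

Failing the build when coverage drops below the threshold makes the checklist question answerable from the pipeline itself rather than from a manual report.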
• Does the Program have an appropriate System Security Plan (SSP)?
• Does the Program have an appropriate Security Risk Assessment (SRA)?
• Have cybersecurity/resilience concerns been designed into the Program from the beginning?
• What is the threat landscape for the Program, and what are the planned threat assessment activities? Is threat modeling/management prepared and achieved at the Program or elsewhere (Organization Component, DoD, etc.)? What approach is the Program taking to manage threats?
• Does the Program address cybersecurity risks in the manner identified via process and policy? What is the Program's status with cybersecurity testing? Planning?
• Are Program cybersecurity risks continually closed out or mitigated?
• Are the Program's cybersecurity risks captured within the Component's Risk Assessment Report (RAR)?
• Are appropriate cybersecurity stakeholders involved in identifying cybersecurity risks? Is the identification rationale for recognition sound?
• What is the Risk Management Framework (RMF) implementation strategy and its associated activities?
• Has the Organization cybersecurity risk governing body provided the overall cybersecurity risk recommendation for the Program?
• If edge technology is used (e.g., sensors, mobile), are these devices listed? If not, why not? Which edge devices were not considered part of the security testing?
• Were the Organization Enterprise Architecture's security features (also known as controls) addressed in the security authorization process to assure the investment will be granted an authorization to operate?
• Have cybersecurity risks been appropriately identified, evaluated, and addressed, as required for obtaining an Authority To Operate (ATO)?
• Has any new hardware been added to the configuration after security testing, since the last ATO?
• In coordination with the component CISO, has the Program established a process for continual updates to the Risk Assessment Report (RAR), post-ATO (continuous monitoring)?
• Is the ATO on the critical path? What is the status in obtaining the necessary ATOs?
• Does the Program follow data management policies, access procedures, and standards to maximize information security?
• Is there an Information Security Continuous Monitoring (ISCM) waiver?
• Does the Program have an updated Security Requirements Traceability Matrix (SRTM)?
• How does the Program plan to secure its network communication and data?
• If using an external cloud or cloud hybrid, how does the Program plan to secure its data?
• Has the Program performed a Cyber Resilience Review (CRR)?
• Have appropriate classified, open source, and operational history sources been consulted when characterizing the threat to the mission?
• Are the threat actors and intent mapped to the appropriate parts of the operational-level technical architecture in the Operational Requirements Document (ORD)?
• Are the threat actors and intent mapped to the appropriate parts of the data architecture in the Operational Requirements Document (ORD)?
• Are operational impacts of threats well defined and decomposed into observable and quantified supporting measures?
• Has the Program performed an analysis of information security needs, and does it justify the FIPS 199 information security categorization? What is the current overall security assurance level (SAL) for each of the systems in the Program? What are the ratings for Confidentiality, Integrity, and Availability for each system?
• Does the system design provide the security reports needed to audit and monitor the Program in production?
• For 'standalone' (non-networked) systems, how are patches scheduled (reducing zero-day attacks)?
• With respect to security, what is the maturity of each system? What is the next step?
• What cybersecurity assessment tool(s) are being used with the Program?
• What cybersecurity standards were selected? How compliant is the Program with each standard selected? Who selected which standards need to be addressed?
• Does the Program report to the Organization Cybersecurity Program Health Report? If so, what is the Program's current rating? What is the supporting justification for the rating?
• For each of the Program's systems, what is the overall rating, and what are the Scorecard Metrics ratings for Software Asset Management (SWAM) defects, Vulnerability defects, Configuration Settings Management, and Malware? What is the justification for the ratings? Are there any changes from the last review cycle? If so, why?
• Cybersecurity Maturity – What is the Program's cooperation environment? [1] the Program does not have mechanisms in place to coordinate with partners, [2] the Program understands its role within the ecosystem but has not formalized its coordination mechanisms, [3] the Program understands its dependencies with partners and has risk-based mechanisms in place to respond to events, or [4] the Program manages risks and actively shares information with partners to improve cybersecurity before a cybersecurity event occurs.
• Cybersecurity Maturity – Regarding the Program's risk management, what best describes the environment? [1] managing cybersecurity risk has not been established, with cybersecurity risk management implemented on a case-by-case basis, [2] risk management is not implemented across all aspects of the Program, but approved processes are defined and performed on an informal basis, [3] approved procedures are in place, personnel are trained, and processes are implemented as intended and reviewed, or [4] cybersecurity risk management is part of the culture of the Program, where information is shared and processes evolve with a continuous awareness of activities on Program systems and networks, and incident response capabilities are part of the Program's solution.
• Cybersecurity Maturity – What is the maturity of the Program's risk management?
[1] Not formalized, [2] Approved but may not be established across the Program, [3] Formalized and updated, or [4] Incorporates risk management across all aspects of the Program, linked to the Component's overall risk mitigation process, using advanced cybersecurity technologies and practices through a process of continuous improvement.
• Cybersecurity Maturity – What is the maturity of SecDevOps for the Program? Is continuous verification and detection (stopping untrusted or compromised users, devices, and workloads from accessing applications and the network) embedded across all systems?
• Cybersecurity Maturity – What cybersecurity Defense in Depth layer(s) does the Program enable?
• Cybersecurity Maturity – What capabilities (per the cyber maturity model) does the Program enable?
• Are User and Entity Behavior Analytics (UEBA) and machine learning (ML) used to help create a baseline for trusted workload access?
• Where is the Program in moving toward a zero trust architecture?
• ... Are all stakeholders represented in the architecture? Data? Network? Instrumentation/sensor devices? Mobile devices? Applications and affected workloads?
• ... Is how the data is used and accessed (roles) well documented?
• ... Are Laws, Regulations, and Policies (LRPs) mapped to data flows?
• ... Does the Program make use of micro-segmentation to create secure zones in data centers and cloud deployments, allowing workloads to be isolated and protected individually? Are there zones within the Program using micro-segmentation to address legacy applications?
• ... Does the Program implement software-defined access (SD-Access) to enforce application and network access based on dynamic context, mapping context and devices to scalable groups and thus simplifying end-to-end security policy enforcement?
• ... Does the Program/Component have a cybersecurity ontology to support information integration and cyber situational awareness in cybersecurity systems?
If so, is the ontology used by different cybersecurity systems for information sharing and exchange? Is the ontology incorporated between threat modeling and defense tactics? What attack taxonomies have been adopted?
• Does the Program need to incorporate managed endpoint detection?
• ... What protection components are included in the Program's endpoint security solution?
• ... Is there a single management console for all the endpoint security components?
• ... What percentage of devices are covered today? What percentage is planned at end-state?
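The micro-segmentation question above can be made concrete with a default-deny zone model: a flow between workloads is permitted only if an explicit zone-to-zone rule exists. A minimal sketch, with entirely hypothetical zones, workloads, and rules:

```python
# Illustrative sketch only (hypothetical zones and rules): evaluating whether
# traffic between workloads is allowed under a micro-segmentation policy.
# Default-deny: a flow is permitted only if an explicit zone-to-zone rule exists.
ZONE_OF = {                      # workload -> secure zone (hypothetical)
    "web-frontend": "dmz",
    "order-service": "app",
    "orders-db": "data",
    "legacy-billing": "legacy",
}
ALLOWED = {                      # (source zone, destination zone, port)
    ("dmz", "app", 443),
    ("app", "data", 5432),
}

def flow_allowed(src, dst, port):
    return (ZONE_OF[src], ZONE_OF[dst], port) in ALLOWED

print(flow_allowed("web-frontend", "order-service", 443))  # True
print(flow_allowed("web-frontend", "orders-db", 5432))     # False: no direct DMZ-to-data path
```

In practice this policy lives in the fabric (hypervisor, SDN controller, or host firewall), but a review can still ask for the zone map and the explicit rule set in exactly this form.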
• Is the Program following the Organization 'Cybersecurity Systems Engineering - Implementation Guide'? If not, why not?
• For non-IT (hybrid IT) with embedded IT, how is the Program addressing cyber?
• Has sustainability been adequately planned, including resource needs (staff and cost)?
• How, and how often, are the Program and its system(s) and service(s) tested for security vulnerabilities? When was the last test, and what was the result for each?
• How does the Program identify cybersecurity/resilience testing processes?
• What kinds of cybersecurity threats will be, or have been, tested?
• What methods will be, or were, used to test cybersecurity threats? What are the resulting risk mitigations?
• How will, or are, testing results and recommendations incorporated into the Program?
• When was the date of the last Risk Assessment Report? Are the results published quarterly?
• ... When was the date of the last version of the Program's systems' security plan(s)? Are the plans published quarterly?
• ... When was the date of the last Threat Assessment and Vulnerability Assessment? Is the assessment published semi-annually?
• Did the Program pass the Office of the Chief Information Security Officer (OCISO)'s automated tests? If not, what is the status?
• Does the Program design employ vendor-neutral open architecture standards?
• Does the Program utilize standards, and is it architected for interoperability? Which ones are cited/used?
• Does the Program follow Organization Enterprise Architecture data management policies, access procedures, and standards to ensure data remains available in shared environments?
• Does the Program have a Logical Data Model (LDM)? Have the data supporting the business processes been specified to a conceptual level? Is there a Logical Design Document (LDD)? In Mobius?
• Does the Program have a Data Management Plan (DMP)? Have data management requirements been defined? In Mobius?
• Does the Program have a Data Quality Plan (DQP)? In Mobius?
• Does the Program have an updated Data Insertion Package (DIP)?
• Has the Program created and filed all appropriate acquisition documentation required for its current lifecycle stage?
• Does the Program have details about how each system will function at Full Operational Capability (FOC)? A System Design Document (SDD)?
• Does the Program identify an applicable, updated Mission Need Statement (MNS)? What specific Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel, Facilities, Regulations, Grants, and Standards (DOTMLPF/R/G/S) recommendations does/will this Program address?
• Does the Program identify an updated Capability Development Plan (CDP)? In defining the maturity of specific technologies being considered for proposed materiel solutions, to what extent does the CDP support adequate evaluation of mature or feasible technologies?
• Does the Program have an updated Operational Requirements Document (ORD)? Are the operational requirements (as described in the ORD) valid, validated and complete (by the Sponsor), testable, and measurable? Do they support the original mission need as stated in the Mission Need Statement (MNS)? Has requirements interdependency been considered and/or analyzed? Has an ORD been developed that captures Initial Operational Capability (IOC)? Minimum Viable Product (MVP)? Full Operational Capability (FOC)?
• Has the Concept of Operations (CONOPS) adequately defined the "to-be" business process?
• Does the Program have an updated Concept of Operations (CONOPS)?
• Does the Program have an updated Requirements Traceability Matrix (RTM)?
• Does the Program have an updated Functional Requirements Document (FRD)?
• Does the Program have an updated System Requirements Document (SRD)?
• If Commercial Off-The-Shelf (COTS), what are the dependencies? Are the dependencies of COTS products identified in the architecture?
• Which interfaces are using a proprietary industry standard? Is the version up to date?
If not, why not? If a standard exists but is not being used, why not?
• Which interfaces are using an open standard? Is the version up to date? If not, why not? If a standard exists but is not being used, why not?
• What other known Programs (Organization and non-Organization) use each interface? Has there been an exchange of information, code, or lessons learned?
• Are the Program's interfaces defined well enough to drive solution development? What are the data exchange dependencies for the Program?
• Is there a type of interface for every user/machine interacting with the Program?
• What are the currently identified Program interfaces, and which are proposed/planned?
• Internal interfaces: If asked the same question, would the trading partner list this interface as well?
• External interfaces: Which are public facing?
• External interfaces: Are all security patches up to date for interface hardware and/or software?
• Is the interface shown on an architecture artifact? Which artifact(s)?
• What are all of the interfaces, including all relationships internal and external to the Organization?
• Are all the availability calculations (algorithms) agreed upon and documented in the Service Level Agreements (SLAs)?
• Are there interfaces without Service Level Agreements (SLAs)? If so, what are they? Are SLAs being met? If not, why not? Can Organization HQ assist?
• Are the Program's identified technologies consistent with the target Organization Technical Reference Model (TRM)? What items are approved in the TRM? In the Component's TRM? For software not in a TRM, what is the schedule for getting the software approved? Is any software in use that is on the 'Not approved' list? If so, why?
• Is the Program leveraging Technical Reference Model (TRM)-approved software where possible?
• Are modifications to the EA necessary to accommodate the Program, and have they been through the Technology Insertion (TI) Decision Request Process? Does the Program have an updated Technology Insertion Package?
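The SLA availability question above hinges on the agreed formula. One common convention (an assumption here, since the document does not define one) is availability = uptime / measured window, with planned maintenance sometimes excluded from the window:

```python
# Illustrative sketch (hypothetical numbers): one common SLA availability
# calculation. Whether planned maintenance counts as downtime is exactly the
# kind of detail the SLA should document explicitly.
def availability(window_hours, downtime_hours, excluded_maintenance_hours=0.0):
    measured = window_hours - excluded_maintenance_hours
    return (measured - downtime_hours) / measured

# A 30-day month with 2 hours of unplanned downtime:
pct = availability(window_hours=720, downtime_hours=2) * 100
print(f"{pct:.3f}%")  # 99.722%
```

Because two parties can compute different numbers from the same outage depending on these choices, the checklist's demand that the algorithm itself be documented in the SLA is worth pressing.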
• Have all created services been added to the Organization Service Catalog and submitted to the EA PMO for registry in the service component reference model?
• Is the hardware shown on an architecture artifact? Which artifact(s)?
• Is the software shown on an architecture artifact? Which artifact(s)?
• Does the proposed Program system(s) design reflect a modular architecture?
• Does the Program map system functions to the capabilities they will support at Full Operational Capability (FOC)?
• Does the Program have an updated "as-is" architecture diagram and defined business processes? Are they reflected in the Operational Requirements Document (ORD)?
• Does the Program have an updated "to-be" (end-state) architecture diagram and defined business processes? Are they reflected in the Operational Requirements Document (ORD)? Are dependencies shown/described?
• Does the Program's updated "to-be" (end-state) architecture diagram depict the evolution/phases of the Program, as the Program is to be managed?
• Have business processes been specified to a logical level for the Program? Have the Program's processes been documented (e.g., use cases, flow diagrams)? Are all system and functional requirements accounted for in the design?
• Do the priorities listed for each of the Program's requirements accurately represent the business' capability needs?
• Have the functional requirements been logically decomposed to an acceptable level of detail (at least to the major subsystems or software components) in the Program's system requirements? Have all functions in the logical design been allocated to the system design?
• Have requirements been updated based on user review of the proof of concept or technology demonstrations? Is the requirements baseline stable and configuration controlled?
• Will the Program provide all of the business capability as planned? Have any new business capabilities been identified?
• Does the Program include all items assigned to it for each release?
• Does the Program follow the EA Principle 'Minimal Technical Complexity', e.g., sharing or acquiring services and infrastructure? Is the Program's use of shared services, data consolidation, or cloud computing in the "to-be" investment aligned with the Organization Enterprise Architecture?
• What percentage of the Program's capabilities are manual processes? Is there an opportunity to automate?
• How is the Program leveraging new technologies? Have disruptive technologies been considered? Have existing R&D activities been considered when identifying where materiel solutions may address the capability gaps or business need? Describe how the investment is leveraging new technologies.
• Is machine learning being applied to the Program? Have processes been reviewed for opportunities?
• How many instances of the Program's systems exist, i.e., changes impact how many installations? Where are they located?
• Does the Program, where possible, leverage existing systems, services, infrastructure, and/or platforms?
• What 'Hosting and Platform Services' are planned or implemented by the Program?
• What 'Digital Identity Management and Access Control Services' are planned or implemented by the Program?
• Do the Program's system(s) have, or plan to have, mobile applications? If yes, describe the scope, audience, and objectives.
• To what extent has the Program's capability been assessed for technical feasibility and maturity (i.e., is the technical solution already possible, or possible in the near term)?
• What technical risks associated with the Program have been identified relevant to their integration?
• Has an illustration depicting the conceptual network been developed and documented for the Program? Has the technical infrastructure been specified to a conceptual level? Have the locations and types of infrastructure components been identified and documented?
• Does the Program include all technology assigned to it for each release?
• What is the Program's development stack? Why was this stack chosen?
• What items will be, or are, made available to the public as open source? If the codebase has not been released under an open source license, explain why. What components are made available to the public as open source?
• What datasets are made available to the public?
• Does the Program have an updated Service Reuse Plan (SRP)?
• To what extent have the Critical Technology Elements (CTEs) been identified for each of the Program's systems/solutions considered?
• To what extent have the Critical Technology Elements (CTEs) been assessed, both individually and together, to determine their maturity in the specific application for the proposed materiel solution?
• To what extent have the Critical Technology Element (CTE) assessments been included as input to the Analysis of Alternatives (AoA)/Alternatives Analysis (AA)? To what extent have evaluation criteria been developed, based on applicable technical and mission/business needs, to discriminate among technical alternatives?
• Does the system connect to an Organization Network? If so, which one?
• If not, and it is on an Organization network, which one (as well)?
• Does the network meet the technology's requirements?
• Are there plans to move to Organization OneNet in the future?
• What supporting technologies is the Program planning to leverage? Are there any risks from these selections?
• What is all the associated hardware being used?
• Is the hardware shown on an architecture view? Which view(s)?
• If edge technology (e.g., sensors, mobile) is used, is it listed? If not, why not? Which edge devices were not considered part of the security testing?
• Has any new hardware been added to the configuration after security testing, since the last ATO?
• Has any new software been added to the configuration after security testing, since the last ATO?
• What is all the associated software being used? Which is embedded?
• Where is the current IT hardware/software located?
• Are any licensing/migration dates considered a risk? Which products, and what is the go-forward strategy? Are there problems with a vendor? Will a product cease to be supported in the near future?
• Does the Program map its data artifacts based on Organization Enterprise Architecture alignments?
• Has the Program implemented data stewardship to proactively manage authoritative sources, data interpretation, and context, and to document them in the Organization Enterprise Architecture?
• Has the Program's data architecture been reviewed to assure the design will meet capacity, functional, and performance requirements?
• Does the Program follow the EA Principle 'Data as an Asset', emphasizing data quality and sharing? To what extent have concepts for data storage, data tagging, and data sharing been defined?
• Are any of the Program's system(s) considered a "big data" implementation?
• To what extent will the data need to be modified/reformatted to support the new data structure, and can the modification be completed in an automated way? To what extent will legacy data need to migrate into the new system so that the old system can be decommissioned? To what extent will legacy data need to be cleaned up before migration? Does the data conversion plan (documented in the Data Management Plan) account for possible cleansing and data quality issues as well as performance impacts to the existing data architecture?
• Do the Program's data retention requirements meet the business need?
• Has the Program itemized data architecture alternatives? Have the alternatives been categorized, prioritized, and cost-justified?
• Is the data required by the Program already available, or will it be made available?
• Which database(s) is the Program using? Why were they chosen?
• What datasets will be, or are, made available to the public? At what publishing location?
• Have sharing opportunities been identified to reduce redundancy and increase data integrity?
• For the Program's modernized systems, who are the user communities (and what is the size of each community) that may significantly benefit from this modernization?
• Who are the Program's stakeholders? What organizations use the Program? Who are your users? What organizations does the Program need to be successful? Who is actively involved in the Program, affected by the Program's outcome, or able to influence the Program's outcome? Is the identification rationale/approach sound?
• Which users will have the most difficulty with the service? Why? Has the difficulty been addressed?
• How often is user feedback solicited? Are the Program's findings about user goals, needs, behaviors, and preferences documented and shared with Program leadership?
• What are the users' pain points in the current way people accomplish the task? If a user needs help while using the service, how do they go about getting it?
• How does the Program measure customer satisfaction?
• How does the Program/service's design visually relate to other government services?
• Is a prioritized list of tasks the user is trying to accomplish, also known as "user stories", recorded and managed? What metrics will best indicate how well the service is working for its users?
• If this is a consolidation, which Program interface will be browned out? Have stakeholders been notified, and has dependency mitigation been agreed and planned? When, and how, is the legacy Program being browned out?
• Are approved operational requirements documented where user requirements are documented?
• Was the Organization Enterprise Architecture referenced when identifying impacted stakeholders?
• Has the Program performed, or is it planning, an agency/interagency-wide survey to determine if the solution may be used to improve mission capabilities common to Organization Components or other federal agencies?
• Have users and operators been fully engaged in developing and validating the needs and requirements as reflected in the Mission Need Statement (MNS)?
The Operational Requirements Document (ORD)? The Concept of Operations (CONOPS)?
• Do the reporting requirements ensure that the business users get the information they need?
• What is the Life Cycle Cost Estimate (LCCE) Rough Order of Magnitude (ROM) cost for the Program?
• Is there a clear breakout of funding to architecture for the Program?
• What is the Rough Order of Magnitude (ROM) cost for mission outcomes?
• What is the current (spend) cost per goal?
• What were the actual obligations in prior year(s) (PY) for each Program?
• What Programs are budgeted for > $1M over their FYHSP? Which Programs will spend > $1M over their lifetime?
• What are the cost drivers of Component mission operations of the Organization?
• What is the Organization spending on IT investments in the upcoming year (in each segment, Component, etc.)?
• For each Program, what 'Year of Funds' are being used?
• For each Program, what resources will be used, by fiscal year and appropriation?
• How do the requested dollars compare to the Budget Request, or to what was appropriated during the year of execution for out-of-cycle requests?
• What is the trend in Organization spending on IT investments (in each segment, Component, etc.)?
• What significant funding changes occurred from the prior year? Why?
• What is the Organization budget landscape: requested (pending), approved (certified), denied (unapproved)?
• What is the Organization out-of-cycle budget request landscape: pending, approved, unapproved?
• What would the impact be on the Component ('plan B') if the budget were reduced? How would the reduction affect Programs and portfolios?
• What is the mitigation/resolution for each Program risk?
• Is the threat configuration traceable from management to device? Is each threat the Component manages clearly documented through threat modeling, risk management, vendor specifications, integration testing, and deployment (schedule)? Please identify the responsible party for each step of the linkage; e.g.,
tracking sensors' ability to detect requirements may reside 'at a level' higher than the Program if the Program is viewed as a procurement.
• Are any licensing/migration dates considered a risk? Which products, and what is the go-forward strategy? Are there problems with a vendor? Will a product cease to be supported in the near future?
• What are the Program's resulting risk mitigations?
• Does the Program maintain a Risk Register (RR)? Are mitigations defined and reviewed periodically? Is the Program appropriately managing and mitigating? Are the risk ratings improving over time? If not, why not?
• Does the Program have an updated Risk Management Plan (RMP)?
• Have all changes to policies, regulations, or business practices that require long lead times and impact the Program been identified and included in the plans, and has the likelihood of such changes been included in the risk analysis?
• What assumptions is the Program making about identified risks that affect enterprise risks (enterprise risks that are unknown to the Program)?
• How is the Chief Architect involved in decision making with the Program?
• What level of oversight is in place? E.g., a configuration management board?
• Are there any potential cost overruns or schedule delays? If so, is the (possible) breach due to the technology solution chosen? If so, why? A lack of expertise in the Program Management Office (PMO)? The maturity (or lack thereof) of the technology selected? Poor planning that did not consider the technology?
• Are there any potential cost overruns or schedule delays? If so, is the (possible) breach due to scope creep or an undefined scope?
• Are there any potential cost overruns or schedule delays? If so, could the (possible) breach have been avoided by better planning?
• What challenge does the Machine Learning (ML) solution solve?
• Is the intent of Machine Learning (ML) to increase performance (detection), reduce costs (predictive maintenance, reduced inventory), decrease response time, or achieve other outcome(s)?
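The Risk Register questions above ask whether ratings improve over time, which is only answerable if ratings are recorded per review cycle. A minimal sketch of such an entry, using a common (assumed, not Organization-mandated) likelihood-times-impact scoring convention and hypothetical data:

```python
# Minimal sketch of a risk register entry with a rating history, so trend
# questions ("are the risk ratings improving over time?") are answerable
# from data. The score (likelihood x impact, each 1-5) is a common
# convention, assumed here rather than taken from Organization policy.
from dataclasses import dataclass, field

@dataclass
class Risk:
    risk_id: str
    description: str
    mitigation: str
    history: list = field(default_factory=list)  # (review_cycle, likelihood, impact)

    def record(self, cycle, likelihood, impact):
        self.history.append((cycle, likelihood, impact))

    def is_improving(self):
        scores = [l * i for _, l, i in self.history]
        return len(scores) >= 2 and scores[-1] < scores[0]

r = Risk("R-001", "Vendor ends support for database product",
         "Migrate to TRM-approved alternative")
r.record("FY24-Q2", 4, 4)
r.record("FY24-Q4", 3, 4)
r.record("FY25-Q2", 2, 3)
print(r.is_improving())  # True (score fell from 16 to 6)
```

Whatever tool the Program actually uses, the review point is the same: each risk needs a dated rating history, not just a current snapshot.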
• What is the clear and realistic way of measuring the success of the Machine Learning (ML) initiative?
• Does the Machine Learning (ML) reside in a procured item/application/solution, or is it developed in-house?
• If the Machine Learning (ML) is procured, e.g., embedded in a sensor product, what items are included in the contract to future-proof the solution? Do contract items let the Organization use the implementation to gain better capability in the future? Are there contract items to protect the Organization's data reuse rights?
• What analytics is the Machine Learning (ML) addressing? Descriptive (what happened?), Diagnostic (why did it happen?), Predictive/Preventive (what could happen?), Prescriptive (what should happen?), or Cognitive (what steps should be taken?)
• What is the current inference/prediction/true positive rate (TPR)?
• How accurate does the Machine Learning (ML) have to be to be trusted? What is the inference/prediction rate performance metric for the Program?
• What is the false-positive rate? How does the Machine Learning (ML) reduce false positives without increasing false negatives? What is the false-positive rate performance metric for the Program? Is there a Receiver Operating Characteristic (ROC) curve, plotting the true positive rate (TPR) against the false positive rate (FPR)?
• Has the data been identified for the Machine Learning (ML) initiative(s) (current application or for future use)? Is the data labeled, or does it require manual labeling?
• Have the key features to be used in the Machine Learning (ML) model been identified? If needed, what are the algorithms used to combine ML features? What is the approximate number of features used?
• How are the dataset(s) used for Machine Learning (ML) training, testing, and validation managed? Are logs kept on which data is used for different executions/training runs so that the information used is traceable? How is access to the information guaranteed?
• Are the dataset(s) for Machine Learning (ML) published (repo, marketplace) for reuse? If so, where?
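The TPR/FPR questions above have precise definitions worth pinning down in a review: TPR = TP/(TP+FN), FPR = FP/(FP+TN), evaluated at a decision threshold. A minimal sketch with made-up labels and scores; sweeping the threshold and plotting the resulting (FPR, TPR) pairs is exactly what produces the ROC curve the checklist asks about:

```python
# Illustrative sketch (hypothetical labels/scores): TPR and FPR at one
# decision threshold. The ROC curve is the set of (FPR, TPR) points
# obtained by sweeping this threshold over all score values.
def tpr_fpr(labels, scores, threshold):
    tp = fp = fn = tn = 0
    for y, s in zip(labels, scores):
        predicted_positive = s >= threshold
        if y == 1 and predicted_positive:
            tp += 1
        elif y == 1:
            fn += 1
        elif predicted_positive:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), fp / (fp + tn)  # (TPR, FPR)

labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1, 0.3, 0.6]
print(tpr_fpr(labels, scores, threshold=0.5))  # (0.75, 0.25)
```

This also makes the trade-off in the checklist concrete: lowering the threshold raises TPR but raises FPR too, so "reduce false positives without increasing false negatives" generally requires a better model, not just a different threshold.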
• What Machine Learning (ML) model type(s) are used? E.g., regression, K-Nearest Neighbors (KNN), graph neural networks, reinforcement learning, rule-based.
• What are the Machine Learning (ML) architecture specifics, e.g., ensemble methods used, graph networks, or distributed learning?
• Are the Machine Learning (ML) models published (repo, marketplace) for reuse? If so, where?
• Is the Machine Learning (ML) model reused from a repository (repo, marketplace)? If so, which one? How are you notified of updates? How often is the repository checked for updates?
• Is transfer learning used? If so, which Machine Learning (ML) models are used? What mission-specific dataset(s) are used to tune the ML model?
• Are Machine Learning (ML) service(s) used for inference/prediction?
• What Machine Learning (ML) languages, libraries, and scripting are implemented?
• What tools are used for AIOps? Please identify both on-premises and online services.
• Are the Machine Learning (ML) languages, libraries, scripting, and AIOps applications registered in the Organization Technical Reference Model (TRM)?
• What optimizers are used? Is augmented machine learning (AugML) or automated machine learning (AutoML) used?
• When the Machine Learning (ML) model is updated, how is it determined that the performance has indeed changed for the better?
• Against what benchmark standard(s) is the Machine Learning (ML) model compared/scored? E.g., General Language Understanding Evaluation (GLUE).
• How often is the deployed Machine Learning (ML) process monitored or are its measures re-evaluated?
• How is bias accounted for in the Machine Learning (ML) process? How is it assured that the dataset(s) used represent the problem space? What is the process for removing features/data believed not to be relevant? What assurance is provided that the model (algorithm) is not biased?
• Is the model (implemented or to be implemented) explainable? How so?
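The traceability and update questions above (which dataset/model versions fed each training run; how a change in a reused repository artifact is detected) can be supported with simple content hashing. A minimal sketch with hypothetical file paths and log format:

```python
# Minimal sketch (hypothetical paths): fingerprinting training artifacts so
# a run log records exactly which dataset/model versions each execution used,
# and so a change in an upstream repository copy is detectable by comparing
# hashes rather than relying on file names or dates.
import datetime
import hashlib
import json

def fingerprint(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_training_run(run_id, artifact_paths, log_path="training_runs.jsonl"):
    record = {
        "run_id": run_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "artifacts": {p: fingerprint(p) for p in artifact_paths},
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Purpose-built experiment trackers do the same thing with more ceremony; the review question is only whether some such record exists and is kept.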
• Has role/job displacement due to automation and/or Machine Learning (ML) implementation been addressed?
• Are User and Entity Behavior Analytics (UEBA) and machine learning (ML) used to help create a baseline for trusted workload access?
• Is machine learning (ML) being used for abnormality detection? Security?
• Is machine learning (ML) used to protect the Program against targeted attacks, often referred to as advanced targeted attacks (ATAs) or advanced persistent threats (APTs)?
• If the Program is implementing machine learning (ML), is the Program implementing an AIOps pipeline/toolchain?
• If the Program is implementing machine learning (ML), does the Program depict the AIOps pipeline/toolchain applications in its tech stack?
• Has the Program identified where AI is used in the SecDevOps architecture? E.g., software testing.
• If the Program is implementing machine learning (ML), is data management reflected in the AIOps pipeline/toolchain processes/architecture?
• If the Program is implementing machine learning (ML), are the end-to-end visibility and bottleneck risks for the AIOps pipeline/toolchain reflected in the risk register, with a mitigation strategy for each risk?
• What is the context of this review? Is the Program requesting an ARB decision, or is this a status check-in?
• What is the Asset Type? IT, hybrid IT (mixed IT), embedded IT, non-IT, standalone, services, primarily procurement?
• If non-IT or hybrid IT, does the Program use, analyze, store, or transmit data of any kind? If so, how/why? And if so, why was it designated a non-IT or hybrid-IT Program?
• If non-IT or hybrid IT, is there evidence of "shadow" IT? If so, why was this need not met by "normal" IT resources?
• What 'Reviewed Type' is being discussed? New, modernization, migration, or retirement (disposal)?
• What are the anticipated cost savings and cost avoidance for the agency, mission, and personnel that accrue from the collective efforts to modernize or retire this system?
• What type of deployment rollout is planned? What is the scope of Initial Operational Capability (IOC)? Minimum Viable Product (MVP)? Full Operational Capability (FOC)?
• How long did it take to ship the Minimum Viable Product (MVP)? If it hasn't shipped yet, when will it?
• What 'sister' Program(s) support the Program? E.g., this Program is the sensor, wall, or ship, and _____ is the sister Program with the technology components. Is there embedded IT? Is the related/embedded IT managed by a separate program?
• What is the current state? How is the capability achieved today?
• What is the impact…
• … when the Program is successful? E.g., eliminate the mainframe (provide a timeframe).
• … if the Program is not approved to move forward, or the Program is unsuccessful?
• What are the Enterprise issues? How can CTO resources support (help improve) this Program?
• ... What are the enterprise topics/resolutions from previous discussions with the Program?
• ... How are Organization initiatives impacted?
• ... Issues: required policy changes for Organization transformation, leveraging shared mission and business services, common technologies, lessons learned with disruptive technologies, implementing shared mission processes, procedures which apply cross-Component, impact of Laws, Regulations, and Policies (LRPs) on the Organization, data standardization, resource/service termination, architecture, enterprise roadmaps, and/or tools.
• When the Program returns for the next review, what do we expect to see?
• Has the Program been reported on negatively in the press since the last review? If so, why?
• Who is the product owner? What organizational changes have been made to ensure the product owner has sufficient authority over, and support for, the project? What does it take for the product owner to add or remove a feature from the service?
• Is the Program Manager (PM) appropriately certified? If not, what are the plans?
• Does the Program Management Office (PMO) have applicable experience with...
• ... cloud solutions?
• … building popular, high-traffic digital services?
• … developing mobile applications?
• … developing web applications?
• … using automated testing frameworks?
• ... modern development and operations (SecDevOps) techniques like continuous integration and continuous deployment?
• Does the Program Management Office (PMO) have (enough) qualified staff on board to support the chosen technology's implementation?
• Has the Program performed an Operational Analysis (OA) in the past 12 months? If so, what was its outcome? What is the risk to Key Performance Parameter (KPP)/Critical Operational Issue (COI) achievement at Acquisition Decision Event (ADE)-3?
• What other factors are relevant to this Program's health?
• Does the technical approach align with funding (appropriations/fees) availability and assumptions? Are shortfalls and assumptions clearly identified?
• Are infrastructure upgrades/improvements likely required prior to operations? Has the Program/Component adequately planned for this in the Integrated Master Schedule (IMS)?
• Is this a joint effort? If joint, how do you recommend managing it?
• Does the Program schedule support adequate testing?
• If a Letter of Agreement (LOA) is required to support the upcoming acquisition decision event, is it complete? If not complete, what is the timeline for completion?