Introduction

The urgency of quantum readiness has escalated dramatically. NIST's multi-year effort to standardize post-quantum cryptography culminated in August 2024 with the publication of the first PQC standards (FIPS 203, 204, and 205). This gives enterprises and vendors a stable target for the first time – the era of simply researching PQC is over, and the era of deploying has begun.

At the same time, national security agencies and regulators around the world have issued concrete guidance and timelines pressuring organizations to act. For example, the U.S. Cybersecurity and Infrastructure Security Agency (CISA), NSA, and NIST jointly urge all organizations (not just government) to start creating quantum-readiness roadmaps, conducting cryptographic inventories, and engaging vendors now. Their warning is motivated by the “harvest now, decrypt later” threat: adversaries could be stealing encrypted data today with the intent to decrypt it in the future using a quantum computer. Any sensitive information that needs to remain confidential for years (financial records, intellectual property, personal data, state secrets) is at risk if we do nothing, because stored ciphertext can be retroactively broken once quantum decryption is feasible.

Unlike the gradual cryptographic migrations of the past, the quantum threat introduces a one-way deadline: once sufficiently powerful quantum computers exist, any data still protected by the old algorithms (RSA, ECC) can be exposed almost immediately. Experts debate the exact timeline for a cryptographically relevant quantum computer, but many consider the early 2030s plausible for breaking RSA-2048. Governments aren't betting on the later side of that range – hence the aggressive target dates (e.g. U.S. National Security Memorandum 10 requires federal systems to migrate to PQC by 2035, and the EU and UK are aiming for 2030–2035 for broad adoption).

The message is clear: the safe window to prepare is this decade, and the heaviest lift (inventory and planning) should happen now, before the threat is at our doorstep.

From a complexity standpoint, transitioning to quantum-safe crypto is a massive, multidisciplinary effort – but it’s manageable if broken into pieces and tackled by the right experts. Large enterprises already have teams that handle cryptography and security: PKI teams, network security, application security, data governance, compliance, etc. Crypto-agility is about coordinating these existing functions toward a common goal (algorithm agility and rapid rollout) and filling any knowledge gaps regarding the new PQC tools.

Importantly, one does not need deep mathematics or quantum computing knowledge to drive this change. Practitioners must understand how to inventory cryptographic usage, how to swap libraries and protocols, and how to upgrade infrastructure like HSMs and CAs – all of which are extensions of classical IT and security management. A network engineer doesn't need to know why lattices resist quantum attacks; they just need to know how to deploy (for example) a hybrid TLS key exchange and measure its performance. A PKI manager doesn't require quantum physics; they need to update certificate profiles and tooling for larger keys and signatures.

In short, the barrier to action is lower than many assume. Most enterprises can get started using their current staff – perhaps an IAM specialist here, a devops engineer there – by giving them a clear mandate and some targeted training on post-quantum crypto specifics. Upskilling can be done via standards documentation, community resources, vendor training, and hands-on labs, many of which are freely available (and we will highlight these for each skill area).

The remainder of this guide is organized as follows: I break down the core skill domains that make up a successful crypto-agility and quantum readiness program, from governance and inventory through engineering and compliance. Under each skill, I outline what capabilities are needed, why it matters, who in your organization could take it on (existing roles to upskill), and tips for training and upskilling (including any relevant certifications or resources). I also list key personal attributes or strengths that make someone effective in that area, and how you might evaluate their performance in that role.

Program Governance and Strategy

Skills and Knowledge

This is the foundational layer – establishing a crypto-agility program with executive backing and cross-functional coordination. Key competencies include:

Crypto-agility as a living program

Treat cryptography as a lifecycle-managed security control, not a one-off project. This means chartering a crypto steering committee or working group that has representatives from architecture, Infosec, AppDev, privacy, legal, and procurement.

This body needs the authority to set enterprise-wide crypto policies and approve exceptions. It will own the cryptographic standards, decide when algorithms or key lengths are deprecated, and drive the PQC migration.

Part of this skill is the ability to translate abstract “quantum risk” into concrete business impact at the service or application level – e.g. “If algorithm X is broken, these are the business services or compliance obligations at risk, and here’s how that maps to our risk register.”

Inventory-first mindset

Make an enterprise cryptographic inventory non-negotiable. You can’t manage or migrate what you don’t know you have.

This skill entails mandating the creation of a cryptography inventory (sometimes called a cryptographic bill of materials, or CBOM) that enumerates all crypto assets in the organization – algorithms, protocols, keys, certificates, and libraries in use.

It also requires putting governance around keeping that inventory updated (continuous discovery processes, change management hooks so new systems update the CBOM). U.S. federal guidance explicitly requires this continuous inventory, and industry bodies recommend the same.

Risk-based prioritization and roadmap

Develop criteria to rank systems by crypto risk and “impact horizon.” Not all applications need to migrate at once; a skilled program lead will categorize systems by sensitivity of data, the length of time the data must remain confidential, and the complexity of upgrading that system. For example, long-lived secrets (like healthcare records or intellectual property) and high-value services should be prioritized for PQC (especially against harvest-now-decrypt-later threats). Systems approaching end-of-life might be de-prioritized if they’ll be decommissioned before quantum breakage occurs.

Tying prioritization to business context is crucial – mapping crypto upgrades to continuity of critical business services, contractual or regulatory requirements (e.g. payment systems, national security, privacy laws), and even tech lifecycle (coordinate with planned tech refreshes).
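The prioritization criteria above can be reduced to a simple triage. One widely used framing is Mosca's inequality: if the time data must remain confidential plus the time needed to migrate exceeds the time until a cryptographically relevant quantum computer, the system is already at risk. A minimal sketch, with purely illustrative system names and planning figures:

```python
# Illustrative risk triage based on Mosca's inequality (x + y > z).
# All system names and figures are hypothetical planning inputs.

def mosca_at_risk(shelf_life_years: float,
                  migration_years: float,
                  years_to_crqc: float) -> bool:
    """True if data protected today could still need secrecy after a
    cryptographically relevant quantum computer (CRQC) arrives."""
    return shelf_life_years + migration_years > years_to_crqc

systems = [
    # (name, data shelf life in years, estimated migration effort in years)
    ("patient-records-db", 25, 4),
    ("internal-chat", 1, 2),
    ("payments-gateway", 7, 3),
]

YEARS_TO_CRQC = 8  # assumption: a planning horizon, tune to your risk appetite

priority = [name for name, shelf, effort in systems
            if mosca_at_risk(shelf, effort, YEARS_TO_CRQC)]
print(priority)  # long-lived data floats to the top of the migration queue
```

The point of the exercise is not precision but ordering: systems whose data outlives any plausible CRQC estimate get migrated first, regardless of how healthy they look today.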

Policy and funding integration

Embed crypto-agility into the organization’s governance documents and budget processes. This means updating security policies or enterprise architecture blueprints to include algorithm agility principles (e.g. a policy that new systems must use “crypto-agile” designs – configurable algorithms, support for approved PQC once available, etc.)
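To make the "crypto-agile design" requirement concrete, here is a minimal sketch of algorithm agility in application code: callers request a capability by policy name, so swapping algorithms becomes a configuration change rather than a rewrite. The registry names and mappings are illustrative, not a standard:

```python
# Sketch of an algorithm-agile design: code requests a capability by a
# policy name, so replacing an algorithm is a configuration change, not
# a code rewrite. Names and mappings here are illustrative only.
import hashlib

ALGORITHM_REGISTRY = {
    "hash.default": hashlib.sha256,
    "hash.legacy": hashlib.sha1,  # retained only to verify old data
}

def get_hash(policy_name: str = "hash.default"):
    try:
        return ALGORITHM_REGISTRY[policy_name]
    except KeyError:
        raise ValueError(f"algorithm {policy_name!r} not approved by policy")

digest = get_hash()(b"example").hexdigest()
print(digest[:8])
```

When the steering committee deprecates an algorithm, only the registry (or the config file backing it) changes, and the CBOM records which policy names map to which primitives.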

It also means institutionalizing budget line items for cryptographic modernization: a savvy CISO will align the PQC program with existing budget cycles (maybe rolling it into a digital transformation or tech risk budget) so that funding is earmarked for discovery tools, new HSMs, contractor support, etc.

Essentially, governance skill includes justifying and obtaining resources by articulating the risk in terms executives understand (e.g. “Without action, in 5 years we risk non-compliance and potential breach of long-term sensitive data”).

Why it Matters

Strong governance and strategy set the tone and pace for everything that follows. Without top-level sponsorship and defined responsibilities, PQC efforts can stall in analysis paralysis or turf wars.

Regulators now expect this programmatic approach: for instance, OMB M-23-02 makes a cryptographic inventory and migration plan mandatory for federal agencies, and we see many private-sector companies voluntarily mirroring those requirements internally. The UK’s National Cyber Security Centre explicitly advises organizations to “establish a multi-disciplinary quantum-safe working group” and set target timelines for each phase of migration. The European Commission’s 2025 roadmap similarly urges Member States and industries to synchronize around national PQC migration strategies.

In short, governance is no longer just best practice – it’s becoming an expected part of risk management. A well-run program will ensure your organization is not caught flat-footed by auditors or regulators asking “What’s your plan for quantum risk?” Moreover, governance drives efficiency: a central team can create reusable standards, tools, and architectures for PQC so that individual project teams don’t all reinvent the wheel. It also enforces consistency (e.g. everyone agrees which PQC algorithms are approved, what constitutes acceptable hybrid encryption, etc.), which is vital for interoperability and compliance.

Proof points

The momentum in policy is clear. The U.S. government’s joint quantum-readiness fact sheet (CISA/NSA/NIST) emphasizes creating a formal “quantum-readiness roadmap” now, as well as performing a detailed inventory as first steps. The NCSC’s phased migration timeline (2028 discovery complete; 2031 critical systems migrated; 2035 full transition) effectively serves as a governance template – it’s telling organizations to project manage this over a decade.

A savvy CISO with good governance skills will use such external directives as leverage to get internal buy-in (“We have to do this; let’s form the steering committee and get moving by following these published guidelines”). Organizations that have treated cryptography as a board-level concern (e.g. banks, some tech companies) generally move faster and avoid nasty surprises like unknowingly using a weak algorithm for years.

In summary, governance and strategy provide the scaffolding for a successful PQC migration, ensuring it’s proactive, funded, and aligned with business priorities rather than reactive and ad-hoc.

Candidate Roles & Upskilling

The person leading Program Governance is often a senior security manager or architect – for example, an enterprise security architect, a Deputy CISO, or the head of security risk management. In many organizations, an existing risk officer or compliance manager can be upskilled to take on the PQC Program Lead role, since they are already skilled in translating technical issues to business risk and herding cross-functional teams.

If you have a CISM (Certified Information Security Manager) or CISSP-ISSMP on staff, those certifications indicate management and program management skill in security – they’d be strong candidates to drive this.

Another potential source is someone from your Enterprise Architecture committee or Office of the CISO who has experience setting security technology standards.

Upskilling for this role involves getting familiar with the emerging standards and roadmaps: for example, reading NIST’s NCCoE Special Publication 1800-38 (draft volumes on quantum readiness) for guidance on how to structure a migration program.

It also involves keeping tabs on sector-specific guidelines – a banking CISO’s designate should follow updates from regulators like the FDIC, FFIEC, or ECB on crypto requirements; an energy sector lead might engage with DOE or ENISA guidance.

Certifications alone won’t cover PQC, but a foundation in risk governance (e.g. ISACA’s CRISC for risk and control, or ISC2’s CISSP for broad security knowledge) provides a baseline. From there, the upskilling is about learning the crypto context: attending workshops or webinars on PQC transition (many governments host them), participating in forums like the Cloud Security Alliance’s quantum-safe initiatives, or even joining standards groups as an observer to learn the lingo. The NIST NCCoE’s practice guide (SP 1800-38A/B/C) is essentially a free manual on how to structure discovery and migration – a great training resource for a program lead.

Key Attributes

Effective governance leads tend to have excellent communication and coordination skills – they can talk to engineers about bit sizes and to executives about risk in the same day. Other useful attributes include:

  • Strategic thinking: able to set a multi-year vision and juggle near-term wins with long-term requirements (e.g. planning for 2035 now).
  • Influence and leadership: since this role often has to persuade stakeholders without direct authority (especially in a decentralized company), the ability to build consensus and motivate others is crucial.
  • Organizational savvy: knowledge of how to navigate budgeting, how to align the PQC program with existing initiatives (cloud migration, digital transformation) so that it isn’t siloed, and how to use external pressure (regulations, customer requirements) to justify internal action.
  • Attention to policy detail: a good governance lead will be detail-oriented in policy drafting – making sure new crypto policies are precise (e.g. defining what “quantum-safe” means, setting specific deadlines for inventory updates, etc.). Thoroughness and clarity here prevent confusion down the line.
  • Risk framing: skill in quantifying or qualifying the crypto risk to the business, to prioritize effectively. This might involve scenarios (e.g. “If we don’t upgrade system X, what’s the worst-case if quantum breaks it?”) and comfort with uncertainty (since quantum timelines are probabilistic).

Performance Evaluation

How do you know if your Program Governance and Strategy function is doing a good job? Some measurable indicators include:

  • Cryptographic Inventory Completion: Has the team produced an initial enterprise cryptography inventory (CBOM) covering a majority of systems? And do they update it regularly (e.g. annually or via continuous processes)? If after a few months no inventory exists, governance is lacking.
  • Published Roadmap/Plan: A tangible output is a written migration roadmap or strategy document approved by stakeholders. If the program lead can get a plan in front of the board or risk committee outlining timelines (even tentative) for PQC transition, that’s a sign of progress.
  • Policy and Standards Updates: Check if internal policies, standards, or reference architectures have been updated to incorporate algorithm agility or PQC requirements. For instance, a new crypto standard that says “All new development must use TLS 1.3 and support hybrid key exchange” or an updated PKI Certificate Policy including PQC algorithms. Performance can be gauged by whether these updates happened and how widely they’ve been communicated.
  • Resource Allocation: A more qualitative metric – has the program secured budget and staff? If by the next fiscal cycle the program lead managed to get funding for tools (say a CBOM scanner or HSM upgrades) and perhaps training for staff, it indicates they effectively made the case.
  • Cross-functional Engagement: You might measure this by the activity of the steering committee or working group – e.g., does it meet regularly, produce minutes or action items, and are all the key departments attending? If the privacy officer and procurement manager are now actively collaborating with security on crypto, that’s a cultural change and success.
  • Progress on Priorities: Based on the risk-based prioritization, is the team hitting milestones? For example, if “Inventory critical systems by Q2” or “Pilot PQC in one environment by Q4” were goals, did those happen? Regular status reports or a dashboard to the CISO/board can be used to track these deliverables.

Cryptographic Discovery and CBOM (Cryptography Bill of Materials)

Skills and Knowledge

Once governance mandates an inventory, the actual work of discovering all cryptography in the enterprise begins. This skill area is about technical discovery techniques and the tools to create a Cryptography Bill of Materials (CBOM) – effectively a catalog of all cryptographic components in your systems. Key competencies:

Discovery across all layers and environments

The team must be adept at scanning and enumerating crypto usage in a variety of forms. This includes network/protocol scanning – e.g., using tools to scan for supported TLS versions and cipher suites on servers, enumerating SSH configurations, IPsec/VPN settings, etc. It also includes collecting all digital certificates (perhaps by pulling from certificate stores, network appliances, or using ACME/internal CA logs to aggregate issued certs).
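As a hedged illustration of the network-scanning side, the sketch below probes a TLS endpoint using only Python's standard library and flags deprecated protocol versions. The host is a placeholder for your own servers, and production scanning would add timeouts, retries, and per-cipher enumeration:

```python
# Minimal TLS endpoint probe using only the standard library.
# probe_tls() reports the negotiated protocol version and cipher suite;
# point it at your own hosts (the call is left to the operator).
import socket
import ssl

DEPRECATED_VERSIONS = {"SSLv3", "TLSv1", "TLSv1.1"}

def probe_tls(host: str, port: int = 443, timeout: float = 5.0):
    """Connect and return (negotiated protocol version, cipher name)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()[0]

def is_deprecated(version: str) -> bool:
    return version in DEPRECATED_VERSIONS

# e.g.: version, cipher = probe_tls("internal.example.com")
print(is_deprecated("TLSv1.1"), is_deprecated("TLSv1.3"))
```

Feeding the (host, version, cipher) tuples from such probes into the central inventory is what turns ad-hoc scanning into a CBOM data source.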

Application and code discovery is another aspect: using static application security testing (SAST) tools or code searches to find calls to crypto libraries (like usage of OpenSSL, BouncyCastle, etc. in codebases), and dynamic analysis (DAST) to observe what crypto protocols a running app uses. Don’t forget storage and data – identifying encrypted databases or files, which encryption algorithms they use, and where keys are stored.
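A crude version of that code-level discovery can be done with pattern matching alone. The sketch below regex-sweeps source text for telltale crypto API usage; real SAST tools resolve imports and data flow, so treat these hits only as candidates for human review (the patterns are illustrative):

```python
# Crude code-level crypto discovery: regex-sweep source text for
# telltale crypto API usage. Real SAST tools resolve imports and data
# flow; this sketch only surfaces candidates for review.
import re

CRYPTO_PATTERNS = {
    "openssl-api": re.compile(r"\bEVP_|\bRSA_|\bSSL_CTX_"),
    "java-security": re.compile(r"\bjava\.security\.|\bjavax\.crypto\."),
    "python-hashlib": re.compile(r"\bhashlib\.(md5|sha1|sha256)\b"),
    "bouncycastle": re.compile(r"\borg\.bouncycastle\."),
}

def scan_source(text: str, path: str = "<memory>"):
    """Return (path, line number, pattern label, line) for each hit."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for label, pattern in CRYPTO_PATTERNS.items():
            if pattern.search(line):
                hits.append((path, lineno, label, line.strip()))
    return hits

sample = "import hashlib\ndigest = hashlib.md5(data).hexdigest()\n"
for hit in scan_source(sample, "app/util.py"):
    print(hit)
```

Running this over a repository checkout gives a first-pass list of files to feed into the CBOM, with line numbers for the developers who must confirm actual usage.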

Cloud and DevOps environments add a layer: scanning cloud KMS policies, finding secrets in vaults, checking if any serverless functions use outdated crypto, etc.

And importantly, this skill extends to legacy and non-IT environments: mainframes (are they using COBOL routines with 1980s crypto?), Operational Technology (OT) or IoT devices (hardcoded credentials or proprietary crypto), and vendor-provided services.

A robust discovery capability means using multiple methods – network scanning, credentialed scans on hosts, code review, parsing config files, querying HSMs/KMIP for key info – to leave no stone unturned.

Building the Cryptography Bill of Materials (CBOM)

Discovery outputs must be organized into a machine-readable inventory, often referred to as a CBOM. Skills here include data modeling for cryptographic assets: algorithms (with key lengths and parameters); cryptographic libraries or modules in use (e.g. OpenSSL 1.1.1 vs OpenSSL 3.x, BoringSSL); certificates and their details (issuer, subject, expiration, algorithm); keys (with attributes such as type – RSA/ECC – and size, and whether they are hardcoded or stored in HSMs); and contextual usage (this key is used for TLS on these servers; that algorithm is used in this application for database encryption). The CycloneDX standard (a popular SBOM format) introduced support for CBOM in version 1.6, which provides a JSON/XML schema for exactly this information.

So a skill here is familiarity with SBOM/CBOM formats and tools that generate them. The CBOM should be queryable and version-controlled – meaning the team can answer “Where do we use RSA-2048?” or “Show differences in crypto usage between v1 and v2 of this product” by querying this inventory. This often involves feeding discovery data into a centralized repository or CMDB.
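To make the format concrete, here is a minimal CBOM entry for a single algorithm asset. The field names follow my reading of CycloneDX 1.6's cryptoProperties extension and should be verified against the official schema before anything depends on them:

```python
# Minimal CycloneDX-style CBOM entry for one algorithm asset.
# Field names follow my reading of the CycloneDX 1.6 cryptoProperties
# extension; verify against the official schema before relying on them.
import json

cbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {
            "type": "cryptographic-asset",
            "name": "RSA-2048",
            "cryptoProperties": {
                "assetType": "algorithm",
                "algorithmProperties": {
                    "primitive": "signature",
                    "parameterSetIdentifier": "2048",
                    # RSA offers no quantum security margin
                    "nistQuantumSecurityLevel": 0,
                },
            },
            # where discovery found it in use
            "evidence": {"occurrences": [{"location": "web-tier TLS certs"}]},
        }
    ],
}

print(json.dumps(cbom, indent=2)[:120])
```

The "occurrences" list is what makes the inventory queryable: "where do we use RSA-2048?" becomes a filter over these records rather than a scramble.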

SBOM integration and supply chain linkage

Modern enterprises rely on a lot of third-party software. A crucial advanced skill is integrating the CBOM with Software Bills of Materials (SBOMs) from your vendors or your own software builds. An SBOM lists all software components; a CBOM lists cryptography. By linking the two, you can identify, for example, that a particular deployed container image includes OpenSSL 1.0.2 with RSA, or that an IoT firmware uses hardcoded ECC P-256 keys. This integration helps pinpoint transitive crypto risk – e.g. if a library in your software uses an outdated algorithm, SBOM+CBOM linkage will flag it. It also helps in enforcement: you might set CI/CD pipeline checks that reject builds which include disallowed crypto (like MD5 or RSA-1024) by reading the CBOM.

The skill here is partly technical (using tools to merge SBOM and CBOM data, writing scripts or using CycloneDX APIs to correlate them) and partly analytical (understanding how a vulnerability in an algorithm or library affects your software components).
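One way such an enforcement gate might look, assuming a CycloneDX-style CBOM where crypto assets appear as components with a name field (adapt the parsing to your actual schema):

```python
# CI/CD gate sketch: fail the build if the generated CBOM names any
# algorithm on the deny list. Assumes a CycloneDX-style CBOM where
# crypto assets appear as components with a "name" field; adapt the
# parsing to your real schema and naming conventions.
DISALLOWED = {"MD5", "SHA-1", "RSA-1024", "DES", "3DES"}

def check_cbom(cbom: dict) -> list:
    """Return the disallowed algorithm names found in the CBOM."""
    violations = []
    for comp in cbom.get("components", []):
        if comp.get("type") == "cryptographic-asset":
            if comp.get("name") in DISALLOWED:
                violations.append(comp["name"])
    return violations

sample = {"components": [
    {"type": "cryptographic-asset", "name": "RSA-1024"},
    {"type": "cryptographic-asset", "name": "ML-KEM-768"},
]}

bad = check_cbom(sample)
if bad:
    print(f"build rejected, disallowed crypto: {bad}")
    # in a real pipeline: raise SystemExit(1)
```

Wired into the pipeline after CBOM generation, this turns the crypto policy from a document into an automatically enforced control.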

Automation and continuous discovery

Given the dynamic nature of IT, the discovery team should also build or leverage automated processes. Skills include deploying crypto discovery tools – some open-source or commercial tools can automatically scan code repos for crypto usage, or inspect binaries for cryptographic constants. Understanding how to instrument CI pipelines or production telemetry to catch new introductions of cryptography is also valuable. For example, you might train your security information and event management (SIEM) to alert on any use of deprecated TLS configurations in network logs. In summary, treat discovery as an ongoing capability, not a one-time audit.
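A minimal version of that telemetry check might look like the following; the log format and field name are invented for illustration, so map the regex to whatever your load balancers or SIEM actually emit:

```python
# Continuous-monitoring sketch: flag deprecated TLS versions seen in
# connection logs. The log format and "tls_version=" field are invented
# for illustration; adapt the regex to your real log schema.
import re

TLS_FIELD = re.compile(r"tls_version=(TLSv1(?:\.[0-3])?|SSLv3)")
DEPRECATED = {"SSLv3", "TLSv1", "TLSv1.1"}

def deprecated_tls_events(log_lines):
    """Yield log lines whose negotiated TLS version is deprecated."""
    for line in log_lines:
        match = TLS_FIELD.search(line)
        if match and match.group(1) in DEPRECATED:
            yield line

logs = [
    "src=10.0.0.5 tls_version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256",
    "src=10.0.0.9 tls_version=TLSv1 cipher=AES256-SHA",
]
alerts = list(deprecated_tls_events(logs))
print(len(alerts), "deprecated-TLS connection(s)")
```

The same pattern generalizes: any time discovery finds a deprecated configuration, a standing log rule keeps it from silently reappearing.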

Why it Matters

Without an accurate cryptographic inventory, all other efforts are essentially flying blind. You can’t prioritize systems for PQC migration if you don’t know what algorithms they use or where your sensitive cryptographic hotspots are. A thorough discovery and CBOM practice gives you visibility – it’s the foundation of being agile. Concretely, imagine trying to rotate out RSA to PQC without knowing all the places RSA is embedded (it could be in code, in vendor appliances, in certificates used by partners connecting to you…). That scenario leads to expensive surprises and potential outages.

Regulators recognize this necessity: CISA has made ongoing cryptographic inventory a recommended best practice, and the very first step in OMB’s roadmap is inventory. The UK NCSC guidance heavily emphasizes discovery as phase one of any migration. Furthermore, a CBOM lets you enforce policies like “no deprecated algorithms.” It’s also invaluable for incident response – if a weakness is found in algorithm X, you can instantly list all systems using X to assess exposure.

Additionally, in supply chain security, customers and regulators may start asking for evidence of crypto hygiene in products – a CBOM is how you answer those questions (analogous to how SBOMs are now being asked for to address software supply chain risk). CycloneDX’s CBOM spec exists precisely because industry realized this gap. By having the skill to produce and utilize a CBOM, your organization can demonstrate crypto-agility readiness externally and manage it internally.

A quick example: when TLS 1.0 and 1.1 were deprecated, organizations with a good inventory could easily find which servers still allowed those protocols and schedule upgrades. Those without were scrambling with scanners at the last minute. Now multiply that challenge by every algorithm in use, add the new PQC algorithms that must be rolled out, and it becomes obvious that manual tracking (the spreadsheet approach) will fail. Automated, comprehensive discovery reduces the risk of missing something important, like an admin interface using hardcoded RSA keys that nobody knew about until an incident.

Where to Upskill & Tools

The people handling crypto discovery are often from existing AppSec, vulnerability management, or network security teams – basically those who already use scanners and manage inventories. A natural candidate is a security analyst/engineer who runs vulnerability scans or configuration audits, since CBOM tools are analogous.

Upskilling here can start with learning the CycloneDX CBOM format and how to generate it. CycloneDX provides an official specification and even tooling; for instance, CycloneDX’s v1.6 release notes and guide explain how to represent cryptographic elements. There are also community tools and scripts to extract crypto info from binaries (e.g., oqs-detect for PQC usage, or simple scripts to scan code for use of java.security or OpenSSL APIs). The U.S. NIST NCCoE’s draft SP 1800-38B is specifically about cryptographic discovery – it details a functional test plan for crypto discovery tools and a reference architecture for doing it at scale.

An upskilling path could be: read SP 1800-38B to understand what types of tools and data to gather, then experiment with open-source tools in a lab. Some commercial vulnerability management suites are adding crypto discovery features – training on those (via vendor documentation or courses) could help. For instance, if your organization uses a tool like Tenable, it might have checks for weak ciphers; knowing how to leverage and customize those goes a long way.

In terms of formal training, secure software development or architecture courses now increasingly mention SBOM and might touch on crypto. The Linux Foundation offers a free course on SBOM fundamentals which could be extended to CBOM.

Communities like OWASP also have projects and discussions around identifying crypto in apps. Participating in the OWASP CycloneDX working group or even just following their GitHub could be a hands-on way to build expertise.

Industry certifications like ISC2’s CSSLP (Certified Secure Software Lifecycle Professional) or GIAC’s GCSA (Cloud Security Automation) cover software inventory and automation, which is tangentially helpful. While there isn’t yet a “CBOM certification,” someone well-versed in SBOM/DevSecOps concepts will adapt quickly to CBOM. Encourage the team to also learn from your own environment: for example, if you have internal cryptographic standards or past crypto audits, study them to see where gaps were.

Candidate Roles

Typically, a senior security analyst or an application security engineer can spearhead this.

If your org has an Identity and Access Management (IAM) team that manages PKI or crypto for authentication, those analysts might be familiar with certificate inventories and can extend into broader crypto discovery. (For instance, an IAM/IGA analyst who handles digital certificates could be upskilled to use CBOM tools to inventory all certificates and associated algorithms in the enterprise – expanding their scope from just identity certs to all cryptography).

Likewise, a DevOps engineer interested in security could be great here: they understand CI/CD and can integrate CBOM generation into build pipelines (ensuring every build spits out a CBOM along with an SBOM). So, look at who manages your config management databases (CMDB) or asset inventories – adding crypto fields might be just an incremental task for them.

Key Attributes

People succeeding in discovery roles tend to be detail-oriented and somewhat “obsessive” about completeness. They enjoy sleuthing through systems to find hidden secrets. Important attributes include:

  • Curiosity and investigative mindset: Treating the discovery like a treasure hunt or forensic investigation ensures they keep digging in corners others might ignore (like that old SNMPv3 config or a hardcoded key in a script).
  • Scripting and automation skill: A lot of discovery involves parsing outputs, mass-scanning IP ranges, etc. Someone who can write a quick Python script to parse thousands of config files for crypto settings will excel.
  • Analytical rigor: They should carefully validate findings – e.g., confirm whether an identified crypto instance is actually in use or a false positive. Being able to cross-check and not just blindly trust tool output is key.
  • Comprehensiveness and patience: Building a full inventory can be tedious. The individual must be patient and methodical, documenting as they go. They can’t be satisfied with “we scanned most servers”; they strive for coverage of all.
  • Collaborative: Discovery folks will need to reach out to application owners, dev teams, and vendors to get info (tools won’t catch everything, sometimes you have to ask the software owner what crypto it uses). So being able to communicate and get information from others diplomatically helps.

Performance Evaluation

To measure success in cryptographic discovery and CBOM:

  • Coverage of Inventory: What percentage of the organization’s systems/applications have been assessed for crypto and included in the CBOM? If after a set period, only 50% of applications are inventoried, that might be a concern. You can set targets like “Inventory 100% of external-facing systems and 80% of internal systems by end of Q2” and track progress.
  • Accuracy and Detail: Evaluate the CBOM’s completeness. Does it list algorithm, key length, library, usage context for each entry? Are there gaps or generic “unknown algorithm” placeholders? One metric could be running a spot-check audit: pick a handful of critical systems and independently verify their crypto usage, then see if it matches what’s in the CBOM. Fewer discrepancies mean the discovery is working well.
  • Frequency of Updates: If the environment changes (new microservice deployed, new software version), is the CBOM updated quickly? Performance can be measured by how automated/continuous the process is – e.g., number of days between a change and it reflecting in the inventory. A good program might aim for at least quarterly refreshes if not realtime integration.
  • Tooling Integration: Check if the discovery process is integrated into existing workflows. For example, have they integrated CBOM generation into CI/CD pipelines for new software releases? Are vulnerability scans now feeding crypto findings into the inventory automatically? Successful integration indicates the team has operationalized discovery.
  • Policy Enforcement: A derived measure – has the organization been able to set or enforce any policies based on the CBOM? For instance, “no RSA-1024 allowed” – and then using the inventory to verify compliance. If yes, that means the CBOM is trusted and useful. If not, maybe the inventory isn’t yet comprehensive or reliable enough.
  • Reduction in Unknowns: Over time, the number of “unknown crypto” instances should drop. Early inventories often have “unknown cipher in legacy app” as entries. A good performance sign is that the team investigates and resolves those (figures out what that unknown was or eliminates it).

PKI, KMS and HSM Engineering

Skills and Knowledge

This domain covers the infrastructure and processes around key management and digital certificates – essentially the backbone services that provide cryptographic keys and identities. As we transition to PQC, these services must be updated and new algorithms incorporated. Key skills include:

Key Management Lifecycle Mastery

A practitioner should know classical key management inside out – generation, distribution, rotation, storage, backup, and destruction of keys – and how PQC algorithms might alter those processes. NIST’s foundational documents like SP 800-57 (Key Management) and SP 800-130 (Designing a Cryptographic Key Management System, CKMS) remain the guideposts.

For PQC, one must pay attention to key sizes and formats, and to key derivation and wrapping: some PQC keys may be too large for existing key-wrapping mechanisms, or carry new constraints (certain PQC private keys might be “non-exportable” in current HSMs because of their length or algorithm).

The skill here is to adapt key management procedures – for example, updating how you do key backup if a private key is now much larger, or ensuring split knowledge/dual control can still be done (maybe the HSM’s secret sharing needs an update for bigger keys).

Additionally, understanding crypto period and strength: PQC keys can have different security margins, so determining appropriate rotation intervals might change. A strong engineer will align all this with policy (see NIST SP 800-131A for algorithm transition guidance).
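To see why backup, wrapping, and storage procedures need revisiting, compare rough object sizes. The PQC figures below are public-key and signature/ciphertext sizes in bytes as I recall them from FIPS 203 (ML-KEM) and FIPS 204 (ML-DSA); verify against the published standards before doing any capacity planning:

```python
# Rough size comparison motivating the key-management changes above.
# PQC figures are public-key and signature-or-ciphertext sizes in bytes
# as I recall them from FIPS 203 (ML-KEM) and FIPS 204 (ML-DSA);
# classical figures are approximate DER encodings. Verify against the
# published standards before planning HSM or backup capacity.
SIZES = {
    # name: (public key bytes, signature or ciphertext bytes)
    "RSA-2048 (sign)":  (270, 256),
    "ECDSA P-256":      (91, 72),
    "ML-DSA-65 (sign)": (1952, 3309),
    "ML-KEM-768 (KEM)": (1184, 1088),  # second figure is the ciphertext
}

for name, (pub, out) in SIZES.items():
    print(f"{name:17} pub={pub:>5} B  sig/ct={out:>5} B")
```

An ML-DSA-65 signature is tens of times larger than an ECDSA one, which is exactly why certificate chains, CRLs, and key-backup formats all need re-examination rather than a drop-in swap.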

CKMS design and documentation

If your organization maintains a formal CKMS (Cryptographic Key Management System) document per NIST SP 800-130, the skill here is to update that design for PQC. This involves ensuring the documentation covers new algorithms, how hybrid keys will be handled, how the system will generate and distribute PQC keys, etc. Even if you don’t have a formal CKMS doc, thinking in that structured way (covering all requirements NIST outlines) ensures nothing is missed. For example, how will you ensure interoperability if some systems use classical crypto and others PQC during a transition? The PKI/HSM team must design for a period where both coexist (hybrid solutions).

Public Key Infrastructure (PKI) modernization

PKI is heavily impacted by PQC, especially in its digital signature algorithms. Skills here include updating Certificate Policies (CP) and Certification Practice Statements (CPS) to allow new algorithms (like ML-DSA, i.e. Dilithium, or stateful hash-based signatures) – defining which algorithms and key sizes are approved for certificates and which extensions might be needed (for instance, for hybrid certificates, or alternative OIDs for PQC algorithms).

Engineers should understand X.509 certificate format changes: e.g., how to represent a Dilithium signature in X.509 (the object identifiers and parameters) as standardized by IETF LAMPS.

Also, plan the rollout of PQC in the CA hierarchy: will you have a separate PQC-capable intermediate CA, or a hybrid certificate authority that signs with both classical and PQC signatures? And how will you handle revocation? CRLs and OCSP responses each carry the CA’s signature, so larger PQC signatures will enlarge them – maybe focus on OCSP.

Also, consider certificate chain validation: ensuring all software that consumes your certificates can handle the new algorithms – this might involve working with vendors to support ML-DSA in their trust stores. Running parallel PKI infrastructures during transition is likely (issuing “dual certificates” or hybrid certs), so designing that with minimal disruption is a key skill.
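One way to keep chain validation algorithm-agile is a registry of verifiers keyed by signature algorithm, so new PQC or hybrid algorithms plug in without touching the validation loop. A minimal sketch follows; the algorithm names and verifier stubs are hypothetical, standing in for real OID lookups and signature checks:

```python
# Sketch of algorithm-agile chain validation: each link's signature algorithm
# is looked up in a registry rather than hard-coded, so a chain can mix
# classical, hybrid, and PQC links. Verifier callables here are stand-ins.

from dataclasses import dataclass
from typing import Callable

@dataclass
class CertLink:
    subject: str
    sig_alg: str  # in real code: the OID from the certificate's DER encoding

# Hypothetical registry: name -> verifier(issuer_key, tbs_bytes, sig) -> bool
VERIFIERS: dict[str, Callable[..., bool]] = {
    "sha256WithRSAEncryption": lambda *a: True,  # stub
    "ml-dsa-65":               lambda *a: True,  # stub
    "hybrid-rsa3072-mldsa65":  lambda *a: True,  # stub (both halves must verify)
}

def unsupported_links(chain: list[CertLink]) -> list[str]:
    """Return links whose signature algorithm has no registered verifier."""
    return [c.subject for c in chain if c.sig_alg not in VERIFIERS]

chain = [
    CertLink("root", "sha256WithRSAEncryption"),
    CertLink("intermediate", "hybrid-rsa3072-mldsa65"),
    CertLink("leaf", "slh-dsa-128s"),  # not yet registered
]
print(unsupported_links(chain))  # ['leaf'] -> catch this in the lab, not prod
```

Running a check like this against test chains surfaces the “which relying parties can’t parse this yet” question before a rollout.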

HSM and KMS roadmap and integration

Most organizations use Hardware Security Modules (HSMs) or cloud Key Management Services (KMS) to store and use keys. PQC algorithms may require firmware updates or new API support on these platforms. A skilled engineer will survey the HSM/KMS landscape: know which of your existing HSM models support which PQC algorithms, or when they will. It’s also about APIs: ensuring the applications that use PKCS#11, JCE (Java Cryptography Extension), or CNG (Cryptography API: Next Generation, on Windows) have been updated to call the new algorithms. The engineer should plan firmware upgrades and testing well ahead, because HSM upgrades are sensitive operations requiring coordination (and often the purchase of license features for PQC).

If your organization doesn’t use physical HSMs, then focus on cloud KMS (e.g. AWS KMS, Azure Key Vault) – watch their roadmap for PQC support and possibly engage in their early access programs. Also, consider the need for cryptographic hardware performance: PQC operations might be slower or consume more CPU; perhaps plan for more HSM partitions or clustering if needed.

Crypto-agility in key management

The overarching skill is designing key management and PKI with agility in mind. That means building in algorithm agility – e.g., if you develop a custom key management service, it should be able to support new algorithms via configuration, not hard-coded to RSA/ECC. It also means planning transitions – for instance, if you have data encrypted under RSA keys in an archive, how will you re-encrypt that with PQC keys?

The engineer might need to script mass key migrations or build tooling for re-issuing certificates in bulk. They should also know about specialized PQC use cases like signature schemes for code signing or firmware (which have different trade-offs: stateful vs. stateless, as discussed below). For example, stateful hash-based signatures (LMS/XMSS) are already approved (NIST SP 800-208) for certain uses like code signing; stateless SPHINCS+ (a.k.a. SLH-DSA) avoids state management entirely but at the cost of much larger signatures; Dilithium sits in between for general PKI use. The team must incorporate such schemes where appropriate – perhaps LMS for firmware if devices and HSMs support it, SPHINCS+ for document signing where its conservative hash-based security justifies its size, Dilithium for everyday certificates, etc.
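To illustrate why stateful schemes demand care, here is a minimal sketch of LMS/XMSS-style index management: the counter is durably advanced before any signature is released, so a crash can never cause index reuse (reuse breaks the scheme’s security). The signing call is a placeholder; in practice an HSM should own this state.

```python
# Minimal sketch of the state-management burden for LMS/XMSS-style stateful
# signatures: every one-time-signature index is used at most once, and the
# counter is durably committed BEFORE a signature is released. Persistence is
# real (atomic write-then-rename); the signature itself is a placeholder.

import json
import os
import tempfile

class StatefulSigner:
    def __init__(self, state_path: str, max_signatures: int):
        self.state_path = state_path
        self.max_signatures = max_signatures

    def _load_index(self) -> int:
        if not os.path.exists(self.state_path):
            return 0
        with open(self.state_path) as f:
            return json.load(f)["next_index"]

    def _commit_index(self, next_index: int) -> None:
        # Write-then-rename so a crash cannot roll the counter backwards.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.state_path) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump({"next_index": next_index}, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, self.state_path)

    def sign(self, message: bytes) -> tuple[int, bytes]:
        idx = self._load_index()
        if idx >= self.max_signatures:
            raise RuntimeError("key exhausted: provision a new LMS/XMSS key")
        self._commit_index(idx + 1)  # reserve the index BEFORE signing
        return idx, b"<ots-signature-for-index-%d>" % idx  # placeholder
```

The two operational hazards this makes visible: losing the state file risks catastrophic index reuse (so the state needs the same backup discipline as the key), and the signature budget is finite (so exhaustion must be monitored).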

Why it Matters

Without strong PKI/HSM engineering, a post-quantum project can fail in very practical ways. You might have a great new algorithm, but if your HSM can’t generate or protect those keys, or if your CA can’t issue a certificate that all your systems trust, you’re stuck. PQC often involves larger key and signature sizes, which can break assumptions in existing systems (like buffer sizes, certificate field lengths, etc.). The PKI/HSM engineers are the ones who foresee and mitigate these issues.

For example, a PKI team that updates its test CAs to Dilithium might notice certificates growing by many kilobytes – and realize they need to raise certificate size limits in certain databases, or ensure the enrollment system can handle bigger CSRs.

If your enterprise’s crypto backbone isn’t ready, you’ll be a bottleneck for all other teams (e.g. an application team might want to pilot PQC authentication, but the PKI team says “we can’t issue a Dilithium certificate yet”). Conversely, if PKI/HSM engineering is on point, they become enablers: providing test certificates, setting up parallel PQC certificate hierarchies for pilot programs, and ensuring secure storage for new keys.

Also, compliance and assurance hinge on this – if you use HSMs, you likely need them to remain certified (FIPS 140-3, Common Criteria). Knowing vendor roadmaps and certification timelines helps you plan when PQC-enabled HSM firmware will be FIPS-validated, so you can deploy it in production without jeopardizing compliance.

Another aspect: customer and partner trust. If your PKI issues certificates to customers (like a public CA or an IoT device issuer), you need to handle PQC smoothly or risk service outages or loss of trust (imagine bricking IoT devices with bad PQC certs). Therefore, the reliability and planning from PKI/HSM engineers directly impact business continuity and trust.

Where to Upskill & Certifications

The roles here are usually existing PKI administrators, security infrastructure engineers, or cryptographic module specialists. Often, these individuals might already have certifications like (ISC)² SSCP or CISSP (with focus on crypto), or vendor-specific training (e.g. Thales or Entrust HSM admin courses).

Upskilling specifically for PQC means getting familiar with the new algorithms’ operational characteristics. A good starting point is NIST’s documentation: SP 800-131A (latest revision) will give guidance on transitioning algorithms (they’ll likely update it to include PQC categories), SP 800-57 for key management recommendations (stay updated on any PQC mentions), and SP 800-208 for stateful hash-based signatures (LMS/XMSS) if you have use for them. These are somewhat academic reads, so pairing them with practical experience is key.

One practical route is using the free tools and simulators vendors provide: for example, Utimaco’s Quantum Protect simulator is free software that emulates an HSM supporting PQC algorithms. An engineer can practice generating PQC keys and issuing test certs with it, without any hardware. Thales and Entrust publish documentation and whitepapers on their PQC features – joining vendor tech forums or workshops is useful. Many HSM vendors also offer training sessions or certifications (like “Certified nShield Engineer”) – these may now include PQC topics, or at least give you a venue to ask PQC questions during training.

Another upskill avenue: Open-source experimentation. Projects like OpenQuantumSafe (liboqs) and its integration with OpenSSL can allow an engineer to set up a mini CA or sign documents with PQC algorithms in software, which builds understanding that can transfer to the enterprise environment. For instance, an engineer might use OpenSSL + oqs-provider to create a Dilithium-signed CSR and then try to import it into a test Microsoft CA to see what breaks. This kind of hands-on lab work is invaluable and free. IBM has published tutorials (as mentioned in earlier sections) on using quantum-safe crypto with their tools – those can be a guided exercise.

If looking for a formal credential to signal competence: there’s not a specific PQC PKI cert yet, but Cloud Security Alliance (CSA) and ETSI have some training around quantum-safe cryptography (ETSI’s Quantum-Safe workshop materials, CSA’s Quantum Safe Security Working Group papers). Completing those or contributing to those groups can significantly enhance knowledge.

Candidate Roles

Typically your IAM or security infrastructure team has people who manage Active Directory Certificate Services, Venafi or other certificate management, HSM admins, etc. These are prime candidates. An IAM analyst with a background in certificates can grow into a PKI/PQC engineer by focusing on algorithm updates. If you have a database security person or cloud security engineer who manages KMS, they could also step up – especially for cloud KMS, understanding how to use KMS encryption context with PQC keys, etc.

So, you don’t necessarily need a “cryptographer” by title – you need the person who knows how keys flow through your systems. Those folks exist in most enterprises (though sometimes they’re unsung heroes in the backrooms). Encourage them to attend PQC webinars (Entrust, Thales, and others frequently host free webinars on post-quantum updates to their products – very useful).

Key Attributes

This role benefits from a mix of deep technical expertise and process discipline:

  • Rigorous and detail-oriented: PKI and key management are unforgiving domains (a single wrong parameter can lock out systems, a missed backup can mean irrecoverable loss). An ideal engineer here is the type who documents procedures, double-checks configurations, and tests in lab before production.
  • Security-minded: Obviously they need a strong trust mindset – maintaining the integrity of keys and following strict processes (they will be the ones insisting on a witnessed ceremony for root CA key generation, etc.). They should be comfortable with the responsibility of guarding high-value keys.
  • Adaptable: While they must respect procedures, they also should adapt to new tech. PQC is new, so a rigid “we’ve always done it this way” person won’t fit. We want someone excited (or at least willing) to learn new hardware, new algorithms, and integrate them.
  • Problem-solver: Integrating PQC into legacy systems will present weird problems (like “our HSM doesn’t support X directly, how do we work around it?”). A creative problem-solver who can find solutions (maybe using an intermediate software crypto or partnering with a vendor to find a way) is valuable.
  • Communication: They often have to explain to application teams why certain things must be done (like why the CA needs to be upgraded, or why a certain certificate can’t be issued the old way). Being able to articulate technical constraints to others and work with them to find solutions is important.

Performance Evaluation

How to gauge if PKI/KMS/HSM engineering is succeeding:

  • PQC Capability Delivery: A tangible outcome: Has the team enabled the organization to actually use PQC algorithms in any capacity? For example, do you now have a test Certificate Authority that can issue Dilithium or SPHINCS+ certificates? Is your HSM firmware upgraded and keys generated? If after a year the answer is no (still waiting on hardware, no PQC keys in sight), that’s a red flag. A good team will have at least a pilot implementation running (even if just in a lab environment) for others to leverage.
  • Documentation and Standards: Check if the team updated the relevant docs – e.g., an updated Certificate Policy that includes allowed PQC algorithms, an internal key management guideline that says how to handle PQC keys (like key size thresholds, usage of hybrid key exchange). If these are published internally, it shows proactivity.
  • Seamless Key Management Operations: Introduce some PQC operations and see if the key management process handles them without issue. For instance, attempt to back up a PQC key using your established procedure – does it work? Evaluate whether the team has modified procedures to accommodate PQC (performance can be measured by whether normal ops – backup, restore, key rotation, etc. – now include PQC keys smoothly). If, say, a PQC private key could not be backed up and that issue languished, that’s a sign more work is needed.
  • Interoperability Testing: A great measure is whether the PKI/HSM team has done integration tests with other teams. E.g., did they work with the network team to successfully install a hybrid TLS certificate on a test web server? Did they demo that a PQC-signed code can be verified by a client? If the PKI team is actively supporting others in testing use-cases, that’s excellent performance.
  • System Trust and Uptake: Are internal or external relying parties accepting the new certs/keys? For example, if the PKI team rolled out a hybrid root CA certificate (classical+PQC signature), measure if major applications or devices trust it. If a significant portion of systems can’t read the new certificate format, maybe the PKI team’s planning didn’t account for that (so that’s feedback). But if most are fine or the team provided necessary patches, that’s good. Essentially, the fewer “surprises” encountered by others when using the new crypto, the better the PKI/HSM groundwork was.
  • Security and Compliance Posture: Ultimately, ensure no regressions: the introduction of PQC shouldn’t weaken security controls. Monitor if any audit findings pop up related to cryptography (e.g., “unapproved algorithm found” or “CA key not properly protected”). If the PKI/HSM team can implement PQC while still passing audits (internal or external), that’s a success metric. Similarly, if an external compliance (like FIPS, WebTrust for CAs, etc.) is maintained through the transition, that’s a clear indicator of robust process.

Sidebar – PQC Signatures for Signing: One special consideration the PKI/HSM team will handle is choosing which post-quantum signature algorithms to use for which purposes. There are stateless schemes like Dilithium (ML-DSA, FIPS 204) and SPHINCS+ (SLH-DSA, FIPS 205), and stateful schemes like LMS and XMSS (profiled by NIST in SP 800-208). Stateless signatures are easier to use (no per-signature state tracking) but tend to be larger (Dilithium signatures are roughly 2–5KB; SPHINCS+ roughly 8–17KB, depending on parameter set). Stateful signatures (LMS/XMSS) are smaller (roughly 1–3KB) but require careful state management: a one-time signature key index must never be reused.

The NSA’s CNSA 2.0 guidance actually mandates stateful hash-based signatures (LMS/XMSS) for certain use cases – notably firmware signing – on an aggressive timeline, because they are conservative and were standardized early. However, stateful schemes are not suitable for high-volume or multi-party use (it is too easy to mishandle the state). So the PKI team might decide: use LMS or XMSS for firmware or code signing in controlled environments (one key per device or product, with durable state tracking, ideally in HSMs that manage LMS indices), but use Dilithium or SPHINCS+ for general certificates, where many distributed clients (browsers, etc.) must verify signatures without state concerns.

Part of the skill is implementing these in parallel: perhaps your secure boot process uses an LMS key (which your HSM supports after a firmware update), while your user authentication certificates plan to use Dilithium in the future. We highlight this because one size will not fit all – the PKI/HSM team will likely maintain multiple signature algorithms for different needs, which is new (today most enterprises live with just RSA or ECDSA for all signatures). Performance and system constraints will guide the choices (for example, a microcontroller might not handle Dilithium’s sizes well, so you use LMS with a few-thousand-signature limit, which is fine for firmware updates). Tracking and evaluating these trade-offs is very much a PKI engineering function.


Protocol and Application Engineering

Skills and Knowledge

This is where the rubber meets the road for using new cryptography in real systems – updating protocols like TLS/SSH and the software that implements them. Key skills include:

Network Protocol Upgrade (TLS, IPsec, SSH, etc.)

Engineers in this domain need to implement or enable hybrid cryptographic modes in protocols. For example, TLS 1.3: enabling a hybrid key exchange cipher suite (like X25519 + Kyber) so that the TLS handshake performs both a classical Diffie-Hellman and a PQC key exchange and combines them.

Skills include knowing how to configure libraries like OpenSSL to use an Open Quantum Safe provider or similar, understanding TLS handshake internals (ClientHello, ServerHello sizes, etc.), and being able to debug handshake failures.

Also, plan for PQC-based authentication in the future – meaning TLS certificates signed with Dilithium or another PQC signature. This requires ensuring the protocol implementation accepts the new signature OIDs and doesn’t choke on larger certificate messages. The same goes for SSH – adopting hybrid key exchange as it becomes available (OpenSSH, for example, ships the “[email protected]” hybrid KEX). For IPsec/VPN, perhaps using IKEv2 with PQC (there are drafts and implementations of post-quantum key exchange in IKE). The engineer should track IETF drafts from groups like TLS, LAMPS, and IPSECME for the latest on how protocols are incorporating PQC.

The skill is both coding (or at least configuring) and performance tuning these protocols, which often means dealing with larger message sizes. For instance, a TLS ClientHello that includes a Kyber public key gets significantly bigger – this might need adjusting buffer sizes or tolerance for fragmentation (as we’ll discuss).
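The hybrid idea is simple to illustrate: both shared secrets are fed into one key derivation, so the session key survives a break of either component alone. The sketch below uses a stdlib HKDF-style extract/expand for clarity – it is a conceptual illustration, not the actual TLS 1.3 key schedule (which the hybrid drafts reuse by concatenating the secrets).

```python
# Conceptual hybrid key-exchange combiner: the session key depends on BOTH the
# classical and the PQC shared secrets, so breaking either alone is not enough.
# This standalone HKDF extract/expand is an illustration only; TLS 1.3 hybrid
# drafts feed the concatenated secrets into the existing key schedule instead.

import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def combine(ecdh_secret: bytes, kem_secret: bytes) -> bytes:
    # Concatenate classical || PQC, then derive the session secret.
    prk = hkdf_extract(b"\x00" * 32, ecdh_secret + kem_secret)
    return hkdf_expand(prk, b"hybrid kex demo", 32)

k1 = combine(b"A" * 32, b"B" * 32)
k2 = combine(b"A" * 32, b"C" * 32)  # different PQC secret -> different key
assert k1 != k2 and len(k1) == 32
```

The design point worth internalizing: concatenate-then-KDF means an attacker must recover both inputs, which is exactly the hedge hybrid modes are buying.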

Application integration of new crypto libraries

Many applications use crypto via libraries (OpenSSL, BoringSSL, crypto libraries in Python/Java, etc.). Application engineers must integrate PQC either by upgrading to versions of libraries that support it (e.g. OpenSSL 3.0+ can support providers like OQS) or by adding new libraries (like liboqs, or using hybrid-KEM libraries in code).

Skills include understanding APIs for PQC algorithms. For example, how do you call a Kyber key generation? If using OpenSSL with OQS, it might be as simple as specifying a new cipher suite string. If using a cloud KMS that now offers PQC keys, how to call that. In some cases, custom protocol adjustments are needed: consider an application that does custom encryption of data – the engineer might need to swap out RSA encryption for a combination of classical+PQC (maybe encrypt data with AES, but wrap the AES key with a hybrid of RSA and Kyber to protect against both threats). This requires careful design to avoid breaking functionality or security.
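The hybrid wrapping idea above – protect a content key under both a classical and a PQC mechanism so that both must fall before the data does – can be sketched as an XOR split of the content-encryption key. The `rsa`/`kem` wrap calls below are stubs standing in for real library or HSM operations:

```python
# Sketch of hybrid key wrapping for stored data: the AES content key is split
# into two XOR shares, one destined for RSA wrapping and one for a PQC KEM, so
# an attacker must break BOTH to reconstruct the key. The wrap/unwrap calls
# are placeholders for real operations (e.g. an HSM or cloud KMS API).

import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def hybrid_wrap(cek: bytes) -> tuple[bytes, bytes]:
    share = secrets.token_bytes(len(cek))
    rsa_blob = share            # stand-in for rsa_wrap(share)
    kem_blob = xor(cek, share)  # stand-in for kem_wrap(cek XOR share)
    return rsa_blob, kem_blob

def hybrid_unwrap(rsa_blob: bytes, kem_blob: bytes) -> bytes:
    share = rsa_blob            # stand-in for rsa_unwrap(rsa_blob)
    other = kem_blob            # stand-in for kem_unwrap(kem_blob)
    return xor(share, other)

cek = secrets.token_bytes(32)
assert hybrid_unwrap(*hybrid_wrap(cek)) == cek  # both blobs needed to recover
```

Either share alone is uniformly random and reveals nothing about the key, which is what makes the split a genuine belt-and-suspenders construction rather than mere double encryption.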

Also, updating secure boot or code signing processes in applications or devices: e.g., if an application verifies signatures on plugins or firmware, it must be updated to understand new signature types.

X.509 Certificates and Identity Ecosystem

A specific subset is handling certificate formats with PQC. Engineers should be able to create and parse certificates containing PQC public keys or signed by PQC algorithms. IETF’s LAMPS WG has drafts defining object identifiers for Dilithium, etc. The skill is updating certificate parsing logic if you have custom code, or ensuring your library (e.g. BouncyCastle, OpenSSL, CryptoAPI) is updated to handle those. Also handling certificate chains that might mix algorithm types (e.g., a root CA that is RSA, intermediate CA that is hybrid, leaf that is PQC). The validation logic must be agile enough. If your enterprise runs a Public Key Infrastructure or uses certificates extensively (think 802.1x, code signing, document signing), protocol engineers must test those ecosystems with PQC certs. For instance, “Can our VPN system accept a client certificate signed with Dilithium?” or “Does our PDF signing tool handle a SPHINCS+ signature certificate?” These are protocol/application integration questions.

Performance tuning and optimization

PQC algorithms, especially in protocols, can introduce higher latency or CPU usage. Engineers should measure and optimize. For instance, in TLS, adding a Kyber KEM might increase handshake time slightly – is your connection timeout logic tolerant of that? Do you need to adjust TLS session resumption parameters to mitigate repeated handshakes? If a handshake grows from 1KB to 12KB (worst-case with PQC auth + PQC key exchange), how does that affect a mobile client on a slow network? These engineers should run benchmarks in environments representative of your user base (high latency networks, etc.) to gauge impact. A known empirical result: Google found a 4% median increase in TLS handshake latency on desktop when adding a Kyber key exchange, primarily because the larger handshake gets split into more TCP packets. Cloudflare noted that when the ClientHello goes beyond the typical size (e.g., >1.5KB), some network middleboxes malfunction (they assumed a single packet handshake and dropped or corrupted the rest), causing connection failures.

Your protocol engineers need to know these kinds of issues and design mitigations: perhaps implement ClientHello padding strategies, enable Maximum Fragment Length extension in TLS, or simply be ready to deploy updates to those faulty middleboxes (which might be under your control). Also, some PQC operations are CPU-intensive (though many like Kyber and Dilithium are quite fast, especially with optimized libraries).

Engineers should ensure cryptographic operations are not bottlenecking the app – maybe offload heavy ops to hardware if available, or use asynchronous processing where possible. Essentially, a performance-minded approach: test with representative loads (maybe simulate 1000 TPS of handshakes) and observe any tail latency or throughput issues, then iterate.
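The packet-level effect behind those handshake numbers is easy to estimate: divide the handshake size by the path’s TCP segment size. The figures below are rough illustrations, not wire measurements:

```python
# Back-of-envelope check of how larger handshake messages translate into extra
# TCP segments (the mechanism behind the observed latency and middlebox
# issues). Sizes are illustrative approximations, not exact measurements.

import math

MSS = 1460  # typical TCP maximum segment size on Ethernet-MTU paths

def segments(handshake_bytes: int) -> int:
    return math.ceil(handshake_bytes / MSS)

classical_hello = 500                  # X25519 share + extensions (rough)
hybrid_hello = classical_hello + 1184  # + an ML-KEM-768 encapsulation key

print(segments(classical_hello))  # 1 segment
print(segments(hybrid_hello))     # 2 segments: crosses the single-packet line

# A worst-case PQC-authenticated handshake (~12KB with PQC certs) spans more:
print(segments(12_000))           # 9 segments
```

That jump from one segment to two is precisely where single-packet assumptions in middleboxes break, and each extra round of segments adds latency on lossy or high-RTT paths.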

Legacy fallback planning

While pushing the envelope, protocol engineers must also design fallbacks and compatibility modes. Not all clients and servers will support PQC at the same time. For example, if you turn on a hybrid cipher suite on a server, old clients might not understand it and fail to connect. Good practice is to rely on negotiation: TLS handles this naturally because client and server negotiate – but ensure your preference order favors hybrid when both sides support it and gracefully drops to ECDHE when the client doesn’t. Likewise, if deploying PQC-based certificates, consider “dual-stack” solutions (serving both a classical and a PQC certificate, e.g. on different ports or domains) for compatibility during the transition. Designing these fallbacks without weakening security is itself a skill: you must prevent forced downgrade attacks, where an attacker tricks both sides into skipping PQC they could both use. Also, explicitly track where you cannot enable PQC yet (say, a partner connection behind an old TLS terminator) and document those as exceptions to be addressed later.
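A minimal sketch of such preference-ordered negotiation follows, assuming illustrative group names (X25519MLKEM768 follows common draft naming; your library’s identifiers may differ). The key properties are server-side preference ordering and a hard floor rather than silent downgrade:

```python
# Sketch of preference-ordered group negotiation with a safe fallback: prefer
# the hybrid group when both sides support it, fall back to classical ECDHE
# otherwise, and refuse outright below a configured floor (never silently
# downgrade). Group names are illustrative.

SERVER_PREFERENCE = ["X25519MLKEM768", "x25519", "secp256r1"]
MINIMUM_ALLOWED = {"X25519MLKEM768", "x25519", "secp256r1"}

def negotiate(client_groups: list[str]) -> str:
    for group in SERVER_PREFERENCE:  # server's order wins
        if group in client_groups and group in MINIMUM_ALLOWED:
            return group
    raise ConnectionError("no mutually acceptable group; refuse, don't downgrade")

print(negotiate(["X25519MLKEM768", "x25519"]))  # hybrid chosen when offered
print(negotiate(["x25519"]))                    # graceful classical fallback

try:
    negotiate(["ffdhe2048"])  # unsupported group -> hard failure, not downgrade
except ConnectionError as exc:
    print("refused:", exc)
```

Note that TLS 1.3’s transcript authentication already binds the negotiation, so the downgrade risk to manage is mostly your own configuration: keeping weak options out of `MINIMUM_ALLOWED` entirely.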

Why it Matters

This is where early deployment stories provide valuable lessons. In 2022-2023, companies like Google and Cloudflare implemented hybrid key exchange in real products and reported their findings. Chrome’s experiment enabling X25519+Kyber by default (for some fraction of traffic) uncovered that a mere ~1KB of extra data in the ClientHello led to a noticeable latency increase (4% median on desktop) and broke some out-of-spec network appliances. They had to introduce an opt-out for enterprise because some corporate middleboxes couldn’t handle it. Cloudflare likewise tested post-quantum handshakes to origins and saw that if the handshake size exceeded one packet, a subset of connections failed due to buggy hardware. These real-world issues underscore why protocol engineers must be vigilant: PQC isn’t just plug-and-play in protocols, especially in the complex Internet ecosystem. You need to be ready to troubleshoot weird issues at the intersection of crypto and network (which requires both cryptography knowledge and general networking/debugging savvy).

On the positive side, the feasibility was demonstrated: Chrome showed no significant issues in high-bandwidth scenarios (and none at all in web vital metrics), and AWS’s tests of hybrid TLS found only ~0.5ms added latency and ~2.3KB added traffic per handshake in typical cloud region scenarios. That’s negligible for most internal use cases. So the performance can be fine, but you only know if you measure it in your context. If you’re doing something like IoT with very constrained bandwidth, a naive adoption could slow things until you adjust. Similarly, applications like VPN or VOIP that set up many handshakes might need tuning (maybe using session resumption more aggressively to avoid repeated PQC handshakes). The skill in this team is to take these lessons and proactively incorporate them: add telemetry in your apps to measure handshake times and success rates when you pilot PQC; implement features like session tickets or connection pooling to amortize handshake costs (as recommended by AWS).

Another aspect: IETF standards progress. The engineers should keep an eye on draft protocols like the IETF’s work on “Hybrid key exchange in TLS 1.3” and “PQ algorithm identifiers for X.509”. These will eventually become RFCs that your vendors (OpenSSL, browsers, etc.) implement. Being aware of the exact standards ensures compatibility. For example, if you deploy a non-standard scheme and the standard differs, you might have to redo some work. Fortunately, big players are influencing the standards with their experiments (the X25519Kyber768 draft that Chrome used is informing the final approach).

In summary, protocol and app engineers are the ones who bring PQC to life for end-users. If they do it well, users shouldn’t notice much difference other than continued security. If done poorly, users may experience failures or slowdowns, which can erode trust in the new tech (and give ammo to those resisting change). So it’s a critical, hands-on role for making the migration smooth.

Where to Upskill & Roles

This role naturally falls to network security engineers, software developers/architects of security-critical systems, and DevOps/SRE folks who handle deployment of network services. For instance, the engineers who manage your TLS termination (like those who configure F5s or Envoy proxies) would be prime to lead hybrid TLS deployment. Similarly, your application architects who built custom encryption layers would need to learn PQC integrations.

Upskilling can involve working with open source libraries: OpenQuantumSafe’s liboqs and OpenSSL provider is an excellent playground. A developer can write a simple client-server using OpenSSL + OQS to see a hybrid handshake in action and familiarize themselves with any gotchas (like how the ciphers are named, etc.). Also, following the IETF working groups (TLS and LAMPS) – the mailing lists and draft documents are public. Reading those drafts is like getting the blueprints of what will come. Engineers might even participate by testing draft implementations and giving feedback. There are also academic and industry papers on performance of PQC in protocols (some by AWS engineers, Cloudflare’s blog posts by Bas Westerbaan, Google’s security blog posts). These provide deep dives and are often approachable for an engineer audience.

Certification-wise, there are no PQC-specific protocol certifications yet, but a strong background in network protocols (perhaps CCNP Security or similar) or secure programming (like CSSLP again) helps. Some specialized training may come from vendor events: e.g., Cloudflare and Google have presented their PQC experiments at conferences (NDSS, RSA Conference). Attending or viewing those talks can be very educational. If an engineer has primarily an application development background, they may need a refresher on network security basics (PKI, TLS handshake 101) – ensure they have that baseline.

Candidate Roles

Likely candidates include SSL/TLS engineers (if you have a dedicated team for that), web server admins (the ones who configure Apache/Nginx TLS settings), VPN appliance admins, and application developers who implement features like S/MIME or maintain libraries that use crypto. Even mobile app developers might need upskilling if the mobile app performs cryptographic operations (e.g., embedding PQC in app-to-backend communication). A noteworthy candidate is any developer of your product’s client software – they’ll need to integrate PQC support to match the server.

Key Attributes

  • Strong engineering fundamentals: They should deeply understand how cryptographic protocols work (not just at config level, but on the wire format, state machine, etc.). This typically comes from either experience or a keen interest in reading RFCs/source code.
  • Debugging and problem-solving: As mentioned, PQC integration could surface unusual bugs. The engineer must be relentless in troubleshooting – using packet captures, reading TLS library logs, etc., to pinpoint issues. A knack for solving tricky, low-level issues is great.
  • Performance-conscious: They should naturally think about efficiency – if a new algorithm is 5x slower, can we mitigate it? Are there vector instructions or better compilers we can use? Not every developer thinks at this level, so having someone who does (often a systems or network programmer mindset) is useful.
  • Collaboration: Protocol changes often require coordination between client and server teams, and sometimes with external partners (if you have B2B encrypted channels). The engineer needs to work well across boundaries – e.g., convincing a product team that enabling hybrid TLS is okay and helping them do it.
  • Security mindset: Obviously, they should be careful to maintain security – e.g., not introduce regressions like turning off certificate verification “just to get it working” or something dangerous. They must balance compatibility with not weakening things inadvertently. Attentiveness to cryptographic security (like not mixing up random seeds or parameters) is crucial.

Performance Evaluation

  • Successful Pilot Deployments: The clearest metric is whether the team has managed to deploy PQC in at least a test or limited production scenario without breaking things. For example, enabling a hybrid cipher on a test website and seeing that all modern browsers connect fine (and collecting metrics). Or issuing a PQC-signed software update and it verifies correctly on devices. If pilots are successful and move to broader deployment, that’s a big green check. If pilots resulted in major outages or had to be rolled back, examine why – was it avoidable?
  • Measured Impact and Optimization: A good protocol engineering team will not only deploy but also measure and report the impact. So, did they produce a report or data showing “PQC handshake adds X ms latency under these conditions, which is within acceptable range” or “We tweaked parameter Y to reduce overhead by Z%”? If yes, they’re on top of performance. If no one knows what the impact was, that’s a gap.
  • Issue Resolution: Inevitably, something will go wrong (like the middlebox scenario). Evaluate how quickly and effectively the team identified and resolved it. Did they have monitoring that caught handshake failures? Did they then implement a workaround (perhaps temporarily disable PQC for certain client IP ranges or devices, with a plan to fix the root cause)? Quick mitigation and long-term fix equals good performance.
  • Compatibility Coverage: How broad is the support achieved? For instance, after implementation, do 99% of clients negotiate the new cipher, or only 50%? If it’s low, maybe some popular client (like an older browser version) wasn’t considered. A high adoption rate indicates the team accounted for most compatibility issues or timed their rollout well (maybe waiting until Chrome and Firefox had support).
  • Code Quality and Security: If they wrote or modified code, has it been reviewed for security and reliability? Perhaps have an external pentest or internal code review for any custom PQC implementation. If no issues found or minor issues fixed quickly, good. If major flaws (like memory leaks or even security bugs) are found, that’s a sign the team might need more expertise or caution.
  • Documentation and Knowledge Sharing: Are the new configurations and processes well documented for the operations teams? Protocol engineers often implement and then hand off to ops to maintain. If they produced clear runbooks (e.g., “How to generate a hybrid cert with our CA” or “What ciphersuites are allowed and why”), it shows maturity. You could measure this by how smoothly the ops team can pick up these changes (few support tickets or confusion).
  • Innovation and Feedback: An exceptional team might contribute back to the community – for instance, by providing feedback on IETF drafts or even contributing code to open-source (like adding PQC support in an open-source VPN). While not required, this is a metric of leadership in the field, which can be reputationally good for the company as well.

Security Assurance, Testing and Performance Engineering

Skills and Knowledge

This domain focuses on validation – ensuring that the new cryptographic implementations interoperate across different systems, are secure against known attack vectors (like side-channels, malleability), and perform within acceptable parameters under stress. Key skills:

Interoperability testing across platforms

With new algorithms, you’ll often have multiple implementations (OpenSSL’s vs WolfSSL’s vs BoringSSL’s, etc.). A skilled test engineer will set up a matrix of interoperability tests. For example, test TLS 1.3 with a hybrid KEM between an OpenSSL server and a WolfSSL client, or between an F5 load balancer (which may use its own crypto library) and an AWS CloudHSM backend. They should cover all combinations relevant to your environment: various TLS libraries, various client OS versions (Windows SChannel, Linux, etc.), and also network devices like firewalls, WAFs, and proxies to ensure they pass the new traffic correctly. If you use CDNs or cloud services, include them (Cloudflare and Akamai have PQC support – ensure your integration works).

Also think of non-web protocols: test a post-quantum SSH client from one vendor to a server from another, etc. The idea is to catch any mismatches in how algorithms are implemented or encoded. Interop testing skills involve scripting a lot of automated connections and parsing results, as well as understanding protocol logs to pinpoint if a failure is due to algorithm negotiation, certificate acceptance, etc.
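As a concrete sketch, the combination matrix above can be enumerated programmatically. The endpoint names, addresses, and group identifier below are illustrative placeholders for your own environment, and `run_handshake` is a stub for whatever real client invocation (e.g., an `openssl s_client` wrapper) your harness would use:

```python
from itertools import product

# Illustrative inventory: endpoint names/addresses and group identifiers are
# placeholders, not real hosts.
SERVERS = {"openssl-srv": "203.0.113.10:443", "wolfssl-srv": "203.0.113.11:443"}
CLIENTS = ["openssl", "wolfssl", "schannel"]
GROUPS = ["X25519MLKEM768", "x25519"]  # hybrid PQC group plus classical fallback

def run_handshake(client, server_addr, group):
    """Stub: a real harness would shell out to e.g. an `openssl s_client`
    wrapper or a vendor test client and return True on a completed handshake."""
    return True  # placeholder result

def interop_matrix():
    results = {}
    for client, (name, addr), group in product(CLIENTS, SERVERS.items(), GROUPS):
        results[(client, name, group)] = run_handshake(client, addr, group)
    return results

failures = [combo for combo, ok in interop_matrix().items() if not ok]
print(f"{len(failures)} failing combinations out of {len(interop_matrix())}")
```

Parsing the handshake logs of each failing (client, server, group) tuple then tells you whether the mismatch is in negotiation, certificate handling, or encoding.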

Side-channel and implementation security testing

PQC algorithms are new – some may have different timing or side-channel considerations than classical ones. For instance, lattice-based cryptography (like Kyber, Dilithium) typically uses constant-time techniques, but a test engineer should verify that the implementations used are configured in a secure mode (e.g., not an experimental fast mode that isn’t constant-time). They might use tools or write custom tests to check that operations do not leak obvious timing information – for example, verifying that an operation on a secret key takes the same time regardless of the key or input, rather than varying in a way that could hint at the secret data.
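A very crude timing screen can be sketched as below. This is only a first-pass check – serious work uses statistical methods (e.g., dudect-style Welch tests) on real hardware – and `decapsulate` here is a hypothetical stand-in for the library call under assessment:

```python
import statistics, time

def median_runtime_ns(fn, arg, runs=2000):
    """Median wall-clock nanoseconds for fn(arg) over many runs."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter_ns()
        fn(arg)
        samples.append(time.perf_counter_ns() - t0)
    return statistics.median(samples)

# Hypothetical stand-in for a decapsulation call using a fixed secret key;
# a real test would invoke the implementation under assessment.
def decapsulate(ciphertext):
    return sum(ciphertext) & 0xFF

ct_valid = bytes(1088)             # well-formed ciphertext (ML-KEM-768 size)
ct_invalid = bytes([0xFF] * 1088)  # malformed/random-looking ciphertext

m1 = median_runtime_ns(decapsulate, ct_valid)
m2 = median_runtime_ns(decapsulate, ct_invalid)
ratio = max(m1, m2) / min(m1, m2)
print(f"median timing ratio: {ratio:.3f}")  # ratios far from 1.0 warrant a closer look
```

A ratio near 1.0 does not prove constant-time behavior (microarchitectural effects need dedicated tooling), but a large ratio is an immediate red flag.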

Additionally, fault injection resilience: will a malformed PQC public key or ciphertext crash your server? Fuzz testing the new code paths is essential. Skills include using fuzzing frameworks (like AFL or libFuzzer) on any new parsing logic (X.509 with PQC OIDs, CMS with large signatures, etc.), and interpreting results (does the system handle weird inputs gracefully?).
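For production code paths you would use AFL or libFuzzer as mentioned, but the core pattern – mutate a valid input, feed it in, and treat anything other than a clean rejection as a bug – can be sketched in a few lines. `parse_public_key` is a hypothetical parser standing in for your real decoding logic:

```python
import random

def parse_public_key(blob: bytes):
    """Hypothetical parser under test; a real harness would drive your
    X.509/PQC decoding path, which must never crash on bad input."""
    if len(blob) < 4:
        raise ValueError("truncated key")
    return blob[:4]  # pretend header extraction

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes of a known-good input."""
    out = bytearray(seed)
    for _ in range(rng.randint(1, 8)):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

rng = random.Random(0)   # fixed seed so any failure is reproducible
seed = bytes(range(64))
crashes = 0
for _ in range(10_000):
    try:
        parse_public_key(mutate(seed, rng))
    except ValueError:
        pass             # a clean, specific rejection is the desired behavior
    except Exception:
        crashes += 1     # anything else is a robustness bug to report
print(f"unexpected exceptions: {crashes}")
```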

Also, testing signature malleability or uniqueness: e.g., ensure that a PQC signature scheme doesn’t allow trivial alteration (most won’t, but testing ensures no implementation bug allows it). For stateful signature schemes, ensure that using the same state twice is properly prevented (simulate a failure scenario). All these require a deep security mindset and familiarity with how to test cryptographic implementations beyond just functional tests.

Robustness and failure-handling

The engineers should test how the system behaves if something goes wrong in the crypto. For example, if a handshake fails (perhaps because one side does not support PQC), does the system fall back, or log an error and move on? If a client encounters a certificate with an unknown algorithm, does it refuse the connection (as expected) while giving the administrator a useful error message? Testing should include negative scenarios: try to connect with a bad signature and confirm it is properly rejected.

Also test rollback scenarios: if we deploy PQC and need to disable it quickly (maybe a vulnerability is found in a PQ algorithm), can we? Are there feature flags or config toggles, and have those been tested in production-like conditions? Having a “crypto kill-switch” tested is part of assurance.
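One illustrative shape for such a kill-switch is a single config flag that strips PQC groups from the negotiated set; the key names below are assumptions, not a real product’s configuration schema:

```python
# Illustrative feature-flag shape for a crypto kill-switch; key names are
# invented for this sketch.
TLS_CONFIG = {
    "groups": ["X25519MLKEM768", "x25519", "secp256r1"],
    "pqc_enabled": True,
}

def effective_groups(cfg):
    """With the kill-switch off, drop hybrid/PQC groups so the server falls
    back to a classical-only handshake that every legacy client can complete."""
    if cfg["pqc_enabled"]:
        return cfg["groups"]
    return [g for g in cfg["groups"] if "MLKEM" not in g.upper()]

TLS_CONFIG["pqc_enabled"] = False    # simulate an emergency rollback
print(effective_groups(TLS_CONFIG))  # classical groups only
```

The assurance work is then verifying, in a production-like environment, that flipping the flag takes effect without restarts or dropped sessions.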

Performance and load testing

Beyond one-off performance measurement, the team should do scalability testing for the new crypto. For instance, run a high volume of TLS handshakes with hybrid algorithms and measure CPU usage on your servers to ensure you size your hardware or cloud instances correctly. Or test how many signing operations per second your updated HSM can do with Dilithium versus ECDSA – if it is slower, perhaps plan more HSMs for a given signing service.

The skill involves using load testing tools (like JMeter, custom scripts, or protocol-specific load generators) and collecting metrics (CPU, memory, latency percentiles). Also, test under adverse conditions: for network protocols, simulate high latency or packet loss to see if larger handshake messages cause any issues (like fragmentation problems). For hardware, maybe simulate one HSM being slow or failing during a signing burst – does the system queue properly, etc. Essentially, ensure the system’s reliability and performance at scale with PQC, not just in a single test.
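The latency-percentile collection described above can be sketched as follows; `handshake` is a stub (a sleep standing in for network round-trips plus crypto work) that a real load test would replace with an actual client connection:

```python
import concurrent.futures, time

def handshake():
    """Stub for one full TLS handshake against the system under test;
    in practice, replace the sleep with a real client connection."""
    t0 = time.perf_counter()
    time.sleep(0.001)  # placeholder for network round-trips + crypto work
    return (time.perf_counter() - t0) * 1000  # latency in ms

def load_test(concurrency=50, total=1000):
    """Run many concurrent handshakes and report p50/p99 latency."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(handshake) for _ in range(total)]
        latencies = sorted(f.result() for f in futures)
    p50 = latencies[len(latencies) // 2]
    p99 = latencies[int(len(latencies) * 0.99)]
    return p50, p99

p50, p99 = load_test()
print(f"p50={p50:.2f} ms  p99={p99:.2f} ms")
```

The same harness can be re-run with classical-only and hybrid configurations to quantify the PQC delta under load, rather than in a single handshake.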

Benchmarking and tuning

The assurance role should also establish baseline benchmarks for cryptographic operations (e.g., it takes X ms to do a Kyber key exchange, Y ms for a Dilithium signature on our hardware) and track whether future updates improve or degrade them. For example, compilers or libraries might optimize code – retest when you update versions. They should feed findings back to developers: e.g., if a particular parameter set (Kyber-768 vs Kyber-512) has significantly different performance, highlight that for decision-making (Kyber-512 might be faster but less secure; if Kyber-768 is fine speed-wise, default to it for more security).

Similarly, measure effect on user experience if applicable (maybe measure page load time or transaction throughput after enabling PQC). Those with a slight background in data analysis will do well, as they might parse large log files or metrics dumps to glean insights.
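A minimal shape for baseline tracking is a stored register of median per-operation times plus a tolerance check on each re-run; the baseline figures and threshold below are invented for illustration:

```python
import time

# Invented baseline figures (median ms per operation on "our" hardware);
# a real register would be persisted per host and library version.
BASELINE = {"kem_encap": 0.05, "sig_sign": 0.30}

def bench(fn, runs=1000):
    """Average wall-clock milliseconds per call of fn."""
    t0 = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - t0) / runs * 1000

def check_regression(name, measured_ms, tolerance=1.25):
    """Flag any operation more than 25% slower than its recorded baseline."""
    base = BASELINE[name]
    ok = measured_ms <= base * tolerance
    if not ok:
        print(f"REGRESSION: {name} {measured_ms:.3f} ms vs baseline {base:.3f} ms")
    return ok

# Re-run after a library or compiler upgrade and compare:
print(check_regression("sig_sign", 0.28))  # within tolerance -> True
```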

Why it Matters

If development is done without thorough testing, you risk deploying a crypto system that technically functions but fails under edge conditions or is full of holes. With something as fundamental as cryptography, failures can be catastrophic (system outages, security breaches). Early assurance testing has already revealed interesting things: Cloudflare’s experiments, for example, showed that hybrid key exchanges with NTRU or SIKE performed well with negligible handshake delays – but SIKE was later broken by classical cryptanalysis (not something a performance test could catch, though it underlines the need to stay agile). AWS’s research gave confidence that Kyber was one of the top performers among PQC KEMs. Without that data, organizations might either assume too much overhead (and avoid deploying when they actually could) or assume too little (and overload systems). Testing is the reality check.

Another aspect: Security assurance is about trust. When adopting brand new algorithms, management and auditors will want evidence that it’s done right. Having a robust suite of test results – interoperability logs, fuzzing outcomes, performance graphs – helps demonstrate due diligence. It also catches things that development teams might overlook. For instance, a dev team might implement hybrid TLS and not realize a certain vendor’s client will break; a dedicated test can find that before customers do. Or maybe an HSM driver has a bug with the new algorithm under high load – better found in testing than in production under peak traffic.

From a side-channel perspective, history has shown that even standardized algorithms can have pitfalls in certain implementations (e.g., many early RSA implementations had timing leaks). PQC algorithms are more complex in some cases (lattice math etc.), so one wants to ensure the implementations used have protections (constant-time, etc.). A test engineer might verify claims in documentation by actual experiments (like using tools to detect timing differences). Also, the concept of “constant-time” in lattice code might differ (e.g., sometimes trade-offs are made). It’s wise to be paranoid and test on actual hardware since microarchitectural effects can be tricky.

On interoperability: In the early days of any new protocol, interop issues are common. For example, one implementation might interpret an encoding slightly differently. Assurance engineers who catch these and work with vendors to fix them save a lot of headache later. They also ensure that when you turn on PQC by default, you won’t suddenly find out that, say, your core banking software’s TLS stack can’t talk to your updated API gateway because of an encoding mismatch.

Where to Upskill & Roles

This role is usually filled by QA engineers, penetration testers, or performance engineers in the security or engineering org. Often, a senior SRE (Site Reliability Engineer) or performance specialist might take the lead on performance aspects, while a security QA or crypto engineer focuses on correctness and security testing. Upskilling in PQC assurance can involve participating in interop test events – occasionally, organizations like NIST or industry consortia hold “test days” for PQC implementations. If those exist, send someone to them.

Another path: use open test suites. For example, the Open Quantum Safe (OQS) project provides integration tests (its OpenSSL integration ships with known-good test cases you can extend). Also, learn to use formal test vectors: NIST typically publishes known-answer test (KAT) vectors for each algorithm so that any implementation can be verified against them. Ensuring all algorithms in use pass their test vectors is a basic first step.
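The KAT-runner pattern itself is simple: load vectors, run the implementation, diff the output. The vector below is the standard FIPS SHA-256 "abc" test vector, standing in for real (much larger) PQC KAT files:

```python
import hashlib

# Illustrative known-answer test (KAT) runner. The vector is the FIPS
# SHA-256 "abc" test vector, a stand-in for real PQC KATs.
KAT_VECTORS = [
    {"msg": b"abc",
     "expected": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"},
]

def run_kats(vectors, fn):
    """Return (index, actual) for every vector the implementation gets wrong."""
    failures = []
    for i, v in enumerate(vectors):
        got = fn(v["msg"]).hex()
        if got != v["expected"]:
            failures.append((i, got))
    return failures

fails = run_kats(KAT_VECTORS, lambda m: hashlib.sha256(m).digest())
print(f"{len(fails)} KAT failures")
```

For deterministic PQC operations (keygen and signing from a seeded RNG), the same loop applies once the harness pins the RNG the way the NIST vector files specify.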

For side-channels, one might upskill by using tools like differential timing analysis scripts or even hardware like ChipWhisperer if testing embedded devices. If the enterprise has a hardware lab, leverage it. For fuzzing, tools like OSS-Fuzz (Google’s project) might already be fuzzing OQS or others – reading their findings can guide your own. There are community resources where people post issues discovered in early PQC libraries; staying informed through those channels (GitHub issues, crypto forums) helps.

Performance testing skills might require knowledge of specialized benchmarking tools or writing custom ones (maybe using Go/Java/Python to simulate a lot of clients). If not already known, training on using cloud infrastructure to simulate large-scale loads is useful (for example, using AWS EC2 to spin up 1000 clients hitting a test endpoint to measure TLS throughput).

Candidate roles

An ideal person could be a pen tester or red teamer with an interest in crypto, as they have the break-it mentality for side-channel and robustness tests. A QA lead in the software team could pivot to cover PQC if they have an interest in deep technical testing (especially if they have done performance or security testing before). Sometimes academic partnerships help – some companies collaborate with university researchers to test PQC implementations, because academics may have expertise in cryptanalysis and side-channels. A liaison role, or simply incorporating results from academic studies, can supplement internal efforts.

Key Attributes

  • Thoroughness: They need to be the kind who think of all the corner cases and weird scenarios (What if two PQC handshakes overlap? What if the key exchange message is lost and retransmitted? etc.).
  • Patience and perseverance: Testing can be repetitive, and sometimes you have to run things for a long time to catch rare bugs. The engineer must not cut corners (e.g., “we tried 10 handshakes, looks fine” is not enough; if the tests are scripted, run millions).
  • Skepticism: A good assurance tester never fully trusts the claims until tested. If a vendor says “constant-time,” they try to verify it. If everyone says performance impact is negligible, they still measure.
  • Technical curiosity: Understanding why something failed or is slow is crucial. The best testers often debug into the code to figure out the root cause of an interoperability issue, rather than just reporting “X and Y don’t work”.
  • Collaborative but independent: They should work with developers and other teams to communicate issues, but also operate somewhat independently to maintain an objective view. They might need to challenge developers (“we need to fix this leak”) even if devs feel it’s fine. That takes a bit of assertiveness combined with evidence-based communication.

Performance Evaluation

  • Test Coverage: Measure what percentage of new crypto components have dedicated tests. For example, do we have tests for all PQC cipher suites we enabled? Did we test all major client types (browsers, mobile apps, etc.)? If significant gaps (e.g., “we never tested iOS client with the new TLS”), that’s a miss. Good performance is comprehensive coverage of scenarios.
  • Issues Detected Pre-production: A successful assurance phase is often judged by catching and resolving issues before go-live. If the team can enumerate a list of bugs or performance bottlenecks they identified and that got fixed, that’s great. If major issues only came to light after deployment (and were not in the test results), that suggests testing was incomplete or not realistic enough.
  • No major surprises in production: Related to above – if after rollout there are minimal incidents attributable to the new crypto, it means testing was effective. If you had to roll back or patch in a hurry due to something testing should have caught, that’s a negative.
  • Continuous Improvement: Does the team incorporate new findings and retest as things evolve? For instance, if a new version of OpenSSL or a new HSM firmware comes, do they run the regression tests? If a new PQC algorithm variant is standardized, do they plan tests for it? A performance metric can be number of regression runs or updates to test cases per quarter.
  • Documentation of results: Are the test results and methodologies well documented and reproducible? If an auditor asks “how do you know your PQC implementation is secure and interoperable?”, the team should have a test report or at least a collection of evidence. If that’s readily available and clear, kudos. If not, then the assurance work, even if done, isn’t visible or verifiable.
  • Team engagement: Often assurance teams are separate, but for something new, they should be engaging closely with dev/protocol teams. So qualitatively, measure how many defects or suggestions they contributed to the dev process. If the dev team acknowledges them as instrumental (like “they found this and we fixed it before release”), that’s a sign of good integration.

Data Governance, Privacy and Compliance

Skills and Knowledge

This domain ensures that the policies around data protection and regulatory requirements are aligned with cryptographic changes. It’s a bit less hands-on-technical and more about mapping and planning, but very crucial. Key competencies:

Data classification and longevity

The team (often privacy officers or data governance folks) should link data classification policies to cryptographic requirements in a post-quantum context. Specifically, they must identify which data in the organization has a long confidentiality requirement – e.g., personal data that must remain confidential for X years due to law, or trade secrets that are valuable indefinitely, state secrets with long declassification timelines, etc.

Skills include working with business units to estimate how long certain data needs to stay secure. This is important because of the harvest-now, decrypt-later threat: any data whose sensitivity lasts, say, 5-10+ years is a candidate for near-term PQC protection. The governance person should update retention schedules and encryption policies to reflect this. For example, if a certain database contains patient health info that must remain confidential for 20 years (perhaps a minor’s records), then the encryption of that DB should move to PQC sooner rather than later. They might also push initiatives to shorten the sensitivity period of some data (if you can delete or tokenize data earlier, you reduce the risk window).
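This prioritization logic is often summarized as Mosca’s inequality: data is already at risk if the years it must stay secret (x) plus the years migration will take (y) exceed the years until a cryptographically relevant quantum computer exists (z). A minimal sketch, with purely illustrative figures:

```python
def at_risk(secrecy_years: float, migration_years: float, years_to_crqc: float) -> bool:
    """Mosca's inequality: data is exposed if x + y > z, i.e. the time it must
    stay secret plus the time to migrate exceeds the time until a
    cryptographically relevant quantum computer (CRQC) arrives."""
    return secrecy_years + migration_years > years_to_crqc

# Illustrative figures only: 20-year patient records with a 5-year migration,
# against an assumed ~10 years to a CRQC.
print(at_risk(20, 5, 10))   # True – this data should move to PQC now
print(at_risk(0.5, 1, 10))  # False – short-lived telemetry can wait
```

Running every classified data set through this kind of check turns the governance policy into a ranked migration queue.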

Regulatory requirements mapping

Many industries have encryption mandates (PCI DSS for credit card data, HIPAA for health, GDPR for personal data in the EU, etc.). This skill involves reviewing how those might evolve with PQC. Some regulators may explicitly require PQC by certain dates (the US is hinting at it via OMB directives for federal agencies, the EU with 2030 for critical infrastructure). So a compliance specialist will track laws, regulations, and standards to ensure the enterprise’s cryptography roadmap meets any external deadlines or expectations. They should be able to map each system’s crypto upgrade plan to the relevant compliance control. E.g., if PCI requires “strong encryption,” at some point RSA might not count as strong – be ahead of that. If new standards (like ISO or NIST guidelines) incorporate PQC readiness, incorporate those into audits.

Skills include translating government roadmap timelines to internal policy (e.g., say “All sensitive data systems must have a migration plan in place by 2025” in your policy because you anticipate regulators expecting it). They should also ensure assurance artifacts (like risk assessments, audit checklists, vendor questionnaires) include questions about PQC.

Privacy and contractual considerations

Privacy officers should consider if the introduction of PQC changes any consent or cross-border data transfer issues. Likely not much, but for example, if you rely on encryption as a mitigator for GDPR (encrypted data can sometimes be transferred more freely), you’d want to ensure your encryption remains strong against future threats. They should update any public-facing privacy commitments (“we protect your data with state-of-the-art encryption, including post-quantum algorithms where appropriate” could be a forward-looking statement). Contractually, procurement might require vendors to commit to providing PQC upgrades (but the privacy/data governance team can advise on that requirement to legal).

Also, ensure breach response plans consider quantum (if an adversary got encrypted data, is that a breach now if it’s quantum-vulnerable? Possibly yes, if we expect it to be decrypted later).

Crypto-agility in policies and standards

The data governance skill also includes updating internal encryption standards and key management policies. For instance, an internal standard might have specified “AES-256 and RSA-2048 are approved.” Now it should specify the approved PQC algorithms and maybe disallow some aging ones. The person should craft language that is agile: e.g., “Use NIST-approved algorithms of either conventional or post-quantum variety as listed in [annex]” and keep that annex updated. They also might incorporate requirements that systems be crypto-agile (like requiring support for algorithm changes in design phase). Essentially baking in adaptability to future unknowns (maybe alternate PQC in case one breaks).
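The "annex" approach can even be made machine-checkable, so audits compare each system’s declared algorithms against the current lists rather than against hard-coded policy text. The algorithm lists below are illustrative, not a recommended policy:

```python
# Sketch of an "agile" policy annex: the policy body references these lists,
# and only the annex changes as standards evolve. Entries are illustrative.
APPROVED = {
    "kem": {"ML-KEM-768", "ML-KEM-1024", "X25519MLKEM768"},
    "sig": {"ML-DSA-65", "ML-DSA-87", "ECDSA-P256"},  # classical still allowed in transition
}
DEPRECATED = {"RSA-1024", "SHA-1", "3DES"}

def check_system(declared):
    """Compare a system's declared algorithms against the current annex."""
    findings = []
    for role, alg in declared.items():
        if alg in DEPRECATED:
            findings.append(f"{role}: {alg} is deprecated")
        elif role in APPROVED and alg not in APPROVED[role]:
            findings.append(f"{role}: {alg} is not on the approved list")
    return findings

print(check_system({"kem": "X25519MLKEM768", "sig": "RSA-1024"}))
```

When a new algorithm is standardized (or one is broken), only the annex sets change – the policy text and the check stay the same, which is the crypto-agility the paragraph above calls for.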

Audit and artifact updates

Many organizations produce artifacts like System Security Plans (SSP), encryption inventories for regulators, or respond to audits and RFIs about cryptography. The skill here is to incorporate PQC readiness into those. For example, if an auditor asks “Do you use industry-standard encryption?”, now the answer might include “Yes, and we are in process of transitioning to NIST’s post-quantum standards according to timeline X.” The governance professional should prepare statements, evidence (like the cryptographic inventory and roadmap) for such purposes. They might also update third-party risk assessments to ask vendors how they are preparing for PQC, since your data may reside with them. This extends the influence outward (some regulators will expect you to flow down crypto requirements to vendors).

Why it Matters

Data governance and compliance provide the business and legal rationale for doing all this and help avoid nasty surprises like being out of compliance or breaching contractual obligations. For example, consider GDPR’s requirement for “appropriate technical and organizational measures” to protect personal data – one could argue that as PQC becomes a known necessary measure, failing to plan for it could be seen as negligence in a few years.

On the flip side, showing regulators that you have a concrete PQC migration plan can earn goodwill and perhaps avoid penalties after an incident (“we encrypted, but the attacker got ciphertext – however, we had a plan to upgrade that encryption and were following NIST guidance” might go better in post-quantum breach litigation than “we never thought about quantum risk”).

Additionally, focusing on data lifetime prioritizes resources: without governance, a dev team might waste time PQC-protecting trivial data that doesn’t need it, while neglecting something like long-term archives of sensitive info. The privacy/governance perspective ensures focus on what matters – e.g., “Our HR system holds PII that needs protection for decades (employee SSNs, etc.), whereas some telemetry data we delete in a week might not need urgent PQC.” That nuance can save effort and provide a risk-based approach to the rollout.

Another key point: by involving privacy and compliance early, you also address the human element – for example, training and awareness. Privacy officers can include quantum-safe practices in their training, so employees know the company is moving in that direction. They also ensure that customer commitments are kept: if you promise clients you use top-notch security, eventually lacking PQC could conflict with that promise once PQC is considered top-notch.

Also, certain industries (like government contractors) may soon need to comply with requirements such as NSA’s CNSA 2.0 suite, which mandates PQC algorithms for national security systems, with transition deadlines beginning in 2025. Data governance folks tracking these ensure your enterprise systems meet them if applicable (for instance, a defense supplier’s products might be contractually required to use CNSA 2.0 algorithms for software signing by 2025).

Where to Upskill & Roles

Likely roles here are Privacy Officers, Data Protection Officers (DPOs), Compliance Managers, Information Governance leads, and Security Policy writers. They might not need deep cryptography training (though understanding basics is necessary), but they should upskill on the specifics of PQC standards and timelines. For instance, reading the executive summaries of documents like the EU coordinated roadmap for PQC or the NCSC roadmap. Also, any sector-specific guidance: e.g., the US Health Sector Coordinating Council might issue healthcare-specific PQC guidance; a compliance person should watch for that.

Training could involve attending webinars by law firms on “quantum computing and data privacy” (some legal conferences cover that emerging topic), or reading analysis from groups like ENISA on crypto transitions.

They should also connect with the security team’s work (governance and tech teams) to get facts (like inventory results) that feed into their risk assessments. An upskilling idea: have them sit in on some of the crypto steering meetings or read NIST’s NCCoE guidance to understand what’s expected technically so they can align policies.

Candidate roles

If you have a Chief Privacy Officer or Compliance Officer, they would be the champion. If not, perhaps a risk manager who handles policies. In some cases, the CISO themselves (if wearing governance hat) does this, but ideally you have dedicated GRC (Governance, Risk, Compliance) personnel.

Key Attributes

  • Attention to policy detail: They need to translate technical things into policy without errors. That means being precise about algorithm names, dates, etc., in policy documents.
  • Forward-thinking: Good governance folks anticipate where regulations will go rather than just reacting. So someone who follows emerging trends (say, they know that in a year or two regulators will probably mandate inventories) and acts now.
  • Communication: They often have to justify to leadership or auditors the need for budget or exceptions. Being able to articulate quantum risk in plain language and tie it to legal and business impacts is key.
  • Organizational influence: They should successfully influence data owners to take this seriously. For instance, if they say “we need to protect archive tapes with PQC encryption,” they must persuade the IT archive team or business owners to invest in that. That often involves building a case with compliance requirements or risk scenarios.
  • Diligence: Ensuring every relevant policy and contract clause is updated is painstaking. A diligent, checklist-oriented approach helps (e.g., search all vendor contract templates for encryption clauses and update them).

Performance Evaluation

  • Policy Updates Issued: A tangible metric: has the organization updated its security policies/standards to incorporate crypto-agility and PQC? If within a year of starting the program, there are published documents (like a revised encryption standard, a new section in data protection policy about long-term confidentiality/PQC), that’s a win. If nothing changed on paper, things might fall through cracks later.
  • Data Prioritization Completed: Did the team deliver an analysis of which data sets/systems are high priority for PQC (based on sensitivity and longevity)? If yes, and those priorities are guiding the technical rollout, good job. If not, technical teams might be guessing what to do first.
  • Regulatory Compliance Achieved/Maintained: If any external mandates came due (for example, say by 2024 a regulator required a cryptographic inventory or PQC plan), measure if you met that with minimal issues. A positive indication is regulators giving positive feedback or at least no findings on your crypto readiness.
  • Audit Readiness and Response: If audited (internal or external) on crypto agility, did the organization pass? You might simulate an audit: ask an internal audit to review crypto readiness, see what they find. If the compliance and governance folks have prepared well, audit should find compliance with emerging best practices (like existence of inventory, roadmap, etc.).
  • Third-Party Management: Check how many of your critical vendors have committed to PQC readiness in contracts or assessments. If after efforts, a significant number now have clauses or timelines in their contracts for crypto-agility (like “Vendor will support NIST PQC algorithms by 202X”), that’s evidence the governance team extended the posture to supply chain.
  • Training & Awareness: Perhaps evaluate if employees (esp. devs, architects) have been informed about crypto-agility guidelines. This could be a metric like “90% of architects attended a briefing on our crypto-agility policy”. Governance can drive such awareness. If no one in dev knows about it, governance hasn’t percolated down.
  • Reduced Data Exposure Risk: Harder to measure directly, but maybe track if any data stores were reclassified or had retention reduced because of this effort (that’s a form of risk reduction). For example, if they discovered certain archives that actually didn’t need to be kept and deleted them sooner – that’s quantifiable risk drop (less data for attackers to harvest and hold).

Procurement and Vendor Management

Skills and Knowledge

This area ensures that the organization’s vendors and suppliers (as well as procurement processes) are on board with crypto-agility. Key competencies:

Embedding crypto-agility requirements in RFPs and contracts

Procurement officers (with guidance from security) should update RFP templates, security questionnaires, and standard contracts to include requirements for PQC and SBOM/CBOM. Skills include knowing what to ask vendors: e.g., “Provide a cryptography bill of materials for your product, and a timeline for supporting NIST PQC algorithms”. Also, including language that vendors must maintain cryptographic agility – meaning if an algorithm is broken or deprecated (including by quantum advances), the vendor must have a plan to patch or upgrade in a timely manner. This might reference external standards (like requiring compliance with NSA’s CNSA 2.0 for products in certain categories by specific dates). The procurement team needs to understand enough to evaluate vendor responses – e.g., if a vendor says “We use 4096-bit RSA and that’s enough,” the team should flag that as not meeting future-proof requirements. They may incorporate CNSA 2.0 guidelines or others as baseline: for instance, specifying that any product used beyond 2030 must support at least one of the FIPS 203/204/205 algorithms.

Evaluating vendor roadmaps and evidence

Vendors will often claim “we’re working on it.” The skill here is to critically evaluate and get evidence. For hardware/software vendors, ask if they have done any interoperability testing, if they have prototype support available for testing, or if they have certification timelines. For example, if buying a new VPN appliance, ask the vendor: “Does it support hybrid key exchange now? If not, when? Have you tested it with any clients? Is there a firmware upgrade path or do we need new hardware?” Similarly, for cloud services, press account reps for their timeline on enabling PQC (many cloud providers have public statements on their efforts – use that in evaluation). The ability to read a vendor’s security whitepaper and spot if PQC is even mentioned is useful. Also, include as criteria in scoring RFPs: vendors with credible PQC readiness get a higher score. This motivates them to prioritize it.

Supply chain SBOM/CBOM integration

Ensure that the procurement process obtains SBOMs from suppliers and, where possible, CBOMs. As SBOM becomes standard (e.g., US executive orders are pushing SBOM for government vendors), extend that to cryptography. A skill here is working with suppliers to maybe produce a custom report: if a vendor can’t give a full CBOM, perhaps at least ask, “List all cryptographic algorithms and libraries in your product.” This can go into a risk register. Procurement or vendor management staff should track these and highlight any use of soon-to-be-obsolete crypto (like if a product uses RSA-1024 or SHA-1, that’s an immediate red flag independent of quantum!). The ability to maintain a database of vendor crypto info and map it to transition plans is key.
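The vendor crypto register described here can start as something very simple – a list of suppliers, the algorithms they report, and their roadmap status, with automatic flagging of weak crypto. All vendor data below is invented for illustration:

```python
# Minimal vendor-CBOM register sketch; vendor names, algorithm lists, and
# roadmap entries are invented for illustration.
VENDORS = [
    {"name": "AcmeVPN", "algorithms": ["RSA-2048", "AES-256", "SHA-256"],
     "pqc_roadmap": "ML-KEM hybrid in 2026 firmware"},
    {"name": "LegacyHSM", "algorithms": ["RSA-1024", "SHA-1"],
     "pqc_roadmap": None},
]
WEAK = {"RSA-1024", "SHA-1", "MD5", "DES", "3DES"}

def triage(vendors):
    """Flag vendors using already-weak crypto or lacking any PQC roadmap."""
    flags = []
    for v in vendors:
        weak = sorted(set(v["algorithms"]) & WEAK)
        if weak:
            flags.append((v["name"], "weak algorithms: " + ", ".join(weak)))
        if not v["pqc_roadmap"]:
            flags.append((v["name"], "no PQC roadmap on record"))
    return flags

for name, issue in triage(VENDORS):
    print(f"{name}: {issue}")
```

Each flag feeds the risk register and the renewal conversation: a vendor shipping RSA-1024 or SHA-1 today is a problem regardless of quantum, and one with no roadmap needs contractual pressure or replacement planning.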

Vendor labs and pilot testing

Some vendor management may entail organizing joint testing – e.g., ask your TLS inspection appliance vendor to run a test with you enabling PQC ciphersuites to see if it works, before you need it in production. That means coordinating resources and perhaps NDA for early firmware, etc. The vendor manager needs the relationship and influence to get those kinds of cooperation.

Staying aligned with standards like CNSA 2.0

For sectors dealing with government, NSA’s Commercial National Security Algorithm (CNSA) Suite 2.0 outlines a timeline and approved interim algorithms (certain classical algorithms remain allowed until the PQC replacements are deployed, after which systems must switch). A procurement specialist should mirror those requirements in procurement for any systems that need to align (e.g., if you build solutions for government, ensure the components meet CNSA 2.0 by the required dates).

Budgeting for upgrades

In procurement planning, anticipate that some hardware or software may require an upgrade or replacement to support PQC. The skill is to incorporate that into refresh cycles and budgets. For example, if you have a fleet of IoT devices deployed that cannot be upgraded to PQC, you might plan to replace them by 2030 and write that into capital plans. Communicate with vendors on their plans – if they have a new model coming that supports PQC, plan when to switch.

Incident response with vendors

If a cryptographic emergency happens (say a PQC algorithm is suddenly found weak or a major vulnerability in an implementation), vendor management ensures that suppliers respond quickly. Having contractual language for critical patches (like “vendor will remediate critical crypto vulnerabilities within X days”) is part of this.

Why it Matters

Your security is only as strong as your weakest supplier. If you do everything right internally but a critical third-party software you use doesn’t upgrade its crypto in time, that could become your compliance or security problem. By baking crypto-agility expectations into vendor relationships now, you reduce the risk of being caught with unsupported software or hardware when quantum attacks materialize. We have seen analogous situations historically: e.g., when SHA-1 was deprecated, many organizations had to drop vendors that were slow to add SHA-2 support. The quantum transition could be even bigger; you don’t want to rip-and-replace major systems at the last minute because the vendor failed to act. Using procurement muscle can push vendors to prioritize PQC if they know customers demand it.

Also, consider long product lifecycles: some equipment (like in telecom or OT) stays in use for 10-20 years. Procurement now needs to ensure anything being bought in 2025 will not be a security liability by 2030. If a vendor has no quantum-safe roadmap for a product expected to be in use in 2035, maybe choose a different vendor or plan to replace earlier.

Additionally, regulators are likely to extend PQC expectations to third parties. For example, if a bank outsources some processing, regulators will want to know that the service provider is also crypto-agile. Ensuring your contracts cover that will both protect you and satisfy examiners.

Finally, obtaining SBOM/CBOM from vendors helps your own inventory completeness. It may reveal things like an embedded component using vulnerable crypto that you didn’t know about. Having that transparency is increasingly part of supply chain security best practices (CISA pushes SBOM; extending it to CBOM is a natural next step).

Where to Upskill & Roles

This is typically handled by Procurement officers, Vendor risk management teams, Legal (contract negotiators), and Third-party risk assessors. They may need some education on cryptographic concepts so they can ask the right questions. Possibly provide them with a checklist or questionnaire template (security team can help craft one).

Upskilling can involve attending supply chain security workshops or specific training like Certified Third Party Risk Professional (CTPRP) – though PQC might not yet be in such courses, the principles of updating requirements are. Encourage procurement folks to engage with industry groups; e.g., many industry consortiums (like FS-ISAC for financial services) are discussing quantum risk – they could glean how peers are handling it with vendors.

Candidate roles

If you have a vendor risk management committee, add the quantum risk to their radar. The head of procurement or category managers for IT purchases are key – they can include requirements in RFIs. Also, legal counsel who work on contracts should be briefed to add PQC language.

Key Attributes

  • Detail-oriented contract knowledge: They must embed technical requirements precisely in legal language. Not everyone can word “cryptography bill of materials” correctly in a contract; someone who takes care to get terminology and obligations right is needed.
  • Assertiveness with suppliers: Some vendors might downplay the need (“Oh, that’s far off”). Procurement needs to be firm that it’s a requirement. If they have good negotiation skills and can leverage the business (like “we won’t purchase if you don’t commit to X”), that’s effective.
  • Collaboration with technical teams: They should work closely with security and engineering to understand what’s needed and verify vendor claims.
  • Forward-looking budgeting: Good procurement planners think of total cost of ownership. Here that includes potential future upgrades for PQC.
  • Risk assessment skill: Many vendor managers have questionnaires for various domains (data handling, etc.). Add crypto risk – they need to assess if a vendor’s response of “we’ll do it later” is acceptable or high risk, and escalate accordingly.

Performance Evaluation

  • Contractual Coverage: Check what percentage of new contracts or renewals in the past year include crypto-agility/PQC clauses. If that number is high (say 80-100% for critical vendors), the procurement team has institutionalized it. If it’s ad-hoc or missing, that’s a gap.
  • Vendor Roadmap Tracking: Does the team maintain a list of critical vendors with their stated PQC readiness status and timelines? If so, that’s great. If asked, “when will Vendor X’s product support PQC?” the team should have an answer or at least documented they asked.
  • Supply Chain Inventory Integration: Measure if SBOMs/CBOMs from vendors are being collected and used. Maybe count how many vendor products you have CBOM data for. The goal should be an increasing trend.
  • No “straggler” surprises: When the org is ready to transition, ideally all key suppliers are ready too. If, during testing or later rollout, you discover a crucial product cannot support a needed algorithm and there’s no update, that means procurement/GRC didn’t catch it earlier. Avoiding such scenarios is a success metric.
  • Vendor Risk Scoring: Many orgs have a risk score for vendors. If crypto-agility is now a factor in those scores (e.g., a vendor not having a plan is marked as higher risk), and some management decisions have been made based on that (like choosing one vendor over another or pressing a vendor to remediate), then vendor management has effectively integrated the risk.
  • Budget Alignment: If procurement foresaw that certain systems would need replacement or upgrade for PQC and budgeted accordingly, measure if those funds were requested/approved. It indicates proactive planning. If in 2029 you suddenly realize you need millions to swap gear, that means it wasn’t budgeted and thus procurement planning fell short.
  • Communication: Internally, do project managers and purchasing agents know to include PQC in their evaluation? If yes (perhaps through updated procurement policies or training sessions held), that’s good performance by the procurement leads.
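Several of the metrics above lend themselves to simple automation. The sketch below computes contractual coverage of PQC clauses and a crypto-agility adjustment to a vendor risk score; the field names, roadmap statuses, and penalty weights are illustrative assumptions, not a standard scoring model.

```python
# Sketch: two of the performance metrics above as simple computations.
# All field names and weights are illustrative placeholders.

def contractual_coverage(contracts):
    """% of critical-vendor contracts containing a PQC/crypto-agility clause."""
    critical = [c for c in contracts if c["critical"]]
    if not critical:
        return 0.0
    covered = sum(1 for c in critical if c["has_pqc_clause"])
    return 100.0 * covered / len(critical)

def adjusted_risk(base_score, pqc_roadmap_status):
    """Raise a vendor's risk score when there is no credible PQC plan."""
    penalty = {"committed_date": 0, "vague_plan": 10, "no_plan": 25}
    return base_score + penalty.get(pqc_roadmap_status, 25)

contracts = [
    {"vendor": "A", "critical": True, "has_pqc_clause": True},
    {"vendor": "B", "critical": True, "has_pqc_clause": False},
    {"vendor": "C", "critical": False, "has_pqc_clause": False},
]
coverage = contractual_coverage(contracts)  # 50.0 -- one of two critical vendors
risk_b = adjusted_risk(40, "no_plan")       # 65 -- flag for escalation
```

Tracking these numbers over successive quarters gives the increasing trend the metrics above call for, and a vendor whose adjusted score crosses a threshold becomes an explicit escalation item rather than an ad-hoc judgment.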

Building the Team: Key Roles

Bringing together all the skill domains above, a large enterprise will form a multi-disciplinary team for crypto-agility and PQC readiness. Here are the typical roles and how their responsibilities map to the skill stack (some individuals may wear multiple hats in smaller organizations):

  • Executive Sponsor (e.g. CISO or CIO): Provides top-level backing, secures budget, and sets risk appetite. This person doesn’t do hands-on work but is accountable to the board for reducing quantum risk. They chair the crypto steering committee and make final calls on priorities and risk acceptance (for instance, deciding if certain lower-risk systems can accept residual classical crypto risk longer). Their support ensures everyone else can do their job without political roadblocks.
  • PQC Program Lead / Program Manager: The central coordinator (could be a senior manager in security or IT). Runs the day-to-day of the program: tracking the inventory progress, ensuring the roadmap milestones (like those in the phased plan) are met, reporting status to executives, and coordinating among teams. They translate the strategy into execution, maintain the backlog of crypto-agility tasks, and resolve inter-team issues. Essentially, they keep the momentum and ensure no domain falls behind.
  • Enterprise Crypto Architect: A senior technical expert who designs the overall approach – from choosing which hybrid modes to use, to setting architectural standards (e.g., “we’ll use a dual PKI approach” or “we’ll require support for both classical and PQC during transition”). They understand all domains enough to make decisions that fit together. For example, they decide on what algorithms are recommended, how key management will integrate, how applications should request PQC from KMS, etc., and document reference architectures or patterns for teams to follow. This role often authors the updated crypto standards and reviews project designs for compliance.
  • PKI/HSM Engineers: The specialists running certificate authorities, HSMs, and key management systems. They implement the changes in those systems – generating new PQC test CAs, configuring HSM partitions for new algorithms, and updating the scripts or tools developers use to obtain certificates or keys. They handle the nitty-gritty of issuance and key storage. During pilots or rollout, they'll be the ones, say, creating a Dilithium (ML-DSA) CSR for a web server and getting it signed, or enabling an HSM's PQC firmware and testing that it works. They also likely manage any stateful signature keys (tracking usage so that states are never reused).
  • Protocol and Application Engineers: Developers or sysadmins focused on upgrading software and services. One subset focuses on network protocols – e.g., updating TLS configurations enterprise-wide (load balancers, app servers) and ensuring things like SMTP, VPN, etc., can use PQC. Another subset is application developers who embed cryptography in application logic – they modify code to call new libraries or adjust data formats for PQC. They are also in charge of performance optimizations at the application level (like adjusting timeout values or message sizes as needed). They work closely with architects and PKI to ensure the certs/keys they need are available.
  • AppSec and DevSecOps Teams: They integrate crypto-agility into the development pipeline. For instance, they might put a check in CI so that if a developer accidentally uses a disallowed algorithm (like hardcoding an RSA key), the build fails. They update static code analysis rules to flag insecure or non-agile crypto usage. They also maintain SBOM/CBOM processes – e.g., ensuring each build produces an SBOM, and maybe a CBOM section, and that these are reviewed for policy compliance (no weak crypto). Essentially, they act as quality control to enforce the policies set by the crypto architect. They also may assist in the discovery process by scanning code for crypto.
  • Data Governance & Privacy Officer: Focuses on the data side as discussed – making sure that the data that needs PQC gets it first, and that plans align with privacy obligations. They also handle communications: if customers or regulators inquire about “Are you quantum-safe?”, this role crafts the response and ensures it’s accurate (with input from technical teams). They update data protection impact assessments if needed to consider quantum threats for certain datasets.
  • Vendor Management & Procurement: Ensures external products and services keep up. They coordinate with all the technical roles to know what vendor support is needed by when (e.g., if the PKI engineer says “we need our Database product to support PQC client certs by 2026,” procurement can engage the vendor). They track those commitments and possibly maintain a risk register of vendors with crypto weaknesses.
  • Operations & Incident Response: The ops teams (SOC, NOC, system admins) monitor the health of cryptographic systems – for example, certificate expiry and errors in handshakes. They also need "break-glass" procedures if something goes wrong: say a PQC algorithm causes a major outage, do they know how to disable it quickly? If a cryptographic incident occurs (like someone publishing an attack on an algorithm you use), the IR team assesses impact (using the inventory to see where it's used) and coordinates remediation. They need to be trained on new alerts – e.g., monitoring handshake failure rates as an indicator of potential issues. They also handle routine tasks like updating configurations across thousands of servers, which is non-trivial, so they need good automation to roll out cipher changes or certificate replacements rapidly when needed.
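The CI gate described for the AppSec/DevSecOps role can be sketched as a lightweight source scan. The patterns below are illustrative placeholders; a real pipeline would rely on proper SAST rules tied to the crypto architect's policy rather than regexes.

```python
# Sketch of a CI gate that fails the build when source code references
# disallowed or hardcoded crypto primitives. Patterns are illustrative;
# production pipelines would use dedicated static-analysis rules.
import re

DISALLOWED = [
    (re.compile(r"\bMD5\b", re.IGNORECASE), "MD5 is broken"),
    (re.compile(r"\bSHA-?1\b", re.IGNORECASE), "SHA-1 is deprecated"),
    (re.compile(r"BEGIN RSA PRIVATE KEY"), "hardcoded RSA private key"),
]

def scan_source(text, filename="<buffer>"):
    """Return a list of (filename, line_no, reason) violations."""
    violations = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for pattern, reason in DISALLOWED:
            if pattern.search(line):
                violations.append((filename, line_no, reason))
    return violations

snippet = "digest = hashlib.md5(data).hexdigest()\n"
issues = scan_source(snippet, "app.py")
# In CI, a non-empty result would fail the build (e.g., sys.exit(1)).
```

The same pattern list can double as input to the CBOM review step, so the policy the crypto architect writes is enforced in one place and consumed by both gates.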

Each of these roles is crucial; some organizations might combine them (one person might be both the crypto architect and PKI engineer in a smaller company, for instance, or the program lead might also be the enterprise architect in one). The key is that all these perspectives are represented. Crypto agility is a team sport – it spans management, engineering, operations, and compliance.


Additional Resources for Skill Development

  • Open Source and Reference Implementations: Hands-on practice is invaluable. The Open Quantum Safe (OQS) project provides liboqs (a C library of PQC algorithms) and integration into OpenSSL 3 (via the oqs-provider), which allows you to stand up test TLS servers/clients and X.509 certificate experiments today. This is a free playground to let your engineers get familiar with PQC APIs and measure performance in a controlled setting. Likewise, open-source toolkits like wolfSSL and BoringSSL have PQC support in progress – engaging with those projects (even just testing their branches) can surface interop issues early.
  • Standards and Working Groups: Following standards development keeps you ahead. The IETF TLS Working Group and LAMPS Working Group (for certificate and PKI) are where much of the transition mechanisms are being defined (hybrid key exchange drafts, composite certificate formats, etc.). Their mailing lists and meeting minutes are public. Allocating someone to monitor those (and even contribute, if possible) means you’ll know how browsers and others plan to implement things. Similarly, NIST’s NCCoE has a project on migration to PQC – their Special Publication 1800-38 volumes (A, B, C) are essentially how-to guides with example solutions. They also host workshops – participating in those will connect you to a community of practitioners tackling the same issues.
  • Government and Industry Guidance: We’ve cited many – to recap a few key ones: The U.S. OMB M-23-02 memo is a straightforward read to understand what baseline the government expects (inventory, planning, funding). The UK NCSC “Timelines for PQC Migration” gives a strategic view of phases through 2035. The European Commission’s Coordinated Roadmap 2025 shows political commitment and timelines (notably the 2030 target for critical infra). Ensuring your strategy aligns with these can also help argue for resources (“regulators say we should do X by Y, we are doing it”). Also watch sector-specific bodies – e.g., the U.S. Financial Services ISAC or Health ISAC often release sector-tailored guidance; NSA’s CNSA 2.0 for national security systems is useful even if you’re not in that realm, as it sets a high bar for algorithms and timing.
  • Community and Training: The Linux Foundation's OpenSSF (Open Source Security Foundation) has been active in supply chain security – it offers free courses on subjects like SBOM and secure development that tie into crypto-agility (for example, understanding SBOM helps with CBOM). The Cloud Security Alliance runs a Quantum-Safe Security Working Group where practitioners share progress and tools. Joining such alliances, or at least reviewing their whitepapers (CSA has published a useful primer on PQC for executives), can provide ready-made messaging and technical pointers. For developers, large tech companies have been publishing tutorials: e.g., IBM's Developer network has tutorials on implementing quantum-safe crypto (given IBM's involvement in PQC, these are quite practical, such as using OpenSSL with PQC on IBM Cloud).
  • Vendor Support and Labs: Engage with your key vendors’ early access programs. Thales, Entrust, Utimaco, IBM, AWS, Microsoft, Cloudflare – all have beta programs or at least demo environments for their quantum-safe features (some we mentioned, like Utimaco’s simulator, or Cloudflare’s offer to let researchers test on their network). By working in these labs, your staff will gain experience and also influence the products. It’s much easier to ask a question and get support when you’re part of their official testing group. Also attend vendor webinars dedicated to PQC; for instance, cloud providers have sessions about how to use their services in hybrid mode.
  • Academic Collaboration: If you have R&D budgets, consider sponsoring or collaborating on academic research in quantum-safe cryptography relevant to your industry. Some organizations partner with universities to evaluate performance or side-channel resilience of PQC in their specific use cases (e.g., a bank might fund a study on PQC in blockchain or in core banking transactions). This can give you early insight and talent exposure.

Essentially, the knowledge is out there – tap into it rather than going it alone. The field is evolving fast, and being connected to the community will help you stay updated. Starting now and learning from each other is the way to compress that time.


Conclusion

The advent of quantum computing represents a once-in-a-generation shift for cybersecurity. However, as we’ve outlined, the path to quantum readiness is navigable with the right combination of skills, planning, and proactive execution. By leveraging existing strengths – the people and processes you already have – an enterprise can evolve its cryptographic foundations without needing a PhD in quantum physics on staff. In fact, quantum-proofing your organization is less about radical new technology and more about disciplined security management: inventory your assets, keep your systems updated, plan for change, test thoroughly, and iterate.

Crucially, the time to act is now. Standards are in place, and threat advisories from top agencies warn that waiting until quantum computers arrive is far too late. By beginning the transition today, you’re not only protecting against tomorrow’s decryption threats but also strengthening your agility to handle any cryptographic change (even unforeseen ones). Organizations that build crypto-agility into their DNA – treating cryptography as a living control with owners, budget, and metrics – will be those that can swiftly swap out algorithms when needed, whether due to quantum breakthroughs or classical vulnerabilities. This adaptability will soon be seen as a hallmark of good security governance, much like patch management and incident response are now.

The reassuring message is that you likely already have the talent needed to start this journey. Your security architects, network engineers, PKI admins, developers, and GRC officers – with some targeted upskilling – are fully capable of executing a PQC migration. It’s about focusing their efforts and fostering cross-team collaboration under a unified roadmap. Encourage your teams to experiment and learn; celebrate early victories (like that first successful hybrid TLS connection, or the first PQC-signed firmware pushed to devices). These build momentum and confidence.
