Introduction
Within the cybersecurity community, there is significant confusion about what a “quantum readiness” program really entails and how much it might cost. Some of my cybersecurity peers assume that defending against quantum threats will be as simple as clicking an “Update” button like any other security patch, while others fear it’s an almost impossible overhaul that could run into billions of dollars. (Indeed, the U.S. federal government estimates its own PQC migration will cost about $7.1 billion by 2035.)
The reality lies between these extremes: transitioning to post-quantum cryptography (PQC) is neither a trivial software update nor an unattainable feat, but it is an enormous, multi-year effort – likely “the largest and most complex digital transformation [an] organization has ever undertaken”.
In simple terms, quantum computers powerful enough to break current encryption (so-called cryptographically relevant quantum computers, or CRQCs) are expected by the 2030s. When they arrive, today’s common public-key algorithms (RSA, ECC, Diffie-Hellman, etc.) could be defeated, exposing any data or systems secured by those algorithms. Large enterprises like telecom providers rely heavily on these classical cryptosystems – in everything from network authentication and SIM cards to internal IT and customer-facing portals.
Preparing for the quantum threat therefore demands a comprehensive quantum-readiness program. This program is far more involved than a routine patch; it touches every layer of technology and operations and will span many years of coordinated work. Experts caution that moving to PQC is a marathon, not a sprint: one industry survey found most organizations optimistically expect to finish in ~4 years, while experts consider 8–10+ years a more realistic timeline for doing it properly. Some argue that a large telco would need 15–20 years. Past cryptographic transitions (for example, migrating from 1024-bit RSA to 2048-bit, or from SHA-1 to SHA-256) each took on the order of a decade – and those were much simpler than the post-quantum migration we face now.
Unlike a one-off event (say, Y2K remediation, which had a fixed deadline and a straightforward fix), becoming quantum-safe is an open-ended process with no single “flag day.” There is no universal cutoff where everyone flips to PQC overnight; instead, organizations must systematically identify and overhaul cryptography across potentially thousands of systems – often while those systems remain in service. And there is a large variety of potential fixes for each quantum vulnerability.
To give you a better picture of what a quantum-readiness program might look like in a large enterprise, I'll illustrate it with the example of a large (fictional) telecommunications company. Over the last 10 years I have worked with quite a few telecom providers on various phases of these programs, so the lessons learned come straight from the field. I'll try to cover the major aspects of a program, but this is by no means an exhaustive list (my latest such integrated program plan in a telco had over 120,000 tasks!). Telecom is a useful case study because telcos operate vast, cryptography-dependent networks and supply chains, making them representative of the challenges any large enterprise will face.
The goal, however, is broader than just telecom: by walking through the key phases, components, and timeline of a telco’s quantum migration initiative, I hope readers from any industry can get a sense of what lies ahead for them. I have tried to keep the challenges generic and applicable to any large enterprise, but if you are interested in telecom-specific challenges, see my other post: “Telecom’s Quantum‑Safe Imperative: Challenges in Adopting Post‑Quantum Cryptography.”
Program Scope and Duration: Why It’s a 10+ Year Journey
For a large telecommunications company (or any similar-scale enterprise), achieving full quantum readiness is a decade-long journey (or longer). This is not a simple software patch but a multi-year transformation that must be woven into all IT, OT and business operations. One major telco that began quantum-safe efforts early has been at it for nearly 10 years and is still nowhere close to finishing its PQC upgrades. Industry leaders consistently warn that 5–10 years is a realistic, maybe even an optimistic, timeframe for large organizations to migrate to post-quantum cryptography (or otherwise mitigate the quantum threat) – closer to 10 years if it’s done thoroughly and safely. Many even argue that telecom CISOs should plan for 15–20 years.
Several factors drive this long duration:
Enterprise-Wide Impact
Cryptography is embedded everywhere in a telco’s environment – from core network components, base stations, switching equipment and authentication systems to consumer devices and apps, web portals, internal IT databases, IoT sensors, and third-party platforms. And every single device, app, and system has to be assessed and, in some way, remediated.
Upgrading every instance of vulnerable cryptography means touching an enormous number of applications and devices, across different business units and technologies (IT, networks, and OT). This broad scope makes the effort sprawling and complex. It’s not just the “obvious” places like TLS in websites or VPN connections; even things like smart building controls, badge readers, backup systems, and mobile SIM cards might be running algorithms that need replacement.
The sheer scale of a telco’s asset base (potentially millions of devices and endpoints) ensures the project cannot be quick.
Multiple Layers of Cryptography
To make matters worse, every asset, system, or application likely has multiple layers of cryptography. In a single call, for example, from the moment a user’s device connects to the network, through call setup (or SMS delivery), across roaming interfaces, and into backend billing, hundreds of cryptographic mechanisms are at work. I broke that use case down in much more detail here: “Cryptography in a Modern 5G Call: A Step-by-Step Breakdown.”
No “One-Click” Fix
There is a dangerous misconception that one can simply swap out algorithms and be done – nothing could be further from the truth. There is no one-button update that makes an entire organization quantum-safe. A PQC migration isn’t going to be like installing a normal security patch; it demands years of effort and meticulous planning. Every use of RSA, ECC, DSA, and other quantum-vulnerable algorithms must be identified and addressed, often with different solutions for different use cases. New post-quantum algorithms (like CRYSTALS-Kyber, Dilithium, etc.) are not drop-in replacements in most contexts – data structures, message sizes, and performance characteristics all change. Many systems will need software modifications or even hardware upgrades to support them.
In short, there is no shortcut. Organizations must be prepared to “comb through” their infrastructure and retrofit or replace cryptography piece by piece.
Gradual Transition (Live Systems)
A telecom must remain operational 24/7, so the crypto transition has to occur gradually and with backward compatibility. There is no downtime window to take everything offline for an upgrade. This means for many years the organization will run classical and post-quantum cryptography in parallel – phasing upgrades carefully to maintain interoperability. It’s akin to “changing the engines on an airplane in mid-flight,” which complicates scheduling and coordination immensely. For example, if you upgrade a core network node to use a PQC algorithm, but its peers in other networks or customer devices don’t understand that algorithm, communication could break.
Thus, strategies like hybrid encryption (using classical+PQC together) are required as interim measures. The need to coordinate upgrades across thousands of endpoints (with minimal disruption) inevitably stretches the timeline. Careful choreography is needed to ensure that at no point does security or service availability fall apart during the migration.
External Dependencies
Telecoms depend on global standards and vendor support, which can significantly slow down the process. They cannot unilaterally change certain systems until vendors and industry bodies provide PQC-compliant solutions. For instance, mobile network standards (3GPP) and broadband protocols must officially incorporate PQC before carriers can fully deploy them.
Equipment manufacturers and software vendors need to build PQC into their products, and many are waiting for final standards or strong customer demand. A telco might be ready to upgrade, but if a critical vendor’s router or base station doesn’t yet support the new algorithms, the telco’s hands are tied. Aligning with these external timelines often stretches the program.
Similarly, whole industries are coordinating on this threat: the GSMA Post-Quantum Task Force (launched in 2022) now includes 50+ organizations collaborating on standards and guidelines. Until such standards and vendor roadmaps mature (which is happening, but gradually), a telco’s pace is constrained by the slowest links in its supply chain.
Internal Coordination & Culture
Human factors and organizational alignment are often the toughest hurdles, and they can add years of delay. Unlike past tech upgrades that stayed within one team’s remit, quantum readiness cuts across silos – it’s as much a business transformation as a technical one, especially if you incorporate crypto-agility into your quantum-readiness program (as you should). Achieving it demands strong executive sponsorship, cross-functional teams (IT, network engineering, security, risk, legal, procurement, etc. all have roles), and a clear governance structure to steer the effort. If those elements are lacking, progress will stall.
In practice, many organizations struggle with divergent opinions and turf wars: for example, some technology leaders remain skeptical and insist “quantum computers will never arrive” (or not soon enough to worry), so they drag their feet. Others resist the crypto inventory process because they fear an audit will expose vulnerabilities on their watch – they perceive it as a personal criticism of their past work, leading to defensiveness and non-cooperation. Business executives may be reluctant to invest in a program that doesn’t directly increase revenue, creating constant budget battles. And teams that have never collaborated (like network ops and application developers) suddenly must work in lockstep, which can breed confusion or conflict. Because there’s no established playbook for a PQC migration, different stakeholders often disagree on priorities and approach.
This misalignment can significantly slow the program’s momentum. Organizations that recognize these change-management challenges and address them through strong leadership, education, and incentives will move faster, whereas those with internal infighting or apathy might see the timeline slip well beyond a decade.
Evolving Threat and Solutions
The quantum threat timeline is uncertain and the cryptographic solutions are still evolving – which means the target is moving during the program. Cryptographers expect new developments in the coming years: it’s very possible that by the 2030s, improved PQC algorithms (or better versions of today’s algorithms) will emerge that supersede the initial standards. In fact, one of the originally promising PQC candidates (the SIKE algorithm) was broken by researchers even before standardization.
This inherent uncertainty means the program cannot aim for a one-time “quantum safe” state and declare victory; it must build crypto-agility so the organization can adapt on the fly. Executives need to acknowledge that becoming quantum-ready is an ongoing capability, not a one-off project. The program should plan for continuous monitoring of quantum computing progress and cryptanalytic research, and be ready to pivot if, say, an algorithm is cracked or a new standard emerges.
In other words, even after the initial 10-year migration, the work doesn’t completely end – it transitions into a mode of regular updates and improvements (more on that in Phase 4).
Major Program Phases
In summary, a large telco’s quantum-readiness initiative is a long-term, strategic resilience program. Leadership should view it not as a single project with a fixed end date, but as the establishment of a permanent new capability within the organization. With that understanding, let’s break down the program into major phases and components, illustrating what each phase involves. (I’ll keep using a telco context for concreteness, but the steps are analogous in other sectors.)
Phase 1: Discovery – Asset Inventory and Cryptographic Inventory
“You can’t secure what you don’t know.” The first crucial step is a thorough discovery of all assets and cryptographic usage across the organization. Before fixing anything, the telecom must identify every place where cryptography is used, and even before that, identify all the systems and devices in the environment. This phase has two closely linked components: (1) building a complete inventory of IT/OT assets, and (2) performing a detailed cryptographic inventory (sometimes called a Crypto-Bill-of-Materials, or CBOM) to map out the cryptography on those assets.
1.1 Asset Discovery
A comprehensive systems and assets inventory is the foundation. Imagine an initiative where every single device, application, and system in the company – without exception – must be discovered and catalogued.
For a large telco, this is a massive undertaking by itself. Many organizations find that even maintaining a basic IT asset inventory is challenging; here, the bar is even higher, extending to operational tech and “hidden” devices.
The telecom must enumerate everything: not just servers, routers, and employee laptops, but also things like network appliances, base station hardware, IoT sensors in facilities, HVAC controls, badge readers, industrial control systems, dev/test environments, backup equipment, etc. If it touches the network, it’s in scope. Seemingly trivial items – a smart thermostat in a data center, a vehicle-counting sensor in the parking garage, a smart power strip in a server rack – all rely on cryptography and could become entry points for quantum-enabled attackers. One security assessment humorously noted even a smart fragrance diffuser on the corporate network; such an innocuous device can have an embedded web server using TLS, for example.
The point is that the asset list will be extensive and varied. This inventory process often uncovers forgotten legacy systems or shadow IT devices that the security team wasn’t aware of. It’s common to find that the real asset inventory is much larger (and more diverse) than anticipated once all business units contribute information.
Gathering this asset inventory requires a multi-pronged effort. Automated network scanning tools can help identify active IP addresses and devices. IT service management databases and CMDBs provide starting records (though they are rarely complete). Procurement records can show what was purchased. Interviews with facility managers or operations teams might reveal OT equipment. In some cases, manual walkthroughs are needed – literally checking what is plugged in or installed at various sites.
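To make the reconciliation step concrete, here is a minimal sketch in Python (the file names and column headers are hypothetical) that compares a CMDB export against a network-scan export and flags devices that are live on the network but missing from the CMDB:

```python
import csv

# Hypothetical inputs: a CMDB export and a network-scan export, both CSV.
# Column names are assumptions for illustration; real exports will differ.
def load_ip_set(path, ip_column):
    with open(path, newline="") as f:
        return {row[ip_column].strip() for row in csv.DictReader(f) if row.get(ip_column)}

cmdb_ips = load_ip_set("cmdb_export.csv", "ip_address")
scanned_ips = load_ip_set("network_scan.csv", "host_ip")

unmanaged = scanned_ips - cmdb_ips   # live on the network, missing from the CMDB
stale = cmdb_ips - scanned_ips       # in the CMDB, not observed on the network

print(f"{len(unmanaged)} devices responded to the scan but are not in the CMDB")
print(f"{len(stale)} CMDB records were not observed on the network")
for ip in sorted(unmanaged):
    print("  follow up:", ip)
```

Even a crude comparison like this tends to surface the shadow IT and forgotten OT devices mentioned above, which then need manual follow-up to identify owners and purpose.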
Importantly, this isn’t a one-time checklist; it should be approached systematically and ideally automated where possible so it can be kept up to date.
The output of asset discovery is a master list of hardware and software assets (with attributes like device type, location, owner, firmware versions, etc.) which will be cross-referenced in the next step. Asset discovery can happen in parallel with the cryptographic inventory, but it’s listed first because trying to inventory cryptography without knowing all your assets is like searching for treasure on a map full of blank spots.
1.2 Cryptographic Inventory
With an asset baseline in hand, the telco then conducts a deep cryptographic inventory to pinpoint where and how cryptography is used on each asset or application. Regulators and standards bodies explicitly recommend starting with this step – “a comprehensive and ongoing cryptographic inventory is a key baseline for successful migration to PQC,” stresses a U.S. government roadmap.
In practice, this means cataloguing all instances of cryptographic algorithms in use: all the protocols, libraries, certificates, and keys in the environment. The focus is especially on the public-key algorithms (RSA, ECC, Diffie-Hellman, DSA, etc.) since those are the ones quantum computers will eventually break, but it also includes related crypto (symmetric ciphers, hash functions, etc., because some of those may need strengthening as well). The inventory should capture details like: what algorithm is used, in what context (e.g. TLS for a web server, IPsec for a VPN tunnel, code signing, database encryption, user authentication, etc.), where in the network or system it resides, and who the vendor or owner of that component is.
Gathering a complete cryptographic inventory is a monumental task. Cryptography is ubiquitous and often non-obvious. It’s not only in security-focused systems like VPNs or SSL/TLS configurations; it appears in places people might not immediately think of – printer firmware, storage appliances, building access systems, automated backup software, internal APIs, IoT modules, and so on. One telecom security team joked that they had to assume “if it has a CPU, it probably has crypto.” This is not far from the truth. A thorough search might reveal, for example, that a networked HVAC controller has an embedded web interface using an outdated HTTPS library, or that a contractor’s support tool installed on some servers includes its own encryption module.
No single tool will magically find 100% of cryptographic instances in such a diverse environment. Automated scanners are very helpful – for example, tools that crawl the network to find TLS certificates and cipher suites, or scan software binaries for crypto library calls. These can quickly locate common things like SSL endpoints or known library usage. But tools alone are not enough. As a post-quantum readiness guide notes, even the best inventory tools can “never provide a 100% inventory on their own” – a holistic approach combining automation, manual review, and continuous monitoring is needed. The telecom will likely use multiple methods:
- Code and Configuration Analysis: Scanning source code repositories for references to cryptographic APIs (e.g., usages of OpenSSL, BoringSSL, crypto libraries in code) can uncover custom software using crypto. Similarly, checking configuration files and settings on systems (for protocol configurations, TLS versions allowed, key lengths, etc.). Many organizations write scripts to parse config files in bulk for known crypto parameters.
- Digital Certificate Inventory: Using certificate management tools to find all digital certificates in the environment (for internal and external-facing systems). Certificates are a good proxy for cryptographic usage since any service using TLS, for instance, will have a certificate. There are commercial tools that specialize in enterprise certificate discovery. This helps map out things like all the web servers, APIs, and applications using TLS and what algorithms their certs are signed with (RSA vs ECDSA, etc.). See the sketch after this list for a minimal illustration.
- Network Traffic Analysis: Capturing and analyzing network traffic (with permission) to see where encryption is happening. For example, seeing an SSH handshake indicates use of certain key exchange algorithms; seeing a TLS handshake reveals the cipher suite chosen. This can catch devices that might not be documented elsewhere (e.g., an IoT device initiating an encrypted connection).
- Vendor Documentation and Inquiry: For third-party products and appliances, sometimes the quickest route is to reach out to the vendor (or consult manuals) to identify what cryptography is embedded. For instance, a telco might have hundreds of proprietary network appliances; instead of reverse-engineering each, they can ask vendors to provide a “crypto bill of materials” for their products. Some regulators are pushing vendors to deliver this kind of information to customers as part of security transparency initiatives.
- Interviews and Surveys: While not sufficient alone, talking to system owners and architects can surface custom or niche cryptographic implementations. For example, the team managing the billing system might recall that it uses PGP to encrypt data exports, or a developer might mention that a legacy application uses an old crypto library statically linked.
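To illustrate the certificate-inventory bullet above, here is a minimal sketch using Python’s standard `ssl` module and the widely used `cryptography` package. The target host list is a placeholder, and a real discovery tool would cover far more protocols, ports, and error cases:

```python
import socket
import ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

# Placeholder targets; in practice this list comes from the asset inventory.
TARGETS = [("portal.example.internal", 443), ("api.example.internal", 8443)]

def describe_certificate(host, port):
    pem = ssl.get_server_certificate((host, port))        # fetch the leaf certificate
    cert = x509.load_pem_x509_certificate(pem.encode())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        key_desc = f"RSA-{key.key_size}"                   # quantum-vulnerable
    elif isinstance(key, ec.EllipticCurvePublicKey):
        key_desc = f"ECDSA-{key.curve.name}"               # quantum-vulnerable
    else:
        key_desc = type(key).__name__                      # some other key type
    return {
        "host": host,
        "subject": cert.subject.rfc4514_string(),
        "public_key": key_desc,
        # OID of the signature algorithm, e.g. 1.2.840.113549.1.1.11 = sha256WithRSAEncryption
        "signature_algorithm_oid": cert.signature_algorithm_oid.dotted_string,
        "not_after": cert.not_valid_after.isoformat(),
    }

for host, port in TARGETS:
    try:
        print(describe_certificate(host, port))
    except OSError as exc:
        print(f"{host}:{port} unreachable: {exc}")
```

The output of a scan like this feeds straight into the centralized inventory described next.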
All these findings should be compiled into a centralized inventory database. Each entry in the inventory could include: the system or application name, location, owner, the cryptographic algorithm(s) used, the purpose (e.g., “RSA-2048 used for TLS on server X, protecting web portal Y”), and any relevant metadata (e.g., certificate expiration, library version, whether it’s FIPS-140 certified, etc.).
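Before investing in a commercial CBOM platform, even a small relational schema can get the inventory started. The sketch below (Python with sqlite3) mirrors the entry fields described above; the table and column names are simply my own illustration:

```python
import sqlite3

conn = sqlite3.connect("cbom.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS crypto_inventory (
    id                 INTEGER PRIMARY KEY,
    system_name        TEXT NOT NULL,   -- e.g. 'Customer web portal'
    location           TEXT,            -- data center / cloud region / site
    owner              TEXT,            -- accountable team or person
    algorithm          TEXT NOT NULL,   -- e.g. 'RSA-2048', 'ECDSA-P256', 'AES-128'
    purpose            TEXT,            -- e.g. 'TLS on server X, protecting web portal Y'
    quantum_vulnerable INTEGER,         -- 1 for RSA/ECC/DH/DSA uses, 0 otherwise
    cert_expiry        TEXT,            -- ISO date, if a certificate is involved
    library_version    TEXT,            -- e.g. 'OpenSSL 3.0.13'
    fips_certified     INTEGER          -- 1/0, if known
)
""")
conn.execute(
    "INSERT INTO crypto_inventory (system_name, location, owner, algorithm, purpose, quantum_vulnerable) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("Customer web portal", "DC-East", "Web platform team",
     "RSA-2048", "TLS server certificate for the customer portal", 1),
)
conn.commit()
```

Whatever tooling is ultimately used, the important part is that every discovery method above writes into one queryable place.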
This process is time-consuming – for a large telco, it often takes many months or even a year-plus to complete the initial inventory. Even so, expect that the first pass will miss some things, requiring iterative scanning and refinement. It’s common to discover surprises along the way, like an old forgotten server that is still generating RSA keys for some workflow, or an outsourced platform that was assumed to be modern but is running outdated crypto under the hood.
Despite the difficulty, establishing a comprehensive cryptographic inventory is essential and non-negotiable. It provides the foundation for all subsequent steps. It’s not enough to find “most” of the crypto – the organization needs high confidence that every critical instance is accounted for. Even a single overlooked vulnerable algorithm could later become the weak link that an attacker exploits. As one guidance put it, leaving even one known-weak crypto component in the environment is unacceptable because it could undermine all other mitigation efforts. Thus, thoroughness is key. By the end of Phase 1, the telecom should ideally have:
- A full list of hardware, software, and cloud assets (the attack surface).
- A detailed catalog of all cryptographic algorithms and keys in use on those assets, especially any use of RSA, ECC, DH, or other quantum-vulnerable algorithms.
- Information about each instance: where it’s used, for what function, and the associated business impact (e.g. does it protect critical customer data or a trivial internal app?).
- A sense of which cryptography is implemented in software vs hardware (important for planning, since hardware-based crypto might need physical replacement).
- Identification of any obvious high-risk items discovered (for example, if you find something using 1024-bit RSA or a deprecated algorithm like SHA-1, that’s noted as needing urgent attention even aside from quantum concerns).
This inventory will feed directly into the analysis and planning in Phase 2. In fact, many organizations will start doing a basic risk ranking even as they compile the inventory – e.g., flagging the most sensitive systems early. The output of Phase 1 is primarily documentation: the CBOM and asset list, which become living documents to be updated as the program proceeds. Importantly, these inventories should be maintained continuously; new systems and crypto uses will be added over time, and old ones retired. (Integrating the maintenance of crypto and asset inventory into normal IT processes is a smart goal so that the inventory remains up-to-date through Phase 4 and beyond.)
Estimated effort for Phase 1
For a large telco, the discovery phase could involve a dedicated team of perhaps 5–15 people (internal staff and/or consultants) working for 6–12 months. The cost can vary, but considering tool licenses and labor, an initial crypto inventory project might cost on the order of a few million dollars (e.g. $2-5M), due to the extensive analysis required and the complexity of scanning thousands of systems. This is a ballpark – organizations with more automation might spend less, whereas those with many legacy/hidden systems could incur more. (I’ll provide a consolidated cost breakdown by phase later in the article.)
Phase 2: Assessment and Planning – Risk Prioritization and Cryptographic Strategy
Once the inventories are in hand, the next phase is to assess the findings, prioritize what to fix, and develop a detailed game plan – essentially formulating the organization’s cryptographic strategy for the quantum era. This phase is about answering: “Given all our cryptographic use cases, which ones do we tackle first, and how will we go about it?” It combines risk assessment, planning, and strategy development.
Key activities in Phase 2 include:
2.1 Impact Analysis of Crypto Uses
For each item in the cryptographic inventory, determine how it is affected by the quantum threat and how critical it is to the business. Not all cryptographic usages carry equal risk or urgency. Several factors are considered here:
- Algorithm vulnerability – is the component using a quantum-vulnerable algorithm (RSA/ECC/DH) or only symmetric crypto (which is less threatened)?
- Sensitivity of data – what data or function is the crypto protecting, and how severe would it be if that were broken?
- Longevity of security needed – if data must remain confidential for a long time (say 10+ years), even current traffic is at risk from “harvest now, decrypt later” attacks; conversely, something like a short-lived session key that protects data for 5 minutes might be lower priority.
- Exposure – is this crypto at an internet-facing interface (exposed to attackers) or deep in an internal system? High-exposure points (public-facing websites, VPN gateways, etc.) and high-value targets (core network controls, customer data flows) are flagged as high impact.
This analysis lets the team bucket the inventory into tiers of risk. For example, the telco might identify a set of ~100 “Tier 1” cryptographic assets that protect crown jewels (customer PII, inter-carrier signaling, payment systems, etc.), then “Tier 2” for important but less critical uses, and so on.
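As a toy illustration of such tiering, the sketch below scores each inventory entry on the factors listed above. The weights, cutoffs, and the assumed ten-year horizon to a CRQC are arbitrary examples, not a recommended risk model:

```python
from dataclasses import dataclass

@dataclass
class CryptoUse:
    name: str
    quantum_vulnerable: bool   # uses RSA/ECC/DH/DSA
    data_sensitivity: int      # 1 (low) .. 3 (crown jewels)
    protection_years: int      # how long the protected data must stay confidential
    internet_facing: bool

def tier(use: CryptoUse, years_to_crqc: int = 10) -> str:
    """Rough tiering illustration; weights and cutoffs are arbitrary examples."""
    if not use.quantum_vulnerable:
        return "Tier 3 (monitor)"
    score = use.data_sensitivity
    if use.protection_years >= years_to_crqc:   # 'harvest now, decrypt later' exposure
        score += 2
    if use.internet_facing:
        score += 1
    return "Tier 1 (urgent)" if score >= 5 else "Tier 2 (planned)"

examples = [
    CryptoUse("Subscriber identity keys", True, 3, 15, False),
    CryptoUse("Internal wiki TLS", True, 1, 1, False),
]
for u in examples:
    print(u.name, "->", tier(u))
```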
2.2 Risk-Based Prioritization
Using the impact analysis, the organization sets priorities. The guiding principle emerging in the industry is to tackle the highest-risk uses first – “addressing the highest-risk vulnerabilities first”. High risk typically means either the data is highly sensitive and long-lived, or the system is critical infrastructure. For a telco, this might put things like the core network authentication systems, subscriber identity protection, backbone encryption, and anything involving keys/certs that have a long validity (multi-year certificates, etc.) at the top of the list.
Lower-priority might be, say, an internal tool that uses TLS but only handles non-sensitive data. The team should also identify any “low-hanging fruit” – cases where an upgrade to a quantum-safe solution is readily available and not too disruptive (for example, perhaps a software library can be upgraded to a PQC-ready version easily – that could be done early).
On the flip side, identify hard blockers – systems that, as discovered in Phase 1, cannot support PQC without major changes. The White House PQC report advises agencies to “identify systems that cannot support PQC algorithms as early as feasible”. The telco should do the same: e.g., if an older IoT module or a vendor device has cryptography baked into hardware with no upgrade path, that is noted as a problem requiring special handling (replacement, isolation, or a compensating control – more on these below).
2.3 Defining the Cryptographic Strategy
This is the heart of Phase 2. Armed with the knowledge of what needs to change and how urgent each item is, the organization must devise how it will remediate each one and with what solution. In other words, develop a strategic roadmap for migrating or mitigating each cryptographic vulnerability. This involves several sub-steps:
2.3.1 Choose Technical Approaches
For each category of crypto use, decide the approach to make it quantum-safe. In many cases, the answer will be “migrate to a PQC algorithm” (aligned with the new standards).
For instance, decide which post-quantum algorithms will be used for various purposes: e.g., the telco might standardize on CRYSTALS-Kyber for key establishment (VPN tunnels, TLS handshakes) and Dilithium for digital signatures (code signing, certificates), since those are NIST-approved algorithms. Or they might use Falcon for certain constrained scenarios where smaller signatures are needed.
These choices should be guided by industry recommendations and any constraints observed (performance, message size, etc.). In addition, the strategy should consider crypto-agility: it’s wise not to put all eggs in one basket. For example, even if Kyber is chosen now, the architecture should allow swapping to another KEM if needed.
Many experts suggest deploying multiple PQC algorithms (or at least designing the capability to) so that if one is later weakened, a switch can be made via configuration change. In practice, this means planning to support algorithm diversity – possibly issuing certificates that include multiple algorithm signatures, or systems that can negotiate more than one algorithm.
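A minimal sketch of what that agility can look like in code is shown below: callers ask a registry for “the current KEM” instead of hard-coding an algorithm, so a swap becomes a configuration change. The key-generation functions are placeholders, not a real PQC library:

```python
from typing import Callable, Dict, Tuple

KeyPair = Tuple[bytes, bytes]  # (public_key, secret_key)

def _primary_kem_keygen() -> KeyPair:
    # Placeholder: call the selected primary KEM implementation here (e.g. Kyber/ML-KEM).
    return (b"primary-public-key", b"primary-secret-key")

def _fallback_kem_keygen() -> KeyPair:
    # Placeholder: a second approved KEM kept ready as an alternative.
    return (b"fallback-public-key", b"fallback-secret-key")

# Application code never names an algorithm directly; it only asks the registry.
KEM_REGISTRY: Dict[str, Callable[[], KeyPair]] = {
    "primary-kem": _primary_kem_keygen,
    "fallback-kem": _fallback_kem_keygen,
}

# In practice this value would come from central configuration, not source code.
ACTIVE_KEM = "primary-kem"

def generate_kem_keypair() -> KeyPair:
    """Every caller goes through this indirection, so changing ACTIVE_KEM
    (a configuration change) re-points the whole code base to a new algorithm."""
    return KEM_REGISTRY[ACTIVE_KEM]()

print(generate_kem_keypair()[0])  # swap algorithms by changing ACTIVE_KEM only
```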
2.3.2 Handle Unpatchable Cases
A crucial part of strategy is deciding how to deal with instances that cannot simply be “upgraded” to PQC. As the analysis likely revealed, a significant chunk of the tech estate – especially legacy IoT and OT systems – won’t be simple to fix with a software patch. Many legacy or embedded devices might never get a firmware update from the vendor (the vendor may be defunct, or the device may lack the computational power for PQC). The cryptographic strategy must include alternative risk mitigations for these. Approaches include:
2.3.2.1 Hybrid Cryptography
Use a hybrid approach as an interim solution wherever possible. This means implementing quantum-resistant algorithms alongside classical algorithms. For example, if a device only speaks RSA, you could introduce a gateway or protocol extension that performs an RSA+PQC dual handshake, so that if the peer is capable, the connection gets PQC protection, and if not, it falls back to RSA. Hybrid schemes ensure at least one layer (the PQC layer) is secure against quantum attacks. Many standards bodies (IETF, etc.) are already defining hybrid modes for TLS, IPSec, and certificates to facilitate gradual migration. The strategy should specify where hybrids will be used (likely in network protocols and VPNs first, as those are well-supported).
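The sketch below shows the hybrid idea at the key-derivation level only (it is not any particular standard’s exact construction): a classical X25519 shared secret and a PQC KEM shared secret are combined through one KDF, so an attacker would have to break both schemes to recover the session key. The PQC secret is a placeholder because the concrete library is an implementation choice:

```python
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Classical part: ordinary ephemeral X25519 key exchange.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()
classical_secret = alice_priv.exchange(bob_priv.public_key())

# Post-quantum part: placeholder bytes standing in for the shared secret a
# PQC KEM (e.g. Kyber/ML-KEM) would produce via encapsulation/decapsulation.
pqc_secret = os.urandom(32)

# Hybrid combiner: the session key depends on BOTH secrets.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"example hybrid key derivation",
).derive(classical_secret + pqc_secret)

print("derived", len(session_key), "byte session key")
```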
2.3.2.2 Encapsulation / Wrappers
For truly inflexible devices, consider a wrapper or gateway approach. If a device’s built-in crypto can’t be changed, encapsulate it in a quantum-safe tunnel. For example: put a gateway in front of a batch of legacy IoT sensors – the sensors keep using their old crypto amongst themselves, but the gateway handles communicating onward with quantum-resistant encryption. This effectively isolates the weak crypto within a safe zone.
Similarly, one can use protocol translation: e.g., if a legacy system only supports RSA for SSL, set up a proxy that speaks RSA to the legacy box but speaks PQC cipher suites to the outside world.
Tokenization is another form of mitigation for data at rest – e.g., if an old database can’t be re-encrypted easily, perhaps sensitive data can be tokenized so that even if its crypto is weak, the real data isn’t exposed directly. All these are case-by-case tactical measures, but the strategy should enumerate them for each problematic category.
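To make the wrapper/gateway idea above concrete, here is a deliberately simplified one-way relay in Python: a hypothetical legacy sensor pushes cleartext (or weakly encrypted) data to the gateway, which forwards it upstream inside a TLS tunnel. Whether that tunnel is actually quantum-safe depends on the TLS stack and configuration underneath, which this sketch does not control; real gateways also need bidirectional relaying and robust error handling:

```python
import socket
import ssl
import threading

LEGACY_LISTEN = ("0.0.0.0", 9000)                 # legacy sensors connect here (weak/no crypto)
UPSTREAM = ("collector.example.internal", 9443)   # endpoint reachable over modern TLS

ctx = ssl.create_default_context()                # assumes the collector's cert is trusted

def relay(legacy_conn: socket.socket) -> None:
    with legacy_conn, socket.create_connection(UPSTREAM) as raw:
        with ctx.wrap_socket(raw, server_hostname=UPSTREAM[0]) as secure:
            while chunk := legacy_conn.recv(4096):
                secure.sendall(chunk)             # weak traffic never leaves the enclave unwrapped

listener = socket.create_server(LEGACY_LISTEN)
while True:
    conn, _ = listener.accept()
    threading.Thread(target=relay, args=(conn,), daemon=True).start()
```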
2.3.2.3 Isolation & Network Segmentation
Increase security monitoring and isolation around systems that must remain quantum-vulnerable for some time. For instance, if a certain control system cannot be upgraded, treat it as high-risk: put it on a separate network segment, strictly limit access to it, and perhaps layer on additional encryption at the application level when it communicates (even if its transport encryption is weak).
Essentially, contain the risk so that if an adversary were to break that one weak link in the future, the blast radius is limited.
2.3.2.4 Replacement or Retirement
Plan the replacement of end-of-life systems well ahead of Q-Day. The strategy should list which legacy products have no upgrade path and thus will be retired or replaced. This might feed into capital expenditure plans – e.g., budgeting to buy new quantum-safe IoT modules in 2028 to swap out the old ones. The White House report noted that the cost to replace unsupported legacy systems is a significant portion of the overall PQC migration cost. Knowing this, a telco’s strategy must get management’s buy-in on some system replacement projects.
In some cases, if a system cannot be replaced in time, the strategy might even involve risk acceptance for a limited period – basically acknowledging a gap and planning interim compensating controls until the system is gone.
2.3.2.5 Exploring QKD
Consider where technologies like Quantum Key Distribution (QKD) might play a role. QKD, which uses quantum physics to securely distribute encryption keys, is a different approach than PQC (it’s not a software algorithm but a hardware-based link security mechanism).
While QKD is not a universal solution (due to distance and infrastructure limitations), it can provide quantum-safe key exchange for specific high-value network links (e.g., between two data centers over dedicated fiber). Some telecoms have piloted QKD for securing backbone links or inter-office connections.
The strategy should be realistic about QKD – it’s expensive and only practical in certain scenarios today – but it might be worth including as a complementary solution for the most critical links if the organization is willing to invest. At minimum, the strategy could be to monitor QKD maturity and adopt it selectively (for example, after 5 years, re-evaluate if QKD can enhance certain parts of the network).
2.3.3 Policy and Governance Updates
The strategy phase is also when policies and governance get updated to support the migration. For example, updating procurement policies to require any new system to be “quantum-ready” (meaning it can accept PQC updates or has crypto-agile design). Development guidelines might be revised to mandate use of approved crypto libraries (and disallow hard-coding algorithms).
The telco should establish a governance structure like a steering committee or program management office that will oversee the execution. This governing body will set standards, make priority calls, and ensure all departments are aligned (which helps avoid the internal misalignment issues mentioned earlier).
2.3.4 Timeline and Milestones
A high-level timeline for the migration should be drafted. This will outline phases or waves of implementation (we’ll present an example timeline in a later section). For instance, the strategy might say: “Year 1 – complete inventory and planning; Years 2–3 – pilot PQC in lab and upgrade internal PKI; Years 3–5 – upgrade top-tier systems; by Year 6 – begin upgrading remaining infrastructure; by Year 8+ – deprecate old crypto…” and so forth. Setting target dates (even tentative) is important to keep the program on track and to communicate urgency. This also ties into budgeting – the team should estimate the budget needed per year or per phase (we provide rough cost estimates later, but the planning team will refine those based on the organization’s specific needs).
2.3.5 Roles and Responsibilities
Clarify who will do what as the program unfolds. Quantum readiness cuts across many teams, so it’s critical to assign clear ownership.
For example: the CISO’s office might own the overall program; the network security team might handle network equipment upgrades; an application development team might handle refactoring in-house software; the IT operations team might be responsible for deploying updated libraries; the vendor management office might drive the engagement with suppliers (ensuring they deliver upgrades); the compliance team might integrate quantum-safety checks into audits, etc.
These responsibilities should be documented so that as Phase 3 begins, each group knows their tasks. It’s also wise to designate subject-matter leads – e.g., a “PQC Lead Architect” to provide technical guidance, a program manager to coordinate schedules, and so on.
Given the complexity, having a central core team that coordinates all moving parts is often beneficial.
In essence, Phase 2 produces the roadmap for the journey. By the end of this phase, the telecom should have a prioritized list of projects/workstreams, an overall architecture for how they will implement PQC (and other mitigations), and management approval of the plan with allocated budget and resources. One output might be a formal “Quantum-Readiness Strategy Document” or something akin to a playbook, which can be communicated internally and (in some cases) with partners/regulators to demonstrate that the organization has a concrete plan.
It’s worth noting that Phase 2 also involves a lot of external coordination. The telco’s strategy will be informed by outside factors, so during this phase the organization should actively engage with industry groups and standards bodies. For instance, participating in the GSMA task force or similar forums allows the telco to align its plans with industry timelines. Keeping tabs on NIST’s standards (which algorithms are getting finalized) and government mandates (e.g., if regulators set a deadline that critical infrastructure must be quantum-safe by 2030) is crucial.
Many telcos will use this phase to lobby or coordinate with their vendors too – essentially telling vendors “here are our requirements and timeline, we’ll need your support or product upgrades by X date.” This proactive communication can influence vendor roadmaps (discussed more in Key Components section).
In short, the planning phase is both an internal exercise in prioritization and design, and an external exercise in alignment and expectation-setting.
Estimated effort for Phase 2
The assessment and planning phase might take on the order of 6–12 months (overlapping partly with late stages of the inventory). It likely involves a smaller core team (perhaps 5–10 people) working with input from many others. External consulting expertise is often brought in at this stage (for example, to perform a formal risk assessment or to help design the cryptographic architecture), which can cost a few hundred thousand to a couple million dollars depending on scope.
In total, Phase 2’s costs might be in the low millions of dollars (say $1–3M), relatively small compared to the implementation phase, but crucial for avoiding missteps. It’s essentially the planning investment to potentially save time/money later by doing things in the right order.
Phase 3: Implementation – Migrating to PQC and Deploying Crypto-Agile Systems
This phase is where plans meet reality – the telecom begins executing the upgrades and transformations needed to become quantum-safe. Implementation will be iterative and multi-year, often structured in waves or sub-projects aligned with the priorities set in Phase 2. The guiding objective in Phase 3 is to introduce post-quantum cryptography (and other mitigations) into the environment in a controlled, compatible way, without breaking existing services. Below are key aspects of the implementation phase:
3.1 Pilot Testing & Lab Trials
Early in Phase 3, it’s prudent to test PQC algorithms and solutions in a controlled environment before large-scale deployment.
The telco might set up a PQC test lab or pilot environment. For example, take a representative system – say, a pair of servers and a client application – and configure them to use a post-quantum TLS cipher suite (e.g., TLS 1.3 with a Kyber-based key exchange and a Dilithium certificate) to observe what happens. Measure the performance: how much does the TLS handshake time increase? Does the larger certificate (~ several kilobytes) cause any issues in buffers or logs?
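Even a small measurement harness helps here. The sketch below (plain Python `ssl`, with a placeholder lab endpoint) records handshake latency as a baseline; note that whether a hybrid or PQC group is actually negotiated depends entirely on the TLS library build and server configuration underneath, which the script does not control:

```python
import socket
import ssl
import statistics
import time

HOST, PORT, SAMPLES = "test-endpoint.example.internal", 443, 50  # placeholders

def handshake_time(host: str, port: int) -> float:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False              # lab endpoint with a test certificate;
    ctx.verify_mode = ssl.CERT_NONE         # never disable verification in production
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host):
            pass                            # handshake completes inside wrap_socket
    return time.perf_counter() - start      # includes TCP setup + TLS handshake

samples = [handshake_time(HOST, PORT) for _ in range(SAMPLES)]
print(f"median handshake: {statistics.median(samples)*1000:.1f} ms, "
      f"p95: {sorted(samples)[int(0.95*len(samples))]*1000:.1f} ms")
```

Running the same harness before and after enabling a hybrid cipher suite on the lab server gives a like-for-like comparison of the overhead.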
By experimenting with sample systems (like a test VPN, a sample 5G base station link, or a segment of the IT network), the team can identify unforeseen issues. Early industry pilots have been instructive: for instance, Japan’s SoftBank tested a hybrid VPN that combined classical ECC and lattice-based encryption on live 4G/5G traffic and found only a marginal latency increase. Such pilots prove feasibility but also highlight issues to watch (e.g., ensuring all components in the path can handle the new crypto). Another pilot example: SK Telecom in Korea worked with a vendor to implement quantum-resistant protection for 5G subscriber identity (SUPI concealment) using a lattice-based algorithm on SIM cards.
These trials in a sandbox or limited production help build confidence and expertise. The telco’s team should document the results and incorporate any lessons (like needing hardware acceleration, or encountering a bug in a PQC library) before wider rollout.
Crypto-agility drills might also be done: for example, test switching the algorithm in use (from, say, Kyber to a hypothetical alternate) in the lab to ensure the architecture indeed supports a seamless change. This validates the “agility” part of the design.
3.2 Incremental Rollouts & Hybrid Mode
Implementation will not be a big bang; it occurs in phases, and often the first stage is deploying hybrid cryptography in production. As discussed in Phase 2, hybrid schemes allow backward compatibility – critical for a telco where not all users or peers will upgrade at once.
Concretely, the telco might begin by enabling hybrid cipher suites on its systems. For example, update the configuration of web servers, VPN gateways, and other endpoints to support dual key exchanges (classical ECDH + PQC KEM). If the client (e.g., a web browser or a customer device) also supports it, they’ll use the hybrid mode; if not, the connection can fall back to classical. This way, quantum-resistant protection is added opportunistically without cutting off legacy compatibility. Over time, as more clients become PQC-capable (e.g., updated browsers or device firmware), the proportion of quantum-safe connections will increase.
The telco might also deploy hybrid certificates – X.509 certificates that contain both a classical signature (say ECDSA) and a PQC signature (Dilithium). These are being standardized and allow one certificate to be verifiable by both old and new systems. Early adoption of hybrid certificates (for internal systems or even customer-facing sites) means the groundwork is laid for later turning off the classical algorithms.
Overall, running in a hybrid mode is a necessary interim state and could persist for many years while the ecosystem catches up.
3.3 Upgrading Core Infrastructure First
Following the priority list, the telco will likely tackle certain core systems in the early waves of implementation. One commonly cited early target is the Public Key Infrastructure (PKI). Telecom networks rely on numerous certificate authorities (CAs) and PKI systems (for things like authenticating network elements, signing software updates, securing internal APIs, etc.). Upgrading the PKI to be quantum-safe is foundational, because it enables issuance of PQC credentials for everything else. The telco might establish a new PQC-capable CA hierarchy (potentially running in parallel with the existing one). For example, they could deploy a Dilithium-based root CA and begin issuing test certificates. Eventually, this PQC PKI can start issuing the certificates for production systems (either hybrid certs or full PQC certs when ready).
Alongside PKI, other high-priority systems might include: VPN and secure communication systems (e.g., the systems that encrypt backhaul links, data center interconnects, etc.), authentication servers (like RADIUS/Diameter servers in mobile cores, or IAM systems in IT), and data encryption services (like databases or storage systems holding sensitive data). By upgrading these first, the telco secures the most sensitive data channels early. It also gains experience on contained, internal systems before facing public interfaces.
Notably, some telecom operators have started with things like internal management interfaces – for instance, securing the communication between network management workstations and base stations with PQC VPN tunnels, as that is a controlled environment with only the operator’s equipment (no third-party dependencies).
3.4 Coordinating with Vendors for Updates
A huge part of Phase 3 is working with the telecom’s equipment and software vendors to obtain quantum-safe versions. A typical telco might have hundreds of vendors supplying everything from routers and switches to billing software. The implementation plan should sync with vendor product release timelines. For example, if Cisco or Juniper announces that a certain router OS will support PQC in Q4 of next year, the telco schedules testing and deployment of that update in their environment accordingly. The telco’s vendor management team should actively engage vendors (if they haven’t already in Phase 2), possibly through joint workshops or a “quantum-readiness forum.”
Some large telcos have even organized events with their supply chain to share knowledge and expectations for PQC. This collaborative approach (the “carrot”) helps everyone fix things faster – e.g., the telco might share its test results with a vendor to help them improve a patch.
On the flip side, the telco should also wield a “stick”: update all contracts and procurement documents to require vendors to meet quantum-safe criteria on a deadline. For instance, contractually mandate that by 2028 all critical network gear provided must support the NIST PQC algorithms, with penalties if not. This puts pressure on suppliers. Given a large telco’s market power, such requirements can accelerate vendor action (especially if many operators coordinate through groups like GSMA).
During implementation, as new vendor firmware/software arrives, the telco will deploy it systematically: perhaps start with one region or one segment of the network to validate, then roll out network-wide during maintenance windows. In some cases, hardware replacement is unavoidable – e.g., older hardware security modules (HSMs) that manage keys might not handle PQC keys due to size or performance, requiring newer models. Or cell site equipment that lacks CPU power for PQC might need an upgrade. These replacements need to be planned and tested (and budgeted – potentially a significant cost, as discussed later).
3.5 Dual-Stack Operation
Throughout Phase 3, the organization will operate in a dual-stack mode with respect to cryptography. This means many systems will be capable of both classical and post-quantum crypto.
For example, after upgrades, a core router might accept both traditional IPSec connections and PQC-enhanced IPSec connections. This dual capability is necessary to maintain compatibility (so that old clients can still connect, etc.).
However, dual-stack comes with complexity: essentially every system is running two cryptographic modes, which can be resource-intensive. PQC algorithms often have larger key sizes and higher CPU usage, so enabling them in addition to existing crypto can tax systems more. The telco needs to monitor performance closely. In some cases, they might initially keep PQC turned off except for testing, and only gradually ramp up usage as confidence grows and performance tuning is done.
There will also need to be fallbacks: if a PQC negotiation fails (maybe an incompatibility), the system falls back to classical. Care must be taken that this fallback doesn’t inadvertently downgrade security without notice – proper logging and monitoring should catch if, say, a connection was supposed to be PQC but is only using classical because of an issue.
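One lightweight way to keep that visibility, assuming your TLS terminators or load balancers can log the negotiated key-exchange group, is a recurring log check like the sketch below; the log format and the `kex_group` field name are invented for illustration:

```python
import json

# Hypothetical connection log, one JSON record per line, with an invented
# 'kex_group' field recording the negotiated TLS key-exchange group.
CLASSICAL_ONLY = {"x25519", "secp256r1", "secp384r1", "ffdhe2048"}

def find_downgrades(log_path: str, expect_hybrid: bool = True):
    alerts = []
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            group = rec.get("kex_group", "").lower()
            if expect_hybrid and group in CLASSICAL_ONLY:
                alerts.append((rec.get("timestamp"), rec.get("client_ip"), group))
    return alerts

for ts, client, group in find_downgrades("tls_connections.log"):
    print(f"{ts}  {client}  fell back to classical-only key exchange ({group})")
```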
As the migration progresses and more systems become PQC-capable, the reliance on the old algorithms will shrink. But realistically, expect dual operation to persist for many years (likely the entire Phase 3 and into Phase 4), since some legacy elements might be among the last to be upgraded or retired.
3.6 Systematic Testing and Validation
Implementation isn’t just “deploy and forget” – it requires continuous testing, validation, and troubleshooting. Each time a component is upgraded or a new algorithm is turned on, the team should perform regression testing to ensure normal functionality and performance. They should test interoperability: e.g., if a telco upgrades its network equipment to support PQC, can it still interoperate with a neighboring network or roaming partner that hasn’t upgraded (via hybrid modes or fallback)? If the telco issues a PQC-based certificate for a server, do all clients (browsers, apps) connecting to it handle it well, or do some older clients break?
Early experiments have shown issues like certain middleboxes (firewalls, proxies) crashing or blocking traffic when they see “unusually” large certificates or unknown cipher IDs. The telco must ferret out such issues in testing.
Performance testing is also crucial: measure latency, throughput, CPU load with PQC enabled. Where needed, adjust configurations – for instance, perhaps use a slightly smaller parameter set of an algorithm if performance is an issue but security is still acceptable. In some cases, add hardware acceleration (some vendors offer accelerator cards for lattice crypto, etc.) if performance is lacking.
Security validation is another angle: ensure that the new algorithms are configured correctly (e.g., random number generators are good, keys are of intended length), and that classical algorithms are not unintentionally left as sole protection when they shouldn’t be. Given that PQC implementations are new, the telco should keep an eye out for patches or updates – for example, if an early version of a PQC library has a bug, track those updates and apply them.
Essentially, Phase 3 will have a cyclical feel: upgrade a set of systems -> test -> fix issues -> move to next set.
3.7 User and Customer Communications
As quantum-safe capabilities roll out, the telco may need to communicate with stakeholders, especially if changes affect them. For example, if the telco plans to issue new quantum-resistant SIM cards or customer premise equipment (CPE) to clients, it must coordinate that distribution and communicate why (possibly marketing it as a security enhancement). Enterprise customers may need to be told, “We are upgrading our VPNs to PQC algorithms; you’ll need to update your client software by X date to continue connecting securely.”
The telco might also use this as a marketing opportunity – being able to say “we offer quantum-resistant network security” could attract security-conscious customers (banks, governments). During implementation, the program team should work with PR/communications to craft the right messaging, balancing reassurance (we’re on top of it) with not causing alarm.
Regulators or government partners might also require status updates, so having documentation of progress is important.
Phase 3 is by far the costliest and most resource-intensive phase. It’s where most of the budget is spent, since it involves purchasing new hardware, deploying software at scale, and countless staff hours of engineering work. We will provide a breakdown of costs later, but it’s during implementation that expenses like new cryptographic hardware (HSMs, accelerators), upgraded devices, consulting support, and maintenance windows (with potential downtime costs) come into play.
A well-managed Phase 2 can reduce inefficiencies in Phase 3 (e.g., by prioritizing correctly so high-value fixes get done first), but it cannot eliminate the inherent effort. Large organizations often find that their implementation phase needs to be done in waves – for example, Wave 1 might target ~20% of systems (the critical ones), Wave 2 the next 30%, and so on. Each wave might last 1–2 years, with learnings from earlier waves applied to later ones.
By the end of Phase 3 (perhaps ~8–10 years in), the aim is to have all feasible systems upgraded and quantum-safe algorithms widely deployed in the environment. Some straggler systems might remain on legacy crypto (we handle that in Phase 4), but essentially the organization should be running PQC (at least in hybrid mode) for the vast majority of its operations. A major milestone would be when the telco can say: “No critical data is traveling over crypto that we believe will be broken by quantum computers – either it’s PQC-protected or it’s in a form (e.g. ephemeral encryption) that isn’t a long-term risk.” Achieving that is a huge step for security.
Estimated effort for Phase 3
Implementation will span multiple years (e.g., Years 2–9 in a 10-year plan, with progressive rollout). It will involve dozens of internal staff (potentially 50+ across various teams at peak) and significant external vendor support. The cost of this phase is very high – potentially hundreds of millions of dollars for a large telco – because it includes all the hardware replacements, software development, testing, and deployment labor. For instance, if a telco needs to replace or upgrade thousands of network devices at an average cost of $10,000 each, that alone could be tens of millions of dollars. Add to that the cost of new HSMs, upgrading customer devices (e.g., issuing millions of new SIM cards or routers, if needed), and countless person-hours of work.
We will break down a rough cost allocation in the next section, but it’s safe to say Phase 3 is where perhaps 80–90% of the total program budget will be spent. It’s essentially the execution of everything identified earlier.
Phase 4: Operations – Ongoing Maintenance and Crypto-Agility
By Phase 4, the telecom has implemented quantum-resistant solutions across most of the organization. However, the journey doesn’t end so much as transition into a new operational paradigm. Phase 4 is about running and sustaining the quantum-safe environment and being ready to adapt to future changes. Key elements of this phase include:
4.1 Long-Term Coexistence and Sunset of Legacy Crypto
Even after a decade of work, it’s likely some legacy cryptographic components will still be around. Perhaps a few IoT devices or an older subsystem couldn’t be economically replaced yet and thus remains on classical crypto.
In Phase 4, the organization needs a plan to eventually deprecate any remaining vulnerable algorithms. This might involve setting a firm policy like, “After 203X, no TLS connections using RSA/ECDH are allowed” and enforcing that via software updates or network controls. Leading up to that, they might operate dual infrastructures – e.g., two PKIs (one classical, one PQC). Over time, the classical one can be phased out (stop issuing new classical certificates, and let old ones expire). Some legacy systems may operate until end-of-life in isolated enclaves (with the understanding that they’re a contained risk). The telco’s risk management in this phase might formally accept some residual quantum risk for those isolated cases, with timelines for their decommission.
Essentially, Phase 4 involves a “cleanup” of any stragglers and then turning the corner to use only quantum-safe cryptography going forward (with exceptions documented). For audit and compliance, by this phase the telco should be able to demonstrate that, say, 99% of cryptographic instances are quantum-resistant, and have compensating controls for the rest.
4.2 Continuous Monitoring and Updates
Quantum readiness isn’t a one-time achievement; it requires continuous vigilance. In Phase 4, the organization should establish processes to monitor the external landscape:
- Keep track of advances in quantum computing (has someone built a larger quantum computer sooner than expected? Is the estimated “Q-Day” moving closer?). This might involve subscribing to industry threat intelligence or having an internal “quantum risk watch” team that reviews literature and government updates. If news breaks of a major quantum breakthrough, the team should be ready to reassess timelines (for example, if a lab announced a functioning quantum computer with thousands of stable, error-corrected qubits, one might accelerate final cutovers).
- Monitor cryptographic research: Are any weaknesses found in the PQC algorithms you deployed? We’ve already seen one algorithm (SIKE) get broken during the standardization process; it’s possible, though hopefully unlikely, that an approved algorithm like Falcon or Kyber could have an unexpected weakness discovered. Also, new algorithms could emerge that are more efficient or secure. For example, NIST is standardizing additional algorithms beyond its initial selections (e.g., code-based schemes and additional signature schemes). The telco should be prepared to adopt improved cryptography as it becomes available. This is where the crypto-agility built earlier pays off – if everything is designed to be updatable, the organization can relatively quickly (say in a few months) roll out a change to replace an algorithm if needed.
- Update the cryptographic inventory regularly. As new systems are added to the network, ensure they are catalogued and using approved crypto. Essentially treat crypto inventory as an ongoing activity (some organizations make it a quarterly or annual review task). This ties into normal security operations – e.g., whenever a new application goes live, part of its security review is to verify it’s using quantum-safe crypto (or at least crypto-agile designs) as per policy.
- Patch management for cryptographic components: ensure that libraries and firmware related to crypto are kept up to date (just as you would do regular patching for vulnerabilities). Now that PQC is in production, vendors will likely release updates optimizing performance or fixing bugs – apply those through your change management.
4.3 Maintaining Crypto-Agility
By Phase 4, the telco should have institutionalized a crypto-agile architecture. This means that future changes in cryptography (quantum-related or otherwise) can be handled with minimal disruption.
Concretely, the organization by now should have: abstracted cryptographic functions in code (so algorithms can be swapped by configuration), deployed centralized crypto services or libraries that can be updated in one place, and possibly implemented technologies like crypto-agile key management. For instance, they might have a central Key Management System where switching the key generation algorithm from RSA to Dilithium is a setting change. Or network controllers that can instruct devices to switch cipher suites as needed.
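To illustrate what “algorithm as a setting” can look like, here is a minimal sketch using the open-source cryptography package for the classical backends, with a placeholder where a PQC backend (e.g., ML-DSA via an HSM or vendor library) would plug in. The config variable and algorithm labels are illustrative, not any real product’s API.

```python
"""Minimal sketch of configuration-driven signing, so the algorithm is a setting,
not a hard-coded choice. Classical backends use the 'cryptography' package; the
PQC backend is a placeholder for whatever library/HSM the organization adopts."""
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

SIGNING_ALGORITHM = "ecdsa-p256"   # imagine this comes from central config, not code


def generate_and_sign(message: bytes, algorithm: str = SIGNING_ALGORITHM):
    """Return (private_key, signature) for the configured algorithm."""
    if algorithm == "rsa-3072":
        key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
        return key, key.sign(message, padding.PKCS1v15(), hashes.SHA256())
    if algorithm == "ecdsa-p256":
        key = ec.generate_private_key(ec.SECP256R1())
        return key, key.sign(message, ec.ECDSA(hashes.SHA256()))
    if algorithm == "ml-dsa-65":
        # Placeholder: wire in the chosen PQC library or HSM API here.
        raise NotImplementedError("PQC backend not yet deployed in this sketch")
    raise ValueError(f"Algorithm {algorithm!r} is not approved by crypto policy")


if __name__ == "__main__":
    _, sig = generate_and_sign(b"firmware-manifest-v42")
    print(f"{SIGNING_ALGORITHM}: signature of {len(sig)} bytes")
```

The point of the pattern is that switching from “ecdsa-p256” to a PQC entry becomes a configuration change plus one new backend, rather than a hunt through every application for hard-coded algorithm choices.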
The ongoing task is to test that agility periodically. Some organizations run drills – e.g., in a staging environment, simulate that Algorithm A is no longer allowed and ensure everything can switch to Algorithm B smoothly. This guards against complacency and ensures the agility isn’t just on paper. Crypto-agility also involves keeping an eye on standards: As new protocols (TLS 1.4 someday? New 6G security standards?) emerge that include better cryptography, the telco should plan upgrades to adopt those. Essentially, Phase 4 morphs into business-as-usual security management, but with the organization now much more nimble in dealing with crypto changes.
4.4 Performance Tuning and Optimization
As PQC becomes the norm in production, the telco can focus on optimizing for efficiency. Early deployments might have been conservative (e.g., using hybrid modes that add overhead, or using extra-long keys “just in case”). Over time, they can streamline: if it’s confirmed that a particular PQC algorithm is robust, maybe drop the classical hybrid part to reduce overhead. Or once hardware accelerators for PQC are more available, deploy those to offload CPU load from servers.
The telco might also negotiate with vendors for optimized products – e.g., requesting that a vendor provide a firmware update that improves how their device handles large PQC certificates (perhaps by increasing buffer sizes to avoid latency).
The goal is to eventually reach a state where the quantum-safe operations are as efficient and reliable as the old operations were, if not better. There may also be opportunities to reduce cost in the long run: for instance, if some costly workaround (like very frequent key rotation to mitigate harvest-now risk) is no longer needed once PQC is in place, those processes can be scaled back, saving operational effort.
4.5 Compliance and Demonstrating Readiness
By Phase 4, external parties will start paying attention to whether the organization is quantum-safe. Regulators, auditors, customers – all may ask for proof or status. The telecom should be ready with documentation and metrics: e.g., an audit report showing “out of X systems, 98% use only PQC or hybrid crypto, 2% (list) are exceptions under compensating controls.” Governments might even require certification of quantum readiness.
In telecom specifically, certain legal obligations like lawful interception (LI) need to be maintained securely in a post-quantum world. LI systems allow law enforcement (with warrants) to intercept communications; the telco must ensure that even those interception feeds are protected against quantum eavesdropping. GSMA has noted that a first step is upgrading the confidentiality of LI handover links with quantum-safe encryption.
By Phase 4, the telco should have done this – e.g., ensuring that when they deliver intercepted data to authorities, it’s over a PQC-encrypted channel so that it can’t be stolen by an adversary with a quantum computer. This is an example of a niche requirement that nonetheless must be covered to be fully quantum-ready. The broader point is, Phase 4 is about embedding quantum-safe practices into everyday operations and compliance. The program’s formal project structure may dissolve, and responsibility transitions to operational teams (with oversight from security governance) to maintain the posture.
In Phase 4, the organization effectively “graduates” from the dedicated PQC migration program into a steady state. The quantum readiness capability becomes part of the organization’s DNA – much like how, today, companies continuously manage and update classical crypto (deprecating SHA-1, rolling out TLS 1.3, etc., as ongoing tasks). The difference now is the scope and pace of potential change: having gone through this program, the telco will have a much stronger handle on its cryptographic infrastructure than it likely did before. Many organizations find that as a side benefit, they gain far better visibility into their systems and a more disciplined approach to cryptography management, which helps with general cybersecurity beyond quantum.
It’s important to celebrate the achievement at this stage: becoming quantum-safe (to the extent possible) is a major resiliency milestone. However, it’s also important to avoid complacency. Management should continue funding and supporting the crypto-agility and monitoring efforts.
The cost of Phase 4 is relatively modest compared to Phase 3 – mostly operational expenses (staff time for monitoring, some ongoing license or support costs for crypto systems, training refreshers, etc.). We might be talking on the order of a few million dollars per year in ongoing costs for a large enterprise (for continuous improvements, audits, maintenance). In return, the organization significantly reduces the risk of a quantum-induced security breach, which, in a telco’s case, could be catastrophic (imagine nation-state actors decrypting years of intercepted phone traffic – an outcome this entire program works to prevent).
By the end of Phase 4, the telco can confidently state that it has achieved quantum readiness: it has overhauled its cryptography to withstand known quantum attacks, established the processes to keep it that way, and fostered the internal culture that values proactive security against emerging threats. This is an ongoing journey, but at this point it becomes part of “business as usual” in the security program.
Key Components and Workstreams in the Program
Throughout the phases described, there are several cross-cutting components or workstreams that the telecom will manage. These are aspects of the program that run in parallel and provide specialized focus on crucial areas. Identifying these workstreams helps ensure nothing falls through the cracks. Here are some of the key components:
Cryptographic Inventory Maintenance
Building the inventory is Phase 1, but maintaining it is a continuous effort. The program should establish a process (and possibly a dedicated tool or team) for keeping the Cryptographic Bill of Materials (CBOM) up to date. Every time a new system is introduced or an old one is decommissioned, the inventory must be updated. Over the 10-year program, the environment will not be static – M&A events, new application deployments, network expansions, etc., will add more cryptography that needs tracking.
One approach is to integrate inventory updates into change management: e.g., any change request must include an assessment of cryptography and update the CBOM if needed.
Automation can help here: some organizations deploy continuous scanning tools that periodically sweep the network for new certificates or crypto instances.
The inventory maintenance workstream ensures that by Phase 4, the telco still knows its crypto landscape. It also sets the stage for long-term crypto-agility: you can’t be agile if you don’t know what you have. In practice, this might involve a small team (or part of the security team) assigned to inventory governance, using dashboards to identify drift (e.g., if an unauthorized algorithm pops up somewhere). This team also would own the inventory documentation and ensure it’s accessible and understandable to all stakeholders (possibly via an internal portal).
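To give a flavor of what such continuous scanning might look like, here is a small sketch that sweeps a few hypothetical TLS endpoints and emits CBOM-style records of the negotiated protocol, cipher, and certificate key type. It uses Python’s standard ssl module plus a recent version of the cryptography package; a real deployment would pull its target list from a CMDB or scanner and feed results into the inventory system.

```python
"""Sketch of a periodic sweep that records what public-key algorithm and TLS
version each endpoint presents, feeding a CBOM-style inventory. Endpoint names
are hypothetical; real deployments would pull targets from a CMDB or scanner."""
import json
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

ENDPOINTS = [("portal.example.com", 443), ("api.example.com", 443)]


def describe_endpoint(host: str, port: int) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            tls_version, cipher = tls.version(), tls.cipher()[0]
    cert = x509.load_der_x509_certificate(der_cert)
    pub = cert.public_key()
    if isinstance(pub, rsa.RSAPublicKey):
        key_desc = f"RSA-{pub.key_size}"
    elif isinstance(pub, ec.EllipticCurvePublicKey):
        key_desc = f"ECC-{pub.curve.name}"
    else:
        key_desc = type(pub).__name__          # e.g., a future PQC key type
    return {
        "endpoint": f"{host}:{port}",
        "tls_version": tls_version,
        "cipher": cipher,
        "certificate_key": key_desc,
        # not_valid_after_utc requires a recent cryptography release
        "not_after": cert.not_valid_after_utc.isoformat(),
    }


if __name__ == "__main__":
    cbom_records = [describe_endpoint(h, p) for h, p in ENDPOINTS]
    print(json.dumps(cbom_records, indent=2))
```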
PQC Algorithm Selection & Validation
Given that PQC is a developing field, the telco should have a workstream focused on cryptographic algorithm management. Early in the program, this team would analyze the candidate algorithms (NIST’s selections, etc.) and decide which ones to standardize on internally.
They would consider factors like security margins, performance benchmarks (latency, throughput on their hardware), and interoperability. For instance, they might test both Dilithium and Falcon for digital signatures in their lab to see which performs better on various devices (both are lattice-based; Dilithium has larger signatures but is simpler to implement, while Falcon has much smaller signatures yet relies on floating-point arithmetic that is trickier to implement securely).
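A lab comparison like that is easy to script. The sketch below assumes the open-source liboqs-python bindings (the oqs package); the algorithm identifiers (“Dilithium3”, “Falcon-512”) vary by liboqs version – newer builds expose NIST names like “ML-DSA-65” – so treat them as illustrative.

```python
"""Rough lab comparison of signature schemes: sizes and sign/verify timings.
Assumes the liboqs-python bindings ('pip install liboqs-python'); algorithm
names vary by liboqs version, so adjust them to what your build exposes."""
import time

import oqs

CANDIDATES = ["Dilithium3", "Falcon-512"]   # illustrative identifiers
MESSAGE = b"\x00" * 1024                    # 1 KiB payload
RUNS = 200


def benchmark(alg: str) -> None:
    with oqs.Signature(alg) as signer, oqs.Signature(alg) as verifier:
        public_key = signer.generate_keypair()

        start = time.perf_counter()
        for _ in range(RUNS):
            signature = signer.sign(MESSAGE)
        sign_ms = (time.perf_counter() - start) * 1000 / RUNS

        start = time.perf_counter()
        for _ in range(RUNS):
            assert verifier.verify(MESSAGE, signature, public_key)
        verify_ms = (time.perf_counter() - start) * 1000 / RUNS

    print(f"{alg:>12}: pubkey {len(public_key)} B, sig {len(signature)} B, "
          f"sign {sign_ms:.2f} ms, verify {verify_ms:.2f} ms")


if __name__ == "__main__":
    for alg in CANDIDATES:
        benchmark(alg)
```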
They might also keep an eye on alternate algorithms (like hash-based signatures e.g. SPHINCS+, or code-based encryption like Classic McEliece if they have niche uses that favor those).
This workstream essentially advises the rest of the program on crypto tech choices. It should stay engaged with the crypto research community – reading latest papers, attending conferences, or working with consultants. If any algorithm gets new findings (positive or negative), this team updates the strategy (for example, if a weakness in algorithm X is found, maybe they shift to algorithm Y).
They might also develop internal standards – e.g., decide on key sizes or parameter sets for each algorithm (some PQC algorithms allow trade-offs with different parameters).
Additionally, this team would likely oversee any use of specialized tech like QKD, ensuring it’s appropriately integrated and truly adds security value. Overall, think of this workstream as the cryptographic brain of the program, making sure the telco’s choices are well-informed and up-to-date.
Crypto-Agile Architecture & Development
To embed crypto-agility, the telecom likely needs an architecture workstream that reviews and guides system designs.
This team will develop the architectural patterns and guidelines to ensure systems are modular in their cryptography. For example, they might create a company-wide API or service for cryptographic operations (so that applications call the service rather than implementing their own crypto). That way, when it’s time to swap algorithms, you update the service and all apps benefit.
They may push for using standardized libraries (like OpenSSL 3.5+, which adds native support for the NIST PQC algorithms) across all software projects, rather than each project bundling its own crypto. This workstream could also oversee modifications to existing systems to decouple business logic from cryptographic logic (a big refactoring task in some cases).
The architecture team likely produces reference architectures or blueprints – e.g., “here’s how to build a microservice that can be configured to use different algorithms, with config files for crypto settings.” They’ll work closely with application development and network architecture teams. A key principle they promote is “design for change”: ensure that no algorithm name or parameter is hard-coded in a way that’s painful to change later.
They may also set up central management for crypto configurations (so that, say, an update to a config file can enable/disable algorithms across hundreds of servers).
By Phase 4, thanks to this group’s efforts, the telco should have a sustainable architecture where introducing a new algorithm (or deprecating an old one) is relatively straightforward – perhaps a matter of updating a library and distributing it. It’s worth noting this is a cultural change for many development teams, who might not have considered algorithm agility before. The architecture workstream often involves educating developers and engineers on why they need to follow these new patterns.
Infrastructure & Network Upgrades
This workstream handles the telecom network infrastructure specifically – essentially the hardware (routers, switches, base stations, optical transport, etc.) and low-level firmware that powers the telco network. These systems are often the most challenging to upgrade (because they are diverse and sometimes have long replacement cycles).
The team here will inventory all network components that use vulnerable crypto (with input from Phase 1 results) and coordinate with network engineering and vendors to plan upgrades or replacements. They need to slot these upgrades into the network’s maintenance schedule (telcos often have strict processes for network changes to avoid outages). For example, upgrading the encryption on microwave backhaul links to PQC might require taking a link down briefly – so scheduling and redundancy planning is needed.
The workstream might break the network into domains: core backbone, access network, mobile core, IMS (IP Multimedia Subsystem for voice), transport network, etc., and tackle each in a sequence. They’ll test new firmware in lab first, etc.
This group must also track input from standards bodies: for instance, if 3GPP releases a spec for 5G with PQC, they’ll work that into their plans. One significant part is ensuring end-to-end interoperability in the network – if one part (say core routers) moves to PQC, but the radio network hasn’t, does it cause any issues? The planning done in Phase 2 will inform this.
Additionally, certain infrastructure might need new hardware – e.g., adding quantum-resistant VPN appliances or quantum key distribution boxes for specific secure links (if adopting QKD). The cost and logistics (shipping, installing equipment at thousands of sites) can be heavy, so this workstream often extends well into the later years of the program.
By the end, this team’s goal is that the underlying network is quantum-safe and all the plumbing (routing protocols, signaling, management channels, etc.) uses approved algorithms.
Application & Software Remediation
Parallel to network gear, the telco’s applications and software systems need updates. This workstream handles everything from internal IT systems (like billing, CRM, data analytics platforms) to customer-facing software (web portals, mobile apps) and any custom-built tools. They will use the cryptographic inventory to find all instances in code where crypto is used and ensure those get updated. For custom code, this likely means linking against new libraries or adjusting API calls (e.g., switching an API call from using RSA to using Kyber KEM for key exchange). In some cases, data formats might change – for example, if a file format has an embedded cryptographic signature, the field sizes might need to increase to accommodate a Dilithium signature. The software remediation team will need to dig into application designs to handle such changes.
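The RSA-to-KEM switch is mostly a change of calling pattern: instead of encrypting a random session key under the peer’s RSA public key, the sender encapsulates against the peer’s KEM public key and both sides derive the same secret. A hedged sketch, again assuming the liboqs-python bindings and an illustrative algorithm name:

```python
"""Sketch of swapping RSA key transport for a KEM-style exchange. Instead of
encrypting a session key under the peer's RSA key, the sender runs 'encapsulate'
against the peer's KEM public key and both sides derive the same shared secret.
Assumes the liboqs-python bindings; the algorithm identifier ('Kyber768' vs
'ML-KEM-768') depends on your liboqs version."""
import oqs

KEM_ALG = "Kyber768"   # illustrative; newer builds expose "ML-KEM-768"

# Receiver side: generate a KEM key pair and publish the public key
# (in practice, inside a certificate or a provisioning record).
with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    receiver_public_key = receiver.generate_keypair()

    # Sender side: encapsulate against the receiver's public key.
    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        ciphertext, sender_secret = sender.encap_secret(receiver_public_key)

    # Receiver side: decapsulate the ciphertext to recover the same secret,
    # which then feeds a KDF to produce symmetric session keys (not shown).
    receiver_secret = receiver.decap_secret(ciphertext)

assert sender_secret == receiver_secret
print(f"shared secret of {len(sender_secret)} bytes established via {KEM_ALG}")
```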
They also need to coordinate with vendors of COTS (commercial off-the-shelf) software the telco uses: for instance, if the telco uses an Oracle database with TDE (Transparent Data Encryption), they’ll push Oracle to support PQC algorithms in a future release or use long AES keys.
This team often overlaps with DevOps and CI/CD processes – i.e., ensuring that as code is built and deployed, it’s using the updated crypto components. They will also oversee re-issuing any certificates or keys in software (like if an application has an embedded certificate for TLS, they’ll generate a new PQC certificate for it).
Testing is important: once an app is updated to support PQC cipher suites, does it still work with clients? Possibly they run dual stacks in apps too. A tricky part is backward compatibility in data: imagine an application that stores data encrypted with classical algorithms – how to migrate that data to PQC encryption without breaking access? Solutions might involve writing migration scripts or doing on-the-fly decryption with old crypto and re-encryption with new as data is accessed.
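One common pattern for that data migration is “lazy” re-encryption on access: decrypt with the old key when a record is read, immediately re-encrypt under the new key, and mark the record as migrated. A toy sketch (using Fernet from the cryptography package purely for brevity – in reality the new key would live in the PQC-enabled KMS/HSM rather than be generated in code):

```python
"""Sketch of migrating stored data 'on access': records encrypted under the old
key are decrypted when read and transparently re-encrypted under the new key."""
from cryptography.fernet import Fernet

old_key, new_key = Fernet.generate_key(), Fernet.generate_key()
old_crypter, new_crypter = Fernet(old_key), Fernet(new_key)

# A toy datastore: record id -> (key_generation, ciphertext)
datastore = {"cust-001": ("legacy", old_crypter.encrypt(b"account data"))}


def read_record(record_id: str) -> bytes:
    generation, blob = datastore[record_id]
    if generation == "legacy":
        plaintext = old_crypter.decrypt(blob)
        # Re-encrypt under the new key so the legacy key can eventually be retired.
        datastore[record_id] = ("pqc-era", new_crypter.encrypt(plaintext))
        return plaintext
    return new_crypter.decrypt(blob)


print(read_record("cust-001"))          # first read migrates the record
print(datastore["cust-001"][0])         # -> 'pqc-era'
```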
This workstream needs good project management because dozens of applications might each be a mini-project. By the end of the program, essentially all software should have been either upgraded or replaced, such that it either is quantum-safe or at least crypto-agile and ready to switch once the counterpart systems support it.
Public Key Infrastructure (PKI) and Key Management
PKI and key management systems deserve special mention. As noted, telcos use PKI extensively: VPN certificates, web TLS certificates, code signing for firmware, device certificates (like eSIM or eUICC profiles), etc. The PKI workstream will set up the new quantum-safe PKI alongside the legacy one. This involves deploying new Certificate Authority software (or upgrading existing ones if the vendor provides PQC support). It also involves possibly using new Hardware Security Modules (HSMs) that can handle PQC keys; some older HSMs might not support PQC algorithms or key sizes, so new ones (with updated firmware) may be required. The team will need to figure out certificate formats (hybrid certificates or pure PQC certs) and possibly coordinate with external CAs if any public certificates are used for customer-facing services (e.g., if the telco’s website wants a Dilithium certificate, they need a public CA that can issue that – not yet common, but likely will be in the coming years).
They should also establish policies for key management: for example, are key lengths being increased for symmetric keys (AES-256 instead of AES-128, since Grover’s algorithm roughly halves the effective key strength of symmetric ciphers)? How often should keys be rotated now? They might decide that certain keys (like root CA keys) should be kept classical+PQC in parallel for a while. The PKI team will run pilot issuance of PQC certificates, test validation in systems, and then migrate live systems to use them. They’ll maintain a dual PKI until cutover – which means careful coordination to avoid confusion about which cert is trusted where.
The Key Management side also includes updating any key exchange protocols and key storage: e.g., a centralized KMS might need to support storing bigger PQC private keys, backup processes for those, etc.
By the end, the telco should have a robust PQC-enabled PKI that underpins trust for all devices and services, and an updated key management framework that can handle both classical and quantum-safe keys through their lifecycle.
Vendor and Supply Chain Management
As hinted earlier, managing the 1200+ vendors (in a large telco’s case) is itself a massive undertaking. This workstream focuses on working with every external supplier to ensure they are on track. It involves sending out questionnaires or requirements to vendors asking: Do your products use vulnerable crypto? What is your quantum-safe roadmap? and getting commitments.
A telco could organize a “Quantum Readiness Forum” – essentially a summit (virtual or in-person) with all key vendors to share the telecom’s expectations and timelines, and facilitate knowledge sharing. The telco can share things like “here’s how we’re testing PQC in our lab” so vendors can align. This collaborative approach can uplift lagging vendors.
However, the telco should also be ready to enforce compliance. That means reviewing and renegotiating contracts: many existing contracts might not have clauses about quantum-safe requirements (since this is new). The procurement/legal team might add addendums requiring vendors to, say, meet certain standards by a deadline or risk contract termination. For new contracts, definitely include PQC requirements (e.g., any new equipment must support NIST PQC algorithms and be crypto-agile, otherwise it won’t be accepted).
In some cases, the telco may need to switch vendors if a current supplier is unable or unwilling to provide upgrades in time. For instance, if a smaller vendor says “we have no plan to update this product,” the telco might start looking for an alternative product from a competitor that is more proactive. This is obviously a last resort due to cost, but it might be necessary for critical components.
The vendor management team will track each vendor’s progress, perhaps building a dashboard (e.g., out of 1200 vendors, how many have delivered PQC updates, how many are in progress, how many have no plan at all). Some regulators may ask telcos for this info as part of critical infrastructure protection – so maintaining evidence of engaging suppliers is important.
By collaborating through industry groups (like GSMA), telcos can also collectively push for solutions (for example, agreeing on standard approaches for PQC in SIM cards, so all SIM vendors implement it similarly).
In summary, this workstream is a combination of carrot and stick: carrot = engage, educate, partner with vendors for a common goal; stick = impose requirements, deadlines, and leverage purchasing power to ensure compliance.
Standards and Regulatory Alignment
Telecom is highly standardized (3GPP, ITU, IEEE, etc.) and often regulated (government telecom authorities). The program should have a workstream dedicated to external standards & compliance. This team’s job is to interface with standards bodies, industry groups, and regulators, and ensure the telco’s plans align with or influence those external requirements. For example, team members might participate in 3GPP SA3 (security group) meetings that discuss PQC in 5G/6G, or in ETSI’s quantum-safe cryptography working group. By being at the table, they get early insight into what’s coming (e.g., a new standard for quantum-safe 5G AKA protocol) and can prepare internally. They can also voice the telco’s needs – like “we need a standard for hybrid 5G authentication so we can implement it, here are our use-cases.” This helps shape standards that vendors will follow.
On the regulatory side, if governments are moving up timelines (the EU, for instance, urging quantum-safe encryption in critical sectors by 2030), this team ensures the telco will comply by those dates. They might handle required reporting – some jurisdictions might ask telcos to provide their quantum readiness status or plans.
Also, if any export controls or legal constraints exist around cryptography (some countries regulate use of certain algorithms or key lengths), they navigate those – ensuring the chosen PQC algorithms are permitted in all countries the telco operates in.
By Phase 4, this workstream would transition to a monitoring role: keeping track of any new laws (e.g., if in 2032 a country says “all telecom data in transit must use PQC or QKD”), and ensuring the company stays ahead of them. Essentially, this group makes sure there are no nasty surprises from the outside world that derail the program, and conversely, that the telco can demonstrate compliance and leadership in quantum security when needed.
Lawful Interception and Legal Obligations
This is a very telecom-specific aspect, but worth highlighting. Telecom operators have legal obligations like lawful interception (LI) of communications for law enforcement (under court orders). As cryptography evolves, the LI systems themselves and the processes around them need updates. The LI systems often involve handing over intercepted data (calls, messages) to law enforcement monitoring facilities over secure links. The telco must upgrade those handover interfaces to be quantum-safe – it would be ironic if you secure everything else but leave intercepted data vulnerable to quantum decryption.
Additionally, the warrants and requests might be digitally signed – those mechanisms should move to PQC signatures to prevent forgery in a future where classical signatures are breakable.
The LI workstream would coordinate with law enforcement agencies and LI equipment vendors to plan these upgrades. It’s a sensitive area because it involves government partners and strict regulation. The telco can’t unilaterally change LI methods without regulatory approval, so it might require working with government to update laws or technical standards for LI (some countries have specific rules on encryption in LI context).
The program plan should not overlook these obligations, because failure to maintain lawful access capabilities (in a secure way) could put the telco in legal non-compliance. Similar considerations apply to things like emergency services (e.g., ensuring that encryption of 5G emergency calls remains interceptable by authorities but securely so).
The output of this workstream is to ensure all legal mandates are met quantum-safely – which might even entail new systems (for example, deploying a dedicated quantum-safe network path for LI data to police).
Training and Change Management
Last but not least, a crucial workstream is managing the human element. A PQC migration touches many teams who may not be familiar with the concepts of quantum computing or the new algorithms. Thus, the program should include a comprehensive training and awareness component. This ranges from deep technical training (for cryptography engineers, developers implementing PQC – they might need to learn how to use new libraries or understand lattice crypto properties) to general awareness for all employees (explaining why the company is investing in this, to build support). Training can be done via workshops, e-learning modules, vendor-led sessions, etc. For example, the security team might run an internal conference on “Quantum Threat and Our Response” to get everyone on the same page.
There should also be specific training for operations staff: the folks in NOCs (Network Operations Centers) who will have to troubleshoot new issues (“Is this alert because of a PQC certificate error?” – they need to know how to handle that), or the PKI administrators who must deal with larger keys and new HSM procedures.
Additionally, change management in the organizational sense is vital: the program is likely to encounter resistance or inertia (as described earlier in internal challenges). The training/CM team can address this by ensuring there are quantum readiness champions in each department, facilitating communication between siloed groups, and celebrating milestones to keep morale up.
Sometimes, bringing in an external expert to brief the C-suite or board about quantum risk can secure continued top-level buy-in (countering any leaders who think “this is a waste of time”).
The human component also involves hiring if needed – the program might identify skill gaps (maybe they need a couple of cryptographers or more security architects); the training team can push for those hires or arrange external consulting to fill gaps.
By program’s end, ideally the organization has not only the technical capability but also a culture that is supportive of proactive security measures – seeing the quantum-readiness effort as a success story of cross-team collaboration rather than a burden. This cultural shift can pay dividends in other areas of cybersecurity as well.
Each of these workstreams would have its own detailed plan and set of deliverables, but they all interconnect. For instance, the vendor management team feeds info to the infrastructure team about when vendor updates will be ready; the PKI team works with the application team to swap out certificates; the training team supports all others by improving skills. Coordination among workstreams is managed by the program’s governance (often a program manager or PMO ensures the pieces move in sync).
It’s useful to map these onto the phases: some workstreams are heavier in certain phases (e.g., inventory is heavy in Phase 1, vendor engagement ramps up in Phase 2 and is critical in Phase 3, training is continuous but there’s an early push, etc.). By identifying these components, the telco can allocate resources more precisely (e.g., have dedicated budget for training aside from the pure tech, have separate leads for PKI vs network upgrades because each is a full-time job, etc.).
In summary, a quantum-readiness program is not a single project but a portfolio of coordinated projects. The above list, while long, reflects the breadth of efforts required: from technical nitty-gritty of algorithms to legal compliance and people management. It underscores why this is often called the biggest cybersecurity overhaul in decades – few initiatives have to simultaneously deal with so many facets of the business.
The good news is that tackling these in parallel (with a strong strategy to keep them aligned) makes the challenge manageable. Many large organizations have done comparable multi-stream programs (think of a large digital transformation or a merger integration); it’s daunting but feasible with proper planning.
Timeline Estimate: A Phased Roadmap over a Decade
While exact timelines will vary by organization, we can sketch a plausible 10+ year timeline for a large telco’s quantum-readiness program, incorporating the phases and workstreams discussed. Keep in mind this is a high-level estimate – in reality some phases overlap and certain tasks might start earlier or later – but this gives an idea of how the journey might progress:
Year 0 – Year 1: Initiation and Discovery
The program kicks off. In the first few months, the telco secures executive sponsorship and funding for the initiative (making the case with the looming quantum threat and perhaps regulatory pressure).
A core team or Program Management Office (PMO) is established.
By mid-year, the asset discovery and cryptographic inventory (Phase 1) is in full swing – teams are scanning systems, collecting data from departments, and building the initial CBOM.
There’s an early emphasis on awareness: internal briefings are held so that all relevant teams understand what PQC is and why the inventory is needed (to smooth cooperation).
By the end of Year 1, the goal is to have a preliminary cryptographic inventory completed – maybe not 100% yet, but a solid map of most systems – and an initial high-level risk assessment identifying obvious high-risk areas.
Also, some quick wins might be achieved in Year 1: for instance, if any critical system was found using truly obsolete crypto (like 1024-bit RSA or SHA-1), the team might go ahead and patch it to stronger classical crypto as an interim fix (a no-regrets move).
The end of Year 1 might be marked by a report to executives summarizing “Here is our cryptographic footprint and where we stand.”
Year 2 – Year 3: Assessment, Strategy, and Early Pilots
With inventory data in hand, the Phase 2 (Assessment & Planning) activities ramp up.
In Year 2, the team performs a detailed impact analysis on the inventory: categorizing which systems/data are most at risk.
They prioritize and develop the overall cryptographic strategy and roadmap. This likely involves writing a formal strategy document and getting it approved by leadership by around the end of Year 2.
During this time, the telco also engages vendors and industry peers: sending out requirements, attending standards meetings, joining the GSMA task force if not already a member.
Meanwhile, toward the latter half of Year 2 or early Year 3, the telco begins pilot testing PQC in lab environments. For example, they might set up a testbed replicating a segment of their network (a few 5G base stations and core network elements) and implement a hybrid PQC link between them to observe behavior. Or spin up a sandbox IT environment where they configure PQC-enabled TLS and measure performance.
These pilots (Year 2–3) provide input to the planning: if a chosen algorithm proves too slow on certain hardware, they’ll note needing hardware acceleration or an alternate algorithm.
By Year 3, the program has firmed up its implementation plan (Phase 3 plan): specific projects with budgets and timelines. Also by end of Year 3, we might see the first production deployment of a quantum-safe component – often this could be a PQC-capable internal PKI. For instance, the telco could build a new Certificate Authority that issues hybrid certificates, and use it (in parallel with existing PKI) to start issuing a few non-critical certificates (like for internal test systems) as a proof of concept.
This “Phase 3 pilot” signals the shift from planning to doing.
Year 4 – Year 5: Core Implementations Begin (Wave 1)
Now the heavy lifting of Phase 3 (Implementation) is underway. Years 4 and 5 likely focus on upgrading the highest-priority systems identified earlier. The telco might designate this as Wave 1 of implementation. Examples of activities in this period:
- Deploy quantum-safe VPNs or tunnels for critical internal links (like between data centers). Possibly in hybrid mode at first.
- Upgrade the firmware on core network routers and switches (if vendors have released PQC-supporting updates by this time, which is likely since NIST standards were finalized in 2024 and many vendors target ~2025-2026 for releases). The telco might, say, enable hybrid IPsec on all backbone links during Year 4.
- Issue PQC-based credentials to critical infrastructure: e.g., give all network nodes a hybrid certificate from the new PQC CA, so that management sessions or inter-node communication can use those certs.
- Similarly, start signing software updates with a PQC signature in addition to classical, ensuring that even if classical is broken in a few years, devices can still verify authenticity with the PQC signature.
- Focus on customer-facing portals and services that handle sensitive data: upgrade their TLS endpoints to support PQC cipher suites. Possibly offer it as a beta feature: e.g., the telco could allow connections over TLS 1.3+Kyber/Dilithium for clients that signal support (like a particular updated browser version), while others continue with TLS 1.2/1.3 classical.
- Internal governance setup: by Year 4, the program should formalize new policies (like crypto-agility requirements) and ensure new projects in the company are following them. Any new system introduced now should ideally be PQC-ready from the start, to avoid adding tech debt.
By end of Year 5, one would hope that all Tier-1 (critical) systems have at least a quantum-safe option deployed. They might not be running exclusively PQC (likely hybrid), but the mechanisms are in place. For example, perhaps the telco can say: all core network traffic (backbone, control-plane) is now protected with at least hybrid quantum-safe encryption; all critical databases have had their encryption keys doubled in size or re-encrypted with post-quantum algorithms where available; etc. There may still be a lot of dependent systems not done, but the crown jewels are addressed.
Importantly, by Year 5 the telco should also have solved major design questions – e.g., finalized how they will handle older devices (plans in place to encapsulate or replace them by a set timeline), and ensured the supply chain is delivering (vendors of the most critical gear should have provided updates by now, or interim solutions are in place).
Year 6 – Year 8: Broad Rollout (Wave 2 and 3)
In this middle period, the program tackles the bulk of the remaining systems. This is where many of the workstreams described are in full swing simultaneously. Key milestones and activities could include:
- Network-wide deployment: By now, PQC-capable firmware should be rolling out to all network elements, not just core. For instance, all the base station controllers, all the fiber optic network encryptors, microwave links, etc., get upgraded. The telco might coordinate these with regular tech refresh cycles (if they were going to replace some hardware anyway, they ensure the new one is quantum-safe). Given the scope, these upgrades likely happen region by region or segment by segment. If any hardware must be replaced due to lack of PQC support, those replacement projects (ordering, installing) occur in this window.
- Application updates: The majority of IT applications and platforms are remediated in this period. If the telco has, say, 200 in-house software applications, by Year 8 they aim to have all of them moved to using the approved PQC algorithms (or at least ready to flip to them). This could involve many code releases and version updates. Vendor software (ERP, CRM, etc.) would be upgraded to latest versions that support PQC (assuming vendors like Microsoft, Oracle etc., by this time have included PQC in their products – which is likely around 2025-2030).
- Customer device updates: If there is an impact on customer devices (for example, broadband routers, set-top boxes, or SIM cards needing upgrades), the telco will likely start executing that now. Perhaps they start issuing quantum-safe SIMs to new subscribers and have a plan to swap older SIMs by a certain date (some telcos might do this during normal SIM replacement cycles unless a critical need arises sooner). Or they push firmware updates to customer home routers to enable PQC cipher suites on the routers’ management interface or VPN backhaul. The extent of customer device changes depends on the telco’s services – a mobile operator might have to consider handsets and SIMs (which rely on standards – likely 5G/6G standards will by now have specified any changes needed for PQC).
- Cross-organization testing: As more systems become PQC-enabled, the telco will do broader testing such as end-to-end service tests. For example, test a scenario: a user with a quantum-safe device makes a phone call that goes through a quantum-safe core network to another network – ensure that interconnect works (possibly still classical if the other network isn’t upgraded; thus test fallback to classical on interconnect while maintaining PQC internally). Work with other carriers on testing roaming scenarios: one network uses PQC, another not – how does the device roam securely? The GSMA task force likely coordinates such tests among operators.
- Vendor forum & compliance: By Year 6-8, the telco’s push with vendors should yield results: hopefully the majority of vendors have delivered updates. Any stragglers would be escalated (up to executive level meetings, etc.). If needed, alternate suppliers might be phased in for critical components that a particular vendor failed to address. The telco might hold another big vendor meeting around Year 7 to review progress and remind everyone of any final deadlines (like “by 2030 no product without PQC will remain in our network”).
If we call these years Wave 2 and Wave 3, by the end of Year 8 the telco is probably through ~80-90% of the migration. Most systems should be running in dual mode (classical+quantum). The internal focus might shift from deploying new tech to decommissioning old tech. For example, they might begin turning off support for known weak algorithms on internal systems, essentially “raising the floor” so that only strong crypto is used internally. Externally, they might announce to partners/customers that by a certain date they will require all connections to use at least hybrid PQC (giving advance notice).
Year 9 – Year 10: Final Phase and “Flag Day” Transitions
In the latter part of the decade, the program enters a concluding phase. Activities here include:
- Decommissioning legacy cryptography: The telco will choose a point (or several points) to finally disable or remove remaining classical algorithms. This might happen in stages: e.g., first disable 1024-bit RSA usage (if any still left), then by Year 9 disable all RSA/ECC for internal system connections entirely, and perhaps by Year 10 stop accepting classical-only connections even from external sources (assuming alternatives exist for those external parties – this is tricky, might depend on industry progress). A major goal could be that by the end of Year 10, the telco’s public-facing services no longer allow purely classical TLS handshakes – they require at least hybrid or PQC. Achieving that will depend on client readiness (browsers, customer devices, etc., which by 2030 should have support).
- Addressing remaining exceptions: Any leftover items that couldn’t be upgraded – maybe a handful of IoT sensors or a legacy system slated for retirement in 2 more years – will be formally documented and isolated. The telco might implement heavy monitoring on those or compensate by other means. If some of these can be swapped out with short notice when Q-Day is imminent, the plan for that is ready.
- Audit and validation: Around Year 10, the telco might conduct an independent audit or assessment to verify that they’ve achieved their quantum-safe objectives. This could involve a thorough review of the cryptographic inventory (making sure nothing was missed) and penetration testing focusing on cryptographic attack paths. Regulators might also be keen to get an audit report. Essentially, this is the point of saying “we set out to do X, have we done it?”.
- Celebrating success and institutionalizing practices: The program, as a separate entity, might wind down by Year 10. The remaining tasks are handed off to normal operations. There might be a formal closure report to executives: e.g., “We have migrated 1000+ applications and 500 network systems to PQC, retired 50 legacy systems, updated 1200 vendor contracts, trained 500 staff, etc., over 10 years.” The telco could even use this in PR – being able to say “we are one of the first fully quantum-ready telcos.”
Notably, the timeline can shift if an unexpected quantum breakthrough occurs. If by Year 8 there are rumors that a CRQC might arrive earlier than thought, the telco might accelerate the final cutovers, potentially compressing Year 9-10 activities into a scramble. Conversely, if progress is slower (say some industries lag, or a key PQC algorithm faces an issue and needs replacement mid-course), some milestones might slip beyond Year 10.
Year 11 and beyond: Ongoing Evolution
After the formal program, the telecom enters the continuous Phase 4 operations as described. Crypto-agility is now part of life. The organization remains vigilant for changes. If, for example, NIST announces in 2032 a new algorithm to replace one that’s been weakened, the telco can spin up a mini-project to roll that out in, say, 1-2 years, thanks to the groundwork laid. If quantum computers still haven’t materialized, the telco nonetheless maintains readiness and uses the extra time to further optimize and potentially help others (maybe by this time, smaller companies or late adopters look for advice – the telco could even consult or offer quantum-safe services as a business line).
It’s interesting to compare this kind of timeline to historical ones. For instance, the transition from SHA-1 to SHA-256 took roughly a decade (2005–2015) for widespread adoption, and that was with far simpler logistics (mostly just issuing new certificates and software updates). A PQC migration, as we see, is far more multi-faceted, hence the decade or more timeline is justified. Organizations that start late might find themselves trying to compress these activities into fewer years, which would be extremely challenging – one reason why starting early (mid-2020s) is heavily advocated.
Finally, it’s worth noting the timeline may not be linear in effort – the early years are heavy on planning (less visible change), the middle years heavy on deployment (peak effort), and later years on cleanup and transition. Maintaining momentum throughout is key. Many large projects suffer fatigue after a few years; by outlining interim achievements (e.g., “by Year 5, 50% of our environment is quantum-safe”) and celebrating those, the program can sustain support to get through the long haul.
Resource and Cost Estimates for the Quantum-Readiness Program
A program of this magnitude demands a significant investment. While costs can vary widely based on organization size and how the effort is executed, we can provide ballpark estimates for each major component of the program, as well as the overall total. It’s important to emphasize these are rough figures – actual costs would be refined with detailed analysis – but they illustrate that this is not a cheap endeavor. However, the cost should be weighed against the potential consequences of not acting (which could be far higher, e.g., massive breaches or system overhauls in crisis later). We’ll outline costs by phase/component, then summarize:
Phase 1 – Discovery (Asset & Crypto Inventory)
The initial discovery phase is labor-intensive. It involves deploying scanning tools, running analyses, and manually investigating systems that automation can’t reach. A large telecom might spend $2 – $5 million on this phase. This includes purchasing or licensing discovery tools (perhaps a few hundred thousand dollars), and staffing (possibly a dedicated team of 10+ for a year). Often consultants are hired to assist in crypto inventory, which can run into the high six figures in fees. The result is a comprehensive inventory which is the foundation for everything else – so while a few million upfront seems high, it’s a one-time cost to map out thousands of systems.
Phase 2 – Assessment & Planning (Risk Prioritization & Strategy)
This phase’s cost is primarily in expert time – analyzing inventory data, running risk models, holding strategy workshops, etc. There may be some spend on external advisors or cryptographers to validate the strategy. One might budget around $1 – $3 million for this phase. That could include, for example, hiring a specialized consultancy to run a series of readiness assessments and develop a roadmap (which can easily be $500K+ for a large enterprise engagement), and internal labor for the core team over the year or so (some staff might overlap with Phase 1 team). Planning might also involve pilot projects in a lab – allocating say $100K for lab equipment upgrades or experimental licenses. In the grand scheme, planning is perhaps only ~1-2% of total program cost, but it’s crucial to avoid missteps. Note that thorough planning can save money later by prioritizing correctly (spending the big bucks where needed most). This phase also sets the budget ask for implementation. By its end, the telco would likely have an estimated overall budget figure for the whole program to present to executives.
Phase 3 – Implementation (Execution of Upgrades)
This is by far the largest cost segment. It encompasses hardware, software, and labor over multiple years. Let’s break down some major contributors within implementation:
- Hardware/Equipment Upgrades: A telco might need to replace or add hardware like routers, base stations, HSMs, accelerator cards, possibly even endpoint devices. Suppose the telco has 5,000 network devices that need new crypto hardware modules at an average cost of $20,000 each – that’s $100 million right there. Not all devices will need full replacement; some just need firmware (which is labor but not hardware cost). However, critical infrastructure like HSMs (for PKI) might be, say, 10 new HSMs at $50K each ($500K). If QKD is experimented with, a pair of QKD devices for a link can cost tens of thousands as well. It’s reasonable to estimate hardware-related costs in the low hundreds of millions for a large global telco. On the order of $100M – $200M could go into equipment and devices over the decade, especially if customer premises gear or SIMs are included (e.g., replacing 10 million SIM cards at $2 each is $20M).
- Software & Licenses: Many software systems will require upgrades to new versions (which might be covered under maintenance contracts, or might require new licenses). Let’s say $10M is allocated for various software upgrades and licenses for new crypto libraries, etc. If the telco uses any custom-developed PQC solutions or buys third-party PQC products (like a quantum-safe VPN software or a crypto-agility management platform), those could be a few million as well.
- Labor (Engineering/Deployment): During peak years, dozens of engineers (network, software, security) will be working on the rollout. If we assume at peak 50 FTEs (full-time equivalents) working for, say, 5 years on various tasks, that’s 250 FTE-years. Valuing that at an average loaded cost of $150K/year (for simplicity, including overhead) – that’s about $37.5M in internal labor. Add external contractors or professional services for specialized installation or integration tasks – maybe another $10M spread over years. So labor might be on the order of tens of millions (~$40-50M) for implementation. It could be more if the program relies heavily on outside services (consultants can be expensive, but sometimes internal staff is reallocated, which has an opportunity cost but not an incremental cost).
- Testing and Pilots: Throughout implementation, significant resources go into testing and quality assurance – setting up testbeds, running interoperability tests, etc. The telco might invest in a dedicated test lab environment with PQC test gear. Over years, this could sum to a few million.
- Contingency and Miscellaneous: Large programs often budget 10-15% for contingencies. On a big scale, that itself is a big number (could be $20-30M reserved) in case, for example, a major re-design is needed mid-course or an accelerated timeline is required.
Putting it together, Phase 3 could reasonably cost on the order of $200 – $300 million (or more) for a large telecom. To take a mid-point, maybe ~$250M over the multi-year implementation. It’s a huge sum, but to put in perspective, large telcos often spend billions annually on network upgrades generally. This program’s spend would be a significant portion of a multi-year CAPEX/OPEX budget. Not all at once – it would be spread (maybe $25-50M per year over 8 years). To justify it, note that the U.S. government estimates $7.1B for its agencies to transition by 2035; a single large telco is smaller than the entire U.S. federal enterprise, but a few hundred million is in line with being, say, a few percent of that scope. Also, recall that telcos in 3G/4G upgrades spent comparable amounts for technology shifts (though those had direct revenue tie-ins; here the spend is for security resilience).
Phase 4 – Ongoing Operations
Once the heavy lifting is done, ongoing costs are relatively modest but not zero. There will be continuous monitoring systems (maybe subscriptions to quantum risk intel, etc.), ongoing training refreshers, and maintenance of new systems (the new PKI, new hardware – which have support contracts).
We might estimate an annual ongoing cost of perhaps $2 – $5 million per year in the steady state for a telco to maintain quantum readiness. Over a decade, that’s maybe another $20-50M. This includes keeping staff like cryptographers on payroll, performing regular audits (could be external audits costing hundreds of thousands), and participating in standards (travel, memberships).
It’s wise to budget for periodic tech refresh too – for example, if by 2035 new PQC hardware or algorithm upgrades are needed, have funds earmarked for that. These costs essentially become part of the security operating budget indefinitely. Compared to the implementation spike, this is small.
Training and Change Management
Spread across phases but deserves earmarked budget. A comprehensive training program for thousands of employees, plus specialized crypto training, plus perhaps hiring a few new experts. A large company might easily spend $1M+ on training over the years (bringing in experts, developing courses, etc.). Change management activities (like running the vendor forums, internal communications campaigns) also incur costs (event costs, materials); maybe another several hundred thousand. We include this for completeness.
Total Program Cost (10+ years)
Summing the above rough figures: Phase 1 ($3M avg) + Phase 2 ($2M) + Phase 3 ($250M) + Phase 4 ($30M over later years) + training ($1M) gives around $286M. Given the uncertainty, it’s reasonable to round and say on the order of $300+ million. We could say $300–500 million to be safe, as some telcos might need more hardware replacements than others. It’s a big range, but even the low end is a substantial number.
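For what it’s worth, the roll-up is simple enough to keep as a living model rather than a static number. The sketch below just reproduces the rough figures above (illustrative, not real data); a real business case would model ranges and per-year cash flow rather than point estimates.

```python
"""Roll-up of the rough per-phase figures above (illustrative numbers, in $M).
A real business case would model ranges and per-year cash flow, not points."""
phase_costs_musd = {
    "Phase 1 - Discovery": 3,
    "Phase 2 - Assessment & Planning": 2,
    "Phase 3 - Implementation": 250,
    "Phase 4 - Operations (10 yrs)": 30,
    "Training & Change Management": 1,
}

total = sum(phase_costs_musd.values())
for phase, cost in phase_costs_musd.items():
    print(f"{phase:<35} ${cost:>5} M  ({cost / total:5.1%})")
print(f"{'Total (rough)':<35} ${total:>5} M")
```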
For a large global telecom, a ballpark figure of a few hundred million dollars over a decade for quantum readiness is justifiable. It aligns with market research projections: for instance, the global PQC market is expected to reach ~$30B by 2034, driven by thousands of organizations all investing in similar programs. A single major telco’s $300M is one piece of that global spend. Another comparison: the cost of inaction could be higher – a major breach of a telecom can cost hundreds of millions in damages, customer loss, and regulatory fines. So executives may see this as an insurance-like investment.
It’s worth noting that some costs blend into normal tech refresh. If the telco smartly aligns quantum upgrades with regular upgrade cycles (e.g., replacing routers at end-of-life with quantum-safe models), some of that spend might have happened anyway, just on different equipment. In that sense, not all the budget is “new” money solely for quantum; a portion is reallocated planned spending but directed to quantum-safe tech. That can help in business cases (reduce the net incremental cost).
Also, cooperation can lower costs: sharing testing results among industry via GSMA, using open-source implementations of PQC to avoid high licensing fees, etc., can make things more efficient. But inevitably, a lot of bespoke integration work is needed in each organization.
To summarize, a comprehensive quantum-readiness program is a significant investment – likely in the hundreds of millions of dollars for a large telecom – spanning hardware, software, manpower, and extended maintenance. This is a high price tag, but leaders increasingly recognize it as the cost of doing business securely in the future. The White House explicitly noted that while the transition “won’t be cheap,” ignoring the problem until quantum attacks emerge would be “much, much more expensive.”
Conclusion: A Marathon Endeavor, But Mission-Critical for Security
Preparing a large telecom (or any enterprise) for the post-quantum cryptography era is a massive, multi-faceted undertaking, but it is achievable with foresight, resources, and commitment. We’ve seen that it involves much more than just installing new algorithms – it’s about transforming an organization’s approach to cryptography across potentially thousands of applications and devices, under uncertain timelines and in coordination with many external players.
In all likelihood, this quantum-readiness program will be one of the most complex IT/security projects the organization has ever executed, comparable to – or even exceeding – major transformations like the rollout of a new network generation or a large merger integration. The program spans technology, process, and people: from the nuts-and-bolts of lattice-based encryption performance, to policy-setting and vendor negotiations, to overcoming human resistance and silos.
A few closing insights and lessons from this analysis:
- Start Early and Plan for the Long Term: If there’s one takeaway, it’s that quantum migration is a long game. Underestimating the effort (the “just patch it later” mentality) is dangerous. Organizations should begin the groundwork now – especially inventories and setting strategy – even if they think Q-Day might be 10+ years out. The program will likely take that long to complete anyway. Those who procrastinate could find themselves in a time crunch or, worse, caught unprepared if breakthroughs accelerate. As one telecom’s 10-year (and counting) project shows, even starting early doesn’t mean you’ll finish early. But starting late virtually guarantees a scramble and higher risk.
- Global Collaboration is Key: No telco operates in isolation; likewise, no company can solve the quantum threat alone. The example of 50+ companies collaborating in GSMA’s task force underscores the value of sharing knowledge and aligning efforts. Similarly, engaging with standards bodies ensures you’re not caught off guard by technical requirements. Pushing vendors collectively amplifies pressure (vendors have heard consistent demands across their customer base). Quantum security is ultimately an ecosystem problem – the security is only as strong as the weakest link in the chain of partners, suppliers, and connected parties. The more organizations band together (through forums, industry groups, open-source efforts for PQC implementations, etc.), the smoother and less costly the transition will be for everyone.
- Balance Caution with Pragmatism: Throughout the program there will be many decision points with no clear “right” answer – which algorithms to choose, when to cut over fully, how much to invest in protecting something that might be retired soon, and so on. Let risk-based thinking be the guide: protect the most critical assets first, and don’t let the pursuit of perfection in one area delay action in another that urgently needs it. At the same time, maintain agility – if a plan isn’t working (a vendor is too slow, or a chosen algorithm runs into trouble), be ready to pivot, and build contingency into the plans. In a way, the program itself must be agile, just like the cryptography it’s deploying (a minimal sketch of that crypto-agility pattern appears after this list). There won’t be a complete “rule book” from day one – the organization will learn and adapt as it progresses, which is yet another reason to start early and bank that learning time.
- Don’t Neglect the Human Element: We highlighted how internal misalignment and culture can make or break this effort. It’s crucial to secure broad buy-in – from the boardroom to the engineering trenches. Continuously educate and update stakeholders on progress (celebrate small wins, show the value of what’s being done). Address fears and objections head-on: for example, if some managers fear audits will expose their realm, frame it as a collective improvement effort, not a blame game. Executive tone from the top is vital; if leadership treats quantum readiness as a strategic priority (not just an “IT project”), teams will be more motivated to cooperate. In large programs, often “people problems” cause more delay than technical ones – being aware of that and proactively managing change can literally shorten the timeline by years.
- Invest for Resilience (Cost of Inaction > Cost of Action): The cost estimates above, in the hundreds of millions, are daunting. Yet consider the flip side: if a telecom’s communications were compromised by a quantum-enabled adversary, the fallout could be enormous – think nation-state espionage on critical infrastructure, or criminals decrypting customer data. The brand damage, customer churn, regulatory penalties, and after-the-fact remediation would also run into the hundreds of millions, if not more, and by then it would be too late – the damage would already be done. Spreading, say, $300M over 10 years is more palatable and controllable than facing a crisis with a similar or higher price tag compressed into a short window. Moreover, many of the investments (like new hardware or modernized applications) bring ancillary benefits: improved performance, better classical security, and so on. There’s truth in the saying “an ounce of prevention is worth a pound of cure.” In this case, the prevention is costly, but the potential cure (trying to rapidly recover from not being quantum-safe when an attack hits) might be impossible. As noted above, a U.S. government report put it plainly: ignoring the problem until it’s imminent would be “much, much more expensive” than a planned transition.
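As promised in the “Balance Caution with Pragmatism” point above, here is a minimal sketch of what a crypto-agility pattern can look like in application code. The registry, the policy dictionary, and the algorithm name are illustrative assumptions rather than any specific product’s API; the point is simply that code which resolves “the current algorithm” from central policy can pivot to a different scheme through configuration rather than re-engineering.

```python
# Minimal crypto-agility sketch: application code asks a registry for "the
# current signing scheme" instead of hard-coding one, so a later pivot (say,
# from a classical algorithm to a post-quantum one) becomes a policy change.
# All names here are illustrative assumptions, not a specific vendor's API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SignatureScheme:
    name: str
    sign: Callable[[bytes, bytes], bytes]          # (private_key, message) -> signature
    verify: Callable[[bytes, bytes, bytes], bool]  # (public_key, message, signature) -> valid?

# Registry populated at startup from vetted implementations (classical and PQC).
_REGISTRY: Dict[str, SignatureScheme] = {}

def register(scheme: SignatureScheme) -> None:
    """Make an implementation available under its policy name."""
    _REGISTRY[scheme.name] = scheme

def current_scheme(policy: Dict[str, str]) -> SignatureScheme:
    """Resolve whichever scheme central policy currently mandates,
    e.g. policy = {"signing": "ML-DSA-65"}."""
    return _REGISTRY[policy["signing"]]
```

Swapping algorithms then means registering the new implementation and updating the policy entry; the calling code never changes. That same decoupling is what the program as a whole should aim for at the architectural level.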
In conclusion, a telecom that executes a comprehensive quantum-readiness program will not only mitigate the specific quantum threat but will come out the other side with a far stronger overall security posture. The organization will have up-to-date cryptography, a disciplined inventory of its systems, an agile approach to future crypto changes, and a workforce educated on advanced security concepts. It essentially “future-proofs” the security of the network to the extent possible. Given how foundational communications security is to economies and societies, this is a mission-critical effort.
As the quantum clock ticks down (and it may tick faster unexpectedly), those organizations that treated this as a strategic marathon – and steadily made progress – will be glad they did. They’ll be the ones telling their customers and stakeholders, “Don’t worry, we’re ready for the quantum age,” while latecomers scramble.
(I explore these points, and much more, in my upcoming book, “Practical Quantum Resistance”. If you’re looking for clear strategies and actionable guidance to get your organization quantum-ready, head over to QuantumResistance.com and sign up to stay in the loop!)
© 2025 Applied Quantum. All rights reserved