Gate-Based (Circuit) Model vs. One-Way (Cluster) Model
In the gate-based approach (used by most superconducting, ion trap, and semiconductor qubit platforms), algorithms are executed by applying a sequence of quantum gates (unitary operations) on qubits, analogous to a classical circuit. This requires the qubits to be maintained coherently for the duration of the circuit and to be able to undergo two-qubit interactions on demand. In photonics, however, implementing two-qubit gates deterministically is notoriously difficult because photons do not naturally interact – one typically needs either special nonlinear materials or measurement-induced effects with ancillary photons, which succeed only probabilistically. The one-way model offers a different route: all entangling operations are done upfront (which can be attempted many times in parallel if probabilistic), and thereafter only single-qubit measurements are needed. Unlike circuit-model machines, one-way quantum computers require no on-the-fly two-qubit gates. This gives one-way computing a potential hardware advantage for photonics: it trades the problem of reliably performing many sequential gates for the problem of preparing a large entangled state once. Additionally, the computational “steps” in one-way computing – single-qubit measurements – are relatively simple and can be fast and high-fidelity, in contrast to complex multi-qubit logic gates.
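To make the measurement-based primitive concrete, the following minimal sketch (plain Python with NumPy; all names are illustrative, not from any particular framework) simulates the elementary one-way step: an input qubit is entangled with a fresh |+⟩ qubit via a CZ "bond", the first qubit is measured in a rotated basis, and the second qubit is left carrying the rotated (and Hadamard-transformed) input state up to a known Pauli byproduct, which is exactly what feedforward would later correct.

```python
import numpy as np

# Single-qubit states and gates
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def Rz(theta):
    return np.diag([1, np.exp(1j * theta)]).astype(complex)

# Controlled-Z on two qubits (the cluster-state "bond")
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Arbitrary input state to be processed by the cluster
psi_in = np.array([0.6, 0.8j], dtype=complex)

# Step 1: entangle the input qubit with a fresh |+> qubit
state = CZ @ np.kron(psi_in, plus)

# Step 2: measure qubit 1 in the rotated basis {|+_theta>, |-_theta>}
theta = 0.7
plus_theta  = (ket0 + np.exp(1j * theta) * ket1) / np.sqrt(2)
minus_theta = (ket0 - np.exp(1j * theta) * ket1) / np.sqrt(2)

for s, bra in enumerate([plus_theta, minus_theta]):
    # Project qubit 1 onto the measurement outcome, keep qubit 2
    proj = np.kron(bra.conj(), np.eye(2))
    out = proj @ state
    out = out / np.linalg.norm(out)

    # Expected result on qubit 2: X^s · H · Rz(-theta) applied to the input
    expected = np.linalg.matrix_power(X, s) @ H @ Rz(-theta) @ psi_in
    expected = expected / np.linalg.norm(expected)

    # Overlap of 1.0 (up to global phase) confirms the teleported rotation
    print(f"outcome s={s}: |<expected|out>| = {abs(np.vdot(expected, out)):.6f}")
```

Chaining such steps along a cluster, with the measurement angles adapted to earlier outcomes, is what makes the model universal.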
Another difference is in how algorithms are designed: gate-model algorithms are sequences of gates, whereas one-way algorithms are typically designed as measurement patterns on a given cluster graph. Mathematically, the two models are equivalent in power (anything doable with gates can be done with a cluster and vice versa), but the resource accounting differs.
In terms of scalability and hardware, gate-based photonic computing (e.g., the original KLM scheme) would require an enormous overhead of additional photons and extremely low-loss circuits to achieve something like error-corrected operation – each two-qubit gate might only succeed a small fraction of the time, so many attempts must be buffered and coordinated. One-way photonic computing shifts this overhead to the initial state: generating a large cluster state may require many photons and entangling attempts, but once it’s generated, using it is straightforward. Researchers often consider the cluster model more natural for optics, since one can entangle a large number of photons (for example, via parametric down-conversion sources or optical combiners) relatively “easily” in parallel, whereas in a gate model you’d have to interact qubits pairwise in series, which is difficult when each interaction is probabilistic. Indeed, a photonics team at RIKEN highlighted that entangling a large number of optical modes is in some ways easier than scaling up solid-state qubits, making measurement-based optical computers potentially more scalable in qubit count. On the other hand, keeping all those photons coherent at once can be challenging due to loss – it’s a trade-off.
From the perspective of fault tolerance, both modalities can achieve it but by different means. Gate-based computers typically employ quantum error-correcting codes (like surface codes) on physical qubits and perform syndrome measurements periodically using additional qubits and gates. One-way cluster computers can achieve fault tolerance by using special cluster states whose three-dimensional lattice structure corresponds to an error-correcting code (e.g., a 3D cluster implementing a surface code in space-time). In fact, the cluster-state model is very compatible with certain topological error correction schemes: one can build a 3D cluster state where making appropriate measurements is equivalent to performing error correction on a surface code. Photonic implementations may find this approach appealing – instead of a fixed 2D array of qubits doing a surface code, you continuously generate a 3D entangled cluster of photons that embodies a fault-tolerant error-correcting code, and you consume it as you go. Recent research indeed suggests that photonic cluster architectures could allow fault-tolerant operation if loss and error rates are below certain thresholds (on the order of a few percent loss per photon, as discussed later).
In summary, the cluster vs. gate-based question in photonics is about trade-offs: cluster-state computing requires preparing a complex entangled state but then uses only measurements (with feedforward), whereas gate-based computing requires the ability to interact qubits arbitrarily throughout the algorithm. For platforms like superconductors or ions, gate-based is natural because qubits can interact via well-defined couplings; for photons, cluster-state may be more natural because interactions can be shifted to a pre-processing stage. Both are universal, but the hardware demands differ significantly.
Adiabatic/Annealing Model vs. One-Way Model
Adiabatic quantum computing (AQC), together with its practical variant, quantum annealing, is another modality in which, instead of logical gates or measurements, one encodes the problem into a Hamiltonian (an energy landscape) and then slowly evolves the quantum system to find the solution (usually the ground state of that Hamiltonian). The leading example is the D-Wave quantum annealer, which uses hundreds or thousands of superconducting flux qubits to solve optimization problems by energy minimization. In principle, AQC is computationally equivalent to gate-based QC (any gate circuit can be encoded into an adiabatic process), but in practice current annealers are special-purpose – mainly useful for certain optimization or sampling tasks. One-way photonic computers differ fundamentally from annealers in that they are digital and gate-equivalent (they can run arbitrary algorithms with the right measurements), whereas annealers are analog and typically not used for general algorithms like factoring. For instance, a quantum annealer excels at solving Ising model minimizations or finding low-energy configurations, and has been shown to be very fast for certain instances of those problems, but a quantum annealer cannot efficiently run Shor’s algorithm or many other quantum algorithms that require a sequence of logic operations. A photonic cluster-state computer could run Shor’s algorithm (in theory) because it’s a universal quantum computer.
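For a concrete sense of the kind of problem an annealer targets, the toy sketch below (plain Python; the coupling values are made up purely for illustration) brute-forces the ground state of a tiny Ising cost function $$E(s) = \sum_{ij} J_{ij} s_i s_j + \sum_i h_i s_i$$ – the answer a quantum annealer would try to reach by slow evolution. A universal machine (gate-based or one-way) can attack the same problem, but also everything beyond it.

```python
from itertools import product

# Toy Ising instance (couplings/fields chosen arbitrarily for illustration)
J = {(0, 1): 1.0, (1, 2): -0.5, (0, 2): 0.8, (2, 3): -1.2}   # pairwise couplings
h = {0: 0.1, 1: -0.3, 2: 0.0, 3: 0.2}                        # local fields
n = 4

def energy(spins):
    """Ising energy E = sum_ij J_ij s_i s_j + sum_i h_i s_i, spins in {-1,+1}."""
    e = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    e += sum(hi * spins[i] for i, hi in h.items())
    return e

# Brute force is feasible only for tiny n; an annealer tackles the same task
# by slowly deforming a quantum Hamiltonian toward this cost function.
best = min(product([-1, +1], repeat=n), key=energy)
print("ground-state spins:", best, "energy:", round(energy(best), 3))
```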
Another difference is that current annealers require heavy cryogenic analog hardware and are not error-corrected – they rely on an analog process that is somewhat resilient to certain noise, but they cannot correct arbitrary quantum errors and they lack the full capability of a universal gate quantum computer. Photonic one-way computers, by contrast, are being designed with error-correctability in mind (via cluster-based codes), and operate at room temperature (except for possibly the detectors). In terms of use cases, annealing is great for optimization problems (e.g., scheduling, route planning, some machine learning tasks) – think of it as a quantum solver for specific math problems via energy minimization. One-way photonic computing would be aimed at the broad class of problems quantum computers can tackle, including those same optimization problems (via algorithms like QAOA or Grover’s algorithm) and also problems like factorization, quantum simulation of physics/chemistry, etc., which annealers can’t efficiently do. From a photonics perspective, one could build a photonic quantum annealer – for example, using coherent optical parametric oscillators to find Ising ground states (there have been experiments along these lines). But those devices, while “photonic quantum” in some sense, are not cluster-state or MBQC devices; they implement a different algorithmic approach. A photonic cluster-state computer is more akin to a universal gate-based machine in capability, except it uses measurements instead of gates.
In short: quantum annealing is a specialized analog modality (exploiting gradual evolution to a solution), whereas cluster-state computing is a universal digital modality (exploiting entanglement and measurement). Each faces different challenges: annealers face difficulty in scaling to problems beyond optimization and in satisfying the adiabatic condition, while cluster-state devices face the difficulty of generating and maintaining large entangled states with feedforward. They are complementary in the quantum ecosystem. It’s worth noting that they could even work together – e.g., a cluster-state computer could potentially simulate an annealing process or vice versa, since both are quantum models. But practically, if one’s goal is to solve generic problems or break cryptography, an annealer won’t suffice – one needs a universal QC, which the cluster-state machine is. Conversely, if one is only interested in, say, solving an optimization problem faster, a dedicated annealer might reach useful scale sooner than a general photonic QC because it has simpler goals.
In summary, compared to other modalities: Photonic cluster-state computing stands out by offering universality (like gate-based QCs) with an operational mode tailored to photons (parallel entanglement + measurements instead of sequential gates). It potentially avoids the need for complex two-photon gates by front-loading entanglement, thus playing to photonics’ strengths. Against gate-based machines, it promises easier physical operations (measurements) at the cost of a complex initial state; against annealers, it offers a fully programmable algorithmic range (not limited to optimization). As a result, many see photonic cluster-state computing as a viable path to a large-scale quantum computer that leverages the natural advantages of optics while mitigating some optical disadvantages through clever design. The true test will be scaling: can we generate and manage the huge cluster states needed for real-world algorithms faster than other platforms can implement long gate sequences or adiabatic evolutions? Ongoing research and development (discussed next) aim to answer that.
Current Development Status
Research and development in photonic cluster-state computing have advanced significantly, with both academic and industry efforts pushing toward larger and more reliable photonic quantum processors. Here we highlight the current status and recent progress in this field, including scalable cluster-state generation and the approaches of major players like PsiQuantum and Xanadu:
Scaling Up Cluster States
A central challenge has been to create cluster states large enough (and with high enough quality) to perform useful computations. Early experiments entangled on the order of 4–8 photons in small clusters or graph states. Today, that scale is being vastly extended. In the discrete variable domain (photons as qubits), one avenue is to use integrated photonic circuits to stabilize and interferometrically combine photons. For example, four-photon cluster states and eight-qubit graph states have been generated on a silicon photonic chip, demonstrating that on-chip sources and waveguide circuits can reproduce what previously required bulk optics. Integrated platforms improve stability and enable potentially hundreds of components (sources, beam splitters, phase shifters) to work together, which is crucial for scaling.
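As a concrete picture of what these experiments produce, the sketch below (NumPy; purely illustrative, not tied to any specific experiment) builds a small linear cluster state by applying CZ gates between neighboring qubits prepared in |+⟩ and then checks the defining stabilizer property $$K_i = Z_{i-1} X_i Z_{i+1}$$, with $$K_i|C\rangle = |C\rangle$$ for every qubit.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def kron_all(ops):
    return reduce(np.kron, ops)

def cz(n, a, b):
    """Controlled-Z between qubits a and b on n qubits (diagonal matrix)."""
    diag = np.ones(2 ** n, dtype=complex)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[a] == 1 and bits[b] == 1:
            diag[idx] = -1
    return np.diag(diag)

n = 4
# |+>^n, then CZ on each nearest-neighbor pair -> linear cluster state
state = kron_all([plus] * n)
for i in range(n - 1):
    state = cz(n, i, i + 1) @ state

# Check stabilizers K_i = Z_{i-1} X_i Z_{i+1} (neighbors outside the chain omitted)
for i in range(n):
    ops = [I2] * n
    ops[i] = X
    if i - 1 >= 0:
        ops[i - 1] = Z
    if i + 1 < n:
        ops[i + 1] = Z
    K = kron_all(ops)
    print(f"K_{i} expectation:", np.real_if_close(np.vdot(state, K @ state)))
```

All four expectation values come out as 1.0, which is the operational definition of a cluster (graph) state; the chip-based experiments verify analogous stabilizer correlations on the photons they produce.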
Another approach uses time multiplexing: instead of spatially separate photons, a sequence of time-bin pulses from a pulsed laser and nonlinear crystal can form a cluster by interfering successive pulses. Pioneering experiments in 2015–2020 by groups in Japan and Denmark created continuous trains of entangled pulses forming 1D cluster states and even a 2D cluster state of over 10,000 modes using time and frequency multiplexing of squeezed light. In continuous-variable photonics (where qubits are replaced by modes with continuous quantum variables), very large cluster states have been achieved – Larsen et al. and Asavanant et al. both reported entangling on the order of $$10^4$$ light modes into a large two-dimensional cluster using time-multiplexed squeezed-light sources, beam splitters, and optical delay lines. These results set records for entanglement size, albeit in the CV regime (which is still useful for MBQC, although error correction is different from that for qubit clusters). A major recent milestone on the source side is the development of high-quality single-photon sources that can emit strings of entangled photons. In 2023, researchers demonstrated the deterministic generation of a cluster-state string of photons using a quantum dot in a cavity. Quantum dots (semiconductor nanostructures that behave as artificial atoms) can emit single photons on demand, and by coherently driving a quantum dot and using its excitonic states, the team made it emit photons that were entangled with one another as a one-dimensional cluster (a photonic “string” cluster). This is exciting because it points to a future where a single chip containing quantum dot emitters could spit out a continuous cluster state (like a factory for cluster-state photons), eliminating the probabilistic nature of SPDC (spontaneous parametric down-conversion) sources. Likewise, on the detection end, transition-edge sensors and superconducting nanowire single-photon detectors have reached detection efficiencies above 95%, which helps preserve cluster fidelity when measuring. In essence, the hardware for cluster states – sources, circuits, detectors – is rapidly improving. We now have proof-of-principle machines that can generate entangled photonic states of unprecedented scale (in CV) and demonstrations of on-demand entangled photon streams (in DV).
Integrated Photonics and On-Chip Processors
Integration is a key theme. Companies and labs are building photonic chips that integrate dozens of components to manipulate many photons simultaneously. Xanadu, for instance, demonstrated a programmable nanophotonic chip that injected squeezed light into eight modes and performed a variety of reconfigurable circuits with them. In 2022, Xanadu’s Borealis photonic quantum processor implemented a 216-mode time-multiplexed interferometer (effectively a large cluster of light pulses in time) to perform Gaussian boson sampling, achieving a quantum advantage result with photons that is beyond what classical supercomputers can simulate. While Borealis was not a universal cluster-state computer (it was geared for a specific sampling task), it used technologies directly relevant to MBQC: multiplexed pulsed squeezing sources, fast programmable phase modulators for feedforward, and long optical delay loops – essentially creating a big entangled state of light on the fly. This shows that scalable photonic hardware (with hundreds of modes) is becoming a reality. On another front, researchers have integrated sources of entangled photon pairs, linear optical circuits, and detectors all on one chip, showing that “entanglement-on-chip” is possible. For example, silicon photonics chips have generated four-photon GHZ and cluster states internally. Such integration will be critical for reducing loss and physical size as the system grows.
Approaches of Major Industry Players
Two notable companies focusing on photonic quantum computing are PsiQuantum and Xanadu.
- PsiQuantum (a U.S./UK startup founded ~2016) is explicitly pursuing a photonic cluster-state, measurement-based architecture for a fault-tolerant quantum computer. Their approach uses silicon photonics – basically, optical waveguide circuits fabricated in a conventional semiconductor fab – to integrate thousands of photonic components on each chip. PsiQuantum’s design calls for a very large number of qubits (they often quote a goal of one million physical photonic qubits), because photonic qubits are fast and can be generated in large numbers to offset probabilistic losses. They have proposed a “fusion-based” quantum computing architecture, which is a variant of cluster-state MBQC where small entangled resource states (like 3- or 4-photon states) are continually generated and then fused (entangled) together by projective measurements to build up a large 3D cluster for computation. In 2022, PsiQuantum announced a theoretical breakthrough in the efficiency of compiling fault-tolerant circuits on a photonic architecture, claiming a 50× reduction in overhead for certain algorithms. They also published a detailed blueprint of their machine, emphasizing that silicon photonics is the only way to scale beyond one million qubits for a fault-tolerant universal quantum computer. This is backed by the idea that photonic chips can leverage existing semiconductor manufacturing to achieve volume and precision. PsiQuantum, in partnership with GlobalFoundries, is currently developing the necessary technology: single-photon sources, detectors, and low-loss waveguide circuits all integrated. The timeline is ambitious – PsiQuantum aims to build a useful, error-corrected quantum computer by the late 2020s, potentially around 2027. While this is a bold target, it gives a sense of their confidence in the cluster-state photonic approach. They’ve also entered Phase 2 of DARPA’s quantum computing program, which supports under-explored approaches (like photonics) to achieving utility-scale quantum computers. In short, PsiQuantum’s development strategy is to combine advanced photonic hardware with the cluster-state MBQC model and topological error correction (surface codes implemented via a 3D cluster) to reach a large-scale machine. They are all-in on photonic cluster states: one of their founders (Terry Rudolph) wrote “Why I am optimistic about the silicon-photonic route to quantum computing” – reflecting the company’s belief that photonics and cluster states are the path to scalability.
- Xanadu (a Canadian company) takes a slightly different angle, focusing on continuous-variable (CV) photonics and Gaussian boson sampling as near-term targets, but with an eye toward universal quantum computing via cluster states as well. Xanadu’s hardware is based on squeezed light pulses (continuous-variable modes, or “qumodes”) that are interfered in a reprogrammable fiber-loop circuit. Their recent Borealis machine is essentially a one-way quantum processor: it creates a large entangled state of up to 216 modes by injecting squeezed-light pulses and using beam splitters and phase shifters in the time domain, and then measurements (photodetection) are performed on that state to sample from its distribution. While Borealis was used to perform a specific task (boson sampling), the underlying technology – time-multiplexed cluster state generation with feedforward – is a stepping stone to a fully programmable CV cluster-state quantum computer. Xanadu has also been developing error-correctable CV qubit encodings (so-called GKP states) that could be incorporated into cluster states for fault tolerance, although that’s still a challenging goal. On the software side, Xanadu’s Strawberry Fields platform and PennyLane interface allow programming photonic circuits, which will be applicable once cluster-based photonic QPUs come online. The company has publicly stated goals of building a modular photonic quantum computer and has demonstrated key components like high-efficiency photon-number-resolving detectors and ultrafast optical switches. They, along with academic collaborators, are exploring hybrid approaches too – e.g., using photonic cluster states to interface with memory qubits or to create repeater networks. Xanadu’s timeline for a fault-tolerant device isn’t as explicitly stated as PsiQuantum’s, but their achievements (like the advantage demonstration) show steady progress in scaling photonic systems.
- Other Notable Efforts: Aside from PsiQuantum and Xanadu, several academic and industrial groups worldwide are developing photonic quantum computing in various forms. NTT in Japan and the University of Tokyo (Furusawa’s group) have been leaders in optical cluster states and feedforward control – Furusawa’s lab demonstrated one of the first large-scale time-multiplexed cluster states and continues to push CV cluster experiments, as evidenced by their nonlinear feedforward demonstration for universal QC in 2023. In China, the USTC group (Jian-Wei Pan) built the Jiuzhang photonic processors that achieved quantum supremacy in boson sampling, and they are also investigating photonic gates and small cluster states on silicon chips for more general tasks. There are also startups like QuiX in Europe focusing on photonic chips, and academic consortia exploring quantum repeaters that involve photonic cluster states to correct for losses in quantum communication. Governments are investing in photonic approaches through programs like the UK’s quantum computing initiative (which funds photonic integrated quantum circuits) and the EU’s Quantum Flagship (with projects on optical quantum computing). The U.S. National Quantum Initiative also includes photonics as a key area, evidenced by funding for labs developing photonic quantum interconnects and the DARPA program mentioned earlier.
- Status of Quantum Volume/Capability: As of 2024, photonic quantum computers (cluster-state or otherwise) are still in the demonstration phase, not yet at the point of outperforming classical computers on useful tasks (except for specialized tasks like boson sampling). However, the quantum volume – a measure combining qubit number and fidelity – of photonic devices is rapidly growing. The continuous-variable cluster experiments have extremely large mode counts but lower effective fidelity per mode; the discrete photonic experiments have high fidelity but lower counts. Bridging those will be important. One notable achievement is that photonics holds the record for largest entangled state by number of modes (tens of thousands of modes entangled), though each mode had limited quality. Meanwhile, small-scale photonic processors have achieved programmable operations on ~10 qubits with decent fidelity, though not yet as high as superconducting qubit devices. The next big milestones that the community is aiming for include: demonstrating a logical qubit (error-corrected qubit) in a photonic platform, demonstrating moderate-size algorithms (like variational algorithms or quantum simulations) on a photonic testbed, and increasing the size of cluster states while keeping loss and error low. Given the trajectory, we expect to see photonic cluster-state computers with on the order of 50–100 effective qubits (or modes) and some form of error mitigation within a couple of years, and then pushing into the hundreds with error correction in the later 2020s.
In summary, the development status of photonic cluster-state computing is that of an emerging technology transitioning from fundamental research to engineering. Major progress has been made in generating larger entangled states (especially via time multiplexing and integrated optics) and in envisioning scalable architectures. Companies like PsiQuantum and Xanadu are heavily invested in solving the remaining challenges (loss reduction, source and detector improvement, and error correction integration). We have seen “quantum advantage” experiments with photonic systems (albeit not full cluster-state computers yet), indicating the raw potential of photonics. The field is now moving toward turning these photonic entanglement capabilities into fully programmable computers. If current trends continue, the coming years will likely bring photonic demonstrations of small-scale algorithms running in the cluster model, increased qubit counts, and eventually the incorporation of fault-tolerant techniques that are necessary for useful, large computations.
Advantages
Photonic cluster-state computing offers several compelling advantages that stem from the physical properties of photons and the nature of the one-way model. These advantages make the approach attractive for large-scale quantum computing and quantum network integration:
Room-Temperature Operation
Photons can be used as qubits at room temperature – unlike many matter-based qubits (superconducting circuits, trapped ions, etc.) that require cryogenic or ultra-high vacuum environments. Optical systems suffer virtually no thermal decoherence because photons carry no charge and, at optical frequencies, thermal excitation is negligible at room temperature. This means photonic quantum hardware can, in principle, be operated in normal laboratory (or even data center) conditions. The freedom from dilution refrigerators or ion traps is a huge practical advantage for scaling up: it simplifies infrastructure and allows the possibility of leveraging existing telecom and semiconductor tech (which all run at ambient temperatures). Note that certain photonic components like single-photon detectors might be superconducting and cooled, but these can be small components (e.g., on the periphery of a photonic chip or fiber network) – the core photonic processing can remain at room temperature. Room-temperature operation also facilitates modular expansion – you can envision racks of photonic processors linked by fibers, without specialized cooling for each module.
Low Decoherence and High Stability
Photons interact very weakly with the environment. Once a photon is created in a given quantum state (say polarization or time-bin), it can maintain its quantum coherence over long distances and times, as long as it isn’t absorbed. There is no equivalent of “phase flip” noise from fluctuating fields that, for example, plague superconducting qubits – a photon in free space or a low-loss fiber can keep its quantum state essentially unchanged for kilometers. Optical fiber losses are on the order of 0.2 dB per kilometer for telecom wavelengths, and photons do not experience “memory” effects: they either get lost or they don’t; if not lost, their quantum state is nearly perfectly preserved aside from predictable phase shifts. This means that photonic qubits have extremely high intrinsic coherence times (limited effectively by how long you can keep the photon trapped or delay it – potentially milliseconds or more, which corresponds to hundreds of kilometers of travel). Additionally, photonic operations like linear optical transformations (phase shifters, beam splitters) can be performed with very low noise and error. In fact, photonic platforms have demonstrated error rates below $$10^{-5}$$ in certain operations, which is orders of magnitude lower than the error rates in today’s superconducting or ion trap gates (typically $$10^{-3}$$ to $$10^{-2}$$). Such low error rates are partly because photons don’t suffer from effects like coupling to stray two-level systems or motional decoherence – if your optics are stable and your detectors are low-noise, the only significant error is an occasional photon loss or dark count. The combination of long coherence and low operational error bodes well for eventually achieving fault tolerance. Lower error-correction overheads may be needed if each photonic operation is extremely clean (though loss is still a main challenge). The bottom line is that photonic qubits are highly reliable carriers of quantum information in terms of coherence.
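A quick, hedged calculation makes the fiber figure tangible: with the commonly quoted ~0.2 dB/km attenuation of telecom fiber, a photon survives a link of length L with probability $$10^{-0.2L/10}$$. The sketch below (plain Python, illustrative numbers only) simply evaluates that formula for a few distances.

```python
# Photon survival probability through fiber with attenuation alpha (dB/km):
# T(L) = 10 ** (-alpha * L / 10)
ALPHA_DB_PER_KM = 0.2   # typical telecom-band figure quoted in the text

def transmission(length_km, alpha_db_per_km=ALPHA_DB_PER_KM):
    return 10 ** (-alpha_db_per_km * length_km / 10)

for L in [1, 10, 50, 100, 200]:
    print(f"{L:4d} km -> survival probability {transmission(L):.4f}")
# 50 km -> ~0.10, 100 km -> ~0.01: long links lose most photons to attenuation,
# but the photons that do survive arrive with their quantum state intact.
```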
Natural Networking and Distribution
Photons are mobile qubits. They inherently propagate at the speed of light and are the ideal choice for transmitting quantum information over distance. This makes photonic cluster-state computers a perfect fit for quantum networks and distributed computing. A photonic quantum processor can easily send qubits to another processor or receive qubits from remote sources, simply using optical fiber or free-space links. There is no need for special transduction to a communication medium – the computing qubits are themselves the flying qubits. This gives photonic systems an edge in scalability across distance: multiple photonic modules can be connected into a larger machine with minimal link overhead. Moreover, photons can connect different types of quantum systems (they can interact with atoms, quantum dots, NV centers, etc., serving as a universal communicator), so a photonic cluster-state computer could readily be part of a hybrid quantum architecture where it links to, say, ion-trap memory nodes or superconducting processors via optical interfaces. In the context of cybersecurity (discussed later), the ability to integrate with quantum communication (QKD) infrastructure is a big plus – a photonic quantum computer can exchange entanglement or quantum keys with distant parties by the same mechanisms as it uses internally. Also, the measurement-based model lends itself to networking: one can perform blind quantum computing or secure delegated computing by sending photons to a server. All of this is enabled by the fundamental fact that photons can go long distances with little decoherence and only modest loss (e.g., hundreds of kilometers in fiber, or even to satellites in space). In short, photonic cluster-state devices are communication-ready by design.
Ultra-Fast Operations and Parallelism
Photonics operates at the speed of light – literally. Gates and measurements in optical systems can be extremely fast (picosecond or nanosecond scale) because they’re often limited only by how quickly you can modulate or detect light. This means a photonic quantum computer can have a very high clock speed in principle. For instance, single-photon detectors can operate at up to gigahertz rates, and electro-optic modulators can adjust measurement bases on nanosecond timescales, allowing nanosecond-scale feedforward latency as demonstrated in some experiments. In addition, the cluster-state model inherently supports parallel operations: many photons (qubits) in the cluster can be measured simultaneously if their measurements do not depend on each other’s outcomes. In a 2D cluster, one can often measure an entire layer of qubits in parallel, which corresponds to executing many gates at once (this is analogous to doing a whole layer of a quantum circuit in one go, thanks to the pre-entanglement). Because of this, a photonic one-way computer could leverage massive parallelism – potentially performing thousands of single-qubit measurements concurrently, something not easily done in other architectures that have more sequential gating constraints. The combination of fast per-operation time and parallelism means high throughput. For example, one proposal is a “conveyor belt” of photonic cluster state that rolls out continuously, so that while one part of the cluster is being measured (computing), the next part is being prepared. This pipelining, together with parallel measurement, could allow photonic quantum computers to execute quantum circuits with millions of gate operations effectively in a very short physical time. Another aspect of parallelism is in state preparation: one can have many photon pair sources generating entanglement simultaneously to build different parts of the cluster, rather than one central operation that has to entangle qubits one pair at a time. All of these factors contribute to the belief that photonic cluster-state QCs might be extremely fast if they can be built, surpassing other technologies in operations per second. (Some caveats: detector dead times and data processing could slow things, but advanced multiplexing and classical hardware can mitigate that.)
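The parallelism argument can be phrased as a simple scheduling problem: any cluster qubit whose measurement basis does not depend on still-unmeasured outcomes can be measured in the current round. The hedged sketch below (plain Python; the dependency structure is a made-up example, not a real measurement pattern) groups measurements into rounds by that rule, which is essentially how the depth of a measurement pattern is counted.

```python
# Toy feedforward-dependency graph: deps[q] = set of qubits whose outcomes
# must be known before qubit q's measurement basis can be fixed.
# (Structure is illustrative only.)
deps = {
    0: set(), 1: set(), 2: set(), 3: set(),      # first layer: no dependencies
    4: {0}, 5: {1}, 6: {2}, 7: {3},              # second layer
    8: {4, 5}, 9: {6, 7},                        # third layer
    10: {8, 9},                                  # final readout
}

def schedule(deps):
    """Group qubits into rounds; each round can be measured in parallel."""
    remaining, done, rounds = dict(deps), set(), []
    while remaining:
        ready = [q for q, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("circular dependency in measurement pattern")
        rounds.append(sorted(ready))
        done.update(ready)
        for q in ready:
            del remaining[q]
    return rounds

for r, qs in enumerate(schedule(deps)):
    print(f"round {r}: measure qubits {qs} in parallel")
# Eleven measurements collapse into four parallel rounds in this toy example.
```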
Scalability via Modular and Mass-Manufacturable Components
Photonics benefits from the mature fabrication technologies of the telecom and silicon photonics industries. It’s plausible to leverage semiconductor foundries to manufacture photonic chips with waveguides, modulators, and detectors at scale. This mass-manufacturability means that once a design is proven, scaling to more qubits is mainly an engineering replication task rather than a matter of hand-crafting each qubit. This is an advantage over, say, superconducting qubits, which are also fabricated en masse but face increasing control wiring challenges as numbers grow. Photonic chips can carry many channels of light with relatively minimal crosstalk. Additionally, scaling in photonics can be modular: one can imagine adding more identical photon sources to increase the cluster size, or adding more detection channels to handle more qubits. Each module (say a chip with 1000 sources) could be identical, which lends itself to easier scaling. PsiQuantum’s emphasis on silicon photonics is exactly to exploit this – they argue it’s the only technology that can integrate a million qubits on a reasonable number of chips, given the tiny size of waveguide devices and the capabilities of photolithography. Also, because photons don’t directly interact, having more photons in the system doesn’t necessarily complicate control in the way adding more superconducting qubits does (where crosstalk and frequency collisions become an issue). In a photonic cluster, qubits interact only via the predefined entanglement connections. This can make architectural scaling more straightforward: one can move to 3D cluster states for fault tolerance by conceptually just increasing the graph connectivity without worrying about physical cross-talk between qubits – the entangling operations are all mediated by interference, which can be localized. In summary, photonic cluster-state systems have a clear path to scale up using existing manufacturing and by leveraging the natural parallelism and modularity of optical networks. This contributes to optimism that once the current hurdles (like loss) are solved, scaling to very large numbers of qubits (millions) might be more feasible in photonics than in other platforms.
Compatibility with Fault-Tolerant Schemes
The structure of cluster-state computing is well-suited to implementing certain error-correcting codes, especially topological codes. A prime candidate for fault tolerance is to use a large three-dimensional cluster state that encodes a surface code (a leading quantum error correction code) in its entanglement structure. Photonic cluster states can be built to embody this 3D structure (often visualized as a stack of 2D cluster layers linked in a third dimension). Once built, error correction in the one-way model is performed by measuring the cluster in particular ways to detect and correct errors. The advantage is that error correction becomes a part of the measurement pattern – no separate “syndrome extraction qubits” are needed as in circuit models; they are effectively built into the cluster connectivity. This could make fault tolerance more resource-efficient. Moreover, since photons naturally have low error rates for bit-flip/phase-flip, the main errors to correct are losses. Topological codes can be adapted to handle loss (erasure errors) up to a threshold (several percent). Photonic implementations like fusion-based quantum computing (FBQC) explicitly design the architecture to tolerate a certain loss rate while still creating a logical cluster suitable for fault-tolerant computation. Recent studies indicate that a loss rate on the order of 2-3% per photon can be tolerated in a 3D cluster state with appropriate encoding, meaning if each photonic qubit and fusion has a 97–98% chance of success, the entire computation can still be error-corrected. The current state-of-the-art in integrated photonics and detectors is approaching that ballpark (for instance, single-photon sources with 99% fidelity and detectors with >98% efficiency are on the horizon). Thus, photonic cluster-state computers are on a trajectory to meet the requirements for fault tolerance. Once they do, all the aforementioned advantages (room temperature, networking, speed) can be enjoyed at scale without being negated by errors. In essence, the cluster-state model plus photonics is one of the promising avenues to achieve a fault-tolerant, scalable quantum computer, as has been argued in several reviews. If successful, it would mean millions of operations could be performed reliably by encoding a logical qubit in many photons and continuously “feeding” the cluster with fresh entanglement as needed (active error correction in the one-way picture involves extending the cluster to fix errors).
In summary, photonic cluster-state computing’s advantages include: the ability to operate and interconnect at room temperature; very low intrinsic error rates and long coherence (photons don’t easily decohere); native integration with communication networks (making them ideal for distributed computing and cryptography applications); potential for extremely fast and parallel operation (light-speed gates and many simultaneous measurements); and a pathway to scaling via chip-based integration and compatibility with powerful error correction techniques. These strengths underpin the strong interest in photonic approaches – if the technical challenges can be overcome, a photonic cluster-state quantum computer could be a workhorse for quantum computing, potentially linking across a quantum internet and performing computations at speeds and scales difficult to match by other qubit technologies.
Disadvantages
Despite its promising features, photonic cluster-state computing also faces several significant challenges and disadvantages that researchers are actively working to address. These include fundamental issues with photonics as a platform as well as practical engineering difficulties in creating and handling large cluster states:
Probabilistic Entanglement and Photon Sources
A core challenge in optical quantum computing is that, with current technology, generating entanglement between photons is often probabilistic. In linear optics (using beam splitters and phase shifters with no optical nonlinearity), two photons entering a device do not deterministically entangle – one typically relies on measurement-induced entanglement (such as the fusion gates or post-selected interference outcomes). For example, the Browne-Rudolph fusion gate succeeds only 50% of the time in entangling two cluster fragments (and fails otherwise, though heralded by a measurement result). Similarly, spontaneous parametric down-conversion (SPDC), the workhorse for creating entangled photon pairs, produces photons at random times – it’s probabilistic whether you get a pair in a given pump pulse. This probabilistic nature means scaling up requires massive parallelism or multiplexing to compensate for low success probabilities, which adds complexity. If one tries to entangle photons sequentially and each attempt has, say, a 50% chance, the probability of success drops exponentially with the size of the cluster if done naively. One can employ strategies (like having many SPDC sources and switching networks to pick out successfully created photons) but that greatly complicates the system. In short, until truly deterministic photon-photon entangling operations are available, photonic cluster state generation is tricky and resource-intensive. This is a disadvantage compared to, e.g., ion traps where any two ions can be entangled on command with a laser gate. Single-photon sources themselves, if based on SPDC, suffer from the fact that they sometimes produce 0 or 2 photons instead of exactly 1, introducing loss or contamination. Recent quantum dot sources are improving this (bringing multi-photon emission rates down and indistinguishability up). The community is pursuing multiplexed SPDC sources where many SPDC processes are attempted in parallel and one successful pair is picked – this has yielded sources with high single-photon probability and low multi-photon noise, but at the cost of a lot of hardware. Overall, the lack of a straightforward, high-rate deterministic entangler is a disadvantage that photonics has to overcome. There is optimism though: as noted, experiments have shown deterministic entangled photon emission from quantum dots, and Rydberg atomic ensembles have created photonic entanglement on demand in small cases. But until these are fully integrated, current photonic entanglement generation remains partially stochastic.
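To see why multiplexing matters, consider a chain of n photons joined by fusions that each succeed with probability p ≈ 0.5. Built naively in series, every bond must succeed, so the yield is $$p^{\,n-1}$$; with k parallel attempts per bond plus switching, each bond succeeds with probability $$1-(1-p)^k$$. The sketch below just evaluates those two expressions (a back-of-envelope model with illustrative n and k, ignoring loss and switch imperfections).

```python
# Back-of-envelope yield of an n-photon linear cluster built from fusions.
p = 0.5          # success probability of a single fusion attempt (text figure)
n = 20           # target cluster length (illustrative)
k = 10           # parallel (multiplexed) attempts available per bond

naive_yield = p ** (n - 1)                       # every bond succeeds first try
bond_success = 1 - (1 - p) ** k                  # at least one of k attempts works
multiplexed_yield = bond_success ** (n - 1)

print(f"naive serial yield for n={n}:         {naive_yield:.2e}")
print(f"per-bond success with k={k} attempts:  {bond_success:.6f}")
print(f"multiplexed yield for n={n}:           {multiplexed_yield:.4f}")
# ~2e-06 versus ~0.98: multiplexing trades hardware count for near-unit success.
```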
Photon Loss
Loss is the bane of all optical quantum systems. A lost photon means a loss of a qubit from the cluster – effectively an error that can be more damaging than a simple bit-flip. If a photon in a cluster state is lost (e.g., absorbed in a waveguide or misses a detector), the entanglement bonds to that photon are broken, which can fragment the cluster or remove a logical qubit if not handled. Loss accumulates with system size. For instance, sending photons through many meters of waveguides, beam splitters, and other components will inevitably result in some absorption or scattering loss at each element. Even a 0.5% loss per optical component can become significant when a photon passes through dozens of components. Detector inefficiency is another form of loss (not clicking on a photon is equivalent to that photon being lost to the computation). The cluster-state model can tolerate some loss if using certain codes, but only up to a threshold (a few percent). Right now, loss in photonic systems is on the edge of that threshold: fiber and waveguide losses are low but nonzero, and while some integrated photonics boast <0.1% loss per cm of waveguide and ultra-high efficiency detectors (~98%), things like coupling loss (getting photons on and off chip) and source inefficiencies effectively count as lost photons. Managing loss requires extreme optimization of optical components: using ultra-low-loss materials, high-quality anti-reflection coatings, superconducting detectors with >95% efficiency, etc. It’s a significant engineering challenge to build a large optical circuit (with, say, thousands of elements) and keep total loss below a few percent. Photon loss not only reduces the success probability of the computation (since a critical photon might be missing) but also complicates entanglement generation (many attempts are needed to get all photons through). Approaches to mitigate loss include: building in redundancy (using entanglement purification or making the cluster state larger than needed so that some photon loss can be tolerated via error correction), and using heralding (detecting early if a photon is lost and attempting to replace it or adapt). However, heralding typically requires having quantum memories or delay lines to temporarily store photonic qubits, which are themselves challenging. Fusion-based architectures partly address loss by design: since small resource states are generated independently, if one is lost you only discard that small piece and try again, rather than losing a whole half-constructed cluster. Nonetheless, photon loss remains one of the toughest disadvantages for photonic quantum computing. It essentially sets a limit on size until error correction is in place. If, for example, your cluster uses 1000 photons and you have 5% loss per photon, on average 50 photons will be lost – too many for the computation to be reliable unless error-corrected. So, pushing losses down is an ongoing battle. That said, remarkable progress is being made: new waveguide fabrication yields losses less than 0.1 dB/m, and novel detector technologies approach 99.9% efficiency. These improvements will need to keep pace as systems scale. Until then, loss is a key disadvantage compared to, say, solid-state qubits which don’t “drop out” of the device (they have other errors, but they don’t vanish; losing a trapped ion is extremely rare whereas losing a photon is common if not engineered against).
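The compounding of small per-component losses is easy to quantify. Under the crude assumption of independent losses, a photon traversing m components each with loss ℓ survives with probability $$(1-\ell)^m$$, and an N-photon cluster with per-photon loss L loses N·L photons on average. The sketch below (plain Python) works through the numbers used in the text.

```python
# Compounding of per-component loss (crude independent-loss model).
loss_per_component = 0.005     # 0.5% per element, as in the text
for m in [10, 50, 100, 200]:
    survival = (1 - loss_per_component) ** m
    print(f"{m:3d} components -> photon survival {survival:.3f}")

# Expected photon losses in a large cluster (text example: 1000 photons, 5% loss)
n_photons, per_photon_loss = 1000, 0.05
print("expected lost photons per shot:", n_photons * per_photon_loss)
# ~50 losses per shot: far too many unless the cluster encodes an
# error-correcting code whose loss threshold exceeds the physical loss rate.
```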
Complexity of Large-Scale Cluster Creation
Even if loss and source issues are managed, the task of creating a large cluster state (say with millions of entangled photons) is daunting. The one-way model conceptually assumes we have this big cluster ready to go. But practically, building that state is a huge orchestration problem. It’s not enough to have good sources and low loss; one also needs to synchronize photons, ensure they interfere coherently, and manage a potentially massive physical apparatus. There are two main approaches:
- Spatial approach: Have many photons generated simultaneously from many sources and interfere them across a large optical network. This might require a physical optical circuit with hundreds of beam splitters and phase shifters arranged to entangle photons in a 2D lattice pattern. Stabilizing such a large interferometer (keeping phase stability across all paths) is extremely challenging. Any vibration or drift can break the delicate interference conditions. Additionally, manufacturing variations mean every beam splitter might need tuning. So a static spatial cluster generator of large size is very complex.
- Temporal approach: Use fewer components and recycle them over time by storing photons in delay lines (fibers or loops) and mixing new photons with stored ones to build entanglement step by step. This greatly reduces hardware count (since a single interferometer can entangle many sequential photons), but it introduces the need for memory or storage of photons. Long optical delay lines can themselves introduce loss and require maintaining coherence (which means low phase noise lasers, etc.). The temporal approach is how the CV cluster states were made (with fiber loops), and it worked impressively, but scaling further might need even longer or more loops, or optical switches routing pulses around – all adding complexity. (A toy sketch of this loop-based entangling pattern appears below, after this list.)
In both cases, control complexity grows: you may have hundreds of phase actuators that need real-time adjustment, thousands of time slots to manage, and signals to route to detectors in sync. The classical control and synchronization of so many photonic qubits is non-trivial. In contrast, a superconducting qubit chip has a few hundred control lines at most and everything is on one chip; an optical system could easily have thousands of components spread over meters. There is also the matter of cluster-state bookkeeping: for a given algorithm, you need to entangle photons in a specific graph structure. Reconfigurability is a challenge; either your hardware is hardwired to one type of cluster (like a 2D grid), or you need a dynamically configurable entangling network, which adds another layer of control complexity. Overall, assembling a scalable, reliable, on-demand entangled state of many photons is a bit like trying to choreograph a million dancers in the dark – any misstep (loss, timing error, phase slip) can throw off the whole performance. This is arguably harder in practice than controlling a million superconducting qubits on a chip, because at least those qubits sit nicely in a defined array with electrical control. In photonics, you have to manage flying qubits. Some experts point out that while photonics sidesteps some challenges, it replaces them with a “different kind of hardness” in orchestrating big optical networks. That said, photonic engineers are countering this by pushing integration: putting as much as possible on chips to reduce drift and size, using automated calibration algorithms to stabilize phases, and leveraging multiplexing to reduce the total number of components needed by reusing them. Even so, as of 2023 no experiment had entangled more than on the order of ten individual photons in a freely programmable way (though larger fixed cluster states exist in CV). The step up to hundreds or thousands of photons is still ahead and is recognized as a major undertaking.
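A rough way to picture the temporal approach: if each incoming pulse is interfered (entangled) with the pulse stored one loop round-trip earlier, the time bins form a 1D chain; adding a second, longer loop of N round-trips adds “rungs” that close the chain into an N-wide lattice. The toy sketch below (plain Python; a drastic simplification of real time-multiplexed architectures, which use more involved mode structures) just enumerates those graph edges.

```python
# Toy model: graph edges created by entangling each time-bin pulse with the
# pulses delayed by the loop lengths (in units of the pulse period).
def temporal_cluster_edges(num_pulses, loop_delays):
    edges = []
    for t in range(num_pulses):
        for d in loop_delays:
            if t - d >= 0:
                edges.append((t - d, t))   # pulse t entangled with pulse t-d
    return edges

# One short loop (delay 1) -> 1D chain; adding a loop of delay N -> N-wide lattice
N = 4
edges = temporal_cluster_edges(num_pulses=16, loop_delays=[1, N])
print(f"{len(edges)} entangling operations, all performed at one physical location")
print("sample edges:", edges[:8])
# The same beam splitter is reused every pulse period, so the hardware count
# stays fixed while the entangled state grows with the number of time bins.
```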
Detection and Feedforward Latency
Another practical challenge is the requirement of ultrafast detection and feedforward for one-way computing. In cluster-state computation, once you measure a photon, you might need to adjust the basis of another measurement that could be happening very soon after. If your photons are spaced, say, by a few nanoseconds in a pulsed system, you need your single-photon detector to produce a signal, and your control logic to compute the new setting and apply it to a modulator, all within a few nanoseconds – that’s extremely demanding for electronics (it borders on the limit of microwave or digital logic speeds). In many experiments, to simplify, researchers delay the next photons a bit to allow time for feedforward. For instance, they might send photons through a long fiber delay so that by the time the next one reaches the measurement device, the previous measurement’s outcome has been processed. Using fiber delays for timing is fine on a small scale, but in a large system it means you need potentially very long delays (which means large loops of fiber, which means more loss and complexity). Alternatively, one must develop specialized low-latency electronics or optical processing. The RIKEN group, for example, implemented feedforward in the optical domain for some continuous-variable operations (they converted the measurement result to an optical modulation almost instantly). Others use fast FPGA boards and keep photons on delay lines until the FPGA outputs a control signal (like a Pockels cell switching within tens of nanoseconds). All this feedforward machinery is an overhead that circuit-based systems don’t face to the same extent – in a circuit model, gates are just applied in sequence by a central controller at a rate that qubits can handle (usually microseconds, much slower). Photonic one-way computation demands more real-time, high-speed classical control which is a disadvantage in complexity. If the feedforward timing isn’t met, the algorithm might have to pause (which for flying photons could mean you lose them unless you have a buffer). This is why some envision using quantum memory crystals or looped circuits to temporarily park photons – but quantum memories for photons (other than delay loops) are still not very efficient or fast. In summary, achieving synchronous, low-latency control in a photonic cluster computer is an additional hurdle not present in static multi-qubit arrays. It’s being worked on (with advances in opto-electronics), but remains a system-level complexity and a source of possible error (timing jitter or gate signal errors).
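The cost of feedforward latency can be estimated directly: light in silica fiber travels roughly 0.2 m per nanosecond (refractive index ≈ 1.47), so every nanosecond of detection-plus-logic latency must be matched by about 20 cm of delay fiber, which in turn adds loss. A quick hedged calculation (plain Python, illustrative numbers only):

```python
# How much delay fiber (and extra loss) a given feedforward latency costs.
C_VACUUM_M_PER_NS = 0.2998
FIBER_INDEX = 1.47               # typical silica fiber
FIBER_LOSS_DB_PER_KM = 0.2       # telecom-band attenuation

def delay_fiber(latency_ns):
    speed = C_VACUUM_M_PER_NS / FIBER_INDEX          # ~0.20 m/ns in fiber
    length_m = latency_ns * speed
    loss_db = FIBER_LOSS_DB_PER_KM * length_m / 1000
    survival = 10 ** (-loss_db / 10)
    return length_m, survival

for latency in [5, 50, 500]:                          # ns of detector + logic delay
    length, surv = delay_fiber(latency)
    print(f"{latency:4d} ns latency -> {length:7.1f} m of fiber, "
          f"survival {surv:.5f}")
# Tens of ns are cheap; microsecond-scale latency already costs ~200 m of fiber
# per buffered photon, which is why low-latency electronics matter so much here.
```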
Despite these disadvantages, extensive research is addressing them. For probabilistic entanglement: the development of deterministic photon sources and novel interaction techniques (e.g., Rydberg atomic mediators or integrated photon-photon gates) is ongoing. For photon loss: better materials (like ultra-low-loss silicon nitride waveguides), photon multiplexing, and designing architectures tolerant to some loss (via error correction) are mitigating the issue. For cluster complexity: modular architectures like fusion-based QC break the problem into smaller pieces built in parallel, and sophisticated chip fabrication combined with feedback control can tame large optical circuits. And for feedforward: faster electronics and clever optical tricks (like encoding multiple qubits in one photon’s different degrees of freedom to reduce cross-photon feedforward) can alleviate latency issues.
In summary, the key challenges for photonic cluster-state computing are the “4 Ps”: Photon source quality, Photon loss, Parallel complexity, and Post-measurement (feedforward) timing. These make building a large-scale photonic quantum computer very challenging. However, none of these challenges are viewed as insurmountable – each has a research roadmap. The next section will discuss how overcoming these challenges is part of the future outlook and what breakthroughs are anticipated.
Impact on Cybersecurity
The advent of large-scale photonic cluster-state quantum computers (or any quantum computers) has profound implications for cybersecurity, both positive and negative. Photonic systems in particular also tie in naturally with quantum communication and cryptography. Here we explore how this technology intersects with cryptographic security:
Enhancing Quantum Cryptography (QKD and beyond)
Photonic cluster-state computers could significantly enhance quantum cryptography techniques such as Quantum Key Distribution (QKD). QKD is a method to share encryption keys with security guaranteed by quantum physics, typically using photons sent over a network. Since photonic cluster-state devices use photons and can easily produce entangled photon pairs or more complex entangled states, they can serve as advanced QKD transmitters/receivers or as entanglement swapping nodes in quantum networks. For example, a cluster-state quantum computer could generate multi-photon entangled states (like a GHZ or cluster state) that enable conference key agreement among multiple parties or device-independent QKD protocols (which require entangled states and Bell tests to ensure security even with untrusted devices). Because photons can travel long distances with little decoherence, a photonic quantum node can directly integrate cryptographic key exchange with quantum computation. One scenario is a quantum network where photonic cluster-state quantum computers at different locations establish secure quantum links (using entanglement distribution) and also perform distributed computing. The cluster-state model actually provides a way to do quantum teleportation of data and even gates, which could be used to send quantum encrypted information between parties. Another concept is quantum secure direct communication, where a pre-shared entangled state (like a cluster) is used to directly transmit a message securely; cluster-state computers could set up and manage such states.
As quantum computers, photonic devices could also run algorithms to improve classical cryptographic protocols – for instance, generating true random numbers for cryptographic keys (photonic systems are excellent random number generators due to quantum measurement outcomes). Overall, photonic cluster-state technology will be a cornerstone in building the Quantum Internet, wherein secure communication (QKD) and quantum computing resources are intertwined. Government and industry reports often highlight that quantum networks with entangled states will enable new forms of cryptography and secure communication that are impossible classically. We may see cluster-state quantum computers acting as secure routers or servers that perform encrypted quantum computation or facilitate key exchanges among clients.
Threat to Classical Cryptography
On the flip side, a full-scale quantum computer (photonic or otherwise) poses a serious threat to many classical cryptographic systems in use today. Most of today’s public-key encryption (such as RSA, Diffie-Hellman, and elliptic curve cryptography) relies on mathematical problems like integer factorization or discrete logarithms, which a quantum computer can solve exponentially faster using Shor’s algorithm. A photonic cluster-state quantum computer with sufficient qubits and low error rates could run Shor’s algorithm to factor large RSA moduli or break elliptic curve crypto, thereby breaking the security of essentially all internet communications that rely on those schemes. This is a well-recognized risk: government agencies like NIST and NSA have warned that quantum computers could render current encryption insecure and have initiated programs to develop Post-Quantum Cryptography (PQC) – new algorithms that are believed to resist quantum attacks. It’s worth noting that the timeline is uncertain: some experts predict that a cryptographically relevant quantum computer (i.e., one able to break RSA-2048) could be developed within a decade or two, especially given the aggressive progress by companies like PsiQuantum. Photonic cluster-state computers are among the likely contenders to reach that scale, thanks to their prospective ability to scale to millions of qubits. If, say, by 2030 a photonic quantum computer achieves fault tolerance with a few thousand logical qubits, it could run Shor’s algorithm on RSA-2048 (which requires on the order of thousands of logical qubits and $$10^9$$ logical operations) in reasonable time, thus compromising much of today’s secure communications. This looming threat is often referred to as “Y2Q” (Years to Quantum) analogous to Y2K – the moment in time when quantum computers can break current cryptography. Some security analysts even caution that adversaries might harvest encrypted data now and store it, with the intent to decrypt it later when a quantum computer becomes available (known as a “harvest now, decrypt later” attack). This means sensitive data with long secrecy requirements (like government or personal records) are already vulnerable in principle, even before quantum computers exist, because an attacker could record the encrypted data now and decrypt it in a decade with a quantum computer. Therefore, the development of photonic quantum computers intensifies the urgency for migration to quantum-resistant cryptographic algorithms (such as lattice-based, hash-based, or code-based cryptosystems that are believed to be secure against quantum attacks). Bodies like NIST have already announced new PQC standards in 2022–2024 and are urging adoption of these by the mid-2020s.
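To put “reasonable time” in perspective, a very rough, hedged estimate: if breaking RSA-2048 takes on the order of $$10^9$$ logical operations (the figure cited above), the wall-clock time is simply that count divided by the logical clock rate, which is unknown for future machines. The sketch below evaluates a few assumed rates; none of these numbers are predictions.

```python
# Wall-clock time for ~1e9 logical operations under assumed logical clock rates.
LOGICAL_OPS = 1e9                     # order-of-magnitude figure from the text

for rate_hz, label in [(1e3, "1 kHz"), (1e6, "1 MHz"), (1e9, "1 GHz")]:
    seconds = LOGICAL_OPS / rate_hz
    print(f"logical clock {label:>5}: {seconds:,.0f} s "
          f"(~{seconds / 3600:.1f} h, ~{seconds / 86400:.1f} days)")
# 1 kHz -> ~11.6 days; 1 MHz -> ~17 minutes; 1 GHz -> ~1 second.
# Whether photonic hardware reaches MHz-or-better logical rates is an open question.
```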
Post-Quantum and Quantum-Resistant Measures
In preparation for quantum computers, the cybersecurity community is working on two fronts: deploying post-quantum cryptography (PQC), which runs on classical hardware but rests on problems believed to be hard even for quantum computers, and developing quantum cryptography (like QKD) for scenarios requiring information-theoretic security. Photonic cluster-state computers can actually aid in testing and implementing PQC. For instance, they could be used to probe the quantum hardness of proposed PQC schemes (by attempting to solve underlying problems such as short-vector search in lattices with quantum algorithms, identifying weak candidates before they are widely adopted). Also, because photonic QCs can interconnect through optical links, they could be part of a hybrid infrastructure where classical PQC and QKD are both used – e.g., QKD to exchange keys, and PQC algorithms running on classical/quantum hybrids to secure other aspects. Governments are treating the transition to PQC as a pressing issue. It is expected that even with PQC, some niche applications will prefer quantum cryptography (for example, diplomatic or military communications might use QKD over fiber between cities for provably secure key exchange, supplemented by PQC algorithms for digital signatures).
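As a small illustration of the hybrid idea, the sketch below combines a QKD-derived key with a shared secret from a classical post-quantum key exchange into a single session key, so an attacker would have to defeat both mechanisms. The key material here is just placeholder random bytes (the QKD session and the PQC key establishment, e.g. with ML-KEM, are assumed to happen elsewhere), and the HKDF-style derivation is simplified for illustration.

```python
import hashlib, hmac, os

def combine_keys(qkd_key: bytes, pqc_key: bytes, context: bytes = b"hybrid-session-v1") -> bytes:
    """Derive one session key from two independently established secrets.
    Recovering it requires breaking *both* the QKD link and the PQC scheme."""
    # Simplified HKDF-style extract-then-expand using HMAC-SHA256 (illustration only).
    prk = hmac.new(context, qkd_key + pqc_key, hashlib.sha256).digest()
    return hmac.new(prk, b"session-key" + b"\x01", hashlib.sha256).digest()

# Placeholder secrets standing in for real QKD and PQC outputs (assumptions).
qkd_key = os.urandom(32)   # key distilled from a QKD session
pqc_key = os.urandom(32)   # shared secret from a post-quantum KEM
session_key = combine_keys(qkd_key, pqc_key)
print(session_key.hex())
```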
Blind Quantum Computing (Secure Delegation)
Photonic cluster-state computing also offers a unique security application: Blind Quantum Computing (BQC) – closely related to quantum homomorphic encryption – a method for a client to delegate a quantum computation to a quantum server without revealing the input, output, or algorithm to the server. This is crucial if quantum computing is provided as a cloud service (which is likely, given the complexity and cost of quantum hardware). The cluster-state model has a built-in way to achieve BQC: a client can prepare single photons in certain encoded states (e.g., randomly rotated qubits) and send them to the server, who incorporates them into its large cluster state and performs the measurements instructed by the client. Because the qubits were encoded with the client’s secret random rotations, the server’s measurement results are encrypted (the server can’t interpret them) and the server doesn’t know the actual bases – effectively it is doing a computation for the client “blindly.” The client only needs to be able to prepare and send single qubits (or entangled pairs) and perhaps do some simple classical processing; the heavy quantum lifting is done by the server’s photonic cluster computer.
Crucially, the one-way model makes it easier to implement blind computing because measurements (and their needed bases) are the primary actions – a client can use a technique by Broadbent, Fitzsimons, and Kashefi (2009) where the client’s random choices of measurement angles hide the true computation from the server. This protocol guarantees that the server learns nothing about the client’s data or which algorithm is being executed, aside from an upper bound on its size. It’s analogous to encrypting your data before sending it to a cloud, but here it’s quantum data being encrypted by quantum means. Photonic systems are especially well-suited to this because they can easily transmit qubits from the client to server (via fiber) and because cluster-state computers naturally align with the protocol (the “brickwork state” used in blind QC protocols is a type of cluster state).
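To make the angle-hiding trick concrete, here is a minimal single-qubit simulation; it is a toy model of that one ingredient, not the full brickwork-state protocol. The client wants the statistics of measuring $$|+\rangle$$ in basis angle $$\phi$$, but only ever reveals $$\delta = \phi + \theta + r\pi$$, where $$\theta$$ and $$r$$ are its secret random choices; the server sees a uniformly random angle, while the client recovers the true outcome by flipping the reported bit with $$r$$.

```python
import numpy as np
rng = np.random.default_rng(1)

def blind_measurement(phi: float) -> int:
    """Single-qubit toy model of hiding a measurement angle phi from the server."""
    theta = rng.integers(8) * np.pi / 4              # client's secret pre-rotation
    r = rng.integers(2)                              # client's secret outcome flip
    delta = (phi + theta + r * np.pi) % (2 * np.pi)  # the only angle the server sees

    # Server measures |+_theta> in the {|+_delta>, |-_delta>} basis.
    p_plus = np.cos((theta - delta) / 2) ** 2
    s = 0 if rng.random() < p_plus else 1            # server's (encrypted) outcome

    return s ^ r                                     # client decodes the real result

# Decoded statistics match an honest measurement of |+> at angle phi = pi/3,
# i.e. P(outcome 0) = cos^2(phi/2) ~ 0.75, even though the server never learns phi.
phi = np.pi / 3
outcomes = [blind_measurement(phi) for _ in range(20000)]
print(1 - np.mean(outcomes), np.cos(phi / 2) ** 2)
```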
In fact, the first demonstration of blind quantum computing was done with a photonic setup: researchers in 2012 used a four-photon cluster state and a remote client to successfully perform a blind computation (a simple algorithm). With full-scale photonic cluster computers, we can envision a scenario where companies or individuals send quantum-encrypted tasks to a quantum cloud (perhaps run by a big provider), and get back the result without the provider ever being able to read the sensitive data or even know what algorithm was run. This could protect proprietary algorithms or confidential data (like medical or financial data) in a future quantum cloud computing ecosystem.
Security of the Quantum Computer Itself
We should also consider the security of the quantum computer itself from attacks. If quantum computers are networked, one must secure both the quantum channels and the classical control channels. Photonic cluster-state computers will exchange photons with other devices (clients, other servers), so an attacker must be prevented from tampering with these photons – for instance, intercepting them (which QKD can detect on the quantum channel) or injecting their own photons to manipulate the computation. Protocols like BQC are designed to detect or nullify tampering (an incorrect measurement or extra photon would lead to results that fail verification with high probability). Moreover, one can use authentication of quantum states – a nascent area of research – to ensure that an entangled state has not been tampered with. On the classical side, any quantum computer will have classical control software and user interfaces that need standard cybersecurity (to prevent hacking, unauthorized access, etc.). Photonic systems don’t change that picture much, except that they are more naturally networked by design (since they operate on light). Ensuring the integrity of photonic quantum protocols is therefore an active research area; fortunately, the principles of quantum mechanics (no-cloning, disturbance upon measurement) often provide built-in protection.
In conclusion, the impact of photonic cluster-state computing on cybersecurity is two-fold:
- Positive: It will bolster quantum cryptographic methods, enabling secure key exchange (QKD) over long distances and among multiple parties, and allowing secure delegation of computation via blind quantum computing. Photonic quantum nodes will be integral to the coming quantum-secured internet, ensuring communications privacy in ways not possible classically. Critical infrastructure and communications can be made eavesdrop-proof by using entangled photons and quantum protocols, with photonic cluster computers possibly acting as the network hubs handling those tasks.
- Negative: It accelerates the threat to traditional cryptography. Once such a quantum computer is operational, RSA/ECC-based encryption and signatures, and any other scheme resting on factoring or discrete logarithms, will no longer be safe (symmetric ciphers and hash functions are weakened only quadratically by Grover’s algorithm and can be protected with larger key and output sizes). This necessitates the urgent adoption of post-quantum cryptographic algorithms in all sectors before quantum computers reach that level. The mere prospect of a photonic quantum computer in the near future means that even today’s data might be vulnerable in the future (if recorded now). Organizations are therefore advised to start transitioning to PQC now, and governments are investing in standards and migration plans accordingly.
Overall, photonic cluster-state computing will be a double-edged sword for cybersecurity: empowering new secure communication forms while simultaneously rendering obsolete many of our current encryption techniques. The net effect will depend on our preparedness (deployment of PQC) and on leveraging quantum technologies for defense as well as offense. In the best case, we end up with a quantum-safe world where data is secured by quantum-resistant algorithms and quantum cryptography, possibly facilitated by the same photonic quantum machines that could have broken the old schemes.
Future Outlook
The future of photonic cluster-state computing is promising, but there are key milestones and breakthroughs needed before it reaches commercial viability and widespread use. Here we outline the expected outlook, timeline, and potential roles of this technology in the broader quantum ecosystem:
Timeline to a Fault-Tolerant Photonic Quantum Computer
Experts estimate that achieving a fault-tolerant (error-corrected) quantum computer may take on the order of a decade or more of intensive development (as of 2024); the author’s own estimate is roughly seven years (see Q-Day Predictions: Anticipating the Arrival of Cryptoanalytically Relevant Quantum Computers (CRQC)). Photonic approaches are in a tight global race with superconducting qubits, ion traps, and others. Some optimistic forecasts, such as from PsiQuantum, suggest a ~1 million physical qubit photonic quantum computer could be built by the late 2020s, delivering the first commercially useful, error-corrected quantum computations. This timeline is ambitious and assumes steady progress in integrating photonic components and demonstrating error correction. A more conservative timeline from many academics is that by the early to mid-2030s we might see a fault-tolerant quantum computer (of any type). Photonics, due to its scalability, could indeed be the first if the loss and source challenges are solved in the next few years. We can break the timeline into stages:
- Near-term (2025–2027): We expect demonstrations of increasingly large photonic cluster states (perhaps tens of photons in a genuine cluster used for a small computation) and the first implementation of quantum error correction codes in a photonic platform (e.g., demonstrating a logical qubit with a simple code like a repetition code or a small surface code). Also, we might see the first blind quantum computing cloud service on a small scale – for example, companies offering secure quantum computation on a few qubits for clients, using photonic links.
- Mid-term (late 2020s): If all goes well, a photonic machine with on the order of several hundred high-quality (logical) qubits could be operational. This might be enough to perform some specialized tasks beyond classical ability (like certain chemistry simulations or optimization problems) in a fault-tolerant manner. It is around this time that a machine capable of breaking RSA-2048 could plausibly first appear; governments anticipate one might exist by around 2030, which is why they are pushing post-quantum standards now. PsiQuantum’s goal of a million physical qubits by ~2027 implies perhaps ~1000 logical qubits (depending on error rates and code overhead; a rough scaling sketch follows this list), which could indeed attempt big algorithms like breaking RSA. Whether this timeline holds depends on hitting performance and integration targets soon (for instance, demonstrating ~99.9% reliable photonic fusions and <1% loss by 2025).
- Long-term (2030s and beyond): Photonic cluster-state computers, if successful, will be scaled up further, improving error rates and adding more logical qubits. By the 2030s, we could have universal quantum computers with thousands of logical qubits capable of a wide range of algorithms – solving classically intractable problems in cryptography, chemistry (like simulating complex molecules for drug discovery), materials science, optimization, machine learning, etc. Because photonic machines can be networked, we might also see distributed quantum computing, where multiple smaller photonic quantum computers link to act as one larger machine (thereby circumventing size limitations of a single module). The ultimate vision is a fault-tolerant quantum internet where photonic cluster-state quantum computers serve as both the nodes and the communication channels.
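To make the physical-to-logical conversion mentioned above concrete, here is a minimal sketch using the textbook surface-code scaling relations: logical error per operation roughly $$0.1\,(p/p_{\mathrm{th}})^{(d+1)/2}$$ and about $$2d^2$$ physical qubits per logical qubit of distance $$d$$. All numbers are illustrative assumptions; photonic fusion-based architectures use their own resource accounting, so this is only an order-of-magnitude guide, not a statement about any vendor’s design.

```python
def code_distance_for(target_logical_error: float, p_phys: float, p_th: float = 1e-2) -> int:
    """Smallest odd surface-code distance d with 0.1*(p/p_th)**((d+1)/2) below the target."""
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    return d

# Illustrative assumptions: 0.1% physical error rate, 1e-12 target logical error per operation.
d = code_distance_for(1e-12, p_phys=1e-3)
phys_per_logical = 2 * d ** 2              # ~d^2 data qubits + ~d^2 ancilla qubits
logical_qubits = 1_000_000 // phys_per_logical
print(d, phys_per_logical, logical_qubits)  # d=21, ~900 physical per logical, ~1100 logical
```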
Expected Breakthroughs Required
Several key breakthroughs are needed to realize this future:
- Nearly Deterministic Single-Photon Sources: We need on-demand sources that produce indistinguishable single photons with very high efficiency (ideally >99% chance per pulse) and a very low probability of emitting extra photons (multi-photon events below $$10^{-6}$$). Quantum dot emitters coupled to cavities are one route (recent progress here is encouraging); another is parametric down-conversion with active multiplexing. A breakthrough would be achieving, say, a 99.9%-efficient source of indistinguishable photons at GHz rates. This would drastically reduce the resource overhead, since huge multiplexing networks would no longer be needed (a rough estimate of that overhead follows this list).
- High-Fidelity, Low-Loss Entangling Operations: Whether it’s a beam splitter + detection (fusion gate) or a nonlinear interaction (like an integrated $$\chi^{(3)}$$ waveguide or Rydberg atomic interface), we need entangling operations that succeed with very high probability or are heralded with minimal loss. The fusion-based architecture aims for operations that tolerate loss – but still, each fusion should be as good as possible. A milestone would be demonstrating a two-photon entangling gate on-chip that works, say, 90% of the time and is heralded (so you know when it fails) or a deterministic CNOT via a quantum dot-cavity with fidelity >90%. These will then be improved to >99% with error correction.
- Ultra-Low-Loss Photonic Circuits: To build large clusters, every component (waveguide, beamsplitter, switch) must introduce minimal loss and decoherence. Advances in fabrication (e.g., silicon nitride waveguides with <0.1 dB/m loss, better coupling from fiber to chip, etc.) are needed. Also efficient detectors (>99% efficiency, low timing jitter) are required for measurement. We may see new materials (like lithium niobate or aluminum nitride photonics) that allow integrating sources, circuits, and detectors with low loss. 3D photonic integration (stacking photonic layers) might help pack more functionality with shorter paths, reducing loss.
- Quantum Error Correction Demonstrations: A crucial proof-of-concept expected in the next few years is the demonstration of a logical qubit with photonics that has longer coherence than the underlying physical qubits. This means encoding a qubit into, e.g., a small cluster-based code and showing it survives noise better than a single photon. The first demonstration might be something like a repetition code protecting against polarization flips using several entangled photons, or a small surface code on a cluster state (which might require on the order of 20–30 photons). Achieving the break-even point where error-corrected qubits outperform physical ones will be a watershed moment, likely by the late 2020s. After that, it’s a matter of scaling up the code distance by adding more photons and improving gate fidelity. The one-way model is naturally suited to error correction – breakthrough experiments could involve using cluster states to implement topological cluster codes and correcting simulated losses or flips on the fly.
- Better Feedforward and Control Electronics: Another needed advance is in the classical hardware that coordinates the photonic system. We’ll need ultrafast logic, perhaps implemented in microwave photonics or in highly optimized FPGAs/ASICs, that can handle GHz-clocked operation. The integration of photonics with CMOS electronics (or even using light-based logic for feedforward) might be required. A breakthrough would be a fully integrated photonic chip with built-in fast optical switches controlled by on-chip photodiodes or phase-change-material modulators that react within a few nanoseconds based on previous detections – essentially merging some classical decision-making into the photonic domain. This will reduce latency and help manage complexity.
- Networking and Memory: For distributed and modular architectures, breakthroughs in quantum memory for photons – or, better yet, memoryless networking based on synchronization techniques – will help. If one module can consistently send entangled photons to another with high fidelity, and the two can perform inter-module gates (via teleportation) with ~99% success, that enables scaling beyond a single chip. Entangling two remote qubit modules has already been shown; extending that to many modules is on the horizon. Photonic cluster states could even serve as quantum repeaters themselves to connect distant quantum computers, so advances in entanglement swapping and purification via cluster measurements will be important for long-distance links.
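To give a feel for the numbers behind these requirements, the following minimal sketch estimates how many multiplexed heralded sources are needed to approximate an on-demand source, and how per-component losses compound into an end-to-end photon transmission. All parameters are illustrative assumptions; this is back-of-the-envelope accounting, not a model of any specific architecture.

```python
import math

def sources_needed(p_single: float, target: float = 0.999) -> int:
    """Multiplexed heralded sources needed so at least one fires per clock cycle:
    solve 1 - (1 - p)^N >= target for N."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_single))

def end_to_end_transmission(losses_db: list[float]) -> float:
    """Total transmission after a chain of components, each specified as a loss in dB."""
    return 10 ** (-sum(losses_db) / 10)

# A heralded SPDC source firing 5% of the time needs ~135 multiplexed copies to look
# near-deterministic; a 99% source needs only 2 (illustrative assumptions).
print(sources_needed(0.05), sources_needed(0.99))

# Illustrative per-photon loss budget: 0.2 dB source coupling, 10 cm of waveguide at
# 0.1 dB/m (0.01 dB), two switches at 0.2 dB each, 0.05 dB detector -> ~0.66 dB total,
# i.e. about 86% of photons survive the chain.
print(end_to_end_transmission([0.2, 0.01, 0.2, 0.2, 0.05]))
```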
Commercial Viability and Applications
Once a fault-tolerant photonic cluster-state computer is built, what will it be used for? Initially, likely applications include:
- Cryptography and Security: As discussed, breaking classical crypto or running new quantum-secure protocols. A quantum computer might first be rented by governments to decrypt historical data, or by companies to test their new PQC schemes against quantum attacks.
- Chemistry and Material Science: Quantum simulation of molecules to discover new drugs, catalysts, or materials is a killer app. Photonic QCs with a few hundred logical qubits could surpass classical supercomputers for simulating complex chemical systems (like enzyme active sites or new battery materials). This is considered one of the first useful applications of quantum computers.
- Optimization and Finance: Solving certain hard optimization problems (for logistics, scheduling, portfolio optimization, etc.) faster using quantum algorithms (like Grover’s algorithm, QAOA, or quantum annealing-like routines run on universal QCs). Photonic QCs, being fast, could attempt these in a way similar to today’s quantum annealers but with more rigor and possibly better scaling.
- Machine Learning: There’s growing interest in quantum machine learning. Photonic quantum computers can naturally represent high-dimensional data (using modes, time bins, frequencies) and might implement algorithms like quantum neural networks or accelerate linear algebra subroutines. There are proposals for quantum support vector machines, clustering algorithms, etc. If photonic QCs can perform these faster (taking advantage of optical parallelism), they could find use in big data analysis or AI applications.
- Quantum as a Service: Much like cloud computing today, we might see Quantum Cloud Platforms where users submit jobs to a photonic cluster-state quantum computer located at a data center. Photonics will ease integration with fiber networks – users could literally send their quantum states (or entangled signals) to the cloud for processing, enabling things like blind computing or distributed tasks. Companies might not buy a quantum mainframe; instead, they lease quantum time over the network.
Role in Quantum Networks and Hybrid Architectures
In the future, photonic cluster-state computers are expected to be central nodes in the Quantum Internet. They will likely work in tandem with other types of quantum devices:
- Hybrid systems: For example, a quantum data center might have memory nodes consisting of matter qubits (like NV centers or ions), which have long coherence, connected via photonic cluster states that serve as flying qubits between them. Photonic processors could handle communication and some fast processing, while matter qubits store information or perform particular high-fidelity operations. This plays to the strengths of each platform. Already, experimental quantum networks connect atomic clocks or NV centers via photons; adding cluster-state quantum processors in the loop could allow distributed quantum computing tasks, where different processors compute parts of an algorithm and exchange intermediate results quantum-mechanically. The Oxford ion-trap network demonstration effectively performed a distributed CNOT gate via a photonic link. In the future, one could distribute the pieces of a large algorithm across multiple modules (each module perhaps being easier to maintain at smaller size) and join them through photonic entanglement. Because photonic qubits travel, they provide the bus that connects modules (like an optical backplane in a quantum multicomputer).
- Quantum Repeater and Network Nodes: Photonic cluster states, particularly 1D cluster chains, have been proposed for quantum repeaters – devices that extend the range of entanglement distribution by dividing the channel into segments and connecting them (a toy rate comparison follows this list). A specific proposal uses entangled photon chains and measurements to perform entanglement swapping with error correction. As photonic cluster computers develop, they could incorporate repeater functionality, creating entanglement between distant nodes with high fidelity by correcting errors locally. Governments and companies (e.g., Cisco’s Quantum Networking group) are envisioning exactly this: quantum network routers that manage multi-partite entanglement and route quantum information between endpoints. Photonic cluster states are a natural resource for that, since they can be split, distributed, then measured to perform teleportations and gate teleportations as needed. One can imagine a quantum network architecture where each node creates a large photonic cluster, parts of which are sent to neighboring nodes; by measuring their parts of the cluster jointly, the nodes establish long-distance entanglement. This is essentially a fault-tolerant repeater network scheme.
- Scaling by Networking: Rather than building a single monolithic million-qubit machine, an alternate approach to scale is to network many smaller quantum computers. Photonics is the only practical way to do this because it’s the quantum medium for communication. In the long term, we could see a modular quantum computing approach: dozens of photonic quantum modules in a data center connected by fiber, acting as one large virtual quantum computer. This approach might overcome fabrication yield issues (maybe making a single chip with a million components is hard, but making ten chips with 100k each is easier, then network them). The demonstration of a distributed algorithm by Oxford (teleporting a gate between two traps) supports this concept – it’s the first step toward a quantum data center where tasks are divided among modules. Photonic cluster-state computers will provide the communication links (via teleportation through photons) that make the modules feel like a contiguous quantum machine. Achieving this seamlessly will rely on standardization of quantum network interfaces (likely based on photons at telecom wavelengths) and robust entanglement generation between modules – things that are actively being worked on now.
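To see why segmenting a long link helps, here is a toy comparison of the expected number of attempts needed to distribute entanglement directly versus via heralded segments connected by swapping. It assumes 0.2 dB/km fiber loss and ignores swap failures, memory decoherence, and detector inefficiency, so it is only a first-order illustration of the scaling argument.

```python
def expected_attempts_direct(length_km: float, loss_db_per_km: float = 0.2) -> float:
    """Expected attempts to send one photon across the full link in a single shot."""
    p = 10 ** (-loss_db_per_km * length_km / 10)
    return 1 / p

def expected_attempts_repeater(length_km: float, n_segments: int, loss_db_per_km: float = 0.2) -> float:
    """Rough attempt count with heralded segments and ideal memories: each segment is
    retried independently, then joined by entanglement swapping (toy model)."""
    p_seg = 10 ** (-loss_db_per_km * (length_km / n_segments) / 10)
    return n_segments / p_seg

# Illustrative: over 1000 km of fiber, direct transmission needs ~1e20 attempts,
# while ten heralded 100 km segments need on the order of 1e3.
print(f"{expected_attempts_direct(1000):.1e}")
print(f"{expected_attempts_repeater(1000, 10):.1e}")
```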
In conclusion, the future outlook for photonic cluster-state computing is bright: within the next 5–10 years we expect to see progressively larger and more reliable photonic quantum processors, possibly reaching the error-correction threshold. With continued progress, photonics could be the first platform to demonstrate a truly scalable, fault-tolerant quantum computer, ushering in the era of practical quantum advantage. This would transform cybersecurity (breaking and making encryption), revolutionize certain industries (through quantum simulation and optimization), and enable new applications (like secure cloud quantum computing and global quantum networks).
Of course, there are uncertainties in timing – unforeseen technical hurdles could slow progress, or alternative technologies might leap ahead. However, many signs point to photonics being a strong contender for the “Quantum Computing endgame” due to its inherent advantages in connectivity and error rates. The practical realization of large-scale photonic cluster-state computing will likely be a major landmark in science and engineering, comparable to the development of classical computers or the internet. If and when it happens, we can expect a paradigm shift: problems deemed impossible to solve classically may become tractable, and the integration of quantum processors with quantum communication will yield a new computing infrastructure that spans the globe – a Quantum Internet where photonic cluster-state quantum computers are the nodes performing computation and routing entanglement. This vision, outlined by quantum pioneers and slowly being built in labs, could become reality in the coming decades, marking the next chapter in the information age – the quantum chapter.
(This article was updated in Feb 2025 with the latest developments)
© 2025 Applied Quantum. All rights reserved