The 3 Big Advances Likely to Precede Quantum Computing Rollout
By Goldsea Staff | 27 Nov, 2025

We're likely to see three technologies that will dramatically advance classical computing and lay the technological groundwork for building and deploying practical quantum computers.

For decades, computing has obeyed Moore's Law, shrinking transistors to roughly double processing power every two years. But this period of exponential growth is reaching its twilight.

Expanded view of 3-D stacking within a processor chip. (Image by Gemini)

As each transistor on a semiconductor chip approaches the physical size of just a few atoms, the fundamental laws of physics — primarily those governing heat, electron tunneling, and manufacturing precision — have begun to assert themselves, making further miniaturization increasingly costly and difficult. The performance gains we once took for granted are now stalling.

A greatly enlarged view of circuitry within a photonic chip. (Image by Gemini)

Meanwhile, the demands placed on computing infrastructure have exploded. We are witnessing a "Tsunami of Data" generated by AI, IoT devices, and massive data centers, all of which require unprecedented levels of speed and energy efficiency. The sheer volume of this data has created a crippling energy wall—the amount of power required to simply move and process data is becoming unsustainable.

IBM TrueNorth and Intel Loihi neuromorphic chips. (Image by Gemini)

While quantum computing represents the ultimate destination — a radical shift that promises to solve problems currently intractable for any classical machine — it is years, perhaps a decade or more, away from being a practical, universally applicable technology. The systems are fragile, complex, and currently only useful for highly specialized calculations.

Therefore, the immediate future of the computing revolution rests not in the quantum realm, but in three parallel classical breakthroughs. These innovations are not just incremental improvements; they represent fundamental shifts in architecture, physics, and material science that are necessary to extend the lifespan of classical computing, solve the immediate crises of power and speed, and lay the technological groundwork for when quantum computers finally arrive. These three breakthroughs are the dimensional leap of 3-D Semiconductor Stacking, the speed revolution of Classical Photonics Computing, and the architectural paradigm shift of Neuromorphic Computing.


1. The Dimensional Leap with 3-D Semiconductor Stacking

The traditional method of chip manufacturing is inherently two-dimensional. Processors, memory, and accelerators (like GPUs or specialized AI hardware) are fabricated side-by-side on flat silicon wafers and then mounted onto a large printed circuit board (PCB). This horizontal layout is the source of a major performance bottleneck known as the "Memory Wall."

The Memory Wall refers to the growing performance gap between the extremely fast Central Processing Unit (CPU) and the comparatively slow speed at which it can retrieve data from off-chip dynamic random-access memory (DRAM). Because the CPU and memory are physically separated on the PCB, data must travel relatively long distances through slow, power-hungry copper wires. Every time a processor needs data, it must wait, and this waiting consumes both time (latency) and excessive power. In modern computing, a significant percentage of the system's total energy is dedicated to simply moving data around, not processing it.
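To make that imbalance concrete, here is a rough back-of-envelope sketch in Python. Every figure in it (peak compute rate, DRAM bandwidth, the size of the hypothetical workload) is an illustrative assumption rather than a measurement of any particular chip, but the arithmetic shows why a fast processor can spend most of its time waiting on memory.

```python
# Back-of-envelope sketch of the Memory Wall: time spent moving data vs. computing.
# All figures are illustrative assumptions, not measurements of any specific system.

peak_flops = 10e12        # assumed peak compute rate: 10 trillion operations/s
dram_bandwidth = 100e9    # assumed off-chip memory bandwidth: 100 GB/s

ops = 1e9                 # operations in a hypothetical, memory-hungry kernel
bytes_moved = 8e9         # bytes that kernel must pull from DRAM

t_compute = ops / peak_flops             # time the processor actually computes
t_memory = bytes_moved / dram_bandwidth  # time spent waiting on data movement

print(f"compute: {t_compute*1e3:.1f} ms, memory: {t_memory*1e3:.1f} ms "
      f"({t_memory/t_compute:.0f}x longer waiting than working)")
```

With these assumed numbers the processor computes for a fraction of a millisecond and then waits hundreds of times longer for DRAM, which is exactly the bottleneck the article describes.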

The 3-D Solution

The solution to the Memory Wall is to introduce the third dimension: vertical stacking. Instead of spreading components out, manufacturers are now stacking layers of different components directly on top of each other. A 3-D chip might consist of layers for logic (the processor), layers for high-speed cache memory, and layers for DRAM, all precisely aligned and bonded together.

The critical enabling technology for this is the Through-Silicon Via (TSV). TSVs are tiny, vertical connections—thousands of them—that pass straight through the silicon layers, acting as incredibly short, high-speed electrical highways between the stacked components. These vertical interconnects are measured in micrometers, drastically reducing the data path length from the millimeters or even centimeters required on a traditional PCB.
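As a rough illustration of why shorter interconnects matter, the sketch below applies the standard dynamic-switching estimate E ≈ C·V², with wire capacitance assumed to grow linearly with wire length. The capacitance-per-millimetre and voltage values are round, assumed figures for illustration only, not data for any real process or package.

```python
# Illustrative estimate of per-bit signalling energy, E ≈ C * V^2, where the wire
# capacitance C is assumed to grow linearly with wire length. All constants are
# rough, assumed values for illustration, not figures from any real design.

CAP_PER_MM = 0.2e-12      # assumed wire capacitance: ~0.2 pF per millimetre
VOLTAGE = 1.0             # assumed signalling voltage in volts

def bit_energy_joules(wire_length_mm):
    """Approximate dynamic energy to switch one bit across a wire of this length."""
    return CAP_PER_MM * wire_length_mm * VOLTAGE ** 2

pcb_trace = bit_energy_joules(50.0)   # ~5 cm board trace between CPU and DRAM
tsv = bit_energy_joules(0.05)         # ~50 micrometre through-silicon via

print(f"PCB trace: {pcb_trace:.1e} J/bit, TSV: {tsv:.1e} J/bit, "
      f"ratio ~{pcb_trace / tsv:.0f}x")
```

Under these assumptions, shrinking the data path from a centimetres-long board trace to a micrometre-scale TSV cuts the per-bit switching energy by roughly three orders of magnitude.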

This dimensional leap solves several core problems simultaneously:

First, it slashes latency by reducing travel time, allowing the processor to access data almost instantaneously. Second, it dramatically improves energy efficiency because less power is needed to push the data across a shorter, cleaner connection. Finally, it enables compute-in-memory, a powerful architecture where processing cores are placed directly adjacent to the memory they need, blurring the line between computation and storage. This architecture is especially critical for AI and machine learning tasks that require constantly moving massive data sets between memory and the accelerator.

3-D stacking is the most immediate and impactful architectural change to classical computing. It is already being implemented in high-end processors, memory modules (like High Bandwidth Memory, or HBM), and specialized AI accelerators, ensuring that the performance gains of classical silicon continue, even as Moore’s Law slows its pace.


2. The Speed of Light: Classical Photonics Computing

While 3-D stacking addresses the spatial challenges of computing, classical photonics computing tackles the fundamental physical limitation of using electrons to move information.

In traditional electronic chips, data is carried by electrons flowing through copper wires. This process is inherently inefficient. Electrons encounter resistance, leading to scattering and friction, which generates heat. This heat is the single biggest impediment to increasing clock speeds and component density in modern chips; the massive air conditioning units and cooling systems of modern data centers exist primarily to manage the heat generated by electron movement. This is the Energy Wall in its most literal form.

The Photonics Solution

Classical photonics replaces electrons with photons (particles of light). Light travels much faster than electrons in a wire, and, crucially, photons do not interact with each other or the material through which they travel in the same way electrons do. This means light can transmit data at incredibly high speeds with almost no energy loss and, therefore, virtually no heat generation.

The core technology is Silicon Photonics, which involves fabricating all the necessary optical components—waveguides (which act as "wires" for light), modulators (which act as "switches" to encode data), and detectors—directly onto standard silicon chips using existing, high-volume semiconductor manufacturing techniques.

The initial and most widely adopted application of photonics is in interconnects—replacing the copper wires used to link chips, servers, and data centers. By switching to optical fibers and on-chip waveguides, data centers can achieve dramatic increases in bandwidth while drastically cutting power consumption.
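One reason a single optical link can carry so much traffic is wavelength-division multiplexing: many independent color channels share one fiber or waveguide. The quick calculation below uses assumed, illustrative channel counts and signaling rates rather than the specifications of any real transceiver.

```python
# Why one optical link carries so much data: wavelength-division multiplexing lets
# many independent colour channels share a single fibre or waveguide.
# Channel count, symbol rate, and modulation are assumed, illustrative figures.

channels = 64             # assumed number of wavelength channels
symbol_rate = 50e9        # assumed 50 gigabaud per channel
bits_per_symbol = 2       # e.g. PAM-4 modulation carries 2 bits per symbol

aggregate_bps = channels * symbol_rate * bits_per_symbol
print(f"aggregate bandwidth: {aggregate_bps / 1e12:.1f} Tb/s on one waveguide")
```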

The next, more revolutionary step involves using light not just to move data, but to perform the actual computation (arithmetic and logic). Light-based logic gates promise to execute calculations with significantly less energy and at higher speeds than their electronic counterparts. This is particularly effective for large-scale, parallel tasks like those found in AI, machine learning, and high-performance computing (HPC) acceleration.
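Much of the computational promise comes from the fact that a mesh of interferometers can apply a linear transform to the optical amplitudes passing through it, so a matrix-vector product effectively happens as the light propagates. The toy NumPy sketch below only reproduces the equivalent linear algebra to show what operation gets offloaded; it is not a model of any real photonic processor, and the random unitary matrix is simply a stand-in for whatever transform a mesh would be programmed to implement.

```python
import numpy as np

# Toy model of photonic matrix-vector multiplication: a mesh of interferometers
# applies a fixed linear transform to the light amplitudes entering its inputs.
# This script only reproduces the equivalent maths; it does not model real hardware.

rng = np.random.default_rng(0)

# A random unitary matrix (via QR decomposition) plays the role of the mesh.
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
mesh, _ = np.linalg.qr(a)

signal = rng.normal(size=4)       # input data encoded in optical amplitudes
output = mesh @ signal            # the multiply happens "as the light propagates"

print(np.round(np.abs(output) ** 2, 3))   # detectors read out output intensities
```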

Crucially, classical photonics is a parallel and distinct revolution from quantum photonics (like PsiQuantum’s approach). Classical photonics uses light as a faster, cooler carrier for classical bits (still 0s and 1s), while quantum photonics uses single photons as quantum qubits (using superposition and entanglement). Classical photonics will ensure the communication backbone of future data centers is fast and efficient enough to even host a quantum co-processor.


3. The Brain’s Architecture: Neuromorphic Computing

The third major breakthrough challenges not the materials (silicon vs. light) or the layout (2-D vs. 3-D), but the fundamental architecture of how computers process information. Since the 1940s, classical computers have been based on the Von Neumann architecture, where the processor and memory are physically separate. While incredibly versatile, this architecture is inherently inefficient for tasks that require massive parallel processing and data fusion, particularly modern Artificial Intelligence.

The human brain, by contrast, operates with staggering efficiency, consuming only about 20 watts of power—less than a dim lightbulb—to perform real-time sensing, reasoning, and learning.

The Neuromorphic Solution

Neuromorphic computing seeks to mimic the structure and function of the biological brain, fundamentally changing how data is processed.

  • Mimicking Neurons: Neuromorphic chips use spiking neural networks (SNNs) and specialized hardware to emulate biological neurons and synapses. Unlike traditional computers that process data continuously and synchronously, neuromorphic systems are event-driven. They only "spike" or consume energy when a change or signal is detected, much like real neurons (a minimal sketch of this behavior follows this list).
  • The Power of Synapses: The systems use novel components, such as memristors, which act as synthetic synapses. Memristors (memory-resistors) can store and process information simultaneously, essentially combining computation and memory into a single element. This instantly eliminates the Von Neumann bottleneck and the Memory Wall that plague traditional chips.
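Here is the minimal sketch referenced above: a leaky integrate-and-fire (LIF) neuron in Python. It is a textbook toy model, not code for TrueNorth, Loihi, or any real memristor hardware, and its threshold and leak values are arbitrary illustrative choices; it simply shows the event-driven idea that a neuron leaks and integrates its input, and only fires when a threshold is crossed.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy illustration of event-driven,
# spiking computation. Threshold and leak values are arbitrary illustrative choices.

def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Return a list with 1 at every step where the neuron spikes, else 0."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # leak a little, then integrate input
        if potential >= threshold:              # only a threshold crossing costs work
            spikes.append(1)
            potential = 0.0                     # reset after the spike
        else:
            spikes.append(0)                    # otherwise the neuron stays silent
    return spikes

# A mostly quiet input stream: the neuron fires only when activity accumulates.
print(lif_neuron([0.0, 0.1, 0.0, 0.6, 0.7, 0.0, 0.0, 1.2, 0.0]))
# -> [0, 0, 0, 0, 1, 0, 0, 1, 0]
```

Most time steps produce no spike and therefore, on event-driven hardware, essentially no work, which is the source of the efficiency gains described below.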

This event-driven, compute-in-memory architecture results in unprecedented energy efficiency. Neuromorphic chips, such as those developed by IBM (TrueNorth) and Intel (Loihi), have demonstrated processing capabilities at tens of milliwatts of power—orders of magnitude less than standard CPUs performing the same AI tasks.

Neuromorphic computing is not a general-purpose processor; it is designed to excel at specific tasks requiring real-time sensing, pattern recognition, and rapid, on-the-spot learning. This makes it ideal for "edge computing"—applications in robotics, autonomous vehicles, smart sensors, and advanced mobile devices where power conservation and instantaneous reaction time are paramount.

While quantum computing seeks to perform massive calculations, neuromorphic computing seeks to perform massive sensory processing tasks with brain-like efficiency, offering a vital piece of the puzzle for the pervasive, always-on AI future.

The Necessary Classical Foundation

These three breakthroughs—3-D stacking, classical photonics, and neuromorphic computing—are not simply competitors to quantum computing; they are essential prerequisites for it.

The immediate challenges facing the industry (the Energy Wall, the Memory Wall, and the slowing of Moore's Law) are fundamentally classical in nature. Addressing them ensures that we can continue to advance AI, manage the overwhelming growth of data, and build the necessary high-performance infrastructure. Furthermore, technologies like classical photonics and 3-D integration will be crucial for the cryogenic control systems, high-speed interconnects, and decoding hardware required to manage and run the first generation of error-corrected quantum computers.

The future of computing is a hybrid ecosystem: a robust classical foundation, built upon 3-D density, photonic speed, and neuromorphic efficiency, which will ultimately support the specialized power of the quantum co-processor. The breakthroughs that truly revolutionize processing power in the next five to ten years will be those that master the laws of classical physics in radically new ways.
