qGAN for learning probability distributions
Quantum Generative Adversarial Networks combine a quantum generator, which prepares a data distribution as a quantum state, with a classical discriminator that distinguishes real from fake samples. This hybrid architecture can learn probability distributions with fewer parameters than fully classical GANs.
Generative Adversarial Networks consist of two competing neural networks: a generator G that learns to produce realistic samples from a latent noise distribution, and a discriminator D that learns to classify samples as real or fake. In the quantum variant, the generator is replaced by a parameterized quantum circuit that transforms a simple input state into a complex output distribution over measurement outcomes.
The quantum generator operates by applying a sequence of single-qubit rotations and entangling gates controlled by classical parameters θ. When measured in the computational basis, the resulting bitstrings are interpreted as samples from the learned distribution. Because n qubits can encode 2ⁿ amplitudes, the quantum generator has the potential to represent exponentially rich distributions using only polynomially many parameters.
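To make the sampling picture concrete, here is a small NumPy sketch (independent of any quantum library) that draws bitstring samples from a random 3-qubit state via the Born rule; the random amplitudes stand in for the output of a parameterized circuit:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 3-qubit state is described by 2^3 = 8 complex amplitudes.
n = 3
amps = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
amps /= np.linalg.norm(amps)   # normalize so the probabilities sum to 1

# Born rule: measuring in the computational basis yields bitstring x
# with probability |<x|psi>|^2.
probs = np.abs(amps) ** 2
samples = rng.choice(2**n, size=5, p=probs)
print([format(s, f"0{n}b") for s in samples])
```

Note that the full distribution lives in the 8 amplitudes, while a real quantum generator would control them with only a handful of rotation angles.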
The discriminator remains classical and is typically implemented as a small feed-forward neural network. It receives either a real data sample or a measurement outcome from the quantum generator and outputs a probability that the sample is authentic. The two networks are trained adversarially: the generator tries to fool the discriminator, while the discriminator tries to avoid being fooled.
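As a rough illustration, a minimal classical discriminator might look like the NumPy sketch below; the layer sizes and weight initialization are arbitrary assumptions, not a prescribed architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny feed-forward discriminator: 3-bit input, one hidden layer.
W1 = rng.normal(size=(3, 8)) * 0.1
b1 = np.zeros(8)
W2 = rng.normal(size=8) * 0.1
b2 = 0.0

def discriminator(x):
    """Return the probability that bitstring x is a real data sample."""
    h = np.tanh(x @ W1 + b1)       # hidden layer
    logit = h @ W2 + b2            # scalar logit
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability in (0, 1)

sample = np.array([1.0, 0.0, 1.0])  # a measured bitstring from the generator
print("D(sample) =", discriminator(sample))
```

In practice the weights would be trained by backpropagation on the binary cross-entropy loss over batches of real and generated samples.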
Generator output state: |ψ(θ)⟩ = G(θ) |0⟩⊗ⁿ
Measurement probability: p_θ(x) = |⟨x|ψ(θ)⟩|²
Minimax objective: min_θ max_φ E_{x∼p_data}[log D_φ(x)] + E_{x∼p_θ}[log(1 − D_φ(x))]
Training a qGAN requires careful balancing of the generator and discriminator update steps. If the discriminator becomes too powerful too early, the generator receives vanishing gradients and ceases to learn. Conversely, if the generator outpaces the discriminator, the discriminator's feedback becomes uninformative.
In practice, one alternates between classical backpropagation through the discriminator and parameter-shift rule-based gradient estimation on the quantum generator. The parameter-shift rule allows exact computation of gradients for gate rotations on quantum hardware by evaluating the circuit at two shifted parameter values. This avoids the high variance associated with finite-difference methods.
Convergence of qGANs is an active research topic. Unlike classical GANs, where the global optimum corresponds to the generator perfectly replicating the data distribution, quantum generators are constrained by the expressivity of the chosen circuit ansatz. Recent work has shown that qGANs can achieve competitive performance on financial data-loading and image-generation benchmarks, particularly when the target distribution has low effective dimensionality.
Parameter-shift gradient: ∂⟨E⟩/∂θⱼ = ½ (⟨E⟩_{θⱼ+π/2} − ⟨E⟩_{θⱼ−π/2})
Generator loss: L_G(θ) = −E_{x∼p_θ}[log D(x)]
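The parameter-shift rule is easy to verify on a one-qubit toy case. The sketch below (plain NumPy, using ⟨Z⟩ after RY(θ)|0⟩ as the observable, an assumption for illustration) compares the shifted-evaluation gradient with the analytic derivative −sin θ:

```python
import numpy as np

def expval_z(theta):
    # RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, so <Z> = cos(theta)
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi[0] ** 2 - psi[1] ** 2

def parameter_shift(theta):
    # Evaluate the circuit at theta +/- pi/2; no finite-difference step size
    # is involved, so the estimate is exact up to shot noise on hardware.
    s = np.pi / 2
    return 0.5 * (expval_z(theta + s) - expval_z(theta - s))

theta = 0.7
print("shift rule:", parameter_shift(theta))
print("analytic:  ", -np.sin(theta))   # d/dtheta cos(theta) = -sin(theta)
```

Both values agree to machine precision, which is the key advantage over finite differences on noisy hardware.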
Runnable implementations you can copy and experiment with.
This Qiskit example constructs a simple parameterized quantum generator circuit, runs it on a simulator, and prints the sampled probability distribution over computational basis states.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
import numpy as np
n_qubits = 3
shots = 8192
def generator_circuit(params):
    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits):
        qc.ry(params[i], i)
    for i in range(n_qubits - 1):
        qc.cx(i, i + 1)
    qc.measure_all()
    return qc
params = np.random.rand(n_qubits) * 2 * np.pi
qc = generator_circuit(params)
simulator = AerSimulator()
job = simulator.run(transpile(qc, simulator), shots=shots)
counts = job.result().get_counts()
probs = {k: v / shots for k, v in counts.items()}
print("Generated distribution:", probs)

This PennyLane example defines a quantum generator as a QNode. The circuit applies RY rotations followed by CNOT entanglement, returning the full probability distribution over measurement outcomes.
import pennylane as qml
from pennylane import numpy as np
n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev)
def generator(params, noise):
    for i in range(n_qubits):
        qml.RY(noise[i] * np.pi, wires=i)
    for i in range(n_qubits):
        qml.RY(params[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return qml.probs(wires=range(n_qubits))
params = np.random.rand(n_qubits, requires_grad=True)
noise = np.random.rand(n_qubits)
probs = generator(params, noise)
print("Learned distribution:", probs)