Quantum linear system problems


Overview

The quantum linear system problem (QLSP) is a quantum variant of the linear system problem $Ax = b$. Given access to an invertible matrix $A$ and a normalized quantum state $|b\rangle$, QLSP asks for a construction of the solution quantum state

$$|x\rangle := A^{-1}|b\rangle / \Vert A^{-1} |b\rangle \Vert_2$$

with bounded error.

Suppose $A \in \mathbb{C}^{N \times N}$ and denote the condition number of the coefficient matrix by $\kappa := \Vert A \Vert_2 \Vert A^{-1} \Vert_2$. Recent progress has produced near-optimal quantum algorithms that solve QLSP with bounded error $\epsilon$ and complexity $\tilde{O}\left(\kappa \log(1/\epsilon)\, \mathrm{polylog}(N)\right)$. In this example, we deploy a much simpler construction of the solution to QLSP, at the cost of a worse total complexity.
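As a concrete point of reference before turning to the quantum construction, the target state can be formed classically for a small instance. The sketch below is illustrative only; the $4 \times 4$ Hermitian test matrix and right-hand side are arbitrary choices, not part of the package.

% illustrative only: classical construction of the QLSP target state
% for a small, arbitrarily chosen Hermitian test matrix
A = [2 1 0 0; 1 3 1 0; 0 1 2 1; 0 0 1 3];
b = [1; 0; 0; 0];          % amplitudes of the state |b>
x = A \ b;                 % classical solve of A x = b
x_state = x / norm(x);     % normalized solution state |x>
disp(x_state);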

The equivalent problem in function approximation

The immediate idea for solving QLSP is to approximately implement the map $A \mapsto A^{-1}$. In light of QSP, this implementation boils down to the scalar function $f(x) = x^{-1}$. Without loss of generality, we assume the matrix is normalized so that $\Vert A \Vert_2 \le 1$. The condition number then implies that the eigenvalues of $A$ lie in the set $D_\kappa := [-1, -1/\kappa] \cup [1/\kappa, 1]$. Hence, we have to find an odd polynomial approximation to $f(x)$ on $D_\kappa$. The domain is set to $D_\kappa$ rather than $[-1, 1]$ to exclude the singularity of $f(x)$ at $x = 0$.
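The reason a scalar approximation suffices is that a matrix function acts through the eigenvalues: if a polynomial approximates $1/x$ on $D_\kappa$, then applying it to $A$ approximates $A^{-1}$ for any Hermitian $A$ whose spectrum lies in $D_\kappa$. The sketch below (illustrative only; the $2 \times 2$ Hermitian test matrix is an arbitrary choice) checks the exact version of this relation classically.

% illustrative only: a scalar function applied to the eigenvalues of a
% Hermitian matrix reproduces the corresponding matrix function
A = [0.6 0.2; 0.2 0.5];                       % Hermitian test matrix, ||A||_2 <= 1
[V, D] = eig(A);                              % eigendecomposition A = V D V'
Ainv_from_f = V * diag(1 ./ diag(D)) * V';    % apply f(x) = 1/x to each eigenvalue
disp(norm(Ainv_from_f - inv(A)));             % agrees with inv(A) up to round-off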

Setup parameters

For numerical demonstration, we set $\kappa = 10$ and scale down the target function by a factor of $1/(2\kappa) = 1/20$ so that

$$f(x) = \frac{1}{2\kappa x}, \quad \max_{x \in D_\kappa} |f(x)| = \frac{1}{2}.$$

This rescaling improves the numerical stability.

kappa = 10;
targ = @(x) (1/(2*kappa))./x;
% approximate f(x) by a polynomial of degree deg
deg = 151; 
parity = mod(deg, 2);
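As a quick sanity check on the rescaling (illustrative, not part of the workflow; it reuses kappa and targ from the block above), the maximum of $|f|$ over $D_\kappa$ is attained at the edge $x = 1/\kappa$ and equals $1/2$:

% sanity check: the rescaled target attains its maximum 1/2 at x = 1/kappa
xgrid = linspace(1/kappa, 1, 1000);
disp(max(abs(targ(xgrid))));   % expected to print 0.5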

Polynomial approximation using convex-optimization-based method
To numerically find the best polynomial approximating $f(x)$ on $D_\kappa$, we use a subroutine that solves the problem by convex optimization. We first set the parameters of the subroutine.

% set the parameters of the solver
opts.intervals=[1/kappa,1];
opts.objnorm = Inf;
opts.epsil = 0.2;
opts.npts = 500;
opts.fscale = 1; % disable further rescaling of f(x)

Then, simply calling the subroutine yields the coefficients of the approximation polynomial in the Chebyshev basis. As a remark, the solver outputs all coefficients, while we only keep those of odd order due to the parity constraint; a quick check of this parity structure is sketched after the code.

coef_full=cvx_poly_coef(targ, deg, opts);
coef = coef_full(1+parity:2:end);
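A minimal check of the parity structure, assuming the first entry of coef_full holds the order-0 Chebyshev coefficient (so entry $k$ corresponds to order $k-1$): for an odd target, the discarded coefficients of even order are expected to be negligible.

% inspect the discarded coefficients of the opposite parity (illustrative)
even_coef = coef_full(2-parity:2:end);   % even-order entries when parity = 1
disp(max(abs(even_coef)));               % expected to be near round-off level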
To visualize this polynomial approximation, we plot it together with the target function. From the figure, we can see that the two agree very well on $D_\kappa$ but behave quite differently outside it, since the target diverges at $x = 0$ while the polynomial stays bounded.
% convert the Chebyshev coefficients to Chebyshev polynomial
func = @(x) ChebyCoef2Func(x, coef, parity, true);
figure()
tiledlayout(1,2)
nexttile
hold on
xlist1 = linspace(0.5/kappa,1,500)';
targ_value1 = targ(xlist1);
plot(xlist1,targ_value1,'b-')
plot(-xlist1,-targ_value1,'b-')
xlist1 = linspace(-1,1,1000)';
func_value1 = func(xlist1);
plot(xlist1,func_value1,'-')
hold off
xlabel('$x$', 'Interpreter', 'latex')
ylabel('$f(x)$', 'Interpreter', 'latex')
legend('target function', '', 'polynomial approximation')

nexttile
xlist2 = linspace(1/kappa,1,500)';
plot(xlist2,func(xlist2)-targ(xlist2))
xlabel('$x$', 'Interpreter', 'latex')
ylabel('$f_\mathrm{poly}(x)-f(x)$', 'Interpreter', 'latex')
print(gcf,'quantum_linear_system_problem_polynomial.png','-dpng','-r500');
Figure: approximating the matrix inversion within the interval set by the condition number. Left: the odd polynomial approximation to the target function, solved using the convex-optimization-based method. Right: the point-wise approximation error.

Solving the phase factors for QLSP

We use Newton's method to solve for the phase factors. The parameters of the solver are initialized as follows.

% set the parameters of the solver
opts.maxiter = 100;
opts.criteria = 1e-12;
% use the real representation to speed up the computation
opts.useReal = true;
opts.targetPre = true;
opts.method = 'Newton';
% solve phase factors
[phi_proc,out] = QSP_solver(coef,parity,opts);
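The iteration table in the output of the code (reproduced at the end of this example) shows the characteristic quadratic convergence of Newton's method: the error is roughly squared at every step. The snippet below is a generic one-dimensional illustration of this behavior, not the package's phase-factor solver.

% generic Newton iteration on a scalar equation, for illustration only:
% near the solution the error is roughly squared at every step
g  = @(t) t.^3 - 2;              % solve g(t) = 0, i.e. t = 2^(1/3)
dg = @(t) 3*t.^2;
t = 1.5;                         % initial guess
for iter = 1:5
    t = t - g(t)/dg(t);          % Newton update
    fprintf('%d  %.3e\n', iter, abs(t - 2^(1/3)));
end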

Verifying the solution
We verify the solved phase factors by computing the residual error in terms of the $\ell^\infty$ norm

$$\mathrm{residual\_error} = \max_{k = 1, \cdots, K} |g(x_k, \Phi^*) - f_\mathrm{poly}(x_k)|.$$

xlist = linspace(0.1,1,1000)';
func = @(x) ChebyCoef2Func(x, coef, parity, true);
targ_value = targ(xlist);
func_value = func(xlist);
QSP_value = QSPGetEntry(xlist, phi_proc, out);
err = norm(QSP_value-func_value,Inf);
disp('The residual error is');
disp(err);
Using $1{,}000$ equally spaced points, the residual error is $6.2172 \times 10^{-15}$, which attains almost machine precision. We also plot the point-wise error.

figure()
plot(xlist,QSP_value-func_value)
xlabel('$x$', 'Interpreter', 'latex')
ylabel('$g(x,\Phi^*)-f_\mathrm{poly}(x)$', 'Interpreter', 'latex')
print(gcf,'quantum_linear_system_problem_error.png','-dpng','-r500');
Figure: the point-wise error of the solved phase factors.

Output of the code

norm error = 7.64691e-07
max of solution = 0.800546
iter          err
   1  +1.2889e-01
   2  +8.8228e-03
   3  +6.4969e-05
   4  +3.7459e-09
   5  +1.4694e-15
Stop criteria satisfied.
The residual error is
   6.2172e-15

References

  1. Gilyén, A., Su, Y., Low, G. H., & Wiebe, N. (2019, June). Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (pp. 193-204).

  2. Dong, Y., Meng, X., Whaley, K. B., & Lin, L. (2021). Efficient phase-factor evaluation in quantum signal processing. Physical Review A, 103(4), 042419.
