N. Funcke and J. Berberich, “Robustness of optimal quantum annealing protocols,”
New Journal of Physics, vol. 26, no. 9, Sep. 2024, doi:
10.1088/1367-2630/ad7b6b.
Abstract
Noise in quantum computing devices poses a key challenge to their realization. In this paper, we study the robustness of optimal quantum annealing (QA) protocols against coherent control errors, which are multiplicative Hamiltonian errors causing detrimental effects on current quantum devices. We show that the norm of the Hamiltonian quantifies the robustness against these errors, motivating the introduction of an additional regularization term in the cost function. We analyze the optimality conditions of the resulting robust quantum optimal control problem based on Pontryagin's maximum principle, showing that robust protocols admit larger smooth annealing sections. This suggests that QA admits improved robustness compared to bang-bang solutions such as the quantum approximate optimization algorithm. Finally, we perform numerical simulations to verify our analytical results and demonstrate the improved robustness of the proposed approach.
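The norm-regularized cost described in the abstract can be illustrated with a minimal sketch. The single-qubit annealing Hamiltonian, the target state, and the weighting factor `lam` below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Pauli matrices for a toy single-qubit annealing problem (illustrative only).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(schedule, dt, eps=0.0):
    """Piecewise-constant evolution under H(s) = (1 - s) X + s Z.
    eps models a multiplicative coherent control error, H -> (1 + eps) H."""
    psi = np.array([1, -1], dtype=complex) / np.sqrt(2)  # ground state of H(0) = X
    for s in schedule:
        H = (1 + eps) * ((1 - s) * X + s * Z)
        # Exact step propagator via eigendecomposition of the 2x2 Hermitian H.
        w, V = np.linalg.eigh(H)
        U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
        psi = U @ psi
    return psi

def cost(schedule, dt, lam):
    """Infidelity with the target state plus a regularization term penalizing
    the integrated squared Hamiltonian norm (the robustness proxy)."""
    target = np.array([0, 1], dtype=complex)  # ground state of H(1) = Z
    infidelity = 1 - abs(np.vdot(target, evolve(schedule, dt))) ** 2
    reg = lam * dt * sum(
        np.linalg.norm((1 - s) * X + s * Z, 2) ** 2 for s in schedule
    )
    return infidelity + reg
```

Minimizing `cost` over schedules then trades final-state fidelity against the Hamiltonian norm, which is the quantity the paper identifies as controlling sensitivity to coherent control errors.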
P. Pauli, N. Funcke, D. Gramlich, M. A. Msalmi, and F. Allgöwer, “Neural network training under semidefinite constraints,” in
2022 IEEE 61st Conference on Decision and Control (CDC), Dec. 2022, pp. 2731–2736. doi:
10.1109/CDC51059.2022.9992331.
Abstract
This paper is concerned with the training of neural networks (NNs) under semidefinite constraints, which allows for NN training with robustness and stability guarantees. In particular, we focus on Lipschitz bounds for NNs. Exploiting the banded structure of the underlying matrix constraint, we set up an efficient and scalable training scheme for NN training problems of this kind based on interior point methods. Our implementation allows us to enforce Lipschitz constraints in the training of large-scale deep NNs such as Wasserstein generative adversarial networks (WGANs) via semidefinite constraints. In numerical examples, we show the superiority of our method and its applicability to WGAN training.