\Cref{sec:stocastic_matrix_model} formulates mathematical tools to algebraically describe the abstract model of probabilistic computations defined in \cref{sec:probabilistic_model}. This section takes the reverse approach. The tools developed in \cref{sec:stocastic_matrix_model} are based on stochastic matrices, an obvious choice to model probabilistic state transitions. Unfortunately, this model has some shortcomings. This section first highlights these inconveniences and then fixes them. In doing so, the model gains computational power, which is demonstrated by an implementation of Deutsch's algorithm. Finally, it is shown that this extended model is indeed physically realizable.
The measurement operators of \cref{sec:measurements_probabilistic} obviously also carry over to this model.
\subsubsection{(Real) Projective Measurements}
One of the most important special cases of measurement operators are projective measurements. As the name already suggests, projective measurements are linear projections onto subspaces of $\mathcal{B}_{\R}^n$.
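As a quick numerical sanity check (a NumPy sketch, not part of the formal development), the defining properties of a real projection can be verified directly: a projector onto a subspace is symmetric and idempotent, and applying it keeps only the component of a state inside that subspace.

```python
import numpy as np

# |0><0| projects onto the subspace spanned by |0>.
ket0 = np.array([1.0, 0.0])
M0 = np.outer(ket0, ket0)

# A (real) projection is symmetric and idempotent: M^T = M, M M = M.
assert np.allclose(M0.T, M0)
assert np.allclose(M0 @ M0, M0)

# Projecting an arbitrary state keeps only its |0> component:
psi = np.array([0.6, 0.8])
print(M0 @ psi)  # [0.6 0. ]
```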
\subsection{Interference -- Computational Power}
\subsubsection{Flipping a Coin Twice Equals Doing Nothing}
So far, moving computations from affine combinations to points on the unit sphere was motivated by merely subjective and rather esoteric reasons: to \emph{clean up} the abstract description of a computational model. In short: it is mathematically nicer to move around on the unit sphere. This section shows that by utilizing the power of probability amplitudes one actually gains computational power compared to the previous model.
In the classical probabilistic model a coin flip destroys any information stored in a bit, even in superposition states. It is easy to verify that $P_{\sfrac{1}{2}}=\frac{1}{2}\big(\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\big)$ indeed implements a 1-bit coin flip and satisfies the conditions of \cref{thm:probabilistic_matrix_representation}. Two consecutive coin flips are independent, which is illustrated by $P_{\sfrac{1}{2}} P_{\sfrac{1}{2}}= P_{\sfrac{1}{2}}$. The coin flip applied to an arbitrary superposition $\mathbf{b}= p_0\mathbf{b}_0+ p_1\mathbf{b}_1$ yields:
\begin{equation}
P_{\sfrac{1}{2}}\mathbf{b} = P_{\sfrac{1}{2}}\big(p_0\mathbf{b}_0 + p_1\mathbf{b}_1\big) = \frac{p_0 + p_1}{2}\big(\mathbf{b}_0 + \mathbf{b}_1\big) = \frac{1}{2}\mathbf{b}_0 + \frac{1}{2}\mathbf{b}_1
\end{equation}
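The idempotence of $P_{\sfrac{1}{2}}$ and the loss of information can be checked numerically (a minimal NumPy sketch; the matrix and distribution values are illustrative):

```python
import numpy as np

# Coin-flip stochastic matrix P_{1/2}: all columns sum to 1.
P = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])

# Two consecutive flips equal one flip: P P = P (idempotent).
assert np.allclose(P @ P, P)

# Applied to any distribution (p0, p1), the result is uniform,
# so the input information is destroyed:
b = np.array([0.3, 0.7])
print(P @ b)  # [0.5 0.5]
```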
Given any input state, $P_{\sfrac{1}{2}}$ reaches a fixed point after one iteration. Since $P_{\sfrac{1}{2}}$ is not orthogonal, a different operator is needed for the orthogonal model: the \emph{Hadamard} operator.
\begin{definition}[Hadamard Operator]
\label{def:hadamard_gate}
\begin{equation}
H = \frac{1}{\sqrt{2}}\begin{pmatrix}
1 & 1 \\
1 & -1 \\
\end{pmatrix}
\end{equation}
\end{definition}
The Hadamard operator is essentially $P_{\sfrac{1}{2}}$ with the global factor adjusted for probability amplitudes and the matrix columns made orthogonal by introducing a negative phase in the second column. Exactly this negative phase will have some surprisingly useful effects. It is easy to see that
\begin{equation}
\big\lVert M_{\ket{b}} H \ket{0} \big\rVert^2 = \big\lVert M_{\ket{b}} H \ket{1} \big\rVert^2 = \frac{1}{2}
\end{equation}
for $\ket{b}\in\parensc{\ket{0}, \ket{1}}$, $M_{\ket{0}}=\ketbra{0}$ and $M_{\ket{1}}=\ketbra{1}$. So $H$ does indeed implement a fair coin flip. But contrary to $P_{\sfrac{1}{2}}$, the Hadamard operator does not destroy the information stored in a superposition.
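That $H$ acts as a fair coin flip on both basis states can be confirmed numerically (a NumPy sketch using the measurement operators $M_{\ket{0}}$, $M_{\ket{1}}$ from above):

```python
import numpy as np

# Hadamard operator: P_{1/2} rescaled for amplitudes,
# with a negative phase making the columns orthogonal.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Measurement operators M_|0> = |0><0| and M_|1> = |1><1|.
M0 = np.outer(ket0, ket0)
M1 = np.outer(ket1, ket1)

# Measuring H|0> yields either outcome with probability 1/2.
state = H @ ket0
p0 = np.linalg.norm(M0 @ state) ** 2
p1 = np.linalg.norm(M1 @ state) ** 2
print(p0, p1)  # both approximately 0.5
```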
Actually, applying $H$ a second time completely reverses the computation. This is a consequence of $HH =\idmat$ and might seem strange at first, but it is in fact a trivial consequence of orthogonal operators representing rotations and rotoreflections, both of which are obviously reversible. Thus, the first $H$ cannot have destroyed any information. It is interesting to look into how something like that happens. Applying $H$ to an arbitrary superposition $a_0\ket{0} + a_1\ket{1}$ yields
\begin{equation}
\label{eq:hadamard_on_superposition}
H\big(a_0\ket{0} + a_1\ket{1}\big) = \frac{1}{\sqrt{2}}\big((a_0 + a_1)\ket{0} + (a_0 - a_1)\ket{1}\big)
\end{equation}
After the second $H$ application the state is:
\begin{equation}
\label{eq:2_hadamards_on_superposition}
\frac{1}{2}\big((a_0 + a_1 + a_0 - a_1)\ket{0} + (a_0 + a_1 - a_0 + a_1)\ket{1}\big) = a_0\ket{0} + a_1\ket{1}
\end{equation}
The key observation can already be made in \cref{eq:hadamard_on_superposition}: probability amplitudes can destructively interfere with each other. In \cref{eq:hadamard_on_superposition} this can be seen in the term $(a_0- a_1)$, and in \cref{eq:2_hadamards_on_superposition} the amplitudes cancel each other out just perfectly to restore the original input state. It cannot be stressed enough: probability amplitudes are not probabilities. Destructive interference is not possible with the stochastic matrices of \cref{sec:stocastic_matrix_model}, whose entries are all non-negative. The next section shows how interference effects can be utilized effectively to outperform any probabilistic computation.
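The contrast between the two models can be demonstrated side by side (a NumPy sketch; the amplitude values $0.6$, $0.8$ are an arbitrary normalized example): the orthogonal $H$ is reversible, while the stochastic coin flip is not.

```python
import numpy as np

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
P = 0.5 * np.ones((2, 2))        # the classical coin flip P_{1/2}

# H is orthogonal: a second application reverses the first, H H = I.
assert np.allclose(H @ H, np.eye(2))

# A superposition survives two Hadamards unchanged ...
psi = np.array([0.6, 0.8])       # amplitudes with 0.6^2 + 0.8^2 = 1
print(H @ (H @ psi))             # recovers [0.6, 0.8] up to rounding

# ... while the stochastic coin flip erases the corresponding
# probability distribution irreversibly: no non-negative entries
# can ever cancel each other out.
p = np.array([0.36, 0.64])       # probabilities |a_0|^2, |a_1|^2
print(P @ (P @ p))               # [0.5 0.5]
```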