|
93 | 93 | None, |
94 | 94 | 'kernels-and-non-linearity'), |
95 | 95 | ('Kernel trick', 2, None, 'kernel-trick'), |
| 96 | + ('Visualization of the Kernel trick', |
| 97 | + 2, |
| 98 | + None, |
| 99 | + 'visualization-of-the-kernel-trick'), |
96 | 100 | ('The problem to solve', 2, None, 'the-problem-to-solve'), |
97 | 101 | ('Convex optimization', 2, None, 'convex-optimization'), |
98 | 102 | ('Different kernels', 2, None, 'different-kernels'), |
|
283 | 287 | <!-- navigation toc: --> <li><a href="#new-constraints" style="font-size: 80%;">New constraints</a></li> |
284 | 288 | <!-- navigation toc: --> <li><a href="#kernels-and-non-linearity" style="font-size: 80%;">Kernels and non-linearity</a></li> |
285 | 289 | <!-- navigation toc: --> <li><a href="#kernel-trick" style="font-size: 80%;">Kernel trick</a></li> |
| 290 | + <!-- navigation toc: --> <li><a href="#visualization-of-the-kernel-trick" style="font-size: 80%;">Visualization of the Kernel trick</a></li> |
286 | 291 | <!-- navigation toc: --> <li><a href="#the-problem-to-solve" style="font-size: 80%;">The problem to solve</a></li> |
287 | 292 | <!-- navigation toc: --> <li><a href="#convex-optimization" style="font-size: 80%;">Convex optimization</a></li> |
288 | 293 | <!-- navigation toc: --> <li><a href="#different-kernels" style="font-size: 80%;">Different kernels</a></li> |
@@ -986,6 +991,15 @@ <h2 id="kernel-trick" class="anchor">Kernel trick </h2> |
986 | 991 | \( \phi(\boldsymbol{x}_i)^T\phi(\boldsymbol{x}_j) \) during the SVM calculations. |
987 | 992 | </p> |
988 | 993 |
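As a concrete illustration of the kernel trick described above (this example is not from the notes themselves): the second-degree polynomial kernel \( k(\boldsymbol{x},\boldsymbol{y}) = (\boldsymbol{x}^T\boldsymbol{y})^2 \) reproduces the inner product of an explicit degree-two feature map without ever constructing \( \phi \). A minimal NumPy sketch, assuming two-dimensional inputs:

```python
import numpy as np

def phi(x):
    # explicit degree-2 feature map for 2D input:
    # (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([x[0]**2, np.sqrt(2.0) * x[0] * x[1], x[1]**2])

def k(x, y):
    # polynomial kernel: gives phi(x)^T phi(y) without building phi
    return np.dot(x, y)**2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
print(np.dot(phi(x), phi(y)))  # 16.0
print(k(x, y))                 # 16.0
```

The two numbers agree, which is the whole point: an SVM only ever needs the kernel value, so the (possibly very high-dimensional) feature map never has to be computed explicitly.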
|
| 994 | +<!-- !split --> |
| 995 | +<h2 id="visualization-of-the-kernel-trick" class="anchor">Visualization of the Kernel trick </h2> |
| 996 | + |
| 997 | +<br/><br/> |
| 998 | +<center> |
| 999 | +<p><img src="figures/kerneltrick.png" width="900" align="bottom"></p> |
| 1000 | +</center> |
| 1001 | +<br/><br/> |
| 1002 | + |
989 | 1003 | <!-- !split --> |
990 | 1004 | <h2 id="the-problem-to-solve" class="anchor">The problem to solve </h2> |
991 | 1005 | <p>Using our definition of the kernel, we can now rewrite the Lagrangian</p>
@@ -1472,7 +1486,7 @@ <h2 id="quantum-kernels" class="anchor">Quantum kernels </h2> |
1472 | 1486 | (and theoretical definitions) use the squared overlap. In any case, |
1473 | 1487 | the kernel measures similarity: if \( \vert \phi(\boldsymbol{x})\rangle \) and |
1474 | 1488 | \( \vert \phi(\boldsymbol{x}')\rangle \) are close in Hilbert space,
1475 | | -\( k(\boldsymbol{x},\boldsymbol{x}') \) is large.
 | 1489 | +\( K(\boldsymbol{x},\boldsymbol{x}') \) is large.
1476 | 1490 | </p> |
1477 | 1491 |
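A minimal numerical sketch of this similarity interpretation, assuming a hypothetical single-qubit angle encoding \( \vert\phi(x)\rangle = R_y(x)\vert 0\rangle \) (the encoding is an illustrative choice, not one prescribed by the notes):

```python
import numpy as np

def feature_state(x):
    # hypothetical single-qubit angle encoding |phi(x)> = R_y(x)|0>
    # represented as a real 2-component state vector
    return np.array([np.cos(x / 2.0), np.sin(x / 2.0)])

def quantum_kernel(x, xp):
    # squared overlap |<phi(x)|phi(x')>|^2: close states give values near 1,
    # orthogonal states give values near 0
    return np.abs(feature_state(x) @ feature_state(xp))**2

print(quantum_kernel(0.3, 0.3))     # 1.0 (identical states)
print(quantum_kernel(0.0, np.pi))   # 0.0 (orthogonal states)
```

Identical inputs give kernel value one, while inputs mapped to orthogonal states give zero, matching the similarity reading of the kernel in the text.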
|
1478 | 1492 | <!-- !split --> |
@@ -1902,7 +1916,8 @@ <h2 id="and-nisq-quantum-kernels" class="anchor">And NISQ Quantum Kernels </h2> |
1902 | 1916 | possible quality or feature-based advantage: the quantum feature map |
1903 | 1917 | might separate data better than any known classical kernel. This |
1904 | 1918 | approach has been demonstrated on small datasets (e.g. Iris) and |
1905 | | -studied theoretically. For example, Havlicek et al. showed on a toy
 | 1919 | +studied theoretically. For example, Havlicek <em>et al.</em> (see <a href="https://www.nature.com/articles/s41586-019-0980-2" target="_self"><tt>https://www.nature.com/articles/s41586-019-0980-2</tt></a>)
 | 1920 | +showed on a toy
1906 | 1921 | problem that a quantum kernel can correctly classify points that a |
1907 | 1922 | simple classical kernel cannot. However, other studies have found |
1908 | 1923 | that for random data classical kernels often suffice, so the advantage |
@@ -2230,7 +2245,6 @@ <h2 id="computing-quantum-kernel-matrices" class="anchor">Computing Quantum Kern |
2230 | 2245 | </div> |
2231 | 2246 | </div> |
2232 | 2247 |
|
2233 | | -<p>This is equivalent.</p> |
2234 | 2248 |
|
2235 | 2249 | <!-- !split --> |
2236 | 2250 | <h2 id="training-svm-with-precomputed-quantum-kernels" class="anchor">Training SVM with Precomputed Quantum Kernels </h2> |
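A sketch of this workflow, assuming scikit-learn is available; here a classical RBF Gram matrix stands in for the quantum kernel, since <tt>SVC</tt> with <tt>kernel='precomputed'</tt> only ever sees the matrix of kernel values, never the feature map itself:

```python
import numpy as np
from sklearn.svm import SVC

# toy data; in practice the entries of K would come from
# quantum-circuit overlap estimates
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = (X[:, 0] > 0).astype(int)

# Gram matrix K_ij = k(x_i, x_j); an RBF kernel stands in for the
# quantum kernel in this illustration
sq_dists = ((X[:, None, :] - X[None, :, :])**2).sum(axis=-1)
K = np.exp(-sq_dists)

clf = SVC(kernel='precomputed').fit(K, y)
# prediction needs the kernel between test and training samples;
# here we reuse rows of K for the first five training points
print(clf.predict(K[:5]))
```

The only change from an ordinary SVM fit is that the rows passed to <tt>fit</tt> and <tt>predict</tt> are kernel evaluations rather than raw features, which is exactly what makes it possible to plug in a kernel matrix estimated on quantum hardware.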
|