
I was lucky to do my Ph.D. I would also like to thank them for teaching me never to give up and to always try to make this world fairer.

Since I was a kid, my mother taught me that everything in life is explained with the laws of math and physics. My dad also taught me that chemistry, on its own, is a wonderful science that can make you understand the wonders of nature. I would like to thank my parents for making me passionate about knowledge. To my father, for teaching me to always fight for my dreams. In memory of my father.

Major breakthroughs in the field of modern science have been achieved due to the growth in computational power in recent years.

Nevertheless, there are still many interesting quantum systems that cannot be fully simulated with the current computing capacity, and thus different methodologies have to be designed. In 1982, R. Feynman proposed the idea to simulate quantum systems analogously [64]. This new research field is known as quantum simulators, and its goal is to use fully controllable quantum systems to improve the human knowledge of quantum physics [30].

As practical as it seems, quantum simulators cannot mimic all quantum systems, and the hunger for novel numerical tools to study quantum physics is still present [30]. Recently, the use of machine learning algorithms has gained momentum in the fields of theoretical chemistry and physics [8, 33, 35, 71]. In this thesis, machine learning algorithms are used to study quantum physics in a wide range of problems: from the description of the electronic energy as a function of the nuclear positions to the discovery of new quantum phases of matter.

In this chapter, a brief introduction and motivation to each of the research projects conducted during my Ph.D. is presented. The last part of the current chapter summarizes each of the chapters dedicated to the research projects that this thesis is comprised of. The state of a system, S, determines the measured quantity, o, that is produced by a physical observable F. G is the gravitational constant. One can observe that Equation 1. All the quantum observables that are studied in this thesis are described by Hermitian operators. Using Equation 1.

Using Equations 1. Furthermore, Equation 1. One of the central ideas in this dissertation, Equation 1. In theoretical chemistry, computing the electronic ground-state energy of an ensemble of electrons and nuclei in different fixed positions can be mapped to Equation 1. This leads to the first question addressed in this thesis: Can ML be used to interpolate quantum observables accurately to construct PESs? This raises the second question: Can ML be used to extrapolate observables to learn phase diagrams? Many types of quantum problems can be described by Equation 1. From Equation 1.

In the field of quantum physics, this question amounts to the inverse scattering problem. Due to the complexity of the inverse scattering problem, one could rephrase the above question as the third question of this dissertation: Can the inverse scattering problem be solved using ML? In order to do so, ML algorithms are trained by minimizing a loss function, which sometimes is not a simple task. Furthermore, the optimization of ML algorithms is computationally demanding. Some classical algorithms become more efficient when they are mapped into the quantum framework [].

For example, the spread of quantum walks in some regimes is significantly faster than that of random walks, making them more efficient for search-type algorithms. Quantum walks are the quantum counterpart of random walks, one of the building blocks used in various classical algorithms as statistical models of real-world processes, and they are also used in the field of quantum computing [46, 89]. There are two types of quantum walks: discrete and continuous quantum walks. The first type resembles the classical view of a random walk by tossing a quantum coin.

As stated above, the quantum walk has moved with equal probability to the adjacent sites. For discrete quantum walks, the evolution operator is the Hadamard operator, Equation 1. In continuous quantum walks, the Hamiltonian itself is the graph where the quantum walk moves [4, 63], and the transition probabilities, described by the Hamiltonian, correspond to the edges between different nodes, as described in Equation 1. One of the key differences between random and quantum walks is the spread of the walk over time. This unique property has made quantum walks extremely valuable in quantum algorithms [45, 47].
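The contrast between the diffusive spread of a random walk and the ballistic spread of a quantum walk can be checked numerically. Below is a minimal sketch (the helper name, grid size, and the symmetric initial coin state are illustrative choices, not taken from the thesis) of a discrete-time Hadamard walk on a line: its standard deviation grows linearly with the number of steps, while a classical random walk grows only as the square root.

```python
import numpy as np

def hadamard_walk(steps):
    """Discrete-time quantum walk on a line with a Hadamard coin.

    The state is stored as two complex amplitude arrays (coin up/down)
    over positions -steps..steps.
    """
    n = 2 * steps + 1
    up = np.zeros(n, dtype=complex)
    down = np.zeros(n, dtype=complex)
    # symmetric initial coin state (|up> + i|down>)/sqrt(2) at the origin
    up[steps] = 1 / np.sqrt(2)
    down[steps] = 1j / np.sqrt(2)
    for _ in range(steps):
        # Hadamard coin toss
        new_up = (up + down) / np.sqrt(2)
        new_down = (up - down) / np.sqrt(2)
        # conditional shift: coin-up moves right, coin-down moves left
        up = np.roll(new_up, 1)
        down = np.roll(new_down, -1)
    return np.abs(up) ** 2 + np.abs(down) ** 2

probs = hadamard_walk(50)
x = np.arange(-50, 51)
sigma_quantum = np.sqrt(np.sum(probs * x**2))  # ballistic: grows like t
sigma_classical = np.sqrt(50)                  # diffusive: grows like sqrt(t)
print(sigma_quantum > 2 * sigma_classical)     # True
```

After 50 steps the quantum walk's standard deviation is several times that of the classical walk, which is the speed-up exploited by search-type algorithms.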

In this thesis we also raise the following question: Can the spread over time be enhanced for quantum walks beyond the ballistic limit? Feynman proposed a revolutionary idea. He suggested that one could mimic quantum phenomena with a fully controlled simulator [64]. This simple idea had huge experimental consequences since, in order to achieve quantum simulators, humans have to be able to fully control quantum systems [30]. For example, D. Jaksch et al. proposed simulating the Bose-Hubbard model with ultracold atoms trapped in optical lattices. The first term of the Bose-Hubbard model represents the hopping of particles between different lattice sites.

The second term describes the interaction between two particles at the same lattice site. The last term is known as the chemical potential and is related to the total number of particles in the system. For ultracold atoms trapped in an optical lattice, the parameters of Equation 1. Ortner et al. proposed using polar molecules trapped in optical lattices as quantum simulators; this was later experimentally realized by B. Yan et al. Polar molecules trapped in different sites of the optical lattice interact by dipole-dipole interaction. The constants of the Bose-Hubbard model simulated with polar molecules can be tuned by an external DC electric field [73].
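The three terms just described can be made concrete for a tiny system. The sketch below (a hypothetical helper, not code from the thesis; the sign convention for the chemical potential term is an assumption) builds the Bose-Hubbard Hamiltonian for a short chain in the fixed-particle-number Fock basis and diagonalizes it.

```python
import numpy as np
from itertools import product

def bose_hubbard(n_sites, n_bosons, J, U, mu):
    """Dense Bose-Hubbard Hamiltonian in the fixed-particle-number Fock basis.

    H = -J sum_<i,j> (b_i^dag b_j + h.c.) + (U/2) sum_i n_i (n_i - 1) - mu sum_i n_i
    with nearest-neighbour hopping on an open 1D chain.
    """
    # all occupation configurations with the right total particle number
    basis = [s for s in product(range(n_bosons + 1), repeat=n_sites)
             if sum(s) == n_bosons]
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, s in enumerate(basis):
        # on-site interaction and chemical potential (diagonal terms)
        H[k, k] = sum(0.5 * U * n * (n - 1) - mu * n for n in s)
        # nearest-neighbour hopping (off-diagonal terms)
        for i in range(n_sites - 1):
            for a, b in ((i, i + 1), (i + 1, i)):
                if s[a] > 0:
                    t = list(s)
                    t[a] -= 1
                    t[b] += 1
                    # bosonic matrix element <t| b_b^dag b_a |s>
                    H[index[tuple(t)], k] += -J * np.sqrt(s[a] * (s[b] + 1))
    return np.array(basis), H

basis, H = bose_hubbard(n_sites=2, n_bosons=2, J=1.0, U=4.0, mu=0.0)
E0 = np.linalg.eigvalsh(H)[0]  # ground-state energy
```

For two bosons on two sites with U = 4J and mu = 0, the basis is {(2,0), (1,1), (0,2)} and the ground-state energy works out analytically to 2 - 2*sqrt(2) in units of J, which the diagonalization reproduces.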

Since the trap depth of the optical lattice has to be large enough to confine polar molecules, the quantum particles that simulate the Bose-Hubbard model are Frenkel excitons, which in the case of polar molecules are rotational excitations [73, 84, 86]. On the other hand, ultracold atoms can be trapped in optical lattices with nearly uniform filling [74]. Griesmaier et al. demonstrated Bose-Einstein condensation of chromium, an atom with a large magnetic dipole moment. This raises the following question: Can magnetic atoms trapped in an optical lattice be used as quantum simulators for Bose-Hubbard-type models?

The first part of the thesis, Chapters 2 to 4, illustrates the use of ML to study quantum physics. In the second part of the thesis, Chapters 5 and 6, we study quantum walks and quantum simulators of ultracold atoms with large magnetic dipole moments. Chapter 2 contains the introduction to one of the most versatile supervised learning algorithms, GP regression. In this chapter, we also explain why GPs are a more accurate interpolation algorithm than neural networks (NNs) for low-dimensional problems. We interpolate the electronic energy for different spatial configurations of the formaldehyde molecule using GP regression.

Chapter 3 introduces the novel idea of combining different ML algorithms with the purpose of solving the inverse scattering problem. It also introduces the Bayesian optimization (BO) algorithm to optimize functions without computing gradients. Chapter 4 explores the hypothesis that by extrapolating quantum observables one can discover new phases of matter.

It illustrates that GPs with a combination of simple kernels can extrapolate quantum observables and predict the existence of new phases of matter. This chapter studies the evolution of the polaron dispersion as a function of the Hamiltonian parameters and shows that the change in the ground-state momentum leads to a new phase of matter. Additionally, using the same algorithm, one can accurately extrapolate quantum observables where traditional numerical methods lack convergence.

Chapter 5 demonstrates the possibility to enhance the spread of quantum walks for various graphs by allowing the number of walkers to be a non-conserved quantity. We also show that for disordered graphs the spread of a quantum walk is larger when number-changing interactions are considered in the Hamiltonian. Chapter 6 is dedicated to an experimental proposal to study extended Bose-Hubbard models using highly magnetic atoms trapped in optical lattices.

The proposal presented uses Zeeman excitations to tune the parameters needed to construct various Bose-Hubbard-type models.

Machine learning is commonly divided into three fields: supervised learning, unsupervised learning, and reinforcement learning. Each of these three fields studies a particular task. In the case of supervised learning, the goal is to find the numerical mapping between an input xi and an output yi. When the output value yi is discrete, the problem is known as classification.

On the other hand, when yi is continuous, the problem is known as interpolation. This chapter describes one of the most common supervised learning algorithms, Gaussian process (GP) regression []. We denote D as the training dataset that contains both X and y. One of the few differences between GP regression and other supervised learning algorithms, like neural networks (NNs), is that GPs infer a distribution over functions given the training data, p(f|X, y). The kernel function plays a key role as it describes the similarity relation between two points. Conditioning on the training data is the same as selecting the distribution of functions that agree with the observed data points y.

The mean of the conditional distribution, Equation 2. In Section 2. The uncertainty can be used to sample different regions of the function space to search for the location of the minimum or maximum of a particular function. This is the idea behind a class of ML algorithms known as Bayesian optimization (BO), which will be discussed in Chapter 3.

An example of GP regression is illustrated in Figure 2. We use the squared exponential kernel and 7 training points. In the following section, we describe the most common procedure to train GPs. The parameters of the model w and L are interconnected.

[Figure 2 caption: The solid blue line is the prediction of the GP model, Equation 2. The grey shaded area is the standard deviation of the predicted mean of the GP model, Equation 2. The blue square symbols are the training data.]

A model whose parameters fit the training data too closely can fail to generalize to new points; this common problem in ML is known as overfitting. GP models can also be trained using a loss function. GP models are non-parametric models; therefore, the dimensionality of the loss function depends on the number of parameters of the kernel function. Using a loss function to determine the optimal values of the kernel parameters for non-parametric models is computationally expensive and prone to overfitting [, ]. However, it is possible to train GP models without a loss function. Finding the kernel parameters that maximize the marginal likelihood can be done by maximizing the logarithm of the marginal likelihood with respect to the kernel parameters.

The trade-off between the data-fit term and the complexity term is key for the optimal value of the kernel parameters. Gradient-based optimization algorithms are the most common method to find the optimal values of the kernel parameters. In the following section, we explain the use of GPs as an interpolation tool to fit multidimensional functions.
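The setup of the Figure 2 example (a squared exponential kernel and 7 training points) is easy to reproduce. The sketch below (the target function and its interval are illustrative assumptions, not the ones used in the thesis) fits a GP with scikit-learn, which trains the kernel parameters exactly as just described, by maximizing the log-marginal likelihood.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# hypothetical 1D target standing in for the quantity being fit
f = lambda x: np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X_train = np.sort(rng.uniform(0, 3, size=7)).reshape(-1, 1)  # 7 training points
y_train = f(X_train).ravel()

# squared exponential (RBF) kernel; its length scale is optimized
# internally by maximizing the log-marginal likelihood
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

X_test = np.linspace(0, 3, 200).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)  # posterior mean and uncertainty
```

The returned `std` is the shaded uncertainty band of the figure: it collapses to zero at the training points and grows between them.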

In Chapter 3, we illustrate that two-tiered GP models are capable of solving the so-called inverse scattering problem. In Chapter 4, we also illustrate the use of GP regression to predict beyond the training data to discover phases of matter. In this section, we introduce various kernels that are used for training GPs and illustrate how prediction with GPs can drastically change depending on which kernel is used.

As mentioned previously, the kernel function should describe the similarity between two points. In kernel regression, two points that are similar under some metric should have a similar output value yi, as predicted by Equation 2.


The Cholesky factorization is the most common algorithm to invert matrices, with O(N^3) complexity, where N is the number of training points []. All of the kernels that are used in this thesis are stationary kernels except for the linear kernel. In the following sections, we explain some of the most common kernels that are used in GP models. The training of an isotropic SE kernel is faster since the total number of parameters in the kernel is one, while anisotropic SE kernels have d parameters, where d is the dimension of x.
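The role of the Cholesky factorization becomes explicit when the GP posterior is written out by hand. The sketch below (function names and hyperparameter values are illustrative) computes the posterior mean and variance with an isotropic SE kernel; the `cholesky` call is the O(N^3) step, after which each prediction costs only O(N) for the mean and O(N^2) for the variance.

```python
import numpy as np

def se_kernel(X1, X2, variance=1.0, length=0.2):
    """Isotropic squared exponential kernel k(x, x') = s^2 exp(-|x - x'|^2 / 2 l^2)."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-8):
    """GP posterior mean/variance via a Cholesky factorization (the O(N^3) step)."""
    K = se_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    L = np.linalg.cholesky(K)                               # O(N^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_s = se_kernel(X_train, X_test)
    mean = K_s.T @ alpha                                    # posterior mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(se_kernel(X_test, X_test)) - np.sum(v**2, axis=0)
    return mean, var

X = np.linspace(0, 1, 10).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()
Xs = np.array([[0.25], [0.75]])
mean, var = gp_posterior(X, y, Xs)
```

With 10 noiseless samples of sin(2*pi*x), the posterior mean reproduces the function values 1 and -1 at x = 0.25 and x = 0.75 to within a few percent.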

The SE kernel is infinitely differentiable. Figure 2. Both kernels, Equations 2. Periodicity can be described by trigonometric functions like cos(x), sin(x), cos^2(x) or sin^2(x). Since any kernel function must be a positive semi-definite function, cos^2(x) and sin^2(x) are the only trigonometric functions among these that can be used as kernel functions.
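Whether a candidate function is admissible as a kernel can be checked numerically: a valid kernel must produce Gram matrices with no negative eigenvalues. The sketch below (sample points and sizes are arbitrary choices) verifies this for the stationary kernel k(x, x') = cos^2(x - x'), which decomposes as 1/2 + cos(2(x - x'))/2, a sum of two positive semi-definite kernels; the same test can be applied to any other candidate.

```python
import numpy as np

# Numerical positive semi-definiteness check for k(x, x') = cos^2(x - x'):
# every Gram matrix built from it should have eigenvalues >= 0
# (up to floating-point error).
rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, size=40)
K = np.cos(x[:, None] - x[None, :]) ** 2   # Gram matrix
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)             # True: no negative eigenvalues
```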

Z_A is the atomic number of nucleus A. The eigenvalues of H_elec depend parametrically on the positions of the nuclei because of the electron-nucleus interaction, the r_iA terms in Equation 2. Finding the solutions of H_elec is still one of the most challenging problems in quantum chemistry. In 1998, the Nobel committee for chemistry laureated W. Kohn and J. Pople for their contributions to the field of quantum chemistry.

Kohn is the father of density functional theory (DFT), which is one of the best-known methodologies for solving Equation 2. Pople was a pioneer in the development of various computational methods in quantum chemistry. Both scientists dedicated their research to finding the solutions of the electronic Hamiltonian. It is beyond the scope of this thesis to explain or propose new methodologies dedicated to computing the eigenvalues and eigenvectors of the electronic Hamiltonian.

Instead, we propose the use of machine learning to reduce the overall computational resources needed to study problems in quantum chemistry. As stated above, the electronic energy depends on the positions of the nuclei. For instance, two hydrogen atoms have different energies at different interatomic distances; quantum chemists call this function a potential energy surface (PES). The exact PES (dashed black line) is from reference [23].

Manzhos et al. and Cui et al. showed that ML algorithms can be used to fit PESs. Moreover, ML has rapidly evolved in recent years with the creation of new algorithms that could reduce the number of training points needed to make more accurate fits for various PESs [13, 15, 78]. The prediction of bond-breaking energies, unimolecular reactions, vibrational spectra, and reaction rates are a few of the observables that depend upon the accuracy of PES fits [, ].

Being able to interpolate the energy of a system for different molecular geometries can also give synthetic chemists information such as reaction mechanisms or transition states []. Generally speaking, PESs play a crucial role in chemistry. We also discuss the impact that different kernels have on GP regression and that the number of neurons has on NNs.

Additionally, we summarize the impact the number of training points has on the test error for both algorithms. We also discuss the importance of the training data size for both algorithms and explain under which circumstances each algorithm is more accurate for interpolating PESs. The results presented in this section are published in reference []. We consider a data set of energy points for the H2CO molecule obtained from reference [36]. Each training data set used to train every NN or GP model is sampled using Latin hypercube sampling (LHS) to ensure the points are efficiently spread [].
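Latin hypercube sampling can be reproduced with SciPy's `qmc` module. In the sketch below (the dimension, bounds, and sample size are illustrative; the actual H2CO coordinate ranges are not given here), each coordinate axis is divided into as many equal strata as there are samples, and every stratum receives exactly one point, which is what spreads the points efficiently.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample in a 6D configuration space (H2CO has six
# internal degrees of freedom); the bounds below are hypothetical.
sampler = qmc.LatinHypercube(d=6, seed=0)
unit_sample = sampler.random(n=500)        # 500 points in [0, 1)^6
# rescale each coordinate to its (assumed) physical range
lower = np.zeros(6)
upper = np.full(6, 2.0)
sample = qmc.scale(unit_sample, lower, upper)
```

Checking the first coordinate with a 500-bin histogram confirms the stratification: every bin contains exactly one point, unlike plain uniform random sampling, which leaves gaps and clusters.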

NNs are a powerful and complex supervised learning algorithm []. We consider one of the simplest NN architectures, a single-layer NN with sigmoid activation functions. Even in the limit of a single layer of neurons, NNs can be used as an interpolation algorithm. We study the effect that the number of neurons has on the accuracy of single-layer NNs. Each NN is trained for a fixed number of epochs with the Levenberg-Marquardt algorithm.

The data used to train the NNs are scaled to [0, 1]. The results obtained using NNs are summarized in Table 2. GPs are a robust supervised learning algorithm; we test their accuracy by considering different kernel functions and various numbers of training points. The optimization of the kernel parameters is carried out by maximizing the logarithm of the marginal likelihood. In Table 2, the numbers in parentheses are the numbers of neurons used in that particular NN, and Npts is the number of training points. Interpolation results using GPs are summarized in Table 2.
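The effect of the number of neurons can be sketched with a generic single-hidden-layer network. The example below is a stand-in, not the thesis code: it uses scikit-learn's `MLPRegressor` with a logistic (sigmoid) activation and the L-BFGS optimizer in place of Levenberg-Marquardt, and a toy 2D function in place of the H2CO energies.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# toy 2D surrogate for a PES, already scaled to [0, 1]
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

# single hidden layer with sigmoid (logistic) activations; L-BFGS here
# replaces the Levenberg-Marquardt optimizer used in the thesis
results = {}
for n_neurons in (5, 20, 50):
    nn = MLPRegressor(hidden_layer_sizes=(n_neurons,), activation="logistic",
                      solver="lbfgs", max_iter=2000, random_state=0)
    nn.fit(X, y)
    results[n_neurons] = np.sqrt(np.mean((nn.predict(X) - y) ** 2))  # RMSE
```

Sweeping `n_neurons` this way mirrors the table's layout: one RMSE per network width, for a fixed training set.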


As discussed, using different kernel functions leads to different accuracies in the predictions of GPs. NNs are a parametric model, meaning that the number of parameters is fixed, unlike in GP regression. We consider a single-layer NN, whose only hyperparameter is the number of neurons. NNs become a more robust interpolation algorithm when the number of neurons, and hence the number of parameters, increases. However, we also notice that a large number of neurons does not necessarily reduce the RMSE.

When the number of parameters of a NN is large, more training points are needed to train it well. A well-known property of NNs is that, for a fixed number of parameters, more training data can lead to a better optimization and thus a lower RMSE. Consequently, GP models are often easier to train than NNs. One of the many applications of fitting PESs is the ability to predict the vibrational spectrum of molecules.

The vibrational spectrum is determined using the space-fixed Cartesian kinetic energy operator and Gaussian basis functions (SFGB) []. GP models are not only more accurate than NNs in the interpolation of PESs but also in predicting vibrational spectra. GP regression also offers the advantage of requiring fewer training points than NNs. Furthermore, we compared two of the most common modern regression tools, NNs and GP models, to interpolate the energy for a spatial arrangement of atoms. We demonstrated that GP models are a more accurate interpolation tool than NNs for low-dimensional systems.

Vibrational spectra computed using both regression methods were compared as a second test to determine which method is more accurate, with GPs still outperforming NNs. One of the unanswered questions in quantum chemistry is: which regression model needs the fewest energy points to make accurate predictions? For some systems, each energy point used to train a regression model may require substantial computational resources, so obtaining a large number of points is a problem. We argue that for low-dimensional systems, GPs are accurate interpolation algorithms that require fewer training points, Tables 2.

The prediction accuracy of single-layer NNs depends on the number of neurons. In Table 2. Given these points, GP models can be trained more accurately, partly because of the low dimensionality of their loss function, which makes the minimization more efficient. The accuracy of fitted PESs can also be evaluated by computing physical observables that depend on the PES, for instance, the vibrational spectrum of a molecule. In this chapter, we discussed how, using energy points and two of the simplest ML algorithms, it is possible to reduce the computational time needed to study many-body physics, for example in the prediction of the vibrational spectrum of a molecule.

It should be noted that the architecture of the NNs used in our research is not the NN that defeated Lee Sedol, and future research should be done on how deep NNs [72], with more than a single layer of neurons, can help chemists to construct the PESs for molecules or proteins where GPs cannot be used.

"If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in."

(R. P. Feynman)

The optimization of machine learning algorithms is one of the most important research areas, since the parameters of each supervised learning algorithm need to be trained []. However, the same tools can be used to optimize physical chemistry problems. Over the course of this chapter, we introduce one of the most novel ML algorithms to optimize black-box functions, known as Bayesian optimization (BO). We compared the accuracy of both methods by interpolating the PES of the H2CO molecule when different numbers of training points were used. Also, we discussed that for low-dimensional problems, GP models are more accurate than NNs.

Furthermore, in this chapter, we exemplify how GP models can also be used to solve optimization problems. The values of the kernel parameters are optimized by maximizing the log-marginal likelihood function, Section 2. The goal of an optimization problem is to find the best solution among all potential solutions. In the field of ML, the optimization problem is associated with the search for the values of the parameters of a model that best describe the problem.

Synthetic chemistry also has optimization problems, for example varying the reaction conditions to increase the percent yield. Gradient descent (GD) is one of the first optimization algorithms used to train NNs []. GD has been a widely successful optimization algorithm. However, not every function can be optimized using GD. For example, there is no analytic function that describes the relation between the percent yield of a chemical reaction and its experimental conditions, and therefore one cannot use GD to increase the percent yield.
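The contrast with gradient-based optimization is concrete: GD requires the derivative of the objective in closed form (or via automatic differentiation), which is exactly what a black-box objective lacks. A minimal sketch, assuming a simple quadratic objective whose gradient is known:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Plain gradient descent: repeatedly step along -grad toward a (local) minimum."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# minimize f(x, y) = (x - 1)^2 + 2 (y + 0.5)^2, whose gradient is available
grad_f = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 0.5)])
x_min = gradient_descent(grad_f, x0=[3.0, 3.0])  # converges to (1, -0.5)
```

For a percent-yield experiment, no `grad` callable exists: each "evaluation" is a laboratory run, which is precisely the setting BO is built for.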

There are many other problems that are described by non-analytic functions or black-box functions, where evaluations are point-wise. BO is designed to tackle the optimization of black-box functions where gradients are not available. BO tries to infer the location of the minimum of a black-box function by proposing a smarter iterative sampling scheme. In the case of GD, we assume that the gradient gives us the information of where to sample the next point in order to get closer to the minimum.

Considering that black-box functions do not have an available gradient, it is necessary to propose a metric that quantifies the informational gain as a function of the space. The core of BO relies on two components: a surrogate model of the black-box function (here, a GP) and an acquisition function. Figure 3. Algorithm 1 is the pseudocode of BO [25]. In Section 3. As mentioned above, the acquisition function is designed to represent which point in the space has the most information. There are many different acquisition functions; here we cover the three most used:

1. Probability of improvement (PI)
2. Expected improvement (EI)
3. Upper confidence bound (UCB)

For example, in Figure 3. We also show that by combining GP regression and BO one can solve the inverse scattering problem. We conclude that GPs are more accurate regression models when the training data are few and the dimensionality of the system is low; for instance, the RMSE of a GP and an NN trained with points is 1. We also compare the vibrational spectra of a PES fit with GP regression and with an NN against the real spectrum, and our results demonstrate that when a PES is fitted using a GP model the vibrational spectrum is also more accurate [].
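A minimal example of fitting a 1-D toy potential with GP regression, in the spirit of the comparison above. It uses scikit-learn as an assumed stand-in for the codes actually used in the thesis; the Morse-like curve and the kernel choice are purely illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy 1-D "potential": a Morse-like curve sampled at only 8 points.
r = np.linspace(0.5, 3.0, 8)[:, None]
energy = (1.0 - np.exp(-1.5 * (r.ravel() - 1.0))) ** 2

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
# Kernel hyperparameters are set by maximizing the log-marginal likelihood.
gp.fit(r, energy)

r_test = np.linspace(0.5, 3.0, 50)[:, None]
pred, std = gp.predict(r_test, return_std=True)
```

With so few training points the GP interpolates the sampled energies essentially exactly, which is the regime where the chapter argues GPs outperform NNs.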

PESs have a more profound meaning in the field of quantum molecular dynamics. PESs are used to reduce the computational complexity of quantum dynamics calculations. PESs are also used to understand reaction mechanisms or to study transition states. In the first approach (cycle A), one would compute and add more ab initio points to the PES at each iteration, thus placing an emphasis on the parts of the configuration space most relevant for the dynamics. In the second approach (cycle B), one could attempt to solve the inverse scattering problem by first computing the global PES and then modifying the analytical fit of the PES through an iterative procedure [61, 92, ].

However, there are two problems that make these iterative approaches unfeasible in the application to quantum reaction dynamics. Firstly, step (ii) above, i.e. the analytical fitting of the PES, is itself time consuming. Secondly, step (iii) takes minutes to hours of computation time, which severely limits the number of loops in any of the optimization cycles above. Here we propose a more efficient optimization cycle, (i) → (iii) (cycle C), where step (ii) can be easily eliminated by fitting PESs using GP models. Cycle C can be implemented for low-dimensional reaction systems by means of a two-tiered GP regression.

The first GP is used to fit PESs, and the second GP is used to optimize the location and magnitude of the energy points to produce the PES which gives the best description of the observable. We compute the reaction probabilities using the time-dependent wave packet dynamics approach described in Ref. The basis sets of the reaction dynamics calculations are chosen to ensure full convergence of the dynamical results. Both of these chemical reactions have been studied before [42, ].

In the following sections we highlight how it is possible to optimize cycle C when the quantum dynamics results are and are not known, and how to overcome the inaccuracy of the quantum chemistry calculations to get a better fit of any PES with experimental data. We denote this GP model of the surface by G(n). When n is small, any regression model is likely to be highly inaccurate, and the quantum dynamics calculation with this surface is also expected to produce highly inaccurate results, Figure 3.

Given G(n), we then ask the following question: if one ab initio point is added to the original sample of few points, where in the configuration space should it be added to result in the maximum improvement of the quantum dynamics results? However, such an approach would be completely unfeasible, as it would require about 10^d dynamical calculations for each added ab initio point, where d is the dimensionality of the configuration space.

Care must be taken, since there are many ways to quantify the improvement, Figure 3. We denote as F the GP model that learns the utility function and is also used to construct the acquisition function that is needed in BO. (Figure caption: the black solid curve shows accurate calculations from [23].) For both systems, reactions (1) and (2), we use the UCB acquisition function, Equation 3. After a fixed number of iterations in the BO algorithm, 35 quantum dynamics calculations for H3 and 60 calculations for OH3, we choose the point where the utility function is minimum and add it to the n-point set.
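The point-selection loop described above can be sketched generically. This is a toy 1-D version: it assumes scikit-learn for the GP surrogate and a lower-confidence-bound rule for picking the next sample, and it replaces the expensive utility/dynamics evaluations of the thesis with a cheap test function.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def bayes_opt_minimize(f, bounds, n_init=4, n_iter=20, kappa=2.0, seed=0):
    """Minimize a black-box f on an interval using a GP surrogate and a
    lower-confidence-bound acquisition function."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    grid = np.linspace(bounds[0], bounds[1], 500)[:, None]
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-6,
                                      normalize_y=True).fit(X, y)
        mu, sd = gp.predict(grid, return_std=True)
        x_next = grid[np.argmin(mu - kappa * sd)]  # most promising point
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    return X[np.argmin(y), 0], y.min()

# Stand-in for an expensive black-box utility function.
x_best, f_best = bayes_opt_minimize(lambda x: (x - 2.0) ** 2, bounds=(0.0, 5.0))
```

Each loop iteration plays the role of one BO step in cycle C: refit the surrogate, query the acquisition function, and evaluate the black box only at the single chosen point.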

We carried out the same algorithm iteratively until we converged to a PES that gives an accurate reaction probability. Accurate results for the reaction probabilities (green dashed line) can be achieved with a GP model trained with only 30 ab initio points. (Figure caption: the black solid curve shows accurate calculations from Ref.; the dashed curves show calculations based on the GP PES obtained with 22 ab initio points (blue), 23 points (orange), 30 points (green) and 37 points (inset). The RMSE of the results with 37 points is 0.)

This surface yields the quantum dynamics results shown by the green curve in the upper panel. As the dimensionality of the configuration space increases, so does the number of points required to represent the PES accurately. The RMSE of the reaction probabilities thus obtained is 0.

Note that, as with any supervised learning technique, this algorithm is guaranteed to become more accurate when trained with more ab initio points. However, for all systems, the quantum dynamics results may not be known beforehand. (Figure caption: the black solid curve shows accurate calculations from Ref.; the dashed curves show calculations based on the GP PES obtained with successively larger sets of ab initio points (blue, orange and green). The RMSE of the final result is 0.) To illustrate the validity of this assumption, we show in Figure 3.

We emphasize that the accurate results (black curve) were not used in any way in this calculation. (Figure caption: the black solid curve shows accurate calculations from Ref.; the dashed curves show the results of iterative calculations maximizing the difference between the reaction probabilities in subsequent iterations. The black curve is not used for these calculations.)

The inset shows the agreement between the reaction probabilities (red symbols), based on the GP approach after 48 iterations (a total of 70 ab initio points), and the exact results.


To overcome this problem, there is a proposal to infer the shape of a PES using experimental quantum dynamical observables. This scheme is known as the inverse scattering problem; however, due to its complexity, this problem has not been fully solved for large systems. Unfortunately, it is impossible to compute the potential energy in step (i) without errors, and any theoretical predictions of observables are subject to uncertainties stemming from the errors of quantum chemistry calculations.

These errors become more significant for heavier atoms and are often unknown. Therefore, it would be ideal to develop an approach that either bypasses quantum chemistry calculations or corrects the errors of the ab initio calculations. This could be achieved by deriving the empirical PES from the experimental data [50, 61, 92, ]. Here, we extend the previous sections to construct a PES that, when used in quantum scattering calculations, reproduces an arbitrary set of observables.

We first modify the exact scattering results of Figure 3. This produces an arbitrary energy dependence of the reaction probabilities, shown by the dot-dashed curve in Figure 3. The goal is to construct a PES that reproduces these arbitrarily chosen reaction probabilities. Note that the dot-dashed curve extends the interval of energies where the reaction probability is zero, which means that the PES for this reaction must have a higher reaction barrier and cannot be reproduced with the original PES for H3.

As before, this ensemble serves as a starting point to fit a PES with a G(n). (Figure caption: the black dot-dashed curve is obtained by a modification of the previous results (black solid curve) involving a translation along the energy axis. The green dashed curve is the result of such training after 30 iterations, which produces a surface constructed with 52 ab initio points.) The new PES yields the reaction probabilities described by the green dashed curve in the upper panel. The RMSE of the results shown by the green dashed curve is 0.

As mentioned, evaluating the energy with accurate ab initio quantum chemistry methods and computing quantum dynamical observables are both computationally demanding. The results we present indicate that ML reduces the total computation of similar problems by using interpolation methods that require fewer training points (Chapter 2) and better search algorithms like BO. Incorporating ML in quantum dynamics calculations is a research problem that should be considered, since it could reduce the computational resources needed to evaluate quantum dynamics observables for large systems, where it is currently impossible.

However, the same strategy can be used with other Bayesian optimization algorithms, like the ones proposed by R. Adams et al. The optimization of cycle C using GP models and BO shows that the total number of points needed to fit a PES with accurate quantum dynamics observables can be reduced. The optimization algorithm for cycle C is greedy, since at each iteration we only consider training G(n) with data that represent the minimum of the utility function. However, we did not explore the impact of a non-greedy policy on both the accuracy and the number of points required for a G(n) with accurate reaction probabilities.

One of the key elements in the scheme we propose is the utility function, Equation 3. However, we could benefit from the intrinsic correlations that multiple utility functions could have. This new strategy can be tackled using multi-task BO [] and could cut down the number of quantum dynamics calculations. Also, multiple quantum dynamics observables could be used to fit a single PES, instead of the single observable used in the strategy we present. We must emphasize that the most important concept of this chapter is the ability to optimize black-box functions with current ML tools without using gradients.

There are many open problems in chemistry and physics that can be formulated in terms of the optimization of black-box functions. It must be remembered, however, that when using BO the utility function should capture the problem in the most robust manner, so that ML can help us solve new problems.

"It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed." — Dirac

In the field of many-body physics, quantum observables like ground state energies, particle correlations and particle densities, to mention a few, are key to understanding the underlying physics of a phenomenon.

For a condensed matter system, x is the value of the parameters of the Hamiltonian; for example, for the Hubbard model x can be the values of the onsite energy and the hopping amplitude, to mention a few (Chapter 6). Using ML to infer quantum observables has proved to be a novel approach to study many-body physics.

For example, J. Carrasquilla et al. used NNs to classify phases of matter; the quantum observable used to train the NNs is a discrete variable that labels the phase of matter. Arsenault et al. used KRR methods to predict quantum observables of a many-body system whose phase diagram has one transition, from the metallic to the Mott insulator phase. For each phase, a different KRR method was used to predict the quantum observable.

The phase transition as a function of the Hamiltonian parameters is also learned by a classification algorithm, a decision forest [8]. Predicting discrete or continuous quantum observables is a daily task for computational physicists; naturally, the use of ML algorithms has reduced the computational effort needed to study many-body physics.

Over the course of this chapter, we present the idea of applying ML algorithms like GP models to extrapolate quantum observables in order to discover new phases of matter. We also illustrate an unbiased manner to construct more robust kernels that can interpolate between different phases of matter and also extrapolate where no training data are used. For such problems, it is necessary to interpolate the quantum properties of a system between the known limits, if there is more than one, as we did in Chapters 2 and 3.

If only one limit is accessible, one must extrapolate from this limit, which we will exemplify below, Section 4. Sharp transitions separate the phases of the Hamiltonian 4. Since the properties of the quantum system vary drastically in the different phases [], an extrapolation of quantum properties across phase transition lines is generally considered unfeasible []. We consider the phase diagrams of polaron Hamiltonians, some of which have three phases, as depicted in Figure 4.

The eigenstates of the model 4. At these transitions, the ground state momentum of the polaron changes abruptly, as shown in Figure 4. In the following section, we first illustrate how a GP model with a combination of simple kernels becomes a more robust supervised learning model. Furthermore, we explain how with GP regression it is possible to extrapolate quantum observables of any quantum system, like the polaron Hamiltonian, which can lead to the discovery of new phases of matter.

In the limit of a large number of training points, any GP model with a stationary kernel produces accurate results. However, there is no restriction against using more than one kernel to construct the covariance matrices, thus we could ask: what is there to gain by combining kernels? In this section, we investigate the premise of using GPs with more than one kernel. For many problems, there might be a suitable kernel form that describes the system more accurately, for example the periodic kernel for recurrent data. However, hand-crafting a kernel function for every single problem is not trivial.
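Because sums and products of valid kernels are themselves valid kernels, composite covariances can be assembled from simple parts instead of being hand-crafted from scratch. A minimal sketch using scikit-learn kernel objects; the particular combination shown is illustrative, not the one used later in the chapter.

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, DotProduct, ExpSineSquared

k_trend = DotProduct()                      # linear, non-stationary
k_local = RBF(length_scale=1.0)             # smooth local correlations
k_season = ExpSineSquared(periodicity=1.0)  # periodic component

# Closure under + and * lets us build structured covariances.
k_combined = k_trend + k_local * k_season

X = np.array([[0.0], [0.5], [1.0]])
K = k_combined(X)  # 3x3 Gram matrix of the composite kernel
```

The resulting Gram matrix is still symmetric and positive semi-definite, so the composite object is a legitimate GP kernel.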

One of the restrictions is that the kernel function used in GP models must be a positive-definite function []. The core of GP models is the kernel function, which must capture the similarity relation between two points. So far we have shown that GPs with single and multiple kernel functions are accurate interpolation models, but can GPs work for extrapolation? Extrapolation is defined as the ability to predict beyond the training data range.
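The positive-definiteness requirement can be checked numerically on a sample of points. The helper below is a hypothetical illustration, not part of the thesis code: it builds the Gram matrix of a candidate kernel and tests that its eigenvalues are non-negative, a necessary condition for a valid covariance.

```python
import numpy as np

def is_psd_on_sample(kernel, X, tol=1e-9):
    """Check that the Gram matrix of `kernel` on the 1-D points X is
    positive semi-definite (necessary for a valid GP kernel)."""
    K = np.array([[kernel(a, b) for b in X] for a in X])
    return bool(np.min(np.linalg.eigvalsh(K)) > -tol)

rbf = lambda a, b: np.exp(-0.5 * (a - b) ** 2)  # valid stationary kernel
bad = lambda a, b: (a - b) ** 2                 # symmetric, but not PSD

X = np.linspace(-3.0, 3.0, 25)
print(is_psd_on_sample(rbf, X), is_psd_on_sample(bad, X))
```

A check like this only refutes candidates (a kernel can pass on one sample and fail on another), but it catches obviously invalid choices such as the squared-distance "kernel" above.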

In principle, if we could design or propose a kernel function that captures the correlation of the data in a robust manner, GPs would be capable of extrapolating observables. For example, a GP model with a periodic kernel is capable of extrapolating if there is some intrinsic periodicity in the data. As we already stated in Chapter 2, there are two types of kernels, stationary and non-stationary.

In the case of stationary kernels, it is indisputable that they are not suited for extrapolation, since the kernel function for two distant points tends to zero. On the contrary, a non-stationary kernel, like the linear kernel (Equation 2), can retain correlations between distant points. In the following sections, we illustrate the possibility of using multiple kernels to extrapolate quantum observables to predict phase transitions.
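The contrast between stationary and non-stationary kernels under extrapolation can be demonstrated directly. This is a sketch assuming scikit-learn, with the kernel hyperparameters held fixed (`optimizer=None`) so that the effect is not masked by hyperparameter optimization.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct

# Noise-free linear data on [0, 1].
X = np.linspace(0.0, 1.0, 10)[:, None]
y = 2.0 * X.ravel()

gp_rbf = GaussianProcessRegressor(kernel=RBF(1.0), optimizer=None).fit(X, y)
gp_lin = GaussianProcessRegressor(kernel=DotProduct(), optimizer=None).fit(X, y)

# Predict far outside the training window.
x_far = np.array([[10.0]])
pred_rbf = gp_rbf.predict(x_far)[0]  # decays toward the prior mean of zero
pred_lin = gp_lin.predict(x_far)[0]  # continues the linear trend
```

The stationary RBF posterior collapses to the prior mean once the test point is many length scales from the data, while the non-stationary linear kernel carries the trend forward, which is exactly the property exploited in the following sections.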