
Saturday, August 30, 2025

Down memory lane: Quantum Computing

I spent about 2.5 years working on variational quantum algorithms for noisy intermediate-scale quantum (NISQ) devices [1]. The question was straightforward: can we do anything useful with these noisy, small-scale, "universal" quantum devices? Here, useful typically meant faster, but it could also mean solving problems far too complex for classical quantum chemistry.

The short answer I came to was: not in any way that clearly demonstrated a speedup over well-tuned classical solvers. The longer answer: you can get them to run, get numbers back, and even match the literature, but there's no clear, reproducible speedup. Most of the effort goes into engineering the system to produce anything coherent, not into pushing computational frontiers. I'm not sure if this is a good thing or a bad thing.

So what is the state now? Are there any clear signs of utility for variational quantum algorithms? Are there any new quantum algorithms for chemistry or physics that require error correction but have proven advantage over classical computers? My guess is the answer is no, not really, but I'm not sure, and for now I don't have the time to read up and investigate.

What I saw with Universal Quantum Computing in the NISQ Era

Universal quantum computing here means we have qubits, the quantum-information analog of digital bits, and we can apply arbitrary single-qubit rotations and controlled two-qubit gates to produce logic operations that put qubits into superposition and entangle them. We can also compose these gates into circuits that approximate any unitary evolution.
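As a minimal illustration (Qiskit here is just my arbitrary choice of library), a Hadamard plus a CNOT is already enough to show superposition and entanglement:

from qiskit import QuantumCircuit

qc = QuantumCircuit(2)
qc.h(0)      # single-qubit gate: puts qubit 0 into superposition
qc.cx(0, 1)  # controlled two-qubit gate: entangles qubits 0 and 1 (Bell state)
print(qc.draw())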

For famous algorithms like Shor's or Grover's, the path to usefulness would be clear if you had fault-tolerant quantum computing with sufficient qubits. But in the NISQ setting, VQAs [2] or QAOA are the only viable options. There might be some new class of NISQ-like algorithms, I'm not sure, but it's probably still safe to say VQAs and QAOA are dominant.

Let's say you moved beyond the NISQ era, which may well be happening, so that noise and decoherence are under control. There's still the ansatz problem: how do you pick a circuit structure that can efficiently represent the solution state in the first place?

The Ansatz Problem

An ansatz is a parameterized quantum-circuit guess for the form of your quantum state. In VQAs, it's the fixed sequence of gates you tune with a classical optimizer. In phase estimation (PEA) or Hamiltonian simulation, it's often the state-preparation step for your quantum algorithm.
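To make this concrete, here is a deliberately tiny, purely classical sketch of the variational loop: a one-parameter ansatz state minimized against a made-up 2x2 Hamiltonian. In a real VQA the expectation value would be estimated from circuit measurements; everything below is illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

# Made-up 2x2 "problem Hamiltonian" for illustration only
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    # One-parameter "ansatz": |psi(theta)> = (cos theta, sin theta)
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi  # expectation value <psi|H|psi>

res = minimize_scalar(energy, bounds=(0.0, np.pi), method="bounded")
print(res.fun, np.linalg.eigvalsh(H)[0])  # variational vs. exact ground energy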

The difficulty is balancing expressivity and feasibility:

  • Too shallow: can't represent the physics; optimizer converges to the wrong state.
  • Too deep: hardware noise kills you in NISQ; in fault-tolerant hardware, depth inflates T-gate and qubit costs.
  • Too generic: risks barren plateaus [3].
  • Too problem-specific: works only on one Hamiltonian.

In PEA, the ansatz problem just shifts to the state-preparation step. You might nail the controlled-unitary and inverse QFT, but if you can't efficiently prepare the eigenstate, you'll likely get garbage.

So what did I work on?

Most of the creativity in the research came from my colleague/co-author; I was mostly involved in the domain application, instrumentation, and analysis. There were two works [4-5], but the one I'll highlight is probably the least interesting: we developed a hybrid quantum–classical eigensolver without variation or parametric gates [4]. The idea is to project the problem Hamiltonian into a smaller subspace, measure it term-by-term with short circuits, and diagonalize it classically.

This allowed us to:

  • Extract ground and excited states for small molecules (BeH₂, LiH).
  • Validate against exact diagonalization.
  • Run on the quantum hardware at the time (i.e., IBM devices).

It avoided long, problem-tailored ansatz circuits, but the choice of basis in the subspace projection is still a hidden ansatz.
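In skeleton form, the classical side of such a subspace approach looks something like the following. To be clear, this is not the paper's algorithm, just the generic pattern; on hardware, the projected matrix elements are estimated term-by-term with short measurement circuits rather than computed by direct linear algebra.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Stand-in Hermitian "problem Hamiltonian" on a 16-dimensional Hilbert space
A = rng.normal(size=(16, 16))
H = (A + A.T) / 2

# A small (here random) set of basis vectors spanning the subspace; picking
# this basis well is exactly the "hidden ansatz" mentioned above
basis = rng.normal(size=(16, 4))

Hs = basis.T @ H @ basis  # projected Hamiltonian <phi_i|H|phi_j>
S = basis.T @ basis       # overlap matrix (the basis need not be orthonormal)

evals, _ = eigh(Hs, S)    # classical generalized eigenproblem
print(evals[0], np.linalg.eigvalsh(H)[0])  # subspace vs. exact ground energy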

A Self-Critique of Our Hybrid Eigensolver Work

We avoided variational loops and deep, problem-specific ansätze: no parameterized circuits, no optimizers stuck on barren plateaus. The issues, though, were:

  1. We dodged the ansatz problem: the reduced-space basis is still an ansatz, so performance depends on making a smart choice. We didn't quantify sensitivity.
  2. Hardware vs. simulation gap unexplored: IBM runs matched noiseless simulations, but was this due to noise resilience, shallow circuits, or luck?
  3. Thin classical comparisons: we used exact diagonalization due to the simplicity of the chemical system and basis. Real claims require benchmarks vs. DMRG [6], coupled-cluster, etc.

Why NISQ VQAs Struggle

There has been a lot of work on this in the last few years, and I'm not fully up to date, but this is what I gather:

  • Noise vs. depth: deeper means more decoherence.
  • Barren plateaus: gradients vanish exponentially with qubit count (see the toy demo after this list).
  • Optimizer instability: hardware drift, shot noise, optimizer quirks.
  • Classical competition: tensor networks, DMRG [6] often scale better [7].
  • Ansatz rigidity: wrong ansatz wastes all gates and shots.
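On the barren plateau point, here is a toy numerical illustration (not a circuit-level demonstration): for Haar-random states, the variance of a single-qubit expectation value concentrates exponentially with qubit count, the same concentration that flattens gradients.

import numpy as np

rng = np.random.default_rng(0)

for n in [2, 4, 6, 8, 10]:
    dim = 2 ** n
    # <Z> on the first qubit is +1 for the first half of basis states, -1 after
    z = np.where(np.arange(dim) < dim // 2, 1.0, -1.0)
    samples = []
    for _ in range(200):
        # Approximate a Haar-random state with a normalized complex Gaussian
        psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        psi /= np.linalg.norm(psi)
        samples.append(np.sum(np.abs(psi) ** 2 * z))
    print(n, np.var(samples))  # shrinks roughly like 2**-n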

Summary of My Position

I'm not a seasoned quantum algorithm researcher, but from my limited research, I see NISQ-era VQAs as mainly useful for benchmarking, with their progress limited by the challenge of designing effective ansätze. In quantum chemistry, there is little convincing evidence so far that VQAs offer practical advantages¹. Digital quantum simulation is highly flexible but comes with significant resource costs. Analog simulation, which directly emulates physical systems, is already useful in certain specialized areas but will always be a niche. Looking ahead to fault-tolerant quantum computing, real breakthroughs may be possible, but efficient state preparation will remain a central obstacle.

Footnotes


  1. Maybe this has changed, but I wager probably not; the paper by Lee et al. [7] is a good indicator. 

References

[1] J. Preskill, Quantum Computing in the NISQ era and beyond, arXiv (2018).
[2] M. Cerezo et al., Variational Quantum Algorithms, arXiv (2021).
[3] M. Larocca et al., Barren Plateaus in Variational Quantum Computing, arXiv (2024).
[4] P. Jouzdani & S. Bringuier, Hybrid Quantum–Classical Eigensolver Without Variation or Parametric Gates, Quantum Reports 3, 8 (2021). DOI
[5] P. Jouzdani, S. Bringuier, M. Kostuk, A method of determining molecular excited-states using quantum computation, MRS Advances 6 (2021) 558–563. DOI
[6] S. R. White, Density matrix formulation for quantum renormalization groups, PRL 69, 2863 (1992).
[7] S. Lee et al., Evaluating the evidence for exponential quantum advantage in ground-state quantum chemistry, Nat. Commun. 14, 1952 (2023). DOI



Saturday, July 12, 2025

Semi-Empirical Methods get a boost 🚀

Density Functional Theory (DFT) is a mainstay for those studying in-silico molecules or materials. It works well, is performant, and advances in XC functionals will probably make it even more useful and powerful. However, for some time there have also been so-called semi-empirical methods, which are not as accurate but are much faster and can provide a good initial idea of the electronic structure of a material. They are called semi-empirical because the inner workings are parameterized rather than derived from first principles. This, in essence, makes them fast, but with the downside that they aren't as transferable to other materials or structures, and their formalism may restrict the description of certain electronic effects.

Tight-binding is one such method. It is based on the LCAO approximation where atomic orbitals remain localized on their parent atoms, and molecular/crystal orbitals are constructed as linear combinations of these atomic orbitals. The method uses a minimal atomic orbital basis set and is straightforward to implement for describing electronic structure of molecules and materials. While accuracy is typically lower than DFT, it can be quite effective for certain material classes.

Why is TB a semi-empirical method? Because it has parameters that are fit to data, specifically the hopping integrals between atomic orbitals. These integrals describe electronic interactions between atoms and are determined by fitting to experimental data or higher-level calculations. The hopping integrals are used to construct the Hamiltonian matrix, which is then diagonalized via the secular equation to obtain the electronic structure.
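A minimal sketch makes this concrete: a 1D chain of identical orbitals with made-up on-site energy and nearest-neighbor hopping values, diagonalized to get the energy levels.

import numpy as np

# 1D tight-binding chain: N s-orbitals, on-site energy eps, hopping t
# (eps and t are made-up numbers; in practice they are fit to data)
N, eps, t = 10, 0.0, -1.0

H = np.zeros((N, N))
np.fill_diagonal(H, eps)
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = t  # hopping integrals between neighbors

levels = np.linalg.eigvalsh(H)  # diagonalize the Hamiltonian matrix
print(levels)  # fills out the eps + 2t*cos(k) band as N grows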

Since atomic orbitals are localized on atomic positions, tight-binding works well for molecules, clusters, and materials where electrons form localized chemical bonds, such as semiconductors. However, it is less effective for metals where the minimal atomic orbital basis inadequately describes metallic bonding¹ or where strong electron-correlation effects are important.

I've personally never used tight-binding methods extensively, but they can be very useful for the right material or chemical system. However, I believe their traditional limitations are beginning to change, allowing for broader applicability.

A General-Purpose Extended Tight-Binding Calculator

The team behind g-xTB [1] is the Stefan Grimme Lab, which is well-known and deeply respected in computational chemistry. Their motivation for g-xTB is to close the cost-accuracy gap between semi-empirical tight-binding and hybrid-DFT methods without sacrificing speed. The older GFN2-xTB approaches tried to address this by adding dispersion and hydrogen-bonding corrections onto a minimal basis. Ultimately, however, GFN2-xTB aligned more closely with GGA-DFT, lacked Fock exchange, and retained the limitations inherent in rigid atomic orbitals. The idea behind g-xTB is to overcome these limitations and cover most of the periodic table, aiming for DFT-level accuracy at tight-binding speed and enabling calculations involving reaction thermochemistry, excited states, and more.

g-xTB Advancements

Some key areas in which g-xTB advances tight-binding theory are:

  1. Increased flexibility due to an atom-in-molecule adaptive q-vSZP basis.
  2. A refined Hamiltonian incorporating range-separated Fock exchange.
  3. An explicit first-order term and extensions up to the fourth order in the charge-fluctuation series.
  4. Charge-dependent Pauli-penetration repulsion.
  5. Streamlined atomic-correction potentials that address basis shortcomings.

Importantly, every element from H to Lr is covered by a single parameter set trained on 32,000 diverse data points, including "mindless" molecules. This significantly enhances usability.

Outcomes

It appears that g-xTB achieves DFT-level accuracy at tight-binding speed. Across GMTKN55, g-xTB reduces GFN2-xTB’s WTMAD-2 from 25.0 to 9.3 kcal mol⁻¹, matching low-cost hybrid composites while running only about 40% slower than GFN2-xTB and approximately 2,400 times faster than B3LYP-D4 on a 2,000-atom complex—at least according to my understanding of the paper's metrics. The authors also highlight that reaction barriers, spin-state gaps, transition-metal thermochemistry, and large biomolecular zwitterions are now tractable without SCF failures. This gives users a robust drop-in replacement for GFNn-xTB and, in many screening and dynamics workflows, a viable substitute for mid-tier DFT calculations.

Usage

The authors have released a Linux binary, which is usable if you set up your input files and parameter paths correctly.

To simplify things, I decided to create an ASE wrapper [2] for the g-xTB calculator. This is a fairly common and straightforward task in ASE. Existing calculator wrappers for other tight-binding codes might work if environment variables are adjusted correctly, but I preferred to write the file parsers from scratch (it goes quickly these days 😉).
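For those unfamiliar, the generic pattern such wrappers follow is roughly the sketch below. This is not the actual g-xtb-ase code; the command, file name, and parsing token are placeholders.

from ase.calculators.calculator import FileIOCalculator

class MiniGxTB(FileIOCalculator):
    implemented_properties = ["energy", "forces"]
    command = "gxtb coord > gxtb.out"  # placeholder command line

    def write_input(self, atoms, properties=None, system_changes=None):
        super().write_input(atoms, properties, system_changes)
        # write atoms to the code's expected coordinate format here

    def read_results(self):
        # Parse outputs into self.results; file name/format are placeholders
        for line in open("gxtb.out"):
            if "total energy" in line:
                self.results["energy"] = float(line.split()[-1])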

Since the wrapper won't be regularly maintained and given that the g-xTB binary will eventually be implemented in a more mainstream TB framework, I decided to keep it as a git repo you can clone and install:

python -m venv .venv
source .venv/bin/activate
git clone --recursive https://github.com/stefanbringuier/g-xtb-ase
pip install g-xtb-ase/

Warning

Direct pip installation via git+... isn't supported because pip doesn't handle git submodules. You must clone with --recursive to get the required g-xTB binary and parameter files.

To import the calculator object:

from gxtb_ase import GxTB
...
atoms.calc = GxTB(charge=0, spin=0)

What can you do with it?

You can use it like any other ASE calculator to get total energies and forces (stress not yet supported):

from ase.build import molecule
from gxtb_ase import GxTB

atoms = molecule("H2O")
atoms.calc = GxTB(charge=0)
energy = atoms.get_potential_energy()
forces = atoms.get_forces()

This is useful for geometry optimizations or molecular dynamics (MD). For example, running 3 water molecules in MD at 25 °C for 10 ps with g-xTB takes about 45 minutes on a single CPU. However, I'm unsure how accurate structural details (e.g., RDF or structure factors) compare to other methods.
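For reference, an MD setup along those lines in ASE might look like the following; the timestep and friction are illustrative choices, not tuned values.

from ase.build import molecule
from ase.md.langevin import Langevin
from ase import units
from gxtb_ase import GxTB

atoms = molecule("H2O")  # a single water here; replicate for more molecules
atoms.calc = GxTB(charge=0)

# Langevin thermostat at 25 C; 0.5 fs timestep and friction are illustrative
dyn = Langevin(atoms, timestep=0.5 * units.fs, temperature_K=298, friction=0.01)
dyn.run(20000)  # 20,000 steps x 0.5 fs = 10 ps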

A nice feature of tight-binding methods is their ability to calculate atomic charges, allowing you to analyze charge transfer, bond orders, bond formation/breakage, and dipole moments in response to external fields.

The wrapper doesn't currently extract electronic structure details, but these are available in the log files.

Web App

I built a backend API using FastAPI and quickly assembled a front end. Compute resources are limited, so large molecules or heavy usage might cause crashes, but water or methane tests should work fine. IR and thermochemistry calculations are also supported.
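Under the hood, the backend amounts to the usual FastAPI pattern; the route and payload below are illustrative, not the deployed API, and a real service would queue the calculation rather than block the request.

from fastapi import FastAPI
from pydantic import BaseModel
from ase.build import molecule
from gxtb_ase import GxTB

app = FastAPI()

class MoleculeRequest(BaseModel):
    name: str  # an ASE g2 molecule name, e.g. "H2O" or "CH4"
    charge: int = 0

@app.post("/energy")  # illustrative route
def energy(req: MoleculeRequest):
    atoms = molecule(req.name)
    atoms.calc = GxTB(charge=req.charge)
    return {"energy_eV": atoms.get_potential_energy()}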

Expect crashes & bugs

The API and front end are hosted on Render and Netlify, respectively, using their free tiers, limiting resources and uptime. The web app isn't fully unit-tested, so expect potential bugs or app crashes.

Here is an example using it to get the optimized (limited steps) geometry for a simple linear molecule, hydrogen cyanide, and then using that to get the IR spectrum and thermochemistry.

Figure 1. Hydrogen cyanide structure, IR, and thermochemistry using g-xTB web app

The agreement is overall pretty good for the IR and thermochemistry, although the heat capacity is way off; that could be a conversion/bug issue in the web app. As of now, I think g-xTB can only be used for molecules, since I don't see how the input format lets you specify PBC and a cell, but maybe I'm missing something.

Footnotes


  1. See the Jellium Model or Uniform Electron Gas, where the electrons behave like a fluid/gas; those models are useful for understanding the effect of placing a positive charge in them to create a metal. 

References

[1] T. Froitzheim, M. Müller, A. Hansen, S. Grimme, g-xTB: A General-Purpose Extended Tight-Binding Electronic Structure Method For the Elements H to Lr (Z=1–103), (2025). https://doi.org/10.26434/chemrxiv-2025-bjxvt.

[2] S. Bringuier, ASE Wrapper for g-xTB, (2025). https://github.com/stefanbringuier/g-xtb-ase


