**Quantum field theory** (QFT) extends quantum mechanics from single localised particles to fields that exist everywhere. These fields represent forces that permeate all of space and time. A familiar example is the electromagnetic field, which can be made visible by scattering iron filings around a magnet. Another is the gravitational field, which we feel pulling us towards the centre of the Earth.

Both these examples are classical (non-quantum) fields. Applying quantum mechanics to such fields gives rise to the subject of QFT. The simplest QFT is **quantum electrodynamics**, the quantum theory of the electromagnetic field. It describes the interactions of electrons and positrons (the antimatter counterpart of the electron, with many of its properties reversed) with the photon, the messenger particle of the electromagnetic field.

While the idea of QFT sounds straightforward, there are many conceptual complications. Although QFT reproduces classical physics easily, naive attempts to compute the quantum corrections give infinite answers. We all know it’s impossible to have a probability higher than 100%, so to get testable predictions, physicists must extract finite probabilities from the theory.

The technical term for the process is **renormalisation**. Originally this was seen as black magic: the answers were correct and agreed with nature, but the process of manipulating infinities seemed totally artificial. In the 1970s Kenneth Wilson was able to explain why the infinities were both natural and harmless. This research helped win him the Nobel Prize in 1982.

Wilson explained that the infinities arise because the calculations incorrectly assumed that QFT applies down to arbitrarily short distances. This assumption is expected to be wrong, and renormalisation is simply the sensible, systematic procedure for discarding the infinities that this unphysical assumption produces.
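To make this concrete, here is a schematic example (added for illustration, not taken from the original discussion): a typical quantum correction involves an integral over all momenta $k$, and assuming the theory is valid down to zero distance means integrating up to infinite momentum, which diverges. Cutting the integral off at a large momentum $\Lambda$ (i.e. a short but finite distance) makes it finite but $\Lambda$-dependent:

```latex
% Toy one-loop integral with a short-distance (high-momentum) cutoff \Lambda:
I(\Lambda) \;=\; \int^{\Lambda} \frac{\mathrm{d}^4 k}{(2\pi)^4}\,
  \frac{1}{\left(k^2 + m^2\right)^2}
\;\sim\; \ln \Lambda
\qquad \text{for large } \Lambda .
% The divergence as \Lambda \to \infty is the "infinity" of the naive
% calculation; renormalisation absorbs the \Lambda-dependence into the
% measured values of masses and charges.
```

Physical predictions, expressed in terms of measured quantities, then come out finite and independent of the cutoff.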

QFT presents a second difficulty. The machinery involved is so complicated that exact answers are almost never available. Rather than solving the equations directly, physicists expand in a small parameter and keep only the first few terms. We call this approach **perturbation theory**.
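Schematically (an illustrative formula, added here for exposition): in quantum electrodynamics any observable $A$ is expanded in powers of the fine-structure constant $\alpha \approx 1/137$,

```latex
% Perturbative expansion in the small coupling \alpha:
A \;=\; A_0 \;+\; \alpha\, A_1 \;+\; \alpha^2 A_2 \;+\; \cdots
% Because \alpha \ll 1, each successive term is a smaller correction,
% and keeping just the first few gives an excellent approximation.
```

Each term in the series corresponds to a set of Feynman diagrams. For the strong force at large distances the analogous coupling is of order one, so successive terms do not shrink and the expansion is useless.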

In some situations it works very well; indeed physicists apply it all the time at particle accelerators like the Large Hadron Collider at CERN. You can even draw pretty pictures of the terms in the expansion, called Feynman diagrams. But it's not all roses. In fact it's pretty useless for dealing with the strong force at large distances.

After fifty years of progress we are well versed in the QFTs of three fundamental forces: the electromagnetic, weak and strong forces. These form the basis of the famous **Standard Model** of particle physics. Its predictions have been repeatedly confirmed, sometimes to astonishing accuracy. However, the Standard Model is not particularly elegant. It requires 19 finely tuned input parameters, and nobody knows why they take the values they do.

And the problems don't end there. Sadly, the techniques of QFT fail for the fourth fundamental force – gravity. Gravity is said to be **non-renormalisable**. QFT can be used to provide a partial sketch of gravitation at low energy, but extending this is impossible – the high energy theory is worthless for making predictions, since it requires an infinite number of constants to be fixed by experiment.
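One way to see the problem is a standard dimensional argument (added here for illustration): in natural units Newton's constant $G_N$ has dimensions of inverse energy squared, so the dimensionless expansion parameter of perturbation theory grows with energy,

```latex
% In natural units (\hbar = c = 1), [G_N] = (\text{energy})^{-2},
% so the dimensionless coupling at energy E is
G_N E^2 \;\sim\; \left(\frac{E}{M_{\mathrm{Planck}}}\right)^2 ,
% which grows without bound as E increases. Each order of the expansion
% then generates new divergences needing new constants to absorb them,
% giving infinitely many undetermined parameters at high energy.
```

Contrast this with quantum electrodynamics, where the coupling $\alpha$ is dimensionless and a fixed, finite set of constants suffices at every order.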

To solve these problems we need something new. A unifying framework would bring gravity into the fold. It might predict the mysterious quantities of the Standard Model. And perhaps it will give us a deeper insight into physics beyond perturbation theory.