In Our Time: P v NP

P v NP is one of the great unsolved problems in computer science. Essentially the question is: can every problem whose answer can be quickly checked for correctness by a computer also be quickly solved by a computer? At the moment the consensus is “no”, but there is a $1 million reward for anyone who settles the question either way, and if someone does find such an algorithm then current computer security measures are all compromised, because encryption that is currently impractical to crack would stop being safe. The experts discussing it on In Our Time were Colva Roney-Dougal (University of St Andrews), Timothy Gowers (University of Cambridge) and Leslie Ann Goldberg (University of Oxford).

This was a bit of an odd episode of In Our Time, as it felt like Melvyn Bragg was significantly more out of his depth than usual. Whilst his role is to ask the Everyman-type questions, in this one he seemed to be asking for clarification on the wrong things – like wanting to know why algorithms are so-called rather than accepting it as technical jargon. Or repeatedly wanting clarification on why NP problems are hard, because he clearly hadn’t grasped the issue – although the experts are partly to blame here: I think they didn’t put enough emphasis on the fact that they wanted the optimal solution to their example problems, rather than the sort of “good enough” answer that we actually use in everyday life. Because of this I didn’t really feel like I learnt anything much from the programme & I don’t consider myself particularly well educated on the subject (just see my clumsy summary at the beginning of this post for evidence of that!).

The programme started with a couple of bits of background information – first the invention of computers (as a theoretical idea) by Alan Turing. He imagined a machine that accepted inputs, generated outputs and at any given point would know what to do based solely on its current state and the symbol it was currently looking at. This is the theoretical underpinning for how all computers work. This segment felt a little detached from the later discussion, but I think the linking idea was that Turing also realised that even with this sort of computing machine (which would do calculations much more rapidly than a person could) there would still be algorithms that would fail to finish in a sensible time (i.e. before the end of the universe).
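
To make that concrete, here’s a minimal sketch of the idea in Python – not Turing’s own formalism, and the rule table (a bit-flipper) is entirely my own toy example. The point is that everything the machine does is determined by its current state plus the symbol under the read head:

```python
# A minimal Turing-machine-style sketch: the next move depends only on the
# current state and the symbol under the head, exactly as described above.
# (A real Turing machine has an unbounded tape; this toy one just stops when
# it runs off the end.)
def run(tape, rules, state="start"):
    tape = list(tape)
    pos = 0
    while state != "halt" and 0 <= pos < len(tape):
        symbol = tape[pos]
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape)

# (state, symbol) -> (symbol to write, head move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run("10110", flip_bits))  # -> 01001
```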

The other piece of background information was a brief discussion of what algorithms are. In essence an algorithm is a series of instructions; Roney-Dougal used the example of a cooking recipe as an algorithm for making some specific food. It was stressed that algorithms generally aren’t entirely linear – they’ll loop back on themselves to run through several steps multiple times.

The difference between P and NP problems boils down to how the time taken to solve the problem scales as you increase the number of items in the problem (n). As I said above, what wasn’t stressed on the programme was that by “solved” they meant “found the best possible answer”. P problems scale in a polynomial fashion – hence the P nomenclature, although Gowers said you could also think of them as Practical. As n increases the time taken increases by some polynomial amount, for instance n²: if there’s one item then the algorithm takes 1 unit of time (whatever that might be); if there are 2 items it takes 4 units; 3 items, 9 units; and so on. So the amount of time taken increases faster than the number of items does, but relatively slowly, and given sufficient computing power the algorithm will finish running in a useful timescale for practical applications. Goldberg gave an example of this sort of problem: say you have a group of people and you want them to work together in pairs, but not everyone likes everyone else. As the number of people in the group increases the number of possible pairings goes up – but there’s a clever algorithm that finds the best pairing in polynomial time, so it finishes in a sensible time.
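
As a toy illustration of polynomial scaling (my own sketch, not anything from the programme): even the naive step of listing every possible pair in a group takes only about n² operations, so doubling the group size merely quadruples the work. (Actually finding the best overall pairing needs a cleverer – but still polynomial – matching algorithm, which I won’t attempt here.)

```python
# Enumerate every compatible pair with a nested loop: roughly n^2 steps,
# i.e. polynomial in the size of the group.
def compatible_pairs(people, likes):
    """likes is a set of frozensets naming the pairs who get on."""
    pairs = []
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            if frozenset((a, b)) in likes:
                pairs.append((a, b))
    return pairs

people = ["Ann", "Bob", "Cat", "Dan"]
likes = {frozenset(p) for p in [("Ann", "Bob"), ("Cat", "Dan"), ("Ann", "Cat")]}
print(compatible_pairs(people, likes))
```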

NP problems are – loosely speaking, glossing over the technicality that every P problem also counts as NP – the ones for which no polynomial-time algorithm is known. The number of possible solutions increases exponentially rather than polynomially, and as n increases a “dumb” algorithm that just checks every one in turn will very quickly get to the point where it would take billions of years to complete. (And if computing power increases then all you do is push that point just a little further out, but not far enough to make a practical difference.) The P v NP question is concerned with a sub-group of the NP problems that are called “NP complete” – the hardest ones, where a potential answer can be easily checked for correctness by a computer, but no-one knows how to find that answer quickly in the first place. And the question is: can we figure out a clever algorithmic “trick” to turn an NP complete problem into a P problem, hence making it efficiently solvable?
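
The easy-to-check half of that definition is worth seeing concretely. A sketch of my own (with made-up distances), using the decision version of the Travelling Salesman problem that comes up below – verifying that a proposed route beats a target length takes one quick pass, even though finding the best route is hard:

```python
# The asymmetry at the heart of NP complete problems: *checking* a proposed
# answer is quick, even when *finding* one is not.
def route_length(route, dist):
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def check_answer(route, dist, k):
    # Polynomial (in fact linear) in the number of cities.
    return route_length(route, dist) <= k

dist = {"A": {"B": 5, "C": 9}, "B": {"A": 5, "C": 3}, "C": {"A": 9, "B": 3}}
print(check_answer(["A", "B", "C"], dist, 10))  # True: 5 + 3 = 8 <= 10
```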

They discussed a few examples of NP complete problems – one of the better known variants is the Travelling Salesman problem. In this you have a number of cities (n) linked by roads, and the salesman wants to travel to each one once and only once, and use the shortest possible route to do so. With small n you can figure out the answer by inspecting all the possible solutions and discovering which one’s best. But as n increases the number of routes goes up factorially – faster even than exponentially – and this becomes a non-viable way of attacking the problem. Obviously for this specific scenario we find “good enough” answers for real world purposes (deliveries from central warehouses to local retail outlets, for instance). But if any of those answers is the optimal solution then that’s by chance rather than because the distributor figured it out.
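
Here’s what the “dumb” brute force approach looks like (my own toy example – the distances are invented, and this variant returns home at the end). It literally inspects every ordering, which is fine for four cities and hopeless for forty:

```python
# Brute-force Travelling Salesman: inspect all (n-1)! candidate routes.
from itertools import permutations

def shortest_tour(cities, dist):
    start, *rest = cities
    best = None
    for order in permutations(rest):          # (n-1)! orderings
        tour = (start, *order, start)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

dist = {
    "A": {"B": 2, "C": 9, "D": 10},
    "B": {"A": 2, "C": 6, "D": 4},
    "C": {"A": 9, "B": 6, "D": 3},
    "D": {"A": 10, "B": 4, "C": 3},
}
print(shortest_tour(list(dist), dist))  # (18, ('A', 'B', 'D', 'C', 'A'))
```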

To give another example, Goldberg returned to her example of a P problem – dividing a large group into pairs of people working together, optimised so that as many people as possible end up with the best possible partner. When it’s pairs, it’s a P problem … but if you’re looking to divide them into groups of three then it becomes an NP problem. Another example is seating wedding guests when many of them hate many of the others, and optimising for fewest arguments. And much of modern cryptography rests on the same easy-to-check, hard-to-find asymmetry – messages, passwords & so on are protected using a method involving multiplying together two very large prime numbers. To reverse engineer that (i.e. crack the encryption) you have to find the prime factors of a very, very large number. (Strictly speaking, factoring isn’t known to be NP complete – but no efficient algorithm for it is known, so a brute force approach won’t complete until long after the lifetime of the person who might find the answer useful. Decoding by the intended recipient, who already holds the key, is the easy-checking side of the asymmetry.)
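
The multiply-easy/factor-hard asymmetry is simple to demonstrate with toy numbers (my own sketch – real cryptographic keys are hundreds of digits long, and trial division here stands in for the brute force attack):

```python
# Multiplying two primes is instant; recovering them from the product is not.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return (n,)  # n itself is prime

p, q = 1_000_003, 1_000_033   # two smallish primes
print(p * q)                  # the easy direction
print(factor(p * q))          # the hard direction: ~a million divisions here
```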

Although these examples of NP complete problems all sound quite different, mathematically speaking they collapse to the same underlying problem – each one can be translated into any of the others. Many of them can even be pictured the same way: a collection of nodes (cities, guests, whatever) joined by links of varying weights (road lengths, levels of hatred, etc) where you’re looking for the best way through the network. So if someone figures out an algorithm that turns one NP complete problem into a P problem (i.e. proves that P = NP) then all NP complete problems are solved. Which has both good and bad implications for how our modern world works. On the bad side, all our encryption is broken so no more secure payment sites, no secure online banking (amongst many other effects). But on the other side many products may become cheaper – it’s pretty likely that our “good enough” answers to the Travelling Salesman problem aren’t optimal, and once you can move things around optimally then logistics gets a lot more efficient. So does circuit design.

However, most mathematicians think that we won’t find a solution to NP complete problems (i.e. that P != NP). As yet no clever ideas have played out, but there is that $1 million reward if someone does find a solution. The experts briefly mentioned that quantum computing had once seemed a promising lead – using quantum entanglement to make the right answer somehow pop out without needing the time to do the calculations – but that didn’t go anywhere either. (Although cynical me did think that if it had been made to work by some government agency or another, then we’d not know it had …)

In Our Time: Complexity

Complexity theory is a relatively new discipline, about 40 years old, that looks to model and understand the behaviour of complex systems. The systems themselves can be as diverse as the weather, crowd movements, epidemic spread or the brain. The experts talking about it on In Our Time were Ian Stewart (University of Warwick), Jeff Johnson (Open University) and Eve Mitleton-Kelly (London School of Economics).

One of the important first steps in understanding what complexity theory is about is to distinguish between a complicated system and a complex system. Mitleton-Kelly talked about this and said that a jet engine is a good example of a complicated system – it has many parts, but it is still something that can be designed and its behaviour can be predicted precisely. A complex system is something like the movement of a crowd through a building – again there are many parts (each person, the building itself) but whilst you can change or design features of the building to influence the system, the system as a whole is not designable. It is also unpredictable in a mechanistic sense: in particular, complex systems are very sensitive to changes in their starting conditions, so a relatively minor difference can change the eventual outcome significantly. However complexity theory can be used to build models of the system that predict the sorts of things that are likely to happen.

Complex systems are generally made up of a network of interacting units – people in a crowd, neurones in a brain, etc. These units may be (are always?) governed by straightforward and knowable rules. An example is that each person in a crowd is trying to get somewhere, and as a space opens up in the right direction they move into it. The complex system itself emerges from the interactions between the individual units (emergent behaviour is one of the key concepts of complexity theory). Crowd movements are apparently a solved problem and there are commercial packages that can be used to model crowds in different situations. Whilst there’s no way of predicting where any given individual is going to be at any given time, nor precisely how many people will be trying to go from any given A to B, you can model what the crowd as a whole will look like under different conditions.
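
Just to make the “simple rules per unit” idea concrete, here’s a deliberately crude sketch of my own (nothing like the commercial packages mentioned): people in a one-dimensional corridor each follow the rule “step towards the exit if the next space is free”, and an orderly crowd-level flow emerges from that individual rule.

```python
# '#' is a person, '.' is empty space; everyone wants to move right.
import random

def step(corridor):
    nxt = corridor[:]
    order = [i for i, cell in enumerate(corridor) if cell == "#"]
    random.shuffle(order)            # individuals act in unpredictable order...
    for i in order:
        if i + 1 < len(nxt) and nxt[i + 1] == ".":
            nxt[i], nxt[i + 1] = ".", "#"
    return nxt

crowd = list("###.#..#..")
for _ in range(6):                   # ...yet the overall flow is regular
    print("".join(crowd))
    crowd = step(crowd)
```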

Feedback and equilibrium are two important concepts in thinking about complex systems. Normally when we think about these concepts we are thinking about a system like a central heating system. When the thermostat detects the temperature has dropped, the heating switches on and so the temperature rises again until it’s in the desired range at which point the heating switches off. So that’s negative feedback acting to keep the system in equilibrium. But in a complex system there may be many equilibria, and feedback between the units is more likely to be positive and to reinforce the change from the equilibrium. An example of positive feedback between units in a network was given by Johnson (I think) – think of a rumour which starts with person A. Person A tells person B it’s definitely true (whatever it may be) even tho they’re not 100% sure, person B passes it on to person C (“guess what I heard?”) who passes it on to D & E and so on. Eventually someone repeats it back to person A, who then thinks to themselves “see, I was right” and doubles down on telling people about this “fact”.
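
For contrast, the familiar single-equilibrium case is easy to simulate – a thermostat as a negative feedback loop (my own made-up numbers, purely illustrative):

```python
# Negative feedback: deviation from the target triggers a correction,
# so the temperature settles around a single equilibrium band.
def thermostat_step(temp, heating_on, target=20.0, band=0.5):
    if temp < target - band:
        heating_on = True
    elif temp > target + band:
        heating_on = False
    temp += 0.4 if heating_on else -0.3   # heat gain vs. heat loss per tick
    return temp, heating_on

temp, on = 17.0, False
for tick in range(12):
    temp, on = thermostat_step(temp, on)
    print(f"tick {tick}: {temp:.1f}C heating={'on' if on else 'off'}")
```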

Because of how feedback works in this sort of complex system, and because there are multiple stable points, it’s unlikely, once the system is perturbed from one equilibrium, that it will return to the same one. It’s particularly unlikely that an attempt (following a more mechanistic model) to generate negative feedback to return the system to equilibrium will work as intended. This has important implications for controlling the economy. Mitleton-Kelly also talked about work she’s involved with in Indonesia, helping the government attempt to stop deforestation – just passing a law saying “don’t do it” as a negative feedback mechanism is unlikely to have the desired effect. Instead her work is on trying to model the complex system that arises from all the factors that affect who cuts down the forest where, and why. The Indonesian government should then be able to try several strategies in the model, see what sort of effect each has, and pick something more effective.

Another example, that Bragg brought up, of a complex system that’s been perturbed from one equilibrium and is gradually (hopefully) settling into another was the Arab Spring. Which also illustrated something else – the idea that a system might be in equilibrium but also be ripe for a change. The way the world has evolved – with the internet making it easier for people to interact with each other – meant that the situation in the Middle Eastern & North African autocratic regimes was actually changing before it became apparent. Then the effects of a key event in Tunisia rippled through the network, and the system abruptly moved away from equilibrium. And it’s still shifting and trying to come to a new equilibrium.

This was fascinating as a look at a new and still rapidly evolving discipline. I did think Bragg came across as a bit out of his depth tho – amazing how rarely that happens to be honest. There was also a slightly odd mix on the panel, with two theoreticians (and mathematicians) and one more applied practitioner of the science. Sometimes Stewart and Johnson seemed to be having a different conversation to the one Mitleton-Kelly was having.

Heart vs Mind: What Makes Us Human?; The First World War; How to Get Ahead; Precision: The Measure of All Things

We finished three different series over the last week so I wasn’t going to write about any of the one-off programmes as well, but Heart vs Mind: What Makes Us Human? irritated me sufficiently that I wanted to say why! The premise of this film was that the presenter, David Malone, had always thought of himself as a wholly rational person but then his life had become derailed – his wife had started to suffer from severe depression and it was as if the person she had been no longer existed. In the wake of that, and his responses to it, he started to think emotions were more important to what makes us human than he’d previously thought. So far, so good – I mean I might quibble about how it’s a known thing that no-one’s really totally rational and we know that the mind affects the body & vice versa; and I might wonder what his wife thinks about being talked about as if she might as well be dead. But those are not why I found this programme irritating.

I found it irritating because the argument he was putting forward had the coherency and strength of wet tissue paper. He took the metaphorical language of “brain == reason; heart == emotion” and then looked for evidence that the physical heart is the actual source of emotions. There was some rather nice science shown in the programme – but whenever a scientist explained what was going on Malone would jump in afterwards and twist what was said into “support” for his idea.

For instance, take heartbeat regulation. It is known that there are two nerves that run from the brain down to the heart and they regulate the speed of the heartbeat. There is a physiologist in Oxford (I didn’t catch his name) who is looking at how that regulation works. It turns out there is a cluster of neurones attached to the heart which do the actual routine “make the heart beat” management. The messages coming from the brain tell the heart neurone cluster “speed up” or “slow down” rather than tell the heart muscle to “beat now; and now; and now”. Interesting, but not that astonishing – I think there are other examples of bits of routine tasks being outsourced to neurones closer to the action than the brain is (like the gut, if I remember correctly). Malone took this as proof that the heart had its own mini-brain so it would be possible for it to generate emotions. And so it’s “like a marriage between heart and brain with the brain asking the heart to beat rather than enslaving it and forcing it to beat.”

There were other examples of his failure to separate metaphor from reality – indeed his failure to realise that there were two things there to separate. Take, for instance, the metaphor of the heart as a pump. Malone hated this metaphor, so industrial and mechanical and soulless. Practically the root of everything wrong with modern society! (I exaggerate a tad, but not much.) However, the heart undeniably does pump blood round the body. So he looked at visualisations of blood flowing through his heart (another awesome bit of science) and talked about how beautiful this was – as the blood is pumped around, the shape of the heart chambers encourages vortices to form in the flow, which swirl in just the right way to shut the valves behind themselves on the way out. Which is, indeed, beautiful and rather neat – and I learnt something new there. However Malone then carried on about how we shouldn’t keep saying the heart is a pump because the complexity of the heart’s pumping mechanics is too beautiful to be reduced to what the word pump makes him imagine. Er, what? Saying you can only imagine pumps as simple metal cylinders with pistons says more about the paucity of your imagination than about the pumpness of the heart.

I think part of my problem with this was that I’m not actually that much in disagreement with him, so it was irritating to watch such a poor argument for something reasonable. I too believe Descartes was wrong – you can’t separate mind from body. The mind is an emergent property of the body. And there is feedback – our mental state, our emotions and beliefs, affect the body and its functions. Our physical health and physical state affect our minds. It’s not surprising to me that it’s possible to die of a broken heart (i.e. mental anguish can affect the physical system, including disrupting the heartbeat, potentially fatally in someone whose heart is already weak). But this is not because the metaphor of the heart as the seat of emotions is a physical reality. It’s because mind and body are one single system.

None of us are rational creatures. Emotions are a central part of what makes us human. And metaphors do not need to be based in a physical truth to be both useful and true.

(I also get grumpy about people who think that explaining something necessarily robs it of beauty but that’s a whole other argument. As is the one where I complain about the common equation of industry with ugliness.)


Moving on to what I intended to talk about this week: we’ve just finished watching the BBC’s recent 10 part series about The First World War. This was based on a book by Hew Strachan, and used a combination of modern footage of the key places, contemporary film footage, photographs and letters to tell the story of the whole war from beginning to end. Although obviously the letters were chosen to reflect the points the author wanted to make, using so many quotes from people who were there helped to make the series feel grounded in reality. It was very sobering to watch – the sort of programme where we frequently paused to talk about what we’d just seen or heard. It wasn’t a linear narrative – the first couple of episodes were the start of the war, and the last couple were the end, but in between the various strands were organised geographically or thematically. An episode on the Middle East for instance, or on the naval war, or on the brewing civil unrest in a variety of the participating countries.

I shan’t remotely attempt a recap of a 10 episode series, instead I’ll try and put down a few of the things that struck me while watching it. The first of those was that there is so much I didn’t know about the First World War. This wasn’t a surprise, to be honest – I’ve not really read or watched much about it and didn’t spend much time on it at school (having given up history pre-GCSE). But I’d picked up a sort of narrative by osmosis – the Great War is when Our Men went Over There and Died in a Brutal Waste of Life. And that’s true as far as it goes, but it doesn’t go anywhere near far enough. Even for the Western Front – the British narrative is all about it being “over there” but (obviously!) for the French and the Belgians this was happening in their country and in their homes. One of the sources used for this part of the war was a diary of a French boy – 10 years old at the start, 14 by the end – which really brought that home. And (again obviously) the Western Front and the French & British and German troops weren’t the only participants nor the only areas of conflict. I thought separating it out geographically & thematically was well done to help make that point.

It was odd to note how much the world has changed in the last century. Because there was film footage of these people – dressed a bit too formally, but looking like ordinary people – the casual anti-Semitism and racism in their letters and official communications was more startling than it would’ve been from more distant seeming people. Things like referring to Chinese or African troops as “monkeys” in relatively official documents. I’m not saying that racism or anti-Semitism have vanished in the modern world, but there’s been a definite change in what’s acceptable from politicians and so on.

Throughout the whole series the shadow of the Second World War loomed. Obviously no-one knew at the time how things would turn out (tho it seems one of the French generals did make some rather prescient remarks about only getting 20 years of peace at the end of the First World War). But it’s rather hard to look at it now without the knowledge that hindsight gives us. Which ties in with my remark about anti-Semitism above, because one of the things that changed cultural ideas of “what you can say about Jews” is the Holocaust. And other hindsight spectres included the situation in the modern Middle East as set up in large part by the First World War, and of course the Balkans too.

Interesting, thought provoking, and I’m glad I watched it.


Very brief notes about the other two series we finished:

How to Get Ahead was Stephen Smith examining three different historical courts, looking at both the foibles of the monarch and the ways a courtier at that court would need to behave & dress in order to succeed. He picked out a selection of very despotic rulers – Richard II of England, Cosimo de’ Medici of Florence and Louis XIV “the Sun King” of France. I wasn’t entirely convinced by Smith as a presenter – a few more jokes in his script than he quite managed to pull off, I think. But good snapshots of the lives of the elite in these three eras/areas.

Precision: The Measure of All Things was Marcus du Sautoy looking at the various ways we measure the world around us. For each sort of measurement (like length, or time) he looked at how it had evolved throughout history, and at how greater precision drives technology on, which in turn can generate a need for even greater precision. I think I found this more interesting than J did, because I think it’s kinda neat to know why seemingly arbitrary units were decided on when they could’ve picked anything. I mean the actual definition settled on for the metre is arbitrary (the distance light travels in a vacuum in 1/299,792,458 of a second) but there’s a rationale for why we decided on that particular arbitrary thing (the original definition was that it was 1/10,000,000th of the distance from the north pole to the equator).
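
(The arithmetic of the current definition is circular on purpose – the speed of light is fixed at exactly 299,792,458 m/s by decree, so the metre falls straight out of it. A two-line check of my own:)

```python
# With c fixed exactly, the distance light covers in 1/299,792,458 s
# is exactly one metre by construction.
c = 299_792_458               # m/s, exact by definition
print(c * (1 / 299_792_458))  # 1.0
```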


Other TV watched this week:

Episode 2 of Churches: How to Read Them – series looking at symbolism and so on in British churches.

Krakatoa Revealed – somewhat chilling documentary about the 19th Century eruption of Krakatoa and what we’re learning about the certainty of future eruptions of Krakatoa.

24 Hours on Earth – nature documentary looking at the effects of the diurnal cycle on animals and plants. Lots of neat footage and a voiceover with somewhat clunky and distracting metaphors (“Soon the sun’s rays will flip the switch and it will be light” !?)

Episode 1 of David Attenborough’s First Life – series about the origins of life and the evolution of animals.

In Our Time: Pascal

Blaise Pascal was a 17th Century Frenchman who was a scientist, mathematician & philosopher. Several of his ideas are still recognised today – either still in use (for instance some of his mathematical work) or recognised by the naming of modern things (like the programming language Pascal). Discussing him on In Our Time were David Wootton (University of York), Michael Moriarty (University of Cambridge) & Michela Massimi (University of Edinburgh).

Pascal was born in 1623, and died in 1662 aged 39. David Wootton gave us some context for the France of the time, which he called essentially the time of the Three Musketeers – Richelieu is in charge in France, the country is allying with Protestants in the Thirty Years War, but in terms of internal politics there is a big crackdown on Protestantism. In the wider world Galileo is active at this time – which took me by surprise as I think of Pascal as nearly-modern but Galileo as end-of-medieval, and clearly that’s not a sensible distinction! Descartes is also still alive when Pascal is born.

Pascal was educated at home; his father had planned that the boy should be told about various subjects young, but then not study them until later when he was ready for them. But the young Pascal had other ideas – for instance figuring out Euclidean geometry for himself once he’d been told about it, rather than waiting till he was taught the subject. One of the people on the programme (I forget which one) said that Pascal was Mozart-type levels of genius – just in maths, science & philosophy rather than music. One of Pascal’s first notable works was inventing a mechanical calculator while he was still in his teens – he did this to save his father time (his father was a tax official and thus had lots of adding up to do).

Pascal’s work in physics was on one of the big questions of the day – could there be such a thing as a vacuum or not? Aristotelian ideas said no, but an actual experiment suggested yes. Pascal repeated the experiment – taking a tube filled with mercury & closed at one end, then inverting it in a basin of mercury. The level of the mercury in the tube drops and a space opens up at the top of the tube; there’s nothing this space can be full of, so it must be a vacuum. Pascal also took these experiments further – looking at different liquids (like water), and testing the effects on the height of the mercury at different altitudes above sea level. He was one of the first to demonstrate that air has pressure, and that this pressure varies with altitude.
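
The numbers behind the experiment are simple enough to check – the pressure exerted by a column of liquid is ρgh, and a sea-level mercury column of about 76 cm balances roughly one atmosphere. (My own back-of-envelope figures, not anything from the programme.)

```python
# Pressure balanced by a ~76 cm mercury column at sea level.
rho_mercury = 13_595      # kg/m^3, density of mercury
g = 9.81                  # m/s^2
height = 0.76             # m, a typical sea-level reading
pressure = rho_mercury * g * height
print(f"{pressure:.0f} Pa")  # ~101,359 Pa, i.e. about one atmosphere
```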

Pascal also had an influence on the future of science & the scientific method. He hadn’t been brought up reading Aristotle as the “answer” to all the questions about the natural world, and he didn’t believe that you required a metaphysical starting point to answer a physical question. So he said that in science there was no appeal to authority, nor was there Truth, just that you looked at the facts as they were and explained them as best you could. Then when more facts were known you might have to change your mind – you’d not have Truth, just have got as close as you could under the circumstances. One of the experts said that Pascal was one of the first people to actually demonstrate this way of having scientific progress – other writers before him had talked about how you could progress in science but he actually did it.

Pascal was also interested in mathematics & corresponded with Fermat. One of his theorems, to do with the geometry of conic sections, is still used by mathematicians today. Pascal’s triangle was mentioned briefly on the programme as another example of his mathematical legacy. He was particularly interested in probability, and would work on gambling problems for French aristocrats he knew. He & Fermat worked on a particular problem: what the payout should be for a game of Points that is interrupted before the end. In Points a coin is flipped multiple times; each time it’s heads player A gets a point, each time it’s tails player B does. First player to 10 points wins the pot. How the pot should be split if the game is terminated early depends on the probability of each player winning from the state it’s in (rather than just splitting it according to how many points each has so far). Pascal & Fermat’s work has had far-reaching implications in a lot of the business world, not just in gambling or this specific problem – insurance, for instance.
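
The interrupted-game calculation is easy to sketch. Assuming a fair coin (this is my own minimal version of the Pascal–Fermat reasoning, not their actual correspondence): a player’s fair share of the pot is their probability of winning from the current score, computed recursively.

```python
# Fair split for an interrupted game of Points with a fair coin.
from functools import lru_cache

@lru_cache(maxsize=None)
def p_a_wins(a_needs, b_needs):
    if a_needs == 0:
        return 1.0
    if b_needs == 0:
        return 0.0
    # The next flip scores for A or for B, each with probability 1/2.
    return 0.5 * p_a_wins(a_needs - 1, b_needs) + 0.5 * p_a_wins(a_needs, b_needs - 1)

pot = 100
share_a = pot * p_a_wins(a_needs=2, b_needs=3)  # e.g. A leads 8-7 in a race to 10
print(share_a)  # 68.75 -- A gets 11/16 of the pot, not the naive 8/15
```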

Later in life (if you can call it that for someone who died so young), after some sort of intense religious experience, Pascal turned away from science & towards religion & religious philosophy. Here he believed strongly in appeal to authority – he built on the work of earlier philosophers who said that human reason is too weak to comprehend the Truth of the world in a metaphysical sense. And so, in contrast to his scientific ideas, Pascal felt that religious Truth is revealed and is unchanging. Pascal had become a member of the Jansenists, a Catholic sect that built on the ideas of Augustine in the same sort of way that Protestants did – in particular believing that people cannot come to a state of grace through their own efforts; they must be chosen by God to receive God’s Grace, and so only the chosen are saved. Mainstream Catholicism of the day believed that by doing good and repenting sin you could come closer to being saved, and so the Jesuits regarded the Jansenists as heretics just as much as Protestants were. One of Pascal’s later works was written to argue that the Jesuits & mainstream Catholicism were wrong – arguing partly by appealing to the authority of Augustine, and saying that the Jesuits were diluting the true Christian morality to make it more palatable to the masses. This work is credited by some later Catholics as having damaged the reputation of the Jesuits enough to have been a contributing factor in their suppression in the late 18th Century.

Pascal’s Wager is one of his philosophical ideas that is still remembered today. Massimi pointed out that it was never intended to convert an atheist, but was aimed at sceptical Christians. In it Pascal says that given there are two states – either there is a God or there isn’t – then there are two ways to wager: either bet for God or bet against God. Given this, how should you bet to maximise the chance of a good outcome? If you bet against God and you are wrong, then you will suffer eternal damnation after death, so the best thing to do is to avoid that – bet for God, and even if you’re wrong you’ll suffer no consequences. This doesn’t work if you believe there is no God; you need to have some doubt about that. It also doesn’t say anything about whether or not Christianity is the Truth – Massimi pointed out that one objection to Pascal’s Wager is that the same argument can be made for any religion. And if you enjoy this world’s pleasures then there is also a downside to betting for God, making it a less obvious choice (definitely no pleasure now as vs. possibly no pleasure later, a more complex situation to weigh up) – which was not a problem that Pascal had. He said once that life was like being chained in a dungeon in the dark, and every so often the guards come in and strangle someone. Cheerful fellow …!
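
The wager itself is just a little expected-value table. A sketch with stand-in payoffs of my own (Pascal’s actual point is that the reward is infinite, which is what makes any non-zero doubt enough):

```python
# Pascal's Wager as a toy expected-value calculation.
from math import inf

payoffs = {
    # (bet, God exists?) -> outcome for the bettor
    ("for", True): inf,       # eternal reward
    ("for", False): 0,        # no consequences either way
    ("against", True): -inf,  # eternal damnation
    ("against", False): 0,
}

def expected(bet, p_god):
    return p_god * payoffs[(bet, True)] + (1 - p_god) * payoffs[(bet, False)]

for p in (0.5, 0.01):  # any non-zero doubt gives the same answer
    print(p, expected("for", p), expected("against", p))
```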

In the summing up section of the programme they discussed how Pascal’s legacy lives on in science & mathematics but is most influential in religious thought. The three experts credited him with laying the foundations of modern Christianity – in that faith & religion now are seen as something that you choose to believe in without needing a rational argument. And that is a very Pascalian way to see it.

In Our Time: Fermat’s Last Theorem

Fermat was a 17th Century lawyer who did maths in his spare time, corresponding with many other mathematicians around Europe. He had a habit of setting little challenges to his correspondents – “I can prove this, can you?”. He’s famous now for an annotation he made in a book – that he had found a proof that aⁿ + bⁿ = cⁿ has no positive integer solutions when n > 2, “which this margin is too narrow to contain”. The guests on the episode of In Our Time that discussed it were Marcus du Sautoy (University of Oxford), Vicky Neale (University of Cambridge) and Samir Siksek (University of Warwick).

They started off by setting the theorem in context. It’s a generalised form of the Pythagorean Theorem – the one we all (probably!) learnt at school: for a right-angled triangle, the sum of the squares of the two shorter sides is equal to the square of the longest side (the hypotenuse). And du Sautoy pointed out that this has a very practical application – if you have a rope with equally spaced knots in it and you arrange it into a triangle with sides 3, 4 and 5 then you are guaranteed a right angle between the sides of length 3 & 4. Useful for building pyramids, and anything else where you want the corners to be right angles. So for n = 2 we know there are some positive integer solutions.
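
The rope trick is easy to verify, and a naive search (a throwaway snippet of my own) shows the n = 2 case has plenty of positive integer solutions:

```python
# 3-4-5 is the knotted-rope right angle...
print(3**2 + 4**2 == 5**2)  # True

# ...and a brute force scan finds many more Pythagorean triples.
triples = [(a, b, c)
           for c in range(1, 30)
           for b in range(1, c)
           for a in range(1, b + 1)
           if a**2 + b**2 == c**2]
print(triples[:5])  # [(3, 4, 5), (6, 8, 10), (5, 12, 13), ...]
```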

It’s also a sort of equation called a Diophantine Equation – a polynomial equation where the variables are only allowed positive integer values. Other examples are things like x² = y³ − 2 (which has at least one solution: 5² = 3³ − 2). Some Diophantine Equations have no solutions, some have a finite number of solutions, some have infinitely many. And the question is which sort of equation Fermat’s is.
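
A brute force search is the obvious (if inelegant) way to hunt for solutions over a small range – my own snippet below finds the 5² = 3³ − 2 solution mentioned above (which, as it happens, is known to be the only one in positive integers):

```python
# Hunt for positive integer solutions to x^2 = y^3 - 2.
solutions = []
for y in range(2, 1000):
    x = round((y**3 - 2) ** 0.5)   # candidate x; verify exactly below
    if x * x == y**3 - 2:
        solutions.append((x, y))
print(solutions)  # [(5, 3)]
```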

Fermat never wrote his proof down anywhere, and the experts suggested that perhaps he never actually had a generalised one – that his proof by infinite descent of the case where n = 4 was all he’d done (and he was assuming the same would hold for all the other possible values). The equation itself isn’t a particularly interesting one and hasn’t any direct practical applications, but it became famous because no-one could find the proof that Fermat said he had. Various famous (and otherwise) mathematicians tried to find a proof – the one they discussed that I particularly remember is Sophie Germain, a French mathematician of the late 18th/early 19th Century. At that time she couldn’t study mathematics formally because she was a woman, so she was self-taught & corresponded with other mathematicians using a male pen-name. She found a way to inspect particular values of n to show that there were no solutions – and used this to prove the theorem (strictly, its first case) for exponents up to 100. Neale clearly found Germain particularly interesting as she nearly got side-tracked into a bio of her before being pulled back to the subject at hand 🙂

During the 19th & 20th Centuries there were several monetary prizes on offer for a proof, but no-one found one until Andrew Wiles – his proof was announced in 1993 and published, after a gap had been fixed, in 1995, and it won him the Wolfskehl Prize in 1997. They did discuss a bit about what his proof involved, but I didn’t follow it well enough to remember it properly, let alone explain it – it had something to do with mapping these sorts of algebraic equations to elliptic curves: a solution to Fermat’s equation would produce a curve with impossible properties, so if you can show that no such curve exists then there can be no solutions to the equation.

They summed up the programme (very briefly) by saying that even though the equation isn’t itself terribly important the effect of the competition to solve it was to drive forward several other areas of mathematics that do have practical applications.