Posted by Margaret on Wednesday, June 8, 2016 - 12:00

P v. NP is one of the great unsolved problems in computer science. Essentially the question is: can all problems whose answer can be quickly checked for correctness by a computer also be quickly solved by a computer? At the moment the consensus is "no", but there is a $1 million reward for anyone who finds an algorithm that works - and if someone does, current computer security measures are all compromised, because encrypted passwords would become trivially crackable. The experts discussing it on In Our Time were Colva Roney-Dougal (University of St Andrews), Timothy Gowers (University of Cambridge) and Leslie Ann Goldberg (University of Oxford).

This was a bit of an odd episode of In Our Time, as it felt like Melvyn Bragg was significantly more out of his depth than usual. Whilst his role is to ask the Everyman-type questions, in this one he seemed to be asking for clarification on the wrong things - like wanting to know why algorithms are so called rather than accepting it as technical jargon. Or repeatedly wanting clarification on why NP problems are hard, because he clearly hadn't grasped the issue - although the experts are partly to blame here: I think they didn't put enough emphasis on wanting the *optimal* solution to their example problems, rather than the sort of "good enough" answer that we actually use in everyday life. Because of this I didn't really feel like I learnt much from the programme & I don't consider myself particularly well educated on the subject (just see my clumsy summary at the beginning of this post for evidence of that!).

The programme started with a couple of bits of background information - first the invention of computers (as a theoretical idea) by Alan Turing. He imagined a machine that accepted inputs, generated outputs and at any given point would know what to do based solely on its current state and the symbol it was currently looking at. This is the theoretical underpinning for how all computers work. This segment felt a little detached from the later discussion, but I think the linking idea was that Turing also realised that even with this sort of computing machine (which would do calculations much more rapidly than a person could) there would still be algorithms that would fail to finish in a sensible time (like before the end of the universe).
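Turing's idea can be sketched in a few lines of code. This is my own toy illustration rather than anything from the programme: the machine's next move depends only on its current state and the symbol under its "head", exactly as described above. The rules and tape here are entirely made up for the example.

```python
# A toy Turing-style machine (illustrative only): behaviour depends solely
# on the current state and the symbol currently being looked at.
def run_machine(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]
        new_symbol, new_state, move = rules[(state, symbol)]
        tape[head] = new_symbol          # write a symbol
        state = new_state                # change state
        head += 1 if move == "R" else -1 # move the head
    return "".join(tape)

# Hypothetical rule table: flip every bit, moving right until off the tape.
rules = {
    ("start", "0"): ("1", "start", "R"),
    ("start", "1"): ("0", "start", "R"),
}

print(run_machine("0110", rules))  # prints "1001"
```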

The other piece of background information was a brief discussion of what algorithms are. In essence an algorithm is a series of instructions; Roney-Dougal used the example of a cooking recipe as an algorithm for making some specific food. It was stressed that algorithms generally aren't entirely linear - they'll loop back on themselves to run through several steps multiple times.
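A classic example of that looping-back behaviour (my choice of example, not one from the programme) is Euclid's algorithm for the greatest common divisor, which repeats the same two steps until it is done:

```python
# Euclid's algorithm: a short series of instructions that loops back on
# itself, repeating the same steps until nothing is left to divide.
def gcd(a, b):
    while b != 0:        # loop back until the remainder is zero
        a, b = b, a % b  # replace the pair with (b, remainder)
    return a

print(gcd(48, 36))  # prints 12
```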

The difference between P and NP problems boils down to how the time taken to solve the problem scales as you increase the number of items in the problem (*n*). As I said above, what wasn't stressed on the programme was that by "solved" they meant "found the best possible answer". P problems scale in a polynomial fashion - hence the P nomenclature, although Gowers said you could also think of them as Practical. As *n* increases the time taken increases by some polynomial amount, for instance *n*²: i.e. if there's one item then the algorithm takes 1 unit of time (whatever that might be); if there are 2 items, then it takes 4 units; 3 items = 9 units, and so on. So the amount of time taken increases faster than the number of items does, but relatively slowly, and given sufficient computing power the algorithm will finish running in a useful timescale for practical applications. Goldberg gave an example of this sort of problem: say you have a group of people and you want them to work together in pairs, but not everyone likes everyone else. As the number of people in the group increases, the number of combinations you have to look through to check whether a solution is optimal goes up. But it goes up in a polynomial fashion, so an algorithm that does that checking of possible combinations will finish in a sensible time.
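One way to see polynomial growth concretely (again, my own sketch): the number of *pairs* you can draw from a group of *n* people is n·(n-1)/2, which grows roughly like *n*² - noticeably faster than *n*, but still tame:

```python
from itertools import combinations

# Polynomial growth in action: the number of possible pairs from a group
# of n people is n*(n-1)/2 -- roughly n^2, which stays manageable.
for n in [2, 4, 8, 16]:
    pairs = len(list(combinations(range(n), 2)))
    print(n, pairs)  # e.g. 16 people -> 120 possible pairs
```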

NP problems are those that are not P. The number of possible solutions increases exponentially rather than polynomially, and as *n* increases a "dumb" algorithm that just checks every one in turn will very quickly get to the point where it will take billions of years to complete. (And if computing power increases, all you do is push that point a little further out - not far enough to make a practical difference.) The P v NP question is concerned with a sub-group of the NP problems called "NP complete". Like all NP problems, these are ones where a potential answer can be easily checked for correctness by a computer even though the answer cannot be trivially found; what makes them "complete" is that they're the hardest problems in NP, in the sense that every other NP problem can be translated into one of them. And the question is: can we figure out a clever algorithmic "trick" to turn an NP complete problem into a P problem, hence making it solvable?
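The gulf between polynomial and exponential growth is easy to demonstrate with a few numbers (my own illustration): even at modest *n*, 2ⁿ dwarfs *n*², which is why "check every candidate" algorithms stall so quickly.

```python
# Polynomial vs exponential growth: n^2 stays manageable while 2^n
# explodes -- at n = 80, 2^n is already a 25-digit number.
for n in [10, 20, 40, 80]:
    print(n, n**2, 2**n)
```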

They discussed a few examples of NP complete problems - one of the better known variants is the Travelling Salesman problem. In this you have a number of cities (*n*) linked by roads, and the salesman wants to travel to each one once and only once, using the shortest possible route to do so. With small *n* you can figure out the answer by inspecting all the possible routes and discovering which one's best. But as *n* increases the number of routes goes up explosively (factorially, in fact, which is even worse than exponentially) and this becomes a non-viable way of attacking the problem. Obviously for this specific scenario we find "good enough" answers for real world purposes (deliveries from central warehouses to local retail outlets, for instance). But if any of those answers is the optimal solution then that's by chance rather than because the distributor figured it out.
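The "inspect all the possible routes" approach can be written down directly - here's a brute-force sketch with made-up distances between four hypothetical cities. It works fine at this size, but the number of orderings grows factorially, so a few dozen cities would already be hopeless:

```python
from itertools import permutations

# Brute-force Travelling Salesman: try every ordering of the cities and
# keep the shortest round trip. Only viable for tiny n.
def shortest_tour(cities, dist):
    start = cities[0]
    best_route, best_len = None, float("inf")
    for perm in permutations(cities[1:]):
        route = (start,) + perm + (start,)
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len

# Toy example: four cities with invented symmetric distances.
dist = {
    "A": {"B": 1, "C": 4, "D": 3},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"A": 3, "B": 5, "C": 1},
}
route, length = shortest_tour(["A", "B", "C", "D"], dist)
print(route, length)  # best round trip has length 7
```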

To give another example, Goldberg returned to her example of a P problem - dividing a large group into pairs of people working together, optimised so that most people end up with the best possible partner. When it's pairs, it's a P problem ... but if you're looking to divide them into groups of three, it's an NP problem. Another example is seating wedding guests when many of them hate many of the others, optimising for fewest arguments. And much modern cryptography rests on this sort of hard problem - passwords & so on are encrypted using a method involving multiplying together two very large prime numbers. To reverse engineer that (i.e. crack the encryption) you have to find the prime factors of a very very large number. (Strictly speaking factoring isn't known to be NP complete, but it is an NP problem for which no fast algorithm is known, so a brute force approach won't complete until long after the lifetime of the person who might find it useful.) Decoding it by the intended recipient is a case of checking an answer you've been provided, which is easily done.
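The "easy to check, hard to find" asymmetry behind this is simple to demonstrate (the numbers here are toy examples I've picked, thousands of times smaller than anything used in real cryptography):

```python
# Checking vs finding: verifying a claimed factorisation is a single
# multiplication, but finding a factor by brute force means trial
# division -- cheap here, hopeless for hundreds-of-digits numbers.
def check_factors(n, p, q):
    return p * q == n            # checking an answer: instant

def find_factor(n):
    d = 2
    while d * d <= n:            # finding an answer: try every divisor
        if n % d == 0:
            return d
        d += 1
    return n                     # n itself is prime

n = 62615533                     # = 7907 * 7919, two small primes
print(check_factors(n, 7907, 7919))  # prints True
print(find_factor(n))                # prints 7907
```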

Although these examples of NP complete problems all sound quite different, mathematically speaking they collapse to the same underlying problem. You have a collection of nodes (cities, guests, whatever) joined by links with varying weights (road lengths, levels of hatred, etc), and each of these problems can be translated into any of the others. So if someone figures out an algorithm that turns one NP complete problem into a P problem (i.e. finds that P = NP) then all NP complete problems are solved. Which has both good and bad implications for how our modern world works. On the bad side, all our encryption is broken so no more secure payment sites, no secure online banking (amongst many other effects). But on the other side many products may become cheaper - it's pretty likely that our "good enough" answers to the Travelling Salesman problem aren't the optimal ones, and once you can move things around optimally then logistics gets a lot more efficient. Circuit design also gets a lot more efficient.

However, most mathematicians think that we won't find a solution to NP complete problems (i.e. that P != NP). As yet no clever ideas have played out, but there is that $1 million reward if someone does find a solution. The experts briefly mentioned that quantum computing had once seemed a promising lead - using quantum entanglement to make the right answer somehow pop out without needing the time to do the calculations. But this was another lead that didn't go anywhere. (Although cynical me did think that if it *had* been made to work by some government agency or another, then we'd not know it had ...)