Hacker News
3 years ago by gavagai691

Two inaccuracies in the article. For purposes of simplicity, the author writes "But the Lindelöf hypothesis says that as the inputs get larger, the size of the output is actually always bounded at 1% as many digits as the input." But this is a case where simplification goes too far. What Lindelof says is that the size of the output is always bounded by Δ% as many digits as the input, for ANY (arbitrarily minuscule) Δ > 0.

Second, the subtitle "Paul Nelson has solved the subconvexity problem..." is strange. The subconvexity problem, for a given L-function, is to lower the percentage described above from 25% to "any positive number"; in other words, to bridge the gap between the convexity bound (which is "trivial") and Lindelof (a consequence of GRH). The only way that the statement "Paul Nelson has solved THE subconvexity problem..." could maybe be accurate is if Nelson proved the Lindelof hypothesis for "all" L-functions. Which is far from the case. (What makes subconvexity so interesting, as Nelson says in the article, is that it is a problem where you can make partial progress towards a particularly important consequence of GRH. And Nelson's result is exactly that: partial progress.) More accurate would be "Paul Nelson has made significant progress on the general subconvexity problem."

3 years ago by civilized

These slips from Quanta are puzzling. Quanta normally seems to be more meticulous about accuracy, and the author is a veteran.

The attraction of Quanta has always been that its writers seemed to have an intrinsic passion for math, treating it as more than just a gee-whiz topic to gloss over in between the latest panicked missives about politics and how cellphones are destroying our children. One of its most delightful qualities has been the consistent willingness to take a few extra sentences to explain even advanced concepts in a manner that is basically technically correct.

Certainly the article remains far better than the average mass media STEM writing, but Quanta should take greater care to keep their quality pristine. They have been utterly unique and have set the example for everyone else.

3 years ago by gavagai691

It's not entirely fair to call the second point a major slip. I think that it is still sort of inaccurate, but not exactly for the reason I had said above; see the thread following kevinventullo's comment below. In any case, my second point is sort of pedantic. I don't love the terminology used, but I wouldn't heap very much blame on the author.

The first one is a bit worse. I think if I had read this knowing nothing about convexity I would have gotten the wrong idea from the arbitrary choice of 1%. I understand the desire to simplify, but it is an art to simplify while keeping what you say technically correct. Quanta usually does an excellent job of this. I wouldn't say that the first point above is an egregious error by any means, but I think it is a slip.

3 years ago by civilized

Ian Petrow defines the Subconvexity Problem as "Prove non-trivial upper bounds for L-functions on the critical line" [1]

Given that, it seems fair-ish to say that Nelson solved the Subconvexity Problem. You just have to understand that the problem is really a family of problems of increasing hardness (e.g. prove tighter and tighter bounds), and solutions more powerful than Nelson's may come later.

[1] https://www.ucl.ac.uk/~ucahpet/SubconvexityIntro.pdf

3 years ago by CWuestefeld

Over the past few years, Quanta has become my go-to place for learning about current science developments. A decade or more ago, Scientific American started focusing on their political agenda. And in the last year or two, American Scientist has been completely taken over by wokeness. I haven't found a better venue for pleasure-type learning about science than Quanta.

3 years ago by civilized

My thoughts exactly.

To be fair, I think there are economic conditions that drive the unfortunate situation. Quanta is very lucky to have enormous free support from a billionaire-funded foundation (Simons Foundation).

When publications become desperate for cash, they dive into the politics and culture war rathole.

3 years ago by undefined
[deleted]
3 years ago by kevinventullo

I’m confused by your second paragraph. The article describes the subconvexity problem for a given L-function as lowering the bound from 25% to x% for some x < 25. That is, 25% is the “convexity bound” and the existence of such an x is a “subconvexity bound.”

My understanding is that Nelson has indeed done this for every L-function, although the exact x varies with the L-function. How is that not solving the subconvexity problem?

3 years ago by gavagai691

A subconvex bound is any bound that beats the convexity bound of 25%. If you would define "the subconvexity problem" as beating the 25% bound AT ALL (or in other words providing any subconvex bound), then the statement in the subtitle would be closer to accurate.

But this is not (I think) how most people would speak, and I wouldn't say this. I would say the subconvexity problem is the problem of bridging the gap between 25% (convexity) and 0% (Lindelof). Subconvexity bounds for various L-functions (zeta, Dirichlet L-functions, and higher rank L-functions) are a very active area of research, and so it seems strange to me to say "the subconvexity problem is solved."
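
To make the percentages concrete, the three bounds look schematically like this (my own shorthand, not notation from the article; C denotes the analytic conductor of the L-function, epsilon is arbitrarily small, and delta is some fixed positive saving):

    \begin{align*}
      \text{Convexity (the ``25\%'' bound):} \quad & |L(\tfrac{1}{2})| \ll_{\epsilon} C^{1/4+\epsilon} \\
      \text{Subconvexity (some fixed } \delta > 0 \text{):} \quad & |L(\tfrac{1}{2})| \ll C^{1/4-\delta} \\
      \text{Lindel\"of (the ``0\%'' bound, follows from GRH):} \quad & |L(\tfrac{1}{2})| \ll_{\epsilon} C^{\epsilon}
    \end{align*}

The 1/4 in the exponent is the article's 25%, and "solving the subconvexity problem" for a given family means producing any fixed delta > 0 saving over it.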

"My understanding is that Nelson has indeed done this for every L-function." One minor note. Nelson's work applies if the L-function is automorphic. By the Langlands philosophy, this should be true for any reasonable L-function, but this is far from known, even in, e.g., the simple case of the L-function corresponding to the tensor product of two automorphic L-functions.

Edit: I am wrong about the statement "this is not (I think) how most people would speak." It looks like beating the convexity bound at all is often described as "solving the subconvexity problem," e.g. in the introduction here https://link.springer.com/article/10.1007/s002220200223. This description is a bit strange to me, but if it is standard then it is unfair for me to call it an "inaccuracy," persay; thanks for pointing this out.

3 years ago by jacobolus

Friendly note: per se is a Latin phrase with non-obvious spelling.

3 years ago by undefined
[deleted]
3 years ago by gavagai691

As a further addendum, there are different "aspects" in which you could want the bound to be subconvex. As a result, even for a given family of L-functions there is more than one "subconvexity problem." In this case, Nelson's bound is not subconvex in the non-Archimedean aspect. This is another reason why it doesn't seem quite right to me to say without qualification that "the subconvexity problem has been solved."

3 years ago by CrushItNerds519

As you say, we now know that each standard L-function satisfies a subconvex estimate as its argument varies. This falls short of "solving the subconvexity problem" in two respects.

The first, pointed out already by gavagai691, is that it is not known that "every L-function" is "standard" (this is a major open problem in the Langlands program).

The second is that the general "subconvexity problem" asks for a nontrivial estimate not only for each fixed L-function as its argument varies, but more generally as the L-function itself varies. The general case of the problem remains open.

3 years ago by undefined
[deleted]
3 years ago by ascar

> What Lindelof says is that the size of the output is always bounded by Δ% as many digits as the input, for ANY (arbitrarily miniscule) Δ > 0.

Is there a reason or benefit to stating it like that rather than saying the ratio of output digit size to input digit size asymptotically approaches zero? Or did I misunderstand the explanation?

3 years ago by gavagai691

Thinking about digit size is sort of fine for simplification / not turning off general audiences with equations, but a bit of a pain to think about if, e.g., the things you're comparing might be smaller than one.

What Lindelof says more precisely is that for any Δ > 0 and any L-function L(s), there is a constant C_(Δ,L) (depending only on Δ and L, but crucially not on s) such that |L(s)| <= C_(Δ,L) (1+|Im(s)|)^Δ for all s on the critical line Re(s) = 1/2.
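
If it helps to see actual numbers, here is a rough numerical illustration using zeta (the simplest L-function); this is a sketch assuming Python with the mpmath library, and of course it proves nothing:

    # Compare |zeta(1/2 + it)| with (1 + t)^delta for a small fixed delta.
    # Lindelof predicts that, for EVERY fixed delta > 0, the left-hand side is
    # eventually dominated by a constant times the right-hand side.
    from mpmath import mp, mpc, zeta

    mp.dps = 25          # working precision (decimal digits)
    delta = 0.1          # any fixed positive delta
    for t in [10, 100, 1000, 10000]:
        lhs = abs(zeta(mpc(0.5, t)))   # |zeta(1/2 + it)| on the critical line
        rhs = (1 + t) ** delta
        print(t, float(lhs), float(rhs))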

3 years ago by CrushItNerds519

There may be some linguistic benefits to the "epsilon" formulation when discussing subconvexity rather than Lindelof. For instance, I think "the output is eventually bounded by 24% of the input" sounds more natural than "the limsup of output/input is less than 24%".

3 years ago by easytiger

I remember as a teen when my parents got me a PC (which they could ill afford) as an upgrade from the by-then-ten-year-old hand-me-down ZX Spectrum.

I was in awe at how many primes I could find on my Cyrix 333MHz with my own noddy code.

Left it running all day when at school.

Never found anything of value, but I learned how to Beowulf cluster and learned a lot about arbitrary-precision integer implementation.

Simpler times

3 years ago by zw123456

When I was a sophomore in high school, in 1972, our school started a "computer class" which consisted of a teletype in a corner of a former storage room. My best friend and I were the only ones to sign up for it. The teacher was the math teacher; he had a manual from the local community college that he tossed to us, and he basically turned us loose on it. We logged into their Univac. The first thing we did was find the square root of a number using the Newtonian approximation. The second thing we did was find the first 100 primes, which took forever :)

Awesome that your parents got you a PC; we would have died for one if it had existed then!
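
For the curious, here's roughly what those two little exercises look like in modern Python (a quick sketch, not the code we actually wrote back then):

    # Newton's method for square roots, and a naive search for the first 100
    # primes -- the two exercises described above.

    def newton_sqrt(x, iterations=20):
        """Approximate sqrt(x) with the Newton iteration g -> (g + x/g) / 2."""
        guess = x / 2.0 if x > 1 else 1.0
        for _ in range(iterations):
            guess = (guess + x / guess) / 2.0
        return guess

    def first_primes(count):
        """Trial division, the slow way we would have done it."""
        primes = []
        candidate = 2
        while len(primes) < count:
            if all(candidate % p for p in primes if p * p <= candidate):
                primes.append(candidate)
            candidate += 1
        return primes

    print(newton_sqrt(2.0))        # ~1.4142135623...
    print(first_primes(100)[-1])   # 541, the 100th prime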

3 years ago by earleybird

For me it was learning Fortran IV (after Focal and then Basic on a PDP-8i) because my dad let me use his account on the uni CDC 6400. I basically built a basic bigint math library that I then put to use to determine whether 1000! + 1 was prime :-)

3 years ago by marvel_boy

And? Was it prime?

3 years ago by earleybird

From memory, "I don't recall" - it was 50 years ago :-) Though Wolfram Alpha tells me "nope, it ain't".
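
(For anyone who wants to repeat the check today, here's a quick sketch assuming Python with sympy; obviously the modern shortcut, not anything like the original Fortran.)

    # Is 1000! + 1 prime? sympy uses a strong probable-prime test under the
    # hood, so this may take a little while on a 2568-digit number.
    from sympy import factorial, isprime

    print(isprime(factorial(1000) + 1))   # False, matching Wolfram Alpha's "nope"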

Though I do have some recollection of hitting account limits on compute time on the CDC machine. My dad never mentioned any overages, though. He was working in CDC assembler building a simulator for something with the odd name (at the time) of a "microcontroller". It was a prof's project that needed an emulator, so that was his term project.

3 years ago by undefined
[deleted]
3 years ago by tapvt

Hah! Reminds me of myself as a youth, but I was trying to find primes with an utterly horrible Perl script. I wish I still had my early code from my salad days.

3 years ago by Victerius

Young enough for a Fields? There is a ceremony later this year, but it's probably too soon. So he could be awarded a Fields in 2026 if he's not yet 40. He received his PhD in 2011.

Also, according to MathGenealogy, his grand-grand-grand-grand-grand-grand-grand-grand thesis advisor was none other than Poisson himself. Poisson's advisers were Lagrange and Laplace. Lagrange's was Euler. Euler's was Bernoulli. Bernoulli's was another Bernoulli.

You can go back to the late 15th century to find mathematicians with "unknown" advisors.

3 years ago by JadeNB

> There is a ceremony later this year, but it's probably too soon.

Maybe this is what you meant by "it's probably too soon", but almost certainly the decisions about who gets it have already been made.

3 years ago by gavagai691

My bets are on another analytic number theorist for the Fields this cycle, namely James Maynard.

3 years ago by gavagai691

I don't believe it... it's really him https://www.youtube.com/watch?v=ZHpG135ge7Y !

3 years ago by fourseventy

Holy crap! I had no idea. I was a big follower of the quake 3 and ut2k4 scene back in the day.

3 years ago by optimalsolver

Is there a reason we're obsessed with primes beyond aesthetics? Why does this set of numbers garner all the headlines as opposed to some other arbitrary integer sequence like the Recamán numbers [0]?

If tomorrow someone discovered a closed-form equation for the nth prime, how would mathematics/the world change?

[0] https://en.wikipedia.org/wiki/Recamán%27s_sequence

3 years ago by iNic

Yes. Just within mathematics, prime numbers have a tendency to show up in many fields not obviously related to number theory. As an example: in the study of symmetries (group theory [0]), which is important to number theory, geometry, and even many parts of physics, prime numbers help us decompose a complicated symmetry into simpler ones (using the Sylow theorems [1]).

Prime numbers are also used for cryptography due to their computational properties.

Also I think we have shown using a variant of Galois theory that there cannot be a closed-form solution for the nth prime.

But there are more headlines about primes relative to their prevalence in research. I would guess that this is because they can easily be explained to people, have been studied for thousands of years, and still have open problems.

In other parts of research you might need at least an undergraduate degree just to understand the definitions, which makes headlines less sexy.

[0] https://en.wikipedia.org/wiki/Finite_group

[1] https://en.wikipedia.org/wiki/Sylow_theorems

3 years ago by slx26

I think it's about the fact that they are a mystery at the heart of maths. You can start describing maths from the unit, addition, and the negative sign. If you start combining those, you get the natural numbers, the integers, ... but even before the natural numbers come the prime numbers. Primes are the most fundamental set of numbers in mathematics, from which you can generate the natural numbers.

But as you say, even after so many years they are still relevant, useful and mysterious. They are in a wildly different category from other sets and numerical series. They are the most central element of maths that we still don't understand. And central means that so many other parts of maths derive from it, and therefore we end up coming across prime numbers everywhere. We use them to analyze so many other parts of maths, yet they remain elusive to analysis themselves. It's a fundamental, recurrent mystery that's also an extremely useful tool... one of the most beautiful things we know.

3 years ago by throwaway525142

> you get the natural numbers, the integers, ... but even before the natural numbers come the prime numbers. Primes are the most fundamental set of numbers in mathematics, from which you can generate the natural numbers.

I don't really see how you can define prime numbers before the natural numbers.

3 years ago by gavagai691

"Also I think we have shown using a variant of Galois theory that there can not be a closed from solution for the nth prime."

I don't think this is correct. For one, it is not clear what "closed form" would mean in this context. I think a reasonable variant would be "is there a polynomial time algorithm that, given n, outputs the nth prime." While my guess would be that the answer is no, I am certain that this is not known (and probably far, far out of reach).

3 years ago by pas

For anyone else wondering about more details:

https://www.quora.com/Is-finding-prime-numbers-in-P-or-NP

Interestingly, the Polymath4 project in/around 2009 attempted to find such a polynomial algorithm.

3 years ago by suyjuris

> Also I think we have shown using a variant of Galois theory that there cannot be a closed-form solution for the nth prime.

Do you have a reference? This sounds interesting.

There are, of course, integer polynomials where the set of positive values is precisely the set of primes (think something like x²(y+1)-(z+x)y, but much more complicated) [2].

(This might seem like an interesting fact, but it really is not. All sets of numbers where a computer can decide whether a number belongs to the set have such a polynomial.)

[2] https://en.wikipedia.org/wiki/Formula_for_primes

3 years ago by btmiller

What impact would this have on elliptic curve cryptography?

3 years ago by echelon

I am not a mathematician or cryptographer, but here's my understanding (please, please correct me if I'm wrong).

Modern cryptography relies on the hardness of integer factorization. Things you want to hide are intractable to calculate on even the most powerful machines due to the underlying math not being workable in polynomial time (or similar low degree time). Age of the universe hard to brute force with our machines and known algorithms.

It has to do with the complex structure of the problem in our intuitive dimension. We haven't thought of any possible way to speed it up classically, and it's not apparent if there are such techniques.

The Riemann Hypothesis conjectures that there is a way to know the distribution of primes in a higher dimensional imaginary math. That the unintuitive and difficult system that gives us primes in real number space is somehow contained in a larger imaginary space, with some kind of regular structure, and that they are immediately accessible and calculable once you know the trick.

If the hypothesis is true, it's quite possible that NP hard problems are the same. That in some unintuitive higher dimension math we can solve the hardest problems in polynomial time.

NP-hard problems are also proven to be equivalent complexity, and if you figure out one then the rest are also solvable. If we figure out the trick behind it all (if such a thing exists), we might break everything. Or maybe we'll prove that there is no such free lunch.

I'm one of the ones hoping for computing to be easy. It would unlock so many possibilities. Fast math, in silico experiments, protein folding, etc. It unfortunately would also break all of the underpinnings of our industry, though. SSL, privacy, crypto, infosec, everything about the Internet...

Most people are betting that primes are hard, and it certainly does look that way.

3 years ago by eyegor

> Modern cryptography relies on the hardness of integer factorization... P vs NP

This is not true for elliptic curve cryptography, nor exactly true for RSA et al. ECC is based on the hardness of the elliptic curve discrete logarithm problem, which is not exactly the same as solving integer factorization. However, both can be tackled in polynomial time using Shor's algorithm [0], so both will be vulnerable once we have sufficiently large quantum computers. There are a handful of promising post-quantum approaches, such as lattice cryptography. You don't need to solve P vs NP to break modern cryptography, you just need a computer that can run Shor.

[0] https://eprint.iacr.org/2017/598.pdf

3 years ago by gavagai691

"The Riemann Hypothesis conjectures that there is a way to know the distribution of primes in a higher dimensional imaginary math. That the unintuitive and difficult system that gives us primes in real number space is somehow contained in a larger imaginary space, with some kind of regular structure, and that they are immediately accessible and calculable once you know the trick."

RH does not imply this (under any reasonable interpretation of what you wrote; e.g. RH would not imply a fast algorithm that, given n, outputs the nth prime).

The prime number theorem says that the number of primes up to x (denoted pi(x)) is "well approximated" by Li(x). The Riemann hypothesis says that this is a REALLY good approximation. In particular, it says that the error incurred in approximating pi(x) by Li(x) is of size about sqrt(x). (More precisely sqrt(x)log(x).)
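
If you want to see how good the approximation is in practice, here is a small illustrative computation (a sketch assuming Python with sympy and mpmath; numerics like this are not evidence for RH, just a picture of the scales involved):

    # Compare the prime-counting function pi(x) with Li(x), and with the
    # error scale sqrt(x)*log(x) that RH predicts.
    from sympy import primepi
    from mpmath import li, log, sqrt

    for x in [10**4, 10**6, 10**7]:
        # |pi(x) - li(x)|; the O(1) difference between li and the offset Li(x)
        # is irrelevant at these scales.
        err = abs(int(primepi(x)) - li(x))
        scale = sqrt(x) * log(x)
        print(x, float(err), float(scale))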

3 years ago by gavagai691

There is actually one interesting connection between RH and integer factorization. This has to do with the idea of "Mobius pseudorandomness." For simplicity, I'll phrase things in terms not of Mobius, but of the Liouville function lambda.

The Liouville function comes up naturally if you are interested in whether the "typical" number has an even or odd number of prime factors. More precisely, define the function lambda(n) (the "Liouville function") as follows: lambda(n) = 1 if n has an even number of prime factors (counted with multiplicity), and -1 if n has an odd number of prime factors. If you expected that about half of the positive integers have an even number of prime factors, then you would expect that lambda is 0 on average, i.e. that |(1/x) * sum_(n <= x) lambda(n)| tends to zero as x tends to infinity.

It turns out that this statement is equivalent to the prime number theorem. Further, it turns out that the Riemann Hypothesis is equivalent to the STRONGER statement that the sums |sum_(n <= x) lambda(n)| don't exceed size about sqrt(x). This is exactly what you would expect from the Central Limit Theorem if you thought that lambda(n) behaved like a Bernoulli random variable taking the values +1, -1.
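
If you want to play with these sums yourself, here is a tiny sketch in plain Python (a small sieve; purely illustrative):

    # Compute lambda(n) up to N with a smallest-prime-factor sieve, then compare
    # the partial sums |sum_{n <= x} lambda(n)| with sqrt(x).
    from math import isqrt, sqrt

    N = 10**6
    spf = list(range(N + 1))                 # smallest prime factor of each n
    for p in range(2, isqrt(N) + 1):
        if spf[p] == p:                      # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p

    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]           # one more prime factor flips the sign

    partial = 0
    for n in range(1, N + 1):
        partial += lam[n]
        if n in (10**3, 10**4, 10**5, 10**6):
            print(n, abs(partial), round(sqrt(n), 1))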

What does this have to do with factorization? It's easy to compute lambda if you know how to factor n, so computing lambda is certainly no harder than factoring. On the other hand, we don't know of any way to compute the Liouville function without factoring n. This isn't a proof that computing lambda(n) is at least as hard as factoring n, but it certainly seems to be the case.

Now, one rigorization of what it means for a sequence to behave randomly is that it doesn't correlate with any easily computable sequences. (Replace "easily computable" with "computable" and you are not too far from the definition of Martin-Lof randomness.) In particular, it shouldn't be easily computable.

So in other words, if you think that lambda behaves randomly, then it shouldn't be easily computable, which in turn means that factoring is hard!

But as mentioned above, one of the simplest consequences of lambda behaving randomly is (by the Central Limit Theorem) the Riemann hypothesis!

3 years ago by ilya_m

Hmmm? Is someone experimenting with GPT-3 commenting on HN? Because it sure feels like it - the comment is mostly grammatical, uses approximately correct jargon, and is completely nonsensical.

3 years ago by jacquesm

Hello Ilya, if you're wondering why your comment isn't appreciated, it is because you are breaking the rules.

Please do not accuse people of being bots or shills, and if you have proof that someone is a bot (or an actual shill) then mail hn@ycombinator.com

Have a look at:

https://news.ycombinator.com/newsguidelines.html

The bot bit isn't explicitly mentioned, but "Please don't post insinuations about astroturfing, shilling, brigading, foreign agents and the like." covers it nicely with the 'and the like' bit.

3 years ago by jayski

I have no idea why this great comment got downvoted.

3 years ago by beebmam

Elliptic curve cryptography does not rely on the difficulty of prime factorization. That is likely why the comment you are replying to is downvoted: it doesn't answer the question it is replying to.

For an intro to the math behind elliptic curve cryptography, here's a good read: https://hackernoon.com/what-is-the-math-behind-elliptic-curv...

3 years ago by undefined
[deleted]
3 years ago by nefitty

Lazy trolling.

3 years ago by undefined
[deleted]
3 years ago by mikewarot

I suspect that somewhere in the Riemann zeta function there is a reciprocal that cancels out all the imaginary terms once and only once, so that any number with more than one factor wouldn't cancel out.

I haven't the math skills to find that reciprocal.
