http://hyperelliptic.org/tanja/nonuniform-reviews.txt
(also posted anonymously as http://pastebin.com/BxQkkq6y):
the reviews from Asiacrypt 2012 and Eurocrypt 2013,
and our comments dated 2013.02.15.

http://hyperelliptic.org/tanja/nonuniform-diff.txt
(also posted anonymously as http://pastebin.com/fLtiPYQ0):
the svn diff output, more than 2000 lines,
between our Eurocrypt 2013 submission and our Crypto 2013 submission.

http://hyperelliptic.org/tanja/nonuniform-reviews-2.txt
(also posted anonymously as http://pastebin.com/kUrpYZHE):
the initial reviews from Crypto 2013, and our comments dated 2013.04.04.

http://hyperelliptic.org/tanja/nonuniform-reviews-3.txt
(also posted anonymously as http://pastebin.com/TUbzk0G1):
this file

========================================================================

The point of this document is to respond to two new error-filled
paragraphs from a Crypto 2013 reviewer. We received these paragraphs on
19 April 2013.

Some background first: Crypto 2013 allowed a rebuttal phase in which
authors were given a few days to respond to initial reviews.
http://pastebin.com/kUrpYZHE shows the four reviews that we received
for "Non-uniform cracks in the concrete"; two reviews sounded positive,
and two sounded negative. Our rebuttal was strictly limited to 3000
characters but we squeezed in as many comments as we could, plus links
to much more detailed comments. Here is the rebuttal (4 April 2013):

> > #2: We'll add more summaries; thanks.
> >
> > #4: Thanks for the detailed comments and reasonable questions. We're
> > happy to take concrete suggestions regarding exposition/citations/etc.
> >
> > "Q1a": No. "Q1b": No. Sorting, multiplication, etc. have AT cost
> > essentially n^1.5 (1981 Brent--Kung). RAM T+S and RAM T*program size are
> > much smaller, n^1 (which is physically unrealistic; see Q.6), because
> > RAM time ignores communication cost. RAM T*S is much larger, n^2; it
> > pays for space while failing to allow any parallelism.
> >
> > Analogous gaps appear in §5. Best RAM T+S for NFS (1993 Coppersmith) is
> > L^1.902 normally but L^1.638 with precomputation. AT for NFS (2001
> > Bernstein) is L^1.976 either way. RAM T*S (not mentioned in the paper)
> > for NFS is 2.760 (2002 Pomerance). These gaps appear for the reasons
> > stated above: NFS needs space and large-scale communication, and is not
> > parallelizable enough to hide all of the communication cost. The
> > algorithm of §4 has the same issues but §5 is more detailed. A similar
> > gap is stated in §2.2 and appears in G.1. "Q2": Yes.
> >
> > "Singularly uninteresting": There is an interplay between analysis and
> > optimization of algorithms. Often the analysis is the interesting part.
> > Reduction algorithms are no exception: e.g., the CBC-MAC reduction is
> > trivial (compose A with CBC), while its probability analysis is much
> > more interesting. Sometimes the _speed_ analysis is the interesting
> > part: e.g., Shoup's RSA-3-OAEP security proof needs Coppersmith's famous
> > LLL-based root-finding algorithm, and relies critically on the subtle
> > fact that LLL runs in polynomial time; concrete bounds need more
> > detailed LLL analyses.
> >
> > "B can ... *run* A": No. A small chip can _print_ a large key-search
> > chip, a large NFS chip, etc., but cannot _run_ those algorithms at the
> > same speeds as the large chips. Parallelism is again critical.
> >
> > "Oddball": The paper highlights repeated-query elimination. This is
> > cheap for RAM, but very expensive for NAND and for AT.
> >
> > Please see pastebin.com/kUrpYZHE for more comments.
> >
> > #1: After receiving a wide mix of earlier reviews we made huge changes
> > to this paper---the svn diff from Eurocrypt to Crypto is more than 2000
> > lines. We also added Q.27 pointing to our anonymous page
> > pastebin.com/BxQkkq6y with thorough comments on the reviews. So it is
> > quite surprising to receive a "new" review that is almost word-for-word
> > identical to one of the Eurocrypt reviews---including many obsolete
> > statements, misconceptions, and outright errors, all handled or rebutted
> > as part of our Crypto submission.
> >
> > The review is slightly edited, most prominently to add claims that (1)
> > we don't discuss explicit reduction theorems; (2) those are a "remedy"
> > for the problems we point out; (3) §2 contains "nothing new"; (4) our
> > disproofs of various conjectures are not new. All of these claims are
> > false. See B.6, Q.20, and §1.3 paragraph 2 ("new cost analyses" and
> > "first to point out these contradictions").
> >
> > #3: Reviewer evades the rebuttal process.

On 19 April 2013, halfway through the post-rebuttal discussions, the
Crypto 2013 program chairs accidentally sent authors the latest edits
of the reviews. Both of the negative-sounding reviews of our paper were
edited. One of those reviews (labelled #3 above), the one that wasn't
even pretending to be a normal scientific review, was eliminated
entirely. The other, the "In 1980" review full of errors (labelled #1
above), gained two paragraphs with even more errors:

> The rebuttal does not directly address any of the points made above. If the
> comments were seen in large part before, I am not seeing a response to them
> reflected in any prominent way in the paper. The Introduction remains largely
> the same and is highly misleading. The rebuttal points to B.6 and Q.20.
> Perhaps the authors imagine that discussion of Rogaway's human ignorance paper
> relates to the issues above. This ignores the fact that hundreds of papers in
> the literature prior to Rogaway use blackbox reductions and make theorems
> statements of the form indicated above which bypass the use of the maximizing
> notation and thus allow us to conclude exactly the security guarantees we want
> from the proofs. This deserves to be prominently stated in the Introduction.
> After that, the authors can certainly debate the question of finding an
> alternative notation, but they should first admit no such notation is needed
> to draw the conclusions we want from proofs, this being something we can do
> right now.
>
> I have been ambivalent about this paper. I learned something from it, namely
> that the maximizing notation leads to non-tight conclusions. I would like to
> see this followed by saying that the proofs continue to give the tight
> conclusions we want, that many theorem statements avoid the offending notation
> and allow us to get the tight conclusions, but that we lack a notation to say
> this concisely, and the paper investigates different choices. If the paper
> clearly stated a view like this in the Introduction, I would support it.

We're aware that these two paragraphs could be edited further before
being officially sent to us, but since we _did_ receive them it seems
silly to leave the errors uncorrected. The rest of this document
consists of our detailed comments.

> The rebuttal does not directly address any of the points made above.

False.
Let's start with an illustrative example, looking at how directly the
rebuttal addressed one of the earlier errors from this reviewer (the
"nothing new" claim), and then let's look at the bigger picture of how
thoroughly we've responded to this reviewer.

To understand this particular example it's easiest to start from the
facts. Section 1.3 paragraph 2 of the paper says

   Our attacks ... use many standard cryptanalytic techniques cited in
   Sections 2, 3, 4, and 5. We introduce new cost analyses in all four
   sections, and new algorithm improvements in Sections 3, 4, and 5 ...

Both Section 1 and Section 2 clearly state that

   * the algorithms in Section 2 are not new, but
   * some of the cost analyses in Section 2 are new.

Most importantly, the AT analyses stated in Section 2 (also summarized
in the middle line in Figure G.1) are new, and are important for our
subsequent analyses of various ideas to fix the definitions.

One of the reviewer's claims is that Section 2 "contains nothing new".
Here's the complete quote, from the final paragraph of the review
before the edits:

   Section 2 of the paper, breaking AES, is 3 pages long, and contains
   nothing new.

But this is simply not true. Again, the _algorithms_ in Section 2 are
not new, but some of the _cost analyses_ are new, especially for AT.

Our rebuttal

   * correctly paraphrases this reviewer as claiming that "§2 contains
     'nothing new'";
   * specifically identifies this as a false claim;
   * points to "§1.3 paragraph 2"; and even
   * quotes "new cost analyses".

This is very directly addressing a blatant error from this reviewer.
How, then, can the reviewer claim that the rebuttal "does not directly
address any of the points made above"?

The bigger picture is that we have responded in detail to every comment
from every reviewer, as shown on the following anonymous web pages:

   http://pastebin.com/BxQkkq6y
   http://pastebin.com/kUrpYZHE

This particular reviewer has piled up so many errors that responding to
all of them in a 3000-character rebuttal simply wasn't possible (never
mind the fact that another reviewer had asked reasonable questions that
were much more deserving of answers), but our rebuttal does have the
following text clearly labelled as a response to this reviewer:

   After receiving a wide mix of earlier reviews we made huge changes
   to this paper---the svn diff from Eurocrypt to Crypto is more than
   2000 lines. We also added Q.27 pointing to our anonymous page
   pastebin.com/BxQkkq6y with thorough comments on the reviews. So it
   is quite surprising to receive a "new" review that is almost
   word-for-word identical to one of the Eurocrypt reviews---including
   many obsolete statements, misconceptions, and outright errors, all
   handled or rebutted as part of our Crypto submission.

   The review is slightly edited, most prominently to add claims that
   (1) we don't discuss explicit reduction theorems; (2) those are a
   "remedy" for the problems we point out; (3) §2 contains "nothing
   new"; (4) our disproofs of various conjectures are not new. All of
   these claims are false. See B.6, Q.20, and §1.3 paragraph 2 ("new
   cost analyses" and "first to point out these contradictions").

The reviewer might respond that the conference rules don't oblige him
to read supplementary documents, but this hardly justifies his attempt
to portray us as unresponsive. More to the point, the rebuttal
_directly_ addresses four false claims from this reviewer and points to
the exact paragraphs in the paper that disprove those claims.
> If the comments were seen in large part before, I am not seeing a
> response to them reflected in any prominent way in the paper.

It's rather difficult to respond to a comment devoid of specifics. Most
likely the problem here is the reviewer's failure to _read_ what we
actually submitted to Crypto.

> The Introduction remains largely the same

False. The Eurocrypt version of the Introduction section was only 1500
words, while the Crypto version of the section is 2400 words. There's
some overlap but there are also huge additions and other changes. The
svn diff output from the Eurocrypt submission to the Crypto submission
has _hundreds_ of lines within the Introduction section.

> and is highly misleading.

As above, it's rather difficult to respond to a comment devoid of
specifics. One might speculate---since elsewhere this reviewer keeps
harping on the point that most of the proof ideas are fine---that what
the reviewer is trying to say here is that the introduction misleads
readers into thinking that there's a problem with most of the proof
ideas. But the introduction never suggests any such thing, and in fact
the Crypto version of the introduction is completely explicit in saying
that most of the proof ideas are fine.

> The rebuttal points to B.6 and Q.20. Perhaps the authors imagine that
> discussion of Rogaway's human ignorance paper relates to the issues
> above.

This reviewer spent a long paragraph enthusiastically repeating
(without credit) the recommendation of explicit reduction theorems from
Rogaway's 2006 "human ignorance" paper; falsely claiming that we "fail
to discuss" such explicit reduction theorems; and falsely claiming that
explicit reduction theorems are a solution for the definitional
problems raised in the paper. http://pastebin.com/kUrpYZHE responds to
this in detail (search for the text "A whole new paragraph, full of
errors"). The rebuttal was under severe space limits but points to B.6
and Q.20 as responses to the reviewer's core errors. Anyone looking at
B.6 can see that the reviewer is wrong in claiming that we "fail to
discuss" explicit reduction theorems (including 2001 Stinson, 2006
Rogaway, other literature, and a more advanced theorem structure that
the paper recommends), while anyone looking at Q.20 can see why this
isn't a solution for the definitional problems. Not only do we
"imagine" that this is related, but we find the relationship blazingly
obvious.

> This ignores the fact that hundreds of papers in the literature prior
> to Rogaway use blackbox reductions and make theorems statements of the
> form indicated above which bypass the use of the maximizing notation
> and thus allow us to conclude exactly the security guarantees we want
> from the proofs.

This sentence combines so many mistakes that it's hard to know where to
begin. Perhaps the most basic questions---questions that,
unfortunately, have to be asked again and again for this reviewer---are
(1) precisely which statement in the paper the reviewer thinks he's
arguing against and (2) precisely what evidence the reviewer has for
his claims.

The paper begins by reviewing what is _actually_ done in the literature
on concrete provable security. First, and most fundamental, is to

   * define security of a primitive or protocol X as low success
     probability of all limited-cost algorithms attacking X.
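For concreteness, here is a minimal sketch of that standard pattern,
written as a LaTeX display. The names InSec, cost, and "breaks" are
shorthand for this sketch only, not notation fixed by the paper or by
the literature; the precise formulations are in Section 1 and in the
literature survey of Appendix L.

   % Minimal sketch of the standard definitional pattern (our shorthand;
   % "cost" and "breaks" stand in for the precise quantities used in the
   % paper): the insecurity of X at cost limit (q,t) is the maximum
   % success probability over all algorithms within that cost limit.
   \[
     \mathrm{InSec}_X(q,t)
     \;=\;
     \max_{\substack{\text{algorithms } A\\ \mathrm{cost}(A)\le(q,t)}}
     \Pr\bigl[\,A\ \text{breaks}\ X\,\bigr].
   \]

Saying that X is secure then means that this maximum is small for every
cost limit of interest.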
The paper illustrates this with the classic Bellare--Kilian--Rogaway
security definition for ciphers, and then states the general pattern,
referring to Appendix L for a literature survey showing that this is
completely standard.

(Some relevant history: The Eurocrypt review from this reviewer had
included the ludicrously inaccurate claim that "use of the offending
notation is not so widespread". That's why, for the Crypto submission,
we wrote much more introductory text on this point, and added Appendix
L with an unbiased sample of what the literature actually says. The
reviewer's latest review repeats the same ludicrously inaccurate claim,
not because the reviewer disputes anything our Crypto submission says
about this, but because the reviewer copied his Eurocrypt review
without actually _reading_ our Crypto submission. The reviewer now
tries to hide this procedural error by adding claims such as "The
Introduction remains largely the same", which is manifestly untrue. Is
it really so hard for a Crypto reviewer to read _what we submitted to
Crypto_?)

There is overwhelming consensus in the literature on the importance of
security definitions. Consider, e.g., 1994 Bellare--Kilian--Rogaway,
Section 1.2:

   In this paper we will show that CBC MAC construction is secure if
   the underlying block cipher is secure. To make this statement
   meaningful we need first to discuss what we mean by security in each
   case.

Or consider 2007 Katz--Lindell:

   The Basic Principles of Modern Cryptography
   ...
   Principle 1---Formulation of Exact Definitions
   One of the key intellectual contributions of modern cryptography has
   been the realization that formal definitions of security are
   _essential_ prerequisites for the design, usage, or study of any
   cryptographic primitive or protocol.

Or consider this reviewer's claim that existing literature allows us to
"conclude exactly the security guarantees we want from the proofs".
What exactly _are_ the security guarantees that the reviewer claims to
exist for, e.g., AES-CBC-MAC? It's nonsense to _guarantee_ security if
we haven't even _defined_ security!

The literature is perfectly clear in

   * _defining_ what security means for AES,
   * _conjecturing_ that AES has security 2^128 (with appropriate
     quantifications of success probability etc.),
   * _defining_ what security means for AES-CBC-MAC,
   * _conjecturing_ that AES-CBC-MAC has security 2^128, and
   * proving a _theorem_ guaranteeing that the second conjecture
     follows from the first.

Our paper shows, however, that the first conjecture is _false_, and
that this error arises from a serious gap between formal-definition
security and actual security. (The second conjecture is also false.)

The reviewer is completely wrong in describing this as a question of
"the maximizing notation". The security definitions work the same way,
and have the same problems, whether they're phrased using the
maximizing notation, or using the (...,epsilon) notation, or using
other notation for the same standard concept. As the paper says in
Section 1:

   Often "the (q,t)-insecurity of X is at most epsilon" is abbreviated
   "X is (q,t,epsilon)-secure". Many papers prefer the more concise
   notation and do not even mention the insecurity function. We
   emphasize, however, that this is merely a superficial change in
   notation, and that both of the quotes in this paragraph refer to
   exactly the same situation: namely, the nonexistence of algorithms
   that cost at most (q,t) and that break X with probability more than
   epsilon.
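To spell out the equivalence asserted in this quote, here is the same
statement in the shorthand of the sketch above (again our own
shorthand, not the paper's fixed notation):

   % The two standard phrasings describe exactly the same condition.
   \[
     X\ \text{is}\ (q,t,\epsilon)\text{-secure}
     \;\Longleftrightarrow\;
     \mathrm{InSec}_X(q,t)\le\epsilon
     \;\Longleftrightarrow\;
     \text{no algorithm of cost at most}\ (q,t)\
     \text{breaks}\ X\ \text{with probability more than}\ \epsilon.
   \]

Both phrasings quantify over the same set of limited-cost algorithms.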
Our paper's analysis can trivially be expressed with either notation.
Furthermore, the replacement definitions formulated later in the paper
are much more than changes of notation---they are heavy restrictions on
the set of algorithms allowed to the attacker.

As for explicit reduction theorems, we specifically recommend factoring
security theorems such as the classic Bellare--Kilian--Rogaway theorem

   Thm. Adv_{CBC^m-F}^prf(q,t) <= Adv_F^prp(mq,t+O(mql)) + q^2m^2/2^(l-1)

through lower-level probability theorems such as

   Thm. AdvPRF_{CBC^m(F)}(A) <= AdvPRP_F(UseCBC(A)) + q^2m^2/2^(l-1)

so that changes in the cost metric affect only the higher-level
security theorem while leaving the lower-level theorem unchanged. But
the pure probability theorem is obviously inadequate for drawing any
security conclusions---one also needs the cost comparison of UseCBC(A)
with A. If there's no limit on costs then proving a reduction is
trivial for any concrete system and loses all connection to security.

As for "prior to Rogaway", the reviewer is certainly wrong in claiming
that we're ignoring the pre-2006 literature. Specifically, B.6 says

   This type of modularization of provable-security theorems can be
   traced back to at least 1999 (see [13, Theorem 4.1]), if not
   earlier, but at that point was not claimed to have any particular
   advantages.

B.6 continues by reporting the recommendations from 2001 Stinson and
2006 Rogaway ("human ignorance") and the advantages that they claim
(specifically for collision resistance---for example, Rogaway begins
his paper by describing "The Foundations-of-Hashing Dilemma"). B.6 then
formulates a more advanced recommendation with further advantages.
We're not aware of any missing citations here.

However, as Q.20 says, the reviewer is wrong in thinking that the line
of work analyzed in B.6 solves the definitional problems raised in our
paper. This line of work makes failures in the definitions easier to
handle, which is exactly why we recommend it, but it makes zero
progress towards the fundamental goal of properly defining security.

> This deserves to be prominently stated in the Introduction.

A sentence full of errors is something that we're supposed to state
prominently in the introduction? Ridiculous.

> After that, the authors can certainly debate the question of finding
> an alternative notation,

The reviewer is again completely wrong in describing this paper's
recommendations as mere changes of "notation".

> but they should first admit no such notation
> is needed to draw the conclusions we want from proofs, this being
> something we can do right now.

This paper is about bad _definitions_ and bad _theorem statements_ and
bad _conjectures_. It never says that there's a problem with most of
the _proofs_. Furthermore, a huge part of the paper is devoted to
saying how the _definitions_ can be fixed---which would make no sense
if we thought that there were serious trouble with the _proofs_. The
introduction quite explicitly says that most of the proofs are fine.
The paper is quite clear about all of this, and yet we find ourselves
writing the same thing again and again in response to this reviewer.
The starting problem is that the reviewer hasn't even read the paper.

> I have been ambivalent about this paper. I learned something from it,
> namely that the maximizing notation leads to non-tight conclusions.

Yet another sentence that manages to pack multiple errors together. The
gaps that we describe are not failures of tightness.
Perhaps the most important difference is that we show _lower_ bounds on
our gaps, whereas loose theorems still have _upper_ bounds on their
gaps. One can confidently compensate for a lack of tightness by
increasing parameters, whereas there's no basis for confidence in
(e.g.) a conjecture of 2^85 "security" for AES. Maybe our attacks _are_
optimal in this model; in that case one can compare our gaps to
tightness gaps (see Q.19) and conclude that our gaps are quantitatively
larger than typical tightness gaps; but it's still wrong to describe
our gaps as failures of tightness. As for "the maximizing notation",
see above.

> I would like to see this followed by saying that the proofs continue
> to give the tight conclusions we want,

Here the reviewer is once again arguing against something that the
paper doesn't say. In fact, the introduction has the following
completely explicit statement regarding the proofs---very prominently
positioned at the conclusion of Section 1.2:

   Our impression is that _most_ of the proof ideas in the literature
   are compatible with the new definitions, modulo quantitative
   changes, so _most_ concrete-security theorems in the literature can
   be replaced by meaningful concrete-security theorems using the new
   definitions.^1 The conjectures and theorems together will produce
   exactly the desired conclusions regarding the concrete security of
   protocols such as AES-CBC-MAC.

   ^1 We do not claim that _all_ proofs can be rescued, and it is even
   possible that some theorems will have to be abandoned entirely. Some
   troublesome examples have been pointed out by Koblitz and Menezes in
   [45] and [46]. Our experience indicates, however, that such examples
   are unusual. For example, there is nothing troublesome about the
   CBC-MAC proof or the FDH proof; these proofs simply need to be
   placed in a proper framework of meaningful definitions, conjectures,
   and theorem statements.

The reviewer keeps harping on his most-proofs-are-fine theme as if this
were contradicting something in this paper. It isn't.

> that many theorem statements avoid the offending notation and allow us
> to get the tight conclusions,

It's _not_ a matter of notation; it's _not_ a matter of tightness; and
the existing theorem statements do _not_ avoid the problems identified
in this paper. See Appendix L.

> but that we lack a notation to say this concisely, and
> the paper investigates different choices.

Again, the problem is _not_ a matter of notation. The problem is that
the standard definitions of security do not accurately model what
attackers can actually do.

> If the paper clearly stated a view like this in the Introduction, I
> would support it.

This reviewer's "view" is full of serious errors. The reviewer refuses
to read anything that we've written in response to those errors; the
reviewer hasn't even read the paper that we submitted to Crypto. The
reviewer _is_ correct in stating again and again and again that most of
the proof ideas in the literature are fine---but the introduction
_already_ states this. The reviewer is completely wrong in claiming
that this contradicts anything in the paper.