Saturday, October 30, 2010

Sections 7.3 - 7.5 due Nov. 1st

(Difficult) In section 7.5, I had a hard time understanding the proof concerning the machine that could solve the Decision Diffie-Hellman problem and decide the validity of mod p ElGamal ciphertexts. The main point I didn't understand was why the ElGamal machine only outputs "yes" when m = 1, which is the same as testing whether c*a^(-xy) = 1.


(Reflective) When reading about bit commitment and the key exchange, I was thinking about how much more versatile public key encryption is compared to private key encryption. I know we discussed this at the beginning of the class, but it is nice to see all the examples of how public key systems can be used. When we talked about using RSA to transfer a DES key, I wasn't completely sure how that would work, but the Diffie-Hellman key exchange is an ingenious approach to using discrete logarithms to construct a shared key between two people.
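To make the exchange concrete for myself, I wrote out a toy version in Python. The prime p = 23 and generator g = 5 are just illustrative values I picked (a real exchange needs a prime hundreds of digits long and a verified generator), and the variable names are my own:

```python
import secrets

# Toy Diffie-Hellman key exchange (p and g are illustrative, not secure)
p, g = 23, 5

# Each party picks a secret exponent and publishes g^secret mod p
x = secrets.randbelow(p - 2) + 1   # Alice's secret
y = secrets.randbelow(p - 2) + 1   # Bob's secret
A = pow(g, x, p)                   # Alice sends A = g^x mod p
B = pow(g, y, p)                   # Bob sends B = g^y mod p

# Each side raises the other's public value to its own secret exponent;
# both arrive at g^(xy) mod p, yet x and y are never transmitted
key_alice = pow(B, x, p)
key_bob = pow(A, y, p)
assert key_alice == key_bob
print("shared key:", key_alice)
```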

Thursday, October 28, 2010

Section 7.2 due Oct. 29th

(Difficult) I am not sure if I completely understand this, but does the difficulty of discrete logarithms come from the need to factor p-1? I noticed in the example of 7^x = 12 (mod 41) it was necessary to first factor the number 40. In practice, would p-1 be chosen as a product of two large primes, or does it matter if p-1 factors into more than two primes?

(Reflective) The Pohlig-Hellman Algorithm reminded me of the p-1 factoring algorithm with RSA. Although I don't completely understand it, it is interesting how p-1 is used to work with modular exponentiation mod p, both with RSA and with discrete logarithms. The two methods are obviously related, since they both involve modular exponentiation, but after reading this section, computing discrete logs still seems rather foreign compared to the RSA algorithm.
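To see where the factorization of p-1 actually gets used, I tried coding a simplified Pohlig-Hellman myself. This is my own sketch, not the book's presentation: the real algorithm finds x modulo each prime power one base-q digit at a time, while I just brute-force each small subgroup and recombine with the Chinese Remainder Theorem.

```python
def prime_power_factors(n):
    """Trial division into {prime: exponent}; fine for toy-sized n."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def pohlig_hellman(g, h, p):
    """Solve g^x = h (mod p), assuming g is a primitive root mod p."""
    n = p - 1                        # order of the group
    x, m = 0, 1                      # running CRT solution: x mod m
    for q, e in prime_power_factors(n).items():
        qe = q ** e
        gq = pow(g, n // qe, p)      # element of order q^e
        hq = pow(h, n // qe, p)
        # x mod q^e is the k with gq^k = hq; brute-force the small subgroup
        k = next(k for k in range(qe) if pow(gq, k, p) == hq)
        # merge k into the solution: x = old x (mod m) and x = k (mod q^e)
        t = ((k - x) * pow(m, -1, qe)) % qe
        x, m = x + m * t, m * qe
    return x

# the example from the reading: 7^x = 12 (mod 41), where 40 = 2^3 * 5
x = pohlig_hellman(7, 12, 41)
print(x, pow(7, x, 41))  # x = 13, and 7^13 = 12 (mod 41)
```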

Tuesday, October 26, 2010

Section 6.5-6.7 and 7.1 due Oct. 27th

(Difficult) The concept of a trapdoor was a little confusing to me. I can understand how it might be possible to find an easier way of dealing with a one-way function, but it is hard to imagine why the designer of the original function would have any advantage in finding a trapdoor over anyone else. If everyone has access to the public encryption method, it would seem that everyone would also have an opportunity to try to find a trapdoor. It would be nice to see an example of how this might work, to better understand how a trapdoor could be built into a cryptosystem.
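My best attempt at such an example, using RSA (the numbers are the usual toy values, far too small for real use): the designer doesn't find the trapdoor afterwards; she starts from the secret primes p and q, so the trapdoor information (p-1)(q-1) comes for free, while everyone else sees only n and would have to factor it first.

```python
p, q = 61, 53              # the designer's secret primes (toy-sized)
n = p * q                  # public modulus: n = 3233
phi = (p - 1) * (q - 1)    # the trapdoor information, free if you know p and q
e = 17                     # public exponent, chosen with gcd(e, phi) = 1
d = pow(e, -1, phi)        # private exponent: easy only with phi in hand

m = 42                     # a sample message
c = pow(m, e, n)           # anyone can encrypt with the public (n, e)
assert pow(c, d, n) == m   # only the holder of d (i.e. of p and q) can decrypt
```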


(Reflective) The RSA challenge was a great example of overestimating the security of a cryptosystem. It was interesting to see that they estimated it would take 4 x 10^16 years to decrypt the message, but in reality it took less than a year in 1994. It goes to show how important it is to analyze a cryptosystem and look for its weaknesses before making any assumptions about the overall security.

Sunday, October 24, 2010

Section 6.4.1 due Oct. 25th

(Difficult) The hardest part of this section to understand was where the author came up with the original squares he was using at the start of the section. If I understand it correctly, he takes squares which are just beyond n, so that they are small (mod n). This sounded a lot like what we did in class a few days ago, but the book's explanation seemed a little out of order. The book sort of pulled the squares out of thin air, then tried to explain them later. Was this method similar to the Fermat factorization method, finding values close to n, such as n + 1^2, n + 2^2, etc., until we find a square? It seems this method would also yield similar squares that would be small (mod n), which should give us relatively small prime factors.
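Here is my guess at where those squares could come from (this is my own reconstruction, not the book's recipe, and the composite n below is just a number I picked): take x just above the square root of k*n for small multiples k of n, so that x^2 exceeds k*n by only a little, making x^2 mod n small and likely to factor over small primes.

```python
from math import isqrt

n = 1234567  # a toy composite (1234567 = 127 * 9721)
for k in range(1, 6):
    x = isqrt(k * n) + 1          # first integer whose square exceeds k*n
    r = (x * x) % n               # equals x^2 - k*n here, so it stays small
    print(f"k={k}: x={x}, x^2 mod n = {r}")
```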


(Reflective) I liked the table included in the book about the number of digits of factorization records over the last few decades. Clearly RSA has had a great influence on the study of factoring, and even today people continue to develop or improve existing methods of factoring. This is a great example of what interests me the most about cryptography: it continues to expand, and its study is always new and always interesting. Even though the math community has a great understanding of encryption methods, there is always more to learn, more to explore. It leaves open a great deal of opportunity for continued research, for greater understanding, and for continued education. As I get closer to graduation, I realize I want to get into a career that allows me to further my education, to always be pushing myself and my mental capacity a little further.

Thursday, October 21, 2010

Section 6.4 due Oct. 22nd

(Interesting) Fermat's factorization method is a rather clever method of factoring. It appears to be similar to the continued fraction attack, since they both involve squares. The nice thing about Fermat's method is its simplicity: it really only involves squaring numbers and finding square roots; everything else is addition and subtraction.
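It is simple enough that I could write it out in a few lines (the number to factor below is one I made up with two close primes, which is the case where the method shines):

```python
from math import isqrt

def fermat_factor(n):
    """Find x with x^2 - n a perfect square y^2; then n = (x - y)(x + y)."""
    x = isqrt(n)
    if x * x < n:
        x += 1                     # start at the ceiling of sqrt(n)
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:            # x^2 - n is a perfect square
            return x - y, x + y
        x += 1                     # otherwise just step x up and try again

print(fermat_factor(295927))       # (541, 547): factors near sqrt(n) are found fast
```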

(Difficult) The p-1 factoring algorithm was difficult to follow. It appeared more like an RSA attack than a factoring method, since factorization is not guaranteed by this method. I would really appreciate an example of this algorithm in class.
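Until we see one in class, here is my own attempt at an example, based on how I read the section (I'm assuming the exponent is built up as B!, and the bound B = 100 and the composite below are my own choices): if some prime p of n has p-1 built only from small primes, then p-1 divides B!, so 2^(B!) = 1 (mod p) by Fermat, and the gcd pulls out p. If both primes of n behave this way, or neither does, the method fails, which matches the "not guaranteed" feeling I had.

```python
from math import gcd

def pollard_p_minus_1(n, bound=100):
    a = 2
    for k in range(2, bound + 1):
        a = pow(a, k, n)             # after the loop, a = 2^(bound!) mod n
    d = gcd(a - 1, n)
    return d if 1 < d < n else None  # None means: change the bound or give up

# 22927 = 101 * 227; 101 - 1 = 100 is built from small primes,
# while 227 - 1 = 2 * 113 is not, so the gcd isolates 101
print(pollard_p_minus_1(22927))      # 101
```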

Monday, October 18, 2010

Section 6.3 due Oct. 20th

(Difficult) The most difficult part to understand in this section was the Miller-Rabin Primality Test. The process itself seemed straightforward, except for how we get the value m in n-1 = 2^k * m. It seems as though we take n-1, then divide it by 2 until the quotient is an odd number, and use that odd number as m? Once we have m, the test is fairly straightforward: square 2^m (mod n) until we get a result that is +/- 1. The other question I had was whether we are guaranteed to get a -1 or +1 for every test. Is it possible that some (mod n) group would not have a square congruent to 1 or -1? This would imply that no element of the group was its own inverse, right?
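Writing it out as code helped me pin down the m question; this is the test as I understand it from the section (base 2 hard-coded, and my own toy inputs), so treat it as a sketch rather than the book's exact version:

```python
def miller_rabin(n, a=2):
    """Return False if base a proves n composite, True if n passes."""
    if n % 2 == 0:
        return n == 2
    m, k = n - 1, 0
    while m % 2 == 0:              # split n - 1 = 2^k * m with m odd
        m //= 2
        k += 1
    x = pow(a, m, n)
    if x == 1 or x == n - 1:       # a^m is already +/-1: n passes
        return True
    for _ in range(k - 1):
        x = pow(x, 2, n)           # keep squaring
        if x == n - 1:             # reached -1: n passes
            return True
    return False                   # never reached -1: n is composite

print(miller_rabin(41), miller_rabin(221))  # True (prime), False (221 = 13 * 17)
```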

(Reflective) I was thinking about why a definitive primality test was more difficult than proving a number is composite, and in reading pg. 181 I realized it goes back to the Jacobi symbol, where we had (a/n) = 1. Since (a/n) = (a/p)(a/q) = 1, if n were composite then (a/p) and (a/q) would be equal, either both +1 or both -1. Thus, without factoring n into p and q, we could not say whether a was a square or not. Likewise with primality testing, it leaves us with a "probable answer" but doesn't guarantee a prime without further work. Given the similarities between testing for square roots mod n and testing primality, it would seem the only "simple" way to prove primality would be to factor the number. My question now is how the AKS algorithm proves primality without factoring.

Saturday, October 16, 2010

Section 3.10 due Oct. 18th

(Reflective)
The proposition that x^2 is congruent to a (mod p) has a solution if and only if a^((p-1)/2) is congruent to 1 (mod p) made me think more about primitive roots. Does this proposition imply that no primitive root (mod p) is a square of another number? The primitive root (or multiplicative generator) forms a cyclic group in group theory, so is there a definitive relation between square roots mod p and cyclic groups?
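I checked the proposition numerically for a small prime to make sure I was reading it right (p = 13 is my own choice of example):

```python
p = 13
squares = {pow(x, 2, p) for x in range(1, p)}                  # nonzero squares mod p
criterion = {a for a in range(1, p) if pow(a, (p - 1) // 2, p) == 1}
print(sorted(squares))        # [1, 3, 4, 9, 10, 12]
print(squares == criterion)   # True: the two characterizations agree
```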


(Difficult)
The law of quadratic reciprocity was the hardest to follow in this section. It seems we are reducing the two values (a and p) down to simpler terms, using the properties which define the Jacobi symbol, until we can finally determine a value (+/- 1) for the symbol. The trouble I had was understanding whether there is a specific pattern to follow, whether the order of reduction mattered, or whether the application of each property could be done in a different order and yield the same results.
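To sort out the order question for myself, I coded the reduction in the one standard order I could manage (pull out factors of 2, flip with reciprocity, reduce mod the denominator, repeat). I'm assuming this matches the section's properties, but the loop structure is mine:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    assert n > 0 and n % 2 == 1
    a %= n
    sign = 1
    while a != 0:
        while a % 2 == 0:          # (2/n) = -1 exactly when n = 3 or 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                sign = -sign
        a, n = n, a                # reciprocity: flip the symbol,
        if a % 4 == 3 and n % 4 == 3:
            sign = -sign           # negating when both are 3 (mod 4)
        a %= n                     # reduce the new numerator mod the denominator
    return sign if n == 1 else 0   # 0 signals gcd(a, n) > 1

print(jacobi(5, 21), jacobi(2, 3))  # 1, -1 (small cases I could verify by hand)
```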

Thursday, October 14, 2010

Section 3.9 due Oct. 15th

(Difficult) I understood how the square roots could give us a non-trivial factor of a composite number, but I could not understand how this was computationally equivalent to factoring the composite number. It seems that this would require comparing squares of numbers mod n over and over. Rather than trying to divide by all primes less than the square root of n, we would have to compute the "square" of every number mod n and then compare them with one another. In reality, no two integers less than the square root of n would have the same squared value, so we would only need to check the values greater than the square root of n.
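At least the factor-extraction half of the equivalence I could verify directly; the numbers below are a toy pair I worked out by hand (45^2 and 10^2 are both 23 mod 77):

```python
from math import gcd

n = 77
x, y = 45, 10                        # x^2 = y^2 (mod n) but x != +/-y (mod n)
assert pow(x, 2, n) == pow(y, 2, n)
print(gcd(x - y, n), gcd(x + y, n))  # 7 11 -- a nontrivial factorization of 77
```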

Monday, October 11, 2010

Section 6.2 due Oct. 13th

(Difficult) The concept I struggled with in this section was dealing with timing attacks. I understand how such attacks might give you a rough idea of the order of 'e' or the order of 'n', but I don't understand how they could find the exact binary digits mentioned in the section. The equation at the end of page 174 sounds like we can actually calculate the decryption key just from the timing of each computation in RSA. I also had a hard time understanding how they used statistics in this formula, finding the mean and variance.


(Reflective) The thing that I found most interesting in this lecture was the flexibility of RSA. There are some very well-thought-out attacks against RSA, but almost all of them can be avoided using a few basic guidelines. The way we generate the primes p and q, the way we pick the exponents e and d, and how we construct the message m can all become weaknesses if we are not careful. The only attack that seemed serious was the timing attack, although the book mentions that there are ways to avoid this attack as well. It made me think back to Kerckhoffs's principle. RSA is laid out in the simplest way: everyone has the public key encryption method, and everyone knows how the decryption method works, but it still stands up against cryptanalysis quite well.

Sunday, October 10, 2010

Section 3.12 due Oct. 11th

(Difficult) I understand how the continued fractions are used to get decimal approximations of real numbers, but the theorem in this section was hard to follow. If I understand it correctly, if we have a rational number r/q as an approximation of a real number, and the difference between the rational approximation and the real number is less than 1/(2q^2), then r/q is a partial quotient of the continued fraction. It seems it would be a simple proof to show that any partial quotient will be within a specific range of the real value, but the theorem seems to imply that continued fractions are an optimal approximation when limiting the size of the numerator or denominator. That is, if we want to optimize our approximation with a limited number of digits in the numerator/denominator, continued fractions will guarantee optimization. The first paragraph of the section seems to hint at this, since 22/7 is a better approximation of pi than 314/100, but I am still not sure if that is what the theorem is stating, or what other purpose the theorem may have in this section.

(Reflective) I had never thought of how continued fractions were related to the Euclidean algorithm before. We could in fact calculate the gcd by using continued fractions, although the Euclidean algorithm appears to be easier to use. However, for a computer, continued fractions may be an easier way to calculate the gcd. The processes involved in both are very similar. I am curious as to which would be easier to implement on a computer, and if there are any advantages/disadvantages to either one. At the end of the section, when it talks about partial quotients, it seems as though this method would have an advantage when implemented on a computer, because it says the partial quotients can be calculated without starting a new computation, as we do with each layer of the Euclidean algorithm.
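I tried both on a computer to compare. The partial quotients of a/b turn out to be exactly the quotients the Euclidean algorithm produces, and the convergents really can be extended one step at a time (the recurrence is the standard one; the example numbers are mine):

```python
def cf_and_gcd(a, b):
    """Partial quotients of a/b; each one is a Euclidean-algorithm quotient."""
    quotients = []
    while b != 0:
        quotients.append(a // b)
        a, b = b, a % b
    return quotients, a            # a ends up as the gcd

def convergents(quotients):
    """Build each convergent from the previous two -- no restarting."""
    p0, p1, q0, q1 = 1, quotients[0], 0, 1
    yield p1, q1
    for a in quotients[1:]:
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        yield p1, q1

print(cf_and_gcd(1234, 456))             # ([2, 1, 2, 2, 2, 13], 2)
print(list(convergents([3, 7, 15, 1])))  # pi: (3,1), (22,7), (333,106), (355,113)
```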

Thursday, October 7, 2010

Section 6.1 due Oct 8th

(Difficult) One thing about the RSA algorithm that does not quite make sense is how we can use Euler's or Fermat's theorem when working with the exponent 'e' of a message. In Euler's theorem, a to the power (p-1)(q-1) is congruent to 1 (mod pq) because a is relatively prime to pq (in this case n = pq). Are there limitations on the message size to ensure that the message will be relatively prime to n? It seems that if we are multiplying together two very large prime numbers, the chance of the message being a multiple of one of the primes is very small, but it still poses some concerns. If the message were never larger than either of the primes, then this would not be an issue.
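I tried to settle my own worry numerically with a toy modulus (the primes and exponent below are my choices). For n = pq squarefree, the identity m^(ed) = m (mod n) seems to hold for every m, even multiples of p or q, which I believe follows from applying Fermat's theorem mod p and mod q separately:

```python
p, q = 11, 13
n, phi = p * q, (p - 1) * (q - 1)
e = 7                              # gcd(e, phi) = 1
d = pow(e, -1, phi)                # decryption exponent

# decryption recovers every message m < n, coprime to n or not
assert all(pow(pow(m, e, n), d, n) == m for m in range(n))
print("m^(e*d) = m (mod n) held for all m from 0 to", n - 1)
```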


(Reflective) As we read about the encryption process of RSA, I wondered how it is implemented given today's technology. Would there be any way to prevent two companies from using the same value of n? If two people shared the same value of n, since n has a unique factorization, they would know each other's values of p and q. How could a system be developed to prevent multiple users from having the same encryption and decryption methods? Likewise, although it would take a ridiculous amount of initial computation, Eve could create a multiplication table of values of p and q and calculate n = pq. Although this list would be enormous, given enough storage capacity, Eve would already know the factors for many values of n, and if she found a system using the same value of n in its encryption, she would effectively have its key for encryption and decryption. Maybe n is large enough to make this impractical for modern technology, but it seems that it might pose a threat if newer methods of storage and compression on computer systems are found.

Tuesday, October 5, 2010

Sections 3.6-3.7 due Oct. 6th

(Interesting)
I enjoyed the section on primitive roots. Admittedly, when the idea of a multiplicative generator was covered in Abstract Algebra, I didn't see much of a reason for it. It seemed like nothing more than a neat idea. Since we used this technique with tabulating finite fields, and now with multiplication tables mod n, I am starting to see how many of these seemingly obscure concepts come together. It is amazing how cryptography uses facets from so many branches of mathematics.
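As a quick illustration of the generator idea (my own tiny example): 3 is a primitive root mod 7, so its powers sweep out every nonzero residue before cycling back to 1.

```python
p, g = 7, 3
print([pow(g, k, p) for k in range(1, p)])  # [3, 2, 6, 4, 5, 1] -- all of 1..6
```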

(Difficult)

Euler's function was the most difficult to fully understand. I could see how it worked in the examples, but the proof of it was harder to follow. Knowing why something works can be just as important as knowing how something works, so I hope I will understand it better when it comes up in class.
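While I wait for the proof to come up in class, I at least convinced myself numerically that the counting definition matches the formula used in the examples (the numbers here are mine):

```python
from math import gcd

def phi_by_count(n):
    """Euler's phi by brute force: count 1 <= k <= n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p, q = 5, 11
print(phi_by_count(p * q), (p - 1) * (q - 1))  # 40 40
```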

Sunday, October 3, 2010

Sections 3.4-3.5 due Oct. 4th

(Interesting)

Modular exponentiation seems like it would be amazingly useful in cryptography, especially given the restrictions of current technology. It is fascinating that a calculation can be performed in two different ways, one of which requires only a few hundred calculations, while the other produces intermediate numbers too large for a modern computer to ever store, or would take years of computing time. I am curious to know how this method fits into cracking cryptosystems as well.
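Here is the repeated-squaring idea as I understand it, written out myself (Python's built-in pow already does this, which makes it easy to check): one squaring per bit of the exponent, reducing mod n as we go, so the intermediate numbers never blow up.

```python
def power_mod(x, e, n):
    """Compute x^e mod n with one squaring per bit of e, reducing as we go."""
    result = 1
    x %= n
    while e > 0:
        if e & 1:                      # current bit of e is set:
            result = (result * x) % n  # fold the current square into the answer
        x = (x * x) % n                # square for the next bit
        e >>= 1
    return result

print(power_mod(7, 2**20 + 3, 1000003) == pow(7, 2**20 + 3, 1000003))  # True
```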

(Difficult)

Although I understood the proof regarding the unique solutions for the Chinese Remainder Theorem, I had a hard time following the explanation of how to find a solution. I was also unsure whether we were trying to reduce a number mod a composite into residues mod its primes, or take residues mod two primes and find a number mod the product of the primes.
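My attempt at the construction, going in the second direction (the example congruences are my own): given x = a1 (mod n1) and x = a2 (mod n2) with n1 and n2 coprime, write x = a1 + n1*t and solve for t mod n2.

```python
from math import gcd

def crt(a1, n1, a2, n2):
    """The unique x mod n1*n2 with x = a1 (mod n1) and x = a2 (mod n2)."""
    assert gcd(n1, n2) == 1
    t = ((a2 - a1) * pow(n1, -1, n2)) % n2   # solve a1 + n1*t = a2 (mod n2)
    return (a1 + n1 * t) % (n1 * n2)

print(crt(2, 3, 3, 5))  # 8: indeed 8 = 2 (mod 3) and 8 = 3 (mod 5)
```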