Tuesday, December 7, 2010

Section 16.5 due Dec. 7th

(Difficult) I was left with a few unresolved questions regarding the elliptic curve Diffie-Hellman key exchange. First of all, if Alice and Bob choose N(a) and N(b) such that N(a)N(b)G = "infinity", this would yield a useless key. In choosing the point G and the elliptic curve E, would it be better to verify that the only multiples kG for which kG = "infinity" on E occur when k is extremely large, or would this limitation weaken the security of the system? Secondly, is there a size limitation on the values they pick for N(a) and N(b)? For example N(a)N(b) < p, where p is the modulus of our elliptic curve.
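
To convince myself about where "infinity" shows up, I coded a toy curve in Python: y^2 = x^3 + x + 6 over GF(11), with G = (2, 7). These parameters are just my choice for illustration, not anything from the book.

    # Toy elliptic-curve arithmetic over GF(p); None stands for the point at infinity.
    p, a, b = 11, 1, 6

    def add(P, Q):
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                      # P + (-P) = infinity
        if P == Q:
            m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)

    def mult(k, P):                  # repeated addition, for clarity not speed
        R = None
        for _ in range(k):
            R = add(R, P)
        return R

    G = (2, 7)
    for k in range(1, 15):
        print(k, mult(k, G))         # 13*G prints None (infinity); 14*G = G again

On this curve G has order 13, so kG = "infinity" exactly when 13 divides k. That suggests the answer to my first question is to pick G whose order is a huge prime, so that N(a)N(b) is essentially never a multiple of it.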

(Reflective) After reading about many of the uses of elliptic curves in cryptography, it seems to me that elliptic curves may be even more useful than RSA or ElGamal. The only thing that has not been mentioned here is the performance of each system, that is, which method best preserves security while remaining fast and efficient. We saw with RSA that making our two primes slightly larger would greatly increase the time required to perform a brute-force attack. Are elliptic curves quicker to calculate than fast modular exponentiation?

Friday, December 3, 2010

Section 16.4 due Dec. 6th

(Difficult)
When looking at the laws associated with GF(4), I was unsure how they determined the relationships between the elements of GF(4). Are these laws derived from the structure of the field? That is, can we construct these laws for any GF(2^n) knowing how the field is structured?
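
I tried building GF(4) myself as polynomials over GF(2) modulo x^2 + x + 1 (assuming that is the irreducible polynomial the book uses), and the laws do fall out of the structure. The same code handles any GF(2^n) if you swap in an irreducible polynomial of degree n:

    # GF(4) = GF(2)[x]/(x^2 + x + 1); elements are 2-bit integers, bit 1 = x.
    MOD = 0b111                      # the polynomial x^2 + x + 1

    def gf_mult(a, b, mod=MOD, nbits=2):
        r = 0
        while b:
            if b & 1:
                r ^= a               # addition in GF(2^n) is XOR
            b >>= 1
            a <<= 1
            if a >> nbits:           # reduce by the irreducible polynomial
                a ^= mod
        return r

    for a in range(4):
        print([gf_mult(a, b) for b in range(4)])

The printed multiplication table reproduces the relationships, e.g. row 2 shows x * x = x + 1 (that is, 2 * 2 = 3).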

(Reflective)
I was amazed at the end of the section that they talked about using fields GF(2^n) with n around 150 for cryptographic purposes. Every other system we have studied is severely weakened when we place limitations on the field, like when we reduce n in RSA or reduce p in ElGamal. This left me wondering why this does not leave elliptic curve systems vulnerable to attacks such as brute-force or birthday attacks.

Wednesday, December 1, 2010

Section 16.3 due Dec. 3rd

(Difficult)
I was a little confused by the example on pp. 357-358. The example used factorials multiplied by P, such as 10!P. In the previous examples, we were simply adding P + P + P... until we found a value m for which mP = infinity. Do we only find mP = infinity at m, or at every multiple of m? If not, are we guaranteed to find a solution when working instead with factorials? That is, will the value of m always be an integer equal to some factorial value? Intuitively it seems it must hold at every multiple of m for the factorial method to work, but I don't recall reading this anywhere in this lecture or the previous one.

(Reflective)
I found it interesting that the factoring method based on elliptic curves was so similar to the p-1 method. It never would have occurred to me just from reading about elliptic curves that the factoring methods were so closely related. The lecture mentioned, though, that elliptic curves are most effective on large composite numbers with one relatively small prime factor. Each method we have discussed has its own advantages and disadvantages, but this made me think about the methods of choosing primes. When we talked about primality testing, we read about many probabilistic tests but only one definitive method of testing for primality. If a number is chosen as a prime p, but is actually a pseudoprime, would this leave it vulnerable to any factoring methods?

Monday, November 29, 2010

Section 16.2 due Dec. 1st

(Difficult)
In 16.2.3 the author discusses how we can take a plaintext (a single numerical value) and represent such a message as a point on an elliptic curve. I see how the example worked, but I had a few questions about the details of finding the elliptic curve representation:
1.) If we are not guaranteed that the message will work on the elliptic curve, wouldn't that limit the use of elliptic curves for cryptographic purposes?
2.) When looking at elliptic curves we are dealing with square roots, so is there a standard by which we decide which square root to pick?
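
On both questions, I tried sketching the embedding I think the section describes: append a few spare digits to m, slide x until x^3 + ax + b is a square, and always take a fixed square root. The tiny parameters are made up for illustration, and I am assuming p = 3 (mod 4) so that z^((p+1)/4) gives a square root.

    # Sketch of embedding a message m as a point: try x = m*K + j for j = 0..K-1.
    p, a, b, K = 179, 2, 7, 10       # need m*K + K <= p so decoding x // K = m works

    def embed(m):
        for j in range(K):
            x = m * K + j
            z = (x**3 + a * x + b) % p
            if pow(z, (p - 1) // 2, p) == 1:          # Euler's criterion: z is a square
                return (x, pow(z, (p + 1) // 4, p))   # either root decodes fine; fix one
        return None                                    # fails with probability ~ 2^-K

    print(embed(5))

Each try fails with probability about 1/2, so with K tries the embedding fails with probability around 2^-K, which seems to answer question 1: the method is probabilistic, but the failure chance can be made negligible. And since decoding uses only x (m = x // K), the choice of square root is just a convention.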


(Reflective)
In 16.2.2 we look at the example of a "discrete log" on elliptic curves. The author briefly discusses the difficulty of attacking a discrete log problem on elliptic curves, but nothing is mentioned about using discrete logs on elliptic curves for cryptography. The lecture leads us to believe it would provide a fairly secure method of encryption, but does not give details on how it would work. Is our main focus on using elliptic curves for factorization, or will we be using elliptic curves as a method of encryption as well?

Tuesday, November 23, 2010

Section 16.1 due Nov. 29th

(Difficult)
I was a little confused by the use of the point at infinity as an additive identity. I understand that not all groups have the same additive or multiplicative identities (we are most familiar with zero as the additive identity and one as the multiplicative identity, under ordinary addition and multiplication), but I am not sure I understand the explanation of why infinity is the additive identity.


(Reflective)
 I am glad that the book clearly stated the fact that elliptic curves form an Abelian group. This was something I think I would not have noticed on my own, but it gives elliptic curves a lot of great characteristics for use in cryptography. As we saw with previous cryptosystems, systems which form groups have many advantages, and often a few disadvantages as well. In our examples of DES, it was critical that DES was not a group for Double DES or Triple DES to be effective. I am curious how Abelian groups fit into cryptography.

Monday, November 22, 2010

Section 2.12 due Nov. 23rd

(Difficult) In reading about the Enigma machine, I was a little confused about the method of encryption. It obviously was not a substitution cipher, but of the cryptosystems we have studied so far, it reminded me most of a one-time pad. With that in mind, it would explain the weakness of encoding multiple messages with the same key. There also seemed to be some obvious limitations on the permutations of a given message, at least when compared to substitution ciphers, Vigenère ciphers, etc.

(Reflective) It was nice to read a little about disjoint cycles of permutations in the lecture. This was another topic covered in Abstract Algebra which left me wondering, "when will we ever use this?" I also found it interesting that the cycle lengths each appeared an even number of times. Does this have to do with the design of the enigma machine itself, or more to do with properties of cycles?
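
To play with the cycle idea, here is a small helper that splits a permutation into disjoint cycle lengths. If I remember the argument right, the paired cycle lengths come from the permutations being products of two involutions (a property of cycles of such products) rather than from any accident of the machine's wiring:

    # Split a permutation into its disjoint cycle lengths; perm maps i -> perm[i].
    def cycle_lengths(perm):
        seen, lengths = set(), []
        for i in range(len(perm)):
            if i not in seen:
                n, j = 0, i
                while j not in seen:
                    seen.add(j)
                    j = perm[j]
                    n += 1
                lengths.append(n)
        return lengths

    print(cycle_lengths([1, 0, 3, 4, 2, 5]))   # [2, 3, 1]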

Sunday, November 21, 2010

Section 19.3 due Nov. 22nd

(Difficult) For me the hardest part to follow in Shor's algorithm was the Fourier transform. Although I had briefly seen Fourier transforms used in the EE department, the way it is presented here is far different from what I remember from my electrical engineering class. If I understand correctly, we are using the individual frequencies to determine the period of a periodic sequence.

(Reflective) In reading about quantum computing, I thought a little about what the implications of a quantum computer would be. Although in theory it is a great idea, we obviously need more progress in quantum computing. It seems we have the mathematical theory ready, but fields like engineering, physics, and computer programming may be necessary to truly implement such a computer system.
  

Wednesday, November 17, 2010

Sections 19.1-19.2 due Nov 19th

(Difficult) In the Quantum Key Distribution, I understand the bases B1 and B2 used by Alice and Bob, and I understand how the qubits work, but I don't understand how Alice chooses random entries to create a key. The concept of only using entries where their bases matched is what I am struggling with.
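
Writing out a toy version helped me here. Bob's measurement only matches Alice's bit for certain when he guessed the same basis; for mismatched bases his result is a coin flip, so those positions are useless and get thrown away after the bases (not the bits) are compared publicly. A sketch, with "+" and "x" standing in for the two bases:

    import random

    n = 16
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]
    bob_bases   = [random.choice("+x") for _ in range(n)]

    # If Bob's basis matches Alice's, he reads her bit; otherwise his result is random.
    bob_bits = [a if ab == bb else random.randint(0, 1)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Keep only the positions where the bases happened to match (about half).
    key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    print(key)

The kept positions form the shared key, and the public basis comparison reveals nothing about the bit values themselves.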

(Reflective) Although I can't say I completely understand all the reading in "A Quantum Experiment", I was very interested to see polarization come up in this class. I look forward to the demonstration in class. I think this is a great testament to the fact that we still have so much more to learn about mathematics. Before becoming a math major, I had a very naive view of mathematics, that most everything in mathematics had already been discovered, and all that was left was how to apply mathematics to the real world. In reality, there is still a lot we don't understand about mathematics, and why things work the way that they do.

Tuesday, November 16, 2010

Sections 14.1 - 14.2 due Nov. 17th

(Difficult) The identification scheme in 14.2 was difficult to follow. I understand that the numbers s1, s2, etc. are Peggy's secret numbers, and that Peggy only sends x, y, and the vi's to Victor, but I don't understand how steps 3 and 4 of this algorithm work.
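
I tried coding one round the way I read it, with public values v_i = s_i^2 mod n (the book's version may use inverse squares instead; the idea is the same). Seeing steps 3 and 4 as code helped me: Peggy's response y folds her secrets into the random r, and Victor's check works because y^2 = r^2 * prod(s_i^(2 b_i)) = x * prod(v_i^(b_i)) mod n.

    import random

    n = 77                               # toy modulus p*q; real schemes use huge n
    s = [3, 5]                           # Peggy's secret numbers
    v = [si * si % n for si in s]        # public values v_i = s_i^2 mod n

    r = random.randrange(1, n)
    x = r * r % n                                  # step 1: Peggy commits to x
    bits = [random.randint(0, 1) for _ in s]       # step 2: Victor's random challenge
    y = r
    for si, bi in zip(s, bits):                    # step 3: Peggy's response
        if bi:
            y = y * si % n
    rhs = x
    for vi, bi in zip(v, bits):                    # step 4: Victor's verification
        if bi:
            rhs = rhs * vi % n
    print(y * y % n == rhs)                        # True for an honest Peggy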

(Reflective) In section 14.1, when we talk about Victor choosing a random x1 or x2, it made me think back to the section on pseudo-random bit generation, and which random bit generators would work for a zero-knowledge "proof". If the random bit generator could be predicted, then Eve could use this to fake her way through the test. So now we are considering not only the security of the algorithm used for verification, but also the method used to generate the "random" bits.

Saturday, November 13, 2010

Sections 12.1 and 12.2 due Nov. 15th

(Difficult) The Shamir threshold scheme was a little confusing to me. I think I understand how each pair (x, y) is chosen, but I'm not sure I understand why we know the determinant of V will be non-zero. My other concern is whether or not this method works only once. In other words, do we need to replace the polynomial s(x) with new random values s1, s2, etc. each time this method is used?
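
A sketch of the (3, 5) case in Python, as I understand it. On my two questions: the determinant is a Vandermonde determinant, which is nonzero mod p because the x-coordinates are distinct and p is prime; and yes, fresh random coefficients should be chosen for each new secret.

    import random

    p = 2087                     # prime larger than the secret and the user count
    secret, t, w = 1234, 3, 5
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]

    def f(x):                    # s(x) = secret + s1*x + s2*x^2 (mod p)
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

    shares = [(x, f(x)) for x in range(1, w + 1)]

    def reconstruct(pts):        # Lagrange interpolation evaluated at x = 0
        total = 0
        for i, (xi, yi) in enumerate(pts):
            num = den = 1
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            total = (total + yi * num * pow(den, -1, p)) % p
        return total

    print(reconstruct(shares[:3]))   # any t = 3 shares recover 1234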

(Reflective) There are two different types of secret sharing in the examples from 12.1. The first case is one person giving a shared secret to various people. The second case involves a secret where no single person (including the inventor of the secret code) should be able to generate the whole code by themselves. The methods in the book all require that the code be constructed first, and then the parts of the code are distributed among multiple people. I tried to find a way to construct such a code so that no one would have access to the actual code. Is there a way to make a scheme where no one (not even those distributing each part of the code) would have complete knowledge of the code? Would the secret code M be encrypted and then stored somewhere?

Wednesday, November 10, 2010

Exam 2 Prep for Nov. 12th

Which topics/ideas do you think are most important?
I think the most important topics are those covered in Chapter 3. Everything we have covered in Chapters 6 through 9 relies heavily upon the theory covered in Chapter 3.

What kind of questions do you expect on the test?
I imagine we will have a question which covers some of the attacks on RSA and ElGamal. As for the mathematical work, I think there should be at least one question regarding the Chinese Remainder Theorem, square roots mod n, computing discrete logs, and maybe Legendre or Jacobi symbols. There should also be some theoretical discussion of birthday attacks, and maybe a probability estimation.

What do you need to work on?
The material I need to study most is discrete logs and the hash functions. I am still a little lost on the layout of SHA-1. I also need to go back and review the Jacobi and Legendre symbols, since it has been a while since we used them. We have done enough practice with RSA that I feel confident with most of the methods of factoring n and the theory behind modular exponentiation.

What topics are you interested in studying the rest of the semester?
I am most interested in Chapters 13 and 17. We touched briefly on lattices in Math 371, but only enough to pique some interest in the topic. As for Chapter 13, the games sound like a fun application of cryptography. I have already skimmed through Ch. 13, and it shows just how versatile cryptosystems can be.

Tuesday, November 9, 2010

Sections 8.3 and 9.5 due Nov. 10th

(Difficult) The secure hash algorithm was quite difficult to follow. I understand each of the individual operations mentioned (AND, OR, XOR, NOT, and addition mod 2^32), but trying to picture the entire algorithm was tough. I understand that once the system is designed, the user doesn't have to understand what's really going on to use it, but as software programmers or cryptanalysts it would be critical to understand the overall picture. If you go over it in class I am sure it will make a little more sense, but are there any suggestions for visualizing the system better? I had this same stumbling block with AES, trying to see the "big picture" of what was going on.
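
What finally helped me picture it was writing just the skeleton of one of the 80 steps, ignoring padding and the message schedule: five 32-bit registers where everything feeds into the new a, and the rest shift down one slot. A sketch of the shape only, not a working SHA-1:

    MASK = 0xFFFFFFFF

    def rotl(x, n):                       # rotate a 32-bit word left by n bits
        return ((x << n) | (x >> (32 - n))) & MASK

    def round_step(a, b, c, d, e, f, k, w):
        # one of 80 steps: a new 'a' absorbs everything; b..e just slide down
        temp = (rotl(a, 5) + f + e + k + w) & MASK
        return temp, a, rotl(b, 30), c, d

    # the round function f changes every 20 steps; for steps 0-19 it is:
    f_ch = lambda b, c, d: (b & c) | (~b & d)

Seen this way, the whole compression function is this one step repeated 80 times with a round-dependent f and constant k.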

(Reflective) As I read about the secure hash algorithm, I was thinking to myself that all of the operations were very basic. The individual operations (XOR, NOT, addition mod 2^32) are simple, and some even have obvious inverses. What came to mind next was that the great thing about hash functions is that they are not an "encryption": there is no one-to-one mapping from a plaintext to a hash. This many-to-one property is an integral part of what makes hash functions secure.

Sunday, November 7, 2010

Sections 9.1-9.4 due Nov 8th

(Difficult) I understand that we may learn more about this in a later section of the book, but the lecture briefly mentioned the differences between the RSA and ElGamal signature schemes. RSA is a message recovery scheme, and ElGamal is not? Why is it harder to retrieve the message m from the ElGamal system?

(Reflective) In the section that talked about signing the hash of a message, I started thinking about the pros and cons of signing a hash as opposed to signing the original message. In most cases a hash will be shorter, and thus faster to sign. However, if hash outputs are too small, they are more susceptible to brute-force attacks. Is there a common size for hash outputs today? Has this size changed over the years as technology and techniques have developed? How does the probability of finding two messages with the same hash change if the messages are of a fixed length, or arbitrarily long?
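
The last question can be estimated with the birthday bound: assuming the hash behaves like a random function, the collision probability after k messages is roughly 1 - e^(-k^2 / 2^(n+1)) for an n-bit hash, independent of how long the messages themselves are. A quick check:

    import math

    def collision_prob(k, nbits):            # P(some two of k hashes collide)
        return 1 - math.exp(-k * k / (2 * 2.0 ** nbits))

    for nbits in (40, 64, 128, 160):
        k = 2 ** (nbits // 2)                 # the "square root" threshold
        print(nbits, "bit hash:", round(collision_prob(k, nbits), 2), "at", k, "messages")

So an n-bit hash offers only about n/2 bits of collision resistance, which seems to be why output sizes have kept growing; SHA-1's 160 bits target the 2^80 level.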

Thursday, November 4, 2010

Section 8.4-8.5 and 8.7 due Nov. 5th

(Difficult) I have been curious about the birthday attack since it was first mentioned in a previous chapter, but I was somewhat confused by 8.4. The birthday attack seems quite amazing, and the use of probability in cryptography is quite interesting, but I was a little disappointed when the birthday attack was compared with baby step, giant step. Both methods require roughly the same storage capacity and time to calculate, but the BSGS method is guaranteed, whereas the birthday attack will only probably produce a solution. Birthday attacks seem effective on hash functions, but are there similar deterministic methods for finding collisions in hash functions?
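
Since the section compares the two, here is baby step, giant step in a few lines. It is the deterministic sqrt-time, sqrt-memory method for discrete logs, while the birthday attack gets the same exponent only probabilistically:

    import math

    def bsgs(a, b, p):                 # solve a^x = b (mod p)
        m = math.isqrt(p - 1) + 1
        baby = {pow(a, j, p): j for j in range(m)}     # baby steps a^j
        giant = pow(a, -m, p)                          # a^(-m) mod p
        g = b
        for i in range(m):
            if g in baby:
                return i * m + baby[g]                 # x = i*m + j
            g = g * giant % p
        return None

    print(bsgs(2, 9, 11))    # 6, since 2^6 = 64 = 9 (mod 11)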

(Reflective) I enjoyed section 8.7 on using hash functions to encrypt. Last week when we first talked about hash functions, it seemed a little odd that we were discussing something in class that wasn't a cryptosystem. In reality, hash functions are not stand-alone cryptosystems, but they can be used to add functionality to an existing cryptosystem. If I understand the example in the section correctly, this is another method by which we can generate the rest of the "pseudo-random" bytes even before receiving the first plaintext.
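
My reading of the idea, as a sketch (the book's construction may differ in details, for example feeding back ciphertext instead of a counter): hash the shared key together with a counter to make a keystream, then XOR like a stream cipher.

    import hashlib

    def keystream(key, nbytes):
        out, counter = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha1(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:nbytes]

    msg = b"attack at dawn"
    ks = keystream(b"shared secret", len(msg))
    cipher = bytes(m ^ k for m, k in zip(msg, ks))
    print(bytes(c ^ k for c, k in zip(cipher, ks)))   # XORing again decrypts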

Tuesday, November 2, 2010

Sections 8.1 and 8.2 due Nov. 3rd

(Difficult) I am having a hard time understanding what hash functions are used for. I know the lecture mentioned using them for digital signatures and error checking, but there really is no decryption involved, right? Everything we have dealt with so far has had an inverse function; even in the case of RSA, where the inverse is difficult to find without the primes p and q, there still existed an inverse function of some sort. Are there hash functions with no inverse function at all?

(Reflective) I found the data integrity application of hash functions to be quite interesting. This is a clever way to prevent Eve from modifying a message en route to Bob. I was curious, though: one thing Eve could do is create her own message, run it through the hash function (if she knows the function), and send m and H(m). Are hash functions used in conjunction with other cryptographic systems? It seems some other method would be needed to verify who the message came from. I understand we are just being introduced to hash functions now, but I hope at some point in the class we will see how various cryptosystems might be used together in a real-world application.
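
From what I can tell, the standard fix for exactly this forgery is to mix a shared secret key into the hash, i.e. a message authentication code. Python ships one (HMAC, which came later than the plain hash-for-integrity idea):

    import hmac, hashlib

    key = b"shared secret"
    tag = hmac.new(key, b"meet at noon", hashlib.sha1).hexdigest()
    print(tag)    # Eve can hash, but she can't recompute this without the key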

Saturday, October 30, 2010

Sections 7.3 - 7.5 due Nov. 1st

(Difficult) In section 7.5, I had a hard time understanding the proof concerning the machine that could solve decision Diffie-Hellman problems and decide the validity of mod p ElGamal ciphertexts. The main point I didn't understand was why the ElGamal machine outputs "yes" exactly when m = 1, where m is the same as ca^(-xy).


(Reflective) When reading about bit commitment and the key exchange, I was thinking about how much more versatile public key encryption is compared to private key encryption. I know we discussed this at the beginning of the class, but it is nice to see all the examples of how public key systems can be used. When we talked about using RSA to transfer a DES key, I wasn't completely sure how it would work, but the Diffie-Hellman key exchange is an ingenious approach to using discrete logarithms to construct a shared key between two people.
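
Part of what makes it so elegant is that the whole exchange fits in a few lines (toy numbers, of course):

    import random

    p, g = 2819, 2                       # public prime and base (tiny, for show)
    x = random.randrange(2, p - 1)       # Alice's secret exponent
    y = random.randrange(2, p - 1)       # Bob's secret exponent
    A, B = pow(g, x, p), pow(g, y, p)    # these travel in the clear
    print(pow(B, x, p) == pow(A, y, p))  # both sides compute g^(xy) mod p

Eve sees p, g, A, and B, but recovering the shared key from them is exactly the discrete log (or Diffie-Hellman) problem.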

Thursday, October 28, 2010

Section 7.2 due Oct. 29th

(Difficult) I am not sure if I completely understand this, but does the difficulty of discrete logarithms come from the need to factor p-1? I noticed in the example of 7^x = 12 (mod 41) it was necessary to first factor the number 40. In practice, would p-1 be chosen as a product of two large primes, or does it matter if p-1 factors into more than two primes?
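
I tried the book's example in Python to see where the factorization of 40 = 8 * 5 comes in: raising both sides to (p-1)/q collapses the problem into a subgroup of size q, and the small answers are glued back together with the Chinese Remainder Theorem. This suggests that for security, p-1 needs at least one large prime factor.

    # Pohlig-Hellman idea on 7^x = 12 (mod 41), where p - 1 = 40 = 8 * 5.
    p = 41

    def dlog_subgroup(g, h, q):          # brute force inside a subgroup of size q
        for k in range(q):
            if pow(g, k, p) == h:
                return k

    # raise both sides to (p-1)/q to project the problem into the order-q part
    x5 = dlog_subgroup(pow(7, 40 // 5, p), pow(12, 40 // 5, p), 5)   # x mod 5
    x8 = dlog_subgroup(pow(7, 40 // 8, p), pow(12, 40 // 8, p), 8)   # x mod 8
    # combine with the Chinese Remainder Theorem
    x = next(x for x in range(40) if x % 5 == x5 and x % 8 == x8)
    print(x, pow(7, x, p))               # 13 12, and indeed 7^13 = 12 (mod 41)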

(Reflective) The Pohlig-Hellman algorithm reminded me of the p-1 factoring algorithm for RSA. Although I don't completely understand it, it is interesting how the factorization of p-1 is used in problems about modular exponentiation mod p, both with RSA and with discrete logarithms. The two methods are obviously related, since they both involve modular exponentiation, but after reading this section, computing discrete logs still seems rather foreign compared to the RSA algorithm.

Tuesday, October 26, 2010

Section 6.5-6.7 and 7.1 due Oct. 27th

(Difficult) The concept of a trapdoor was a little confusing to me. I can understand how it might be possible to find an easier way of inverting a one-way function, but it is hard to imagine why the designer of the original function would have any advantage over anyone else in finding a trapdoor. If everyone has access to the public encryption method, it would seem that everyone would also have an opportunity to try to find a trapdoor. It would be nice to see an example of how this might work, to better understand how a trapdoor could be built into a cryptosystem.


(Reflective) The RSA challenge was a great example of overestimating the security of a cryptosystem. It was interesting to see that they estimated it would take 4 x 10^16 years to decrypt the message, but in reality it took less than a year in 1994. It goes to show how important it is to analyze a cryptosystem and look for its weaknesses before making any assumptions about the overall security.

Sunday, October 24, 2010

Section 6.4.1 due Oct. 25th

(Difficult) The hardest part of this section to understand was where the author came up with the original squares he uses at the start of the section. If I understand it correctly, he takes squares which are just beyond n, so that they are small (mod n). This sounded a lot like what we did in class a few days ago, but the book's explanation seemed a little out of order; it sort of pulled the squares out of thin air, then tried to explain them later. Was this method similar to the Fermat factorization method, finding values close to n, such as n + 1^2, n + 2^2, etc., until we find a square? It seems that method would also yield squares that are small (mod n), which should factor into relatively small primes.
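
Playing with the idea numerically made the choice of squares feel less like thin air: integers just above sqrt(k*n) have squares that are small mod n, and small residues are the ones likely to factor over small primes. (I am using the n from the book's example, if I copied it correctly.)

    import math

    n = 3837523
    for k in range(1, 6):
        x = math.isqrt(k * n) + 1
        print(x, x * x % n)      # the residues are tiny compared to n itself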


(Reflective) I liked the table included in the book showing the number of digits in factorization records over the last few decades. Clearly RSA has had a great influence on the study of factoring, and even today people continue to develop or improve factoring methods. This is a great example of what interests me most about cryptography: it continues to expand, and its study is always new and always interesting. Even though the math community has a great understanding of encryption methods, there is always more to learn and more to explore. It leaves open a great deal of opportunity for continued research, greater understanding, and continued education. As I get closer to graduation, I realize I want a career that allows me to further my education, to always be pushing myself and my mental capacity a little further.

Thursday, October 21, 2010

Section 6.4 due Oct. 22nd

(Interesting) Fermat's factorization method is rather clever. It appears to be similar to the continued fraction attack, since they both involve squares. The nice thing about Fermat's method is its simplicity: it really only involves squaring numbers and finding square roots; everything else is addition and subtraction.
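
It really is short; a direct translation into Python:

    # Fermat's method: find x with x^2 - n = y^2, so n = (x - y)(x + y).
    import math

    def fermat(n):
        x = math.isqrt(n)
        if x * x < n:
            x += 1
        while True:
            y2 = x * x - n
            y = math.isqrt(y2)
            if y * y == y2:              # x^2 - n is a perfect square
                return x - y, x + y
            x += 1

    print(fermat(295927))    # (541, 547)

It is fast when the two factors are near sqrt(n) and slow otherwise, which I gather is why it gets combined with the other ideas in this chapter.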

(Difficult) The p-1 factoring algorithm was difficult to follow. It appeared more like an RSA attack than a factoring method, since factorization is not guaranteed by this method. I would really appreciate an example of this algorithm in class.
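
Until we do one in class, here is my attempt in Python. The trick, as I understand it: compute a^(B!) mod n; if some prime p dividing n has p - 1 built from small primes, then p - 1 divides B!, so a^(B!) = 1 (mod p) by Fermat, and gcd(a^(B!) - 1, n) catches p.

    import math

    def pollard_p_minus_1(n, B=20, a=2):
        for j in range(2, B + 1):
            a = pow(a, j, n)             # after the loop, a = 2^(B!) mod n
        d = math.gcd(a - 1, n)
        return d if 1 < d < n else None  # may fail; the method isn't guaranteed

    print(pollard_p_minus_1(5110067))    # finds 2521: 2520 = 2^3 * 3^2 * 5 * 7 is smooth

The other factor here is 2027, and 2026 = 2 * 1013 has the large prime 1013, which is exactly why the gcd isolates 2521 and not the whole n.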

Monday, October 18, 2010

Section 6.3 due Oct. 20th

(Difficult) The most difficult part to understand in this section was the Miller-Rabin primality test. The process itself seemed straightforward, except for how we get the value m in n - 1 = 2^k m. It seems as though we take n - 1, then divide it by 2 until the quotient is an odd number, and use that odd number as m? Once we have m, the test is fairly straightforward: square 2^m (mod n) until we get a result that is +/- 1. The other question I had was whether we are guaranteed to get a -1 or +1 for every test. Is it possible that some (mod n) group would have no square congruent to 1 or -1? That would imply that no element of the group was its own inverse, right?
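
Coding the test for a single base a at least confirmed my "divide by 2 until odd" reading. On the other question: if n is prime, the squaring sequence must reach 1, and the value just before the first 1 must be -1, since +/-1 are the only square roots of 1 mod a prime; for composite n the sequence can reach 1 through a nontrivial square root of 1, and that is exactly what betrays compositeness.

    def miller_rabin(n, a=2):
        m, k = n - 1, 0
        while m % 2 == 0:
            m //= 2                    # divide n - 1 by 2 until it's odd
            k += 1
        x = pow(a, m, n)
        if x == 1 or x == n - 1:
            return True                # probably prime
        for _ in range(k - 1):
            x = x * x % n
            if x == n - 1:
                return True            # hit -1 somewhere in the squarings
        return False                   # definitely composite

    print(miller_rabin(561), miller_rabin(569))   # Carmichael 561 fails; prime 569 passes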

(Reflective) I was thinking about why a definitive primality test is more difficult than proving a number composite, and in reading pg. 181 I realized it goes back to the Jacobi symbol, where we had (a/n) = 1. Since (a/n) was equal to 1, if n was composite then (a/p) and (a/q) would be equal, and could be either +1 or -1. Thus, without factoring n into p and q, we could not say whether a was a square or not. Likewise, primality testing leaves us with a "probable answer" but doesn't guarantee a prime without further work. Given the similarities between testing for square roots mod n and testing primality, it would seem the only "simple" way to prove primality would be to factor the number. My question now is how the AKS algorithm proves primality without factoring.

Saturday, October 16, 2010

Section 3.10 due Oct. 18th

(Reflective)
The proposition that x^2 is congruent to a (mod p) has a solution if and only if a^((p-1)/2) is congruent to 1 (mod p) made me think more about primitive roots. Does this proposition imply that no primitive root (mod p) is the square of another number? The primitive root (or multiplicative generator) generates a cyclic group in group theory, so is there a definitive relation between square roots mod p and cyclic groups?
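
I checked my suspicion numerically for p = 13 with primitive root 2: writing every nonzero element as a power of the generator, the squares are exactly the even powers, so a primitive root can never be a square (it is an odd power of itself), and g^((p-1)/2) comes out to -1.

    p, g = 13, 2
    squares = {x * x % p for x in range(1, p)}
    even_powers = {pow(g, k, p) for k in range(0, p - 1, 2)}
    print(squares == even_powers)     # True: QRs = even powers of the generator
    print(pow(g, (p - 1) // 2, p))    # 12, i.e. -1: the generator is not a QR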


(Difficult)
The law of quadratic reciprocity was the hardest part of this section to follow. It seems we are reducing the two values (a and p) using the properties that define the Jacobi symbol, down to simpler terms, until we can finally determine a value (+/- 1) for the Jacobi symbol. The trouble I had was understanding whether there is a specific pattern to follow, whether the order of reduction matters, or whether the properties can be applied in a different order and still yield the same result.
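
Writing the reduction as a loop convinced me there is a fixed recipe (strip factors of 2, flip by reciprocity, reduce mod, repeat), and since the symbol's value is well-defined, applying valid properties in a different order has to land on the same +/-1, just by a different path. My sketch:

    def jacobi(a, n):                 # n odd and positive
        a %= n
        result = 1
        while a:
            while a % 2 == 0:         # pull out 2s with the (2/n) rule
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a               # quadratic reciprocity flip
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    print(jacobi(4567, 12345))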

Thursday, October 14, 2010

Section 3.9 due Oct. 15th

(Difficult) I understood how the square roots could give us a non-trivial factor of a composite number, but I could not understand how this was computationally equivalent to factoring the composite number. It seems this would require comparing pairs of squared numbers mod n over and over. Rather than trying to divide by all primes less than the square root of n, we would have to compute the "square" of every number mod n and then compare them with one another. Of course, no two distinct positive integers less than the square root of n have the same square mod n, so we would only need to check values greater than the square root of n.
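
The direction that I did follow is worth a tiny demo: any two square roots x != +/-y of the same value mod n hand us a factor, since n divides (x - y)(x + y).

    import math

    n = 221                      # 13 * 17
    x, y = 30, 4                 # 30^2 = 900 = 16 (mod 221), and 4^2 = 16
    print(math.gcd(x - y, n))    # 13, a nontrivial factor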

Monday, October 11, 2010

Section 6.2 due Oct. 13th

(Difficult) The concept I struggled with in this section was the timing attacks. I understand how such attacks might give you a rough idea of the order of e or of n, but I don't understand how they could find exact binary digits as mentioned in the section. The equation at the end of page 174 suggests we can actually calculate the decryption key just from the timing of each computation in RSA. I also had a hard time understanding how they used statistics in this formula, finding the mean and variance.


(Reflective) The thing that I found most interesting in this lecture was the flexibility of RSA. There are some very well-thought-out attacks against RSA, but almost all of them can be avoided using a few basic guidelines. The way we generate the primes p and q, the way we pick the exponents e and d, and how we construct the message m can all become weaknesses if we are not careful. The only attack that seemed serious was the timing attack, although the book mentions there are ways to avoid it as well. It made me think back to Kerckhoffs's principle. RSA is laid out in the simplest way: everyone has the public encryption method, and everyone knows how the decryption method works, but it still stands up against cryptanalysis quite well.

Sunday, October 10, 2010

Section 3.12 due Oct. 11th

(Difficult) I understand how continued fractions are used to get decimal approximations of real numbers, but the theorem in this section was hard to follow. If I understand it correctly: if we have a rational number r/q as an approximation of a real number, and the difference between the rational approximation and the real number is less than 1/(2q^2), then r/q is a convergent of the continued fraction expansion. It seems it would be a simple proof to show that any convergent lies within a specific range of the real value, but the theorem seems to imply that continued fractions give an optimal approximation when limiting the size of the numerator or denominator. That is, if we want to optimize our approximation with a limited number of digits in the numerator and denominator, continued fractions guarantee optimization. The first paragraph of the section seems to hint at this, since 22/7 is a better approximation of pi than 314/100, but I am still not sure if that is what the theorem is stating, or what other purpose the theorem may have in this section.
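
Computing pi's convergents helped me see what the theorem does and does not say: every r/q that approximates x to within 1/(2q^2) must be a convergent, but not every convergent beats that bound. In the output below, 22/7 and 355/113 do, while 333/106 does not.

    import math

    x, t, a = math.pi, math.pi, []
    for _ in range(6):                  # partial quotients: [3, 7, 15, 1, 292, 1]
        a.append(int(t))
        t = 1 / (t - int(t))

    p0, q0, p1, q1 = 1, 0, a[0], 1      # standard convergent recurrence
    for ak in a[1:]:
        p0, q0, p1, q1 = p1, q1, ak * p1 + p0, ak * q1 + q0
        print(p1, "/", q1, " error:", abs(x - p1 / q1), " 1/(2q^2):", 1 / (2 * q1 * q1))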

(Reflective) I had never thought about how continued fractions relate to the Euclidean algorithm before. We could in fact calculate the gcd using continued fractions, although the Euclidean algorithm appears easier to use by hand. For a computer, however, continued fractions may be an easier way to calculate the gcd; the processes involved in both are very similar. I am curious which would be easier to implement on a computer, and whether there are any advantages or disadvantages to either one. At the end of the section, where it talks about partial quotients, it seems this method would have an advantage when implemented on a computer, because it says the partial quotients can be calculated without starting a new computation, as we do with each layer of the Euclidean algorithm.
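
The connection can be made literal: the quotients the Euclidean algorithm produces for (a, b) are exactly the partial quotients of the continued fraction of a/b, and the last nonzero remainder is the gcd.

    def euclid_quotients(a, b):
        qs = []
        while b:
            qs.append(a // b)          # the quotient is the next partial quotient
            a, b = b, a % b
        return qs, a                   # (partial quotients of a/b, gcd)

    print(euclid_quotients(1071, 462))   # ([2, 3, 7], 21): 1071/462 = [2; 3, 7]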

Thursday, October 7, 2010

Section 6.1 due Oct 8th

(Difficult) One thing about the RSA algorithm that does not quite make sense is how we can use Euler's or Fermat's theorem to justify raising a message to the exponent e. In Euler's theorem we could say a to the power (p-1)(q-1) is congruent to 1 (mod pq) because a is relatively prime to pq (in this case pq = n). Are there limitations on the message size to ensure that the message will be relatively prime to n? If we are multiplying together two very large prime numbers, the chance of the message being a multiple of one of the primes seems very unlikely, but it still poses some concern. If the message were never larger than either of the primes, this would not be an issue.
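
I tested this worry with a toy RSA, including a message that shares a factor with n; the round trip still works, because the usual proof can be run mod p and mod q separately (Fermat applies on the side that doesn't divide m) and glued together by the Chinese Remainder Theorem.

    p, q = 61, 53
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17
    d = pow(e, -1, phi)              # decryption exponent

    for m in (65, 61):               # gcd(61, n) = 61 != 1, yet it still decrypts
        c = pow(m, e, n)
        print(m, pow(c, d, n))       # prints m back both times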


(Reflective) As we read about the encryption process of RSA, I wondered how it is implemented given today's technology. Would there be any way to prevent two companies from using the same value of n? If two people shared the same value of n, then since n has a unique factorization, they would know each other's values of p and q. How could a system be developed to prevent multiple users from having the same encryption and decryption methods? Likewise, although it would take a ridiculous amount of initial computation, Eve could create a multiplication table of primes p, q and calculate n = pq. Although this list would be enormous, given enough storage capacity Eve would already know the factors for many values of n, and if she found a system using one of those values of n in its encryption, she would effectively have its key for encryption and decryption. Maybe n is large enough to make this impractical for modern technology, but it seems it might pose a threat if newer methods of storage and compression on computer systems are found.

Tuesday, October 5, 2010

Sections 3.6-3.7 due Oct. 6th

(Interesting)
I enjoyed the section on primitive roots. Admittedly, when the idea of a multiplicative generator was covered in Abstract Algebra, I didn't see much of a reason for it; it seemed like nothing more than a neat idea. Since we have used this technique with tabulating finite fields, and now with multiplication tables mod n, I am starting to see how many of these seemingly obscure concepts come together. It is amazing how cryptography uses facets of so many branches of mathematics.

(Difficult)

Euler's function was the most difficult to fully understand. I could see how it worked in the examples, but the proof of it was harder to follow. Knowing why something works can be just as important as knowing how something works, so I hope I will understand it better when it comes up in class.
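
Brute-force checks helped me trust the result even before following the whole proof: counting directly agrees with the product formula phi(n) = n * prod(1 - 1/p) and with multiplicativity.

    from math import gcd

    def phi_count(n):                 # count the integers in [1, n] coprime to n
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    print(phi_count(36), 36 * (1 - 1/2) * (1 - 1/3))      # 12 and 12.0
    print(phi_count(5 * 7), phi_count(5) * phi_count(7))  # multiplicative: 24, 24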

Sunday, October 3, 2010

Sections 3.4-3.5 due Oct. 4th

(Interesting)

Modular exponentiation seems like it would be amazingly useful in cryptography, especially given the restrictions of current technology. It is fascinating that a calculation can be performed in two different ways, one requiring only a few hundred operations, and the other requiring more operations than a modern computer could store or compute given years of computing time. I am curious how this method fits into cracking cryptosystems as well.
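
The few-hundred-operations version is square-and-multiply on the binary digits of the exponent, and it was short enough to write out:

    def power_mod(base, exp, mod):
        result = 1
        base %= mod
        while exp:
            if exp & 1:                    # current binary digit of the exponent
                result = result * base % mod
            base = base * base % mod       # square for the next digit
            exp >>= 1
        return result

    print(power_mod(7, 10**20, 101), pow(7, 10**20, 101))   # same answer, instantly

Roughly 2 * log2(e) multiplications instead of e of them, which is the whole gap between feasible and impossible here.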

(Difficult)

Although I understood the proof regarding the unique solutions in the Chinese Remainder Theorem, I had a hard time following the explanation of how to find a solution. I was also unsure whether we were trying to reduce a composite modulus into its prime parts, or take two primes and find a number mod the product of the primes.
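
For the constructive side, this is the two-congruence version that finally clicked for me (and the answer to my own second question seems to be that both views are valid; this direction builds the number mod the product):

    def crt(a, m, b, n):
        # solve x = a (mod m), x = b (mod n) for coprime m, n:
        # write x = a + m*t and solve for t mod n
        t = (b - a) * pow(m, -1, n) % n
        return (a + m * t) % (m * n)

    print(crt(2, 3, 3, 5))    # 8: the number that is 2 mod 3 and 3 mod 5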

Thursday, September 30, 2010

Preparation for Exam 1

Which topics and ideas do you think are the most important out of those we have studied?
For this class I think the most important topics are those based in abstract algebra and linear algebra; most of the mathematics that goes into cryptography is derived from these two areas. I think the analysis of cryptosystems is the next most important idea we have covered: our ability to understand how a particular cryptosystem works and what advantages and disadvantages it has.



What kinds of questions do you expect to see on the exam?
I imagine there will be questions in which we will use specific cryptosystems either to encrypt or decrypt a message. I also expect a few questions in which we will be asked to find weaknesses in different cryptosystems. Of course there will be some questions to test our understanding of the underlying mathematical principles (mods, linear algebra, fields, etc.). There should also be a few questions to test our knowledge of the terminology used in cryptography.



What do you need to work on understanding better before the exam?
I need to work on my understanding of the modes of encryption: ECB, CFB, OFB, CTR. I feel comfortable with the linear and abstract algebra covered in class, but I need to understand the modes if there are any encryption problems on the test. Right now I can perform all the encryption modes if I reference them from the book, but without looking at the drawings for each one, I have a hard time remembering the order of the encryption process.

Tuesday, September 28, 2010

Section 5.1 - 5.4 due Sept. 29th

(Interesting) To me the most interesting part of the lecture was the fact that Rijndael can use 128-, 192-, or 256-bit keys. I had not considered this so far in our study of encryption systems, but versatility is definitely important with today's technology. It allows an adjustable balance between the speed and security of a cryptosystem; the strongest security system is useless if it takes far too long to encrypt or decrypt a message.

(Difficult)

The key schedule was the toughest part of the lecture to understand. It is hard to visualize the computations involved in deriving the round keys from the original key.

Monday, September 20, 2010

Section 4.5 - 4.8 due Sept. 22nd

(Interesting)

I enjoyed the reading in section 4.8 on password security. I had never considered the fact that passwords have to be protected in storage to remain safe. With a better understanding of computer programming this would probably have seemed obvious, but I had not thought about how a password is stored. Passwords are so commonplace now on the internet (for email accounts, bank accounts, online purchases, etc.) that clearly there is a need to protect them as well. One thing I would like to learn more about is how to prevent "Eve" from simply sending the same ciphertext to pretend to know the same password.
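
A sketch of the storage side as I understand it: the server keeps only a salted hash, never the password, and re-hashes whatever the user types at login. (The replay question is a separate problem; from what I can tell it needs a challenge-response protocol rather than just hashing.)

    import hashlib, os

    salt = os.urandom(8)                                   # random per-user salt
    stored = hashlib.sha1(salt + b"hunter2").digest()      # what the server keeps

    attempt = b"hunter2"
    print(hashlib.sha1(salt + attempt).digest() == stored) # login check: True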

(Difficult)

I found the explanation of meet-in-the-middle attacks to be the most difficult part of the reading. I understand that double encryption with a group will not add any more security, since the group is closed under the operation. The meet-in-the-middle attack, however, seems to allow Eve to attack one key at a time, which then reduces the possible combinations for the second key; I still don't see the immediate security weakness here.
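
The weakness made sense to me once I wrote a toy version: with a known (m, c) pair, Eve encrypts m under every possible first key and decrypts c under every possible second key, and any agreement in the middle is a candidate key pair. That is two scans of the key space instead of one scan of the squared key space. The "cipher" below is a made-up invertible byte function, purely for shape:

    from collections import defaultdict

    def enc(k, m):
        return (17 * m + 23 * k + k * k) % 256

    def dec(k, c):
        return (c - 23 * k - k * k) * pow(17, -1, 256) % 256

    k1, k2, m = 0x3A, 0xC5, 0xB1
    c = enc(k2, enc(k1, m))                  # double encryption, 16 key bits total

    middles = defaultdict(list)              # forward table of all possible middles
    for k in range(256):
        middles[enc(k, m)].append(k)
    pairs = [(ka, kb) for kb in range(256)   # meet it from the ciphertext side
             for ka in middles[dec(kb, c)]]
    print((k1, k2) in pairs, len(pairs))     # True, among a few hundred candidates

With 8-bit keys that is roughly 2^9 steps instead of 2^16, and a second known pair would prune the candidate list. For Double DES the same idea turns 2^112 work into roughly 2^57, which I gather is why doubling DES buys so little.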

Sunday, September 19, 2010

Section 4.1, 4.2, and 4.4 due Sept. 20th

(Interesting) I was most interested in section 4.4, when they brought up the question of whether DES is a group. When we designed our cryptosystem in class, I assumed working with groups (such as mods under addition or multiplication) would be the best way to go because the operations are closed. Because they are closed under addition or multiplication, it is easy to ensure your outputs stay in the correct range of acceptable values for encrypting a message. However, I had not considered the possible security weaknesses of groups. When we constructed our own cryptosystem, I had considered using two keys to encrypt our messages. After thinking about group theory, I realized that using two keys would be equivalent to using only one key. This means that rather than adding security, this type of encryption would only be redundant, and would waste more time and resources on encryption.

(Difficult) The most difficult part of the reading for me was the function used in DES. Although the flowchart on page 126 was well organized, I did not completely understand what was going on in the DES round function. I understand how the expander function and the XORing work, but the S-boxes were where I got lost.

Thursday, September 16, 2010

Section 2.9 - 2.11 due Sept. 17th

(Interesting) I found myself thinking specifically about the one-time pad after completing the reading tonight. It appears to be a wonderful design in theory, but it seems nearly impossible in practice. However, I think that even in theory it falls short when we consider Kerckhoffs's principle: the security of the system should depend on the key and not the obscurity of the cryptosystem. The key in a one-time pad is quite impractical, since it is the same length as the message and can only be used once to ensure security. The key must first be passed over a secure line of communication, or must be encrypted itself, which seems to create some redundancy for a cryptosystem. In a world where the transfer of important information is often time-sensitive, the one-time pad has too many shortcomings to be widely implemented.
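
For all its impracticality, the pad itself is only a couple of lines of code, which makes the contrast with its key-management problem even starker:

    import os

    msg = b"meet me at ten"
    pad = os.urandom(len(msg))                           # key as long as the message
    cipher = bytes(m ^ p for m, p in zip(msg, pad))
    print(bytes(c ^ p for c, p in zip(cipher, pad)))     # XOR with the pad decrypts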



(Difficult) I had trouble understanding the Blum-Blum-Shub random bit generator.
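
Writing it out helped me: pick primes p and q that are both 3 (mod 4), square a seed to land in the quadratic residues, then keep squaring mod n and output the low bit each time. A toy sketch; real use needs enormous primes.

    p, q = 383, 503            # both are 3 (mod 4); tiny, for illustration only
    n = p * q
    x = 101355 ** 2 % n        # square an arbitrary seed to get a quadratic residue
    bits = []
    for _ in range(10):
        x = x * x % n          # x_{j+1} = x_j^2 mod n
        bits.append(x & 1)     # output the least significant bit of each state
    print(bits)

The security supposedly rests on the difficulty of computing square roots mod n without knowing the factorization, which ties it back to RSA-style assumptions.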

Tuesday, September 14, 2010

Section 3.8 and 2.5-2.8 due Sept. 15

Interesting - In this class I have been most fascinated with the methods of cracking different cipher systems. Up until now most of the cipher systems were relatively easy to crack using frequency analysis or other basic methods. The block cipher seemed very secure from the description in the book, but I was interested in how easy it was to find the encryption key using a simple known-plaintext attack. This made me think about the security of real-life cryptosystems, and how not only the key itself but also the machine (or in today's world, the computer program) that performs the encryption must be protected. I read an article recently about using electrical analysis of encoding "machines" to determine how the machines work. It occurred to me that cryptography has expanded into a plethora of scientific fields. No longer a matter of mere mathematics and logic, cryptography relies on computer science, electrical engineering, statistics, analysis, etc.
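
The attack is startlingly small for a 2x2 Hill cipher: with two plaintext/ciphertext block pairs, the key is just K = C * P^(-1) mod 26. A sketch (I am writing blocks as columns; the book may use row vectors, which just transposes everything):

    K = [[3, 3], [2, 5]]                 # the secret key (det = 9, invertible mod 26)

    def mat_mult(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) % 26 for j in range(2)]
                for i in range(2)]

    def mat_inv(A):
        det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % 26
        d = pow(det, -1, 26)             # exists whenever gcd(det, 26) = 1
        return [[A[1][1] * d % 26, -A[0][1] * d % 26],
                [-A[1][0] * d % 26, A[0][0] * d % 26]]

    P = [[7, 8], [11, 11]]               # two known plaintext blocks as columns
    C = mat_mult(K, P)                   # the ciphertext blocks Eve observes
    print(mat_mult(C, mat_inv(P)))       # recovers K exactly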

Difficult - The most difficult part of the reading for me was understanding the Playfair cipher. I understand that it is a form of block cipher, but using matrices and modular arithmetic seems far easier than memorizing steps to encode characters based on rows and columns. I tried to encrypt a few characters myself and found I spent far more time referring back to the instructions than actually encrypting anything. I also do not understand exactly how the decryption process works.

Friday, September 10, 2010

Section 2.3 due Sept. 13

(Difficult) The first method for finding the key was the most difficult for me to understand. Personally, it was easier to follow the second method because it seemed more concise, put into mathematical terms without too much explanation.


(Interesting) To me the most interesting part of the section was finding the key length. I was intrigued that lining up the same ciphertext on two lines and then displacing one of them could reveal the key length. It made me ponder why this method works, looking for coincidences between the two ciphertexts displaced a few characters from one another. It amazes me how much time and effort goes into creating ciphers, and at the same time how much time goes into cracking them. With such detailed examples of how to break ciphers, it is no wonder that encryption has been an ever-changing discipline throughout history. Even today, many methods of encryption become obsolete after years of use and abuse. This is part of the reason I wanted to learn about cryptography: it is a field in which technology changes and the learning process never ends.
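
I tried the displacement trick on a homemade example: encrypt some repetitive English with a length-4 Vigenère key, then count positions where the ciphertext agrees with a shifted copy of itself. Shifts that are multiples of the key length should show noticeably more coincidences, since those positions are shifted by the same key letter and the comparison reduces to plaintext-vs-plaintext.

    text = ("to be or not to be that is the question " * 20)
    text = "".join(ch for ch in text.upper() if ch.isalpha())

    def vigenere(pt, key):
        return "".join(chr((ord(c) - 65 + ord(k) - 65) % 26 + 65)
                       for c, k in zip(pt, key * len(pt)))

    ct = vigenere(text, "LAKE")          # key length 4
    for shift in range(1, 11):
        matches = sum(a == b for a, b in zip(ct, ct[shift:]))
        print(shift, matches)            # shifts 4 and 8 should stand out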

Thursday, September 2, 2010

3.2 and 3.3, due Sep. 3rd

(Difficult)
The toughest part of the material in these sections was using the extended Euclidean algorithm to find the inverse of a. That is, find s and t such that as + nt = 1; the value s is then our inverse, so the inverse of a is congruent to s (mod n). In previous classes I had seen the Euclidean algorithm, but this was my first exposure to the extended Euclidean algorithm. In some cases we have been able to use a "guess and check" method of finding s and t for as + nt = 1. The easiest way for me to understand this concept was to see it as a linear combination of a and n that achieves the result 1. However, we cannot solve directly for s and t, since we have one equation and two unknowns; this is why we need the extended Euclidean algorithm.
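
What replaced guess-and-check for me was carrying the coefficients along with the remainders, so s and t fall out of the same loop as the gcd:

    def ext_gcd(a, n):
        s0, s1, t0, t1 = 1, 0, 0, 1
        while n:
            q = a // n
            a, n = n, a - q * n            # ordinary Euclidean step
            s0, s1 = s1, s0 - q * s1       # update the coefficients in parallel
            t0, t1 = t1, t0 - q * t1
        return a, s0, t0                   # gcd, s, t with a*s + n*t = gcd

    g, s, t = ext_gcd(17, 3120)
    print(s % 3120)                        # 2753, the inverse of 17 mod 3120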

(Reflective) 
In thinking about our encryption project, I was trying to think of a way to encrypt a message using basic math functions. I started by seeing how addition or multiplication would work. This presented a problem, since adding or multiplying would produce results outside the range of numbers established before encrypting the message. This section on congruences made me realize that modular arithmetic is very well suited for encryption: it would allow me to simplify my method of encryption and still keep the numbers within a specified range. This will be extremely useful in cryptography, especially since simplifying the method often means the computer programming associated with the encryption will be easier as well.

Tuesday, August 31, 2010

1.1 - 1.2 and 3.1 due on Sep. 1st

(Difficult)

The most difficult part of the lecture for me was in Chapter 1, on page 8, where the author talks about measuring the size of numbers, comparing the magnitude of a number and the number of digits in the number. The wording in this section is hard for me to follow, specifically the phrase "An algorithm that runs in time a power of log n is much more desirable than one that runs in time a power of n." I was eventually able to grasp the rest of the paragraph, but this particular phrase is still confusing to me.
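
Plugging in numbers made the sentence more concrete for me: log2(n) is roughly the number of digits of n (in binary), so for a 100-digit n it is only about 332, and even a cube of that is astronomically smaller than any fixed power of n.

    import math

    n = 10 ** 100
    print(math.log2(n))    # about 332.2
    print(332 ** 3)        # ~3.7e7 steps: easy for a computer
    print(10 ** 50)        # n^(1/2) steps: hopeless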

(Reflective)

In reading chapter three I was most interested in division in modular arithmetic, and the use of a multiplicative inverse for division. In past classes, when dealing with modular arithmetic, we never touched on modular division. Most classes simply teach that you can't do it, or that it only works in certain cases, and therefore we won't bother with it. It was nice to finally learn when it is applicable: when the modulus and the divisor are relatively prime.

Introduction, due on Sept. 1st

What is your year in school and major?

  I am a senior, majoring in mathematics.

Which post-calculus math courses have you taken?

I have taken ordinary differential equations, multivariable calculus, linear algebra, abstract algebra, and survey of geometry.

Why are you taking this class?


I am taking this class as an elective for my major (mathematics). I am also considering cryptography as a future career path. I first became interested in cryptography when we talked about it in abstract algebra.


Do you have experience with Maple, programming, etc.?


I have taken a programming class in Java, and an intro class on programming in assembly, binary, and C. I have used Maple sparingly in classes, only for constructing graphs of complex functions.

Most effective/least effective math teacher?

My most effective math teacher was Doctor Forcade. I took abstract algebra from him about a year ago and learned a great deal from his class.

What did he do that worked so well?

The most effective part of his teaching was encouraging students to expand their knowledge on their own. He was always willing to help and give hints on our work, but ultimately he would assign work that required us to go back, review previous sections in the book, and dig deep into our understanding of the material to find a solution to the problem. He also believed strongly that there was always an easier, more efficient way to construct a proof or solve a problem, and that it was worth spending the time to find the simplified solution rather than using brute force.

Something interesting about myself?

I love the outdoors. We go camping every summer, and we go off-roading and shooting year round. There's nothing more fun than driving through 2 feet of snow in a Jeep or ATV.