I'm not claiming any original material here, just things that I've gathered over the years that I find pretty interesting.
This is a rather famous proof, so I'm not going to elaborate too much. It's a proof by contradiction, so we start by assuming the opposite of what we want to prove and work toward a logical contradiction. Let's get started.
Suppose √(2) is rational. Let's say √(2) = a / b, where a and b are natural numbers, that is, positive integers (1, 2, 3, and so on). Let's also suppose that a and b are relatively prime, meaning the fraction is in its lowest possible terms: a and b share no common divisors other than 1.
With these suppositions made, here's our proof:
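Written out in LaTeX, the standard parity argument runs roughly like this; the variable $k$ is just a name I'm introducing for half of $a$, and the phrasing is my own paraphrase of the usual steps rather than a quotation:

\begin{enumerate}
  \item Suppose $\sqrt{2} = a/b$ with $\gcd(a, b) = 1$.
  \item Squaring both sides and rearranging gives $a^2 = 2b^2$, so $a^2$ is even.
  \item Only even numbers have even squares, so $a$ is even; write $a = 2k$ for some natural number $k$.
  \item Substituting, $(2k)^2 = 2b^2$, so $b^2 = 2k^2$, and $b$ is even by the same reasoning.
  \item Now $a$ and $b$ are both divisible by $2$, contradicting $\gcd(a, b) = 1$. So $\sqrt{2}$ can't be rational.
\end{enumerate}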
I want to end by saying that this proof originally disappointed me. At first glance it seems like you could replace 2 with any other number and have the proof still work, which would prove far too much. The subtle point is that the odd-and-even step, the fact that a² being even forces a to be even, is exactly what makes it work so cleanly for √(2); for a number like 4, the corresponding step fails (a² being divisible by 4 does not force a to be divisible by 4). After further consideration, it grew on me.
I originally found this next one while searching for more facts about irrational numbers after my √(2) conundrum. The proof was on Everything2, but the English was pretty rough and it didn't properly explain things.
This proof goes a little beyond my title: it actually shows that the square root of any natural number that isn't a perfect square is irrational. This is another proof by contradiction. We start by assuming that p is a natural number that is not a perfect square (the prime case from the title is just a special instance), and that there exist natural numbers a and b such that √(p) = a / b. Let's do some math.
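Written in LaTeX, here's roughly how the argument goes. The names $c$ and $d$ match the discussion below; I'm taking $c$ to be the integer part of $\sqrt{p}$ and $d$ to be $a - bc$, and the one extra assumption is that $b$ is chosen as small as possible:

\begin{enumerate}
  \item Suppose $\sqrt{p} = a/b$, where $b$ is the smallest natural number for which such a fraction exists.
  \item Let $c = \lfloor \sqrt{p} \rfloor$. Since $p$ is not a perfect square, $c < \sqrt{p} < c + 1$.
  \item Let $d = a - bc$. Because $a = b\sqrt{p}$, we have $d = b(\sqrt{p} - c)$, and the previous step gives $0 < d < b$.
  \item Multiply by $\sqrt{p}$: $d\sqrt{p} = a\sqrt{p} - bc\sqrt{p} = pb - ca$, which is an integer.
  \item So $\sqrt{p} = (pb - ca)/d$ writes $\sqrt{p}$ as a fraction whose denominator $d$ is a natural number smaller than $b$, contradicting how $b$ was chosen.
\end{enumerate}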
This is a really beautiful proof that makes more sense the more you look at it and play with it. It's difficult to describe in plain English; mathematical notation becomes almost a necessity. The step where d√(p) gets rewritten as an integer is kinda chunky, but it makes lots of sense. Proofs like the one for √(2) might lead you to believe that this argument would make something like √(25) drift off into irrationality, but it doesn't. If you plug √(25) or any other perfect square into the above proof, c is equal to √(25) itself and d turns into zero. d is still less than b, and d times √(25) is also 0, but d is no longer a positive integer, so there's no smaller fraction to build. Everything falls apart and rationality is restored.
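Concretely, take $\sqrt{25} = a/b$ with the smallest possible $b$, so $a = 5$ and $b = 1$. Then:

\[
c = \lfloor \sqrt{25} \rfloor = 5 = \sqrt{25}, \qquad d = a - bc = 5 - 1 \cdot 5 = 0,
\]

so $d$ isn't a positive natural number anymore, and there is no smaller denominator to contradict the choice of $b$.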
I found this one in a 1960s textbook on number theory. It's breaking the trend of "prove-this-is-irrational" from the previous two. It's a proof by contradiction that assumes there is a largest prime number.
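In LaTeX, the argument runs roughly as follows. I'm taking $q$ to be $p! + 1$, since that's the construction the discussion below depends on (with $q - 1 = p!$ divisible by everything up to $p$):

\begin{enumerate}
  \item Suppose, for contradiction, that there is a largest prime number $p$.
  \item Let $q = p! + 1$, so that $q - 1 = p!$ is divisible by every number from $2$ up to $p$.
  \item Then dividing $q$ by any number from $2$ up to $p$ leaves a remainder of $1$, so no prime less than or equal to $p$ divides $q$.
  \item But $q > p$, so $q$ can't be prime either, by the assumption in step 1.
  \item Every natural number greater than $1$ is either prime or divisible by some prime; $q$ is neither, which is absurd. So there is no largest prime.
\end{enumerate}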
So, q - 1 is a hugely composite number: every number from 2 up to and including p divides it. q itself, however, can't be prime, since we assumed p is the largest prime and q is bigger than p. But q can't be composite either, because every number from 2 up to p divides q with a remainder of one, so q has no prime factor at all. That's impossible, so there can't be a largest prime number.