
I've always been interested in mathematics, for as long as I can remember. Although I originally intended to study physics in college (Stevens Institute of Technology), for several reasons I switched to the mathematics department, from which I received a Bachelor of Science degree. After I retire, I'd like to go back and study mathematics again, aiming for a PhD, which would probably make me the oldest graduate student in the field. Mathematics is very much a young person's game, so to speak. Still, I find it fun. Here you'll find some essays and notes that range from my own personal descriptions of well-established topics to speculations whose significance (and correctness) may be questionable at best. I try to identify those places where I go too far out on a limb, but it is in those areas in particular where I most enjoy dialog. If you're tempted to drop me a note concerning any of these essays, I'll be happy to reply.

The equation pictured, e^(i*pi) + 1 = 0, is known as Euler's Equation, sometimes referred to as the most beautiful equation in mathematics. It contains both the identity elements of addition and multiplication, the only digits necessary to form all other numbers, and the two most important transcendental numbers: e, which is the base of the natural logarithms, and pi, which is at the heart of geometry and so much of number theory. It also features i, the "impossible" square root of negative one, which finally closes the algebra of numbers into an algebraically complete field. Sometimes you'll see this equation written as e^(i*pi) = -1, but that's a very inelegant way to write it. Sometimes, in mathematics, aesthetics can be very important. More on this simple but intricately complex formula later.

I've always been interested in recursive functions, that is, functions which contain references to themselves. In practical terms, a recursive function has to use different values for each instance of itself, so that the evaluation eventually terminates.
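As a minimal illustration of that termination requirement (the function and language are my choice, not part of any notation proposed here), consider the usual recursive factorial, where each self-reference uses a strictly smaller argument:

```python
def factorial(n: int) -> int:
    # Each recursive instance uses a smaller value (n - 1),
    # so the chain of self-references eventually terminates.
    if n <= 1:        # base case: no further self-reference
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120
```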
Theoretically, the evaluations merely have to converge, but I won't go into such complications here. This essay is more about a formal notation for use in recursive functions, since I haven't seen a good notation for this concept before.

First, a little background. I was playing around with ways to generate ordered lists of subsets of a given set, collecting the subsets by size. For a set of size n, there is one null set, there are n subsets of size one, there are C(n,2), or "n choose 2", subsets of two elements, and so on; adding all of these up gives a total of 2^n subsets of a set of n elements. It's well known that these counts are also the binomial coefficients, since (a + b)^n = C(n,0) a^n + C(n,1) a^(n-1) b + ... + C(n,n) b^n, and if a = b = 1, then, since 1 to any power is still just 1, we have the simpler expression 2^n = C(n,0) + C(n,1) + ... + C(n,n).

So, in the absence of an established notation for recursion, I'd like to suggest one. Okay, here it is:
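The notation itself aside, the subset counts in the background discussion are easy to verify by brute force; a quick sketch in Python (the helper name is mine):

```python
from itertools import combinations
from math import comb

def subsets_by_size(elements):
    """Group all subsets of `elements` by size: entry k holds
    the list of k-element subsets, in order."""
    n = len(elements)
    return [list(combinations(elements, k)) for k in range(n + 1)]

groups = subsets_by_size([1, 2, 3, 4])
counts = [len(g) for g in groups]
print(counts)       # [1, 4, 6, 4, 1], the binomial coefficients C(4, k)
print(sum(counts))  # 16, which is 2**4
```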
More interesting mathematically, but beyond the scope of this essay, would be to extend this expression for exponentiation to allow for noninteger values. That would be similar to how factorials of integers have been extended to all real values using the Gamma function (interestingly enough, using arguments involving recursion). This might seem trivial, since real-number exponentiation is well established. Still, the implications are for how the summation operator is affected by moving away from simple integer counting controls. And it might lead to some insights useful in the subject of Real Analysis.

Conditionally Convergent Series

This topic is best introduced by a remarkable puzzle. In the next few paragraphs, I will prove that one equals two. Some of you may have seen "proofs" of this using algebra, where a divide by zero is carefully hidden in one of the steps. This proof is not like that. No such superficial errors or tricks will be used.

First, consider the infinite series given by

ln 2 = 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + 1/7 - 1/8 + 1/9 - 1/10 + 1/11 - 1/12 + 1/13 - 1/14 ...

which is about equal to 0.693. This comes about from the Taylor series expansion of ln(x+1) where x = 1. Taylor series are studied in first-year calculus courses, and are very useful and of much theoretical interest in themselves. You can explore this further, if you like, or just accept the fact for now, maybe verifying (approximately) the result on a calculator. In any case, the series is correct.

Now we're going to divide the above formula by two, on both sides of the equals sign as is proper, so

(ln 2) / 2 = 1/2 - 1/4 + 1/6 - 1/8 + 1/10 - 1/12 + 1/14 - 1/16 ...

Now things get interesting. Let's add both equations together. Notice that on the right-hand side of the top equation there is a minus one-half, but in the bottom equation there is a plus one-half. These will cancel out when the two equations are added together.
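Before following the cancellations through, both series can be checked numerically; a quick sketch (partial sums only, so the values are approximate):

```python
import math

def alt_harmonic(n_terms: int) -> float:
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

s = alt_harmonic(100_000)
print(round(s, 4))            # 0.6931, matching ln 2
print(round(s / 2, 4))        # 0.3466, matching (ln 2) / 2
print(round(math.log(2), 4))  # 0.6931
```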
The same thing happens with the minus one-sixth from the top and the plus one-sixth from the bottom. Same for the pairs of one-tenths, and one-fourteenths too, and one-eighteenths. And so on. Note that all these denominators are even. All the terms in the top equation that have odd denominators have no corresponding terms in the bottom equation, so they will be unaffected when the two equations are added together.

But other terms do correspond. In the top equation we have a minus one-fourth, and we also have a minus one-fourth in the bottom equation. When these are added together, they yield a minus one-half. Curiously, that will replace the original minus one-half that was cancelled when the two equations were added. The same thing happens with the minus one-eighths from both series, yielding the minus one-fourth that would otherwise be missing. And so on to infinity. In other words, writing the two series one term at a time gives

ln 2 + (ln 2) / 2 = 1 + 1/2 - 1/2 - 1/4 + 1/3 + 1/6 - 1/4 - 1/8 + 1/5 + 1/10 - 1/6 - 1/12 + 1/7 + 1/14 - 1/8 ...

and rearranging terms we have

3/2 ln 2 = 1 + (1/2 - 1/2) + (-1/4 - 1/4) + 1/3 + (-1/8 - 1/8) + 1/5 + (1/6 - 1/6) + (-1/12 - 1/12) + ...

or

3/2 ln 2 = 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + ...

where we can see that what's on the right-hand side of this equation is exactly the same as what was on the right-hand side of the first equation. Since these two series are identical, what they sum to must be identical, so ln 2 = 3/2 ln 2, or 1 = 3/2, which means 2 = 3, and subtracting one from each side gives 1 = 2. And there you have it! Q.E.D.

Okay, obviously, something isn't quite right. Can you spot the problem? It's pretty subtle. From your earliest math classes, you were probably taught some of the basic properties of numbers, like the existence of an identity element of addition (0), an identity element of multiplication (1), the commutative property (a+b = b+a), and the distributive rule (a*(b+c) = a*b + a*c).
These are all very fundamental properties, and they are usually assumed to be beyond question. I'd bet that no one ever told you that sometimes the commutative property of addition doesn't work! But that is exactly the case, at least when dealing with some infinite series.

The series we need to worry about have alternating signs. Further, although such a series may converge as written, if all the signs were made positive, the series would diverge. 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 ... goes to infinity, although very, very slowly. It's such an important series that it has its own name, the harmonic series.

Most infinite series that reach a limit are absolutely convergent. Convergent series whose terms all have the same sign are absolutely convergent. A series with terms of alternating sign is absolutely convergent if and only if the series formed by the absolute values of its terms is also convergent. For example, the series 1/2 + 1/4 + 1/8 + ... is absolutely convergent (the sum is 1), and the alternating series 1/2 - 1/4 + 1/8 - ... is also absolutely convergent (the sum is 1/3). For a series to be conditionally convergent, it must converge, but the series formed from the absolute values of its terms must be divergent. In that case, the commutative law of addition no longer applies, and rearranging the order of the terms can yield any value at all one desires.

Why should this be? Look at it this way: let's rearrange the terms so that we have two series, one in which all the positive terms are grouped together and another with all the negative terms. Both of these series are divergent, so one gives positive infinity and the other gives negative infinity. What's infinity minus infinity? Zero? Yes, but it can also be infinity (after all, infinity minus a billion or any other number is still infinity), or it could be negative infinity, or anything in between. In one sense, infinity is not a number and cannot be treated like any other number.
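The claim that rearrangement can reach any value is easy to demonstrate numerically with the alternating harmonic series: greedily take positive terms while under the target, negative terms while over it. A sketch (the function name, target, and step count are my choices):

```python
from itertools import count

def rearrange_to(target: float, steps: int = 1_000_000) -> float:
    """Greedily reorder the terms of 1 - 1/2 + 1/3 - 1/4 + ...
    so the partial sums home in on `target`."""
    positives = (1.0 / k for k in count(1, 2))    # 1, 1/3, 1/5, ...
    negatives = (-1.0 / k for k in count(2, 2))   # -1/2, -1/4, ...
    total = 0.0
    for _ in range(steps):
        # Under (or at) the target: spend a positive term; over it: a negative one.
        total += next(positives) if total <= target else next(negatives)
    return total

print(rearrange_to(3.14159))  # ends up close to pi
```

The error at any point is bounded by the size of the last term used at a sign change, and those terms shrink toward zero, which is exactly why the rearranged series can be made to converge on any target at all.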
In our series above, or in any conditionally convergent series, how would one generate an ordering of terms to give a value of, say, pi? First we'd add up enough of the first positive terms to give a value greater than pi, however many that took. Then we'd start adding the largest negative terms to make the value fall just under pi. Then we'd add the next positive terms until we exceeded pi again, then reduce that with more negative terms, and so on, and so on. Since the difference from the actual value of pi shrinks every time we need to change sign (it is bounded by the last term added, and the terms shrink toward zero), in this way we can form a series that converges exactly on pi. Likewise for any other number we might care to pick.

In general, infinite series need to be handled with thoughtfulness and care, especially when the signs can vary. This leads to all sorts of complications. For example, an infinite series of continuous functions doesn't have to be continuous. An infinite series of continuously differentiable functions doesn't have to be differentiable. An arbitrary (uncountable) union of measurable sets doesn't have to be measurable. These quirks become very important when considering the foundational definitions of mathematics. Often, we think of numbers and functions of numbers as fixed and simple things, and in everyday experience that assumption serves us pretty well, but theoretically it gets deeper and more subtle. I hope this little essay has provided a glimmer of that.

You can read up more about conditionally convergent series at the following links: Intro to Conditional Convergence, Conditionally Convergent Series, MathWorld - Conditional Convergence.

Future Topics
