Robert Cairone Mathematics

Under Construction


 

I've always been interested in mathematics, for as long as I can remember.  Although I originally intended to study physics in college (Stevens Institute of Technology), for several reasons I switched to the mathematics department, from which I received a Bachelor of Science degree.  After I retire, I'd like to go back and study mathematics again, aiming for a PhD, which would probably make me the oldest graduate student in the field.  Mathematics is very much a young person's game, so to speak.  Still, I find it fun.  Here you'll find some essays and notes that range from my own personal descriptions of well established topics, to speculations whose significance (and correctness) may be questionable at best.  I try to identify those places where I go too far out on a limb.  But it is in those areas in particular where I most enjoy dialog.  If you're tempted to drop me a note concerning any of these essays, I'll be happy to reply.

The equation pictured, e^(i*pi) + 1 = 0, is known as Euler's Equation, sometimes referred to as the most beautiful equation in mathematics. It contains both the identity elements of addition and multiplication, the only digits necessary to form all other numbers, and the two most important transcendental numbers: e, the base of the natural logarithms, and pi, which is at the heart of geometry and so much of number theory.  It also features i, the "impossible" square root of negative one, which finally closes the algebra of numbers into an algebraically closed field. Sometimes you'll see this equation written as e^(i*pi) = -1, but that's a very inelegant way to write it.  Sometimes, in mathematics, aesthetics can be very important.  More on this simple but intricately complex formula later.
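You can watch the identity hold, at least to floating-point precision, with a couple of lines of Python:

```python
import cmath

# e**(i*pi) should land at exactly -1 on the complex plane.
z = cmath.exp(1j * cmath.pi)
print(z)   # -1 plus a tiny imaginary rounding error (about 1.2e-16)
```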

A Notation for Recursion:

I’ve always been interested in recursive functions, that is, functions which contain references to themselves.  In practical terms, a recursive function has to use different values for each instance of itself, so that the evaluation eventually terminates.  Theoretically, the evaluations merely have to converge, but I won’t go into such complications here.  This essay is more about a formal notation for use in recursive functions, since I haven’t seen a good notation for this concept before.

First, a little background. I was playing around with ways to generate ordered lists of subsets of a given set, collecting the subsets by size. For a set of size n, there is one null set, there are n subsets of size one, there are C(n, 2), read “n choose 2,” subsets of two elements, and so on; and if all of these are added up, there is a total of 2^n subsets for a set of n elements.
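This grouping is easy to check by brute force. A small Python sketch, using the standard library's itertools and math.comb:

```python
from itertools import combinations
from math import comb

n = 5
# Count the subsets of an n-element set, grouped by size k.
counts = [len(list(combinations(range(n), k))) for k in range(n + 1)]

print(counts)       # [1, 5, 10, 10, 5, 1] -- subsets grouped by size
print(sum(counts))  # 32, which is 2**5

# The counts are exactly the binomial coefficients C(n, k).
assert counts == [comb(n, k) for k in range(n + 1)]
```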

It’s well known that the C(n, m) are also the binomial coefficients, since

(a + b)^n = Σ_{m=0}^{n} C(n, m) a^(n-m) b^m

and if a = b = 1, and since 1 to any power is still just 1, then we have the simpler expression

2^n = Σ_{m=0}^{n} C(n, m)

So, what would 3^n look like? We use the formula immediately above with a = 1 and b = 2 to get

3^n = Σ_{m=0}^{n} C(n, m) 2^m

But 2^m was given by the original summation above, so again, eliminating the power of 1 and substituting for 2^m, we can write

3^n = Σ_{m=0}^{n} C(n, m) Σ_{p=0}^{m} C(m, p)

I liked that, because we can write an equivalent form for 3^n in which the value “3” never appears.  Because I thought that result was pretty, I wondered what else I could get out of this approach.

We can do the same thing for 4^n and so on, and in general for any integer X to any integer exponent n:

X^n = Σ_{m=0}^{n} C(n, m) Σ_{p=0}^{m} C(m, p) Σ_{q=0}^{p} C(p, q) ⋯   (X - 1 nested summations in all)

I think that’s cool, although the notation is awkward. There must be a better way, and the whole process looks like a perfect candidate for recursion. All we need are subscripted variables to control the summations instead of the peculiar n, m, p, q, r, … form. Just to be clear, we rewrite that equation as

X^n = Σ_{m_1=0}^{m_0} C(m_0, m_1) Σ_{m_2=0}^{m_1} C(m_1, m_2) ⋯ Σ_{m_(X-1)=0}^{m_(X-2)} C(m_(X-2), m_(X-1)),   where m_0 = n

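To convince yourself that the nested-sum form really reproduces exponentiation, here is a short Python sketch of the recursion (the function name nested_power is mine, purely for illustration):

```python
from math import comb

def nested_power(X, n):
    """X**n as X-1 nested sums of binomial coefficients:
    X**n = sum over m of C(n, m) * (X-1)**m, expanding (X-1)**m the same way."""
    if X == 1:
        return 1   # base of the recursion: 1**n = 1, zero summations left
    return sum(comb(n, m) * nested_power(X - 1, m) for m in range(n + 1))

# The identity holds for every small case we try.
for X in range(1, 6):
    for n in range(7):
        assert nested_power(X, n) == X ** n
print("verified X**n for X = 1..5, n = 0..6")
```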
That’s better, but it still isn’t quite tidy. Is there a better way?  Is there some notation explicitly for use with recursive expressions? If there is, I haven’t come across it, but if anyone knows of one I’d love to hear about it.

So, in the absence of an established notation for recursion, I’d like to suggest one.  Okay, here it is:

X^n = ^*X[ m_0 = n ] Σ_{m_(i+1)=0}^{m_i} C(m_i, m_(i+1)) *

There, nice and concise.  I think it looks neat and efficient.  The prefacing superscripted asterisk designates the recursive nature of the expression, and the immediately following X gives the control variable. I’m assuming, and in this form it’s notationally necessary, that there are always at least two instances of the recursive operation.  Supplying 0 or 1 isn’t very meaningful, since an expression that doesn’t invoke itself at least one time over the original definition isn’t actually recursive.  Following that, in brackets, initial values can be supplied.  This is an important feature because any variables, possibly but not necessarily all of them, might need to be altered in the recursive instances of the function, as is true in this case.  The final asterisk indicates the recursive substitution of the expression.  Does that make sense?

I’d appreciate input from any reader on what they think of this suggestion.  Does it seem understandable, simple and usable? Is it sufficiently flexible and adaptable? Does it interfere with any other established use for similar notation that you’re aware of? Do you have any suggestions for how it might be improved?

And for those who enjoy recursion as much as I do, keep in mind that the binomial coefficient itself,

C(n, k) = C(n - 1, k - 1) + C(n - 1, k),   with C(n, 0) = C(n, n) = 1,

is another recursive expression, and could have been included as a further nesting in the example above.  But doing that here would have complicated the typesetting and made my proposal less clear.  In principle, though, it shows that the entire exponentiation process can be reduced recursively to a series of additions and simple multiplications and nothing more. No factorials, no divisions at all. Granted, it’s going to be a whole lot of additions, but it’s the principle of the thing that matters.  And of course exponentiation of an integer to an integer power is just iterated multiplication, which is in turn iterated addition, so that’s only to be expected.
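That additive recursion is nearly a one-liner in code. A Python sketch, memoized so the recursion doesn't repeat work; the binomial values come out of pure addition, with no factorials or division anywhere:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    """C(n, k) via Pascal's rule: C(n, k) = C(n-1, k-1) + C(n-1, k)."""
    if k == 0 or k == n:
        return 1   # the edges of Pascal's triangle are all 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

print(binom(10, 3))   # 120
```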

More interesting mathematically, but beyond the scope of this essay, would be to extend this expression for exponentiation to allow for non-integer values.  That would be similar to how factorials of integers have been extended to all real values using the Gamma function (interestingly enough, by arguments involving recursion).  This might seem trivial, since real-number exponentiation is well established.  Still, it raises the question of how the summation operator behaves once its controls are no longer simple counting integers, and it might lead to some insights useful in the subject of Real Analysis.

Back to Top

Conditionally Convergent Series:

This topic is best introduced by a remarkable puzzle. In the next few paragraphs, I will prove that one equals two.  Some of you may have seen "proofs" of this using algebra, where a divide by zero is carefully hidden in one of the steps.  This proof is not like that.  No such superficial errors or tricks will be used.

First, consider the infinite series given by

ln 2 = 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + 1/7 - 1/8 + 1/9 - 1/10 + 1/11 - 1/12 + 1/13 - 1/14 ...

which is about equal to 0.693.  This comes from the Taylor series expansion of ln (x+1) evaluated at x = 1.  Taylor series are studied in first year calculus courses, and are very useful and of much theoretical interest in themselves.  You can explore this further, if you like, or just accept the fact for now, maybe verifying the result (approximately) on a calculator.  In any case, the series is correct.
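If a calculator feels tedious, a few lines of Python will do the verifying, summing the first hundred thousand terms:

```python
from math import log

total = 0.0
for k in range(1, 100001):
    total += (-1) ** (k + 1) / k   # 1 - 1/2 + 1/3 - 1/4 + ...

print(round(total, 4), round(log(2), 4))   # 0.6931 0.6931
```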

Now we're going to divide the above formula by two, on both sides of the equals sign as is proper, so

(ln 2) / 2 = 1/2 - 1/4 + 1/6 - 1/8 + 1/10 - 1/12 + 1/14 - 1/16 ....

Now things get interesting.  Let's add the two equations together.  Notice that on the right hand side of the top equation there is a minus one-half, but in the bottom equation there is a plus one-half.  These will cancel out when the two equations are added.  The same thing happens with the minus one-sixth from the top and the plus one-sixth from the bottom.  Likewise for the pairs of one-tenths, the one-fourteenths, the one-eighteenths, and so on.  Note that all these denominators are even.  The terms in the top equation with odd denominators have no corresponding terms in the bottom equation, so they are unaffected when the two equations are added.  But other terms do correspond.  In the top equation we have a minus one-fourth, and we also have a minus one-fourth in the bottom equation.  When these are added together, they yield a minus one-half. Curiously, that replaces the original minus one-half that was cancelled when the two equations were added.  The same thing happens with the minus one-eighths from both series, yielding the minus one-fourth that would otherwise be missing.  And so on to infinity.  In other words, writing the two series one term at a time gives

ln 2 + (ln 2) / 2 = 1 + 1/2 - 1/2 - 1/4 + 1/3 + 1/6 - 1/4 - 1/8 + 1/5 + 1/10 - 1/6 - 1/12 + 1/7 + 1/14 - 1/8 ....

and rearranging terms we have

3/2 ln 2 = 1 + (1/2 - 1/2) + (-1/4 - 1/4) + 1/3 + (-1/8 - 1/8) + 1/5 + (1/6 - 1/6) + (-1/12 - 1/12) + ....

or 3/2 ln 2 = 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + ....

where we can see that what's on the right hand side of this equation is exactly the same as what was on the right hand side of the first equation.  Since these two series are identical, what they sum to must be identical, so ln 2 = 3/2 ln 2, or 1 = 3/2, which means 2 = 3 and 1 = 2.  And there you have it!  Q.E.D.
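Before the reveal, it's worth checking numerically that nothing shady happened in the addition step itself: summing the two series term by term really does approach (3/2) ln 2. A Python sketch:

```python
from math import log

total = 0.0
for k in range(1, 200001):
    total += (-1) ** (k + 1) / k        # k-th term of ln 2 = 1 - 1/2 + 1/3 - ...
    total += (-1) ** (k + 1) / (2 * k)  # k-th term of (ln 2)/2 = 1/2 - 1/4 + ...

print(round(total, 4), round(1.5 * log(2), 4))   # 1.0397 1.0397
```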

Okay, obviously, something isn't quite right.  Can you spot the problem?  It's pretty subtle.  From your earliest math classes, you were probably taught some of the basic properties of numbers, like the existence of an identity element of addition (0), an identity element of multiplication (1), the commutative property (a + b = b + a) and the associative property (a + (b + c) = (a + b) + c).  These are all very fundamental properties, and are usually assumed to be beyond question.  I'd bet that no one ever told you that sometimes the commutative property of addition doesn't work!  But that is exactly the case, at least when dealing with some infinite series.  The series we need to worry about have alternating signs. Further, although such a series may converge as written, if all its signs were made positive it would diverge.  1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 ...  goes to infinity, although very, very slowly.  It's such an important series that it has its own name, the Harmonic series.

Most convergent infinite series are absolutely convergent.  A convergent series whose terms all have the same sign is automatically absolutely convergent.  A series with terms of alternating sign is absolutely convergent if and only if the series formed from the absolute values of its terms is also convergent.  For example, the series Σ 1/2^n is absolutely convergent (the sum is 1), and the series Σ (-1)^n/2^n is also absolutely convergent (the sum is -1/3).  For a series to be conditionally convergent, the series formed from the absolute values of its terms must be divergent.  In that case, the commutative law of addition no longer applies, and rearranging the order of the terms can yield any value at all one desires.

Why should this be?  Look at it this way:  let's rearrange the terms so that we have two series, one in which all the positive terms are grouped together and another with all the negative terms.  Both of these series are divergent, so one gives positive infinity and the other gives negative infinity.  What's infinity minus infinity?  Zero? Yes, but it can also be infinity (after all, infinity minus a billion, or any other number, is still infinity), or it could be negative infinity, or anything in between.  In one sense, infinity is not a number and cannot be treated like one.  In our series above, or in any conditionally convergent series, how would one generate an ordering of terms to give a value of, say, pi?  First we'd add up enough of the positive terms to exceed pi, however many that took.  Then we'd add negative terms until the value fell just under pi.  Then we'd add positive terms until we exceeded pi again, then more negative terms, and so on.  Since the terms shrink toward zero, the overshoot at each sign change keeps decreasing, so this rearrangement converges exactly to pi.  Likewise for any other number we might care to pick.
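The procedure just described is easy to put into code. Here's a Python sketch that rearranges the terms of the alternating harmonic series to home in on pi (the target is arbitrary; any real number would work):

```python
from math import pi

target = pi
total = 0.0
next_odd, next_even = 1, 2   # next unused positive / negative denominators
for _ in range(2_000_000):
    if total <= target:
        total += 1.0 / next_odd    # positive terms: 1, 1/3, 1/5, ...
        next_odd += 2
    else:
        total -= 1.0 / next_even   # negative terms: -1/2, -1/4, ...
        next_even += 2

print(round(total, 3))   # 3.142 -- the error is bounded by the last term used
```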

In general, infinite series need to be handled with thoughtfulness and care, especially when the signs can vary.  This leads to all sorts of complications. For example, an infinite series of continuous functions doesn't have to be continuous. An infinite series of continuously differentiable functions doesn't have to be differentiable.  An arbitrary union of measurable sets doesn't have to be measurable. These quirks become very important when considering the foundational definitions of mathematics.  Often we think of numbers, and functions of numbers, as fixed and simple things, and in everyday experience that assumption serves us pretty well, but theoretically things get deeper and more subtle.  I hope this little essay has provided a glimmer of that.

You can read up more about conditionally convergent series at the following links: Intro to Conditional Convergence, Conditionally Convergent Series, and MathWorld - Conditional Convergence.

Back to Top

Future Topics

• Chaos and Wonderful Numbers
• Prime Interest, including the apocalyptic magic square.
• Cooking Up Pi's.