Continued Fraction Arithmetic


The exposition for this page is just beginning, but if you'd like to look at some diagrams illustrating how to perform continued fraction arithmetic, links are below... you're just going to have to figure out what's going on on your own until I get around to writing the text for this section.


All I'm going to show you for now are some examples of Gosper's algorithm for the basic arithmetic operations on N-fractions.  Bear in mind that I'm using the Kornerup-Matula assignment of vertices to the coefficient cube, so in the diagrams below, use the Kornerup-Matula initializations and transformations.

All four of the arithmetic operations are implemented using the same tensor transformations.  The only difference between the various operations is the initial tensor.

Probably one of the first things that occurs to you is... how am I ever going to remember these?  Well, actually, there's an easy way to remember them.  This is the representation of the coefficient cube as an algebraic equation, along with one possible representative coefficient tensor:

Now look what happens when we substitute these initial coefficient values back into this algebraic equation and simplify:
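Since the diagrams themselves aren't reproduced in the text, here's a sketch of what they show, under the assumption that the algebraic equation is the usual bilinear form z = (axy + bx + cy + d)/(exy + fx + gy + h) with the eight coefficients a through h sitting on the cube's vertices (the vertex-to-coefficient assignment itself is the Kornerup-Matula one from the diagrams, which I can't reproduce here):

```python
from fractions import Fraction

def bilinear(T, x, y):
    """Evaluate z = (a*x*y + b*x + c*y + d) / (e*x*y + f*x + g*y + h)."""
    a, b, c, d, e, f, g, h = T
    return (a*x*y + b*x + c*y + d) / (e*x*y + f*x + g*y + h)

# Initial coefficient tensors (a, b, c, d, e, f, g, h) for the four operations:
ADD = (0, 1, 1, 0, 0, 0, 0, 1)   # (x + y) / 1
SUB = (0, 1, -1, 0, 0, 0, 0, 1)  # (x - y) / 1
MUL = (1, 0, 0, 0, 0, 0, 0, 1)   # (x * y) / 1
DIV = (0, 1, 0, 0, 0, 0, 1, 0)   # x / y

# Substituting the initial values back in really does simplify to the
# intended operation, which is the memory aid described in the text:
x, y = Fraction(3, 2), Fraction(5, 7)
assert bilinear(ADD, x, y) == x + y
assert bilinear(MUL, x, y) == x * y
```

In other words, each initial tensor is just the operation itself written as a ratio of bilinear forms, which is why they're easy to reconstruct from scratch.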

Anyway, that's how I remember them.


Next, the coefficient cube is transformed according to the same rules regardless of the particular arithmetic operation being performed (addition, subtraction, etc.).  This jpeg illustrates those transformations on the coefficient cube.  The x and y input operands are defined just as in the formulas above.

First of all, it's helpful to know that the face a-b-e-f (all the terms having an x in them) is referred to as the leading x-face of the coefficient cube.  Likewise, c-d-g-h (all the y terms) is called the leading y-face, and e-f-g-h the leading z-face.  Also, taking in a partial quotient from one of the input continued fraction expansions (x or y) is called absorption, and outputting a partial quotient of the resultant continued fraction expansion (z) is called emission.

There are a couple of ways to visualize this.  Some diagrams have the different coefficient cubes actually touching: two adjacent cubes share either a leading x-, y-, or z-face (from the perspective of the earlier cube in the calculation).  So a new cube in the calculation just sort of grows out of an old one by dropping one face and growing another.  As you can see from the diagram above, this new face is 'grown' in the direction of whichever input operand is absorbed, or in the z direction if an emission is made.

However, because of the limitations of my present software, it's not practical to construct connected-cube diagrams as outlined here, so the association between cubes will be indicated by arrows pointing from one cube to another.  Visualized in this manner, it might be easier to see each step as first swapping the leading x-, y-, or z-face with its opposing face, followed by multiplication by the new input operand being absorbed (or the negative of the emitted output) and the additions outlined in the diagram above.  In any case, the new face will be the new leading x-, y-, or z-face on the new cube, depending on which input operand was absorbed (or whether an emission was made).

Programming-wise, I think it's probably easiest to simply store the transformation tensor as a 2-by-4 matrix as depicted above and define some special tensor operations on it.  One can define a right tensor product to be the action of absorbing a new x input operand, which involves performing the leading x-face matrix scalar multiplication with the addition of the y-face matrix, followed or preceded by an exchange of the x- and y-faces.  Then, with this definition, a left tensor product corresponding to the action of absorbing a new y input operand can be done by performing this same right tensor product preceded by a tensor transpose swapping the vertices b-c and f-g.  Emission is again essentially an identical operation and can be performed the same way, preceded by a tensor transpose of the vertices a-g and g-h, being sure to multiply by the negative of the emitted partial quotient.  However, instead of defining another tensor transpose, I've just been defining emission as a separate operation.  Either approach is valid, and both will produce identical results.
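Written out in coefficient form (again assuming the bilinear reading z = (axy + bx + cy + d)/(exy + fx + gy + h), with the cube-vertex labels as in the Kornerup-Matula diagrams), the face-swapping-and-multiplying described above amounts to these updates.  Absorbing p from x is just the substitution x → p + 1/x with denominators cleared, and likewise for y; emission of r is z → 1/(z − r), which is exactly the numerator/denominator face swap with the subtraction of r times the old denominator:

```python
from fractions import Fraction

def absorb_x(T, p):
    """Absorb partial quotient p from x: substitute x -> p + 1/x, clear fractions."""
    a, b, c, d, e, f, g, h = T
    return (a*p + c, b*p + d, a, b, e*p + g, f*p + h, e, f)

def absorb_y(T, q):
    """Absorb partial quotient q from y: substitute y -> q + 1/y, clear fractions."""
    a, b, c, d, e, f, g, h = T
    return (a*q + b, a, c*q + d, c, e*q + f, e, g*q + h, g)

def emit(T, r):
    """Emit output term r: z -> 1/(z - r), swapping the z-face with its opposite."""
    a, b, c, d, e, f, g, h = T
    return (e, f, g, h, a - r*e, b - r*f, c - r*g, d - r*h)

def bilinear(T, x, y):
    a, b, c, d, e, f, g, h = T
    return (a*x*y + b*x + c*y + d) / (e*x*y + f*x + g*y + h)

# Sanity check against the algebraic substitutions, using the addition tensor:
T0 = (0, 1, 1, 0, 0, 0, 0, 1)
x, y = Fraction(7, 3), Fraction(9, 4)
assert bilinear(absorb_x(T0, 3), x, y) == bilinear(T0, 3 + 1/x, y)
assert bilinear(absorb_y(T0, 3), x, y) == bilinear(T0, x, 3 + 1/y)
assert bilinear(emit(T0, 4), x, y) == 1 / (bilinear(T0, x, y) - 4)
```

This is only a sketch of the bookkeeping; the left/right tensor products and transposes described above are one way to package these same updates.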


Now, we have to know when to make an emission.  This diagram defines the conditions under which an emission is indicated:

First of all, this diagram is a bit misleading in one rather blatant respect: Kornerup and Matula were describing rational arithmetic on redundant continued fractions, which demands a completely different test for emission of an output partial quotient.  The emission test shown here is not the one used for their algorithm, but it will work with Gosper's algorithm under the vertex assignment Kornerup and Matula use.  Again, I'm using the Kornerup-Matula vertex assignment.
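In coefficient terms, the emission condition amounts to checking that the four corner ratios a/e, b/f, c/g, and d/h all bracket the same integer part; a minimal sketch (assuming, as throughout, the bilinear labeling of the tensor):

```python
def emission_candidate(T):
    """Return the common integer part of a/e, b/f, c/g, d/h if all four
    agree, else None.  A zero denominator means keep absorbing instead.
    (No attempt is made here to handle denominators of mixed sign; see
    the caveat about negative denominators discussed below.)"""
    a, b, c, d, e, f, g, h = T
    if 0 in (e, f, g, h):
        return None
    floors = {a // e, b // f, c // g, d // h}
    return floors.pop() if len(floors) == 1 else None

# A tensor whose four ratios all have integer part 5 -> emit 5:
assert emission_candidate((263, 93, 82, 29, 48, 16, 15, 5)) == 5
# The fresh addition tensor has zero denominators -> absorb first:
assert emission_candidate((0, 1, 1, 0, 0, 0, 0, 1)) is None
```

When the test succeeds, the common integer is emitted as the next partial quotient of z, and the tensor is transformed as described above.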


Before proceeding to the example calculations (some not yet completed...), there's one more thing I should point out.  Gosper mentions something about the bounding interval for the resultant z either lying inside the interval spanned by a/e, b/f, c/g, and d/h, or outside this interval, depending on the sign of the denominator of the tensor viewed in the algebraic formula form above.  I believe what he means is that if the sign of the denominator is negative, then the interval becomes an interval passing through infinity.  So, roughly, inside (1, 2) would mean z is greater than one and less than two, whereas outside (1, 2) would mean z is less than one or greater than two... an interval passing through infinity.  I'm not sure I fully understand this yet, and I haven't implemented it.  I'm also unsure under what conditions a negative denominator can show up, but in the examples I've tried so far, negative numbers have not appeared and everything seems to work fine.  Still, this is a possibly disastrous loose end, and until I understand more, I must urge caution.


The first example will be the incomplete addition of 45/16 and 27/10.  It is incomplete because there isn't enough information contained in the partial quotients of the input operands to fully determine the expansion of the output continued fraction.  Gosper was largely dealing with infinite continued fractions, where the partial quotients never run out... and besides, infinite continued fractions represent perhaps the most useful application of continued fraction arithmetic.  The problem is simply that an input partial quotient contains rather little information with which to bound the resultant continued fraction expansion.  Others have improved on this, but the limitation still exists.
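For reference, the partial quotients of the two operands are easy to compute: they are just the quotients produced by the Euclidean algorithm.  A quick sketch:

```python
from fractions import Fraction

def cf_terms(x):
    """Partial quotients of a rational: repeatedly split off the integer
    part and reciprocate (the Euclidean algorithm in disguise)."""
    p, q = x.numerator, x.denominator
    terms = []
    while q:
        a, p, q = p // q, q, p % q
        terms.append(a)
    return terms

print(cf_terms(Fraction(45, 16)))                      # [2, 1, 4, 3]
print(cf_terms(Fraction(27, 10)))                      # [2, 1, 2, 3]
print(cf_terms(Fraction(45, 16) + Fraction(27, 10)))   # 441/80 = [5, 1, 1, 19, 2]
```

So the exact sum is 441/80 = [5; 1, 1, 19, 2], which is what the algorithm should be converging toward in the diagram.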

Obviously, there are probably some rather simple 'tricks' to get around these problems with rational operands in Gosper's algorithm... some, in fact, might be considered 'cheating'.  Nevertheless, this route is open to us if we wish... and since I argue that continued fraction arithmetic perhaps has its most useful application with infinite continued fractions, this might be okay anyway.  Mind you, that assertion is debatable; it all depends on what you're trying to do, and there's no clear-cut consensus on a claim like that.

In any case, this simple initial example is meant to illustrate the processes involved in continued fraction arithmetic with Gosper's algorithm.  I hope the illustration will be largely self-explanatory given this short introduction.  The diagram is fairly large... it extends a fair distance to the right.
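Since the diagram may take some staring at, here is the whole process as a minimal code sketch, under the assumptions laid out above: the (a...h) coefficients follow the bilinear form, emission uses the four-way agreement of integer parts, and an exhausted rational operand is handled by absorbing a partial quotient of infinity (one of the simple 'tricks' for rational operands mentioned earlier).  The choice of which operand to absorb from just alternates, which is cruder than what Gosper actually suggests but suffices here; no attempt is made to handle the negative-denominator issue discussed above.

```python
ADD = (0, 1, 1, 0, 0, 0, 0, 1)   # z = x + y in (axy+bx+cy+d)/(exy+fx+gy+h)

def gosper(xs, ys, T):
    """Gosper's algorithm on finite lists of partial quotients."""
    a, b, c, d, e, f, g, h = T
    xs, ys = list(xs), list(ys)
    x_done = y_done = False
    out, turn = [], 0
    while True:
        if all((e, f, g, h)):
            floors = {a // e, b // f, c // g, d // h}
            if len(floors) == 1:                  # emit r: z -> 1/(z - r)
                r = floors.pop()
                out.append(r)
                a, b, c, d, e, f, g, h = e, f, g, h, a - r*e, b - r*f, c - r*g, d - r*h
                continue
        elif x_done and y_done and not any((e, f, g, h)):
            return out                            # tail is infinite: finished
        if (turn == 0 and not x_done) or y_done:  # absorb from x
            if xs:
                p = xs.pop(0)                     # x -> p + 1/x
                a, b, c, d, e, f, g, h = a*p + c, b*p + d, a, b, e*p + g, f*p + h, e, f
            else:                                 # operand exhausted: x -> infinity
                c, d, g, h = a, b, e, f
                x_done = True
        else:                                     # absorb from y
            if ys:
                q = ys.pop(0)                     # y -> q + 1/y
                a, b, c, d, e, f, g, h = a*q + b, a, c*q + d, c, e*q + f, e, g*q + h, g
            else:                                 # y -> infinity
                b, d, f, h = a, c, e, g
                y_done = True
        turn ^= 1

# 45/16 = [2; 1, 4, 3] and 27/10 = [2; 1, 2, 3]
print(gosper([2, 1, 4, 3], [2, 1, 2, 3], ADD))   # [5, 1, 1, 19, 2], i.e. 441/80
```

Absorbing x = infinity is done here by copying the leading x-face onto the opposite face (c, d, g, h = a, b, e, f), which makes the tensor independent of x from then on; that is the 'cheat' that lets finite rational operands run to completion.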


The second example deals with two infinite continued fractions and is therefore probably a more useful application of Gosper's algorithm.  This diagram is quite large... and perhaps not yet complete.  By complete, I mean I'm hoping for a recurrence to show up in the tensors, which would indicate that a period has started.  I think everyone knows that the product of sqrt(5) and sqrt(7) is sqrt(35); however, I thought I understood that some manifestation of recurrence would show up in the tensors as evidence of the periodicity.  I'm starting to believe I misunderstood, because in this example and others like it, there doesn't appear to be any obvious sign of recurrence in the tensors.  I need to investigate this issue further.
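The two operands here are periodic continued fractions, so their partial quotients can be generated lazily rather than stored; a small sketch of that setup (the expansions sqrt(5) = [2; 4, 4, 4, ...] and sqrt(7) = [2; 1, 1, 1, 4, ...] are standard, as is sqrt(35) = [5; 1, 10, 1, 10, ...], the expansion any recurrence in the tensors would presumably correspond to):

```python
from fractions import Fraction
from itertools import islice

def periodic_cf(head, cycle):
    """Lazily yield the partial quotients of a periodic continued fraction."""
    yield from head
    while True:
        yield from cycle

def convergent(terms):
    """Exact rational value of a finite list of partial quotients."""
    v = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        v = a + 1 / v
    return v

sqrt5 = periodic_cf([2], [4])            # sqrt(5) = [2; 4, 4, 4, ...]
sqrt7 = periodic_cf([2], [1, 1, 1, 4])   # sqrt(7) = [2; 1, 1, 1, 4, ...]

# Truncating each input to 20 terms already pins the product down very tightly:
approx = convergent(list(islice(sqrt5, 20))) * convergent(list(islice(sqrt7, 20)))
print(float(approx))   # close to sqrt(35) = 5.9160...
```

In practice one feeds such generators to the absorption step directly, so the inputs never need to be truncated at all; the truncation above is only to show how quickly the convergents close in.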


That's pretty much all for now... more later.