Nate Smith

Limits


Lately I've been rethinking the concept of a limit and how it is used to calculate areas under curves in Calculus. We start by estimating the area with a finite number of rectangles under the curve. We proceed to increase the number of rectangles and decrease their width, giving us a more accurate estimate of the area. We say that as the number of rectangles gets larger and larger (approaches infinity), we can get arbitrarily close to a limit, referred to as the "actual" area. It is usually stated that the limit of the sum as n (the number of rectangles) goes to infinity is the definite integral (i.e. the area). But we also know that in math, while there may be a limit to a sequence, that doesn't mean we necessarily arrive at that limit. That is because we can never get to infinity. If I talk about the limit of 1 + 1/x as x goes to infinity, that limit is 1. I never get to 1, though, even though I can get arbitrarily close to it. It seems as though we shouldn't be able to say that we are actually getting to the area, because our limit never gets to infinity. Does this pose any problems for Calculus or am I missing something?
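
To make the "arbitrarily close" part concrete, here is a small Python sketch (just an illustration of the numbers, not an argument):

# Tabulate 1 + 1/x for growing x: the values get as close to the
# limit 1 as we like, but no finite x ever makes them equal to 1.
for x in [10, 100, 10_000, 1_000_000]:
    value = 1 + 1 / x
    print(x, value, abs(value - 1))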

A more fundamental example I was thinking of involved decimal representations of some rational numbers. For example, 1/3 can be represented as 0.333... But those two quantities are only equal if I have ALL of the 3's after the decimal point. But since there are an infinite number of them, I can't have all of them. The conclusion seems to be that no decimal representation of 1/3 is possible, or that infinite decimals can't really exist.

I think these two examples are two sides of the same coin. Perhaps someone can straighten this out for me.


What you say here is consistent with the calculus, at least as put forth by Newton (from whom I learned most of what I know about calculus). Here is a quote from the scholium after the lemmas in the Principia (Cohen translation):

"It may be objected that there is no such thing as an ultimate proportion of vanishing quantities, inasmuch as before vanishing the proportion is not ultimate, and after vanishing it does not exist at all. But by the same argument it could easily be contended that there is no ultimate velocity of a body reaching a certain place at which the motion ceases; for before the body arrives at this place, the velocity is not the ultimate velocity; and when it arrives there, there is no velocity at all. But the answer is easy: to understand the ultimate velocity as that with which a body is moving, neither before it arrives at its ultimate place and the motion ceases, nor after it has arrived there, but at the very instant when it arrives, that is, the very velocity with which the body arrives at its ultimate place and with which the motion ceases. And similarly the ultimate ratio of vanishing quantities is to be understood not as the atio of quantities before they vanish or after they have vanished, but the ratio with thich they vanish. Likewise, also, the first ratio of nascent quantities is the ratio with which they begin to exist...Those ultimate ratios with which quantities vanish are not actually ratios of ultimate quantities, but limits which the ratios of quantities decreasing without limit are continually approaching, and which they can approach so closely that their difference is less than any given quantity, but which they can never exceed and can never reach before the quantities are decreased indefinitely" (emphasis added).

Does this pose any problems for Calculus or am I missing something?

You are right to express concern, but there are several resolutions. The full story is much too complex and too technical to present here, but I will provide a little sketch going back to the invention of the calculus and Newton.

Newton only had a somewhat vague notion of infinitesimals, and his use of first and last ratios at best only approaches a theory of limits. Newton hoped to use his ratio techniques to ground his theory of fluxions like the "perfectly accurate" geometry which was "cultivated by the ancients." But infinitesimals in calculus were not truly given a rigorous foundation until three centuries later when, in the 1960s, Abraham Robinson developed what is known as nonstandard analysis. (Note that Weierstrass in the 19th century bypassed Newton's infinitesimals and introduced limits in terms of the modern epsilon/delta approach).

When developing his mathematical method of fluxions, in order to overcome the problem of infinitesimals in his method of tangents, Newton introduced his notion of flowing quantities, fluents. Newton used the letter "o" to characterize the indefinitely small growth of a fluent in an indefinitely small time. So a small interval of the quantity x would be represented by (x-dot o), where x-dot is the quantity x with a dot up above. In 1714, looking back at his work over time, Newton made the following statement.

"In demonstrating propositions I always write down the letter o and proceed by the Geometry of Euclide and Apollonius without any approximations. In resolving Questions or investigating truths I use all sorts of approximations which I think will create no error in the conclusion and neglect to write down the letter o, and this I do for making dispatch." [1]

For Newton there was a place to keep his (x-dot o)s, and a place to "dispatch" them. Newton's dynamics are based on propositions which liberally make use of both infinities and limits. For example, Book II of the Principia starts with "Of the Motion of Bodies" and in Section I, Proposition II deals with the resistance of bodies in uniform motion being proportional to their velocity. Newton begins his analysis by letting "time be divided into equal particles" and eventually transforms his discrete approach into the continuous by infinitely reducing those intervals and passing to the limit.

"Let those equal intervals of time be diminished, and their number increased in infinitum, so that the impulse of resistance may become continual ..." [2]

Technically, Newton here makes use of what are known as first-order infinitesimals, and higher-order infinitesimals are used in many other sections of the Principia. Recall that in the first Newton quote I provided, Newton specifically excluded the demonstration of propositions from that to which he applied approximations. Newton's "dispatch" was reserved for "resolving Questions or investigating truths." His mathematical papers reveal aspects of the processes which Newton used to develop his mathematics and physics, but Newton also intentionally obscured the process in some of his writings.

That first quote from Newton accurately reflects the facts as best as scholars know them. Newton made use of infinitesimals in the formal propositions upon which his mechanics and dynamics were based, but he relaxed that strict adherence, to varying degrees, both in the process of discovery of fluxions and fluents and in the applications thereof. But Newton's approach to infinitesimals was not given a foundational basis until three hundred years later by Abraham Robinson, and to understand this you need to study nonstandard analysis. For the epsilon/delta approach to limits, you have to study real analysis at a proper level. These are not trivial issues. Newton's credit for the calculus is well-deserved, but historically the calculus existed in a precarious state until more recent times.

[1] Isaac Newton, Letter to John Keill, May 15, 1714, in The Correspondence of Sir Isaac Newton and Professor Cotes, p. 176, Cambridge, 1850.

[2] Sir Isaac Newton's Mathematical Principles of Natural Philosophy and His System of the World, translated by Andrew Motte in 1729, revised by Florian Cajori, p. 236, University of California Press, 1946.

It seems as though we shouldn't be able to say that we are actually getting to the area, because our limit never gets to infinity. Does this pose any problems for Calculus or am I missing something?

...

I think these two examples are two sides of the same coin. Perhaps someone can straighten this out for me.

It doesn't pose any problems in calculus, because the area is simply defined to be the limiting value. The same occurs in other cases - the sum of a series is defined to be the sum taken in the limit.

Limits are something that I have never been especially happy with - I understand the rationale behind introducing them, but I share your unease towards the idea of taking things in the limit for the reasons you mention. For what it's worth, limits are just one way of formulating analysis (albeit the most popular way, by far) and the subject is only really presented using them as a result of historical contingency. There is a way to formulate it using infinitesimals which gives the exact same results, and I personally find that viewing certain things in terms of infinitesimals makes more intuitive sense than viewing them as limits.

For example, 1/3 can be represented as 0.333... But those two quantities are only equal if I have ALL of the 3's after the decimal point. But since there are an infinite number of them, I can't have all of them. The conclusion seems to be that no decimal representation of 1/3 is possible, or that infinite decimals can't really exist.

I remember in math class I loved this stuff. From this you can arrive at the strange conclusion that

1 = 0.9999999999...

The easiest way to show this is simply to multiply your relation by 3:

1/3 x 3 = 0.33333333... x 3.

I have a hard time grasping this because you try to reason that 0.99999... will be slightly smaller, infinitesimally smaller. But if you never get there and just keep going, then the difference is zero.
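
One way to see the "never quite 1 at any finite stage" point concretely is with Python's decimal module (just an illustration): at every finite precision, 1/3 times 3 comes out as 0.999...9, stopping one digit short of "all the 3's".

from decimal import Decimal, getcontext

# At any finite precision, 1/3 is a string of 3's that stops somewhere,
# so multiplying it back by 3 gives 0.999...9 rather than exactly 1.
for digits in (5, 10, 30):
    getcontext().prec = digits
    third = Decimal(1) / Decimal(3)
    print(digits, third, third * 3)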

It doesn't pose any problems in calculus, because the area is simply defined to be the limiting value. The same occurs in other cases - the sum of a series is defined to be the sum taken in the limit.

...There is a way to formulate it using infinitesimals which gives the exact same results, and I personally find that viewing certain things in terms of infinitesimals makes more intuitive sense than viewing them as limits.

You say that the area is defined as the limiting value. That is interesting from the perspective that area (earlier in the hierarchy of math) is defined as the number of square units that can 'fit' into a given region. If you integrate the function y = x from 0 to 2 (which will yield a right isosceles triangle with legs of length 2), you will get an area of 2 square units. The area could of course be calculated much more easily using simple geometric analysis in this case. The result will be the same either way though.
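
To make the limiting process in that example concrete, here is a small Python sketch (an illustration only) of right-endpoint rectangle sums for y = x on [0, 2]; the sums work out to 2 + 2/n, approaching the triangle's area of 2:

# Approximate the area under y = x on [0, 2] with n rectangles of
# equal width, using right endpoints; the exact area is 2.
def riemann_sum(n):
    width = 2 / n
    return sum((i * width) * width for i in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, riemann_sum(n))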

It seems to me to be imperative that mathematicians show that the limit IS the quantity of square units that fit into a region, even if that region is not a polygon. I don't like the idea of redefining the concept of area; I'd rather just find new ways of calculating it.

I'm not familiar with infinitesimals. I'll have to look into that idea. Thanks for the comments.

I remember in math class I loved this stuff. From this you can arrive at the strange conclusion that

1 = 0.9999999999...

The easiest way to show this is simply to multiply your relation by 3:

1/3 x 3 = 0.33333333... x 3.

I have a hard time grasping this because you try to reason that 0.99999... will be slightly smaller, infinitesimally smaller. But if you never get there and just keep going, then the difference is zero.

It's funny that you mention this example. I arrived at this problem of limits by debating some people on whether 1 = 0.999... I agree that they are equal. I think that the only way to rationally define 0.999... is to say that it equals 1. If they are different, there must be some distance between them, and therefore some number between them. Of course there are none (basically, the principle is: if a - b = 0, then a = b). Everyone who disagreed with me argued that the difference between the numbers is a decimal, then an infinite number of zeros, and then a 1 (i.e. 0.00...01). That's a funny contradiction. When would the 1 actually come?

The only good counterexample I could think of is if you define 0.999... to be the limit of the sum of 0.9 + 0.09 + 0.009 + ... Then the problem becomes: does 0.999... even exist at all, or any other non-finite decimal for that matter? That's where I got stuck and when this thread began.
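
For what it's worth, the partial sums of that series can be computed exactly (a small Python sketch, as an illustration); the gap to 1 shrinks by a factor of ten with each term:

from fractions import Fraction

# Partial sums of 0.9 + 0.09 + 0.009 + ...; the k-th partial sum is
# 1 - 1/10**k, so the difference from 1 becomes as small as we like.
total = Fraction(0)
for k in range(1, 8):
    total += Fraction(9, 10**k)
    print(k, total, 1 - total)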

You say that the area is defined as the limiting value.  That is interesting from the perspective that area (earlier in the hierarchy of math) is defined as the number of square units that can 'fit' into a given region.
This makes sense when you're talking about simple shapes such as circles and polygons, but it quickly falls apart once you start moving on to more complicated things - the Dirichlet function, for instance. It's not at all clear what it means to talk about squares fitting into regions here, yet the function can be integrated to give a definite value. At more advanced levels, area and volume are defined in terms of integrals and measure, since there is no obvious way to extend the common-sense definition to non-trivial functions.

It seems to me to be imperative that mathematicians show that the limit IS the quantity of square units that fit into a region, even if that region is not a polygon.

Your use of "imperative" is way out of line. You cannot arbitrarily restrict what mathematicians may otherwise rigorously do, to what you know. A great deal of brilliant work has gone into this area for centuries, by many very knowledgeable and extremely intelligent mathematicians. First study the history of the problem and learn with proper rigor how it has been addressed, before prescribing to mathematicians what you think they must do. This is not a philosophical problem to be argued by all, but rather a technical scientific problem that requires detailed knowledge to appreciate what is involved.

The only good counterexample I could think of is if you define 0.999... to be the limit of the sum of 0.9 + 0.09 + 0.009 + ... Then the problem becomes: does 0.999... even exist at all, or any other non-finite decimal for that matter? That's where I got stuck and when this thread began.

Why is this a counterexample? As you have indicated, 0.999... is defined as the limit of a sequence of numbers, and as such it is just a different representation of what is meant by the number 1. There are dozens of ways that people have developed to get an intuitive sense for why 0.999... equals 1, but in a real sense those intuitions are superfluous. As long as the limit exists -- and, indeed, whether using epsilon/delta, or infinitesimals, or higher forms, that limit had better exist if mathematics is to be preserved -- then what is the problem? I can give you many intuitive ways to see this, but frankly I think that those intuitions are part of what stands in the way of many people understanding the issue. These things are firmly anchored in a higher-level and more rigorous portion of analysis, but even at this level this is nothing more than the representation of a number according to the definition of the limit of a sum.

Lately I've been rethinking the concept of a limit and how it is used to calculate areas under curves in Calculus. We start by estimating the area with a finite number of rectangles under the curve. We proceed to increase the number of rectangles and decrease their width, giving us a more accurate estimate of the area. We say that as the number of rectangles gets larger and larger (approaches infinity), we can get arbitrarily close to a limit, referred to as the "actual" area. It is usually stated that the limit of the sum as n (the number of rectangles) goes to infinity is the definite integral (i.e. the area). But we also know that in math, while there may be a limit to a sequence, that doesn't mean we necessarily arrive at that limit. That is because we can never get to infinity. If I talk about the limit of 1 + 1/x as x goes to infinity, that limit is 1. I never get to 1, though, even though I can get arbitrarily close to it. It seems as though we shouldn't be able to say that we are actually getting to the area, because our limit never gets to infinity. Does this pose any problems for Calculus or am I missing something?

...

Your example is just like one of Zeno's paradoxes:

In order for me to go from point A to point B, I first have to travel half the distance between A and B, and then half of the next distance, and then half of the remaining distance, and so on, ad infinitum. But since 1/2 + 1/4 + 1/8 + 1/16 + ... never actually equals 1 (that is, it "never gets to" 1), it follows that I could never travel any distance at all! You see the problem in that reasoning?
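
A small Python sketch, purely to put numbers on it: after n steps the distance covered is exactly 1 - (1/2)^n, so no finite number of steps reaches 1, yet the leftover gap drops below any bound.

# Partial sums of Zeno's halving series: after n steps the covered
# distance is 1 - (1/2)**n, leaving a gap of exactly (1/2)**n.
for n in (1, 2, 5, 10, 30):
    covered = sum((1 / 2) ** k for k in range(1, n + 1))
    print(n, covered, 1 - covered)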

You see the problem in that reasoning?

The time intervals between each event (reaching a distance (1/2)^n) get shorter and shorter, so does this suggest time stops as you approach your destination? By this analysis it would seem so, but we know time is continuous and flows at a steady rate. So it seems fair to say that you can get where you want to go. Speaking of that, I've got to go pack and get ready to head out to the logging camp. Here's hoping I can get there.

... but we know time is continuous and flows at a steady rate.

Time is a relational concept, not a thing that "flows at a steady rate."

Speaking of that, I've got to go pack and get ready to head out to the logging camp.

I was wondering about that. I thought you said you were leaving on Wednesday. Have a nice summer, and enjoy your reading.

You see the problem in that reasoning?

I don't.

Zeno's paradox is something that I've never managed to resolve to my satisfaction. It DOESN'T ever get to 1.

Zeno's paradox is something that I've never managed to resolve to my satisfaction. It DOESN'T ever get to 1.

Hence the term paradox...

Hence the term paradox...

Well yeah, but a lot of things like that have fairly subtle reasoning errors once you look closely enough. However, I've never been able to find one in the Zeno paradox. I must have seen almost everything used to 'disprove' it, from the mathematical theory of limits to the Planck length in quantum physics, but none of it has seemed particularly convincing.

Well yeah, but a lot of things like that have fairly subtle reasoning errors once you look closely enough. However, I've never been able to find one in the Zeno paradox. I must have seen almost everything used to 'disprove' it, from the mathematical theory of limits to the Planck length in quantum physics, but none of it has seemed particularly convincing.

Gee, and here I thought Aristotle dispensed with Zeno more than two thousand years ago, and he never even got around to quantum physics.

Zeno's paradox is something that I've never managed to resolve to my satisfaction. It DOESN'T ever get to 1.

First off, nature does not make leaps.

What about this statement: The value never goes above 1 but always increases.


To return to Nate's original questions on limits and computation of areas with integrals, there are several basic issues. One is what it means for a sequence to have a limit; another is what it means when the limit is a real number that does not exist in the form of a rational number representation; and another is the meaning of area as a limit. These issues are actually quite easy to understand, but they can be tricky to pin down if you're not familiar with some details of the concepts and how they are related. Fully grasping them requires both technical and conceptual explanations. The conceptual explanation is what is usually omitted, so mainly what you are missing is the role of a hierarchy of abstract concepts that conceptually organize the technical details and the relations between them.

... while there may be a limit to a sequence, that doesn't mean we necessarily arrive at that limit. That is because we can never get to infinity. If I talk about the limit of 1 + 1/x as x goes to infinity, that limit is 1. I never get to one though, even though I can get arbitrarily close to 1. It seems as though we shouldn't be able to say that we are actually getting to the area [limit], because our limit[ing process] never gets to infinity.

Here we have the first issue, in which the limit is a simple number (in fact an integer, 1) so there is no question about its meaning, only that of the limiting process itself (we will get to its use for "area" later). When limits of sequences are first presented in a calculus course, the explanation is typically given in an "intuitive" way, using almost allegorical terminology about numbers "going" to "infinity" while something else "approaches" the limit. These explanations begin to cause confusion when you start to think more about the meaning and to ask the right questions without more precisely defining the concepts first.

In this case you can easily see that the expression (1 + 1/x) becomes closer to the number 1 as the variable x becomes larger, and that the term 1/x "eventually goes away". This is where the trouble starts, because x can't actually become infinite; the "infinite" is not a number to which x can refer. So what does it mean to say that the expression "becomes" 1? The problem is that 1/x isn't actually "going" anywhere; it simply represents a static sequence of (finite) numbers that exist, explicitly or implicitly. The limit is not something, mathematically, that anything "gets" to. The "intuitive" notions are useful for a certain informal visualization of the process, especially in the beginning, but by themselves leave the concepts too vague and misleading (and open up confusion over Zeno's paradox, etc.)

To simplify it, suppose that the variable x is an integer n, so we have a definite, indexed sequence in mind (but we will return to the continuous variable x again below). Hoping that this explanation is neither too detailed nor too condensed for your particular background, the meaning of convergence to a limit is technically stated in the form that for any specified precision e (>0), there is some positive number N such that for any number n > N, the expression is approximately the limit value 1 within the specified precision: |(1+1/n) -1| < e.

This is a finitistic description of the mathematical limit process, and avoids allegorical notions of things "going" in time to places they can never arrive at. But of course the concept of the mathematical infinite is still implicit in the finitistic description because the numbers n are open-ended.
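
For this particular sequence the dependence of N on the precision e is explicit: |(1 + 1/n) - 1| = 1/n, so any N >= 1/e works. A small Python sketch, as an illustration only:

import math

# For s(n) = 1 + 1/n with limit 1, the error is exactly 1/n, so
# choosing N = ceil(1/e) guarantees |s(n) - 1| < e for all n > N.
def N_for(eps):
    return math.ceil(1 / eps)

for eps in (0.1, 0.01, 0.0001):
    N = N_for(eps)
    n = N + 1  # any n > N will do
    print(eps, N, abs((1 + 1 / n) - 1) < eps)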

From here you can see how, for the function (1 + 1/x), you can give the same kind of formulation with the integer n replaced by any number x -- so it works for the continuum as well as for a sequence indexed by n. You know exactly what it means to say that the limit as x->infinity of (1+1/x) = 1 (or more briefly, (1+1/x)->1 as x->infinity). The limit is not something anything "gets to" but a fixed number that members of the sequence are within some precision of, depending on how far along in the sequence they are.

Part of the answer to Zeno's paradox lies in the technical definition -- that makes no direct reference to an "infinity", let alone an "actual infinity" -- and part of it is in not confusing abstract infinite sequences in mathematics with things "going" someplace as if there were a real motion with infinitely subdivided precision in reality and an infinite number of steps. At some point, the finite motions made by real objects like tortoises and hares (in similar paradoxes) cannot be measured as smaller and smaller subdivisions of motion occurring before the end is actually reached. Such processes of subdivision are themselves only abstractions. (Beyond a certain point it makes no sense to split hares.)

In elementary calculus courses and books you may or may not encounter the more precise finitistic definition or its application to actual problems. You can understand quite a lot of calculus without it, but start to run into confusions as soon as you start asking more epistemologically oriented questions about its cognitive status (and historically, the lack of a proper definition led to contradictions). On the other hand, beginners often have difficulty understanding the point of the finitistic definition, which seems cumbersome for something that seems so obvious at an elementary level (Of course "1/x goes to zero, what do you need all the N's and n's and e's for?"). But once you get the point of the precise definition and have the motivation, it makes a lot more sense (and makes it possible in practice to prove more complicated limits which aren't so obvious without it). In more advanced topics in calculus and beyond in abstract analysis, that approach is used routinely. But the idea of a limit of an infinite sequence is also used directly, as an abstraction, without having to repeat the whole mechanism every time the concept is invoked.

A more fundamental example I was thinking of involved decimal representations of some rational numbers. For example, 1/3 can be represented as 0.333... But those two quantities are only equal if I have ALL of the 3's after the decimal point. But since there are an infinite number of them, I can't have all of them. The conclusion seems to be that no decimal representation of 1/3 is possible, or that infinite decimals can't really exist.

and later:

Then the problem becomes: does 0.999... even exist at all, or any other non-finite decimal for that matter? That's where I got stuck and when this thread began.

This question is very much related to the first issue and has essentially the same answer. The infinite decimal expansion .3333... represents a sequence {.3, .33, .333, ...}. The representation of 1/3 as 0.3333... means that the number 1/3 is the limit of that sequence. When you say 1/3 "equals" .3333... (or .999... "equals" 1) you are referring to the abstract limit process, not the equality of 1/3 with something containing an actual infinity of decimal places. There is no actual infinity and no number with an actual infinity of decimal places or anything else. To say that the decimal expansion is infinite refers to the open-ended sequence -- in which there is always the potential, in principle, for a higher, but still finite, precision. But once you have the higher abstraction, you can say the two representations are "equal" because they represent the same value. (Also see the discussion below on irrational numbers.)

We proceed to increase the number of rectangles and decrease their width, giving us a more accurate estimate of the area. We say that as the number of rectangles gets larger and larger (approaches infinity) we can get arbitrarily close to a limit, referred to as the "actual" area.

and

It seems to me to be imperative that mathematicians show that the limit IS the quantity of square units that fit into a region, even if that region is not a polygon. I don't like the idea of redefining the concept of area; I'd rather just find new ways of calculating it.

This brings us to the question, part of which is only implicit in what you asked: What happens when the limit, such as the area under a curve, is an irrational number, like pi*r^2 for a circle? What does it mean to be the "actual area" (whether a rational or irrational number) defined only as an infinite limit representing the inside of an infinitely smooth curved boundary that is incommensurable with square or rectangular units of area? What does it mean to have an infinite sequence that never gets to infinity, in which the limit it never gets to is also an irrational number defined only by an infinite decimal that also doesn't exist?

Once again the answer lies in the hierarchy of abstract concepts concerning various mathematically infinite limit processes.

An irrational number refers to a sequence of rational numbers, for example the square root of two is a number which when squared equals 2 within some precision. It is an abstraction referring to a sequence of rational numbers with a certain property, and is at a higher level of abstraction than the rationals. (Refer to the chapter on abstractions from abstraction in IOE.) You often see irrational numbers described by such sequences, but that isn't enough: you also have to integrate these facts into an abstract concept, concretized by a symbol while omitting the differences in precisions. Such is the meaning of the abstract concept "square root of two", whose referents are the rational numbers in the sequence with a certain property: The facts that give rise to the concept are the property that when squared they are two to some degree of precision.
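
As a concrete illustration of such a sequence (a Python sketch of my own, using Newton's iteration for the square root): each term is an exact rational number, and the squares approach 2 to any required precision.

from fractions import Fraction

# A sequence of rationals whose squares approach 2: Newton's iteration
# x -> (x + 2/x) / 2 stays rational, and its squares get ever closer to 2.
x = Fraction(1)
for step in range(1, 6):
    x = (x + 2 / x) / 2
    print(step, x, float(x * x - 2))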

It is a fundamental fact that not every convergent sequence of rationals has a limit that is a rational number. When you say that some sequence has a limit which is an irrational number, you are invoking a higher level abstraction for numbers than the concepts of integers or rationals; while in the technical and numerical meaning behind it the sequence and its limit are equivalent sequences, in effect converging to each other. But for mental economy you omit those details; you invoke the higher level concept of irrational number and say that the sequence converges to such a number.

When you do this for the area under a curve by taking the sequence of partial sums of the areas of rectangles you are operating on a much higher level of abstraction than adding up finite rectangles geometrically or physically to measure an area. To make the rectangles smaller and smaller you have to create conceptual units of area that are smaller and smaller by a process of subdivision. But there are no rectangles with infinitely small sides any more than there are numbers with infinitely many decimal places of precision or smallness. The concept of area under a smooth curve is a higher level abstraction just as is an irrational number and any limit process. It is neither necessary nor possible to explain the "actual" area under every smooth curve as a sum of finite square units of area that geometrically fit under the curve. (But it is still true that if you could conceptually break up the appropriate number of squares and conceptually re-arrange the pieces to fit under the curve, they would fill the area.)

Furthermore, the very idea of a smooth curve is itself an abstraction because you are omitting the width of the line as non-essential. So by the time you get to the point of expressing the area by an integral you are already dealing with abstractions in many different ways, far higher in the hierarchy than measuring areas as some number of unit squares that physically fit under the curve.

To sum up, understanding the cognitive status of concepts and methods like limits, integrals and areas under curves requires more than the usual collection of commonly known technical procedures. You also have to organize them and see how they lead to objective concepts at a high level of abstraction, and understand how the concepts are related. Once you have that, the mysteries go away.

It's funny that you mention this example. I arrived at this problem of limits by debating some people on whether 1 = 0.999... I agree that they are equal. I think that the only way to rationally define 0.999... is to say that it equals 1. If they are different, there must be some distance between them, and therefore some number between them. Of course there are none (basically, the principle is: if a - b = 0, then a = b). Everyone who disagreed with me argued that the difference between the numbers is a decimal, then an infinite number of zeros, and then a 1 (i.e. 0.00...01). That's a funny contradiction. When would the 1 actually come?

The only good counterexample I could think of is if you define 0.999... to be the limit of the sum of 0.9 + 0.09 + 0.009 + ... Then the problem becomes: does 0.999... even exist at all, or any other non-finite decimal for that matter? That's where I got stuck and when this thread began.

N = 0.999...

10N = 9.999...

Subtract:

9N = 9

N = 1

Well yeah, but a lot of things like that have fairly subtle reasoning errors once you look closely enough. However, I've never been able to find one in the Zeno paradox. I must have seen almost everything used to 'disprove' it, from the mathematical theory of limits to the Planck length in quantum physics, but none of it has seemed particularly convincing.

Try this one to see if it helps you.

Aristotle on Zeno


Another interesting item.

1 = 0

1 - 1 + 1 - 1 + 1 - 1 + ... = (1 - 1) + (1 - 1) + ... = 0 + 0 + 0 + ... = 0

1 - 1 + 1 - 1 + 1 - 1 + ... = 1 - (1 - 1) - (1 - 1) - ... = 1 - 0 - 0 - ... = 1
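
For what it's worth, a quick Python sketch of what the partial sums of that series actually do (an illustration only): they oscillate between 1 and 0 and never settle, which is why regrouping the terms can appear to force either value.

# Partial sums of 1 - 1 + 1 - 1 + ...: they alternate between 1 and 0,
# so the series has no limit and regrouping its terms is not valid.
total = 0
for k in range(10):
    total += (-1) ** k  # terms are +1, -1, +1, -1, ...
    print(k + 1, total)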

Check out Zeno's Paradoxes for more info.

Another interesting item.

1 = 0

1 - 1 + 1 - 1 + 1 - 1 + ... = (1 - 1) + (1 - 1) + ... = 0 + 0 + 0 + ... = 0

1 - 1 + 1 - 1 + 1 - 1 + ... = 1 - (1 - 1) - (1 - 1) - ... = 1 - 0 - 0 - ... = 1

Check out Zeno's Paradoxes for more info.

This paradox exists within some limit and not beyond it. What is that limit?

Everything exists.

Everything exists equally.

Everything is limited.

Know your limits.


I found what ewv said here to be very helpful.

I too had a big aha-moment when, in my first college calculus course, I came across a definition of limit as a fixed value, and not just some value that is approached but never reached.

It seems to me as though a limit is a value which a function differs less and less from as the variable approaches some number. In that way I can view it as fixed. But I'm not sure yet if I've got it 100%.

This is what ewv said (Fourth paragraph):

To simplify it, suppose that the variable x is an integer n, so we have a definite, indexed sequence in mind (but we will return to the continuous variable x again below). Hoping that this explanation is neither too detailed nor too condensed for your particular background, the meaning of convergence to a limit is technically stated in the form that for any specified precision e (>0), there is some positive number N such that for any number n > N, the expression is approximately the limit value 1 within the specified precision: |(1+1/n) -1| < e.

I get stuck here because I don't get what is meant by precision, nor what the relationship is between the variables N, n, and e. I mean, for instance, are they values of the function or of the variable?

I found what ewv said here to be very helpful.

I too had a big aha-moment when, in my first college calculus course, I came across a definition of limit as a fixed value, and not just some value that is approached but never reached.

It seems to me as though a limit is a value which a function differs less and less from as the variable approaches some number. In that way I can view it as fixed. But I'm not sure yet if I've got it 100%.

This is what ewv said (Fourth paragraph):

To simplify it, suppose that the variable x is an integer n, so we have a definite, indexed sequence in mind (but we will return to the continuous variable x again below). Hoping that this explanation is neither too detailed nor too condensed for your particular background, the meaning of convergence to a limit is technically stated in the form that for any specified precision ε (>0), there is some positive number N such that for any number n > N, the expression is approximately the limit value 1 within the specified precision: |(1+1/n) -1| < ε.

I get stuck here because I don't get what is meant by precision,...

What you stated is correct. The formulation you are stuck on quantifies that in algebraic terms. That is then used to prove mathematically that a function converges in accordance with a precise mathematical principle for the criterion of convergence.

The mathematical definition of convergence quantifies in terms of finite numbers specifying size and precision what it means to say that the infinite sequence differs less and less from the limiting value as the variable -- the index n in this case -- approaches 'infinity', i.e., becomes larger and larger. The concept of the mathematical infinite is still implicit because the numbers n are finite but open-ended. This process of conceptualizing and quantifying infinite processes in an algebraic formulation in terms of finite numbers and inequalities was termed the "algebraization of analysis". ("Analysis" in this context means the infinite processes of "calculus" and broader abstractions based on them.)

The "precision" ε is the ordinary meaning of how precise a numerical measurement is, but quantified explicitly to say how precise. That the function value approximates the limit means they are the same within some degree of precision; beyond that level of precision they may be different. If they are the same within a particular degree of precision, then the approximation is accurate to that extent. (Otherwise you have lost the accuracy but you have still specified the numbers to a higher degree of precision.)

The precision can be specified by how many decimal places you need for the value to be close enough to the limit (in this case the limit is 1). Taking n = 100, the 100th term in the sequence is 1 + 1/100 = 1.01, which equals the limit 1 within the precision .01.

The precision can in general be measured by how many zeroes are to the right of the decimal point: A precision within ε = .0001 is a greater precision than .01. An n for which 1+1/n is within .0001 of 1 is closer to the limit 1 than an n for which 1+1/n is within .01 of the limit 1.

So in general you could specify precision in terms of M decimal places in the form ε = 10^-M to make ε small enough to specify any required precision.

(This example is still the simplified version of the function originally cited by Nate, as a sequence indexed by integers {1+1/n} rather than a function of a continuous variable f(x) = 1+1/x.)

The paragraph you are stuck on is the usual definition of convergence found in any introductory calculus book that goes into this depth beyond the verbal description you gave. I will try to explain this conceptually (but without expanding on it too much), because many textbooks give the correct mathematical formulation but leave you puzzled about what it means and why -- for example, why ε comes before N -- sometimes turning it into a kind of game in which "players" take turns with "moves" in an order given with no explanation of what it really means or why you are doing it: only "you give me an ε and I give you an N", etc.

If you understand the meaning of the process conceptually in terms of precision of the numbers and what is being measured and why, you will understand what the conceptual dependencies are and in what order you have to specify the different quantities.

A fuller explanation would include more examples of convergent sequences together with examples of sequences that don't converge, or which do in a non-obvious way, in order to concretize the abstractions and illustrate more of what they are for and the kinds of facts which they account for, but you can find those in ordinary books, and I assume that if you have gotten this far you don't need more of that just for a basic understanding. As you read through it, however, write it out for yourself and work through calculations, both algebraically and with a calculator with specific numbers, to see how it works out. (Always do this when reading mathematics.)

For many elementary cases you don't need this method of quantifying convergence at all -- it is obvious what the limit is and why in this example and many more -- but in the 19th century it was discovered that in more complex situations and in reasoning about more abstract general theorems mathematicians were getting the wrong answers because they had inadequate conceptual means for more thorough analysis of convergence and they lacked a means to quantify the 'infinite' limiting process in terms of finite numbers. This more precise formulation of concepts of limits makes it possible to analyze much more general cases in proving theorems as well as to reason about more complex specific functions and sequences in the analysis (calculus) of real and complex numbers and in functional analysis.

This may seem verbose, but once you understand it, become accustomed to it, and learn a collection of practical procedures and the many theorems you can use to work with the mathematical criteria, it becomes second nature and you don't have to stop to think through the explanations.

... nor what is the relationship between the variables N, n, and ε. I mean, for instance, are they values of the function or of the variable?

n in this case is the variable, ε is a degree of precision, and N tells you how big n has to be for a particular degree of precision within which the value of the function (i.e., a term in the sequence in this case) approximates the limit.

For example, for the sequence s(n) = 1 + 1/n with limit L = 1, you measure the accuracy of the approximation for the nth term as | s(n) - L |. If n > 100, then | (1+1/n) - 1 | = 1/n is always less than 1/100 = .01. If n > 1000, then | (1+1/n) - 1 | is always less than 1/1000 = .001.

For there to be convergence it isn't enough to find just one value of n that is precise enough; you need a value of n for which all subsequent terms satisfy the condition for each ε. Otherwise you might think it has converged, but later in the sequence it wanders off again and does not approach the limit. (The sequence {1+1/n} of course always becomes closer to 1 as n increases, so that doesn't happen in this case.) This is why you must be able, in principle, to always select ε first, then show that there is an N that depends on it so that all n greater than N work.

Numbers like .000100013 could also be used for ε, but that has extraneous digits to the right of .0001 that are irrelevant to the order of magnitude and the overall process of convergence. If a precision ε with extra digits making it a little large for what you need is not good enough, you can always select the next order of magnitude with another zero after the decimal point. Nevertheless, the conditions for convergence need not be tied to the number of zeros after the decimal point, or to the decimal system of representing numbers at all, so ε is generally specified as any real (or rational) number.

From this you can see in general how to quantify convergence for any sequence s(n) to a limit L in the form: For any sequence {s(n), n=1,2,3,...}, s(n) converges to L if and only if for any real number ε > 0 there is an N(ε) > 0 (meaning N depends on ε) such that for all n > N, it is the case that | s(n) - L | < ε.
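
Here is a hedged sketch of that criterion in Python (a finite probe only -- no finite computation can check all n > N; the actual proof is the inequality 1/n < ε):

# Numerically probe the convergence criterion for s(n) = 1 + 1/n, L = 1:
# for each eps, take the N from solving 1/n < eps and spot-check a
# stretch of n > N. This is evidence for the criterion, not a proof.
def s(n):
    return 1 + 1 / n

L = 1
for eps in (1e-2, 1e-4, 1e-6):
    N = int(1 / eps) + 1
    ok = all(abs(s(n) - L) < eps for n in range(N + 1, N + 1001))
    print(eps, N, ok)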

For a function of a continuous variable f(x) it works the same way, with x increasing without bound, but not restricted to integer values: For any function f(x) of a continuous variable x, f(x) converges to L as x increases without bound if and only if for any real number ε > 0 there is an X(ε) > 0 such that for all x > X, it is the case that | f(x) - L | < ε.

But that is only one simple case because you can also have a limit of a function as the independent variable approaches a fixed number rather than 'infinity'. To specify convergence to L here, the dependency is on x approaching a fixed number a rather than growing ever larger without bound. For this you need to specify the precision δ within which x matches a instead of some X for which x > X:

For any function f(x) of a continuous variable x, f(x) converges to L as x approaches a, i.e., f(x) → L as x → a, if and only if for any real number ε > 0 there is a real number δ > 0 such that for all x with | x - a | < δ, it is the case that | f(x) - L | < ε. Notice that the δ is dependent on the point a as well as on ε and may be different at every point. If it is independent of the points in some interval, i.e., if there is a way to select a δ that works for all of them given any ε, then the convergence is "uniform" on the interval.
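
To illustrate the dependence of δ on both ε and the point a, here is a sketch with an assumed example function (f(x) = x² at a = 3, L = 9, my own choice, not from the discussion above): since |x² - 9| = |x - 3|·|x + 3| < 7·|x - 3| whenever |x - 3| < 1, the choice δ = min(1, ε/7) works at this particular point.

# eps-delta at a point: f(x) = x**2, a = 3, L = 9. For |x - 3| < 1 we
# have |x + 3| < 7, so delta = min(1, eps/7) forces |f(x) - 9| < eps.
def f(x):
    return x ** 2

a, L = 3.0, 9.0
for eps in (0.1, 0.001):
    delta = min(1.0, eps / 7)
    samples = [a - 0.999 * delta, a + 0.5 * delta, a + 0.999 * delta]
    print(eps, delta, all(abs(f(x) - L) < eps for x in samples))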

That a function f(x) is continuous at a point a means that the limit of the function near the point is the same as the function value: f(x) → f(a) as x → a. The finite algebraic analysis of the limit process shows what it means for a function not to jump at a point and how to measure it in the abstract continuum approaching the point. This definition of continuity relies on the concept of a limit already precisely defined, so you don't have to repeat all the details with the ε and δ.

Once you understand these concepts, you concisely use the abstract concept "limit" and the criteria for convergence without the repetition of thinking through the reasoning for these definitions based on quantifying the limiting process with what is called the "ε-δ theory of limits". You know that your concepts of limits are firmly established, leaving the details implicit, and you can routinely apply the convergence criterion as you need them, re-introducing the dependency of specifications of precision of measurements only as required for whatever specific abstract analysis you are doing.
