Posted 16 Apr 2005

Ken, I'm no pro in higher math, but I'll give this a try.

There are two ways to solve every problem: the quantitative approach and the qualitative approach. You've been suggesting that all we can do is the quantitative approach, which means that for each new N we have to test whether the hypothesis is true by performing the required mathematical steps. The other approach, which the Turing machine cannot take, is the qualitative one: understanding the nature of the entities involved. Here we have to understand the nature of prime numbers and their distribution across the number line. For example, we can start at the low end of the scale: 2, 3, 5, 7, 11, 13, 17. However, as you go up, the distance between prime numbers increases, and the likelihood increases that, for any number N, there will be some numbers below it that can be multiplied together to produce it. A low number has, essentially speaking, very little chance of having smaller factors. Even if you didn't know that 2 was prime, you could still say there's a 99% chance it is, because it is literally two units away from zero: there's only one viable unit between it and zero that could serve as a factor. For 3 it's almost the same, but now the rules are relaxed a tiny bit, because there are two units between it and zero, and thus a correspondingly higher chance that some numbers will serve as factors for it. Apply this principle to a prime number in the millions, and the chances grow progressively higher that, though it may look just like a prime, there will be some numbers below it that can be multiplied together to produce it.

So, with respect to the "Goldbach" algorithm, the chances that it remains true grow progressively smaller as you go higher up.
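The quantitative approach described above can be sketched as a short program. This is a minimal illustration, not anything quoted in the thread; `is_prime` and `goldbach_pair` are names I'm introducing for the sketch:

```python
def is_prime(n):
    """Trial division: check whether n has any factor up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """For an even n > 2, return the first pair of primes summing
    to n, or None if no such pair exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None
```

For example, `goldbach_pair(100)` returns `(3, 97)`. The point of the quantitative approach is that each N must be tested individually like this; nothing in the procedure itself settles the general question.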
If, as we proved just now, the 'distance' between two prime numbers increases at a certain rate as we go higher up in numbers, then the algorithm will find it more and more difficult to gather the two primes it needs to make the sum. By this principle, it is certainly feasible to imagine N being so high (let's say the number of atoms in the universe raised to the power of the number of atoms in the universe) that the distance between any two prime numbers will simply be too great to find two appropriate ones that add up to N. Or, to put it in reverse, this little algorithm you've put before us basically asks: will the distance between two prime numbers ever be "too big" for the algorithm to continue? The answer should be yes. So yes, it will terminate. The answer "no", while seemingly wrong according to my discussion above, would still be a proper and valid one. The answer "we can never know" is not a proper and valid one, and it reflects error and laziness on the part of the people calculating rather than some kind of metaphysical limitation on consciousness.

So a human mind, by working qualitatively and looking at the nature of things, can solve this and every other 'unknowable' problem. Self-referential conundrums such as Gödel's can simply be discounted as irrelevant, which is what I think Dr. Binswanger has done. Every problem can either be solved or classified as improper. No problem, especially by one of us, should ever be classified as "impossible to ever determine", which is what you seem to be doing here.
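The thread never quotes the "Goldbach algorithm" itself, so the following is only an assumption about its shape: a scan over the even numbers that halts exactly when it finds an even number with no two primes summing to it. Whether the unbounded scan ever halts is precisely Goldbach's conjecture (no counterexample is known). The `limit` parameter is my addition so the sketch can actually be run:

```python
def is_prime(n):
    """True iff n is prime (trial division up to sqrt(n))."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def has_goldbach_pair(n):
    """True iff some two primes sum to the even number n."""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def goldbach_search(limit=None):
    """Scan even numbers 4, 6, 8, ... Halt and return the first
    even number with no Goldbach pair (a counterexample), if one
    exists. With limit=None this loop halts iff the conjecture is
    false; the bounded version stops at limit and returns None."""
    n = 4
    while limit is None or n <= limit:
        if not has_goldbach_pair(n):
            return n  # counterexample found: the search halts
        n += 2
    return None  # no counterexample found up to limit
```

Running `goldbach_search(limit=2000)` returns `None`, i.e. every even number up to 2000 does have a pair; the question under debate is what, if anything, the unbounded loop's behaviour can tell us in advance.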

Posted 16 Apr 2005

"Self-referential conundrums such as Gödel's can simply be discounted as irrelevant, which is what I think Dr. Binswanger has done."

Specifically regarding Gödel's Theorem: it's not irrelevant, just limited in scope (i.e. context) to symbolic formal systems. (HB happened to note that in a fairly recent HBL post as well.)

Posted 24 May 2005

Questions about the halting problem:

1. If you define halting, haven't you solved the problem? On the other hand, given a definition of halting you might argue: given one state of halting, there is a more complex state of halting which the prior definition did not encompass. Therefore, whatever you define halting as, there are more ways for a program to halt. This comes across as "whatever number you choose, there are numbers which are higher."

2. Since I reached two different conclusions here, what is wrong with one or both?

3. Is it true to say: given an algorithm, there exists an algorithm which evaluates whether it will halt or not? (I.e., the halting problem says it's not the same algorithm.)
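On question 3, the standard diagonal argument can be sketched in code. For any single claimed halting decider, one can mechanically construct a program on which that decider is wrong; this is why no one algorithm decides halting for all programs, even though for each fixed program some algorithm happens to give the right answer. The names `spite` and the toy deciders below are illustrative, not anything standard:

```python
def spite(candidate_halts):
    """Given any claimed decider for 'does f() halt?', build a
    zero-argument program g that does the opposite of whatever
    the decider predicts about g itself."""
    def g():
        if candidate_halts(g):
            while True:   # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return g

# Demo with two trivial deciders:
always_yes = lambda f: True
g = spite(always_yes)
# always_yes(g) is True, yet running g() would loop forever,
# so always_yes is wrong about g.

always_no = lambda f: False
h = spite(always_no)
# h() returns immediately, so always_no is wrong about h.
```

The same construction defeats any candidate decider, however clever, which is the content of Turing's proof: the problem is not a vague definition of "halting", but that the decider can be fed a program built from the decider itself.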