
Posts posted by kenstauffer

  1. I think the self-check-out lanes are a great idea.

    First, I can scan my stuff quicker than most people at any of the stores.

    Second, it brings down prices, which in turn makes my life better.  I have more money to spend on other values.

    Lastly, the lines are always empty in my local Wal-Mart, which means I am in and out as quickly as possible.  Again, this enhances my life because my time can be spent in other areas. 

    Economics is defined in many ways, but I use a definition I once read (I can't remember where): the conservation of one's resources.  This does not just mean your money, but your whole life.  Anytime I can get something of the same or like quality for less effort, it allows me to use that effort in another area.  This is one science that a lot of people under-value in their own lives.

    You probably think that Jim Taggart's idea to remove dining cars was a great example of conservation of resources. Maybe we should require fast-food customers to clean the bathrooms. Then the savings could be passed on to the customer, instead of being wasted on hiring a person to clean the toilets. Heck, I bet you can clean toilets faster than most people.

  2. Here is my take on self-checkout, which I loathe.

    As long as products have bar codes that must be located before they can be scanned, then it doesn't make sense to make the customer scan their own products.

    The checkout person is much faster because they know where the barcode is located on the eggs, the milk, etc. Whereas for me it is a waste of time that I could spend doing something like enjoying life.

    I hate self-checkout because I find it very degrading to be put in the position of a minimum-wage high schooler just to buy food. Why did I go to college and spend thousands of dollars and hours learning to use my brain, if when I go to spend my money I end up doing it all myself?

    Sorry to sound like an elitist jerk, but we all deserve a rich life filled with labor-saving technology; not one reduced to carpooling in diamond lanes, communal buses, pumping gas, checking your own oil, and checking out your own groceries. Screw Albertson's and Wal-Mart. I deserve better!

  3. Are you criticizing Brad's character for generally expressing something negative about his former friend, or solely for the particular things that he said?

    The latter.

    If Brad had said: "Willis is no longer my friend, but I will give you his phone number with the understanding that I do not recommend associating with him," would that have been acceptable?

    The reworded quote is much better.

    In that case: I wouldn't pry further (unless Brad wants to get more specific). I would continue my plan to call Willis. I wouldn't say anything to him (unless he says something about his former friendship with Brad).

  4. I would get Willis's number and then throw out Brad's (metaphorically speaking).

    What Brad does and says in this example reveals more about himself than about Willis. I would wonder if Brad is trying to indirectly punish Willis rather than protect me from him. Unless Willis is a suspected serial killer (purse snatcher, crack dealer, etc.), Brad's first response to my request should be "Sure, no problem."

    This example also establishes that Willis and Brad are friends (or were friends). Brad doesn't hesitate to trash his former friend for a relative stranger. That speaks volumes about Brad's character.

    Let's consider what Brad is accusing Willis of doing. Brad's accusations are sufficiently vague that Ayn Rand or Leonard Peikoff could be accused of the same. Either Brad has a very rigid/intolerant standard for whom people should associate with (unlikely, considering he is outgoing and social), or Brad is subconsciously trying to punish Willis for something (there may be other possibilities). Both of these speak poorly of Brad, not necessarily of Willis.

    Finally, Brad doesn't appear to have any problem telling me what to do ("don't call him, if you are smart"). This is a very presumptuous attitude, and I personally don't like to be dealt with this way.

    What to do? Insist on the number, call Willis, and wait for more information about Brad's character.

  5. A productive methodology I sometimes apply to questions like this is to ask myself whether there are examples of the technology in the animal kingdom.

    Given life's amazing propensity for discovering things like sonar, it isn't unreasonable to conclude that mirrors are probably not an effective weapon, as life has not converged on this strategy. It's not proof, but I think it's a worthwhile strategy when first "peeling the onion" on such questions.

    Another element of such thinking is to try to imagine a series of small incremental steps, each with survival value, leading up to the "mirror-based weapon". (It's not a requirement that each step be useful as a mirror-based weapon; it just has to be useful for something.)

  6. On Apollo 13, the networks did not carry the capsule broadcast live. Was it moral for mission control to withhold this from the astronauts? If the astronauts had explicitly asked whether the networks showed their broadcast, what should mission control have said?

    On Apollo 12, lightning caused problems during the launch. Ground controllers were able to verify that it was safe to continue to the moon. However, they had some concern that the pyrotechnics for the re-entry parachutes might not work. Was it wrong to say nothing of this to the crew?

    In both cases, mission control seems to be limiting the information given to the crew (rather than outright lying), in order to protect the crew from the facts of reality. Are these examples of lying? What principle makes these interactions justified? Are these just real-life examples of lifeboat scenarios?

  7. This link concerns Michael Moore, and how he may have Narcissistic Personality Disorder.

    It made sense to me. When I was reading up on narcissists, the one thing that struck me as a very useful diagnostic trait is that they are happy with bad attention just as much as good attention. This differentiates them from most people (who to some extent want attention too), because most people avoid the bad kind of attention.

    An example of bad attention comes from the world of crime. A serial killer, when caught, will tremendously enjoy the court appearances and the angry crowds.

    This relates to Michael Moore specifically, because he seems not to care about the kind of attention he is getting.

  8. Doesn't this just have to do with the nature of prime numbers?

    Yes, but recast in the form of an algorithm. If a human can solve the halting problem, then a human should be able to examine this and definitively report YES, it halts, or NO, it does not.

    There is no (known) "maximum prime number".

    Yep, and this loop explores all even numbers starting with 4, and it only halts when it discovers an even number that cannot be expressed as the sum of two prime numbers.

    If there is no maximum prime number, then every even number can be expressed as the sum of two prime numbers.

    If that's true then you are a millionaire.


  9. It is unknown whether the loop terminates, as the Goldbach conjecture is still unsolved.

    (My comment for GOLDBACH said it always terminates, but that doesn't mean it always returns TRUE. If it is found to return FALSE for any input 'n', then the conjecture is false. So far this function has returned TRUE for values as high as 2 x 10^17.)

  10. I screwed up... Arghhh. I need to change the final loop to be "not GOLDBACH()".

    Does this loop terminate?

           halt := False
           n := 2
           repeat
                if not GOLDBACH(n) then
                     halt := True;
                n := n + 2;
           until halt = True

  11. Can you provide one such algorithm where a human cannot figure out if it will terminate or not (assuming we already know all of the variables beforehand)?

    This is my implementation for the Goldbach conjecture:

    function PRIME(n)
     -- True if n is prime
     -- (This algorithm terminates for all n)

    function GOLDBACH(n)
     -- True if 'n' is the sum of 2 primes, else False.
     -- (This algorithm terminates for all n)
     for x = 1 to n
      for y = 1 to n
       if PRIME(x) and PRIME(y) and (x + y) = n then
        return TRUE
     return FALSE;

    Does this loop terminate?

    halt := False
    n := 2
    repeat
     if GOLDBACH(n) then
      halt := True;
     n := n + 2;
    until halt = True
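    For concreteness, the pseudocode could be rendered in Python along these lines. This is just a sketch of mine, not the original code: the `limit` bound is added so the search can actually be run, and the driver loop uses `not goldbach(n)` (per the correction noted above) so that it searches for a counterexample.

```python
def prime(n):
    # True if n is prime (terminates for all n)
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def goldbach(n):
    # True if 'n' is the sum of two primes, else False (terminates for all n)
    return any(prime(x) and prime(n - x) for x in range(2, n - 1))

def search(limit):
    # Search even numbers 4, 6, 8, ... up to 'limit' for a counterexample.
    # The unbounded version of this loop halts iff the conjecture is false,
    # so deciding whether it halts is deciding the Goldbach conjecture.
    n = 4  # the conjecture concerns even numbers greater than 2
    while n <= limit:
        if not goldbach(n):
            return n  # counterexample: the unbounded loop would halt here
        n += 2
    return None  # no counterexample found below the bound
```

    For example, `goldbach(28)` is True (5 + 23), and `search(1000)` returns None, as expected given how far the conjecture has been verified.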

  12. In the context of programming, when coding a loop, you give it a limit such that if it reaches that limit it will terminate. By looking at a loop, I could tell you the maximum value that the loop will reach.

    But in this case the human is not doing anything that a machine couldn't be programmed to do. In a context in which the algorithm's parameters are limited (i.e., it has a finite number of states), a machine can solve the halting problem too.

  13. Stephen, your reply is much appreciated. In particular,

    That's a great question. One I haven't asked myself (but should have) for ages.

    Somewhere, long ago, I got stuck on this Turing/Gödel/computer analogy and have not been willing to let it go. Part of the reason is that I only halfheartedly accepted the Objectivist position on volition. "Volition" never jibed with my scientific billiard-ball view of the universe.

    Also, when I was exposed to the ideas of Gödel/Turing/the halting problem in college, I was blown away by the results. Again, my billiard-ball view of the world made it hard for me to accept the fact that a formal notation powerful enough to express arithmetic is insufficient to prove ALL true statements of that notation. (Gödel's incompleteness)

    This just occurred to me:

    My argument is very similar to Penrose's (The Emperor's New Mind) on the subject of consciousness.

    1. QM is weird

    2. Consciousness is weird

    3. Therefore consciousness is based on QM

    My argument is:

    1. The Turing/Gödel/halting problem is weird

    2. Consciousness is weird

    3. Therefore consciousness is based on Turing/Gödel

  14. Stephen, your reply is much appreciated. In particular,

    What's wrong with the view I have stated here many times?

    That's a great question. One I haven't asked myself (but should have) for ages.

    Somewhere, long ago, I got stuck on this Turing/Gödel/computer analogy and have not been willing to let it go. Part of the reason is that I only halfheartedly accepted the Objectivist position on volition. "Volition" never jibed with my scientific billiard-ball view of the universe.

    Also, when I was exposed to the ideas of Gödel/Turing/the halting problem in college, I was blown away by the results. Again, my billiard-ball view of the world made it hard for me to accept the fact that a formal notation powerful enough to express arithmetic is insufficient to prove ALL true statements of that notation. (Gödel's incompleteness)

  15. Ken, you make one big assumption in your post - that the human brain is a Turing machine, an assumption which you cannot make.

    Yes, that is one of the biggies of the AI community. Is the mind just a complicated computer with more states? It is also the assumption of the Church-Turing thesis that any computable function must be computable by a Turing machine (no exceptions). That's a huge assumption, and no counterexample has so far been found to contradict it. I am not claiming this gives me a right to my "brain equals Turing machine" viewpoint, but that's where I get it from.

    ... human beings can solve the Halting Problem; in fact programmers do it every day, or how else do you think they are creating and debugging all those complicated computer programs? Clearly they can determine when a loop will or will not terminate.

    Human beings cannot solve the halting problem. Remember that the halting problem applies to all algorithms. This would mean that I could give a human any algorithm, and if they can solve the halting problem, then they can always report: "YES" the loop terminates, or "NO" the loop goes on forever.

    If this were true, I could give a human a simple loop that iterates over all the integers (starting at 0) and terminates only when Fermat's Last Theorem is FALSE. If a human told me that this loop terminates, then I have a refutation of Fermat's; if the human told me the loop goes on forever, they have established the truth of Fermat's Last Theorem.

    In other words, being able to solve the halting problem implies humans could trivially solve all currently unsolved problems in number theory.

    (Fermat's was a bad example, as it HAS been solved; but substitute any currently unsolved problem in number theory to make my example compelling again.)
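    To make that loop concrete, here is a bounded sketch in Python. The `limit` parameter is my addition so the search can actually be run; the genuine version of the argument uses the unbounded search, whose halting question is exactly the truth of the theorem.

```python
def fermat_counterexample(limit):
    # Search for a counterexample to Fermat's Last Theorem:
    # a^n + b^n == c^n with exponent n >= 3, all values bounded by 'limit'.
    # The unbounded search halts iff the theorem is false, so a solver of
    # the halting problem could settle the theorem by inspecting this loop.
    for n in range(3, limit + 1):
        for a in range(1, limit + 1):
            for b in range(a, limit + 1):
                for c in range(b + 1, limit + 1):
                    if a ** n + b ** n == c ** n:
                        return (a, b, c, n)
    return None  # no counterexample within the bound
```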

  16. What exactly do you mean by "materialism?" And while we are at it, in what particular sense do you refer to the "finite nature of consciousness?"

    My saying "finite nature of consciousness" was nothing more than saying "finite nature of beach balls". It sounds deep and profound, but is really just an application of the law of identity to consciousness. I mentioned it because of my view that the mind is a large Turing machine with a finite number of states.

    And this addresses your first question: what do I mean by materialism? I view consciousness as an emergent phenomenon of matter, where matter has strict cause/effect, billiard-ball-like properties. But this view is not Objectivism, because you cannot derive consciousness from non-consciousness.

    It was my hope that I could keep my materialistic tendencies and the axiomatic nature of consciousness, and, while we are at it, explain free will. (And all in a short 100-word post on an internet forum!)

    By the way, this points out two of my errors (mentioned by Dr. Binswanger in his lecture 'The Metaphysics of Consciousness'):

    1. equating volition with consciousness.

    2. trying to derive consciousness from non-consciousness.

  17. p.s. None of what you wrote seems at all connected to this "new force of nature" so I am not sure why you included that phrase.

    It does sound like it is coming out of left field in the context of my post. I am trying to grapple with Dr. Binswanger's 1998 lecture "The Metaphysics of Consciousness". He makes the amazing claim that volition must have the ability to move matter in the brain. He believes the only logical way for this to happen is that some new force of nature (akin to magnetism) must exist to explain it. Anyway, I am just now re-listening to this lecture series; he identified about five common errors people make when thinking about consciousness, and I managed to commit most of them in my short post (mostly stemming from my materialism).

    You are alive, you are conscious, and you have the capacity to regulate your consciousness. Just enjoy it!

    I won't be happy until tax season is over and football season starts! But you're right, it is amazing to be conscious. What a wonderful world, and how great it is that mankind has acquired the knowledge it has. It's a wonderful age we live in (post Ayn Rand).

  18. I am a materialist when it comes to the mind and consciousness. Of course, this puts me at odds each time "free will" is mentioned in Objectivism. How can a materialistic viewpoint escape the inherent predeterminism in such a view?

    Some Objectivists have "resolved" this issue by citing the axiomatic nature of free will, thus ending all further inquiry into the subject. (But existence is an axiom too, and that doesn't mean chemists and atom smashers are wasting their time.) Others have reconciled free will with materialism by proposing a new force of nature that allows consciousness to manipulate matter.

    See my previous post for how I believe free will is consistent with my materialism. For this new topic I just want to explore the halting problem, and how it is solvable for finite automata.

    The Halting Problem:

    The halting problem arises in the context of Turing machines that have infinite storage capacity. It has been proved that no general algorithm can be written that will determine whether an arbitrary algorithm will halt. (I.e., you cannot predict the algorithm's behavior. Can you smell the parallels with free will?)

    The Halting Problem for Finite Automata:

    If you have a Turing machine with a capacity for N states, then I can build a larger Turing machine that simulates the smaller one. Whenever I detect a repeated state, I know the algorithm DOESN'T halt. But if the simulated program reaches a HALT state, then I know the algorithm halts (duh!). Further (and most important), my larger computer will always terminate and give an answer.

    Since the finite nature of consciousness is not a debated fact in Objectivism, then I believe free will need not be incongruent with materialism.
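    The simulation argument can be sketched in Python for a toy deterministic machine with no input, where each state maps to a single successor (my simplification; a machine with bounded storage works the same way, just with an astronomically larger state set).

```python
def halts(transition, start, halt_state='HALT'):
    # Decide halting for a machine with finitely many states. Revisiting a
    # state proves the machine is cycling and will never halt; otherwise it
    # must reach HALT within len(transition) steps. Either way, this
    # decision procedure itself always terminates.
    seen = set()
    state = start
    while state != halt_state:
        if state in seen:
            return False  # repeated state: the machine loops forever
        seen.add(state)
        state = transition[state]
    return True  # the machine reaches the HALT state
```

    For example, `halts({'A': 'B', 'B': 'HALT'}, 'A')` is True, while `halts({'A': 'B', 'B': 'A'}, 'A')` is False, and both calls return.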

  19. (Sorry, I accidentally hit POST instead of PREVIEW. Here is the completed post)

    Is free will an axiom? Or is it just my free will that is an axiom?

    I understand the argument in OPAR that I have free will and why this must be an axiom: because any attempt to refute my own possession of free will leads to contradictions, etc. But when I conclude that somebody else has free will, do I not have to [1] identify the person as human, [2] identify myself as human, and [3] know that organisms of the same species share all basic capacities, and therefore conclude that other people have free will just like me? Thus, other people's free will is not an axiom TO ME. It is an inference I make based on a long, complex chain of reasoning.

    What I am getting at is: isn't free will a subjective experience, like pain, the color red, etc.?

    Does this not partially eliminate the quest for a new force of nature to explain the existence of free will?

    Let me elaborate...

    Imagine a consciousness that is far advanced beyond my own (advanced meaning it has the capacity to understand every atom and connection in my brain). This consciousness could observe me and predict my actions, and from its perspective I do not violate any laws of physics. From this god-like consciousness's perspective, everything I do is fully in accordance with the physical laws, and no "free will" force of nature is needed to explain my behavior.

    Could free will be an axiom only because oneself does not have the capacity to "get outside" of one's awareness and thus see the determinism that is really taking place?

    That ends the best way I can express this idea. Below is an elaboration using the halting problem.

    Halting Problem and Free Will:

    (Since I have already messed up this post, I will place this in a new topic shortly. Thank you.)