On May 27 2014 09:23 L3gendary wrote:
Like I mentioned, the cells in your body are constantly dying and new ones are being formed, so you are not composed of all the same particles from day to day. Also, it's not possible for your body to be scanned completely without simultaneously destroying it, due to the Heisenberg uncertainty principle, which is why cloning is impossible and why a lot of these hypothetical teleportation scenarios assume that the original is destroyed. See this: http://en.wikipedia.org/wiki/No-cloning_theorem
A theorem saying that cloning is impossible is irrelevant to the question. Even if cloning is impossible under natural law (nomologically impossible), it is not absolutely impossible, because it is conceivable without contradiction (logically possible). As long as something is conceivable, we can use it in an argument: so long as reason itself does not forbid it, it can be plugged into a rational system as a premise.
Another example of this kind of thing is a Frankfurt counterexample to the claim that classical free will is necessary for moral responsibility. The claim is that in order to be held morally responsible for an action, one must have had the ability to do otherwise (classical free will). Frankfurt's counterexample goes something like this:
An evil scientist wants Fred to rob a bank, so he installs a control device in Fred's brain. If Fred chooses not to rob the bank, the device immediately activates and makes him choose to rob it; if Fred chooses to rob the bank of his own accord, the device never activates. In the case where Fred robs the bank of his own accord, it seems he is morally responsible; in the case where the device forces the choice, it seems he is not. Either way, Fred will rob the bank, so he lacks the ability to do otherwise (classical free will). Yet we still hold him responsible when he does it on his own. That is to say, classical free will is not necessary for moral responsibility.
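If it helps, the logical shape of the case is easy to see in code. This is a purely illustrative sketch (the names and the two-option model of Fred's choice are my own invention, not anything from Frankfurt):

```python
# Toy model of a Frankfurt case: a "counterfactual intervener" that only
# fires when the agent is about to choose the unwanted option.

def frankfurt_case(freds_own_choice: str) -> tuple[str, bool]:
    """Return (outcome, device_fired) given what Fred would choose on his own."""
    if freds_own_choice == "rob":
        return "rob", False   # device stays dormant; Fred acts on his own
    return "rob", True        # device fires and forces the choice

for choice in ("rob", "refrain"):
    outcome, forced = frankfurt_case(choice)
    print(f"Fred inclines to {choice!r} -> outcome {outcome!r}, device fired: {forced}")

# The outcome is "rob" on every branch, so Fred cannot do otherwise;
# yet intuitively he's responsible only on the branch where the device never fired.
```

The point of the sketch is just that the outcome is fixed across every branch, while our intuitions about responsibility track which branch actually ran.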
Now, Frankfurt cases can get a little hairy, and it's not like you can't argue with them. My point, however, is that even though something like a brain-control device is probably impossible (let's just say it is impossible by natural law), that doesn't mean you can't use it as an example in a rational argument. It's just a kind of "what if" that helps us get at the ideas a little more deeply. Does that make sense?