Friday, May 27, 2011

I'm sorry, Dave, I can't do that.

Thoughts concerning free will, robots, and the application of both with regard to combat and morality...

Philosophers have been struggling with the question of free will for about as long as there have been philosophers, so I will have to take a rather philosophical position on this topic. The best I can say about free will, and whether humans have it (or whether machines can), is that from my perspective it sure as hell seems like I have it. I can decide to get up when the alarm clock goes off or I can choose to hit that snooze button. I can pick what meal I would like to eat. But if you take a step back, it is easy to see a lot of outside influences impinging on our choices. There are big things, like identifying with one religion or another (an influence based mostly on where and to whom somebody is born, and the same could be said of political stances or languages spoken), and little things, like my girlfriend using all the milk for her smoothie so that instead of cereal for breakfast I have some toast instead. I do still seem to have a choice, but those choices are diminished or expanded depending on a whole bunch of other criteria that are not necessarily up to me.

A machine, I believe, would be subject to the same sorts of influences. At a high level of functionality a machine's free will would appear much like our own, at least in the sense that it would be allowed to choose between the options available to it at the time a choice was needed. At a low level of functionality machines clearly have little choice, and at times this is true of humans as well. If one is asked to choose between morally ambiguous outcomes, such as sacrificing one for the good of the many, how do we define responsibility and choice of action? Is one choice better than the other? Do we prefer logic and utility over a possible range of "gut" feelings and intuitions? Is one to be held responsible for making a choice that seems right but ultimately turns out, in hindsight, to be wrong? How about when no choice is made between two equally bad outcomes? Is a decision not to make a decision grounds enough to hold somebody accountable? Furthermore, will we be quicker to judge an artificial system harshly than we would a human? Research suggests that we are indeed more likely to blame a machine for malfunctioning than we are a person. I don't think that this is necessarily fair, but it is a reality. As machines gain more autonomy and begin to show signs of free will, I can only assume that this trend will increase.

These sorts of questions unfortunately lead to circular arguments, which is probably why there is still no solid answer to whether free will exists at all. All I can say is that, given the overwhelming abundance of evidence I personally have access to (my own mind and memories), it sure seems like I have free will.

As for machines, there is a threshold out there beyond which they will have something approximating what we all think of as free will. When that point is reached, we will really need to think about what it means to be sentient beyond selecting a tasty breakfast. And of course that is the central idea of this course and this discourse. We need to get ahead of the game before these sorts of machines become a reality. What I mean by this is that since we will be the creators, we have a duty to try to make these machines as moral and ethical as possible from the ground up.

This brings me to the main point I would like to make about machines built for the purpose of combat. To me it seems irrelevant whether we can produce war machines that are able to understand and follow the laws of armed conflict, or specific rules of engagement. I am sure that we can, but that isn't the question I want to ask. I want to ask whether we should. At no point in time has mankind been closer to wiping itself from the face of the planet than in the last 70 years. Increasing that possibility with autonomous, gun-toting robots seems deeply counterintuitive. If and when we produce a machine capable of thinking for itself, I truly want its answer, when asked to kill, to be not "yes sir" but "I'm sorry, Dave, I can't do that."

Cheers

'sup bitches...

...yeah, I know there is nobody here who qualifies as "bitches." Whatever.

I'm working on a meaningful addition for this forum, but for now here's a bit of fluff.

I've been off the grid for a while now because I've been in school. Learning some good stuff. Getting good grades too. *pats own back a bit* In addition to that, I now have a steady job. While it doesn't pay all that much, it is meaningful, and that makes a big difference. It's also not involved with the modern-day slavers I mentioned last time. I'm sure I will talk about that more soon, but for now it's not that important. So, short and sweet: expect more in a few days (that is, if you even give a deuce).

Cheers