Thursday, June 21, 2012

Economic Problem Solution

Let's get rid of money!!  How, you ask?


Robot slaves.

First, I'll say I do not take issue with the term "slave" in this case.  These robots are meant to act like the machines they are, and are never meant to become sentient, unlike the robots in so many science-fiction cautionary tales.  "Slave" in the same way our computers and cell phones are slaves.  So, ok, tools, I guess.


The idea is: we create robots to do all of the work so that we don't have to.  That way, we can have food, products, free time, health... it would be great!  Then humans could spend all of their time on whatever they want to spend it on.  This does not mean that we could not also contribute to the work-system that would be put into place; the thing is, we would only be contributing because it would make us happy to, say, own and run a farm, or to maintain the robots.  There would be no need for government or money.  It would be for the pure joy of hard work, which is essential to the human experience.  We could get out from under the thumb of Big Banks, Big Pharma, Big Food, Big Oil, Big Corporations.  We could feed ALL of the hungry people in the world.  We could house all of the homeless.  We could travel anywhere, at any time.  We could worry about being good people instead of being assholes for the cash.  Fabulous.

Moreover, the environment would no longer be an issue.  Without oil-company money constantly squashing earth-friendly energy, scientists would be free to work on and develop those energy sources freely.  There would be no issues with funding or exposure.  We would just implement the most sustainable lifestyles at no cost to anyone.

And since we would no longer be constantly trying to gain resources, and no government means no more need to reinforce power structures, there would be no more war, no more civil rights abuses, no more genocide.

And on those notes, social classes would be meaningless.


Now, as we have all been warned by the science-fiction thought experiments that have been presented to us, the issue of robots becoming self-aware, empathetic, feeling, or thinking needs to be addressed before we take this idea seriously.

The Asimov Laws of Robotics

These laws are presented assuming that they can be programmed into the robots as we understand them in English, and that robots are not used as military weapons.

0.  A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2.  A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3.  A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
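The ordering matters: each law yields to the ones numbered before it.  Just as a toy illustration of that precedence, here is a sketch of a priority check in Python.  Every name in it (Action, harms_humanity, and so on) is made up for this example, and, as the next section argues, the hard part is the definitions: no boolean flag actually captures "harm" or "human."

```python
# Hypothetical sketch of the Laws as a precedence check.
# Assumes each proposed action can somehow be pre-scored into
# these flags -- which is exactly the hard, unsolved part.

from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool   # Zeroth Law concern
    harms_a_human: bool    # First Law concern
    is_human_order: bool   # Second Law concern
    endangers_robot: bool  # Third Law concern

def permitted(action: Action) -> bool:
    """Evaluate an action against the Laws in priority order:
    each lower-numbered law overrides every law below it."""
    if action.harms_humanity:   # Law 0: never harm humanity
        return False
    if action.harms_a_human:    # Law 1: never harm a human
        return False
    if action.is_human_order:   # Law 2: obey humans, if Laws 0-1 allow
        return True
    if action.endangers_robot:  # Law 3: self-preservation comes last
        return False
    return True

# An order that would injure a human is refused...
print(permitted(Action(False, True, True, False)))  # False
# ...but an order outranks the robot's self-preservation.
print(permitted(Action(False, False, True, True)))  # True
```

The point of the sketch is only that the Laws form a strict hierarchy, not that obedience could ever be reduced to four flags.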

Problems with the Laws


The language of the laws requires very specific definitions.  "Human," for example, needs to apply to every single Homo sapiens on the planet.  Perhaps some sort of DNA analysis could be the indicator.  This is assuming that the androids we come up with are nothing like the Cylons, and are very clearly machines.  Robots must maintain a very clear distinction between robots and humans and never confuse the two categories.  Period.  Why we would make sentient robots in the first place is beyond me.  Just because you can does not mean you should.

There have been many other Laws presented; these are the most famous.  We should be open to others, and settle on a final set when we create the robots.  The greatest minds of the field should work on this, provided they treat the tools here as entirely egalitarian and without any learning or empathy loopholes.  Plus, we probably won't get it right the first time, so it'll be a learning process.  I just don't want the next great war (or the one after) to be humans against machines.  We would probably lose.


Problems with the Solution


Similar to people's issues with communism, a potential problem with this solution is that people will become lazy because they have no reason to work hard.  Perhaps this will happen with some people, but maybe that's just how those people will choose to live their lives.  Others will be adventurers and travelers, philosophers, doctors, artists and writers.  In fact, people would have the ability to do whatever they loved at any time.  I do not believe the meaning of life would be lost; it would just change.  Maybe.  I guess that depends on what you think the meaning of life is for you.  Problem solved.

Well, ok, I'm not naive enough to think that those initially in charge of creation and distribution would have equal power to those with little or no involvement.  I suppose this issue would come down to the "divine beauty of the human heart," as it were.  Eventually, ideally, this would no longer be an issue because some sort of formula would come into place to distribute robots as needed.  We would also need some way of identifying and closely watching the sociopaths of society, so that no one would be able to change any programming in favor of total domination; those people would have no connection to the robots beyond the layman's interaction.  So, as long as that whole thing worked out, problem solved.



So, you're welcome.  We'll see if the politicians will implement this...  The jerks.
