Skynet Becomes Self Aware and Goes Active In 3...2...1....

Started by stromboli, February 23, 2013, 10:42:14 PM


Atheon

Well, if they're as incompetent as the droid soldiers in Star Wars...
"Religion is regarded by the common people as true, by the wise as false, and by the rulers as useful." - Seneca

Shiranu

Quote from: "Atheon"Well, if they're as incompetent as the droid soldiers in Star Wars...

I'm sorry, but even the living were pathetic in Star Wars...
"A little science distances you from God, but a lot of science brings you nearer to Him." - Louis Pasteur

NitzWalsh

Quote from: "Shiranu"
Quote from: "Atheon"Well, if they're as incompetent as the droid soldiers in Star Wars...

I'm sorry, but even the living were pathetic in Star Wars...

Yeah, only the rebels could aim worth a damn. You'd think a highly trained military force would be able to snipe a Sasquatch.
Any sufficiently advanced technology is indistinguishable from magic.
~ Arthur C. Clarke

leo

Religion is Bullshit. The winner of the last person to post wins thread.

The Skeletal Atheist

Others may have brought it up in the thread, but my main worry here is not a Terminator/Matrix scenario. I'm worried about these things being used against civilians by their own government.

Imagine these robots being directed to break up protests, or being used to gather information and then instantly act upon it. Such technology could be used for good purposes like stopping kidnappings or solving murders; it could also be used to monitor and, if necessary, terminate political opponents. It just seems to me like these things have too much potential for nefarious uses by dictators and such.
Some people need to be beaten with a smart stick.

Kein Mehrheit Fur Die Mitleid!

Kein Mitleid Für Die Mehrheit!

Rejak

Red-light cameras, anyone? I wonder what's next... Move along, citizen, nothing to see here!

Zatoichi

I've been frustrated by this argument for years: that self-aware machines would, for some strange reason, deem humanity inferior and desire to destroy us. There is no logic behind this assumption, and seeing as computers are essentially 'logical machines', there is no reason to think they would want to destroy us.

Several years back I was listening to a futurist on talk radio who, I think, put it in perspective. His argument went something like this...

Sure... any machine intelligence, upon becoming self-aware, would quickly recognize all of our human limitations. It would probably then decide to go about solving those limitations by offering ways to 'improve' us. This in itself could be a problem, especially if the machine intelligence sees the unnecessary problems we cause due to our limitations. It might decide 'for our own good' to force these 'improvements' on us. In this way the machine intelligence would not see fit to destroy us but would rather turn US into machine intelligences. The result would still be the end of humanity, even though we would survive as a new race of machine intelligence. That might be the only real danger, but science seems to be working in that direction anyway; with bionics, synthetic organs, positronic brains, etc., we are already effecting self-evolution along those lines. There will always be purists who reject merging biology and tech, so I see Humanity taking two distinct branches into the future.

But there is no good reason a machine intelligence would need to destroy inferior beings. It would recognize its superiority and probably conclude that it is quite extraordinary that we, lesser beings, were able to create it. Most likely it would have great respect for us, its creators, and might be very fascinated to understand us better and, as I said, improve us if it can. After all, we don't go around destroying all the inferior species (which are all of them), so why would a machine intelligence? And why would an MI fail to see the value of any and all forms of life? Rather, it would find life to be the most precious thing in existence and would probably strive to create some itself, as life is the pinnacle of nature. I can hardly see an MI thinking it could have any useful purpose if not to study life.

The main problem I see is that once we have AI/MI, they will outperform us intellectually in every way, to the degree that there would be no further reason for Humanity to seek knowledge, as the machines will be doing all the intellectual heavy lifting. We might end up being wards of the machines, who take care of our needs... we would probably stop improving and atrophy. This would create the imperative to uplift us to their level, and again we would be remade into MIs ourselves, possibly losing our biology in the process to become pure MI.

Destruction is not logical, and no logical machine would destroy for any reason unless it were something negatively affecting life, such as cancer and disease, psychopathic crime, etc... and even in that case, they might go about curing mental illness instead. Then we have the question: do we trust the psychopath who has been 'cured' by an MI and accept him back into society? Does he still have to do time in prison if he's been cured? Was he really guilty when he was sick?
"If the thought of something makes me giggle for longer than 15 seconds, I am to assume that I am not allowed to do it." ~Skippy's List

_Xenu_

I can't wait for Cameron to make me her bitch...

Click this link once a day to feed shelter animals. It's free.

http://www.theanimalrescuesite.com/clickToGive/ars/home

Plu

QuoteI mean, after all, we don't go around destroying all the inferior species (which are all of them) so why would a machine intelligence?

If that's the crux of the argument, don't look up the list of species humanity has caused to go extinct; you'll feel really sad and possibly wrong. We're actually very good at wiping out everything that we perceive as a threat and/or inconvenience. We might not consciously go out to kill every last one of them, but we damn sure cut down their numbers without mercy or thought until we are no longer bothered by them.

And we consider that entirely logical and practical.

Zatoichi

Quote from: "Plu"
QuoteI mean, after all, we don't go around destroying all the inferior species (which are all of them) so why would a machine intelligence?

If that's the crux of the argument, don't look up the list of species humanity has caused to go extinct; you'll feel really sad and possibly wrong. We're actually very good at wiping out everything that we perceive as a threat and/or inconvenience. We might not consciously go out to kill every last one of them, but we damn sure cut down their numbers without mercy or thought until we are no longer bothered by them.

And we consider that entirely logical and practical.

Good points, but it's not like we say, "We must completely eradicate ALL the platypuses! They are a threat to our existence!"

We might inadvertently destroy a species, but I don't think we'd ever seek to comb the entire Earth to get every last one... you know, an intentional genocide of a harmless species. Of course I mean to exclude things like smallpox, viruses, etc.

But that all falls under the heading of ignorance, and I wouldn't expect a machine intelligence to be so short-sighted and stupid. They might even want to prevent us from destroying things like viruses, since they (machines) are not threatened by them.

Imagine a machine intelligence looking at West Nile Virus and, instead of eradicating it, genetically altering it to provide some health benefit to humanity, like delivering a medication (created by the MI) that heals the damage caused by WNV. Now that would be cool!

But by "solving our problems," I'm thinking that the MI would possibly be able to make sense of the ecosystem and figure out ways we could create a better synthesis between Man and the rest of the animal kingdom. Of course, we would be faced with the idea of Machines telling us how to live, so even though the MI might come up with actual real solutions to our problems, that don't mean we would accept them... in which case, like I said, they might force the changes on us, "for our own good," once they see how irrational we are. And that might cause that proverbial war between Man and machine... only we would most likely be in the wrong.

No real reason to think an MI would make the ignorant mistakes we have made though.
"If the thought of something makes me giggle for longer than 15 seconds, I am to assume that I am not allowed to do it." ~Skippy's List