Started by Baruch, November 18, 2017, 08:33:06 PM
Quote from: Hydra009 on November 19, 2017, 07:55:21 PM
I'm not sure. Sometimes we seem to be in complete agreement and sometimes not, yet my position has not changed. Then what has changed? My position is that our current AI technology is still in its infancy: current AIs are narrow rather than general-purpose, and well below the human level. But assuming any sustained rate of improvement, AIs are likely to reach human-level intelligence in the future.

In your post, you describe AI behavior in a video game: basically, an NPC running a predefined script with no variation ("enemy always does the same thing"). You conclude that the "intelligence" in artificial intelligence is a misnomer, that it's really just number crunching, not intelligence; therefore, AIs cannot match human intelligence because AIs do not really possess intelligence.

I'm trying to sum up your position as best I can. I'm practically quoting you verbatim in places, so if I completely misread you, I'm sorry. I'm just trying to understand where you're coming from, and I only have your posts and an imperfect reading of what you're saying to work with. So if you'll sum up your position in a few sentences, that'd help me understand tremendously.
Quote from: Hakurei Reimu on November 19, 2017, 08:42:20 PM
Part of the reason that AI accomplishments tend to be discounted as real intelligence (the AI effect) is because the AI doesn't accomplish the given task in any way resembling the way a human would accomplish the same task.
Quote from: SGOS on November 19, 2017, 08:54:20 PM
I'm talking about the processes that result in these outcomes that we call intelligence. Both can produce intelligent outcomes, but they are achieved using different pathways. That's what I mean by intelligence. It's a complex pathway that solves a problem.
Quote from: Hydra009 on November 19, 2017, 09:10:28 PM
Okay. I'm looking at this more from a results perspective. I'm more concerned about whether or not an AI can perform complex intellectual tasks than how it goes about doing it or what sort of processes are going on. So, we were probably talking past each other back there.
Quote
The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

The test was introduced by Turing in his paper, "Computing Machinery and Intelligence", while working at the University of Manchester (Turing, 1950; p. 460). It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".

Since Turing first introduced his test, it has proven to be both highly influential and widely criticised, and it has become an important concept in the philosophy of artificial intelligence.
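The text-only protocol quoted above can be sketched as a toy program: an evaluator exchanges questions with two hidden partners over a text channel and guesses which one is the machine. The participant and evaluator functions here are illustrative stand-ins of my own, not anything from Turing's paper.

```python
import random

def imitation_game(evaluator, human_reply, machine_reply, questions):
    """Run one text-only session of the imitation game.

    evaluator: given a transcript of (question, reply_a, reply_b) tuples,
               guesses which channel ("A" or "B") is the machine.
    human_reply, machine_reply: functions mapping a question to a reply.
    Returns True if the machine "passes" (the evaluator guesses wrong).
    """
    # Hide which channel is the machine behind a random assignment,
    # mirroring the separation of participants in the original setup.
    machine_channel = random.choice(["A", "B"])
    transcript = []
    for q in questions:
        reply_a = machine_reply(q) if machine_channel == "A" else human_reply(q)
        reply_b = machine_reply(q) if machine_channel == "B" else human_reply(q)
        transcript.append((q, reply_a, reply_b))
    return evaluator(transcript) != machine_channel

# Toy participants: the "machine" just echoes the question back,
# while the "human" gives an actual answer.
human = lambda q: "Forty-two, obviously."
machine = lambda q: q

# A naive evaluator: whichever channel echoed the question is the machine.
def evaluator(transcript):
    q, reply_a, reply_b = transcript[0]
    return "A" if reply_a == q else "B"

passed = imitation_game(evaluator, human, machine,
                        ["What is six times seven?"])
```

An echo bot loses against this evaluator every time, which matches the point made upthread: the test measures how closely the replies resemble a human's, not whether they are correct.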
Quote
Philosophical background

The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing Test in his 1637 Discourse on the Method when he writes:

[H]ow many different automata or moving machines can be made by the industry of man [...] For we can easily understand a machine's being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.

Here Descartes notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing Test by defining the insufficiency of appropriate linguistic response as that which separates the human from the automaton. Descartes fails to consider the possibility that future automata might be able to overcome such insufficiency, and so does not propose the Turing Test as such, even if he prefigures its conceptual framework and criterion.

Denis Diderot formulates in his Pensées philosophiques a Turing-test criterion:

"If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation."

This does not mean he agrees with this, but that it was already a common argument of materialists at that time.

According to dualism, the mind is non-physical (or, at the very least, has non-physical properties) and, therefore, cannot be explained in purely physical terms.
According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially.

In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book, Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined." (This suggestion is very similar to the Turing test, but is concerned with consciousness rather than intelligence. Moreover, it is not certain that Ayer's popular philosophical classic was familiar to Turing.) In other words, a thing is not conscious if it fails the consciousness test.
Quote from: SGOS on November 20, 2017, 09:16:37 AM
The Turing Test does not test for correct answers or replies given by the AI. That was not what Turing wanted to determine. He was only testing whether the AI's responses could be told apart from human responses. This may suggest that AI and humans use a similar process if they can pass the Turing Test.
Quote
With that, I'll go off on another tangent. Why would you want to build a computer that thought like humans? The point of computers is to not think like humans, but to capitalize on their special designs to solve specific problems with much greater speed and accuracy than humans ever could.
Quote from: Hakurei Reimu on November 20, 2017, 11:31:15 AM
A machine able to imitate a human well enough to pass as one doesn't mean that the machine thinks like a human. Being able to imitate a human necessitates that the machine be able to form some kind of theory of mind about humans. The machine may think in a completely different way from a human, but because it has an understanding of how human minds work, it can extrapolate how a human would react to a given input. Computer programmers do the exact same thing to computers all the time; because they have an understanding of how computers work, they can extrapolate how a computer (and its software) will respond to a given input. A computer that is able to pass a full Turing test has demonstrated the ability to learn how a human thinks, even though the mechanism of its thought is completely different from that of a human, and as such, has demonstrated some capacity for intelligence.
Quote from: Hakurei Reimu on November 21, 2017, 06:06:35 PM
Look, Baruch, just because you can be replaced with ELIZA doesn't mean we all can. On the other hand, while behaviorism doesn't explain everything we do, we are built on top of a stimulus/response chassis. This is why marketing works sometimes.
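Since ELIZA came up: the trick behind it was pure stimulus/response pattern matching, keyword rules that rewrite fragments of the user's input into canned replies. A minimal sketch of that idea follows; the rules here are illustrative inventions, not Weizenbaum's actual 1966 script, which used a much larger ranked-keyword table.

```python
import re

# Each rule pairs a stimulus pattern with a canned response template.
# First matching rule wins; captured text is reflected back at the user.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]
# Fallback when no stimulus matches: a content-free prompt to continue.
DEFAULT = "Please go on."

def respond(utterance):
    """Return a stimulus/response reply for one line of user input."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

So `respond("I feel ignored")` reflects back "Why do you feel ignored?", with no model of the conversation at all, which is exactly why being mistaken for ELIZA is an insult rather than a compliment to the machine.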
Quote from: Hakurei Reimu on November 21, 2017, 06:15:33 PM
With a little help from my telencephalon. In a way, those morons who think that the governments of the world are secretly run by reptiles are right. They are right only in the sense that we are all reptiloforms; we never stopped being a bit reptile.