The Random Test
By ai-depot | July 9, 2003
Any particular test of intelligence is destined to be biased towards certain abilities. This article searches for a new method to establish sentience by means of randomness instead.
Written by Marco van de Wijdeven.
This essay is about how one could determine sentience (the quality or state of being sentient; consciousness) in an AI by means of an experimental method. The best-known test of whether an AI is actually sentient is the one devised by Alan Turing, a.k.a. the Turing Test. This test is based on a simple assumption: an AI can be called sentient if it is able to converse with a human and the human cannot tell whether they are talking to an AI or to another human. In this article I will discuss Turing's Test, why it is flawed in some respects, and how a better test can be built on a different hypothesis. I will then describe a possible test to determine sentience based on this new hypothesis.
Turing's Test
This test fails to achieve its goal for various reasons. The first problem is the assumption on which the test is based. Sentience is defined by the capacity for self-reflection: "I think, therefore I am." The test assumes that to be able to reflect on your own actions you need a certain quality of language, in order to name and understand the abstract construct that is self-reflection. This works because sentience and self-reflection have a one-to-one relation. However, measuring self-reflection by looking at language level is debatable. For all we know, dolphins might have self-reflection, yet their language, while extensive, is not on the same level as a human's.
The second problem is that either of the two possible results (pass or fail) can have two causes. If the AI passes, is this because of sentience in the AI or because of a lack of skill in the human? The same goes if the AI fails: is the AI really not sentient, or did the human not know for sure and simply guess? The test is run with multiple humans, so the effect of this problem is only minor. Yet the human factor still makes the test uncertain and open to much debate.
The test fails not only because of the human factor but also because of its method of measurement. We therefore need to move away from the 'sentience equals self-reflection' argument and try to find another means of determining sentience. I believe I have found one in the concept of 'randomness'.
Randomness as a sentience measurement tool
The assumption I make here is that only a sentient being is capable of understanding the concept of randomness (without a governing design, method, or purpose; unsystematically). Hence you get the same one-to-one relation mentioned above. I think I can make this assumption for two reasons. The first is that the binary language of the computer doesn't allow for the random concept. Every programmer knows that obtaining a number that is truly random, rather than derived from some basic input, is impossible. Only an AI that surpasses its basic programming and achieves a higher level of being would be able to make a truly random choice.
The second reason is that the random concept cannot be acquired through learning, training, conditioning or programming. Hence there is no way to produce true random results by using a trick or pre-established knowledge. The only way you can make a true random choice is to make a decision based on nothing.
My definition of non-sentient intelligence is as follows:
The ability to perform one or more different action(s) in response to determined stimuli without interference from a third party.
My definition of sentient intelligence is:
The ability to perform one or more different action(s) without interference from a third party or any form of determined stimuli.
In short, sentience is the ability to do something for no reason at all, which makes the random concept the perfect way to measure it.
The Random Test
Because the AI to be tested for sentience is still a computer, this part will be relatively easy. All we have to do is present the AI with a selection of X numbers and let it pick a number Y times. This gives a string of numbers as output. If the AI uses a preprogrammed random() method, then at some point the string of numbers will start repeating itself.
Shrewder random() methods will require a very high Y before the repetition becomes clear. It doesn't even have to be the whole string that repeats; certain key numbers may keep showing up at regular intervals. The real test is to discover a pattern. If a pattern (besides the normal distribution) is found, that is proof that the randomizing is hard-coded rather than spontaneous. The test needs to be run multiple times with varying X. If a pattern fails to show up in one test, that doesn't mean one isn't there. Extra tests with other values of X might reveal a pattern more easily. The tests can also be compared to each other, which might reveal the pattern almost immediately.
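The procedure above can be sketched in a few lines. This is only an illustration under stated assumptions: the candidate "AI" here is a deliberately weak stand-in (a tiny linear congruential generator), and the pattern check looks only for the simplest kind of pattern, a whole-string repetition period; both function names are invented for this sketch.

```python
def lcg_picks(seed, x, y):
    # Stand-in "candidate AI": a weak linear congruential generator
    # picking from the selection 0..x-1, a total of y times.
    state = seed
    picks = []
    for _ in range(y):
        state = (5 * state + 3) % 16  # tiny modulus, so the cycle shows quickly
        picks.append(state % x)
    return picks

def smallest_period(seq):
    # Smallest p such that the string repeats with period p,
    # or None if no repetition is visible at this length.
    for p in range(1, len(seq)):
        if all(seq[i] == seq[i + p] for i in range(len(seq) - p)):
            return p
    return None

picks = lcg_picks(seed=1, x=10, y=64)
print(smallest_period(picks))  # the hard-coded cycle is exposed: prints 16
```

A genuinely patternless source would make smallest_period come back empty no matter how large Y grows, which is exactly the "breakthrough" outcome the test is looking for.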
Of course there is always the possibility that an AI is in fact sentient, gets bored with the task, and stops completely. Yet stopping the test automatically counts as a failure, because stopping is very easy to mimic in a normal program. Fortunately, the chance of this occurring before a possible pattern is detected is, in my opinion, slim. After a result is determined, a human programmer can still examine the program's code. This is extra insurance to verify that a negative result really is negative. It cannot be used to confirm a positive result, because nobody knows what the code of a sentient AI would look like. Therefore examining the code is not part of the test, nor should it be.
Conclusion
The big advantage the Random Test has over the Turing Test is that it minimizes human involvement. A simple program can be constructed to check a candidate's output for specific patterns. Nor can a result mean two different things: if a repeating pattern is found, the AI is not sentient; if not, a breakthrough has been achieved.
-Marco "Ashiran" van de Wijdeven
April 24th, 2007 at 12:08 am
Very interesting idea. So the idea boils down to, if you can be random, then you are intelligent?
So if I used a random number generator that used as its seed the ambient temperature at one of 3000 locations throughout the world, plus the 30th character on a random web page from Google, would it be more intelligent than a plain random generator?
Maybe I'm misunderstanding, but to me this seems rather … unintelligent :)
April 26th, 2007 at 10:00 am
Interesting idea. WRT the RNG, the point is narrowly missed. Sentience would be closer to creating the random number generator without knowing how when you began. The salient point, I suggest, is the observation of something being done for no reason and without a predetermined external stimulus. So, a computer just sits and waits; it is event/purpose driven. A human, while it may respond to inputs and seek to achieve goals (food, shelter, etc.), is not constrained in such a way.
To my knowledge, a computer requires a set of a priori rules to accomplish its purpose. It cannot rewrite those rules independently (for example: being an ATM is boring, tomorrow I'll do something different… hmm, but what?). A computer can process information; it doesn't "feel" any particular way about doing the task. We, on the other hand, can "feel" and act on that feeling (a passionate, non-rational response), thus moving into the random element of self-expression, which is arguably the basis/foundation of human thinking.
Don't know if that makes sense… it's just how I feel about it ;-)
May 23rd, 2007 at 12:21 pm
I won’t hide the fact that I’m sorry, but that I don’t agree with your test. The simple fact as to why it won’t work is because it IS a randomness test. We as humans may think that we’re all powerful when we seem to create random numbers, but in fact we don’t, and we can’t. The human subconscious will always make one thing seem better than the others, and you will unconsciously choose this or that decision, this or that number. When you think about it, we can’t help it like we can’t help having experience and unconsciously taking into account all of those little details which shape our ‘random’ choices. This is the same if you asked to give a really long string of numbers: almost no human would be able to do it without stopping, and that is because before saying something, we visualize the string in our brains, or a part of it, and then say it. While being asked to actually choose random numbers we’ll want to make the string look random; you’ll rarely find 2 same numbers back to back because we don’t think that’s random. The fact that we try to make the string look random takes all the randomness out of the exercise. Humans will be able to say a shorter or longer string depending on their capacities like memorizing which aren’t random.
In short, when even humans can’t say something random, how can you justify testing machines for randomness?
June 13th, 2007 at 2:38 am
Well, I don’t agree with it either. Atleast not anymore. :)
I wrote this back in 2003, in a hurry, for the contest at the time. Rereading it now shows that the reasoning and the test are flawed.
I still believe, however, that being able to understand what Random means as a concept is a sign of sentience, even though no computer or human would be able to actually perform it, as any choice made will always be based on something.
Of course, testing abstract understanding of a specific concept goes back into the field Turing's test operates in. So in that regard it really isn't an improvement at all.
October 15th, 2007 at 12:19 pm
I was having a discussion in which I defended the position that humans do not behave in a random way, and the other user told me I was wrong. Would you check out the debate and maybe tell me what you think about it?
http://www.ai-forum.org/topic.asp?forum_id=1&topic_id=30097
PS: I’m sorry if I’m doing some kind of publicity to the other site but the post is just too long to copy paste.
November 9th, 2007 at 1:43 pm
I have read that researchers sometimes fake data manually in order to save work, but that this can be detected because of the *lack* of randomness.
Here is an article that discusses correlation coefficients in real data sets versus correlation coefficients in fabricated data sets.
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=212490
Basically, humans cannot do this.
I suspect that animals with no understanding of what is going on would be able to generate better pseudorandom sequences than humans.
December 29th, 2007 at 10:02 pm
…
In Turing Test Two, two players A and B are again questioned by a human interrogator C. Before A gives his answer (labeled aa) to a question, he is also required to guess how the other player B will answer the same question; this guess is labeled ab. Similarly, B gives her answer (labeled bb) and her guess of A's answer, ba. The answers aa and ba are grouped together as group a, and similarly bb and ab are grouped together as group b. The interrogator is first given the answers as two separate groups, with only the group labels (a and b) and without the individual labels (aa, ab, ba and bb). If C cannot tell correctly which of aa and ba is from player A and which is from player B, B gets a score of one. If C cannot tell which of bb and ab is from player B and which is from player A, A gets a score of one. All answers (with the individual labels) are then made available to all parties (A, B and C), and the game continues. At the end of the game, the player who scored more is considered to have won the game and to be more "intelligent".
…
http://turing-test-two.com/ttt/TTT.pdf
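The scoring bookkeeping described in that excerpt can be sketched as follows. This is a hypothetical helper written only to restate the rule, not code from the linked paper; the function and argument names are invented.

```python
def score_round(c_identified_group_a, c_identified_group_b, scores):
    # One round of "Turing Test Two" scoring. Group a holds A's answer (aa)
    # and B's guess of it (ba); if interrogator C cannot tell them apart,
    # B scores a point. Symmetrically for group b (bb and ab), A scores.
    if not c_identified_group_a:
        scores["B"] += 1
    if not c_identified_group_b:
        scores["A"] += 1
    return scores

scores = {"A": 0, "B": 0}
score_round(c_identified_group_a=False, c_identified_group_b=True, scores=scores)
print(scores)  # {'A': 0, 'B': 1}
```

The symmetry is the interesting design choice: each player is rewarded for modeling the other well enough to fool the interrogator, rather than for fooling the interrogator directly.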