Artificial Intelligence and reading about the Alan Turing Test

My last contribution for the day... :-) I bring this up because there was a thread on this list asking about random numbers etc. A good place to start learning about randomness and pseudorandom numbers is the man called Alan Turing and the tests he did. A genius... So after many months of reading various material (little or no code yet), I can finally see the light at the end of the tunnel. I'm still reading up on and understanding AI, so there are a few more months to go before anything tangible can happen as far as the actual game development goes. Rgds.

Do Androids Dream of Electric Sheep? I have always wondered, with these game AIs: is it possible to create characters in the computer and, instead of controlling them, just let them be... by giving them predefined characters, letting "randomness" drive their decisions, and giving these characters the ability to learn? I wonder what would happen to these AI characters after months or a year... <thots>

@Stephen, lol! The subject is a bit crazy, especially on randomness, and Hollywood has made it worse by making AI look so easy. Here's an example from game development: suppose you want your code to generate random objects doing different things at different velocities and accelerations. The objects must also appear in different positions, with different animations and different collision detection too. Those are just six "intelligences". The seed is what random number generators start from, but you have to be careful where you do the seeding. I'll know the answer in a few weeks or months... :-) Rgds.
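PS: a minimal Python sketch of what I mean by careful seeding (the names, ranges and screen size are all invented, just to illustrate):

    import random
    import time

    random.seed(time.time())  # seed ONCE at startup; re-seeding inside the
                              # game loop would make the "random" objects repeat

    def spawn_object(screen_w, screen_h):
        # Each new object gets a random position, velocity, acceleration
        # and animation cycle -- the six "intelligences" above.
        return {
            "x": random.uniform(0, screen_w),
            "y": random.uniform(0, screen_h),
            "vx": random.uniform(-5.0, 5.0),
            "vy": random.uniform(-5.0, 5.0),
            "accel": random.uniform(0.0, 1.0),
            "anim": random.randrange(4),   # one of 4 animation cycles
        }

    objects = [spawn_object(800, 600) for _ in range(10)]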

@Ndungu - actually it is possible. MIT has a yearly game programming competition called Battlecode. Participants enter in teams or individually and are given a basic framework for a strategy game (you know, like Age of Empires). Each team designs and develops strategies for their bots, which are then pitted against each other through 'N' knockout stages: quarters, semis and then finals. Pretty interesting stuff. Check out this link: http://battlecode.mit.edu/2010/info -Billy


@Billy, thanks for the link. The only way for code to know about other code and do something meaningful is through two concepts in AI: collisions and randomness. Those are the two keywords I picked from the link you provided. If one really wants to go crazy about these, then here goes:

1) Collision detection: you write code to detect such. The most basic form is the "bounding box" algorithm. In your "objects", you build a code box around the "image", or build many boxes, which is more accurate, and you code each box for actions. Anything that comes into "contact" causes further code to run; that can be a change of weapons, different fighting skills, etc. You can use the same basis for robots too, with sensors generating the input and collisions generating the actions. Of course there are many more complex detection algorithms, and you can go quite deep into this. Similarly, if I have two objects, I could create chasing code so that both objects stay engaged, using the same collision detection.

2) Randomness: as I summarised earlier.

Rgds.
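PS: a minimal bounding-box sketch in Python for anyone who wants to play with it (the box sizes, chase speed and positions are made up):

    import math

    def boxes_overlap(a, b):
        # Axis-aligned bounding-box test; a and b are (x, y, width, height).
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def chase_step(hunter, target, speed=2.0):
        # Move the hunter one step straight toward the target.
        dx, dy = target["x"] - hunter["x"], target["y"] - hunter["y"]
        dist = math.hypot(dx, dy)
        if dist > 0:
            hunter["x"] += speed * dx / dist
            hunter["y"] += speed * dy / dist

    hunter = {"x": 0.0, "y": 0.0}
    target = {"x": 100.0, "y": 50.0}
    while not boxes_overlap((hunter["x"], hunter["y"], 10, 10),
                            (target["x"], target["y"], 10, 10)):
        chase_step(hunter, target)
    # "Contact" -- this is where the further code would run:
    # change of weapons, different fighting skills, etc.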

Or have a "searchlight" or "torch", if you have aggressive characters. This can search for opponents or other objects on the screen and head in their direction to "interact" or even destroy... Pixel-by-pixel search can be achieved through motion detection algorithms, though this assumes that the other objects are moving and not perfectly still...

OK, now I really wish I was 20 years younger with someone paying the bills; then we could have built such systems... :-) But I have a question on motion detection. This scenario applies to robots and AI vehicles that can collect input data from sensors such as thermal or infrared. Even once a target is sighted, there are hundreds if not thousands of decisions to be made. As an example: suppose you have an empty field and your AI robot is constantly scanning for moving objects. OK, a moving object is detected and is now stationary. What next?

a) Calculate position X, Y, Z (grid based?)
b) Calculate threat level 1, 2, 3 (escalation level?)
c) Calculate distance (how much time and speed are needed to engage?)
d) Calculate any obstructions (this alone could be hundreds of decisions, e.g. stones, stairs, etc.)
e) What action to take on successful interception?

That decision making is too slow to be used in gaming scenarios. If a motion detection algorithm is used in a game, how is it going to work? Will the code scan all the pixels? Rgds.
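PS: one shortcut I've read about is frame differencing on a coarse grid, so the code never compares most pixels one by one. A rough Python sketch, assuming the frames arrive as nested lists of grayscale values (0-255):

    def moving_cells(prev_frame, curr_frame, cell=8, threshold=30):
        # Compare the average brightness of coarse cells between two frames
        # instead of every individual pixel -- far fewer decisions per frame.
        hits = []
        h, w = len(curr_frame), len(curr_frame[0])
        for r in range(0, h, cell):
            for c in range(0, w, cell):
                ys = range(r, min(r + cell, h))
                xs = range(c, min(c + cell, w))
                n = len(ys) * len(xs)
                prev_sum = sum(prev_frame[y][x] for y in ys for x in xs)
                curr_sum = sum(curr_frame[y][x] for y in ys for x in xs)
                if abs(curr_sum - prev_sum) / n > threshold:
                    hits.append((r // cell, c // cell))  # this cell changed
        return hits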

ok, moving object detected and is stationary. What next? A video camera can be used, with an open source program like this one: http://dorgem.sourceforge.net/ The focal point of the lens can be used for depth perception (perspective). Then you can map the screen pixels such that the extreme top left is (x, y) = (0, 0), with Z being a variable, etc.

a) Calculate position X, Y, Z (grid based)
b) Approach target
c) Re-detect target, or update target
d) Approach target (by avoiding obstacles)
e) Re-detect target
f) Approach target (it's a loop)
g) Identify target
h) Determine course of action

It's a dumb system, because if the target becomes stationary, the code will deem it part of the background... unless one has a memory bank that 'remembers' the bearing and shape of the scanned object, even if the pursuing object changes direction to avoid obstacles... Something like the loop sketched below.
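A rough Python sketch of that loop, with the 'memory bank' fix for stationary targets (sense() and move_toward() are hypothetical stand-ins for the camera and drive code):

    def track_target(sense, move_toward, max_steps=1000):
        # sense() returns the (x, y, z) of a MOVING object, or None if
        # nothing is moving; move_toward(pos) returns True on interception.
        last_seen = None                  # the one-slot "memory bank"
        for _ in range(max_steps):
            pos = sense()
            if pos is not None:
                last_seen = pos           # target moved: update the memory
            if last_seen is None:
                continue                  # nothing seen yet; keep scanning
            if move_toward(last_seen):    # head for the last known position
                return last_seen          # intercepted
        return None                       # gave up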

The camera and lens remove many lines of code, but there is still a problem. Even if an AI vehicle calculated the parameters necessary to move within microseconds, the terrain would add many problems, so the vehicle can only move a few rolls at a time. Remember the Mars rover? Now if it were a walking robot, that would be crazy! The first thing would be to stabilise the movements, possibly with an X-Y-Z level meter in its belly... which would limit it to a few movements per second. Still too slow. :-)

That's why Japan still has a clumsy walking robot... Wheels are usually best... At the beginning of this year I visited the "Fablab" at the University of Nairobi. These guys and other colleges have an annual robot intelligence competition that gives the robots specific tasks every year, and these little critters can do amazing things... (no external control). Most guys use "infrared" and "ultrasonic" detectors for vision, but a visual-spectrum camera is also a possibility... So it is possible (but wheels are the preferred mode of movement).

@Stephen, so the controlled-environment robots with wheels or tracks use collision detection AI? With the correct sensors and a drive train for reverse, turn and forward, it becomes easy: BUMP and turn. Very, very few decisions to be made on the programming side, thus small chips. (Corrections welcome.) And here's worse stuff by Hollywood, though Terminator is my favourite: http://www.pcworld.com/article/200229/robocop_ran_dos.html Rgds.
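PS: the whole bump-and-turn "brain" really is tiny. A sketch in Python, where bumper_pressed(), drive() and turn() are stand-ins for whatever the robot's hardware API actually provides:

    import random

    def bump_and_turn_step(bumper_pressed, drive, turn):
        # One iteration of the classic controller: roll forward until the
        # bump sensor fires, then back off and pick a new random heading.
        if bumper_pressed():
            drive(-1.0, 0.5)              # reverse at full power for 0.5 s
            turn(random.uniform(90, 180)) # rotate a random number of degrees
        else:
            drive(1.0, 0.1)               # keep rolling forward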

This year's competition went like this:

1. Switch the robot on; the robot should find the white line or track and stay within the track.
2. Once the end of the track is reached, the robot comes across a tray full of sponges and stops.
3. The robot should identify the red sponges (from an assortment of white, blue, yellow, etc.) on the rack and, using an arm, reach out and pick the 'red' sponges only.
4. Then the robot should go back along the track and drop the red sponges on a tray waiting at the other end.
5. Then the robot should go back and pick out red sponges again.

That was how the competition went, and as far as I recall, a private university won.

== The winner is whoever has the most red sponges.

The trick is that your robot's arm should not be clumsy, e.g. upset the tray of sponges or drop the sponges while transporting them, and it must also distinguish colours... <this was done in kenya :-) >

Haiya Stephen, was this done at uni level? Since I'm a code learner (but a semiconductor person from many years ago), I'll challenge other coders to write out the simple code for the robot control in your posting below:

- Colour detection = different wavelength = different waveform = different voltage. 50% of the job is done by the sensors.
- Collision detection = flat object = flat waveform = halt drive motors. The sensor does the work again!
- Identify sponge = colour detection waveform.
- Pick sponge = look at the sensor waveform; each object has different pulse timings = run action.

Where is the AI in this? Wooii... my words here are too harsh. Did the students program the controller on waveforms, or did they actually build intelligent code to guide the robot? Rgds.
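PS: to show how small the "waveform" version could be, a rough Python sketch; the voltage band for red is invented, and pick()/skip() are stand-ins for the arm routines:

    RED_BAND = (1.8, 2.4)   # made-up sensor voltages for "red"; a real robot
                            # would be calibrated against its own sensor

    def is_red(sensor_volts):
        # Colour detection as described above: the sensor turns wavelength
        # into a voltage, and the "AI" is just a threshold compare.
        low, high = RED_BAND
        return low <= sensor_volts <= high

    def sort_sponges(colour_volt_readings, pick, skip):
        # One voltage reading per sponge on the rack; pick only the red ones.
        for volts in colour_volt_readings:
            if is_red(volts):
                pick()
            else:
                skip()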

I think some intelligence is required... :-) Then again, one could just mechanise the whole process... It comes down to how each of them decided to do the thing; some robots were idiots, others 'seemed' smart (in that, can your code improvise when the parameters change?).

I think local unis need to get into programming real AI and not robot kits that make it pretty easy, which is possibly the reason why we are still not producing top programmers. The MIT challenge in the link Billy gave makes more sense in these times. Back to AI: it's a concept worth reading about, and it plays an important part in the gaming industry. If you get it wrong, your development becomes a non-starter. Rgds.
Participants (3): aki, Billy, ndungu stephen