Wednesday, 14 December 2011

REFLEX MANAGEMENT AND THE SUBSUMPTION ARCHITECTURE

~ A rap on an iterative idea-chain started by Rodney Brooks ~

The brain must allow not only the setting up of reflexes, but also their modification and control.

Initially, speech and vocalisation seem reflexive in the infant, but as development continues speech can come to express anything at all. So it is more than a reflex: there is a finite number of reflexes, yet speech is unique nearly every time. Similarly, many other intelligent behaviours are tailored to their context. How does this happen?

Is a reflex like "say something" a frame or placeholder, into whose blank space is inserted the result of speech design, originated in a separate process that perhaps also consists of interacting reflexes?

There must be a level of reflex in behaviour, but also a level of reflex management, and these two levels interact. There are two possibilities: a top-down control of reflexes, or an emergent control in which reflexes modify other reflexes and intelligent behaviour emerges from the melee. It was in Brooks' subsumption architecture that the small, insect-like behaviours I call reflexes were proposed, and robot design right now hasn't got that far into reflex management... but it's to be hoped that the extra "rational" layers of reflex control and management will also be built in, or allowed to emerge.

How might a robot 'know' about all its possible reflexes and build this knowledge into its planning? This is suggested to me by the high connectivity of the brain, where regions are multiply joined to other regions. In the top-down model the master controller might need to observe, study and predict its own behaviour as closely as it does the external world. If it has planning capability then it plans for its goals at the first level, but at a second level it also needs to plan for what its reflexes will cause.

This looks like a kind of proto self-knowledge. The simplest reflex bypasses the frontal cortex or the brain altogether, like a knee jerk, but some reflexes may be initiated by higher functions, and that's what the tangled cortex is doing. What is the difference then between intelligent behaviour and a reflex? There may be many shades of grey.

The most necessary starting point for a robot may be to try lots of randomised behaviours and observe which cause successful change in the environment. If it has a picture in mind of a goal, then it would need to look through its memories of which reflexes have given which results. Then it might know to try one, or a combination of a few, whose results may take it one step closer to the environment it is aiming for.

If the movements of the vocal system start as reflexes learned through mirror neurons from mother giving baby talk, then real speech needs this kind of reflex management. There is basic emotional content in speech, and then there is the semantics of actual words. This can't be a single reflex because it's different every time. The brain must have a meta-reflexive level that emerges from learning. I find it more believable that this doesn't work through a totally centralised symbol-and-logic system, the old paradigm, but that it emerges from reflex management, which is reflexes modifying reflexes.

Children badly need mirror neurons (in themselves and in their caregivers) for learning. This is not widely understood yet; I have observed it many times, and the actual extent of mirroring is quite surprising. I believe that in children this approaches the level of seeming telepathy, because I can remember much of my childhood and I could read the emotions and intentions of adults very clearly.

Reflexes (as actions) stimulate change in the physical environment, which is observed by the mind of a child or learning entity. But reflexes (as communication) also provoke changes in the minds of other humans, and the consequences of these are read partly by using mirror neurons to empathise and interpret emotions and states of mind. The amount of learning is huge, over decades, and given the slower pace of robot development this may mean the first learning robots need many years of childhood.

There may also be gradations of granularity. The reflex management function assembles composite behaviours out of granular collections of reflexes. This means that reflexes are aggregated into more complex behaviours, and therefore that something must be there from which this planning emerges; I assume this happens mainly in the frontal cortex. Maybe reflex management can build new reflexes out of collections of old ones. In the generation of speech, the finest granularity might be the utterance of a single phoneme; at other times a speech reflex may be larger, such as "I'm hungry mummy". The reflex management system assembles phonemes into utterances by modifying and combining the lowest-level reflexes, but it also mashes these up with the emotional content of speech: intonation, breath and body language are added into the final act of communication.

Concept

reflexes + reflex management = intelligent behaviour

So what if reflex management were itself merely a collection of reflexes? Some reflexes would have executive control, or the power to hack and gain control over more primitive reflexes, like competing code in Core War. So what we call intelligence seems to emerge from the collaboration of a swarm of reflexes. This makes a reflex something like a Minskyan agent, and a society of reflexes emerges. This is an exciting synthesis that has been hinted at before.

Consider this famous quotation: "A clever man knows the right thing to say but a wise man knows whether or not to say it." Here wisdom corresponds to speech-based reflex control: by allowing some agents a power of veto over lesser agents, behaviour is enriched.

The interaction of reflexes with each other could indeed get complex and tangled, but a small start can be made. Consider the following examples:

Define:

* reflex A : run forward until near object and outstretch arms to pick it up, then grasp
* reflex B: inhibit current reflex and freeze

behaviour 1: see toy, initiate A. result: "now I have the toy". Conclusion: this reflex is useful sometimes.

behaviour 2: see mummy, initiate A... then initiate B. result: run to mummy and outstretch arms, then halt. Mummy also outstretches arms and hugs... consequence: "I just learned to ask mummy for a hug". Conclusion: mark this as a new reflex to use again.

or this one

Define:

* reflex C : grasp object in front of me and raise to mouth to eat
* reflex B: inhibit current reflex and freeze

behaviour 3: see slice of banana, initiate C.
result: "I'm eating the banana"

behaviour 4: see flower in garden, initiate C...then initiate B.
result: "I can smell the flower now it's under my nose".
consequence :"I learned to smell something"
conclusion: mark this as a new reflex to use again.

So it would be good for some reflexes to decompose into other behaviours when they are truncated. This allows the emergence of new behaviours when a given reflex is initiated but then inhibited before completion. Brain architecture may thus achieve a combinatorial explosion of possible behaviours from a small starting set of reflexes. Axioms breed theorems.
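
To make this concrete, here is a toy sketch in Python (my own illustration; the class names and behaviours are assumptions, not anything from Brooks or from neuroscience). A reflex is a fixed sequence of primitive actions, and a reflex manager can run one, inhibit it partway through (reflex B), and store the truncated prefix as a brand-new reflex, as in behaviours 2 and 4 above.

class Reflex:
    """A reflex is just a named, fixed sequence of primitive actions."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = list(steps)

class ReflexManager:
    """Runs reflexes, may inhibit them, and can store a truncated run as a new reflex."""
    def __init__(self):
        self.library = {}

    def add(self, reflex):
        self.library[reflex.name] = reflex

    def run(self, name, inhibit_after=None):
        """Execute a reflex; if inhibit_after is set, reflex B fires at that step and freezes."""
        executed = []
        for i, step in enumerate(self.library[name].steps):
            if inhibit_after is not None and i >= inhibit_after:
                print("reflex B: inhibit current reflex and freeze")
                break
            print("doing: " + step)
            executed.append(step)
        return executed

    def learn_truncated(self, new_name, name, inhibit_after):
        """Mark the truncated prefix as a new reflex to use again."""
        self.add(Reflex(new_name, self.library[name].steps[:inhibit_after]))

mgr = ReflexManager()
mgr.add(Reflex("A", ["run forward", "outstretch arms", "grasp"]))

mgr.run("A")                                  # behaviour 1: "now I have the toy"
mgr.run("A", inhibit_after=2)                 # behaviour 2: run to mummy, arms out, freeze
mgr.learn_truncated("ask for a hug", "A", 2)  # conclusion: a new reflex is born

The point is only that truncation plus memory already yields behaviours that were never explicitly programmed.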

And for a robot:

Define:
* reflex A : follow another robot
* reflex B: inhibit current reflex and freeze

behaviour 1: see a robot moving. initiate A.
result "I am following another robot"

behaviour 2: see a robot moving. initiate A, then initiate B.
result "I followed another robot as far as the recharge station then he went on but I stayed put".
consequence: "you can go somewhere interesting if you follow someone - mark as new reflex"

So even merely aborting and freezing is a reflex-management agent behaviour that is useful and potentially innovative. What about another agent behaviour that mashes reflexes together? It is easy to see that this combining meta-reflex would be useful: from the examples above, A + C would give the ability to see something at a distance, run to it and eat it. Note: I am now seeing how nerve-racking parenthood may be for some!

Or maybe a meta-reflex that time-reverses another reflex or combination: I run backwards and remove what is in my mouth, then place it on the ground. Or I run forwards, place a toy in my mouth, run to mummy and ask for a hug, then spit the toy at her ;-) I am not a parent myself, but I expect nearly all possible combinations do indeed play out in childhood at some time or another. Reversal is like an operator that can act on an existing reflex too: "I followed a robot to the charger and now I am going backwards. I got back to where I was before!"

Or a meta-reflex to repeat another reflex. Reflex D: take a stick and smash it on the ground in front of you. Reflex E: repeat the current reflex... result: "I just smashed my toy into pieces and now it looks different and more interesting".
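
Continuing the toy sketch above, and with the same caveat that this is only an illustration, the combining, reversing and repeating meta-reflexes become simple operators over reflex step lists:

def combine(r1, r2):
    """Mash two reflexes together, e.g. A + C: run to the food, then eat it."""
    return Reflex(r1.name + "+" + r2.name, r1.steps + r2.steps)

def reverse(r):
    """Time-reverse a reflex ('I got back to where I was before!')."""
    return Reflex("reverse " + r.name, list(reversed(r.steps)))

def repeat(r, times=2):
    """Repeat a reflex, e.g. smash the stick on the ground again and again."""
    return Reflex("repeat " + r.name, r.steps * times)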

Friday, 2 December 2011

GRAFFITI : THE NEW AI

BROOKS AND THE NEW AI
 
Cambrian Intelligence: The Early History of the New AI is a compilation of the best papers of Rodney Brooks and embodies the principles of the new AI. In the preface Brooks uses two diagrams - probably the best way to convey the new AI.
The Old AI : Pre-Brooks, Sense-Plan-Act notion
The New AI : The new model, where the perceptual and action subsystems are all there really is. Cognition is only in the eye of the observer

Brooks inadvertently confirms that the beauty of cognition indeed lies in the eye of the beholder!


Tuesday, 15 November 2011

ECOLOGY BASED ROBOTICS - A SNEAK PEEK

I come home and find the automatic lawn mower mowing my lawn, as it is supposed to do when the grass grows beyond 1.5 inches. At the door the face recognition system detects that it is me and opens the door. My personal robot comes along and says, "Good evening, tea and cookies will it be?", to which I smile and nod. I enter my drawing room and the air conditioner senses my presence and starts cooling to 22 degrees Celsius, as per my preference. Before long my personal robot brings me my cup of tea and cookies, serves me and politely adds, "Just to remind you, you have dinner with Mr. Smith; given the traffic and the distance, a good time to start would be 7:33 PM".
This may not be too far in the future. A world dominated by automation and robotics may be just a couple of decades away. Since the 80s, researchers have realised the importance of applied AI, and away from the jargon-laden ivory towers, AI has made its way into robots and intelligent machines that promise to bring the Clarkian world of science fiction to life.
Fig.1 Rosey the Robot Maid, from The Jetsons - an example of a personal robot

 ISSUES WITH BEHAVIOUR BASED APPROACHES

Brooks and Arkin enunciated the behaviour-based approach: give the robot its own sensory system so it can detect and respond to the environment, coupled with a hierarchy of control laws that work in tandem. Thus if a higher level of control fails, the robot can 'subsume' to a lower level of the hierarchy, preventing complete system failure. Added to this was inspiration from the anthropomorphism of animals and insects, which has always attracted the enthusiasm of roboticists. Behaviour-based approaches led to a blending of behaviours, with the robot's cumulative reactive response emerging from them. This hunger for playing God and creating intelligence that reacts to external stimuli with a concerted mechanical response has been an ongoing effort for the last three decades.

An obvious problem with the Brooksian philosophy, which Brooks acknowledges to some extent in his paper (1991), is that the environment as perceived by the robot is only what appears to its sensors. Thus a low-lying mobile robot (viz. a Roomba) whose sensors have an angular span of 30 degrees will see a chair as four metallic rods sticking out of the floor, and will have more of a 2D perception than a true 3D perception. A sensor also has a finite range, so any world view will be an incremental endeavour, probably very slow at times, and this time lag may impair the robot's reactions. Maps may help to some extent, but the real world is dynamic and thus not really 'mappable'.

Though behaviour-based approaches denounce analytical modelling (viz. the blocks world, etc.), sensor-based approaches clearly have their own issues. As Brooks puts it, with a hint of sarcasm:
When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.
I AND THE WORLD - THE ECOLOGICAL APPROACH

The idea of modelling the world and the agent as a single unit is probably most appealing to a software designer; it was discussed by Saffiotti (1998) in his doctoral research.

The philosophy that the world embodies the agent in itself - this ubiquitous point of view is probably the starting point of ecology-based robotics. Extending the idea leads to more potent conceptions: a number of interacting robots and devices, all working in tandem and in explicit cooperation. The idea is motivated by the biological notion of an ecology, where each creature's doings have a bearing on every other creature in the ecology.

This approach is said to be the third revolution in robotics, the first being industrial robots and the second mobile and personal robots.

One of the earliest proponents of ecology-based robotics was Duchon (1994), whose work extended Gibson's pioneering 'Ecological Approach to Visual Perception'. Duchon points to some basic principles (a minimal sketch of the resulting control loop follows the list):
  1. Because of their inseparability, the agent and the environment together are treated as a system.
  2. The agent's behaviour emerges out of the dynamics of this system.
  3. Based on the direct relationship between perception and action, the task of the agent is to map available information to the control parameters at its disposal to achieve a desired state of the system.
  4. The environment provides enough information to make adaptive behaviour possible.
  5. Because the agent is in the environment, the environment need not be in the agent. That is, no central model is needed, but this does leave room for task-specific memory and learning.  
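
Purely to make principles 3 to 5 concrete, here is a minimal sketch (my own, not from Duchon; every name and gain in it is hypothetical) of a controller whose only 'model' is the world itself: currently available information is mapped straight onto control parameters, and behaviour emerges from the closed agent-environment loop.

# Map raw range readings directly onto wheel speeds: steer toward the freer
# side, with no internal map or central model of the environment.
def control_step(left_range, right_range):
    base_speed = 0.3
    turn = 0.5 * (left_range - right_range)
    return base_speed - turn, base_speed + turn   # (left wheel, right wheel)

# The agent-environment system is just the closed loop:
#   readings = sense(environment)           # information the environment provides
#   left, right = control_step(*readings)   # perception mapped to action
#   ...the robot moves, the readings change, and the cycle repeats.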
However, Duchon's work was limited to visual perception and, being research from the 90s, it lacked the new technologies that came to the fore in the following two decades. More in tune with the current day, Arkin addresses ecology (viz. ecological psychology) throughout his works, while Saffiotti and his team have developed control software, PEIS (Physically Embedded Intelligent Systems), to implement ecology-based robotics.
 
Fig.2 Laying of the 'PEIS floor', the floor is networked using RFID chips


Fig.3 The pedagogy leading to a ubiquitous point of view, modified from the works of Saffiotti and Broxvall

Thus, to progress into the realm of 'i-robot', 'Rosey the Robot Maid' and R2D2 - a pragmatic society in which robots and automation work in tandem to support human civilisation - we probably need to route it via ecological approaches.

All philosophies have shortcomings. As a criticism of ecology-based robotics: it can never be realised for sufficiently large environments, since all sensors have a physical limit of range, so a central model of some sort will always be needed to bridge the gap between the pristine theory and practical applications.

Tuesday, 8 November 2011

UPCOMING POSTS

Six upcoming articles I have planned for the blog over the next three months:

'3 generations - shakey, flakey and erratic' - A brief discussion of the Saphira architecture, which has been responsible for these three very special mobile robots.

'Uncanny indeed' - A discussion on the uncanny valley hypothesis by Masahiro Mori

'Ecology based robotics' - A peek into Ecology based robotics

'GSSP' - Tutorial and discussion on Graphical State Space Programming

'Behave !' - A study of contrast - on the various definitions of 'behaviour' in mobile robotics


ROS@HacDC - It may be fun to review HacDC Robotics Class 2011 

MAPS FROM STAGE SIMULATIONS IN ROS : PART DEUX

SLAM USING TELEOPERATION IN STAGEROS

In my previous article, I discussed realising SLAM using Stage's wander controller. Now I discuss an alternative method in which the robot is driven around by a human, again realising SLAM.

This can be done in ROS using stage/stageros and teleoperation.

#.1 - Start an instance of the master in a terminal window

roscore
 
#.2 - Start the stage/stageros simulation in a new terminal window 

rosrun stage stageros /path/to/.world file

This world file must not employ the wander controller; the idea is to drive the robot around, not let it wander.


#.3 - Start keyboard teleoperation in a new terminal window

rosrun teleop_base teleop_base_keyboard base_controller/command:=cmd_vel


#.4 - Start gmapping in a new terminal window 

rosrun gmapping slam_gmapping scan:=base_scan

#.5 - Start map server in a new terminal window 

rosrun map_server map_saver

The map will be saved as a .pgm file in the directory from which the map_server command is issued.

Drive the robot around using the u, i, k, l keys, etc., and watch the map as it develops. Driving it a few times across the whole environment should give a sufficiently good map.
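
As an aside (my own sketch, not part of the tutorial or of teleop_base), the keyboard can be replaced by a small rospy node that publishes velocity commands on cmd_vel, the same topic used in the remapping of step #.3; the node name and the velocity values below are arbitrary.

#!/usr/bin/env python
# Minimal scripted driver: publish a slow forward-and-turning velocity on
# cmd_vel so the simulated robot sweeps the environment for gmapping.
import rospy
from geometry_msgs.msg import Twist

def drive():
    rospy.init_node('simple_driver')     # hypothetical node name
    pub = rospy.Publisher('cmd_vel', Twist)
    rate = rospy.Rate(10)                # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.3                   # forward speed (m/s), arbitrary
    cmd.angular.z = 0.2                  # gentle turn (rad/s), arbitrary
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    try:
        drive()
    except rospy.ROSInterruptException:
        pass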


The rxgraph for this simulation is as shown;

The pros and cons relative to the previous method can be seen by contrast;

NOTE : Simulations done in ROS 1.4.10 diamondback

Sunday, 30 October 2011

MAPS FROM STAGE SIMULATIONS IN ROS

SLAM USING STAGE'S WANDER CONTROLLER IN ROS

SLAM (Simultaneous Localisation and Mapping) using simulations in stage/stageros - mapping the environment from the sensor readings of a mobile robot in 'wandering' mode - may be done as follows:

#.1 - Start an instance of the master in a terminal window

roscore
 
#.2 - Start the stage/stageros simulation in a new terminal window 

rosrun stage stageros /path/to/.world file 

The world file employs the Stage controller 'wander', hence the robot starts moving immediately and is in wandering mode.

#.3 - Start gmapping in a new terminal window 

rosrun gmapping slam_gmapping scan:=base_scan

#.4 - Start map server in a new terminal window 

rosrun map_server map_saver 

The map will be saved as a .pgm file in the directory from which the map_server command is issued.
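
As a quick sanity check on the saved result (my own addition, not part of the ROS tooling), the map can be inspected from Python. The pixel values assumed here - white (254) for free space, black (0) for occupied, grey (205) for unknown - are the conventional map_saver output and should be verified against your own file.

# Count free / occupied / unknown cells in the map saved by map_saver.
from PIL import Image

img = Image.open('map.pgm')                   # the file written by map_saver
pixels = list(img.getdata())
free = sum(1 for p in pixels if p >= 250)     # near-white: free space
occupied = sum(1 for p in pixels if p <= 10)  # near-black: obstacles
unknown = len(pixels) - free - occupied       # grey: never observed
print("free: %d  occupied: %d  unknown: %d" % (free, occupied, unknown))

A large unknown count usually just means the robot has not wandered long enough.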

Run the simulation sufficiently long to obtain a good map. While the simulation is running, launching rxgraph in a new terminal window shows the following graph of the ROS nodes.



SAMPLE OUTPUTS

#.1 - #.3: sample output maps

NOTE : Simulations done in ROS 1.4.9 diamondback

REFERENCES
1) I thank Mac at ROS for his suggestions
2) This blog post went on to motivate a ROS tutorial - http://www.ros.org/wiki/stage/Tutorials/MakingMapsUsingStagesWanderController


Friday, 28 October 2011

ROS TUTORIAL, STAGE CONTROLLERS

My first contribution to the ROS community: a tutorial, Introduction to Stage Controllers

Codes for the tutorial are available here

Wednesday, 26 October 2011

VOLKSBOT

VOLKSBOT@PLAYER/STAGE

While browsing the web, I came across a very interesting model in Player/Stage: the Volksbot. The sheer complexity of the robot and its depiction in the Stage model impressed me.
 



The model was designed for Stage 4.X.X; Brian was kind enough to send the model to me by email, and I was able to modify it to suit Stage 3.X.X.

CODES
  1. The codes for the simulation are available here

Monday, 26 September 2011

EVA

EVA@PLAYER/STAGE 

After some effort, I was able to put together an .inc file in Player/Stage which resembled Eva, Wall-E's love interest.



Thursday, 8 September 2011

CAPTURE BY A SWARM






Flocking behaviour in Stage (3.2.2) can be realised by using the 'pioneer_flocking' Stage controller. The default settings simulate 100 robots which exhibit swarm behaviour by forming flocks.

An interesting feature of a swarm is that it can 'capture' other robots in its path. The captured robots then behave as part of the swarm.

A DEMONSTRATION

1/3 - Initially there are 100 red robots and 3 yellow robots. On starting the simulation the red robots start to move, triggered by sonars bouncing off the walls, while the yellow robots remain stationary.


2/3 - The red swarm, on reaching the yellow robots, 'captures' them.


3/3 - The captured yellow robots behave as part of the swarm.




CODES
Codes for the simulation are available here


Sunday, 4 September 2011

USING GAZEBO

GAZEBO

 

Gazebo is part of the Player Project. It is a 3D simulator utilising ODE and OGRE. ROS has incorporated Gazebo into its distributions with some minor modifications.

GAZEBO@ROS WITH WORLD FILES - A DETOUR FROM THE USUAL

Gazebo, while part of the Player Project, is used with world files. ROS, however, recommends using Gazebo with launch files - these launch files identify the corresponding world file.


However, in the spirit of Player/Stage/Gazebo, it is possible to run Gazebo with world files directly from its executable, located at /opt/ros/diamondback/stacks/simulator_gazebo/gazebo/bin.


Thus, using ./gazebo with pioneer2dx.world, a well-known Player Project world file:
  1. Start an instance of roscore - this Gazebo is built with slight modifications, so an instance of the master (roscore) is needed; the later scripts search for the master.
  2. Navigate to the gazebo executables directory ( cd /opt/ros/diamondback/stacks/simulator_gazebo/gazebo/bin)  
  3. Start Gazebo with pioneer2dx.world file (./gazebo /path/to/pioneer2dx.world)
  4. The following window should pop up,  


Running rxgraph to check the ROS nodes at work;


Trying with pioneer2at.world file;


The same works for Stage: the executables at /opt/ros/diamondback/stacks/simulator_stage/stage/bin can be used in a similar manner. The executables stage and stageros both work with a world file, giving 2D simulations. However, stage is the primitive Player Project executable and doesn't need a roscore master, while stageros is more 'ROS-like': it needs roscore and works with ROS nodes.


NOTE 
  1. Tested on ROS 1.4.9 Diamondback (Gazebo 0.10.0 and Stage 3.2.2)     
  2. pioneer2dx.world and pioneer2at.world files can be found in Gazebo installation files (viz. gazebo-0.10.0/worlds)
  3. An alternative is to use, rosrun gazebo gazebo /path/to/pioneer2dx.world file

REFERENCES
(1) I thank John Hsu for his help
(2) N. Koenig, A. Howard, "Design and Use Paradigms for Gazebo, An Open-Source Multi-Robot Simulator"
(3) Gazebo Test at Care-O-Bot webpage
(4) Alternative download for the pioneer2dx.world file - http://ubuntuone.com/5pWIiUyo1mhrY2PHP4I86C

Saturday, 20 August 2011

THE KHEPERA

ORIGINS

The Khepera is a small mobile robot developed at the EPFL (École polytechnique fédérale de Lausanne) by Prof. Jean-Daniel Nicoud and his team. The Khepera has been developed and manufactured as a commercial product by the K-Team Corporation, for use in education and research. 

In Egyptian mythology, Khepera is the name of a beetle-looking god symbolising the forces that move the sun. Khepera is more widely associated with "rebirth", "resurrection", "renewal", and the general idea of "coming into being": it might have inspired both the look and the name of the robot!
The main purpose of the Khepera is to provide a platform for training and experiments involving local navigation, artificial intelligence, collective behaviour, and real-time programming.

It is widely acknowledged that the Khepera played a non-negligible part in the emergence of evolutionary robotics.

ORIGINAL VERSION

The original Khepera is a 55 mm diameter, 3 cm high robot constructed around a 16 MHz Motorola 68331 processor. Its motion and steering are achieved via two DC brushed servo motors with incremental encoders, and obstacles are detected by 8 infrared proximity and ambient light sensors. The robot can be remotely operated via a personal computer.
Released 10 years ago, it received a significant processor and firmware update before being discontinued. Note that the sensors work pretty much like contact switches: the mobile robot could only detect very close obstacles.

K-2.0 VERSION

The Khepera II solved many shortcomings of the original, such as the reach of the distance sensors, a swappable battery pack system for better autonomy, and improved differential-drive odometry allowing more precision on awkward surfaces. The Khepera is now also capable of carrying additional modules, increasing the versatility of the system. The most notable improvement, however, is the added processing power, improving the robot's computing autonomy.

K-3.0 VERSION

The third iteration of the Khepera is almost a revolution from the hardware point of view. On top of an array of 9 infrared sensors and two ground-oriented sensors allowing line following and edge detection, the Khepera also carries five ultrasonic sensors.


An additional module adds Linux support, Flash extension cards, Wi-Fi, Bluetooth, 2D cameras, and extra storage space to complete the Khepera's arsenal. The mobile robot also provides the user with improved-quality motors and odometers to work with.

KHEPERA, PROGRAMMING AND CONTROL

Remote operation programs can be written with Matlab, LabView, or any programming language supporting serial-port communication; a minimal sketch of this idea is given below, followed by the more interesting simulation and control packages.
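
The sketch uses Python and pyserial; the port name, baud rate and the ASCII command strings ("D,left,right" to set the wheel speeds, "N" to read the proximity sensors) are assumptions from memory of the Khepera serial protocol and should be checked against the Khepera user manual.

# Drive a Khepera over its serial link and read its proximity sensors.
# Port, baud rate and command syntax below are assumptions - check the manual.
import serial

ser = serial.Serial('/dev/ttyS0', 9600, timeout=1)

ser.write(b"D,5,5\n")     # set both wheel speeds (assumed "D,left,right" syntax)
print(ser.readline())     # the robot echoes an acknowledgement

ser.write(b"N\n")         # request the proximity sensor readings (assumed)
print(ser.readline())     # e.g. a comma-separated list of 8 values

ser.write(b"D,0,0\n")     # stop
ser.close()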

#.1. KiKS
KiKS stands for "KiKS is a Khepera Simulator" and runs under Matlab. It was developed by Theodor Storm as a year-long Master's degree project. KiKS emulates one or more Khepera robots connected to the computer by simulating their motors, proximity/light sensors, and behaviours.


 #.2. KHEPERA SIMULATOR
Khepera Simulator is a freeware package for developing controllers for the Khepera mobile robot in C or C++. It includes an environment editor and a graphical user interface. Moreover, if you own a Khepera robot, you can switch very easily between the simulated robot and the real one. It is mainly aimed at teaching and research in autonomous agents.


#.3. KHEPERA IN PLAYER/STAGE
The Khepera can be simulated in Player/Stage;


#.4. KHEPERA IN WEBOTS
Khepera in Webots simulator;