Sunday 6 October 2013

OLFACTORY SIMULATIONS WITH MANY ROBOTS

This blog post describes a Player/Stage simulation of multi-robot mapping, with goals or reference points acting as odor sources. Let's think of a situation where we need to explore a large environment and generate a map of it that can be used later on for navigation and localization. In this scenario, instead of a single robot exploring such a large environment, it is always more effective (in terms of processor usage and time) to let multiple robots explore different parts of the environment and finally merge the maps, either topologically or based on the overlap regions of the occupancy grids.

This idea is based on the mobile robot olfaction experiments performed by the ISR Embedded Systems Lab, details of which can be found here: http://ftp.isr.ist.utl.pt/pub/roswiki/simulator_plumesim.html

The idea of using the PlumeSim library for this purpose came from that interesting project: the PlumeSim framework simulates odor transport in an environment. The premise behind it is that olfaction is a key sense in the survival of many biological species.

This article first deals with the process of installing the PlumeSim plugin driver, followed by a multi-robot mapping simulation in Player/Stage. The driver is capable of introducing simulated chemical plumes into a simulated or real robot from a broad range of sources, from simple mathematical models up to CFD (computational fluid dynamics) software. It is also able to periodically play back plumes recorded from real-world experiments.

Below are the steps involved in setting up the driver for this purpose:

1. The driver library can be downloaded from this GitHub
    link: https://github.com/DevasenaInupakutika/PlumeSim-1.0

2. Clone it to your local machine from a terminal as below:
    
    git clone https://github.com/DevasenaInupakutika/PlumeSim-1.0

3. Enter the directory: 

     cd PlumeSim-1.0

4. Build the library using the command below, which will create the required
    driver (libPlumeSim.so):

    make

This plugin driver brings plume simulation into the Player/Stage world of mobile robotics. It can be used as a great tool for developing odor-search algorithms, providing a smooth path from the simulated robot to the real environment.

Sample output: the blue dots show the plume of the gas
In this article, we take the use-case of this driver in multi-robot map merging. For this purpose, please follow the steps below:

Note: Make sure the paths to the Player/Stage packages and the PlumeSim library all point to the same location.

1. The Player/Stage package (Player_Plumsim) can be downloaded from here.

2. Enter the directory (cd Player_Plumsim) and use the make clean command to
    clean up if the package has already been built.

3. Build the package as below:

    g++ -o main `pkg-config --cflags playerc++` main.cc `pkg-config --libs playerc++`

4. Run the program as below:

    player map_square.cfg

    Open another terminal (in a separate tab):
    
    ./main -p 6665 -r 1 

    (which means robot 1 will start moving, where 1 is the robot id and 6665 is
    the port number)

The configuration file (map_square.cfg) contains the description of the robots and their corresponding ports. The most important thing for making multiple robots move is to assign either a separate port number to each robot, or separate robot ids on the same port number.
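
For illustration, a minimal multi-robot configuration in this spirit might look like the sketch below. This is the standard Player/Stage plugin syntax, not the actual contents of map_square.cfg; the world file and model names here are assumptions:

    # Load the Stage simulation itself
    driver
    (
      name "stage"
      plugin "stageplugin"
      provides ["simulation:0"]
      worldfile "map_square.world"
    )

    # Robot 1 on port 6665
    driver
    (
      name "stage"
      provides ["6665:position2d:0" "6665:laser:0"]
      model "robot1"
    )

    # Robot 2 on the same port with a different index
    # (alternatively, give it a separate port such as 6666)
    driver
    (
      name "stage"
      provides ["6665:position2d:1" "6665:laser:1"]
      model "robot2"
    )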

5. In order to run another robot, open another terminal tab and type the
    command below:

    ./main -p 6665 -r 2

   and so on. This setup supports a maximum of 12 robots.
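
Our main.cc is not reproduced here, but a minimal playerc++ client in the same spirit is sketched below. The motion command is a placeholder (the real program drives the robot around the environment), and the mapping of robot id to proxy index is an assumption:

    #include <libplayerc++/playerc++.h>
    #include <cstdlib>

    int main(int argc, char *argv[])
    {
        // Crude stand-ins for the -p (port) and -r (robot id) flags.
        int port = (argc > 1) ? atoi(argv[1]) : 6665;
        int id   = (argc > 2) ? atoi(argv[2]) : 1;

        PlayerCc::PlayerClient robot("localhost", port);
        PlayerCc::Position2dProxy pp(&robot, id - 1);  // proxy indices are 0-based

        for (;;)
        {
            robot.Read();           // block until fresh data arrives from the server
            pp.SetSpeed(0.3, 0.1);  // placeholder: constant forward + turn command
        }
        return 0;
    }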

After executing all the above steps, you get results similar to the ones shown in the screenshots below, and the corresponding robot text files (robot<id>.txt) get updated based on which robot is traversing the environment and its distance from the odor source. The emanating gas is shown in red, while the robots, seen from the top, are circles of various colours.





The files being updated contain the simulation environment's topological map data (in terms of coordinates).
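
As an illustration of the kind of data being written, a hypothetical logging helper could look like this (the actual robot<id>.txt format may differ):

    #include <cmath>
    #include <cstdio>

    // Append the robot's coordinates and its distance from the odor
    // source to robot<id>.txt (the record format is an assumption).
    void logPose(int id, double x, double y, double srcX, double srcY)
    {
        char name[32];
        std::snprintf(name, sizeof(name), "robot%d.txt", id);
        std::FILE *f = std::fopen(name, "a");
        if (!f) return;
        double dist = std::hypot(srcX - x, srcY - y);
        std::fprintf(f, "%f %f %f\n", x, y, dist);
        std::fclose(f);
    }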

This work has also been implemented on ROS as occupancy grid map merging, wherein the robots generate local maps by exploring separate areas of the environment and merge them, based on overlap regions, into a final global map; this will be discussed in a separate article.

This can be implemented on real robots as well, using the Player server of the Player/Stage suite.

Saturday 31 August 2013

E-GLOVE Part-2 -- The Robotic Arm !

E-GLOVE 'n' THE ROBOTIC ARM

The last time I blogged here, I wrote about my project on the e-glove and how my team and I got around to using the glove as a wireless air-mouse using an accelerometer and an RF transceiver system. But, not stopping there, we decided to move on and try to use it for its more serious applications, like bomb defusal, remote object manipulation and maybe, if we get access to more sensitive equipment, a robotic surgery arm.


The first step, obviously, was to acquire the arm. Being poverty-stricken engineering students, we had no funds to actually purchase the arm itself, so we approached the Biomedical Instrumentation department of our college, who graciously provided us with not one, but TWO unused robotic arms. Since one of them was pre-assembled, we decided to use that one for our current prototype.

The arm, manufactured by Arexx, actually has an inbuilt ATmega processor and an FTDI USB-to-serial converter to program and control the servos on the arm independently. However, upon further perusal of the documentation, kindly written in German for better understanding, I found out that the coding would have to be done in assembly or hex (English version, FINALLY). That promptly made me drop the idea and decide to proceed by individually connecting each motor to the Arduino Leonardo and using the Servo library which comes built-in with the IDE. Well, that went well, and I could now manually control the arm using the WASD keys on the keyboard. I even ordered a Sony CP-ELS power bank for the arm, which is basically a 2200 mAh rechargeable battery that will finally allow for some portability.
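
For the curious, a stripped-down version of that keyboard control might look like the sketch below. The pin number, step size and key bindings are illustrative; the real code drives five servos:

    #include <Servo.h>

    Servo base;        // one Servo object per motor in the real code
    int angle = 90;    // start at the centre position

    void setup() {
      base.attach(9);  // signal pin is an assumption
      base.write(angle);
      Serial.begin(9600);
    }

    void loop() {
      if (Serial.available()) {
        char c = Serial.read();
        if (c == 'a') angle = min(angle + 5, 180);  // step one way
        if (c == 'd') angle = max(angle - 5, 0);    // step the other way
        base.write(angle);
      }
    }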

The power for each of the 5 motors (actually, there are 6, but one is broken) came from an old USB cable connected to the GND and +5V pins of each motor using a simple splitter on a PCB, which gave me 5 parallel outputs from one battery input.

Block Diagram of arm with motors labelled accordingly
Then, once I got that part working, we came to the main task at hand: making the arm compatible with the existing glove so that we could move it around using hand movements.

Then came the real problem. It turns out that the VirtualWire library that we used (mentioned here) relies on an interrupt-based system to get the RF communication working. Now, the Servo library ALSO uses the same timer interrupts on the Arduino and thus, no matter what, the two are incompatible. Which basically meant that, for the moment, our project was doomed.

Alternative library after alternative library was tried, but to no avail. The ServoTimer2 library promised to be compatible with the VirtualWire library, but even that, as it turned out, did not work. We even tried editing the library itself (a huge undertaking on our part, since we didn't know crap about the internal hardware structure of the Arduino) and we got the thing to compile, but the arm remained as motionless as ever.

Then, at 5:00 AM one morning, with an 8:00 AM class on the horizon, I decided to bunk all libraries and try moving the servos using simple, old-school PWM control. And voila, that did the trick. Taking inspiration from this tutorial, I proceeded to fine-tune the pulse widths to match the rotations of each motor, which gave me complete 0 to 180 degree turns with relative ease. Putting it into a function made it way easier to choose the motor to be turned.

On a tangent, here is a quick explanation of PWM
Basically, the servos have an internal chip/driver which reads signals in terms of HIGH and LOW and the amount of time a signal remains HIGH or LOW.
               _      __     ___    _____   _      _____   __     _   
               | |    |  |   |   |  |     | | |    |     | |  |   | | 
PWM Signal     | |    |  |   |   |  |     | | |    |     | |  |   | |  
             __| |____|  |___|   |__|     |_| |____|     |_|  |___| |_____
 So, what the servo actually reads is the width of each HIGH pulse: a pulse around the mean value holds it at the centre, a shorter pulse turns it towards one extreme, and a longer pulse towards the other, which is what gives the 0 to 180 degree range. The length of the "plateau" in the above picture is basically the duration for which the signal is kept HIGH or LOW.
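
A minimal sketch of that library-free pulse control is shown below. The pin number and pulse-width range are illustrative; the real values were fine-tuned per motor:

    const int servoPin = 9;  // signal pin is an assumption

    void setup() {
      pinMode(servoPin, OUTPUT);
    }

    // Hold the servo at roughly the given angle for 'ms' milliseconds
    // by bit-banging the pulse train by hand.
    void moveServo(int pin, int angle, unsigned long ms) {
      int pulse = map(angle, 0, 180, 600, 2400);  // angle -> pulse width in us
      unsigned long start = millis();
      while (millis() - start < ms) {
        digitalWrite(pin, HIGH);
        delayMicroseconds(pulse);
        digitalWrite(pin, LOW);
        delay(20);                                // ~50 Hz servo frame
      }
    }

    void loop() {
      moveServo(servoPin, 0, 500);
      moveServo(servoPin, 180, 500);
    }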

Back on track
So, that was that. Once we got that working, it was just a matter of removing the Mouse() functions from the code and replacing them with the PWM servo functions in order to get each motor working as intended.

The basic movement mapping is as follows (a sketch of the dispatch logic follows the list):

Normal Left-Right Movement : <base_rotate>
Normal Up-Down :  <bottom>
Left Click Up-Down : <middle>
Left Click Left-Right : <base_rotate>
Left+Right Click Up-Down : <claw_control>
Left+Right Click Left-Right : <claw_rotate>


Obviously, this being a prototype, we are still working out some kinks in the accuracy and control of the arm, but so far so good. The next step in this project will most probably be mounting it onto a rover of sorts, with an attached remote camera, for total wireless control of the arm. Also in the pipeline is interfacing this arm with an EEG machine, to try and use brain waves to give rudimentary control of the arm, which would be useful for handicapped people.





Wednesday 24 July 2013

E-GLOVE Part-1 -- James Bond

E-GLOVE -- AN ARDUINO BASED AIR-MOUSE

I recently got my hands on an accelerometer, the MMA7361 on a nice breakout board for use with the Arduino and that got me thinking about its practical applications. Inspired by that idea and some Googling, I got around to making myself an e-glove, something like an air-mouse for the computer with which I could control my mouse with hand movements. Firstly, I had to figure out how to use the accelerometer by hooking it up to the Arduino and how to get understandable readings from it.

Fig. 1. The final glove with wrist-strap
Initial Googling got me to the SparkFun website, which distributes the said accelerometer, and that got me some useful information regarding hooking it up to the Arduino (the page was in Chinese, I think) and a sweet little library that did all the maths for me: SparkFun MMA7361, MMA7361 Library, MMA7361 Connections. For those of you interested in further reading and details on how to do the maths, read it here.
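
If you'd rather skip the library, the raw readings can be had with plain analogRead() calls, something like the sketch below (the pin assignments are assumptions; the library handles calibration and unit conversion for you):

    const int xPin = A0, yPin = A1, zPin = A2;  // assumed wiring

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // Raw 10-bit readings; roughly mid-scale at 0 g (the exact
      // centre depends on the supply and reference voltage).
      Serial.print(analogRead(xPin)); Serial.print(' ');
      Serial.print(analogRead(yPin)); Serial.print(' ');
      Serial.println(analogRead(zPin));
      delay(100);
    }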

Once that was hooked up, the next step was to go ahead and get the mouse to move in response to the accelerometer readings. So, I went ahead and got myself an Arduino Leonardo, which supports on-the-fly mouse control using the ATmega32u4 that has built-in USB communication, eliminating the need for a secondary processor.  This allows the Leonardo to appear to a connected computer as a mouse and keyboard, in addition to a virtual (CDC) serial / COM port. The official details for the board are given here.

Now came the tough part. The X and Y axis readings coming from the accelerometer had to be translated into real movement of the mouse on the screen. Taking a look at the mouse control function of the Leonardo, this was the syntax:

Mouse.move(<x-axis>, <y-axis>, <wheel>);

But the readings from the accelerometer were slightly different than expected: the X-axis readings from the accelerometer were the Y-axis movement on the monitor and vice-versa. Since we wanted a hand-up motion to move the mouse up and a hand-down motion to move it down, we had to take -X readings for the up movement and +X readings for the down movement.

Once that was cleared up, the next hurdle was that the mouse needed to stop moving when the hand was in a central position, i.e. at rest. To facilitate that, we figured out a simple graph system which allowed us to set thresholds based on readings taken in from the accelerometer and move the mouse only if a threshold was crossed. The graph below is basically what we used as the basis for all mouse movements; the blue areas in it are essentially the threshold values we chose on the basis of readings from the Arduino. Initially, the mouse movement was fixed to 4 directions, but we added separate code to facilitate movement in all 8 directions. The mouse was now working quite nicely, with relative smoothness in all directions.

Fig. 2. The graph used to figure out threshold values
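
A minimal version of that dead-zone logic might look like this (the centre and threshold values here are made up; ours came from the graph):

    const int CENTER = 512, THRESH = 60;  // illustrative values

    void updateMouse(int xRead, int yRead) {
      int dx = 0, dy = 0;
      // Accelerometer X maps to screen Y and vice-versa, as noted above.
      if (xRead < CENTER - THRESH)      dy = -2;  // hand up -> mouse up
      else if (xRead > CENTER + THRESH) dy =  2;  // hand down -> mouse down
      if (yRead < CENTER - THRESH)      dx = -2;
      else if (yRead > CENTER + THRESH) dx =  2;
      if (dx || dy) Mouse.move(dx, dy, 0);        // both axes -> diagonal
    }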
The next step of the project was to actually try and make this wireless, since the main aim was to use it for something like controlling a robotic arm from a distance or shooting some bad guys in Call of Duty. So, I pulled out my trusty RF transmitter and receiver and got started with trying to integrate them into this glove. This meant that I had to get hold of an Arduino Uno to act as the transmitter, which would sit on the glove, with the Leonardo as the receiver, which would remain connected to the computer. The RF link uses the VirtualWire library to send and receive data on a 433 MHz frequency. After connecting the RX and TX to the Leonardo and the Uno respectively, as given here, I proceeded to code both ends. The Uno would take care of reading all the accelerometer input and transmitting it to the Leonardo receiver, which would then move the mouse on the system.

The RF works at 4800 bps (bits per second), which gave me pretty good resolution on the mouse. To improve the efficiency of transmission, the code was written to keep the data sent to a minimum. So, we went for low-level encoding and brought the data sent down to around 10 or 15 bits, as opposed to sending the X and Y readings separately as integers. We also added an extra bit to the setup() portion of the code on the transmitter so that each time the user started the glove, the receiver would calibrate it based on the initial readings of the accelerometer.
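
On the transmitter side, that low-level encoding boils down to something like the sketch below. The bit layout shown is illustrative, not our exact encoding, and the pin number is an assumption:

    #include <VirtualWire.h>

    void setup() {
      vw_set_tx_pin(12);  // data pin of the 433 MHz transmitter (assumed)
      vw_setup(4800);     // 4800 bps, as used in the project
    }

    void loop() {
      uint8_t xDir = 1, yDir = 2, clicks = 0;            // placeholder values
      uint8_t packet = (xDir << 4) | (yDir << 2) | clicks;
      vw_send(&packet, 1);  // one byte instead of two full integers
      vw_wait_tx();         // block until the message has gone out
      delay(50);
    }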

The basis of the calibration was that the receiver would take around 5 to 10 initial readings, then add a threshold value to them, on the basis of which the device would respond. This gave us a dynamic range within which the mouse would move, irrespective of how the user was wearing the glove. Also, on the basis of said calibration, we added a sensitivity option that allows the user to control the speed of mouse movement based on the degree of movement of his hand. This was done using a float value based on the accelerometer readings, which gave a nice smooth transition from one speed to another, like an exponential curve.
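
The calibration itself can be as simple as averaging the first few readings (the sample count here is an assumption):

    // Average the first N readings to find the rest position; the
    // result replaces a hard-coded centre value.
    int calibrate(int pin) {
      const int N = 10;
      long sum = 0;
      for (int i = 0; i < N; i++) {
        sum += analogRead(pin);
        delay(20);
      }
      return sum / N;
    }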

The next step was to add control for the mouse clicks for which we employed two Flex Sensors. This is a cool piece of hardware that changes its resistance based on how much it is bent. Using information available here, I set up the flex sensors on the glove’s fingers and added some calibration bits for this too. The Leonardo syntax for mouse clicks is :

Mouse.press(MOUSE_LEFT | MOUSE_RIGHT | MOUSE_MIDDLE);
Mouse.release(MOUSE_LEFT | MOUSE_RIGHT | MOUSE_MIDDLE);
Mouse.isPressed(MOUSE_LEFT | MOUSE_RIGHT | MOUSE_MIDDLE);

Using these three functions, we figured out a way to send a bit for each click, left and right, and then, on the receiver, check whether they were pressed or released separately or simultaneously. When both were pressed simultaneously, our scroll routine was called (on the Leonardo, scrolling is done via the third argument of Mouse.move()), which allowed screens to be scrolled based on X-axis movement.
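
The receiver-side click handling is essentially press/release bookkeeping, sketched below; in reality the bent/straight decisions arrive as bits over the RF link:

    void handleClicks(bool leftBent, bool rightBent) {
      if (leftBent   && !Mouse.isPressed(MOUSE_LEFT))  Mouse.press(MOUSE_LEFT);
      if (!leftBent  &&  Mouse.isPressed(MOUSE_LEFT))  Mouse.release(MOUSE_LEFT);
      if (rightBent  && !Mouse.isPressed(MOUSE_RIGHT)) Mouse.press(MOUSE_RIGHT);
      if (!rightBent &&  Mouse.isPressed(MOUSE_RIGHT)) Mouse.release(MOUSE_RIGHT);
      // Holding the button while the finger stays bent is what gives
      // us drag-and-drop for free.
    }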

This, in my opinion, was the hardest part of the coding, since we also had to take care of drag-and-drop functionality. Basically, that was it. After around a week of coding and frustrating amounts of testing along with around 6 hours of PCB soldering, we had a working prototype of the wireless e-glove. We added a wrist-strap that held the Uno and the battery (9V) on the glove and taped things up a bit to keep things steady on the glove.

The members of the team are:

Kanishka Ganguly – Hardware and software
Nimesh Ghelani – Coding
Vishesh Dokania – Coding and general visualization of concepts
Neel Lahiri – Coding and general visualization of concepts
Mainak Basu – PCB soldering and guidance

Fig. 3. The PCB after fabrication
Our main aim with this project was a proof of concept, which we hope to achieve by purchasing a robotic arm and connecting the receiver to it to control the arm remotely. The main agenda is to prove that such systems, if produced on a larger scale, can be used in rovers to remotely defuse mines and rescue people from wreckage. Also, such systems may be extrapolated for use in remote surgery and similar applications. The next step on the agenda is to try and use gyroscopes and/or Inertial Measurement Units (IMUs) in order to achieve much more fluid gestures and movement in 3D space.

The code for this is currently in a private repo on Github, which we cannot release due to intellectual property rights at the moment.

This is a work in progress and we hope to put up videos and other progress soon, and maybe some day James Bond will be using it .... 'shaken, not stirred'!!!