
Simple Obstacle Avoidance

John Keogh | December 13, 2013

This post discusses a simple algorithm for obstacle detection and avoidance. The techniques and code discussed here build on two earlier posts: Range Finding with LEDs and Robot Reacting to Visual Stimulus.

This blog post has an accompanying video that briefly shows a robot using the algorithm covered here.

Algorithm

The algorithm is very straightforward:

  • With the robot headlights off, take an image
  • With the robot headlights on, take another image
  • Compare the two images; if the area lit by the headlights is brighter than a threshold and lies above a certain vertical point in the frame, there is an obstacle in the way.
  • If there is no obstacle, continue straight. If the obstacle is off to one side, turn away from it; if it is directly ahead, turn in a random direction. Then look for obstacles again (a rough sketch of this loop follows the next paragraph).
To see the effect, shine a flashlight into the middle of a room: the light will be heavily attenuated by the time it reaches the far wall, and there will be no noticeable effect if the wall is too distant. If you shine the flashlight at a nearby wall, the reflected light will be bright.
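As a rough sketch of how these steps fit together, the loop might look like the following. The -setHeadlights: and -captureFrame helpers are hypothetical stand-ins for the robot's headlight control and camera capture code (they are not part of this post); only -lookForObstacles:andLightedImage:, shown in the Implementation section below, is.

//A minimal sketch of the sense-and-act loop, assuming hypothetical helpers:
//setHeadlights: toggles the robot's headlights and captureFrame returns the
//current camera frame as a UIImage.
-(void)avoidanceStep{
    //take an image with the headlights off
    [self setHeadlights:NO];
    UIImage *darkImage = [self captureFrame];

    //take another image with the headlights on
    [self setHeadlights:YES];
    UIImage *litImage = [self captureFrame];

    //compare the two images; lookForObstacles:andLightedImage: also sets the
    //motor power, so the robot continues straight or turns as a side effect
    ObstacleLocation location = [self lookForObstacles:darkImage andLightedImage:litImage];
    if(location == kNoObstacle){
        //nothing in the way; the robot keeps driving straight until the next check
    }
}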

Implementation

//from the header
typedef enum {
    kUnknownObstacleLocation,
    kNoObstacle,
    kObstacleFront,
    kObstacleLeft,
    kObstacleRight
} ObstacleLocation;

//from the implementation
-(ObstacleLocation)lookForObstacles:(UIImage *)noLightImage andLightedImage:(UIImage *)lightedImage{
    CGImageRef noLightImageRef = [noLightImage CGImage];
    NSUInteger width = CGImageGetWidth(noLightImageRef);
    //to determine whether to turn right or left
    NSUInteger halfWidth = width/2;
    NSUInteger height = CGImageGetHeight(noLightImageRef);
    NSInteger brightOnRight = 0;
    NSInteger brightOnLeft = 0;
    CGImageRef lightedImageRef = [lightedImage CGImage];
    ObstacleLocation calculatedLocation = kUnknownObstacleLocation;
    if((CGImageGetWidth(lightedImageRef) != width) ||
       (CGImageGetHeight(lightedImageRef) != height)){
        self.hasDistance = NO;
        return calculatedLocation;
    }

    //copy the image data into raw data buffers for speed
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *lightedRawData = (unsigned char*)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(lightedRawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), lightedImageRef);
    CGContextRelease(context);

    unsigned char *noLightRawData = (unsigned char*)calloc(height * width * 4, sizeof(unsigned char));
    context = CGBitmapContextCreate(noLightRawData, width, height,
                                    bitsPerComponent, bytesPerRow, colorSpace,
                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    //release the color space only after both contexts have been created
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), noLightImageRef);
    CGContextRelease(context);

    int totalSame = 0;
    int totalBrighter = 0;
    //skip the bottom rows of the frame, which are always lit by the headlights
    for(int row = 0; (row + 4) < ((int)height - 80); row += 4){
        for(int column = 0; column < (int)width; column++){
            for(int rowmatrix = 0; rowmatrix < 8; rowmatrix++){
                int byteIndex = (int)(bytesPerRow * (row + rowmatrix)) + (column * bytesPerPixel);
                CGFloat red = (noLightRawData[byteIndex] * 1.0);
                CGFloat green = (noLightRawData[byteIndex + 1] * 1.0);
                CGFloat blue = (noLightRawData[byteIndex + 2] * 1.0);
                CGFloat noLightAverage = (red + green + blue)/3.0;
                CGFloat lightedRed = (lightedRawData[byteIndex] * 1.0);
                CGFloat lightedGreen = (lightedRawData[byteIndex + 1] * 1.0);
                CGFloat lightedBlue = (lightedRawData[byteIndex + 2] * 1.0);
                //cutoff determined experimentally, will depend on the headlights;
                //could also use a polynomial to create the values
                int cutoff = 60;
                if(noLightAverage < 80){
                    cutoff = 120;
                }
                else if(noLightAverage < 150){
                    cutoff = 100;
                }
                else if(noLightAverage < 200){
                    cutoff = 80;
                }
                //this is the difference between the images with the light on and off.
                //If it is high, it means there is reflected light, so an obstacle may be in the way
                int difference = (int)(fabs(red - lightedRed) +
                                       fabs(green - lightedGreen) +
                                       fabs(blue - lightedBlue));
                if(difference > cutoff){
                    totalBrighter++;
                    if(column > halfWidth){
                        brightOnRight++;
                    }
                    else{
                        brightOnLeft++;
                    }
                }
                else{
                    totalSame++;
                }
            }
        }
    }
    //free the raw buffers
    free(lightedRawData);
    free(noLightRawData);

    //the LightCodedOutput class below is used to communicate with the robot body;
    //you can use the iOS project from
    //http://eyesbot.com/blog/?preload=talking_to_arduino_or_hardware_from_an_ipod.txt
    //for example code
    if(totalBrighter < 2000){
        //too few bright pixels to be anything but noise: no obstacle, drive straight
        calculatedLocation = kNoObstacle;
        [[LightCodedOutput sharedInstance] setRightPower:5];
        [[LightCodedOutput sharedInstance] setLeftPower:5];
        turnsInARow = 0;
        self.skipNextInterval = YES;
        self.hasDistance = NO;
    }
    else{
        self.skipNextInterval = NO;
        //if the robot is thrashing, just make a wide turn
        turnsInARow++;
        if(turnsInARow > 3){
            self.skipNextInterval = YES;
        }
        float ratio = (float)brightOnLeft/(float)brightOnRight;
        if(ratio > 1.5){
            //more bright pixels on the left: the obstacle is on the left
            calculatedLocation = kObstacleLeft;
            [[LightCodedOutput sharedInstance] setRightPower:4];
            [[LightCodedOutput sharedInstance] setLeftPower:-4];
        }
        else if(ratio < 0.75){
            //more bright pixels on the right: the obstacle is on the right
            calculatedLocation = kObstacleRight;
            [[LightCodedOutput sharedInstance] setRightPower:-4];
            [[LightCodedOutput sharedInstance] setLeftPower:4];
        }
        else{
            //obstacle is directly ahead, so turn in a random direction
            calculatedLocation = kObstacleFront;
            if((arc4random() % 2) == 1){
                [[LightCodedOutput sharedInstance] setRightPower:4];
                [[LightCodedOutput sharedInstance] setLeftPower:-4];
            }
            else{
                [[LightCodedOutput sharedInstance] setRightPower:-4];
                [[LightCodedOutput sharedInstance] setLeftPower:4];
            }
        }
    }
    return calculatedLocation;
}
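The cutoff comment in the code above notes that a polynomial could generate the threshold instead of the step table. A sketch of that idea follows; the slope and intercept are assumptions chosen to roughly follow the 120/100/80/60 steps used above, and would need to be tuned to your own headlights and camera.

//Illustrative only: a first-degree polynomial that gives roughly 120 for a dark
//pixel and falls linearly to roughly 60 for a fully bright one. The coefficients
//are assumptions, not values from the implementation above.
static inline int cutoffForAmbientLuminosity(CGFloat noLightAverage){
    CGFloat cutoff = 120.0 - (60.0/255.0) * noLightAverage;
    if(cutoff < 60.0){
        cutoff = 60.0;
    }
    return (int)cutoff;
}

A continuous function like this avoids the jumps at the 80, 150, and 200 boundaries, where a small change in ambient light can otherwise flip a pixel between counting as reflected headlight and not.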

Next Steps

We're now working on a signature algorithm and a scalable back end (using Cassandra) to hold mapping data. This will let the robot map out its environment and share that mapping data with other robots.
