Robot Reacting to Visual Stimulus

John Keogh | August 7, 2013

One of the behaviors that makes sentient beings appear sentient is the ability to react to the environment. In the case of reacting to an object, it is easy to mimic this behavior by having a robot recognize a color palette, move forward if the number of pixels matching that palette is low, and back away if the number of matching pixels gets too high. The following video shows the robot in action; the algorithm and code required to produce this reaction are discussed further down the page.


The algorithm is:

  • Count the number of pixels that match the palette in each vertical third of the image
  • Determine whether the bulk of the matching pixels fall in the left, center, or right third
  • Estimate how far away the object is (as measured by the number of matching pixels)
  • If the object is too far, go forward, steering left, right, or straight
  • If the object is too near, go backward, steering left, right, or straight
  • If the object is neither far nor near, or it isn't clear where the object is, don't do anything


The code to implement the above is surprisingly compact: it takes a UIImage, analyzes it, and returns a typedef'd enum value that tells the calling code which way to go.

//these values were experimentally determined
static const int kRedUpperLimit = 255;
static const int kRedLowerLimit = 120;
static const int kGreenUpperLimit = 100;
static const int kGreenLowerLimit = 0;
static const int kBlueUpperLimit = 100;
static const int kBlueLowerLimit = 0;
static const int skipPixels = 2;

-(FollowingDirection)whereDoesTheLightTellMeToGo:(UIImage *)directingImage{
    //get the underlying CGImage and its dimensions
    CGImageRef imageRef = [directingImage CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);

    //draw the image data into a raw RGBA buffer
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *imageRawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(imageRawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    NSInteger totalPoints = 0;
    NSInteger brightPoints = 0;
    NSInteger leftThirdBrightSpots = 1;
    NSInteger middleThirdBrightSpots = 1;
    NSInteger rightThirdBrightSpots = 1;
    NSInteger leftThirdBoundary = (NSInteger)(width/3.0);
    NSInteger middleThirdBoundary = (NSInteger)((2.0*width)/3.0);

    for(int row = 0; (row + skipPixels) < height; row += skipPixels){
        for(int column = 0; (column + skipPixels) < width; column += skipPixels){
            int byteIndex = (bytesPerRow * row) + (column * bytesPerPixel);
            int red = (int)(imageRawData[byteIndex]);
            int green = (int)(imageRawData[byteIndex + 1]);
            int blue = (int)(imageRawData[byteIndex + 2]);
            if((row == 100) && (column == 100)){
                //debug output for a single sample pixel
                NSLog(@"red %i, green %i, blue %i ", red, green, blue);
            }
            if(((red > kRedLowerLimit) && (red <= kRedUpperLimit)) &&
               ((green > kGreenLowerLimit) && (green <= kGreenUpperLimit)) &&
               ((blue > kBlueLowerLimit) && (blue <= kBlueUpperLimit))){
                if(column > middleThirdBoundary){
                    rightThirdBrightSpots++;
                }
                else if(column > leftThirdBoundary){
                    middleThirdBrightSpots++;
                }
                else{
                    leftThirdBrightSpots++;
                }
                brightPoints++;
            }
            totalPoints++;
        }
    }
    free(imageRawData);

    FollowingDirection whichWay = kFollowingNowhere;
    //ratios
    float brightRatio = ((float)brightPoints)/totalPoints;
    float leftRatio = ((float)leftThirdBrightSpots)/totalPoints;
    float rightRatio = ((float)rightThirdBrightSpots)/totalPoints;
    float centerRatio = ((float)middleThirdBrightSpots)/totalPoints;

    //are there too many bright spots overall
    if(brightRatio > 0.7){
        whichWay = kFollowingNowhere; //for clarity
    }
    //or too few, ie no target
    else if(brightRatio < 0.1){
        whichWay = kFollowingNowhere; //for clarity
    }
    //what third has the most bright spots?
    //these threshold values were experimentally determined
    else if((leftRatio > rightRatio) && (leftRatio > centerRatio)){ //left
        if(leftRatio < 0.165){
            whichWay = kFollowingForwardRight;
        }
        else if(leftRatio > 0.18){
            whichWay = kFollowingBackLeft;
        }
    }
    else if((rightRatio > leftRatio) && (rightRatio > centerRatio)){ //right
        if(rightRatio < 0.165){
            whichWay = kFollowingForwardLeft;
        }
        else if(rightRatio > 0.18){
            whichWay = kFollowingBackRight;
        }
    }
    else{ //center
        if(centerRatio < 0.165){
            whichWay = kFollowingForwardStraight;
        }
        else if(centerRatio > 0.18){
            whichWay = kFollowingBackStraight;
        }
    }
    return whichWay;
}

Incorporation into EyesBot Driver

This functionality isn't yet incorporated into the production version of EyesBot Driver, but it is one of the candidates for version 1.2. We'll be releasing v1.1 in the near future, and v1.0 is in the App Store now.
