So, I am working with an FRC robotics team and would like to use computer vision to make some actions in this year's competition semi-autonomous. We have used vision successfully in the past to find well-defined targets.
This year though I think simple ‘find the targets and locate their centers’ isn’t really sufficient.
The ‘targets’ this year are lines on the floor that run up to objects on the playing field; the idea is ultimately to align with these markers and drive straight down them.
What makes this a bit challenging (for me at least) is that the objects/walls are the same color as the lines.
What I would like to do is deconstruct the image into a vector describing the guide line plus an anchor point (two anchor points on the line would work equally well). I can handle all the subsequent processing to calculate how to move the robot, but I need to extract those anchors from the processed video frames.
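To make the output I'm after concrete, here is a rough sketch of how I imagine recovering a line-plus-anchor from a binary mask, using PCA on the ‘on’ pixel coordinates (plain NumPy here; names and the PCA choice are just my assumptions, and something like OpenCV's `fitLine` would presumably do the same job):

```python
import numpy as np

def guide_line_from_mask(mask):
    """Fit the guide line through the 'on' pixels of a binary mask.

    Returns (anchor, direction): a point on the line (the pixel
    centroid) and a unit vector along it, taken from the principal
    axis of the pixel cloud via SVD.
    """
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    anchor = pts.mean(axis=0)
    # First right-singular vector = dominant direction of the pixels
    _, _, vt = np.linalg.svd(pts - anchor)
    direction = vt[0]
    return anchor, direction
```

This only gives the centroid as an anchor, though; the interesting anchor, where the line meets the wall, is exactly the part I don't know how to isolate when wall and line are the same color.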
I have processed the images by Gaussian blurring and then thresholding, which gives a clean binary image to work from for the line/anchor extraction.
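For reference, that preprocessing step amounts to something like the following (sketched in plain NumPy with a separable Gaussian rather than OpenCV's `GaussianBlur`/`threshold`; the kernel size, sigma, and threshold are placeholder values I would tune against real footage):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalized so it sums to 1
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_and_threshold(gray, thresh=128, size=5, sigma=1.0):
    """Separable Gaussian blur followed by a binary threshold.

    gray: 2-D uint8 grayscale frame
    returns: 2-D uint8 mask of 0/255 values
    """
    k = gaussian_kernel(size, sigma)
    img = gray.astype(np.float32)
    # Convolve each row, then each column (Gaussian is separable)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    return np.where(img >= thresh, 255, 0).astype(np.uint8)
```

This reliably separates the bright tape/wall regions from the floor; the problem starts after this point.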
I am basically looking to find the red circles.
Note that sometimes the ‘wall’ region at the top will not be as well defined, but the guide line should always be in high contrast against the floor and fairly clear.
Any thoughts or ideas on approaches to try?