OpenCV Vision help

So, I am working with an FRC robotics team and would like to use computer vision to make some actions in this year’s competition semi-autonomous. We have used vision quite successfully in the past, looking for well-defined targets.
This year though I think simple ‘find the targets and locate their centers’ isn’t really sufficient.
The ‘targets’ this year that we want to use are lines on the floor that run up to objects on the play field and the idea is ultimately to align with these markers and drive straight down them.
What makes this a bit challenging (for me at least) is that the objects/walls are the same color as the lines.
What I would like to do is deconstruct the image into a vector describing the guide line plus an anchor point, or alternatively two anchor points along the guide. I can handle all the subsequent processing to calculate how to move the robot, but I need to get the anchors from the processed video images.
I have processed images by Gaussian blurring then thresholding to get a clear image to process further for line/anchor extraction.
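For reference, a minimal sketch of that blur-and-threshold step in Python/OpenCV (the filename, kernel size, and use of Otsu here are placeholders rather than our actual pipeline values):

```python
import cv2

# Hedged sketch of the blur-then-threshold step; filename and parameter
# values are placeholders to tune on real field images.
img = cv2.imread("field_image.png", cv2.IMREAD_GRAYSCALE)

# Gaussian blur to smooth out carpet texture and sensor noise.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Threshold so the bright tape/wall separates from the darker floor.
# Otsu picks the split automatically; a fixed value also works if lighting is stable.
_, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

cv2.imwrite("binary.png", binary)
```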
I am basically looking to find the red circles.

Note, sometimes the ‘wall’ region at the top will not be as well defined but the guide line should always be high contrast to the floor and fairly clear.

Any thoughts or ideas on approaches to try?

@Team_VCC and @thespacemaker any of our resident coders and hackers interested in chiming in?

can you post some unprocessed images?


Why are you using the lines on the floor instead of the reflectors above the hatch ports?

Also have you looked at using a Limelight?

We are using one on the team I mentor (2714 BBQ) and our programmers love it.

Here is a video of our autonomous running at the Austin District event. And yes, this is fully autonomous; there is no driver interaction during the sandstorm period (the first 30 seconds of the match). Once teleop starts, we use the Limelight to help the drivers align the robot to place the hatches and cargo.

Edit: What team are you mentoring?

We don’t use the retro-markers since they don’t really give accurate vector information. If we just wanted to drive towards the center then they would work (and we might try that if we can’t get the line follower working), but that would require a much more accurate initial starting position (i.e. the robot already close to the line).
We have used Pixy in the past and also have Pis, which work pretty well for us, again if only looking for objects. We want to expand the feature-extraction capabilities.
The Limelight is a) expensive and b) I’m not 100% sure it adds more than what the Pixy/Pi/JeVois provide, which we already have experience with.

Just really throwing ideas around to better assist the drivers and allow them not to have to be as accurate (and ultimately to allow fully auto for panels).

I am a mentor on team 5242.

Here are some fabricated images I am using to play around with…
[attached test images: image001m, KOVid1, Synthesis1, Synthesis2, Synthesis3]

I am not a programming mentor so I don’t know the particulars of how our team has implemented it, but I do know that we are using the Limelight to square ourselves to the target as well as to center ourselves.

http://docs.limelightvision.io/en/latest/theory.html

Ya, it is definitely not cheap but it saved us a ton of time. The programming students and mentors were working on rolling their own vision system with OpenCV at the beginning of the season, but they were not able to get it to work well by week 4, so we got a Limelight and were up and running in 2 days.

I honestly can’t say enough good things about the Limelight.

Y’all’s robot looks good! What are y’all using to do hatch panels?

For hatch panels we really just have a simple hook that we go in through the middle with and then lift up. It is on the front of our ball intake and, to be honest, looks like it was just stuck on at the last minute, but it was designed that way :slight_smile: It works really well as long as we can align fairly accurately, and it is very simple.
To the sides of the hook are slight pushers, which we might spring-load a little, but these are purely passive. These may also help by meaning we don’t have to be as parallel to the rocket/ship and/or as well aligned in general, so the retro-markers may be sufficient. We haven’t done nearly enough practice yet though :frowning:

Ya, driver practice is definitely critical. Being able to set up at least a 1/4 field or 1/2 field makes a huge difference. We have a ~1/2 field.

We have 1/2 field too.
We were also hoping to make it to your open house a few weeks back but had a critical failure (we lifted the rocket up and broke a chain when the hatch lifter got caught!!!). We certainly appreciate the invitation though.

Oh no! Well I am glad it sounds like the robot is working well now. What district events will y’all be at?

I am an FRC alumnus as are most of the mentors on 2714 so we really do love helping the students create these awesome robots.

We are going to Greenville and to the event at Conrad.

Not really convinced the robot is doing nearly as well as we hope!!! Of course practice would help, but our second bot is not completed yet :frowning:

I will be volunteering at the Conrad event. I will stop by your team’s pit and say hi.


Looking forward to it :slight_smile:

In order to extract the slope of the lines you have to use Canny edge detection and then apply a Hough transform… after that you can group the detected segments by slope thresholds and derive a true slope line.

The idea would be to only consider lines above or below a certain slope, then average them into a cumulative line using the min and max lines from the Hough transform.
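A minimal sketch of that pipeline in Python/OpenCV, assuming a thresholded input image; all filenames and parameter values are placeholders to tune, and here I use cv2.fitLine to combine the surviving segments rather than the min/max averaging described above:

```python
import cv2
import numpy as np

# Sketch of Canny -> probabilistic Hough -> slope grouping.
# Filenames and thresholds are placeholders, not tuned values.
binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)

edges = cv2.Canny(binary, 50, 150)

# HoughLinesP returns individual segments as (x1, y1, x2, y2).
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=30, maxLineGap=20)

# Keep only roughly vertical segments, i.e. candidates for the guide line
# running away from the robot; the slope cutoff of 2.0 is arbitrary.
steep = []
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 or abs(dy / dx) > 2.0:
            steep.append((x1, y1, x2, y2))

if steep:
    # Fit one line through all endpoints of the surviving segments to get
    # a single direction vector and a point on the guide line.
    pts = np.array(steep, dtype=np.float32).reshape(-1, 2)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    print("guide line direction:", (vx, vy), "through point:", (x0, y0))
```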

This is a good pipeline for lane detection using a linear model

Sorry for the delay getting back. Been sick unfortunately :frowning:
Anyhow, there is a great writeup for this project here…

I was using this as a guide, but my lack of OpenCV experience and lack of Python experience mean it is taking a while to fully comprehend the subsequent processing after the HoughLinesP transform. I was getting more broken line segments than I would have expected, probably because I didn’t get as far as filtering the line arrays. I will look at this.
I was also really just wondering if this was even the right approach in general to find the nodes I highlighted, and/or if there is a ‘better’ way, given that I don’t really care about multiple lines or the lines per se. Possibly corner detection, but again I lack experience with how OpenCV works in detail and what I can truly expect.
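For the corner-detection idea, this is roughly what I had in mind, assuming the thresholded image from earlier; goodFeaturesToTrack and the parameter values here are just one possible way to try it, not something I have validated:

```python
import cv2

# Rough sketch: look for strong corners in the thresholded image, on the
# theory that the points where the guide line meets the wall region show
# up as corners. All parameters are guesses.
binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)

corners = cv2.goodFeaturesToTrack(binary, maxCorners=10,
                                  qualityLevel=0.1, minDistance=20)

if corners is not None:
    for x, y in corners.reshape(-1, 2):
        print("candidate anchor at", (int(x), int(y)))
```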
I do appreciate your thoughts and input. It seems I am heading in the right direction at least :slight_smile:
BR,
Steve