


How To Get Stereo Vision With A Fisheye Camera

The goal of this series of articles is to create a compact indoor robot that can navigate using stereo vision. As a platform for it, we'll use a small Dagu RP5 tracked chassis that we have. Here's how it looks next to the StereoPi.

Unlike planes and copters, it won't be able to fly away. At the same time, it won't cause any damage to furniture or make a mess during tests, like its heavier counterparts can.

Since we want the StereoPi to perform all robot control functions at once, we need to take into account all the 'weak spots' of Raspberry Pi performance in this case. We'll dedicate special article sections, and even whole articles, in this series to these 'tweaks' and 'hacks'. For example, we will use the stereo cameras in scanning lidar mode, and we'll also cleverly bypass some hardware limitations to increase our solution's FPS. But each thing in its own time!

Wide-angle cameras

I remember very well one of the very first experiments our team did with remote controlling a model car equipped with a camera. We did the tests in our office: the car was in one room, and I was in another one next door. I connected and saw the picture from the camera. I hit 'gas', but the car jerked weirdly and slowly 'drifted' to the side. Not understanding what was going on, I decided to back up a little. Then everything fell into place. On the left side of the camera image, I saw the leg of an office chair! Our crawler was pushing against it with the left side of its body. But I wasn't able to see this through the camera. Why? Because the camera had a regular angle of view (about 50 degrees). Almost 10 years have passed since then; we have done a lot of projects for radio modelers, and have learned well: the bigger the camera angle, the better. We prefer to use optics with an angle of 160 to 220 degrees.

Let's look at a simple example — we'll shoot the same scene from two Raspberry Pi cameras positioned side by side. One of them is a regular V2 camera with an angle of 62.2 degrees (according to its documentation), and the other one is a 160 degree Waveshare G wide-angle camera.
We quickly built this simple contraption for the photoshoot:

V2 and waveshare cameras together

Let's look at the difference between the images from the two cameras:

62 degree and 160 degree shots comparison

Obviously, the wide-angle camera sees SIGNIFICANTLY more than the regular one. But you can use cameras with 200-degree optics, and then the field of view will be even wider! For example, the Waveshare M's that we used in our experiments with 360 degree photos.

Hence, the first conclusion — our robot needs wide-angle optics. Therefore, we'll use the Waveshare G cameras (which, by the way, are included in the Deluxe StereoPi kits).

Software

For this series of articles we took the codebase of our stereopi-tutorial examples, but made significant changes to it. If you haven't seen the previous article yet, I recommend you take a look, since the codebase and approaches are taken from there.

The goal is to get a spatial map of the surroundings so the robot can orient itself. For this you need to build a depth map. But in order to do this it's important to calibrate the cameras (since it's impossible to install them absolutely parallel). For calibration, you need to take a series of shots and use them to perform this very calibration. For each of these steps, we developed a separate script, which I'll describe below.

Here is our GitHub repository: stereopi-fisheye-robot.

1. Test script — 1_test.py

This script is used to test system health and performance. It hasn't changed much since the previous tutorial. To exit the script, press the Q button on the keyboard. On exit, the script saves the last frame as an image. It may come in handy later for fine tuning your depth map. Therefore, before pressing Q, turn the stereo camera away from your face (where it's usually pointed during first tests), and point it at a scene with objects at different distances.
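To make that concrete, here's a minimal sketch of the exit-and-save behavior (illustrative only; get_frame and the filename are placeholders, not the actual 1_test.py code):

```python
# Show the stereo preview until 'q' is pressed, then save the last frame.
import cv2

def preview_loop(get_frame):
    """get_frame() should return the current BGR stereo frame."""
    frame = None
    while True:
        frame = get_frame()
        cv2.imshow("Stereo preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    if frame is not None:
        cv2.imwrite("last_frame.png", frame)  # handy later for depth tuning
    cv2.destroyAllWindows()
```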

2. Script for shooting a series of frames for calibration — 2_chess_cycle.py

This script is also similar to the one from the first series. Shooting the calibration series has its own nuances, but you still won't read about them until you run into problems with calibration, so we'll describe them later. ☺
The script will capture and save 30 images of the calibration chessboard, which the system will use for calibration.
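For orientation, here's a hedged sketch of such a capture cycle (the pause length and filenames are assumptions, not necessarily the script's exact values):

```python
# Illustrative capture cycle: take 30 shots of the chessboard, pausing
# between shots so the board can be moved to a new pose.
import time
import cv2

TOTAL_PHOTOS = 30
PAUSE_S = 5  # assumed interval; adjust to taste

def shoot_series(get_frame):
    """get_frame() should return the current stereo frame as a BGR array."""
    for i in range(TOTAL_PHOTOS):
        time.sleep(PAUSE_S)  # time to reposition the chessboard
        cv2.imwrite("scene_{:02d}.png".format(i + 1), get_frame())
        print("Saved shot", i + 1, "of", TOTAL_PHOTOS)
```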

3. Cutting images into pairs — 3_pairs_cut.py

This script cuts the frames from the last stage into left and right photos. This simplifies the work and further debugging. The script's logic has not changed at all from the previous tutorial either.
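Since a side-by-side stereo frame is just a NumPy array, the cut itself boils down to a pair of slices; a minimal sketch (filenames are illustrative):

```python
# Cut a side-by-side stereo frame into its left and right halves.
import cv2

img = cv2.imread("scene_01.png")   # hypothetical input from stage 2
h, w = img.shape[:2]
left, right = img[:, :w // 2], img[:, w // 2:]
cv2.imwrite("left_01.png", left)
cv2.imwrite("right_01.png", right)
```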

4. Let's calibrate! — 4_calibration_fisheye.py

StereoPi calibration Kung Fu

'What's the problem? Just run the calibration from the previous example — and Bob's your uncle!', you might think. Not so fast. We're done with the short script descriptions, and now we move on to the TL;DR part.

StereoPi cold water

Let's try to dive a little deeper into the cold water. The calibration script from our previous stereopi-tutorial is designed to work with ordinary cameras. If you run images from wide-angle cams through it, at best you'll get images looking normal in the center and distorted at the edges, and at worst you'll get incomprehensible avant-garde-style drawings.

The reason is simple: there are two separate camera models in the OpenCV libraries — the regular ('pinhole') and the wide-angle ('fisheye'). The latter is distinguished by the presence of two additional parameters in its mathematical model, namely the matrices K and D. Are you scared yet? Don't worry, we've already added these parameters to the current version of the script. They will be calculated automatically.
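For the curious, here's roughly what that single-camera step looks like with OpenCV's fisheye model — a hedged sketch, not the script verbatim (objpoints/imgpoints are the chessboard points collected earlier):

```python
# Single-camera fisheye calibration: estimates the K and D matrices.
import cv2
import numpy as np

def calibrate_fisheye(objpoints, imgpoints, image_size):
    """objpoints: list of (N,1,3) float64 arrays; imgpoints: (N,1,2) arrays."""
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6)
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        objpoints, imgpoints, image_size, K, D,
        flags=flags, criteria=criteria)
    return rms, K, D
```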

deep dive

Let's dive a little deeper for a moment, for the sake of the most inquisitive people. In the previous version of our scripts, we actively used external stereo vision calibration libraries, which did a lot of work for us 'under the hood'. In this edition of the scripts, we pulled all the magic right into our script code. Yes, they have become a bit heavier and somewhat harder to analyze, but now all the stuff is in front of you, and the scripts have no dependencies on third-party libraries. If you've read up to this point in this paragraph, then you are one of those who will feel compelled to play with all the available parameters. To finish this paragraph, I'll share with you one of the most successful analyses of the practical use of the wide-angle camera model in Python:

Calibrate fisheye lens using OpenCV — part 1

Calibrate fisheye lens using OpenCV — role 2

Now let's return to our muttons. If you look at the script code, you'll find the following logic (simplified):

– First, we calibrate each camera separately to remove barrel distortion. At this stage, the picture from the left and right cameras is 'straightened'.

– The second step is to calibrate and rectify the stereo pair, passing in the parameters discovered for each camera (a condensed sketch of this step follows).
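Here's roughly how that second step maps onto OpenCV's fisheye API — a simplified sketch under the assumption that K1/D1 and K2/D2 come from the per-camera step above:

```python
# Stereo calibration and rectification with the fisheye model.
import cv2

def rectify_stereo(objpoints, imgpoints_l, imgpoints_r,
                   K1, D1, K2, D2, image_size):
    # Find the rotation R and translation T between the two cameras,
    # keeping the per-camera intrinsics fixed.
    rms, K1, D1, K2, D2, R, T = cv2.fisheye.stereoCalibrate(
        objpoints, imgpoints_l, imgpoints_r,
        K1, D1, K2, D2, image_size,
        flags=cv2.fisheye.CALIB_FIX_INTRINSIC)
    # Compute rectification transforms so that epipolar lines become
    # horizontal and aligned between the two images.
    R1, R2, P1, P2, Q = cv2.fisheye.stereoRectify(
        K1, D1, K2, D2, image_size, R, T,
        flags=cv2.CALIB_ZERO_DISPARITY)
    return R1, R2, P1, P2, Q
```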

If you set all the variables in the # Visualization options section to True, the script will show you all the steps, starting with the search for the chessboard, displaying the 'corrected' pictures from each camera separately, and ending with the rectified stereo pair.

Well, now you seem to know kung fu already, don't you? Where's that depth map?

Again, not so fast. We are solving an applied problem here, so I just have to tell you about bypassing two more traps.

Trap 1 — capturing stereo video at low resolutions

As you may recall, for ease of computation we work with a resolution of 320x240 (we'll cover the issue of increasing it in the following articles). But if you capture the stereo pair straight away at the 640x240 resolution needed, the picture will have glitches (offsets and green bars). This is a bug in the implementation of PiCamera, and it can be bypassed by capturing a picture at twice the resolution (1280x480), and then reducing it by half using the GPU (with no load on the main processor). This solution was already implemented in past scripts, but I'm highlighting it for those who are just getting acquainted with the topic.
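In picamera terms the workaround looks roughly like this (a sketch assuming a StereoPi, where the Compute Module build of picamera exposes stereo_mode; not the scripts verbatim):

```python
# Capture at 1280x480 and let the GPU downscale to 640x240 on delivery.
import numpy as np
import picamera

cam = picamera.PiCamera(stereo_mode='side-by-side', stereo_decimate=False)
cam.resolution = (1280, 480)   # twice the needed 640x240
cam.framerate = 20

frame = np.zeros((240, 640, 4), dtype=np.uint8)
# resize=(640, 240) makes the GPU halve the frame before handing it over,
# avoiding the low-resolution glitches without loading the CPU.
cam.capture(frame, format='bgra', resize=(640, 240), use_video_port=True)
```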

Trap 2. Suspense…

The second trap is more interesting, and users of our stereopi-tutorial scripts often fall into it, even with regular narrow-angle cameras. The bottom line is that you can get bad or very bad calibration results from seemingly good input images. When using the old logic, you can get results like these, for instance:

Wrong fisheye lens calibration with non-fisheye logic

How do you like the beautiful 'curved' distortions in the upper left picture? This is just a small manifestation of the problem.

The reason is quite simple. Since our working resolution when building the depth map is 320x240, we used it in all scripts, including the calibration ones. But at such small resolutions, the algorithm for finding the chessboard corners often makes mistakes — it either doesn't find the corners at all, or finds them in the wrong places and mismatches their sequence. The problem is exacerbated when using wide-angle cameras. Therefore, we decided to use a lifehack in these scripts: use higher resolution images for calibration, and then apply the calibration results to smaller images!

Despite the simplicity of the idea, its implementation required some serious digging into the code. Yes, the wide-angle cameras' K and D matrices don't depend on resolution. But in almost all the calibration and rectification functions you have to pass the image resolution, and simply replacing it with the one you need completely breaks the results.

Finally, we chose not the prettiest, but an understandable and manageable way around it. During calibration the scripts believe that they are working with a resolution of 320x240. But at the time of the chessboard corner search we cheat, namely (a short code sketch follows the list):

– to search for the corners' coordinates, we feed in images at twice the resolution.

– after finding the corners' coordinates, we reduce all X and Y coordinates by half.

– the halved coordinates are passed on to processing, and the substitution goes unnoticed.
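In code, the trick boils down to something like this (a rough sketch, not the authors' exact implementation; the board size is an assumption):

```python
# Find chessboard corners at 2x resolution, then halve the coordinates
# so the rest of the pipeline still believes it works at 320x240.
import cv2

CHESSBOARD = (9, 6)  # inner corners of the board; assumed value

def find_corners_downscaled(gray_2x):
    """gray_2x: a 640x480 grayscale frame, twice the working resolution."""
    found, corners = cv2.findChessboardCorners(
        gray_2x, CHESSBOARD,
        flags=cv2.CALIB_CB_ADAPTIVE_THRESH | cv2.CALIB_CB_NORMALIZE_IMAGE)
    if not found:
        return None
    # Refine at the high resolution, where detection is actually reliable.
    corners = cv2.cornerSubPix(
        gray_2x, corners, (3, 3), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01))
    return corners * 0.5  # halve X and Y: the substitution goes unnoticed
```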

In practice, this made it possible not only to get correct corner coordinates without any errors, but also to correctly find corners in those images where, at the smaller resolution, corners were not being found at all.

After applying this magic, the results of our calibration started to look like this:

Correct fisheye calibration results

What does this mean? It means that if you need extremely accurate calibration, you can take pictures at a resolution of, for example, 4 Mpix, calibrate on them, and then apply this information at lower resolutions. Of course, these high resolutions must be multiples of your working resolution. Thus, for a 640x240 working resolution of a stereo pair, you can calibrate using pictures at 1280x480 and 1920x720. If you have a V1 (ov5647) sensor with a native resolution of 2592x1944, then the maximum usable resolution for calibration will be 2560x1920, which is 4.9 Mpix.

It's not time to relax yet; we're not done. Remember the description of script number 2 and my comment about the nuances that no one reads about? Well, if you don't read about them and don't take them into account, then even following all the steps in our scripts very carefully will give you only mediocre results. All sorts of funny things like 'glue the chessboard onto a flat surface', 'be generous with lighting' and other nonsense which turns out to be (suddenly!) an unexpected salvation after three days of unsuccessful calibrations. I won't repeat what has already been perfectly described by many people, but I'll provide a link to one of the most concise and competent descriptions of these nuances. Check out the first two answers to this question on Stack Overflow. Look through, add to bookmarks, and then read if nothing else helps ☺

5. Setting the depth map parameters — 5_dm_tune.py

Yes, we're finally done with the long description of script number 4. You probably already started thinking that I decided to troll you. But no, there really were a lot of important points there.

The fifth script has undergone only minimal changes, mostly in how it rectifies the image in accordance with the new code for fisheye cameras. To make this section less tiresome, I'll post here a short video from our first article, describing how it works:
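For reference, here's the skeleton that such tuning sits on top of — a minimal sketch (the StereoBM values below are placeholders for exactly the parameters the script lets you adjust):

```python
# Build rectification maps once, then remap each frame and run StereoBM.
import cv2

def make_maps(K, D, R, P, size):
    """size is the working resolution, e.g. (320, 240)."""
    return cv2.fisheye.initUndistortRectifyMap(K, D, R, P, size, cv2.CV_16SC2)

# numDisparities and blockSize are placeholders: these are the kinds of
# knobs a tuner script lets you adjust interactively.
sbm = cv2.StereoBM_create(numDisparities=64, blockSize=15)

def depth_map(left_gray, right_gray, maps_l, maps_r):
    l = cv2.remap(left_gray, maps_l[0], maps_l[1], cv2.INTER_LINEAR)
    r = cv2.remap(right_gray, maps_r[0], maps_r[1], cv2.INTER_LINEAR)
    return sbm.compute(l, r)  # expects 8-bit single-channel inputs
```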

6. Depth map with video — 6_dm_video.py

Honestly, we didn't plan to post this script in the very first article of this series. But then we thought that, in anticipation of the following articles, someone would like to play around with the results already available for wide-angle cameras. So we ended up simply adapting the script to the updated logic, and you can now use it.

If you decide to repeat all the above scripts on your StereoPi and get to the sixth one, the result will puzzle you a bit. The code can be easily adjusted so that the result is similar to that of our stereopi-tutorial. And those who decide not to adjust the code will get a hint about the direction in which we'll move on from here in the continuation of our series of articles.

See you in our next publications!

Github repo: stereopi-fisheye-robot

Our wiki lives here: wiki.stereopi.com

And the forum is here: forum.stereopi.com

P.S.

Oh, and if you run the sixth script, look at the displayed depth map calculation time for each frame, and translate it into FPS. We leave the conclusions to you, dear reader!

Special thanks:

  • Adrian and pyimagesearch.com
  • RPi-tankbot project: https://github.com/Kheiden/RPi-tankbot
  • Rakali project: https://github.com/sthysel/rakali

Source: https://stereopi.com/blog/robot-stereopi-part-1-fisheye-cameras
