Mrcal – Camera Calibrations and More (secretsauce.net)
87 points by pabs3 on March 2, 2021 | 25 comments



Also check out the same guy's numpy library - https://github.com/dkogan/numpysane


This is awesome. It always bothered me that if you wanted to calibrate a camera, 99.9999% of the websites or tutorials you find are using OpenCV or Matlab...


It's worth noting that there are pretty trivial ffmpeg solutions with the lenscorrection filter for people who don't need any accuracy and just want to stop looking like a distorted alien. (https://www.danielplayfaircal.com/blogging/ffmpeg/lensfun/v3...)

Dima's solutions in mrcal are far more accurate, and can even accommodate manufacturing imperfections with localized warping and other less-than-perfect real-world realities. But when such things don't matter (say, correcting a webcam for your Zoom calls rather than landing on Mars), there's always lenscorrection.

If you're trying to set up a really basic pipeline for lens correction (more like adjustment) on Linux that works with the web browser and other apps, here's a gross simplification (a Python sketch of the same pipeline follows the steps):

1. Create a virtual V4L2 device with the v4l2loopback module, setting exclusive_caps=1 (example: modprobe v4l2loopback devices=1 exclusive_caps=1)

2. Run something like ffmpeg -i /dev/video0 -vf "lenscorrection=cx=0.5:cy=0.5:k1=-0.015:k2=-0.072" -f v4l2 /dev/video2, where the first device is the real camera and the second is the virtual one (run v4l2-ctl --list-devices to see the devices)

3. Point Teams or whatever at the virtual device, /dev/video2 in this example.
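
For reference, here's the same pipeline as a small Python sketch. This is just a wrapper around the commands above, not mrcal code; the device paths and filter coefficients are the example values from the steps, so adjust them for your setup.

    # Relay the real webcam through ffmpeg's lenscorrection filter into
    # the v4l2loopback device. Assumes the module from step 1 is loaded.
    import subprocess

    REAL_CAM = "/dev/video0"     # physical webcam
    VIRTUAL_CAM = "/dev/video2"  # virtual v4l2loopback device

    subprocess.run([
        "ffmpeg", "-i", REAL_CAM,
        "-vf", "lenscorrection=cx=0.5:cy=0.5:k1=-0.015:k2=-0.072",
        "-f", "v4l2", VIRTUAL_CAM,
    ], check=True)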

You can use this same trick to force resolutions and compression ratios, or to rotate and mirror webcams; it's pretty useful.


Panotools/Hugin is pretty good at calibrating lenses when building a panorama. I'm sure its model is much simpler than this, but I can generally create a large panorama with double or triple digit number of constituent images, and it will work out the field of view and distortion parameters, along with the distortion centre. It's good enough that the error in position of objects between overlapping images is no more than a single pixel. It will also calibrate the vignetting and the colour response of the camera sensor.


I adore Hugin, but it can just absolutely fall off a cliff in some cases. I was recently trying to get it to figure out the alignment of four separate cameras. I gave it a bunch of images from them, with all sorts of nice calibration targets, but it just kept churning out complete garbage, and I couldn't get them aligned no matter how precisely I went in and laid out control points.

I went and fed it some real-world images that should've been near-worthless for alignment (blue sky, empty fields) and Hugin immediately produced a great alignment with no effort.

So... there's definitely room for some alternative software in this space. I'm excited to give mrcal a go when I've got the time.


There are lots of tools that do this. The primary thing mrcal does that the others don't is report uncertainty and accuracy numbers. Hugin is great at making panoramas that look nice, but not so much at making a map.


The uncertainty is indicated, though, by the error in the position of each control point. If you have a decent selection of control points, then you have a sampling of the calibration error.

Agreed that its purpose is not to make a map. It is designed to handle the case where the camera is in the same position for every picture.


That's the thing about the non-mrcal tools: they don't give you good feedback. The pixel errors do NOT indicate uncertainties. You can trivially prove this to yourself: throw away lots of your data and re-solve. The pixel errors then decrease. But we can all agree that throwing away data gives you worse solves, right?
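
(If you want to see this effect in isolation, here's a toy numpy sketch, unrelated to mrcal itself: fit an over-parameterized model, discard most of the data, and watch the residuals shrink even though the fit got worse.)

    # Toy demonstration: per-point residuals shrink when you throw away
    # data, because the model overfits whatever is left. Not mrcal code.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)

    def rms_residual(xs, ys):
        coeffs = np.polyfit(xs, ys, deg=8)  # 9-parameter model
        return np.sqrt(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

    print(rms_residual(x, y))              # all 200 points: ~noise level
    print(rms_residual(x[::17], y[::17]))  # only 12 points: much smaller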

mrcal actually gives you real uncertainties, so you can get proper feedback about what you're doing. There's a whole lot to say on this topic. If you're interested, read the docs. The "tour of mrcal" page gives a good overview.


Can you then load the calibration into OpenCV somehow?


Yeah, it supports OpenCV models, so just take the numbers and feed them to OpenCV. Call cameramodel.intrinsics() to get at them: http://mrcal.secretsauce.net/mrcal-python-api-reference.html...
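
A rough sketch of what that looks like, assuming the model was solved with one of the LENSMODEL_OPENCV* lens models (per the docs, intrinsics() returns the lens model name plus a parameter array starting with fx, fy, cx, cy, followed by the OpenCV distortion coefficients; the filenames here are made up):

    # Pull intrinsics out of a mrcal model and hand them to OpenCV.
    import cv2
    import numpy as np
    import mrcal

    model = mrcal.cameramodel("camera-0.cameramodel")  # hypothetical file
    lensmodel, intrinsics = model.intrinsics()
    assert lensmodel.startswith("LENSMODEL_OPENCV")

    fx, fy, cx, cy = intrinsics[:4]
    camera_matrix = np.array([[fx, 0., cx],
                              [0., fy, cy],
                              [0., 0., 1.]])
    dist_coeffs = intrinsics[4:]

    image = cv2.imread("frame.jpg")                    # hypothetical image
    undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)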


Yeah, you can just load the intrinsics values from a numpy array


Does anyone know if this has been useful for Raspberry Pi camera calibration or stereo depth sensing on the Pi?


I've never used this on those boards, but there's no reason it wouldn't work. On some level the RPi is just a computer.


I'll loan you one if you want to try it out. Heck, I'll just buy you one as a gift; they're cheap enough. Text me and I'll get it mailed to you.


Been reading the docs; very impressive amount of detail! However, I had some issues accessing them from work, something about a DNS-malware warning from my IT department. Has anyone else had similar issues?


edit: seems to work now, don't know what's changed


This is wonderful, thank you. I would have loved to use this years ago when I was working on multi-camera systems requiring very accurate calibration parameters.


Can you tell me more about what you were doing with multi-camera systems? :)


Can discuss privately. Please PM or email me, happy to chat.


This is pretty cool! Quite application-specific, though. I used to work for a camera manufacturer, so I understand what it is good for.


This is for stereo vision, structure-from-motion computations and so on (i.e. figuring out the geometry of what you're looking at from photos).

Every lens that is manufactured is slightly different, and has a slightly different ray-in-space to pixel mapping that these tools help you obtain.
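
(The simplest version of that mapping is the distortion-free pinhole model sketched below; calibration estimates the fx, fy, cx, cy constants, and the richer lens models add distortion terms on top. The numbers here are made up.)

    # Minimal pinhole projection: 3D point in camera coordinates -> pixel.
    import numpy as np

    def project_pinhole(p, fx, fy, cx, cy):
        x, y, z = p
        return np.array([fx * x / z + cx,
                         fy * y / z + cy])

    # Example values only; a calibration tool estimates these for you
    print(project_pinhole(np.array([0.1, -0.05, 2.0]),
                          fx=1000., fy=1000., cx=640., cy=360.))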


Care to elaborate?


All lenses have some innate distortion. In good lenses it can sometimes be hard to see, but it's there. Manufacturers use tools (maybe like this one?) to analyze their lenses during design, as well as after production for consumer models, and to create corrective models.
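
(The classic corrective model is a polynomial in the distance from the image center; here's a toy sketch with made-up coefficients:)

    # Classic radial distortion: the observed point is the ideal point
    # scaled by a polynomial in r^2. Correction inverts this mapping.
    def distort_radial(x, y, k1=-0.2, k2=0.05):
        r2 = x * x + y * y  # squared distance from the image center
        scale = 1 + k1 * r2 + k2 * r2 ** 2
        return x * scale, y * scale

    print(distort_radial(0.5, 0.3))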

So when I take RAW files from my Canon DSLR and load them through compatible raw-processing software, it knows the lens and camera used to take the picture, and it will correct those distortions and sometimes deal with chromatic aberration etc.


This person said it better than I.

We would write pipeline modules that could calibrate for specific lenses (think chromatic aberration and vignetting).
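
(Vignetting correction in particular is conceptually simple: fit a radial brightness-falloff model during calibration, then multiply each pixel by its inverse. A toy sketch; the falloff coefficient is made up and would really come from calibration:)

    # Toy vignetting correction: brighten pixels by the inverse of a
    # radial falloff model fit during calibration.
    import numpy as np

    def correct_vignetting(img, a=0.3):
        h, w = img.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        # normalized squared distance from the image center
        r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) \
             / ((w / 2) ** 2 + (h / 2) ** 2)
        gain = 1 + a * r2  # inverse of the modeled falloff
        return np.clip(img * gain[..., None], 0, 255).astype(img.dtype)

    frame = np.full((480, 640, 3), 128, dtype=np.uint8)
    print(correct_vignetting(frame)[240, 320])  # center: unchanged
    print(correct_vignetting(frame)[0, 0])      # corner: brightened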


I wish this had existed many moons ago when I was doing stereo vision myself. Oh how I wish.



