Lens shading control #470
NB I've not yet added the new datatypes for e.g. lens shading. However, I have wrapped analog and digital gain so that you can set them.
Wrapped the videocore shared memory functions needed for lens shading (not the whole file. Is there a script that does this??)
mmal.h is not documented, so probably this needn't be either. However, I thought it was worth at least adding a link to the C header I wrapped (which has extensive comments).
I've not tested this yet!!!
Should now be complete...
PS this includes the changes in my other PR #463 so I will close it now. |
@rwb27 thank you so much for putting this up! I was just looking for something exactly like this. Is there any place you could show an example of loading in a lens shading table and initializing the camera with it? It would be helpful to see the format in which the lens shading table needs to be loaded and passed in. |
No problem. I have some code that does exactly that as part of my microscope control scripts, but I will try to chop it out into a stand-alone script. The basic principle is quite simple though: the table should be a 3-dimensional numpy array, with the shape given by the camera's _lens_shading_table_shape() method. You can either pass your numpy array to the camera's constructor (as the lens_shading_table argument) or assign it to the lens_shading_table property.

A complete example is below. This will set the camera's lens shading table to be flat (i.e. unity gain everywhere).

from picamera import PiCamera
import numpy as np
import time
with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()
lst = np.zeros(lst_shape, dtype=np.uint8)
lst[...] = 32  # NB 32 corresponds to unity gain
with PiCamera(lens_shading_table=lst) as cam:
    cam.start_preview()
    time.sleep(5)
    cam.stop_preview()

I should probably put this in the docs somewhere... |
This is amazing - thank you so much for putting all this together. As a last clarification, are you sure the channel order should be [R, G1, G2, B]? I was looking through userland's lens_analyze script and it seems that script outputs in the order of [B, Gb2, Gb1, R]. At least that's what it looks like in my ls_table.h file after running their script. Thanks! |
hmm, you may be correct there - that would explain a few things. I think the middle ones are probably both green, but I may have R and B swapped; it's possible that my code that generates the correction from a raw image has the channels swapped somewhere else. If you're able to test it before I am, do let me know. Bear in mind that white balance is applied after the shading table, so it's not quite as simple as just changing the average values for different channels. |
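To make the channel-order discussion concrete, here is a small numpy sketch. This is an editorial illustration, not code from the PR: the [R, G1, G2, B] order and the channels-first layout are assumptions, and the 48x64 grid is arbitrary - the real shape comes from cam._lens_shading_table_shape().

```python
import numpy as np

# Assumed channel order [R, G1, G2, B] along the first axis;
# the 48x64 grid size is purely illustrative.
R, G1, G2, B = range(4)
lst = np.full((4, 48, 64), 32, dtype=np.uint8)  # 32 = unity gain

# Example: boost the red channel's gain by 1.5x everywhere.
lst[R] = np.clip(lst[R] * 1.5, 0, 255).astype(np.uint8)

print(lst[R, 0, 0], lst[B, 0, 0])  # -> 48 32
```

Bear in mind (as noted above) that white balance is applied after the shading table, so channel-specific tweaks like this interact with AWB.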
Hi Richard,
I ended up trying it as your original post suggested [R, G1, G2, B] and it
worked beautifully! Thanks for putting this together and let me know if
there's any way I can help.
Dhruv
|
Hello Richard, I find your Lens shading control extremely useful, problem is that I'm not an expert in programming and I'm not able to follow your requirements to enable it. Would it be possible to get a tutorial on how to install it? Is there a package I can download and install? Thanks, Marc |
Hi Marc,
The only component you should need to upgrade is the “userland” libraries on your Raspberry Pi, which you can do using the rpi-update command. However, the version that ships with the latest Raspbian image is already new enough, so if burning a new SD card is simpler, you can just do that. |
Oh, and while I'm here, for those of you interested in calibrating a camera: I've now written a closed-loop calibration script that works much better than my first attempt (which ported 6by9's C code more or less directly). I guess there must be something nonlinear in the shading compensation - I have not figured out what it is, but 3-4 cycles of trying a correction function and tweaking it seems to fix things. It's currently on a branch, but I'll most likely merge it into master soon; here's a link to the recalibration script. |
Incredible! I managed to install your OpenFlexure microscope control software with your installation guide. I also ran one of your examples and it worked perfectly. Now I was trying to use your recalibration script, but it's telling me I need the microscope library... can I find it in one of your repositories, or should I look somewhere else? Thanks |
Excellent, glad that worked! If you've installed the openflexure_microscope library, it's best to run it from the command line. It will try to talk to a motor controller on the serial port by default, but there's a command line flag to turn that off. You can use:
If you are running the Python script directly, it might get confused about relative imports (because it's designed to be part of the module) - that is probably where the error about the missing microscope library comes from. I should probably figure out a way to crop out the camera-related parts of this, but if you look in the relevant Python files you can probably figure out what's going on - or just use it through the command line. Hope that helps... |
Ok, I understood everything now. The program works even better than I expected! I don't know how can I repay you, thanks! |
several python3-related fixes: ctypes char * now requires bytes (was string); some calculations now seem to need int()
python3 compatibility
Okay, I've finally had time to review this now and it'll definitely be going into 1.14, but I am going to make some alterations.

The major one is that I'm not entirely happy depending on numpy for the table, and I don't think it's necessary - i.e. we can simply require that whatever is passed in for the table implements the buffer protocol (which numpy arrays do, so this doesn't mean you can't use them - you can - but it'll mean numpy isn't absolutely required for it). Basically I'll make it similar to add_overlay. Incidentally, we can still have all the checks about correct shape, stride, order, etc. as the memoryview interface implements all of that too (well ... most of that in 2.7, all of that in 3.3 onwards, so I'll need to throw some backward compat workarounds in there, but that's fine).

Anyway, other than that the rest is looking great! I've yet to read through the whole thread above, but it looks like there might be some useful snippets there for examples in the Advanced Recipes section of the manual, so I'll try and get through those too this week. |
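The buffer-protocol approach described above can be sketched like this (an editorial illustration - check_table is a hypothetical helper, not the PR's actual validation code, and the shape is arbitrary): a memoryview over the supplied object exposes the format, dimensionality and shape needed for validation, whether the caller used numpy or a plain bytearray.

```python
import numpy as np

def check_table(buf, expected_shape):
    """Validate a lens-shading-table-like buffer via memoryview
    (hypothetical helper, not the PR's actual code)."""
    mv = memoryview(buf)
    if mv.ndim > 1:
        # numpy arrays expose shape/format through the buffer protocol
        return mv.format == 'B' and mv.shape == expected_shape
    # flat buffers (e.g. bytearray) can only be checked by length
    expected_len = 1
    for dim in expected_shape:
        expected_len *= dim
    return len(mv) == expected_len

shape = (4, 48, 64)
print(check_table(np.zeros(shape, dtype=np.uint8), shape))  # -> True
print(check_table(bytearray(4 * 48 * 64), shape))           # -> True
print(check_table(bytearray(10), shape))                    # -> False
```

This is why numpy becomes optional for the caller: both the ndarray and the plain bytearray pass the same check.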
@cpixip I've just run a slightly more in-depth calibration routine on the v2 camera, which calculates a full colour-unmixing matrix for each position on the sensor. That means that, if you're prepared to do post-processing on the images (or implement some sort of super exciting GPU-accelerated rendering) it's possible to completely compensate the effect of the lenslet array. The only penalty is an increase in noise, of 2-3x at the edges of the image. Of course, as you say, the lens shading correction that's built in to the camera pipeline doesn't do this, which means you will always lose saturation towards the edges of the image. I'm currently tidying up my analysis and will write a report, which I'll share soon I hope! |
@rwb27 - wow, cool. It will be interesting to see the results. And yes, you are right - ideally you want to do it on the GPU, otherwise the computations will probably be too time-consuming. Some years ago I implemented some image-processing algorithms for ultrasound images on a PC graphics card, basically by programming the algorithms directly into vertex and fragment shaders (no CUDA or the like). But that was years ago (so I have forgotten most of that stuff, I am afraid) and I do not know whether the Raspberry Pi hardware supports easy access to shaders, or has sufficient processing power to handle such an approach. I have seen approaches to this problem where not a full decorrelation matrix is stored for each pixel, but only a functional description given by a few components (which would take substantially less texture memory to implement). Thinking about it, GPU hardware should be well suited to the task at hand - multiplying the original RGB signal by a position-dependent 3x3 matrix. In any case, I am really curious how your approach works and how good the results are. And probably a few other people are interested in this too... - so please share your report if possible! |
Hi rwb27 (and others), I just tried out your lens-shading algorithm on a new Raspberry Pi 4/2GB and it froze on this line:
I also tried to set the table directly, and it failed as well at that point. Tested with a v1 camera. Both code variants work on older Raspberry Pi hardware, like a Pi 2 or 3. Did anyone else succeed in getting this to run on newer hardware? |
Hi, I have a similar problem when trying to run the code on a Pi 3 which has been upgraded to the latest release of the OS. Loading the lens shading table then takes a long time and eventually, usually, comes back with a timeout and buffer-size error. During this time the preview window is not (cannot be?) shown. Perhaps an OS change to support the Pi 4 has broken something? |
Hi @cpixip @TimBrownConsulting we've had that issue too. It relates to a recent update to the GPU firmware that runs the camera, specifically the auto-exposure algorithm (which has been replaced with a newer, fancier version). We (by which I mean @jtc42) opened an issue upstream on the firmware repo, which has been fixed - but there's another issue (relating to the white balance gains) that means our calibration still goes wrong. The work-around for now is to use the debug mode (helpfully referenced in the first issue thread) to disable the new behaviour and revert to the old auto-exposure algorithm. It's not 100% satisfying but works for now, and hopefully we can work with the firmware developers to sort it out in the new version. @jtc42 is away this week, but I'm sure he'll comment here once he returns. |
@rwb27 , @TimBrownConsulting - Hi everybody. Just wanted to confirm that the magic |
@rwb27 I have some questions. What are the possible values I can put into analog_gain and digital_gain? How is the ISO calculated? analog_gain * digital_gain * 100 and then heavily rounded? What is the highest ISO I can get this way using the V1 or V2 camera? |
@iHD992 I believe the sensible values range from below 1 up to about 4, but I don't remember ever actually reading minimum or maximum values. What I can say is that, by and large, it's pretty safe to experiment by writing a value, then reading it back a second later (the delay is important). If you try to set it to an invalid value, it will either raise an error, or the value you read back will not be the same as the one you wrote.

Calculating ISO is neither trivial nor linear! I would have to do some googling on that point, but I'm pretty sure the answer is very much not as simple as the calculation you suggest. Setting the ISO value is only meaningful if you're using auto-exposure, and generally if you're setting gains manually, you are probably also setting the exposure time manually. I believe asking for ISO 100 will tend to use a relatively low value of analogue gain - but this isn't necessarily the lowest possible gain on the v2 camera module, as it was deliberately set to be consistent with the v1. I am not the authority on ISO numbers though; I'd keep googling, because I know there are some discussion threads where people go into some detail. |
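The write-then-read-back check described above can be wrapped in a small helper. This is hypothetical code, not part of the PR; it is shown with a stub object, since the real check only makes sense against camera hardware.

```python
import time

def set_gain_checked(cam, value, delay=1.0):
    """Write analog_gain, wait for it to settle, then read it back.
    Returns True if the camera kept the value (hypothetical helper)."""
    cam.analog_gain = value
    time.sleep(delay)  # the delay matters: the gain settles asynchronously
    return cam.analog_gain == value

# Stub standing in for a real PiCamera, just to show the call pattern:
class FakeCam:
    analog_gain = 1.0

print(set_gain_checked(FakeCam(), 2.0, delay=0.01))  # -> True
```

With a real camera, a False return (or an exception) would indicate the value was out of the accepted range.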
@iHD992 I've played with this a bit since for my application I have to do my own "auto exposure". I am working on a digital microscope so for different samples I set the image exposure using the gains to get close to what I want and then vary the intensity of the lighting to get the "right" exposure. For what it's worth the numbers I use for the gains are: |
@TimBrownConsulting Do you have V1 or V2 of the camera? Do you know how these values correspond to the registers in V1? |
@iHD992 I'm using a V2 camera.
|
Is the forked picamera library not working on Raspbian Jessie? In my case it is not working even after updating the rpi-firmware. |
Hi @prayuktibid what error are you getting? I haven't tested it on Jessie, only on Buster and Stretch. It does rely on relatively recent userland libraries, so it is possible that the lens shading part won't work on Jessie, but I can't see why the upstream version wouldn't work - and my fork shouldn't break that, as far as I can see. Does the upstream (i.e. official) version of the library work for you? What error do you get when you try to use the fork? |
Thank You Mr. Richard @rwb27 for your reply. |
Maybe Jessie did not get the necessary update for the “userland” libraries. |
Hi there, I have installed the microscope software and everything went okay. I have run microscope --recalibrate and obtained microscope_settings.npz. Now I need to combine the camera lens shading table with the picamera library to run my own code. How can I do this? I am going crazy |
Hi @marcodc-sys, the simplest way is to pass a lens_shading_table argument to the constructor of your PiCamera object. This can come directly from the numpy file, if you do:

from picamera import PiCamera
import numpy as np

settings = np.load("microscope_settings.npz")
with PiCamera(lens_shading_table=settings["lens_shading_table"]) as cam:
    pass  # use the camera

The other PiCamera settings are also saved in the npz file, and can be accessed in a similar dictionary-like way. There is a convenience function in the microscope software that you can import, which will simply take the settings file as an argument and return a PiCamera object you can start using immediately. I should also mention that the version of the microscope software on GitHub is no longer what we are using - the new version is on GitLab, in the “OpenFlexure” organisation. However, that new version of the software is rather more complicated, so it might not be as useful a resource. |
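The npz round-trip can be tried without any camera hardware. This sketch writes a stand-in settings file and reads the table back; the flat table and its shape are placeholders - a real microscope_settings.npz comes from the recalibration script.

```python
import numpy as np

# Write a stand-in settings file with the key used above
# (the placeholder table is flat, at unity gain).
lst = np.full((4, 48, 64), 32, dtype=np.uint8)
np.savez("microscope_settings.npz", lens_shading_table=lst)

# Read it back the same way the snippet above does.
settings = np.load("microscope_settings.npz")
table = settings["lens_shading_table"]
print(table.shape, table.dtype)  # -> (4, 48, 64) uint8
```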
I've done some more work on this in my fork (a different branch) to make it merge-able, but have not yet tested it. The revised version is here: |
I've now tested https://github.com/rwb27/picamera/tree/master which is up to date with waveform80/master. I've also added a test for the lens shading table property, which passes (although there are a few other failures on my system, which I don't think are due to my changes). I think it might make sense to start a new PR for that - though I may also merge the changes onto this branch, unless anybody would find that really annoying?

There is exactly one breaking change: reading the lens shading table property no longer returns a numpy array, so you must wrap it yourself:

arr = np.array(cam.lens_shading_table)

This is to address @waveform80's request to avoid baking numpy into the library. |
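The effect of that breaking change can be mimicked without a camera. Here a memoryview over raw bytes stands in for whatever buffer-protocol object the revised property actually returns (an assumption for illustration): it has no numpy methods until you wrap it.

```python
import numpy as np

# Stand-in for the revised property's return value: a buffer-protocol
# object rather than an ndarray (32 = unity gain everywhere).
raw = bytes([32]) * (4 * 48 * 64)
table_view = memoryview(raw)

assert not hasattr(table_view, "mean")  # no numpy methods on a memoryview

# Wrapping it, as the breaking change requires:
arr = np.array(table_view)
print(arr.dtype, arr.mean())  # -> uint8 32.0
```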
Hi, I tried to use the lens shading control for long astro exposures, but I have the issue that I can only take one shot. Everything finishes and the Python process is no longer in the process list, which is fine, but it hangs at the 2nd shot and I cannot kill the Python process, only reboot.

🟢 SOLVED: SOLUTION AT THE TAIL OF THIS COMMENT 🟢

The code I used:

from picamera import PiCamera
import numpy as np
import time
from datetime import datetime
from fractions import Fraction
print 'init done'
with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()
    print 'shape done'
lst = np.zeros(lst_shape, dtype=np.uint8)
lst[...] = 32  # NB 32 corresponds to unity gain
print 'shape defined'
with PiCamera(lens_shading_table=lst) as cam:
    print 'cam opened'
    cam.resolution = (1296, 976)
    print 'resolution set'
    cam.framerate = Fraction(1, 2)
    print 'framerate set'
    cam.shutter_speed = 2000000
    print 'shutter set'
    cam.exposure_mode = 'off'
    print 'mode set'
    cam.iso = 800
    print 'iso set'
    cam.capture('image${env.BUILD_NUMBER}.jpg')
    print 'capture done'

The result (an unsharp flat with the NoIR chip) looks fine. Any idea? Here is the corresponding message log.

❗ SOLUTION ❗ Some of the changes in the code came out of the investigation, but the root cause was the value of one of the camera settings (compare the revised code below):

from picamera import PiCamera
import numpy as np
import time
from datetime import datetime
from fractions import Fraction
print 'init done'
with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()
    print 'shape done'
lst = np.zeros(lst_shape, dtype=np.uint8)
lst[...] = 32  # NB 32 corresponds to unity gain
print 'shape defined'
with PiCamera(lens_shading_table=lst, resolution=[1640, 1232], sensor_mode=4, framerate=Fraction(1, 3)) as cam:
    print 'cam opened'
    cam.exposure_mode = 'verylong'  # 'off'
    print 'mode set'
    cam.shutter_speed = 3000000
    print 'shutter set'
    cam.iso = 200
    print 'iso set'
    print 'timeout for 5s'
    time.sleep(5)  # with a timeout of 20 seconds the cam.exposure_speed value is set, but that was time I did not want to spend; 5 s works fine in my case but may need changing in your adaptation
    print cam.exposure_speed
    print cam.shutter_speed
    for cnt, _ in enumerate(cam.capture_continuous('image{counter:03d}.jpg', burst=True, format='jpeg', bayer=True, thumbnail=None, quality=60)):
        print 'start capture: {c:03d}'.format(c=cnt)
        if cnt >= 4:
            break
    print 'capture done'
    cam.framerate = Fraction(1, 1)
    print 'timeout for 2s'
    time.sleep(2)
    print 'close cam'
    cam.close()
exit()
|
🟢 SOLVED see previous comment. |
🟢 SOLVED see previous comments. My currently installed modules are: … Tests have been done in Python 2.7 |
Another Question: 🟢 SOLVED here 🟢 |
What's the status of this PR? |
We're currently maintaining a fork with this (and a few other) pull requests, which is now distributed on PyPI as
This PR adds:

- Properties on picamera.PiCamera that:
  - make PiCamera.analog_gain writeable
  - make PiCamera.digital_gain writeable
  - add PiCamera.lens_shading_table, which allows setting of the camera's lens shading compensation table
- A wrapper for user_vcsm.h, and an object-oriented wrapper in the style of mmalobj that makes it possible to work with VideoCore shared memory from Python
- A soft dependency on recent userland code that enables setting the gains directly and manipulating lens shading correction

The module will run fine with older versions of the userland code, but will throw an exception if you try to set analog or digital gain, or use the lens shading table. I guess that makes it a "soft" dependency? The features were introduced late 2017 in a commit.
I thought passing in the lens shading table as a numpy array made good sense, but I have been fairly careful to avoid introducing any hard dependencies on numpy, having read the docs on picamera.array and assumed that this would be desirable.

I have tried to keep things like docstrings and code style consistent, but please do say if I can tidy up my proposed changes.