More robust rejection of ETC fibers with stellar contamination #7
Use the following code to examine the frames just before and after the first jump around 2:20:
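(A minimal sketch of this kind of frame-to-frame comparison, assuming the per-frame fiber fluxes and their uncertainties for each camera are already loaded as arrays; the array layout and names below are assumptions, not the actual SKYCAM data model.)

```python
# Sketch (not the actual ETC code): compare per-fiber fluxes in two
# Sky Monitor frames and report the change in sigma units.
import numpy as np

def frame_change_nsig(flux, dflux, before, after):
    """Per-fiber change between frames `before` and `after`, in sigma."""
    change = flux[after] - flux[before]
    sigma = np.hypot(dflux[before], dflux[after])
    return change / sigma

# Toy example for one camera with 10 fibers and 2 frames:
flux = np.array([[2.0] * 10, [2.1] * 10])
dflux = np.full((2, 10), 0.1)
print(np.round(frame_change_nsig(flux, dflux, before=0, after=1), 1))
```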
The results for SKYCAM0 are:
and SKYCAM1:
This reveals that the jump is due to SKYCAM1, most likely from its fiber[1], which has a >10 sigma increase in flux. The algorithm is supposed to drop up to 2 fibers until chisq < 5. It did drop the max of 2 fibers, but with chisq still > 1000. It looks like a cut on the final chisq would be an easy fix, at least when the other camera has a reasonable chisq. However, after seeing this example I am suspicious that the fiber-dropping code is not doing the right thing, so that needs to be checked first.
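For reference, a minimal sketch of the rejection logic described above (iteratively drop the worst fiber until the reduced chi-square is acceptable, up to some maximum), with the proposed cut on the final chi-square added. This is an illustration with assumed names and weighting, not the actual ETC implementation:

```python
import numpy as np

def robust_sky_level(flux, ivar, max_drop=2, chisq_cut=5.0):
    """Weighted-mean sky level with iterative rejection of outlier fibers.

    Returns (mean, chisq, keep_mask), or None if the proposed cut on the
    final chi-square rejects this camera.
    """
    keep = np.ones(len(flux), dtype=bool)
    ndrop = 0
    while True:
        mean = np.sum(ivar[keep] * flux[keep]) / np.sum(ivar[keep])
        pulls = ivar[keep] * (flux[keep] - mean) ** 2
        chisq = np.sum(pulls) / max(keep.sum() - 1, 1)
        if chisq < chisq_cut or ndrop >= max_drop:
            break
        # Drop the single worst remaining fiber and refit.
        keep[np.flatnonzero(keep)[np.argmax(pulls)]] = False
        ndrop += 1
    if chisq >= chisq_cut:
        return None  # proposed fix: do not trust this camera at all
    return mean, chisq, keep
```

In terms of this sketch, the fix adopted later in the thread (allowing more dropped fibers) would just be a larger `max_drop`.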
This was a similar incident:
Since that's twice in 2 weeks, I will start working on a fix...
The spectro redux confirms that for this last example (exposures 107643-107644), the sky is fainter and the reported EFFTIME_SPEC is approximately twice the EFFTIME_ETC.
Another likely example: tile 2484, observed 2021-11-10, expid 108309.
For reference, FITS files for the SKYCAM data in the examples above are at:
I have tested that increasing the max allowed number of dropped fibers (due to being chisq outliers) from 1 to 2 fixes both of the examples above. To be safe, I am increasing the max to 3.
Re-opening this issue since it looks like it may still be with us. In particular, for exposure 111452 on TILEID 3254 on 2021-11-29, the sky level was alternating between ~1.5 and ~2.5. Earlier in the evening, exposure 111440 on TILEID 11621 had somewhat suspiciously high sky as well. We could always revisit the option of moving the Sky Monitor fibers to known blank sky positions instead of leaving them stuck where they are. Or, easier yet, pre-compute which fibers are known to be near sources (the same as we do for the stuck skies). We would need to measure the physical locations of those fibers by backlighting them for FVC images, something we maybe haven't done since early in commissioning.
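As a sketch of the pre-computation idea (it would have to be redone per tile, since the stuck fibers' sky positions depend on the pointing), one could cross-match the Sky Monitor fiber positions against a source catalog and flag any fiber with a nearby source. The catalog inputs, matching radius, and names below are placeholders, not an existing DESI utility:

```python
# Hedged sketch: flag Sky Monitor fibers that land near a known source
# for a given tile pointing.
import astropy.units as u
from astropy.coordinates import SkyCoord

def flag_near_sources(fiber_ra, fiber_dec, src_ra, src_dec, radius_arcsec=5.0):
    """Boolean mask: True for fibers with a catalog source within the radius."""
    fibers = SkyCoord(fiber_ra, fiber_dec, unit='deg')
    sources = SkyCoord(src_ra, src_dec, unit='deg')
    _, sep2d, _ = fibers.match_to_catalog_sky(sources)
    return sep2d < radius_arcsec * u.arcsec
```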
Thanks for reporting this. I will take a look.
This might be worth a more exhaustive search for such cases since the last change, but I'll note two more from last night (2022-03-07): exposure 125318 on tile 24546 and exposure 125324 on tile 26228.
Adding some discussion from a past survey-ops call here... the fiberassign files do include a SKY_MONITOR extension which appears intended to hold the information for the 20 sky fibers. It has FIBERASSIGN_{X/Y} and TARGET_{RA/DEC}, which we presumably transform in the usual way for the stuck fibers. I don't see anything actually saying whether we think those locations are good sky, unfortunately, but I may be missing something. So I think we'd have the action item of propagating that bit into this extension in the usual way for the stuck skies. Then we'd also have to make sure that FIBERASSIGN_X and FIBERASSIGN_Y are correct for these fibers, which would involve backlighting them. And then the ETC would have to use the bit. But that may be everything?
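For concreteness, reading that extension might look something like the following sketch; the file name is a placeholder, and the extension and column names are the ones mentioned above:

```python
# Sketch: inspect the SKY_MONITOR extension of a fiberassign file.
from astropy.io import fits
from astropy.table import Table

with fits.open('fiberassign-001234.fits.gz') as hdus:  # placeholder tile
    skymon = Table(hdus['SKY_MONITOR'].data)

# Positions recorded for the 20 Sky Monitor fibers on this tile.
print(skymon['FIBERASSIGN_X', 'FIBERASSIGN_Y', 'TARGET_RA', 'TARGET_DEC'])
```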
For the exercise of determining the mapping of the ETC fibers on the Sky Monitor images: on the night of 2022-04-16, we observed five BACKUP tiles at the start of the night at low Galactic latitude, where a good fraction of the ETC fibers should be landing on or near stars. The exposures are 130554-130557 and 130563.
Another likely example of contaminated sky fibers is tile 9759, exposure 209314, observed on 2023-12-12, based on the sky level jumping up for only that one exposure.
Exposures 106015-106018 of tile 3072 (RA 293.37, DEC 66.43) on 2021-10-25 have sudden jumps in the sky level that appear to correlate with downward spikes in FFRAC:
![sky-ffrac](https://user-images.githubusercontent.com/185007/138918200-40361ef8-53b9-422e-8c20-f85ed6252826.png)
This issue is to document what happened, as a starting point for possibly updating the algorithm to avoid situations like this (in case we start to see this more often).
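If one wanted to quantify the apparent correlation in the plot above, a simple check on the two time series could look like the sketch below; loading the sky-level and FFRAC arrays is omitted, and the names here are assumptions rather than the ETC's data model:

```python
# Sketch: quantify the apparent anti-correlation between sky-level jumps
# and FFRAC dips, given the two time series for one exposure as arrays.
import numpy as np

def jump_correlation(sky, ffrac):
    """Pearson correlation of frame-to-frame changes in sky and FFRAC."""
    return np.corrcoef(np.diff(sky), np.diff(ffrac))[0, 1]

# A value near -1 would support the picture that the sky jumps coincide
# with downward spikes in FFRAC.
```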