Quickmaps for IMAP-Lo #3179
Conversation
tmplummer left a comment:
Jon was right that there would be some opinions on the attitude handling and on reinventing the wheel for much of this code. So, I will state them here and act as if this is a normal PR.
- Much of this code duplicates things we already have in place.
- Why does Lo need to extract its own mean spin axis?
- Why does it need to write its own tools to do vector transforms instead of using the industry standard, SPICE?
- Much of this implementation is at odds with the "one mission" philosophy of the IMAP mission.
We really can't merge this without test coverage. The "throw it into production and then come back and write meaningful tests" logic is flawed. Writing functions with limited scope and writing concise unit tests for them can save time and improve code reliability tremendously.
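For instance, a narrowly scoped helper and its concise test might look something like this (the helper here is a hypothetical illustration, not code from this PR):

```python
import numpy as np


def normalize(vec: np.ndarray) -> np.ndarray:
    """Return the unit vector pointing along vec."""
    return vec / np.linalg.norm(vec)


def test_normalize_returns_unit_vector():
    # A small, deterministic input makes the expected output obvious.
    result = normalize(np.array([3.0, 4.0, 0.0]))
    np.testing.assert_allclose(result, [0.6, 0.8, 0.0])
```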
<<: *instrument_base
Data_type: L1B_goodtimes>Level-1B Goodtimes List
- Logical_source: imap_lo_l1b_good-times
+ Logical_source: imap_lo_l1b_goodtimes
Nice. This makes the product consistent with Hi and Ultra.
quaternion_ds = xr.concat(quaternion_datasets, dim="epoch")
attitude_ds = assemble_quaternions(quaternion_ds)

attitude_met = attitude_ds["epoch"].values
epoch should contain Terrestrial Time J2000 nanoseconds, which is very different from Mission Elapsed Time (MET).
Oh, I see... the L1A product has MET in the epoch variable. But why not use the L1B products so that you don't have to assemble them?
    Rotation.from_quat(quaternion_array).apply([0.0, 0.0, 1.0]).mean(axis=0)
)
mean_spin_axis /= np.linalg.norm(mean_spin_axis)
spin_ecl_lon, spin_ecl_lat = cartesian_to_spherical(mean_spin_axis[None])[0, 1:]
I personally find this syntax for adding an axis pretty opaque. Prefer mean_spin_axis[np.newaxis, :] for readability. Please apply this comment throughout.
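To illustrate, the two spellings are equivalent; the explicit one just states its intent (the vector below is made up):

```python
import numpy as np

mean_spin_axis = np.array([0.01, -0.02, 0.99])  # made-up vector

a = mean_spin_axis[None]           # terse: prepends a new axis
b = mean_spin_axis[np.newaxis, :]  # explicit: same result, clearer intent

assert a.shape == b.shape == (1, 3)
assert np.array_equal(a, b)
```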
mask = np.any(
    (hist_met[:, None] >= gt_begin) & (hist_met[:, None] <= gt_end), axis=1
)
Hmmm, this bit of code is odd. I think that the np.any is being applied along a singleton dimension. Why not just do mask = (hist_met[:, np.newaxis] >= gt_begin) & (hist_met[:, np.newaxis] <= gt_end)?
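For reference, here is a tiny self-contained sketch (made-up times and good-time intervals) of what the current expression computes; the axis is only a singleton if gt_begin and gt_end are scalars, whereas for arrays of interval bounds the np.any(axis=1) is what collapses the per-interval comparison back to one flag per time:

```python
import numpy as np

# Made-up values: three histogram times and two good-time intervals.
hist_met = np.array([10.0, 50.0, 90.0])
gt_begin = np.array([0.0, 80.0])
gt_end = np.array([30.0, 100.0])

# Broadcasting gives a (n_times, n_intervals) boolean array; np.any(axis=1)
# marks the times that fall inside at least one interval.
inside = (hist_met[:, np.newaxis] >= gt_begin) & (hist_met[:, np.newaxis] <= gt_end)
mask = np.any(inside, axis=1)
print(mask)  # [ True False  True]
```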
attitude_met = attitude_ds["epoch"].values
attitude_mask = np.any(
    (attitude_met[:, None] >= gt_begin) & (attitude_met[:, None] <= gt_end), axis=1

# Compute ecliptic sky pointing for each of 60 spin-angle bins
bins, ecl_lons, ecl_lats = create_ra_dec(spin_ecl_lon, spin_ecl_lat, pivot_angle)
pivot_df = pd.DataFrame(
I'm curious why you are using pandas throughout instead of xarray? It would be much more idiomatic to just use xarray.
We went from passing CSVs around between functions to passing DataFrames. This is a temporary structure, so I guess I'll go with built-in types when I refactor this.
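For reference, an xarray version of this intermediate structure might look roughly like this (all variable names, dimension names, and shapes below are made up):

```python
import numpy as np
import xarray as xr

# Made-up shapes: 7 ESA levels x 60 spin-angle bins.
n_esa, n_bins = 7, 60
counts = np.zeros((n_esa, n_bins))
expo = np.ones((n_esa, n_bins))

quickmap = xr.Dataset(
    data_vars={
        "counts": (("esa_level", "bins"), counts),
        "expo": (("esa_level", "bins"), expo),
    },
    coords={"esa_level": np.arange(1, n_esa + 1), "bins": np.arange(n_bins)},
)

# Per-level slices stay easy to pull out when needed:
level_1 = quickmap.sel(esa_level=1)
```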
for esa_level in range(c.N_ESA_LEVELS):
    hist_counts = np.sum(hist_ds["h_counts"].values[mask, esa_level, :].T, axis=1)
    exposure = np.sum(
        hist_ds["exposure_time_6deg"].values[mask, esa_level, :].T, axis=1
    )
    nep_counts = np.roll(hist_counts, nep_roll)
    nep_exposure = np.roll(exposure, nep_roll)

    esa_df = pd.DataFrame(
        {"bins": bins, "counts": nep_counts, "expo": nep_exposure}
    )
    df = pivot_df.merge(esa_df, on="bins")
    df.insert(0, "esa_level", esa_level + 1)
    df["spin_ecl_lon"] = spin_ecl_lon
    df["spin_ecl_lat"] = spin_ecl_lat
    map_dataframes.append(df)
It seems like this could all be vectorized.
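One possible vectorized shape for the counts/exposure part (a sketch with made-up arrays standing in for the real ones; shapes and names are assumed from the snippet above):

```python
import numpy as np

# Made-up stand-ins: 100 epochs, 7 ESA levels, 60 spin-angle bins.
rng = np.random.default_rng(0)
h_counts = rng.integers(0, 10, size=(100, 7, 60))
exposure_6deg = rng.random((100, 7, 60))
mask = rng.random(100) > 0.5
nep_roll = 5

# Sum over the masked epochs for every ESA level at once -> shape (7, 60),
# replacing the per-level np.sum calls inside the loop.
hist_counts = h_counts[mask].sum(axis=0)
exposure = exposure_6deg[mask].sum(axis=0)

# Apply the NEP roll along the bin axis for all ESA levels in one call.
nep_counts = np.roll(hist_counts, nep_roll, axis=1)
nep_exposure = np.roll(exposure, nep_roll, axis=1)
```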
    stonoise_var_map,
) = [np.zeros(shape) for _ in range(14)]

for esa in range(c.N_ESA_LEVELS):
Again, can any of this be vectorized? For loops are my arch nemesis in Python.
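One way to cut down on the 14 separately named arrays and the per-level loop (a sketch only; the dimensions and map names below are illustrative, not the ones in this PR):

```python
import numpy as np

# Made-up dimensions: 7 ESA levels and a (lat, lon) map grid.
n_esa, map_shape = 7, (45, 90)

# A dict of stacked arrays (ESA axis first) replaces many individually named
# np.zeros allocations; the keys here are an illustrative subset only.
maps = {
    name: np.zeros((n_esa, *map_shape))
    for name in ("counts", "exposure", "rate", "rate_var")
}

# Whole-array math then applies to every ESA level without a Python loop,
# e.g. a rate map from counts and exposure (zero where exposure is zero):
maps["rate"] = np.divide(
    maps["counts"],
    maps["exposure"],
    out=np.zeros_like(maps["counts"]),
    where=maps["exposure"] > 0,
)
```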
|
@tmplummer - is there a way for us to test the pipeline with this code (on dev or something)? The products need to be generated so that they can be validated in parallel by the team with the refactorings you're requesting here (which are reasonable). EDIT: Or if there's a way to tell what dependency json the pipeline would have used for certain invocations, we could generate a bunch of
If you wanted to spot check one or two pointings, it would be feasible in dev. We can build a docker image for a certain branch and push that to dev. Then you would need to sync all the upstream dependencies required to dev as well. We don't have a good way to do a bulk sync AFAIK. For running locally, you would have to generate the dependency json files locally and run the

If neither of those options is sufficient, I think that we could merge past the codecov check if you create issue(s) for all the PR comments so that they don't get lost and forgotten. @tech3371 @bryan-harter what do you think?
|
@tmplummer - no worries then. I just wanted to make sure there wasn't a shortcut we could take. Let me incorporate your suggestions tomorrow, which will take comparable time to what you guys will have to go through. This will also help me get familiar with more of the existing code.
|
An update - the
Closes issue #3153
Change Summary
Overview
Added a quickmap product for Lo/L1C. This product uses quaternions. I understand there may be strong opinions on using other approaches to get attitude (ephemeris_*, etc.) that other instruments are using (@jtniehof mentioned this) - perhaps we can open an issue on this and iterate on it once this is merged.
File changes
lo_l1b.py - added a lo_l1c_quickmap function that takes in sci dependencies and quaternion files and runs the algorithm identified in the issue.
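Roughly, the new entry point has this shape (a sketch only; the parameter names and types below are illustrative guesses, not the exact signature in the PR):

```python
import xarray as xr


def lo_l1c_quickmap(
    sci_dependencies: list[xr.Dataset],     # illustrative parameter names only
    quaternion_datasets: list[xr.Dataset],
) -> xr.Dataset:
    """Build the Lo L1C quickmap product from science and attitude inputs."""
    ...
```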
Testing

No testing yet! Once products start getting produced from sds-data-manager, I'll iterate with the team to weed out any bugs and write meaningful test cases.