
Quickmaps for IMAP-Lo#3179

Open
vineetbansal wants to merge 6 commits into IMAP-Science-Operations-Center:dev from vineetbansal:vb/issue3153

Conversation

@vineetbansal
Collaborator

@vineetbansal vineetbansal commented May 11, 2026

Closes issue #3153

Change Summary

Overview

Added a quickmap product for Lo/L1C.

This product uses quaternions. I understand there may be strong opinions on using other approaches to get attitude (ephemeris_*, etc.) that other instruments are using (@jtniehof mentioned this). Perhaps we can open an issue on this and iterate on it once this is merged.

File changes

lo_l1b.py - added a lo_l1c_quickmap function that takes in sci dependencies and quaternion files and runs the algorithm identified in the issue.

Testing

No testing yet! Once products start getting produced from sds-data-manager, I'll iterate with the team to weed out any bugs and write meaningful test cases.

@vineetbansal vineetbansal changed the title Vb/issue3153 Quickmaps for IMAP-Lo May 11, 2026
@vineetbansal vineetbansal marked this pull request as ready for review May 11, 2026 21:27
@tmplummer tmplummer self-requested a review May 12, 2026 14:44
@tmplummer tmplummer added this to IMAP May 12, 2026
Contributor

@tmplummer tmplummer left a comment


Jon was right that there would be some opinions on the attitude handling and on reinventing the wheel for much of this code. So, I will state them here and act as if this is a normal PR.

  • Much of this code duplicates things we already have in place.
  • Why does Lo need to extract its own mean spin axis?
  • Why does it need to write its own tools to do vector transforms instead of using the industry standard, SPICE?
  • Much of this implementation is at odds with the "one mission" philosophy of the IMAP mission.

We really can't merge this without test coverage. The "throw it into production and then come back and write meaningful tests" logic is flawed. Writing functions with limited scope, and writing concise unit tests for them, can save time and improve code reliability tremendously.
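To illustrate the small-scope-plus-unit-test point, here is a hedged sketch (function name and test are hypothetical, not from the PR) of how the mean-spin-axis calculation seen later in this thread could be factored out and tested:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def mean_spin_axis_from_quats(quaternion_array):
    """Rotate the body z-axis by each quaternion, average, and renormalize."""
    axis = Rotation.from_quat(quaternion_array).apply([0.0, 0.0, 1.0]).mean(axis=0)
    return axis / np.linalg.norm(axis)


def test_identity_quaternions_give_z_axis():
    # Identity rotations (x, y, z, w) = (0, 0, 0, 1) leave the z-axis unchanged.
    quats = np.tile([0.0, 0.0, 0.0, 1.0], (5, 1))
    np.testing.assert_allclose(mean_spin_axis_from_quats(quats), [0.0, 0.0, 1.0])
```

A function with this narrow a scope can be pinned down with a handful of such cases long before full products come out of the pipeline.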

<<: *instrument_base
Data_type: L1B_goodtimes>Level-1B Goodtimes List
Logical_source: imap_lo_l1b_good-times
Logical_source: imap_lo_l1b_goodtimes
Contributor


Nice. This makes the product consistent with Hi and Ultra.

quaternion_ds = xr.concat(quaternion_datasets, dim="epoch")
attitude_ds = assemble_quaternions(quaternion_ds)

attitude_met = attitude_ds["epoch"].values
Contributor


epoch should contain Terrestrial Time J2000 nanoseconds. Very different from Mission Elapsed Time (MET).

Oh, I see... the L1A product has MET in the epoch variable. But why not use the L1B products such that you don't have to assemble them?

Rotation.from_quat(quaternion_array).apply([0.0, 0.0, 1.0]).mean(axis=0)
)
mean_spin_axis /= np.linalg.norm(mean_spin_axis)
spin_ecl_lon, spin_ecl_lat = cartesian_to_spherical(mean_spin_axis[None])[0, 1:]
Contributor


I personally find this syntax for adding an axis pretty opaque. Prefer mean_spin_axis[np.newaxis, :] for readability. Please apply this comment throughout.
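For illustration, the two spellings are interchangeable (`np.newaxis` is literally an alias for `None`); the difference is purely readability at the call site:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])

# Equivalent ways to add a leading axis; np.newaxis is an alias for None,
# but it states the intent explicitly.
assert a[None].shape == (1, 3)
assert a[np.newaxis, :].shape == (1, 3)

# Trailing axis, as used in the broadcasting comparisons below.
assert a[:, np.newaxis].shape == (3, 1)
```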

Comment thread imap_processing/lo/l1c/lo_l1c.py Outdated
Comment on lines +1421 to +1423
mask = np.any(
(hist_met[:, None] >= gt_begin) & (hist_met[:, None] <= gt_end), axis=1
)
Contributor


Hmmm, this bit of code is odd. I think that the np.any is being applied along a singleton dimension. Why not just do mask = (hist_met[:, np.newaxis] >= gt_begin) & (hist_met[:, np.newaxis] <= gt_end)?
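For context, a minimal sketch (with made-up MET values) of what the broadcasted comparison computes. Note the caveat: if `gt_begin`/`gt_end` hold more than one good-time interval, the `np.any` over `axis=1` is doing real work, and the simplification only applies when there is a single interval:

```python
import numpy as np

# Hypothetical good-time intervals (begin/end MET) and sample times.
gt_begin = np.array([10.0, 50.0])
gt_end = np.array([20.0, 60.0])
hist_met = np.array([5.0, 15.0, 55.0, 70.0])

# Broadcasting (N, 1) against (M,) yields an (N, M) membership matrix;
# np.any collapses over the M intervals.
mask = np.any(
    (hist_met[:, np.newaxis] >= gt_begin) & (hist_met[:, np.newaxis] <= gt_end),
    axis=1,
)
# mask -> [False, True, True, False]
```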


attitude_met = attitude_ds["epoch"].values
attitude_mask = np.any(
(attitude_met[:, None] >= gt_begin) & (attitude_met[:, None] <= gt_end), axis=1
Contributor


Use np.newaxis.


# Compute ecliptic sky pointing for each of 60 spin-angle bins
bins, ecl_lons, ecl_lats = create_ra_dec(spin_ecl_lon, spin_ecl_lat, pivot_angle)
pivot_df = pd.DataFrame(
Contributor


I'm curious why you are using pandas throughout instead of xarray? It would be much more idiomatic to just use xarray.

Collaborator Author


We went from passing CSVs between functions to passing DataFrames. This is a temporary struct, so I guess I'll go with built-in types when I refactor this.

Comment on lines +1431 to +1446
for esa_level in range(c.N_ESA_LEVELS):
hist_counts = np.sum(hist_ds["h_counts"].values[mask, esa_level, :].T, axis=1)
exposure = np.sum(
hist_ds["exposure_time_6deg"].values[mask, esa_level, :].T, axis=1
)
nep_counts = np.roll(hist_counts, nep_roll)
nep_exposure = np.roll(exposure, nep_roll)

esa_df = pd.DataFrame(
{"bins": bins, "counts": nep_counts, "expo": nep_exposure}
)
df = pivot_df.merge(esa_df, on="bins")
df.insert(0, "esa_level", esa_level + 1)
df["spin_ecl_lon"] = spin_ecl_lon
df["spin_ecl_lat"] = spin_ecl_lat
map_dataframes.append(df)
Contributor


It seems like this could all be vectorized.
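A hedged sketch of one way to vectorize the per-ESA sums and rolls (shapes, the mask, and the roll amount are made up here; this assumes `h_counts` is laid out as `(epoch, esa_level, bin)`, as the indexing in the loop suggests):

```python
import numpy as np

# Hypothetical stand-ins for the PR's inputs.
rng = np.random.default_rng(0)
h_counts = rng.integers(0, 10, size=(100, 7, 60))  # (epoch, esa, bin)
exposure = rng.random((100, 7, 60))
mask = rng.random(100) > 0.5                        # good-time epoch mask
nep_roll = 3

# Sum over the masked epochs for all ESA levels at once, then roll every
# level's bin axis in a single call instead of looping over esa_level.
counts_per_level = h_counts[mask].sum(axis=0)       # (7, 60)
expo_per_level = exposure[mask].sum(axis=0)         # (7, 60)
nep_counts = np.roll(counts_per_level, nep_roll, axis=1)
nep_exposure = np.roll(expo_per_level, nep_roll, axis=1)
```

Row `esa` of `nep_counts` matches what the loop body computes for that `esa_level`, so the per-level DataFrame assembly could then be built from one array.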

stonoise_var_map,
) = [np.zeros(shape) for _ in range(14)]

for esa in range(c.N_ESA_LEVELS):
Contributor


Again, can any of this be vectorized? For loops are my arch nemesis in Python.
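One possible direction, sketched with hypothetical shapes: allocate the 14 maps as a single stacked array so that per-ESA fills can broadcast over a leading axis instead of being assigned one name at a time inside the loop:

```python
import numpy as np

n_maps, shape = 14, (7, 60)  # hypothetical (esa, bin) map shape

# One (n_maps, *shape) allocation replaces 14 separate np.zeros calls.
maps = np.zeros((n_maps, *shape))

# Unpacked names are views into the stack, so writes through either the
# named view or the stacked array stay in sync.
stonoise_var_map = maps[13]
stonoise_var_map[:] = 1.0
assert maps[13].sum() == 7 * 60
```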

Comment thread imap_processing/lo/l1c/lo_l1c.py Outdated
@vineetbansal
Collaborator Author

vineetbansal commented May 12, 2026

@tmplummer - is there a way for us to test the pipeline with this code (on dev or something)? The products need to be generated so that the team can validate them in parallel with the refactorings you're requesting here (which are reasonable).

EDIT: Or, if there's a way to tell what dependency JSON the pipeline would have used for certain invocations, we could generate a bunch of imap_processing commands locally.

@tmplummer
Contributor

> @tmplummer - is there a way for us to test the pipeline with this code? (on dev or something)? The products need to be generated so that they can be validated in parallel by the team with the refactorings you're requesting here (which are reasonable).
>
> EDIT: Or if there's a way to tell what dependency json the pipeline would have used, we could generate a bunch of imap_processing commands locally..

If you wanted to spot check one or two pointings, it would be feasible in dev. We can build a docker image for a certain branch and push that to dev. Then you would need to sync all the upstream dependencies required to dev as well. We don't have a good way to do a bulk sync AFAIK. For running locally, you would have to generate the dependency json files locally and run the imap_cli commands locally. I realize that is pretty onerous.

If neither of those options are sufficient, I think that we could merge past the codecov check if you create issue(s) for all the PR comments such that they don't get lost and forgotten. @tech3371 @bryan-harter what do you think?

@vineetbansal
Collaborator Author

@tmplummer - no worries then. I just wanted to make sure there wasn't a shortcut we could take. Let me incorporate your suggestions tomorrow; that should take about as much time as another review round would on your end. This will also help me get familiar with more of the existing code.

@vineetbansal
Collaborator Author

An update - the Lo team has decided to continue using its local pipeline because they anticipate making changes right up to the delivery date. That gives me some time to do this properly and address all the concerns above, so I'm not going to try to rush this. I may make small tweaks to this PR to reflect any further algorithmic changes I see on their end.
