
[WIP] add depth-limiting to off-axis sph particle projections #4712

Closed

Conversation

chrishavlin
Contributor

@chrishavlin chrishavlin commented Oct 25, 2023

Closes #4711

The initial push does indeed work for SPH particle projections, but I intend to refactor...

that said,

```python
import yt

ds = yt.load_sample("gizmo_64")

z_hat_original = ds.arr([0.3, 0.3, 0.3], 'kpc')
y_hat_original = ds.arr([1., 0., 0.], 'kpc')

for d in [2, 20]:
    prj = yt.ProjectionPlot(ds, -z_hat_original,
                            center=ds.domain_center,
                            fields=('gas', 'density'),
                            depth=(d, 'Mpc'),
                            north_vector=-y_hat_original,
                            )
    prj.save(f"depth_{str(d).zfill(2)}_proj.png")
```

returns

[image: depth_02_proj]

and

[image: depth_20_proj]

which seems to make sense (the first image is the 2 Mpc depth, the second is the 20 Mpc depth).

Still to do:

  • consider default behavior... I'm expecting answer tests to fail because the default depth in `OffAxisProjectionPlot` is `(1, '1')` instead of `None`, so it may drop some particles and change what the plots with default values look like. UPDATE: it didn't actually break tests, but that may be a function of test coverage, and there could still be a change in behavior introduced here.
  • make sure I'm not introducing confusing or diverging behavior between the particle and non-particle projections (which both use the same `OffAxisProjectionDummyDataSource`). I think it's fine, but I want to double check... UPDATE: I'm coming back around on this... now thinking it would be better to simply always apply the depth limitation, following how the off-axis projection for gridded data works. Will marinate on this for a bit.
  • refactor: in the initial push I created new versions of the pixelization routines based on the existing `pixelize_sph_kernel_projection` and `rotate_particle_coord`, introducing a ton of code duplication in the process. I mainly wanted to avoid creating extra arrays for the rotated z coordinates and running extra checks on z bounds in the case where nothing limits by z, so I started off by simply creating separate functions... but I will go back and refactor now that it is nominally working.
  • add tests
  • add note in docs
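The depth limiting described above can be sketched in plain numpy (a hypothetical helper mirroring the z-bounds check, not the actual Cython routine):

```python
import numpy as np

def depth_mask(pz_rotated, depth):
    # Hypothetical helper: keep only particles whose rotated
    # line-of-sight coordinate lies within +/- depth/2 of the
    # image plane, mirroring the z-bounds check described above.
    half = depth / 2.0
    return (pz_rotated >= -half) & (pz_rotated <= half)

pz = np.array([-3.0, -0.5, 0.0, 0.9, 4.0])
mask = depth_mask(pz, 2.0)  # particles with |z| <= 1 survive
```

(SPH smoothing lengths complicate the real check a bit, since a particle's kernel can overlap the slab even when its center is outside it.)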

@chrishavlin chrishavlin added the enhancement Making something better label Oct 25, 2023
@chrishavlin chrishavlin marked this pull request as draft October 25, 2023 19:46
Member

@matthewturk matthewturk left a comment


I like this. Thank you! I've left some comments.

```cython
cdef np.float64_t * yiterv
cdef np.float64_t * ziterv

if weight_field is not None:
```
Member

I think you got this code from elsewhere, but I will say that it definitely confused me. I at first thought this was a string, but now I see it's either a numpy array or none. Yowza!

Contributor Author

@chrishavlin chrishavlin Nov 13, 2023

indeed! Copied directly from `pixelize_sph_kernel_projection`. Up in the Python layer somewhere there's a `weight` that is a string, so definitely confusing. Could rename this variable to `weight_array` (and also rename it in `pixelize_sph_kernel_projection`).

```cython
# pixelize_sph_kernel_projection.
local_buff = <np.float64_t *> malloc(sizeof(np.float64_t) * xsize * ysize)
xiterv = <np.float64_t *> malloc(sizeof(np.float64_t) * 2)
```
Member

I have a sneaking suspicion it's to avoid a race condition, but I think a long time ago we were supposed to be able to declare variables inside a parallel block to get them to be thread-local. No changes needed.

Contributor Author

helpful context! I was confused by why this was needed.

```cython
for yyi in range(ysize):
    buff[xxi, yyi] += local_buff[xxi + yyi*xsize]
free(local_buff)
free(xiterv)
```
Member

does this also need to happen inside the parallel context?

Contributor Author

hard to tell with the GitHub interface, but these lines are all within the

```cython
with nogil, parallel():
```

block way up top.

[screenshot of the enclosing block omitted]
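The thread-local-buffer pattern under discussion can be sketched with Python threads standing in for the Cython `prange` workers: each worker deposits into its own buffer, and the buffers are reduced into the shared image afterwards.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def accumulate(chunk, shape=(2, 2)):
    # Each worker writes only to its own local buffer (the analogue of
    # the malloc'd local_buff), so no two threads race on shared memory.
    local = np.zeros(shape)
    for i, j, v in chunk:
        local[i, j] += v
    return local

chunks = [[(0, 0, 1.0), (1, 1, 2.0)], [(0, 0, 3.0)]]
with ThreadPoolExecutor() as ex:
    buffs = list(ex.map(accumulate, chunks))
image = sum(buffs)  # reduction into the shared buffer, like the final loop
```

This is only an illustration of the race-avoidance idea; the actual Cython code does the reduction inside the `parallel()` block with raw pointers.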

```diff
@@ -1910,6 +2095,61 @@ def rotate_particle_coord(np.float64_t[:] px,
     return px_rotated, py_rotated, rot_bounds_x0, rot_bounds_x1, rot_bounds_y0, rot_bounds_y1


 @cython.boundscheck(False)
```
Member

So I think this function looks right. What I'm wondering is if it needs to be in Cython as-is. I think it could potentially take advantage of being in Cython by consolidating the matmul stuff with the copying of coordinates -- i.e., iterating over the particles one by one and avoiding temporary arrays. I'm not 100% sure that it's necessary, but it could potentially eliminate some allocations.

Contributor Author

Yeah, I agree -- it could be written with just numpy matrix operations at the Python level, or rewritten with different loop control.

Also, we could cut out an entire loop over the particles by calculating the rotated particle positions as needed within `pixelize_sph_kernel_projection_with_depth_check` (and `pixelize_sph_kernel_projection`)?
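The numpy-level alternative might look something like this (a sketch with a placeholder rotation matrix; the real routine builds the matrix from the normal and north vectors):

```python
import numpy as np

def rotate_coords(pos, rot_matrix):
    # Rotate an (N, 3) array of particle positions in one matmul,
    # avoiding a per-particle loop and extra temporary arrays.
    return pos @ rot_matrix.T

# placeholder matrix: 90-degree rotation about z, so x-hat maps to y-hat
rot = np.array([[0., -1., 0.],
                [1., 0., 0.],
                [0., 0., 1.]])
rotated = rotate_coords(np.array([[1., 0., 0.]]), rot)
```

Whether this beats the Cython loop likely depends on particle counts and how much the temporaries cost in practice.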

```cython
    weight_field=weight_field,
    check_period=0)
if depth_set:
    px_rotated, py_rotated, pz_rotated, \
```
Member

We shouldn't fix this here, but is this a case where something like a named tuple or simple data class could simplify our code?

Contributor Author

Yes, I do think that would help. It might be worth fixing here too :) I think it could cut down on some of the code duplication I'm introducing.
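A minimal sketch of the named-tuple idea, with hypothetical field names (not the actual return signature of `rotate_particle_coord`):

```python
from typing import NamedTuple
import numpy as np

class RotatedCoords(NamedTuple):
    # Hypothetical container bundling what rotate_particle_coord
    # currently returns as a long positional tuple.
    px: np.ndarray
    py: np.ndarray
    pz: np.ndarray
    bounds_x: tuple
    bounds_y: tuple

rc = RotatedCoords(np.zeros(3), np.zeros(3), np.zeros(3),
                   (0.0, 1.0), (0.0, 1.0))
```

Since a `NamedTuple` still unpacks positionally (`px, py, pz, bx, by = rc`), existing call sites could migrate incrementally.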

@nastasha-w
Contributor

I ended up putting in depth limits in SPH projections in #4939 -- too bad it was duplicate work.

@chrishavlin
Contributor Author

> I ended up putting in depth limits in SPH projections in #4939 -- too bad it was duplicate work.

great!! I'm just glad the functionality got in :) I'll close this one out.

@chrishavlin chrishavlin deleted the off_axis_sph_proj_depth branch September 23, 2024 14:40
Successfully merging this pull request may close these issues.

depth keyword in off-axis projections with sph particles has no effect?