Python frontend should use storage type of array to determine map schedule #1185

Closed
tbennun opened this issue Jan 19, 2023 · 1 comment · Fixed by #1262 · May be fixed by #1241

Comments

@tbennun
Collaborator

tbennun commented Jan 19, 2023

@alexnick83
In the following example, the storage types of the output arrays are correctly inferred:

```python
import dace

@dace.program
def add(a: dace.float32[10, 10] @ dace.StorageType.GPU_Global,
        b: dace.float32[10, 10] @ dace.StorageType.GPU_Global):
    return a + b @ b

add.to_sdfg().view()
```

However, the map generated for `a + b` would have the Default schedule, which translates to a CPU map and subsequently crashes.
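A minimal sketch of the inference rule this issue asks for, in plain Python with simplified stand-ins for `dace.StorageType` and `dace.ScheduleType` (the function name `infer_map_schedule` and the reduced enum members are illustrative, not DaCe API):

```python
from enum import Enum, auto

class StorageType(Enum):          # simplified stand-in for dace.StorageType
    Default = auto()
    GPU_Global = auto()

class ScheduleType(Enum):         # simplified stand-in for dace.ScheduleType
    CPU_Multicore = auto()
    GPU_Device = auto()

def infer_map_schedule(operand_storages):
    """Pick a map schedule from the storage types of the map's operands.

    If every operand lives in GPU global memory, the map should get a
    GPU schedule; if operands mix device and host storage, no single
    schedule is valid and we fail loudly instead of defaulting to CPU.
    """
    storages = set(operand_storages)
    if storages == {StorageType.GPU_Global}:
        return ScheduleType.GPU_Device
    if StorageType.GPU_Global in storages:
        raise ValueError("Cannot infer a map schedule for mixed "
                         "host/device operands; set storage explicitly.")
    return ScheduleType.CPU_Multicore
```

Under this rule, the example above (both operands `GPU_Global`) would get a GPU schedule rather than the crashing CPU default.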

Additionally, the following code should fail gracefully, since the output storage / computation schedule cannot be inferred:

```python
@dace.program
def add(a: dace.float32[10, 10] @ dace.StorageType.GPU_Global,
        b: dace.float32[10, 10]):
    return a + b
```

but it doesn't fail.
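The graceful failure could look like the following sketch: a validation pass over operand storage types that raises before code generation, instead of silently emitting a host map over device memory (`validate_operand_storages` and the reduced enum are hypothetical names, not DaCe API):

```python
from enum import Enum, auto

class StorageType(Enum):          # simplified stand-in for dace.StorageType
    Default = auto()
    GPU_Global = auto()

def validate_operand_storages(operand_storages):
    """Fail fast when operands mix GPU_Global with other storage types,
    since neither a CPU nor a GPU schedule is safe for all operands."""
    storages = set(operand_storages)
    if StorageType.GPU_Global in storages and len(storages) > 1:
        raise ValueError(
            "Cannot determine output storage / computation schedule: "
            "operands mix GPU_Global with other storage types.")
```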

@FlorianDeconinck
Contributor

+1

Hit this multiple times on different occasions. The worst part is that it can manifest as host code accessing device memory, crashing in non-obvious ways for coders who are not GPU natives.

Another example would be:

```python
a_device_array = np.min(a_device_array, 0.0)
```

Here the coder's mistake of using `np.min` instead of `np.minimum` leads to the generation of a reduction, which crashes because the generated code assumes `a_device_array` can be accessed from the host.
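For readers unfamiliar with the distinction: the two NumPy functions have different semantics, which is why the typo changes the generated dataflow. A small host-side illustration (plain NumPy, no DaCe involved):

```python
import numpy as np

a = np.array([[3.0, -1.0],
              [2.0, -4.0]])

# np.min is a reduction: the second argument is the axis, so this
# computes the column-wise minimum and collapses to shape (2,).
reduced = np.min(a, 0)          # -> array([ 2., -4.])

# np.minimum is elementwise: it clips each entry against 0.0 and
# keeps the original shape (2, 2).
clipped = np.minimum(a, 0.0)    # -> array([[ 0., -1.], [ 0., -4.]])
```

The reduction form is what forces the frontend to emit host-side reduce code over what is actually device memory.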
