Inconsistent output from fftconv_func and native PyTorch FFT #15
Comments
Hi, thanks for your interest! Can you provide more details about the differences in output you're seeing? They may be slight numerical errors due to fp32/fp16/bf16. We have a test in
Thanks so much for your reply. Using the code above, the error is ~1e-4. Is that within the acceptable range for the numerical errors you mentioned?
Yes, that is within the standard numerical error that can arise from two slightly different but mathematically equivalent implementations of the same operation, or from converting between fp32, fp16, and bf16.
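To illustrate the first cause, here is a minimal, self-contained sketch (an analogous pair of implementations, not the repo's actual fftconv_func/fftconv_ref) showing that a direct time-domain convolution and an FFT-based one agree only up to floating-point rounding in fp32:

```python
import torch

torch.manual_seed(0)

L = 1024
u = torch.randn(L)
k = torch.randn(L)

# Implementation 1: direct time-domain causal convolution.
# conv1d computes cross-correlation, so flip the kernel to get convolution.
direct = torch.nn.functional.conv1d(
    u.view(1, 1, -1), k.flip(-1).view(1, 1, -1), padding=L - 1
).view(-1)[:L]

# Implementation 2: FFT-based convolution, zero-padded to 2L to avoid wrap-around.
n = 2 * L
fft_out = torch.fft.irfft(
    torch.fft.rfft(u, n=n) * torch.fft.rfft(k, n=n), n=n
)[:L]

# Mathematically identical, but fp32 rounding leaves a small residual.
max_err = (direct - fft_out).abs().max().item()
print(f"max abs difference: {max_err:.1e}")
```

The two outputs are never bitwise equal; comparisons between such implementations should use a tolerance (e.g. torch.allclose with an appropriate atol) rather than exact equality.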
Hello,
I have noticed that an H3 layer, given the same input tensor, produces different output when use_fast_fftconv=True versus use_fast_fftconv=False. Similarly, I have tested the fftconv_func and fftconv_ref functions (in fftconv.py), and they give different outputs for the same input arguments. Is this behavior expected? Is fftconv_func performing some approximation that causes this relatively large Euclidean distance between the two outputs?
Thanks in advance for your help.
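As the maintainer's reply above suggests, differences of this magnitude need not come from any approximation; reduced-precision arithmetic alone is enough. A hypothetical round-trip through fp16 (purely illustrative; not necessarily what the fused kernel does internally) already introduces relative errors on the order of 5e-4:

```python
import torch

torch.manual_seed(0)

# Values in [0.5, 1.5): safely inside fp16's normal range, so the
# relative rounding error is bounded by roughly 2**-11 ~ 4.9e-4.
x = torch.rand(4, 256) + 0.5

# Hypothetical fp16 round-trip, as a fused half-precision kernel might
# perform internally for speed.
x_rt = x.to(torch.float16).to(torch.float32)

rel_err = ((x - x_rt).abs() / x).max().item()
print(f"max relative error after fp16 round-trip: {rel_err:.1e}")
```

That bound is the same order as the ~1e-4 gap reported in this thread, so a nonzero Euclidean distance between the two code paths is expected even when both compute the same mathematical operation.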