
【PaddlePaddle Hackathon 4】No.56 : add fp16 test and bf16 for bernoulli and trunc #51657

Closed
longranger2 wants to merge 28 commits

Conversation

longranger2
Contributor

@longranger2 longranger2 commented Mar 14, 2023

PR types

Others

PR changes

APIs

Description

  • add fp16 test and bf16 test for bernoulli
  • add fp16 test and bf16 test for trunc

Related link:
#51281

@paddle-bot

paddle-bot bot commented Mar 14, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@@ -98,5 +103,10 @@ def test_fixed_random_number(self):
paddle.enable_static()


class TestBernoulliFP16OP(TestBernoulliOp):
Contributor

Shouldn't a BF16 unit test be added as well?

Contributor Author

Yes, it has been added.

@longranger2
Contributor Author

@Vvsmile How should this error be fixed?
[compile-error screenshot omitted]

@longranger2
Contributor Author

longranger2 commented Apr 22, 2023

> @Vvsmile How should this error be fixed? [screenshot]

The error message says the compiler hit an ambiguity: when converting from const phi::dtype::float16 or const phi::dtype::bfloat16 to a built-in type, more than one conversion function is viable, namely operator float() const and operator double() const. When the compiler tries to decide how to convert phi::dtype::float16 or phi::dtype::bfloat16 to a built-in type, it cannot determine which of these functions to call, which causes the compilation error.

To fix this, the ambiguity has to be eliminated so that the compiler can unambiguously convert phi::dtype::float16 and phi::dtype::bfloat16 to built-in types. I did this by moving the implicit conversion into specialized convert_to_T functions, so that the conversion for each data type happens inside a type-specific implementation, which removes the ambiguity.

I modified the bernoulli_cuda_kernel function so that it calls convert_to_T with the two arguments (&rand.x)[j] and x_data[idx], instead of comparing them directly. That way the specialized convert_to_T function receives x_data[idx] in its original type, and no conversion is needed at the call site.

The comparison is then performed inside the specialized convert_to_T function, so each data type gets type-specific handling. For phi::dtype::float16 and phi::dtype::bfloat16, the value is first explicitly cast to float and then compared; for float and double, the comparison is done directly. This removes the conversion ambiguity and fixes the compilation error.

@paddle-ci-bot

paddle-ci-bot bot commented Apr 30, 2023

Sorry to inform you that 099d3bb's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

@longranger2 longranger2 requested a review from Vvsmile May 5, 2023 01:24
@ZzSean
Contributor

ZzSean commented May 5, 2023

@@ -55,7 +82,7 @@ __global__ void bernoulli_cuda_kernel(
  for (size_t j = 0; j < 4; j++) {
    size_t idx = i + j;
    if (idx < size) {
-     out_data[idx] = static_cast<T>((&rand.x)[j] <= x_data[idx]);
+     out_data[idx] = convert_to_T<T>((&rand.x)[j], x_data[idx]);
Contributor
@ZzSean May 5, 2023

It seems you could just use MPType here and cast x_data[idx]:
out_data[idx] = static_cast<T>((&rand.x)[j] <= static_cast<MPType>(x_data[idx]));

Contributor Author

OK 👌

- __device__ TruncFunctor(const T x) : x_(x) {}
- __device__ T operator()() { return trunc(x_); }
+ __device__ TruncFunctor(T x) : x_(x) {}
+ __device__ T operator()() { return device_trunc(x_); }
Contributor

I think this one could also be computed directly with MPType.

Contributor Author

OK 👌

- self.inputs = {"X": np.random.uniform(size=(1000, 784))}
+ self.inputs = {
+     "X": np.random.uniform(size=(1000, 784)).astype(self.dtype)
+ }
  self.attrs = {}
  self.outputs = {"Out": np.zeros((1000, 784)).astype("float32")}
Contributor

For float16, the output shouldn't be float32, right?

Contributor Author

OK 👌

@paddle-ci-bot

paddle-ci-bot bot commented May 24, 2023

Sorry to inform you that 10336f8's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

@paddle-bot

paddle-bot bot commented Jun 3, 2023

Sorry to inform you that through our discussion, your PR fails to meet the merging standard (Reference: Paddle Custom Operator Design Doc). You can also submit a new one. Thank you.

5 participants