Accuracy drop after fuse_layer_norms #34

Open
Niko-zyf opened this issue Jul 26, 2024 · 1 comment

Niko-zyf commented Jul 26, 2024

I encountered an unexpected precision loss while using QuaRot. I ran comparison experiments on LLaMA-2-7b:

1. w4a16 RTN quantization of the model: PPL (perplexity) 7.354664.
2. Only fuse_layer_norms, without any rotation operations: PPL 7.641615. Since fusing the layer norms is supposed to be an equivalent transformation, this result should match the first.
3. Additionally replacing RMSN with LlamaRMSNorm in fuse_layer_norms and setting self.weight to 1: PPL 9.577629.

Logically, these three results should be equal, shouldn't they? My RMSN class is below (a sketch of the equivalence I expected follows the code).
```python
import torch


class RMSN(torch.nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        """
        LlamaRMSNorm is equivalent to T5LayerNorm
        """
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(hidden_size)).to(torch.float16)
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # Compute the RMS statistic in float32, then cast back to the input dtype.
        input_dtype = hidden_states.dtype
        hidden_states = hidden_states.to(torch.float32)
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = self.weight * hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return hidden_states.to(input_dtype)
```
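Here is a minimal numerical sketch of the equivalence I expected (the helper names are illustrative, not the repo's actual fuse_layer_norms code): folding the RMSNorm weight into the following nn.Linear is an exact reparameterization, so the norm weight can then be set to ones without changing the outputs, up to floating-point error.

```python
# Minimal sketch, not QuaRot's fuse_layer_norms itself; names are illustrative.
import torch

torch.manual_seed(0)
hidden, out_features = 16, 32
x = torch.randn(4, hidden, dtype=torch.float64)

gamma = torch.rand(hidden, dtype=torch.float64) + 0.5               # RMSNorm weight
linear = torch.nn.Linear(hidden, out_features, bias=False, dtype=torch.float64)

def rms_normalize(h, eps=1e-6):
    """Normalization only; the per-channel weight is applied separately."""
    variance = h.pow(2).mean(-1, keepdim=True)
    return h * torch.rsqrt(variance + eps)

# Original path: weighted RMSNorm followed by the linear layer.
y_ref = linear(gamma * rms_normalize(x))

# Fused path: absorb gamma into the linear weight, then use a unit norm weight.
fused = torch.nn.Linear(hidden, out_features, bias=False, dtype=torch.float64)
with torch.no_grad():
    fused.weight.copy_(linear.weight * gamma)   # scales column j by gamma[j]
y_fused = fused(rms_normalize(x))

print(torch.allclose(y_ref, y_fused))           # True: the fusion is exact
```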

@tellyoung

The non-equivalence comes from the RMSNorm weight: the per-channel weight cannot remain between the normalized activations X and the rotation Q. Unless it is folded into the following linear layers (leaving the norm weight as all ones), Q.T cannot be canceled.
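A hedged numerical illustration of this point (random matrices, not the repo's code): an orthogonal rotation Q only commutes with a per-channel scale when that scale is constant, so a non-identity RMSNorm weight left next to the rotation prevents Q.T from cancelling.

```python
# Illustrative only: Q.T @ diag(w) @ Q == diag(w) holds for a constant w,
# but not for a genuine per-channel weight.
import torch

torch.manual_seed(0)
n = 8
q, _ = torch.linalg.qr(torch.randn(n, n, dtype=torch.float64))   # random orthogonal Q

w_const = torch.full((n,), 2.0, dtype=torch.float64)             # constant scale
w_varied = torch.rand(n, dtype=torch.float64) + 0.5              # per-channel weight

for w in (w_const, w_varied):
    print(torch.allclose(q.T @ torch.diag(w) @ q, torch.diag(w)))
# -> True, then False: a per-channel weight has to be folded away before
#    the rotation, otherwise the transform is no longer equivalent.
```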
