Generated compute_size() returns incorrect value on packed repeated fields #281
Comments
stepancheg added a commit that referenced this issue on Apr 2, 2018
Thank you! Fixed and published as 1.5.1.
stepancheg added a commit that referenced this issue on Apr 2, 2018
Also published 1.4.5 from 1.4 branch.
stepancheg added a commit that referenced this issue on Apr 2, 2018
I'm writing to a format that consists of length-delimited proto messages (TensorFlow's TFRecord format), and I was having trouble getting my output read back correctly. Since I was writing to a stream, I used `compute_size` to get the length and write it to the stream before writing the actual data. Unfortunately, I noticed that the length returned was smaller than the actual number of bytes written. After some debugging, I tracked it down to the following line (in this particular case, a packed repeated field of 32-bit floats):
Notice that `my_size` is the size of the entire message, in bytes. A length-delimited field consists of a tag (hence the `1`), the length of the data contained in it (in bytes), and then the data itself (`value.len()` floats, each consisting of 4 bytes). The error here is that this function calculates the size of the varint that would encode the number of elements in the list, when it should calculate the size of the varint that encodes the number of bytes the list's contents take. I've fixed this by hand in my generated files, since I only have a couple of lists in the protos I'm using, so I don't know where the bug is in the actual codegen, but hopefully this will allow someone to fix it.
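For illustration, a corrected version of the same fragment would size the length prefix over the payload's byte length (same assumptions as above). The difference is visible with, say, 100 floats: the payload is 400 bytes, and a varint for 100 takes 1 byte while a varint for 400 takes 2, so the buggy version under-reports the size by one byte.

```rust
// Corrected sketch: the length prefix covers the packed payload's byte length.
if !self.value.is_empty() {
    let data_len = (self.value.len() * 4) as u32; // 4 bytes per 32-bit float
    my_size += 1                                              // tag
        + ::protobuf::rt::compute_raw_varint32_size(data_len) // length prefix
        + data_len;                                           // packed data
}
```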