
Version [email protected] is broken #2318

Closed
Elrendio opened this issue Jan 10, 2023 · 28 comments

@Elrendio

Problem description

After updating to version 1.8.2, our scripts randomly stop without errors. Everything works when the version is pinned to 1.8.1. I have not investigated further.

Environment

  • OS name, version and architecture: Arch Linux amd64
  • Node version 19.3.0
  • Node installation method: yay
  • Compiler version (if applicable): unknown
  • Package name and version: [email protected]
@mbwhite

mbwhite commented Jan 10, 2023

We've also seen issues with 1.8.2 today. Initially it looked like a programming error that triggered the stream to close.

On a full-duplex stream, a message is sent from the client to the server, and the client appears to instantly close the stream before the server can respond.

@murgatroid99
Member

Can you share a trace of a client doing that? You can get one by running the client with the environment variables GRPC_TRACE=all and GRPC_VERBOSITY=DEBUG.
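For reference, those are plain environment variables, so an invocation might look like this (`client.js` is a placeholder for the affected client script; grpc-js writes its trace output to stderr):

```shell
GRPC_TRACE=all GRPC_VERBOSITY=DEBUG node client.js 2> grpc-trace.log
```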

jyecusch added a commit to nitrictech/node-sdk that referenced this issue Jan 11, 2023
jyecusch added a commit to nitrictech/node-sdk that referenced this issue Jan 11, 2023
tjholm added a commit to nitrictech/node-sdk that referenced this issue Jan 11, 2023
@mbwhite

mbwhite commented Jan 11, 2023

@murgatroid99 thanks, I will try to get that information. I see there's a PR as well; if a build of that is available, I might be able to try it too.

@murgatroid99
Member

I published version 1.8.3 with the change from that PR. Please try it out.

@nikolaymatrosov

I also observe the same strange behavior: my app silently exits unexpectedly, so I had to downgrade the package version all the way to 1.7.3.
Unfortunately, I cannot trace the reason even with the debug env variables turned on.

The message I get when I use the debugger in WebStorm is Process finished with exit code 130 (interrupted by signal 2: SIGINT). I understand it is probably not directly related to the @grpc/grpc-js package, but maybe someone else gets the same symptoms and could dig into debugging further.

The strangest thing is that setting 'grpc.service_config_disable_resolution': 1 instead of downgrading also works for me, and I do not understand why. The cause may be related to DNS TXT resolution, but resolution errors should be ignored and not interfere with the success path.
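For context, channel options like that one are passed when constructing a grpc-js client. A hypothetical fragment of that workaround (`EchoClient` and the target address are made-up placeholders, not from this thread):

```javascript
const grpc = require('@grpc/grpc-js');

// EchoClient and the address are hypothetical placeholders.
const client = new EchoClient(
  'dns:///my-service.example.com:443',
  grpc.credentials.createSsl(),
  { 'grpc.service_config_disable_resolution': 1 } // skip the TXT lookup
);
```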

@murgatroid99
Member

What do you mean by "Unfortunately, I cannot trace the reason even with debug env variables turned on."? Does it not output anything at all?

@nikolaymatrosov

It outputs a lot of info, but I can't find in it the reason the process stops silently.

@murgatroid99
Member

The fact that disabling service config resolution stops the problem is especially weird because as far as I can remember, the only thing that has changed about the TXT resolution code path is adding that option to skip that code path. One possibility is that the TXT resolution is actually succeeding, and that is triggering the use of a service config that causes some misbehavior in other new code.

@travisghansen

travisghansen commented Jan 12, 2023

Somewhere after 1.8.1 I started having an issue on the client side. Here are the logs:

v1.8.1 (works)

# client
NET 13212: createConnection [
  { host: '127.0.0.1', port: 50051 },
  null,
  [Symbol(normalizedArgs)]: true
]
NET 13212: pipe false undefined
HTTP2 13212: Http2Session client: created
(node:13212) Warning: Setting the NODE_DEBUG environment variable to 'http2' can expose sensitive data (such as passwords, tokens and authentication headers) in the resulting log.
(Use `node --trace-warnings ...` to show where the warning was created)
NET 13212: afterConnect
HTTP2 13212: Http2Session client: setting up session handle
Http2Session client (42) session created
Http2Session client (42) i/o stream consumed
HTTP2 13212: Http2Session client: sending settings
HTTP2 13212: Http2Session client: submitting settings
Http2Session client (42) scheduling write
STREAM 13212: read 0
STREAM 13212: need readable false
STREAM 13212: length less than watermark true
STREAM 13212: do read
NET 13212: _read
NET 13212: Socket._handle.readStart
HTTP2 13212: Http2Session client: initiating request
HTTP2 13212: Http2Session client: connected, initializing request
Http2Session client (42) request submitted
Http2Priority: parent: 0, weight: 16, exclusive: no
Http2Session client (42) submitting request
Http2Session client (42) request submitted, new stream id 1
Http2Session client (42) refreshing state
Http2Session client (42) refreshing state
STREAM 13212: resume
HttpStream 1 (46) [Http2Session client (42)] queuing 1 buffers to send
STREAM 13212: resume false
STREAM 13212: read 0
STREAM 13212: need readable false
STREAM 13212: length less than watermark true
STREAM 13212: do read
HttpStream 1 (46) [Http2Session client (42)] reading starting
STREAM 13212: flow true
STREAM 13212: read undefined
STREAM 13212: need readable true
STREAM 13212: length less than watermark true
STREAM 13212: reading, ended or constructing false
Http2Session client (42) sending pending data
Http2Session client (42) nghttp2 has 24 bytes to send
Http2Session client (42) nghttp2 has 9 bytes to send
Http2Session client (42) nghttp2 has 128 bytes to send
Http2Session client (42) reading outbound data for stream 1
Http2Session client (42) stream 1 has pending outbound data
Http2Session client (42) sending 27 bytes for data frame on stream 1
Http2Session client (42) nghttp2 has 27 bytes to send directly
Http2Session client (42) reading outbound data for stream 1
Http2Session client (42) deferring stream 1
STREAM 13212: onWriteComplete 0 undefined
HttpStream 1 (46) [Http2Session client (42)] writable side shutdown
Http2Session client (42) scheduling write
Http2Session client (42) wants read? 1
Http2Session client (42) sending pending data
Http2Session client (42) reading outbound data for stream 1
Http2Session client (42) no more data for stream 1
Http2Session client (42) nghttp2 has 9 bytes to send
Http2Session client (42) wants read? 1
Http2Session client (42) receiving 9 bytes, offset 0
Http2Session client (42) receiving 9 bytes [wants data? 1]
Http2Session client (42) complete frame received: type: 4
HTTP2 13212: Http2Session client: new settings received
Http2Session client (42) settings refreshed for session
Http2Session client (42) sending pending data
Http2Session client (42) nghttp2 has 9 bytes to send
Http2Session client (42) wants read? 1
Http2Session client (42) wants read? 1
Http2Session client (42) receiving 9 bytes, offset 0
Http2Session client (42) receiving 9 bytes [wants data? 1]
Http2Session client (42) complete frame received: type: 4
HTTP2 13212: Http2Session client: settings received
Http2Session client (42) settings refreshed for session
Http2Session client (42) sending pending data
Http2Session client (42) wants read? 1
Http2Session client (42) receiving 118 bytes, offset 0
Http2Session client (42) receiving 118 bytes [wants data? 1]
Http2Session client (42) beginning headers for stream 1
HttpStream 1 (46) [Http2Session client (42)] starting headers, category: true
Http2Session client (42) complete frame received: type: 1
Http2Session client (42) handle headers frame for stream 1
Http2Session client (42) buffering data chunk for stream 1, size: 5, flags: 0
STREAM 13212: readableAddChunk <Buffer 00 00 00 00 00>
STREAM 13212: call pause flowing=true
STREAM 13212: pause
HttpStream 1 (46) [Http2Session client (42)] reading stopped
STREAM 13212: maybeReadMore read 0
STREAM 13212: read 0
STREAM 13212: need readable true
STREAM 13212: length less than watermark true
STREAM 13212: do read
HttpStream 1 (46) [Http2Session client (42)] reading starting
Http2Session client (42) complete frame received: type: 0
Http2Session client (42) handling data frame for stream 1
Http2Session client (42) sending pending data
Http2Session client (42) wants read? 1
Http2Session client (42) receiving 35 bytes, offset 0
Http2Session client (42) receiving 35 bytes [wants data? 1]
Http2Session client (42) beginning headers for stream 1
HttpStream 1 (46) [Http2Session client (42)] starting headers, category: true
Http2Session client (42) complete frame received: type: 1
Http2Session client (42) handle headers frame for stream 1
STREAM 13212: readableAddChunk null
STREAM 13212: onEofChunk
STREAM 13212: emitReadable_ false 0 true
STREAM 13212: flow false
HttpStream 1 (46) [Http2Session client (42)] sending rst_stream with code 0
Http2Session client (42) sending pending data
Http2Session client (42) stream 1 closed with code: 0
Http2Session client (42) sending pending data
Http2Session client (42) nghttp2 has 13 bytes to send
Http2Session client (42) wants read? 1
Http2Session client (42) wants read? 1
Http2Session client (42) stream 1 closed with code: 0
HttpStream 1 (46) [Http2Session client (42)] closed with code 0
STREAM 13212: readableAddChunk null
STREAM 13212: onEofChunk
STREAM 13212: read 0
STREAM 13212: endReadable false
STREAM 13212: endReadableNT false 0
STREAM 13212: resume
STREAM 13212: readableAddChunk null
STREAM 13212: onEofChunk
HttpStream 1 (46) [Http2Session client (42)] destroying stream
HttpStream 1 (46) [Http2Session client (42)] destroying stream
STREAM 13212: resume false
STREAM 13212: read 0
STREAM 13212: endReadable true
STREAM 13212: flow true
STREAM 13212: read undefined
STREAM 13212: endReadable true
STREAM 13212: read 0
STREAM 13212: endReadable true
NodeUnstageVolume - error: null, data: {}
HttpStream 1 (46) [Http2Session client (42)] tearing down stream
Http2Session client (42) freeing nghttp2 session
# server
NET 13162: onconnection
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
NET 13162: _read
NET 13162: Socket._handle.readStart
HTTP2 13162: Http2Session server: received a connection
HTTP2 13162: Http2Session server: setting up session handle
HTTP2 13162: Http2Session server: sending settings
HTTP2 13162: Http2Session server: submitting settings
HTTP2 13162: Http2Session server: created
(node:13162) Warning: Setting the NODE_DEBUG environment variable to 'http2' can expose sensitive data (such as passwords, tokens and authentication headers) in the resulting log.
(Use `node --trace-warnings ...` to show where the warning was created)
D 2023-01-12T00:13:33.490Z | server | (1) Received call to method /csi.v1.Node/NodeUnstageVolume at address null
D 2023-01-12T00:13:33.491Z | server_call | Request to /csi.v1.Node/NodeUnstageVolume received headers {"grpc-accept-encoding":["identity,deflate,gzip"],"accept-encoding":["identity"],"user-agent":["grpc-node-js/1.8.1"],"content-type":["application/grpc"],"te":["trailers"]}
STREAM 13162: resume
STREAM 13162: resume false
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: reading, ended or constructing false
STREAM 13162: readableAddChunk <Buffer 00 00 00 00 16 0a 07 50 56 43 2d 31 31 31 12 0b 2f 6d 6e 74 2f 73 74 61 67 65 64>
STREAM 13162: maybeReadMore read 0
STREAM 13162: read 0
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: do read
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: endReadable false
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
STREAM 13162: readableAddChunk {
  level: '\x1B[32minfo\x1B[39m',
  message: 'new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.1"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false}',
  timestamp: '2023-01-12T00:13:33.494Z',
  [Symbol(level)]: 'info',
  [Symbol(splat)]: [
    'ControllerZfsGenericDriver',
    'NodeUnstageVolume',
    { metadata: [Object], request: [Object], cancelled: false }
  ],
  [Symbol(message)]: '\x1B[32minfo\x1B[39m: new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.1"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false} {"timestamp":"2023-01-12T00:13:33.494Z"}'
}
STREAM 13162: ondata
info: new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.1"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false} {"timestamp":"2023-01-12T00:13:33.494Z"}
STREAM 13162: dest.write true
STREAM 13162: readableAddChunk {
  level: '\x1B[34mdebug\x1B[39m',
  message: 'operation lock keys: ["volume_id_PVC-111"]',
  timestamp: '2023-01-12T00:13:33.494Z',
  [Symbol(level)]: 'debug',
  [Symbol(splat)]: [ [ 'volume_id_PVC-111' ] ],
  [Symbol(message)]: '\x1B[34mdebug\x1B[39m: operation lock keys: ["volume_id_PVC-111"] {"timestamp":"2023-01-12T00:13:33.494Z"}'
}
STREAM 13162: ondata
debug: operation lock keys: ["volume_id_PVC-111"] {"timestamp":"2023-01-12T00:13:33.494Z"}
STREAM 13162: dest.write true
executing mount command: findmnt --mountpoint /mnt/staged/block_device --output source,target,fstype,label,options -b -J --nofsroot
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
NET 13162: _read
NET 13162: Socket._handle.readStart
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
NET 13162: _read
NET 13162: Socket._handle.readStart
STREAM 13162: resume
STREAM 13162: resume
STREAM 13162: endReadableNT true 0
STREAM 13162: maybeReadMore read 0
STREAM 13162: read 0
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: do read
STREAM 13162: resume true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: reading, ended or constructing false
STREAM 13162: resume true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: reading, ended or constructing false
HTTP2 13162: Http2Session server: settings received
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: endReadable false
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
STREAM 13162: endReadableNT true 0
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: endReadable false
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
STREAM 13162: endReadableNT true 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
NET 13162: emit close
NET 13162: emit close
NET 13162: emit close
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
NET 13162: _read
NET 13162: Socket._handle.readStart
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
NET 13162: _read
NET 13162: Socket._handle.readStart
STREAM 13162: resume
STREAM 13162: resume
STREAM 13162: resume true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: reading, ended or constructing false
STREAM 13162: resume true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: reading, ended or constructing false
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: endReadable false
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
STREAM 13162: endReadableNT true 0
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: endReadable false
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
STREAM 13162: endReadableNT true 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
NET 13162: emit close
NET 13162: emit close
NET 13162: emit close
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
NET 13162: _read
NET 13162: Socket._handle.readStart
STREAM 13162: read 0
STREAM 13162: need readable false
STREAM 13162: length less than watermark true
STREAM 13162: do read
NET 13162: _read
NET 13162: Socket._handle.readStart
STREAM 13162: resume
STREAM 13162: resume
STREAM 13162: resume true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: reading, ended or constructing false
STREAM 13162: resume true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: reading, ended or constructing false
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: endReadable false
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
STREAM 13162: endReadableNT true 0
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow true
STREAM 13162: read undefined
STREAM 13162: endReadable false
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
STREAM 13162: endReadableNT true 0
NET 13162: destroy
NET 13162: close
NET 13162: close handle
NET 13162: emit close
NET 13162: emit close
NET 13162: emit close
retry - failed condition, not trying again
STREAM 13162: readableAddChunk {
  level: '\x1B[32minfo\x1B[39m',
  message: 'new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {}',
  timestamp: '2023-01-12T00:13:33.524Z',
  [Symbol(level)]: 'info',
  [Symbol(splat)]: [ 'ControllerZfsGenericDriver', 'NodeUnstageVolume', {} ],
  [Symbol(message)]: '\x1B[32minfo\x1B[39m: new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {} {"timestamp":"2023-01-12T00:13:33.524Z"}'
}
STREAM 13162: ondata
info: new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {} {"timestamp":"2023-01-12T00:13:33.524Z"}
STREAM 13162: dest.write true
D 2023-01-12T00:13:33.525Z | server_call | Request to method /csi.v1.Node/NodeUnstageVolume ended with status code: OK details: OK
STREAM 13162: maybeReadMore read 0
STREAM 13162: read 0
STREAM 13162: need readable true
STREAM 13162: length less than watermark true
STREAM 13162: do read
STREAM 13162: onWriteComplete 0 undefined
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
D 2023-01-12T00:13:33.528Z | server_call | Request to method /csi.v1.Node/NodeUnstageVolume stream closed with rstCode 0
STREAM 13162: readableAddChunk null
STREAM 13162: onEofChunk
STREAM 13162: emitReadable_ false 0 true
STREAM 13162: flow null
STREAM 13162: read 0
STREAM 13162: endReadable false
STREAM 13162: endReadableNT false 0
NET 13162: _final: not ended, call shutdown()
NET 13162: afterShutdown destroyed=false ReadableState {
  objectMode: false,
  highWaterMark: 16384,
  buffer: BufferList { head: null, tail: null, length: 0 },
  length: 0,
  pipes: [],
  flowing: null,
  ended: true,
  endEmitted: true,
  reading: false,
  constructed: true,
  sync: false,
  needReadable: false,
  emittedReadable: false,
  readableListening: false,
  resumeScheduled: false,
  errorEmitted: false,
  emitClose: false,
  autoDestroy: true,
  destroyed: false,
  errored: null,
  closed: false,
  closeEmitted: false,
  defaultEncoding: 'utf8',
  awaitDrainWriters: null,
  multiAwaitDrain: false,
  readingMore: false,
  dataEmitted: false,
  decoder: null,
  encoding: null,
  [Symbol(kPaused)]: null
}
NET 13162: destroy
NET 13162: close
NET 13162: close handle
NET 13162: has server
NET 13162: SERVER _emitCloseIfDrained
NET 13162: SERVER handle? true   connections? 0
NET 13162: emit close
HTTP2 13162: Http2Session server: socket closed
HTTP2 13162: Http2Session server: marking session closed
HTTP2 13162: Http2Session server: submitting goaway
HTTP2 13162: Http2Session server: destroying
HTTP2 13162: Http2Session server: start closing/destroying undefined
HTTP2 13162: Http2Session server: finishSessionClose

v1.8.2 (does not work, at least from the client's perspective)

# client
NET 12076: createConnection [ { path: '/tmp/csi.sock' }, null, [Symbol(normalizedArgs)]: true ]
NET 12076: pipe true /tmp/csi.sock
HTTP2 12076: Http2Session client: created
(node:12076) Warning: Setting the NODE_DEBUG environment variable to 'http2' can expose sensitive data (such as passwords, tokens and authentication headers) in the resulting log.
(Use `node --trace-warnings ...` to show where the warning was created)
NET 12076: afterConnect
HTTP2 12076: Http2Session client: setting up session handle
Http2Session client (29) session created
Http2Session client (29) i/o stream consumed
HTTP2 12076: Http2Session client: sending settings
HTTP2 12076: Http2Session client: submitting settings
Http2Session client (29) scheduling write
STREAM 12076: read 0
STREAM 12076: need readable false
STREAM 12076: length less than watermark true
STREAM 12076: do read
NET 12076: _read
NET 12076: Socket._handle.readStart
HTTP2 12076: Http2Session client: initiating request
HTTP2 12076: Http2Session client: connected, initializing request
Http2Session client (29) request submitted
Http2Priority: parent: 0, weight: 16, exclusive: no
Http2Session client (29) submitting request
Http2Session client (29) request submitted, new stream id 1
Http2Session client (29) refreshing state
Http2Session client (29) refreshing state
STREAM 12076: resume
HttpStream 1 (34) [Http2Session client (29)] queuing 1 buffers to send
STREAM 12076: resume false
STREAM 12076: read 0
STREAM 12076: need readable false
STREAM 12076: length less than watermark true
STREAM 12076: do read
HttpStream 1 (34) [Http2Session client (29)] reading starting
STREAM 12076: flow true
STREAM 12076: read undefined
STREAM 12076: need readable true
STREAM 12076: length less than watermark true
STREAM 12076: reading, ended or constructing false
Http2Session client (29) sending pending data
Http2Session client (29) nghttp2 has 24 bytes to send
Http2Session client (29) nghttp2 has 9 bytes to send
Http2Session client (29) nghttp2 has 123 bytes to send
Http2Session client (29) reading outbound data for stream 1
Http2Session client (29) stream 1 has pending outbound data
Http2Session client (29) sending 27 bytes for data frame on stream 1
Http2Session client (29) nghttp2 has 27 bytes to send directly
Http2Session client (29) reading outbound data for stream 1
Http2Session client (29) deferring stream 1
STREAM 12076: onWriteComplete 0 undefined
HttpStream 1 (34) [Http2Session client (29)] writable side shutdown
Http2Session client (29) scheduling write
Http2Session client (29) wants read? 1
Http2Session client (29) sending pending data
Http2Session client (29) reading outbound data for stream 1
Http2Session client (29) no more data for stream 1
Http2Session client (29) nghttp2 has 9 bytes to send
Http2Session client (29) wants read? 1
Http2Session client (29) freeing nghttp2 session
HttpStream 1 (34) [Http2Session client (29)] tearing down stream
# server
NET 12761: onconnection
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
NET 12761: _read
NET 12761: Socket._handle.readStart
HTTP2 12761: Http2Session server: received a connection
HTTP2 12761: Http2Session server: setting up session handle
HTTP2 12761: Http2Session server: sending settings
HTTP2 12761: Http2Session server: submitting settings
HTTP2 12761: Http2Session server: created
(node:12761) Warning: Setting the NODE_DEBUG environment variable to 'http2' can expose sensitive data (such as passwords, tokens and authentication headers) in the resulting log.
(Use `node --trace-warnings ...` to show where the warning was created)
D 2023-01-12T00:10:25.705Z | server | (1) Received call to method /csi.v1.Node/NodeUnstageVolume at address null
D 2023-01-12T00:10:25.706Z | server_call | Request to /csi.v1.Node/NodeUnstageVolume received headers {"grpc-accept-encoding":["identity,deflate,gzip"],"accept-encoding":["identity"],"user-agent":["grpc-node-js/1.8.2"],"content-type":["application/grpc"],"te":["trailers"]}
STREAM 12761: resume
STREAM 12761: resume false
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: reading, ended or constructing false
STREAM 12761: readableAddChunk <Buffer 00 00 00 00 16 0a 07 50 56 43 2d 31 31 31 12 0b 2f 6d 6e 74 2f 73 74 61 67 65 64>
STREAM 12761: maybeReadMore read 0
STREAM 12761: read 0
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: do read
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: endReadable false
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
STREAM 12761: readableAddChunk {
  level: '\x1B[32minfo\x1B[39m',
  message: 'new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.2"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false}',
  timestamp: '2023-01-12T00:10:25.712Z',
  [Symbol(level)]: 'info',
  [Symbol(splat)]: [
    'ControllerZfsGenericDriver',
    'NodeUnstageVolume',
    { metadata: [Object], request: [Object], cancelled: false }
  ],
  [Symbol(message)]: '\x1B[32minfo\x1B[39m: new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.2"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false} {"timestamp":"2023-01-12T00:10:25.712Z"}'
}
STREAM 12761: ondata
info: new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.2"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false} {"timestamp":"2023-01-12T00:10:25.712Z"}
STREAM 12761: dest.write true
STREAM 12761: readableAddChunk {
  level: '\x1B[34mdebug\x1B[39m',
  message: 'operation lock keys: ["volume_id_PVC-111"]',
  timestamp: '2023-01-12T00:10:25.713Z',
  [Symbol(level)]: 'debug',
  [Symbol(splat)]: [ [ 'volume_id_PVC-111' ] ],
  [Symbol(message)]: '\x1B[34mdebug\x1B[39m: operation lock keys: ["volume_id_PVC-111"] {"timestamp":"2023-01-12T00:10:25.713Z"}'
}
STREAM 12761: ondata
debug: operation lock keys: ["volume_id_PVC-111"] {"timestamp":"2023-01-12T00:10:25.713Z"}
STREAM 12761: dest.write true
executing mount command: findmnt --mountpoint /mnt/staged/block_device --output source,target,fstype,label,options -b -J --nofsroot
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
NET 12761: _read
NET 12761: Socket._handle.readStart
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
NET 12761: _read
NET 12761: Socket._handle.readStart
STREAM 12761: resume
STREAM 12761: resume
STREAM 12761: endReadableNT true 0
STREAM 12761: maybeReadMore read 0
STREAM 12761: read 0
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: do read
STREAM 12761: resume true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: reading, ended or constructing false
STREAM 12761: resume true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: reading, ended or constructing false
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow null
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
NET 12761: _final: not ended, call shutdown()
NET 12761: afterShutdown destroyed=false ReadableState {
  objectMode: false,
  highWaterMark: 16384,
  buffer: BufferList { head: null, tail: null, length: 0 },
  length: 0,
  pipes: [],
  flowing: null,
  ended: true,
  endEmitted: true,
  reading: false,
  constructed: true,
  sync: false,
  needReadable: false,
  emittedReadable: false,
  readableListening: false,
  resumeScheduled: false,
  errorEmitted: false,
  emitClose: false,
  autoDestroy: true,
  destroyed: false,
  errored: null,
  closed: false,
  closeEmitted: false,
  defaultEncoding: 'utf8',
  awaitDrainWriters: null,
  multiAwaitDrain: false,
  readingMore: false,
  dataEmitted: false,
  decoder: null,
  encoding: null,
  [Symbol(kPaused)]: null
}
NET 12761: destroy
NET 12761: close
NET 12761: close handle
NET 12761: has server
NET 12761: SERVER _emitCloseIfDrained
NET 12761: SERVER handle? true   connections? 0
NET 12761: emit close
HTTP2 12761: Http2Session server: socket closed
HTTP2 12761: Http2Session server: marking session closed
HTTP2 12761: Http2Session server: submitting goaway
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
HTTP2 12761: Http2Session server: destroying
HTTP2 12761: Http2Session server: start closing/destroying undefined
HTTP2 12761: Http2Session server: finishSessionClose
D 2023-01-12T00:10:25.731Z | server_call | Request to method /csi.v1.Node/NodeUnstageVolume stream closed with rstCode 8
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: endReadable false
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
STREAM 12761: endReadableNT true 0
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: endReadable false
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
STREAM 12761: endReadableNT true 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
NET 12761: emit close
NET 12761: emit close
NET 12761: emit close
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
NET 12761: _read
NET 12761: Socket._handle.readStart
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
NET 12761: _read
NET 12761: Socket._handle.readStart
STREAM 12761: resume
STREAM 12761: resume
STREAM 12761: resume true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: reading, ended or constructing false
STREAM 12761: resume true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: reading, ended or constructing false
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: endReadable false
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
STREAM 12761: endReadableNT true 0
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: endReadable false
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
STREAM 12761: endReadableNT true 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
NET 12761: emit close
NET 12761: emit close
NET 12761: emit close
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
NET 12761: _read
NET 12761: Socket._handle.readStart
STREAM 12761: read 0
STREAM 12761: need readable false
STREAM 12761: length less than watermark true
STREAM 12761: do read
NET 12761: _read
NET 12761: Socket._handle.readStart
STREAM 12761: resume
STREAM 12761: resume
STREAM 12761: resume true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: reading, ended or constructing false
STREAM 12761: resume true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: reading, ended or constructing false
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: endReadable false
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
STREAM 12761: endReadableNT true 0
STREAM 12761: readableAddChunk null
STREAM 12761: onEofChunk
STREAM 12761: emitReadable_ false 0 true
STREAM 12761: flow true
STREAM 12761: read undefined
STREAM 12761: endReadable false
STREAM 12761: read 0
STREAM 12761: endReadable false
STREAM 12761: endReadableNT false 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
STREAM 12761: endReadableNT true 0
NET 12761: destroy
NET 12761: close
NET 12761: close handle
NET 12761: emit close
NET 12761: emit close
NET 12761: emit close
retry - failed condition, not trying again
STREAM 12761: readableAddChunk {
  level: '\x1B[32minfo\x1B[39m',
  message: 'new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {}',
  timestamp: '2023-01-12T00:10:25.752Z',
  [Symbol(level)]: 'info',
  [Symbol(splat)]: [ 'ControllerZfsGenericDriver', 'NodeUnstageVolume', {} ],
  [Symbol(message)]: '\x1B[32minfo\x1B[39m: new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {} {"timestamp":"2023-01-12T00:10:25.752Z"}'
}
STREAM 12761: ondata
info: new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {} {"timestamp":"2023-01-12T00:10:25.752Z"}
STREAM 12761: dest.write true
STREAM 12761: maybeReadMore read 0
STREAM 12761: read 0
STREAM 12761: need readable true
STREAM 12761: length less than watermark true
STREAM 12761: do read

@murgatroid99
Member

Those are not the logs I asked for. Please share the gRPC trace logs that are produced using the environment variables I specified in my previous comment.
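For anyone following along, a minimal sketch of enabling those trace logs (the entry-point script name is a placeholder; grpc-js writes its trace lines to stderr):

```shell
export GRPC_TRACE=all
export GRPC_VERBOSITY=DEBUG
# then launch the client, e.g.:
# node client.js 2> client-trace.log   # trace lines go to stderr
```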

@travisghansen

travisghansen commented Jan 12, 2023

Both of those (client and server side, for both 1.8.1 and 1.8.2) were run with the env vars you asked for.

My server script, for example, ran with this:

...
export NODE_DEBUG="net,tls,stream,request,http2"
export GRPC_TRACE=all
export GRPC_VERBOSITY=DEBUG
...

Not sure why the gRPC trace output isn't coming through.

@murgatroid99
Member

I don't understand this. Your server logs include some of the expected trace logs, but not all of them. And your client logs don't include any of those trace logs. What do you get if you don't include the NODE_DEBUG setting?

@travisghansen

Definitely strange :( Forgive some of my own application logs.

v1.8.1

./dev/start.sh
zfs-generic-local.yaml
grpc implementation: @grpc/grpc-js
D 2023-01-12T00:48:39.339Z | index | Loading @grpc/grpc-js version 1.8.1
info: initializing csi driver: zfs-generic-nvmeof {"timestamp":"2023-01-12T00:48:39.465Z"}
debug: setting default identity service caps {"timestamp":"2023-01-12T00:48:39.728Z"}
debug: setting default identity volume_expansion caps {"timestamp":"2023-01-12T00:48:39.728Z"}
debug: setting default controller caps {"timestamp":"2023-01-12T00:48:39.728Z"}
debug: setting default node caps {"timestamp":"2023-01-12T00:48:39.731Z"}
D 2023-01-12T00:48:39.731Z | server | (1) Server constructed
info: starting csi server - node version: v16.19.0, package version: 1.8.0, config file: /home/thansen/Projects/github/democratic-csi/democratic-csi/dev/zfs-generic-local.yaml, csi-name: driver-test, csi-driver: zfs-generic-nvmeof, csi-mode: controller,node, csi-version: 1.5.0, address: 0.0.0.0:50051, socket: unix:///tmp/csi.sock {"timestamp":"2023-01-12T00:48:39.731Z"}
D 2023-01-12T00:48:39.732Z | dns_resolver | Resolver constructed for target dns:0.0.0.0:50051
D 2023-01-12T00:48:39.733Z | dns_resolver | Returning IP address for target dns:0.0.0.0:50051
D 2023-01-12T00:48:39.737Z | server | (1) Attempting to bind 0.0.0.0:50051
D 2023-01-12T00:48:39.740Z | server | (1) Successfully bound 0.0.0.0:50051
D 2023-01-12T00:48:39.740Z | server | (1) Attempting to bind /tmp/csi.sock
D 2023-01-12T00:48:39.740Z | server | (1) Successfully bound /tmp/csi.sock
D 2023-01-12T00:48:44.526Z | server | (1) Received call to method /csi.v1.Node/NodeUnstageVolume at address null
D 2023-01-12T00:48:44.527Z | server_call | Request to /csi.v1.Node/NodeUnstageVolume received headers {"grpc-accept-encoding":["identity,deflate,gzip"],"accept-encoding":["identity"],"user-agent":["grpc-node-js/1.8.1"],"content-type":["application/grpc"],"te":["trailers"]}
info: new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.1"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false} {"timestamp":"2023-01-12T00:48:44.529Z"}
debug: operation lock keys: ["volume_id_PVC-111"] {"timestamp":"2023-01-12T00:48:44.530Z"}
executing mount command: findmnt --mountpoint /mnt/staged/block_device --output source,target,fstype,label,options -b -J --nofsroot
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
retry - failed condition, not trying again
info: new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {} {"timestamp":"2023-01-12T00:48:44.555Z"}
D 2023-01-12T00:48:44.556Z | server_call | Request to method /csi.v1.Node/NodeUnstageVolume ended with status code: OK details: OK
D 2023-01-12T00:48:44.558Z | server_call | Request to method /csi.v1.Node/NodeUnstageVolume stream closed with rstCode 0
GRPC_TRACE=all GRPC_VERBOSITY=DEBUG node dev/client.js
D 2023-01-12T00:48:44.446Z | index | Loading @grpc/grpc-js version 1.8.1
D 2023-01-12T00:48:44.512Z | resolving_load_balancer | dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:48:44.512Z | connectivity_state | (1) dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:48:44.513Z | dns_resolver | Resolver constructed for target dns:localhost:50051
D 2023-01-12T00:48:44.513Z | channel | (1) dns:localhost:50051 Channel constructed with options {}
D 2023-01-12T00:48:44.514Z | channel_stacktrace | (1) Channel constructed 
    at new InternalChannel (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/internal-channel.js:196:23)
    at new ChannelImplementation (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/channel.js:35:32)
    at new Client (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/client.js:65:36)
    at new ServiceClientImpl (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
    at Object.<anonymous> (/home/thansen/Projects/github/democratic-csi/democratic-csi/dev/client.js:38:22)
    at Module._compile (node:internal/modules/cjs/loader:1165:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1219:10)
    at Module.load (node:internal/modules/cjs/loader:1043:32)
    at Function.Module._load (node:internal/modules/cjs/loader:878:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
D 2023-01-12T00:48:44.514Z | resolving_load_balancer | dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:48:44.514Z | connectivity_state | (2) dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:48:44.514Z | dns_resolver | Resolver constructed for target dns:localhost:50051
D 2023-01-12T00:48:44.514Z | channel | (2) dns:localhost:50051 Channel constructed with options {}
D 2023-01-12T00:48:44.514Z | channel_stacktrace | (2) Channel constructed 
    at new InternalChannel (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/internal-channel.js:196:23)
    at new ChannelImplementation (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/channel.js:35:32)
    at new Client (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/client.js:65:36)
    at new ServiceClientImpl (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
    at Object.<anonymous> (/home/thansen/Projects/github/democratic-csi/democratic-csi/dev/client.js:43:24)
    at Module._compile (node:internal/modules/cjs/loader:1165:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1219:10)
    at Module.load (node:internal/modules/cjs/loader:1043:32)
    at Function.Module._load (node:internal/modules/cjs/loader:878:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
D 2023-01-12T00:48:44.514Z | resolving_load_balancer | dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:48:44.514Z | connectivity_state | (3) dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:48:44.515Z | dns_resolver | Resolver constructed for target dns:localhost:50051
D 2023-01-12T00:48:44.515Z | channel | (3) dns:localhost:50051 Channel constructed with options {}
D 2023-01-12T00:48:44.515Z | channel_stacktrace | (3) Channel constructed 
    at new InternalChannel (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/internal-channel.js:196:23)
    at new ChannelImplementation (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/channel.js:35:32)
    at new Client (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/client.js:65:36)
    at new ServiceClientImpl (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
    at Object.<anonymous> (/home/thansen/Projects/github/democratic-csi/democratic-csi/dev/client.js:48:18)
    at Module._compile (node:internal/modules/cjs/loader:1165:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1219:10)
    at Module.load (node:internal/modules/cjs/loader:1043:32)
    at Function.Module._load (node:internal/modules/cjs/loader:878:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
D 2023-01-12T00:48:44.515Z | channel | (3) dns:localhost:50051 createResolvingCall [0] method="/csi.v1.Node/NodeUnstageVolume", deadline=Infinity
D 2023-01-12T00:48:44.515Z | resolving_call | [0] Created
D 2023-01-12T00:48:44.515Z | resolving_call | [0] Deadline: Infinity
D 2023-01-12T00:48:44.516Z | resolving_call | [0] start called
D 2023-01-12T00:48:44.516Z | dns_resolver | Looking up DNS hostname localhost
D 2023-01-12T00:48:44.517Z | resolving_load_balancer | dns:localhost:50051 IDLE -> CONNECTING
D 2023-01-12T00:48:44.517Z | connectivity_state | (3) dns:localhost:50051 IDLE -> CONNECTING
D 2023-01-12T00:48:44.517Z | channel | (3) dns:localhost:50051 callRefTimer.ref | configSelectionQueue.length=1 pickQueue.length=0
D 2023-01-12T00:48:44.517Z | resolving_call | [0] startRead called
D 2023-01-12T00:48:44.517Z | resolving_call | [0] write() called with message of length 22
D 2023-01-12T00:48:44.517Z | resolving_call | [0] halfClose called
D 2023-01-12T00:48:44.518Z | dns_resolver | Resolved addresses for target dns:localhost:50051: [127.0.0.1:50051]
D 2023-01-12T00:48:44.518Z | pick_first | Connect to address list 127.0.0.1:50051
D 2023-01-12T00:48:44.519Z | subchannel | (4) 127.0.0.1:50051 Subchannel constructed with options {}
D 2023-01-12T00:48:44.519Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 0 -> 1
D 2023-01-12T00:48:44.519Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 1 -> 2
D 2023-01-12T00:48:44.519Z | pick_first | Start connecting to subchannel with address 127.0.0.1:50051
D 2023-01-12T00:48:44.519Z | pick_first | IDLE -> CONNECTING
D 2023-01-12T00:48:44.519Z | resolving_load_balancer | dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:48:44.519Z | channel | (3) dns:localhost:50051 callRefTimer.unref | configSelectionQueue.length=1 pickQueue.length=0
D 2023-01-12T00:48:44.519Z | connectivity_state | (3) dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:48:44.519Z | subchannel | (4) 127.0.0.1:50051 IDLE -> CONNECTING
D 2023-01-12T00:48:44.520Z | pick_first | CONNECTING -> CONNECTING
D 2023-01-12T00:48:44.520Z | resolving_load_balancer | dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:48:44.520Z | connectivity_state | (3) dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:48:44.520Z | subchannel | (4) 127.0.0.1:50051 creating HTTP/2 session
D 2023-01-12T00:48:44.521Z | channel | (3) dns:localhost:50051 createRetryingCall [1] method="/csi.v1.Node/NodeUnstageVolume"
D 2023-01-12T00:48:44.521Z | retrying_call | [1] start called
D 2023-01-12T00:48:44.521Z | channel | (3) dns:localhost:50051 createLoadBalancingCall [2] method="/csi.v1.Node/NodeUnstageVolume"
D 2023-01-12T00:48:44.521Z | retrying_call | [1] Created child call [2] for attempt 1
D 2023-01-12T00:48:44.521Z | load_balancing_call | [2] start called
D 2023-01-12T00:48:44.522Z | load_balancing_call | [2] Pick result: QUEUE subchannel: null status: undefined undefined
D 2023-01-12T00:48:44.522Z | channel | (3) dns:localhost:50051 callRefTimer.ref | configSelectionQueue.length=0 pickQueue.length=1
D 2023-01-12T00:48:44.522Z | retrying_call | [1] startRead called
D 2023-01-12T00:48:44.522Z | load_balancing_call | [2] startRead called
D 2023-01-12T00:48:44.522Z | retrying_call | [1] write() called with message of length 27
D 2023-01-12T00:48:44.522Z | load_balancing_call | [2] write() called with message of length 27
D 2023-01-12T00:48:44.522Z | retrying_call | [1] halfClose called
D 2023-01-12T00:48:44.523Z | subchannel | (4) 127.0.0.1:50051 CONNECTING -> READY
D 2023-01-12T00:48:44.523Z | pick_first | Pick subchannel with address 127.0.0.1:50051
D 2023-01-12T00:48:44.524Z | pick_first | CONNECTING -> READY
D 2023-01-12T00:48:44.524Z | resolving_load_balancer | dns:localhost:50051 CONNECTING -> READY
D 2023-01-12T00:48:44.524Z | channel | (3) dns:localhost:50051 callRefTimer.unref | configSelectionQueue.length=0 pickQueue.length=0
D 2023-01-12T00:48:44.524Z | load_balancing_call | [2] Pick result: COMPLETE subchannel: (4) 127.0.0.1:50051 status: undefined undefined
D 2023-01-12T00:48:44.524Z | connectivity_state | (3) dns:localhost:50051 CONNECTING -> READY
D 2023-01-12T00:48:44.524Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 2 -> 3
D 2023-01-12T00:48:44.524Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 3 -> 2
D 2023-01-12T00:48:44.525Z | subchannel_flowctrl | (4) 127.0.0.1:50051 local window size: 65535 remote window size: 65535
D 2023-01-12T00:48:44.525Z | subchannel_internals | (4) 127.0.0.1:50051 session.closed=false session.destroyed=false session.socket.destroyed=false
D 2023-01-12T00:48:44.525Z | subchannel_refcount | (4) 127.0.0.1:50051 callRefcount 0 -> 1
D 2023-01-12T00:48:44.525Z | load_balancing_call | [2] Created child call [3]
D 2023-01-12T00:48:44.525Z | subchannel_call | [3] write() called with message of length 27
D 2023-01-12T00:48:44.525Z | subchannel_call | [3] sending data chunk of length 27
D 2023-01-12T00:48:44.526Z | load_balancing_call | [2] halfClose called
D 2023-01-12T00:48:44.526Z | subchannel_call | [3] end() called
D 2023-01-12T00:48:44.526Z | subchannel_call | [3] calling end() on HTTP/2 stream
D 2023-01-12T00:48:44.526Z | subchannel | (4) 127.0.0.1:50051 new settings received: {"headerTableSize":4096,"enablePush":true,"initialWindowSize":65535,"maxFrameSize":16384,"maxConcurrentStreams":4294967295,"maxHeaderListSize":4294967295,"maxHeaderSize":4294967295,"enableConnectProtocol":false}
D 2023-01-12T00:48:44.528Z | subchannel | (4) 127.0.0.1:50051 local settings acknowledged by remote: {"headerTableSize":4096,"enablePush":true,"initialWindowSize":65535,"maxFrameSize":16384,"maxConcurrentStreams":4294967295,"maxHeaderListSize":4294967295,"maxHeaderSize":4294967295,"enableConnectProtocol":false}
D 2023-01-12T00:48:44.557Z | subchannel_call | [3] Received server headers:
		:status: 200
		content-type: application/grpc+proto
		grpc-accept-encoding: identity,deflate,gzip
		grpc-encoding: identity
		date: Thu, 12 Jan 2023 00:48:44 GMT

D 2023-01-12T00:48:44.558Z | load_balancing_call | [2] Received metadata
D 2023-01-12T00:48:44.558Z | retrying_call | [1] Received metadata from child [2]
D 2023-01-12T00:48:44.558Z | retrying_call | [1] Committing call [2] at index 0
D 2023-01-12T00:48:44.558Z | subchannel_call | [3] receive HTTP/2 data frame of length 5
D 2023-01-12T00:48:44.559Z | subchannel_call | [3] parsed message of length 5
D 2023-01-12T00:48:44.559Z | subchannel_call | [3] pushing to reader message of length 5
D 2023-01-12T00:48:44.559Z | load_balancing_call | [2] Received message
D 2023-01-12T00:48:44.559Z | retrying_call | [1] Received message from child [2]
D 2023-01-12T00:48:44.560Z | subchannel_call | [3] Received server trailers:
		grpc-status: 0
		grpc-message: OK

D 2023-01-12T00:48:44.560Z | subchannel_call | [3] received status code 0 from server
D 2023-01-12T00:48:44.560Z | subchannel_call | [3] received status details string "OK" from server
D 2023-01-12T00:48:44.560Z | subchannel_call | [3] close http2 stream with code 0
D 2023-01-12T00:48:44.560Z | subchannel_call | [3] ended with status: code=0 details="OK"
D 2023-01-12T00:48:44.561Z | subchannel_refcount | (4) 127.0.0.1:50051 callRefcount 1 -> 0
D 2023-01-12T00:48:44.561Z | load_balancing_call | [2] Received status
D 2023-01-12T00:48:44.561Z | load_balancing_call | [2] ended with status: code=0 details="OK"
D 2023-01-12T00:48:44.561Z | retrying_call | [1] Received status from child [2]
D 2023-01-12T00:48:44.561Z | retrying_call | [1] state=COMMITTED handling status with progress PROCESSED from child [2] in state ACTIVE
D 2023-01-12T00:48:44.561Z | retrying_call | [1] ended with status: code=0 details="OK"
D 2023-01-12T00:48:44.561Z | resolving_call | [0] ended with status: code=0 details="OK"
D 2023-01-12T00:48:44.561Z | subchannel_call | [3] HTTP/2 stream closed with code 0
NodeUnstageVolume - error: null, data: {}

v1.8.2

./dev/start.sh
zfs-generic-local.yaml
grpc implementation: @grpc/grpc-js
D 2023-01-12T00:50:54.959Z | index | Loading @grpc/grpc-js version 1.8.2
info: initializing csi driver: zfs-generic-nvmeof {"timestamp":"2023-01-12T00:50:55.088Z"}
debug: setting default identity service caps {"timestamp":"2023-01-12T00:50:55.353Z"}
debug: setting default identity volume_expansion caps {"timestamp":"2023-01-12T00:50:55.354Z"}
debug: setting default controller caps {"timestamp":"2023-01-12T00:50:55.354Z"}
debug: setting default node caps {"timestamp":"2023-01-12T00:50:55.356Z"}
D 2023-01-12T00:50:55.356Z | server | (1) Server constructed
info: starting csi server - node version: v16.19.0, package version: 1.8.0, config file: /home/thansen/Projects/github/democratic-csi/democratic-csi/dev/zfs-generic-local.yaml, csi-name: driver-test, csi-driver: zfs-generic-nvmeof, csi-mode: controller,node, csi-version: 1.5.0, address: 0.0.0.0:50051, socket: unix:///tmp/csi.sock {"timestamp":"2023-01-12T00:50:55.357Z"}
D 2023-01-12T00:50:55.357Z | dns_resolver | Resolver constructed for target dns:0.0.0.0:50051
D 2023-01-12T00:50:55.358Z | dns_resolver | Returning IP address for target dns:0.0.0.0:50051
D 2023-01-12T00:50:55.359Z | server | (1) Attempting to bind 0.0.0.0:50051
D 2023-01-12T00:50:55.361Z | server | (1) Successfully bound 0.0.0.0:50051
D 2023-01-12T00:50:55.361Z | server | (1) Attempting to bind /tmp/csi.sock
D 2023-01-12T00:50:55.362Z | server | (1) Successfully bound /tmp/csi.sock
D 2023-01-12T00:51:04.297Z | server | (1) Received call to method /csi.v1.Node/NodeUnstageVolume at address null
D 2023-01-12T00:51:04.298Z | server_call | Request to /csi.v1.Node/NodeUnstageVolume received headers {"grpc-accept-encoding":["identity,deflate,gzip"],"accept-encoding":["identity"],"user-agent":["grpc-node-js/1.8.2"],"content-type":["application/grpc"],"te":["trailers"]}
info: new request - driver: ControllerZfsGenericDriver method: NodeUnstageVolume call: {"metadata":{"user-agent":["grpc-node-js/1.8.2"]},"request":{"volume_id":"PVC-111","staging_target_path":"/mnt/staged"},"cancelled":false} {"timestamp":"2023-01-12T00:51:04.299Z"}
debug: operation lock keys: ["volume_id_PVC-111"] {"timestamp":"2023-01-12T00:51:04.300Z"}
executing mount command: findmnt --mountpoint /mnt/staged/block_device --output source,target,fstype,label,options -b -J --nofsroot
D 2023-01-12T00:51:04.308Z | server_call | Request to method /csi.v1.Node/NodeUnstageVolume stream closed with rstCode 8
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
executing mount command: findmnt --mountpoint /mnt/staged --output source,target,fstype,label,options -b -J --nofsroot
retry - failed condition, not trying again
info: new response - driver: ControllerZfsGenericDriver method: NodeUnstageVolume response: {} {"timestamp":"2023-01-12T00:51:04.323Z"}
GRPC_TRACE=all GRPC_VERBOSITY=DEBUG node dev/client.js
D 2023-01-12T00:51:04.219Z | index | Loading @grpc/grpc-js version 1.8.2
D 2023-01-12T00:51:04.283Z | resolving_load_balancer | dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:51:04.283Z | connectivity_state | (1) dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:51:04.284Z | dns_resolver | Resolver constructed for target dns:localhost:50051
D 2023-01-12T00:51:04.284Z | channel | (1) dns:localhost:50051 Channel constructed with options {}
D 2023-01-12T00:51:04.285Z | channel_stacktrace | (1) Channel constructed 
    at new InternalChannel (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/internal-channel.js:196:23)
    at new ChannelImplementation (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/channel.js:35:32)
    at new Client (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/client.js:65:36)
    at new ServiceClientImpl (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
    at Object.<anonymous> (/home/thansen/Projects/github/democratic-csi/democratic-csi/dev/client.js:38:22)
    at Module._compile (node:internal/modules/cjs/loader:1165:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1219:10)
    at Module.load (node:internal/modules/cjs/loader:1043:32)
    at Function.Module._load (node:internal/modules/cjs/loader:878:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
D 2023-01-12T00:51:04.285Z | resolving_load_balancer | dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:51:04.285Z | connectivity_state | (2) dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:51:04.285Z | dns_resolver | Resolver constructed for target dns:localhost:50051
D 2023-01-12T00:51:04.285Z | channel | (2) dns:localhost:50051 Channel constructed with options {}
D 2023-01-12T00:51:04.285Z | channel_stacktrace | (2) Channel constructed 
    at new InternalChannel (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/internal-channel.js:196:23)
    at new ChannelImplementation (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/channel.js:35:32)
    at new Client (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/client.js:65:36)
    at new ServiceClientImpl (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
    at Object.<anonymous> (/home/thansen/Projects/github/democratic-csi/democratic-csi/dev/client.js:43:24)
    at Module._compile (node:internal/modules/cjs/loader:1165:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1219:10)
    at Module.load (node:internal/modules/cjs/loader:1043:32)
    at Function.Module._load (node:internal/modules/cjs/loader:878:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
D 2023-01-12T00:51:04.285Z | resolving_load_balancer | dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:51:04.285Z | connectivity_state | (3) dns:localhost:50051 IDLE -> IDLE
D 2023-01-12T00:51:04.286Z | dns_resolver | Resolver constructed for target dns:localhost:50051
D 2023-01-12T00:51:04.286Z | channel | (3) dns:localhost:50051 Channel constructed with options {}
D 2023-01-12T00:51:04.286Z | channel_stacktrace | (3) Channel constructed 
    at new InternalChannel (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/internal-channel.js:196:23)
    at new ChannelImplementation (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/channel.js:35:32)
    at new Client (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/client.js:65:36)
    at new ServiceClientImpl (/home/thansen/Projects/github/democratic-csi/democratic-csi/node_modules/@grpc/grpc-js/build/src/make-client.js:58:5)
    at Object.<anonymous> (/home/thansen/Projects/github/democratic-csi/democratic-csi/dev/client.js:48:18)
    at Module._compile (node:internal/modules/cjs/loader:1165:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1219:10)
    at Module.load (node:internal/modules/cjs/loader:1043:32)
    at Function.Module._load (node:internal/modules/cjs/loader:878:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
D 2023-01-12T00:51:04.286Z | channel | (3) dns:localhost:50051 createResolvingCall [0] method="/csi.v1.Node/NodeUnstageVolume", deadline=Infinity
D 2023-01-12T00:51:04.286Z | resolving_call | [0] Created
D 2023-01-12T00:51:04.286Z | resolving_call | [0] Deadline: Infinity
D 2023-01-12T00:51:04.287Z | resolving_call | [0] start called
D 2023-01-12T00:51:04.287Z | dns_resolver | Looking up DNS hostname localhost
D 2023-01-12T00:51:04.287Z | resolving_load_balancer | dns:localhost:50051 IDLE -> CONNECTING
D 2023-01-12T00:51:04.288Z | connectivity_state | (3) dns:localhost:50051 IDLE -> CONNECTING
D 2023-01-12T00:51:04.288Z | channel | (3) dns:localhost:50051 callRefTimer.ref | configSelectionQueue.length=1 pickQueue.length=0
D 2023-01-12T00:51:04.288Z | resolving_call | [0] startRead called
D 2023-01-12T00:51:04.288Z | resolving_call | [0] write() called with message of length 22
D 2023-01-12T00:51:04.288Z | resolving_call | [0] halfClose called
D 2023-01-12T00:51:04.289Z | dns_resolver | Resolved addresses for target dns:localhost:50051: [127.0.0.1:50051]
D 2023-01-12T00:51:04.289Z | pick_first | Connect to address list 127.0.0.1:50051
D 2023-01-12T00:51:04.290Z | subchannel | (4) 127.0.0.1:50051 Subchannel constructed with options {}
D 2023-01-12T00:51:04.290Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 0 -> 1
D 2023-01-12T00:51:04.290Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 1 -> 2
D 2023-01-12T00:51:04.290Z | pick_first | Start connecting to subchannel with address 127.0.0.1:50051
D 2023-01-12T00:51:04.290Z | pick_first | IDLE -> CONNECTING
D 2023-01-12T00:51:04.290Z | resolving_load_balancer | dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:51:04.290Z | channel | (3) dns:localhost:50051 callRefTimer.unref | configSelectionQueue.length=1 pickQueue.length=0
D 2023-01-12T00:51:04.290Z | connectivity_state | (3) dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:51:04.290Z | subchannel | (4) 127.0.0.1:50051 IDLE -> CONNECTING
D 2023-01-12T00:51:04.290Z | pick_first | CONNECTING -> CONNECTING
D 2023-01-12T00:51:04.290Z | resolving_load_balancer | dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:51:04.290Z | connectivity_state | (3) dns:localhost:50051 CONNECTING -> CONNECTING
D 2023-01-12T00:51:04.292Z | channel | (3) dns:localhost:50051 createRetryingCall [1] method="/csi.v1.Node/NodeUnstageVolume"
D 2023-01-12T00:51:04.292Z | retrying_call | [1] start called
D 2023-01-12T00:51:04.292Z | channel | (3) dns:localhost:50051 createLoadBalancingCall [2] method="/csi.v1.Node/NodeUnstageVolume"
D 2023-01-12T00:51:04.292Z | retrying_call | [1] Created child call [2] for attempt 1
D 2023-01-12T00:51:04.292Z | load_balancing_call | [2] start called
D 2023-01-12T00:51:04.292Z | load_balancing_call | [2] Pick result: QUEUE subchannel: null status: undefined undefined
D 2023-01-12T00:51:04.292Z | channel | (3) dns:localhost:50051 callRefTimer.ref | configSelectionQueue.length=0 pickQueue.length=1
D 2023-01-12T00:51:04.292Z | retrying_call | [1] startRead called
D 2023-01-12T00:51:04.293Z | load_balancing_call | [2] startRead called
D 2023-01-12T00:51:04.293Z | retrying_call | [1] write() called with message of length 27
D 2023-01-12T00:51:04.293Z | load_balancing_call | [2] write() called with message of length 27
D 2023-01-12T00:51:04.293Z | retrying_call | [1] halfClose called
D 2023-01-12T00:51:04.294Z | subchannel | (4) 127.0.0.1:50051 CONNECTING -> READY
D 2023-01-12T00:51:04.294Z | pick_first | Pick subchannel with address 127.0.0.1:50051
D 2023-01-12T00:51:04.294Z | pick_first | CONNECTING -> READY
D 2023-01-12T00:51:04.294Z | resolving_load_balancer | dns:localhost:50051 CONNECTING -> READY
D 2023-01-12T00:51:04.294Z | channel | (3) dns:localhost:50051 callRefTimer.unref | configSelectionQueue.length=0 pickQueue.length=0
D 2023-01-12T00:51:04.294Z | load_balancing_call | [2] Pick result: COMPLETE subchannel: (4) 127.0.0.1:50051 status: undefined undefined
D 2023-01-12T00:51:04.295Z | connectivity_state | (3) dns:localhost:50051 CONNECTING -> READY
D 2023-01-12T00:51:04.295Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 2 -> 3
D 2023-01-12T00:51:04.295Z | subchannel_refcount | (4) 127.0.0.1:50051 refcount 3 -> 2
D 2023-01-12T00:51:04.296Z | transport_flowctrl | (5) 127.0.0.1:50051 local window size: 65535 remote window size: 65535
D 2023-01-12T00:51:04.296Z | transport_internals | (5) 127.0.0.1:50051 session.closed=false session.destroyed=false session.socket.destroyed=false
D 2023-01-12T00:51:04.296Z | load_balancing_call | [2] Created child call [3]
D 2023-01-12T00:51:04.296Z | subchannel_call | [3] write() called with message of length 27
D 2023-01-12T00:51:04.296Z | subchannel_call | [3] sending data chunk of length 27
D 2023-01-12T00:51:04.297Z | load_balancing_call | [2] halfClose called
D 2023-01-12T00:51:04.297Z | subchannel_call | [3] end() called
D 2023-01-12T00:51:04.297Z | subchannel_call | [3] calling end() on HTTP/2 stream

@amanBhargava05

Chat stream with peer cancelled or closed when starting chaincode hyperledger/fabric-chaincode-node#372

Hi @murgatroid99 with respect to the issue mentioned in hyperledger/fabric-chaincode-node#372 we are still facing it when we use the 1.8.3 version of grpc-js

@murgatroid99
Member

I have published a likely fix for this in version 1.8.4. Please try it out.

@travisghansen

For my simple test/use-case above it seems to have resolved the issue. Thanks!

@nikolaymatrosov

Thank you very much. I can confirm that version 1.8.4 solved the issue for me.

@amanBhargava05

@murgatroid99 I tried with 1.8.4 too and I'm still getting the error (mentioned below); the peer-to-chaincode connection is dropping instantly

failed to execute transaction 556226542ee54bd3ed8da2f840fc86b003a92a86c0ad377829ebdec2bb71acf0: error sending: chaincode stream terminated

@nikopavlica

Was having the same issue, and 1.8.4 resolved it.

FYI: the original issue could be worked around if there was any other ref alive (e.g. setTimeout or setInterval)

@amanBhargava05

amanBhargava05 commented Jan 13, 2023

Was having the same issue, and 1.8.4 resolved it.

FYI: the original issue could be worked around if there was any other ref alive (e.g. setTimeout or setInterval)

Hi @nikopavlica, can you share or point to an example of the workaround you found to bypass this? And do you still need that change when using 1.8.4?

@murgatroid99
Member

If it is the same problem that @nikopavlica had, you can work around it by calling setInterval(() => {}, 1_000_000_000). When they say that the problem was resolved in 1.8.4 they mean that they don't need to work around it anymore.
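
For anyone still on an affected version, here is a minimal sketch of that workaround. The `runScript` function is a placeholder for your own gRPC work, not grpc-js API; the only relevant part is keeping an active handle alive for the duration of the script and releasing it when done.

```javascript
// Workaround sketch: an unrelated active timer keeps the Node.js event loop
// (and therefore the process) alive while gRPC calls are in flight.
const keepAlive = setInterval(() => {}, 1_000_000_000);

// Placeholder for the script's real work (e.g. making gRPC calls).
function runScript(done) {
  setImmediate(done);
}

runScript(() => {
  // Release the handle so the process can exit normally once work is done.
  clearInterval(keepAlive);
});
```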

@yamen

yamen commented Jan 25, 2023

This seems to have regressed in 1.8.5 again...

@murgatroid99
Member

murgatroid99 commented Jan 25, 2023

@yamen Can you share a trace log from a client that experiences this error, which you would get by running the client with the environment variables GRPC_TRACE=all and GRPC_VERBOSITY=DEBUG?
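
For reference, enabling that tracing looks like this (a sketch; `client.js` and the log file name are placeholders for your own reproduction script):

```shell
# Enable full grpc-js debug tracing for subsequent client runs.
export GRPC_TRACE=all
export GRPC_VERBOSITY=DEBUG
# The trace is written to stderr, so redirect it to a file for sharing:
# node client.js 2> grpc-trace.log
```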

@murgatroid99
Member

I just made a change in version 1.8.6 that might fix things. Please try it out, and if you're still having the problem, the trace log will still be useful.

@yamen

yamen commented Jan 25, 2023

OK, 1.8.6 still exits early. Here is the trace log when it fails:

D 2023-01-25T19:28:00.677Z | index | Loading @grpc/grpc-js version 1.8.6
1674674880743
D 2023-01-25T19:28:00.745Z | resolving_load_balancer | dns:127.0.0.1:7233 IDLE -> IDLE
D 2023-01-25T19:28:00.745Z | connectivity_state | (1) dns:127.0.0.1:7233 IDLE -> IDLE
D 2023-01-25T19:28:00.746Z | dns_resolver | Resolver constructed for target dns:127.0.0.1:7233
D 2023-01-25T19:28:00.746Z | channel | (1) dns:127.0.0.1:7233 Channel constructed with options {
  "grpc.keepalive_permit_without_calls": 1,
  "grpc.keepalive_time_ms": 30000,
  "grpc.keepalive_timeout_ms": 15000
}
D 2023-01-25T19:28:00.754Z | channel_stacktrace | (1) Channel constructed
    at new InternalChannel (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/internal-channel.ts:287:19)
    at new ChannelImplementation (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/channel.ts:116:28)
    at new Client (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/client.ts:158:30)
    at new ServiceClientImpl (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/make-client.ts:129:3)
    at Function.createCtorOptions (/home/yamen/my-temporal/node_modules/@temporalio/client/src/connection.ts:218:20)
    at Function.lazy (/home/yamen/my-temporal/node_modules/@temporalio/client/src/connection.ts:296:26)
    at Function.connect (/home/yamen/my-temporal/node_modules/@temporalio/client/src/connection.ts:306:23)
    at run (/home/yamen/my-temporal/src/client.ts:8:39)
    at Object.<anonymous> (/home/yamen/my-temporal/src/client.ts:33:1)
    at Module._compile (node:internal/modules/cjs/loader:1218:14)
D 2023-01-25T19:28:00.793Z | dns_resolver | Returning IP address for target dns:127.0.0.1:7233
D 2023-01-25T19:28:00.793Z | resolving_load_balancer | dns:127.0.0.1:7233 IDLE -> CONNECTING
D 2023-01-25T19:28:00.793Z | connectivity_state | (1) dns:127.0.0.1:7233 IDLE -> CONNECTING
D 2023-01-25T19:28:00.794Z | pick_first | Connect to address list 127.0.0.1:7233
D 2023-01-25T19:28:00.795Z | subchannel | (2) 127.0.0.1:7233 Subchannel constructed with options {
  "grpc.keepalive_permit_without_calls": 1,
  "grpc.keepalive_time_ms": 30000,
  "grpc.keepalive_timeout_ms": 15000
}
D 2023-01-25T19:28:00.795Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 0 -> 1
D 2023-01-25T19:28:00.795Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 1 -> 2
D 2023-01-25T19:28:00.795Z | pick_first | Start connecting to subchannel with address 127.0.0.1:7233
D 2023-01-25T19:28:00.795Z | pick_first | IDLE -> CONNECTING
D 2023-01-25T19:28:00.795Z | resolving_load_balancer | dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:28:00.795Z | connectivity_state | (1) dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:28:00.796Z | subchannel | (2) 127.0.0.1:7233 IDLE -> CONNECTING
D 2023-01-25T19:28:00.796Z | pick_first | CONNECTING -> CONNECTING
D 2023-01-25T19:28:00.796Z | resolving_load_balancer | dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:28:00.796Z | connectivity_state | (1) dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:28:00.799Z | subchannel | (2) 127.0.0.1:7233 CONNECTING -> READY
D 2023-01-25T19:28:00.800Z | pick_first | Pick subchannel with address 127.0.0.1:7233
D 2023-01-25T19:28:00.800Z | pick_first | CONNECTING -> READY
D 2023-01-25T19:28:00.800Z | resolving_load_balancer | dns:127.0.0.1:7233 CONNECTING -> READY
D 2023-01-25T19:28:00.800Z | connectivity_state | (1) dns:127.0.0.1:7233 CONNECTING -> READY
D 2023-01-25T19:28:00.800Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 2 -> 3
D 2023-01-25T19:28:00.800Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 3 -> 2
D 2023-01-25T19:28:00.802Z | channel | (1) dns:127.0.0.1:7233 createResolvingCall [0] method="/temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo", deadline=1674674890792
D 2023-01-25T19:28:00.803Z | resolving_call | [0] Created
D 2023-01-25T19:28:00.803Z | resolving_call | [0] Deadline: 1674674890792
D 2023-01-25T19:28:00.803Z | resolving_call | [0] Deadline will be reached in 9989ms
D 2023-01-25T19:28:00.803Z | resolving_call | [0] start called
D 2023-01-25T19:28:00.804Z | resolving_call | [0] startRead called
D 2023-01-25T19:28:00.804Z | resolving_call | [0] write() called with message of length 0
D 2023-01-25T19:28:00.804Z | resolving_call | [0] halfClose called
D 2023-01-25T19:28:00.804Z | channel | (1) dns:127.0.0.1:7233 createRetryingCall [1] method="/temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo"
D 2023-01-25T19:28:00.805Z | retrying_call | [1] start called
D 2023-01-25T19:28:00.805Z | channel | (1) dns:127.0.0.1:7233 createLoadBalancingCall [2] method="/temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo"
D 2023-01-25T19:28:00.805Z | retrying_call | [1] Created child call [2] for attempt 1
D 2023-01-25T19:28:00.805Z | load_balancing_call | [2] start called
D 2023-01-25T19:28:00.805Z | load_balancing_call | [2] Pick result: COMPLETE subchannel: (2) 127.0.0.1:7233 status: undefined undefined
D 2023-01-25T19:28:00.805Z | retrying_call | [1] startRead called
D 2023-01-25T19:28:00.806Z | load_balancing_call | [2] startRead called
D 2023-01-25T19:28:00.807Z | transport_flowctrl | (3) 127.0.0.1:7233 local window size: 65535 remote window size: 65535
D 2023-01-25T19:28:00.807Z | transport_internals | (3) 127.0.0.1:7233 session.closed=false session.destroyed=false session.socket.destroyed=false
D 2023-01-25T19:28:00.808Z | load_balancing_call | [2] Created child call [3]
D 2023-01-25T19:28:00.808Z | retrying_call | [1] write() called with message of length 5
D 2023-01-25T19:28:00.808Z | load_balancing_call | [2] write() called with message of length 5
D 2023-01-25T19:28:00.808Z | subchannel_call | [3] write() called with message of length 5
D 2023-01-25T19:28:00.808Z | subchannel_call | [3] sending data chunk of length 5
D 2023-01-25T19:28:00.809Z | retrying_call | [1] halfClose called
D 2023-01-25T19:28:00.809Z | load_balancing_call | [2] halfClose called
D 2023-01-25T19:28:00.809Z | subchannel_call | [3] end() called
D 2023-01-25T19:28:00.809Z | subchannel_call | [3] calling end() on HTTP/2 stream
D 2023-01-25T19:28:00.810Z | transport | (3) 127.0.0.1:7233 local settings acknowledged by remote: {"headerTableSize":4096,"enablePush":true,"initialWindowSize":65535,"maxFrameSize":16384,"maxConcurrentStreams":4294967295,"maxHeaderListSize":4294967295,"maxHeaderSize":4294967295,"enableConnectProtocol":false}
D 2023-01-25T19:28:00.810Z | subchannel_call | [3] Received server headers:
                :status: 200
                content-type: application/grpc

D 2023-01-25T19:28:00.810Z | load_balancing_call | [2] Received metadata
D 2023-01-25T19:28:00.810Z | retrying_call | [1] Received metadata from child [2]
D 2023-01-25T19:28:00.811Z | retrying_call | [1] Committing call [2] at index 0
D 2023-01-25T19:28:00.811Z | subchannel_call | [3] receive HTTP/2 data frame of length 27
D 2023-01-25T19:28:00.811Z | subchannel_call | [3] parsed message of length 27
D 2023-01-25T19:28:00.812Z | subchannel_call | [3] pushing to reader message of length 27
D 2023-01-25T19:28:00.812Z | load_balancing_call | [2] Received message
D 2023-01-25T19:28:00.812Z | retrying_call | [1] Received message from child [2]
D 2023-01-25T19:28:00.812Z | subchannel_call | [3] Received server trailers:
                grpc-status: 0
                grpc-message:

D 2023-01-25T19:28:00.813Z | subchannel_call | [3] received status code 0 from server
D 2023-01-25T19:28:00.813Z | subchannel_call | [3] received status details string "" from server
D 2023-01-25T19:28:00.813Z | subchannel_call | [3] close http2 stream with code 0
D 2023-01-25T19:28:00.813Z | subchannel_call | [3] ended with status: code=0 details=""
D 2023-01-25T19:28:00.814Z | load_balancing_call | [2] Received status
D 2023-01-25T19:28:00.814Z | load_balancing_call | [2] ended with status: code=0 details=""
D 2023-01-25T19:28:00.814Z | retrying_call | [1] Received status from child [2]
D 2023-01-25T19:28:00.814Z | retrying_call | [1] state=COMMITTED handling status with progress PROCESSED from child [2] in state ACTIVE
D 2023-01-25T19:28:00.814Z | retrying_call | [1] ended with status: code=0 details=""
D 2023-01-25T19:28:00.814Z | resolving_call | [0] ended with status: code=0 details=""
D 2023-01-25T19:28:00.815Z | subchannel_call | [3] HTTP/2 stream closed with code 0
1674674880816
D 2023-01-25T19:28:00.822Z | channel | (1) dns:127.0.0.1:7233 createResolvingCall [4] method="/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution", deadline=Infinity
D 2023-01-25T19:28:00.822Z | resolving_call | [4] Created
D 2023-01-25T19:28:00.822Z | resolving_call | [4] Deadline: Infinity
D 2023-01-25T19:28:00.822Z | resolving_call | [4] start called
D 2023-01-25T19:28:00.823Z | resolving_call | [4] startRead called
D 2023-01-25T19:28:00.823Z | resolving_call | [4] write() called with message of length 163
D 2023-01-25T19:28:00.823Z | resolving_call | [4] halfClose called
D 2023-01-25T19:28:00.823Z | channel | (1) dns:127.0.0.1:7233 createRetryingCall [5] method="/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution"
D 2023-01-25T19:28:00.823Z | retrying_call | [5] start called
D 2023-01-25T19:28:00.823Z | channel | (1) dns:127.0.0.1:7233 createLoadBalancingCall [6] method="/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution"
D 2023-01-25T19:28:00.823Z | retrying_call | [5] Created child call [6] for attempt 1
D 2023-01-25T19:28:00.823Z | load_balancing_call | [6] start called
D 2023-01-25T19:28:00.823Z | load_balancing_call | [6] Pick result: COMPLETE subchannel: (2) 127.0.0.1:7233 status: undefined undefined
D 2023-01-25T19:28:00.823Z | retrying_call | [5] startRead called
D 2023-01-25T19:28:00.823Z | load_balancing_call | [6] startRead called
D 2023-01-25T19:28:00.824Z | transport_flowctrl | (3) 127.0.0.1:7233 local window size: 65508 remote window size: 65535
D 2023-01-25T19:28:00.824Z | transport_internals | (3) 127.0.0.1:7233 session.closed=false session.destroyed=false session.socket.destroyed=false
D 2023-01-25T19:28:00.824Z | load_balancing_call | [6] Created child call [7]
D 2023-01-25T19:28:00.824Z | retrying_call | [5] write() called with message of length 168
D 2023-01-25T19:28:00.824Z | load_balancing_call | [6] write() called with message of length 168
D 2023-01-25T19:28:00.824Z | subchannel_call | [7] write() called with message of length 168
D 2023-01-25T19:28:00.824Z | subchannel_call | [7] sending data chunk of length 168
D 2023-01-25T19:28:00.824Z | retrying_call | [5] halfClose called
D 2023-01-25T19:28:00.824Z | load_balancing_call | [6] halfClose called
D 2023-01-25T19:28:00.824Z | subchannel_call | [7] end() called
D 2023-01-25T19:28:00.824Z | subchannel_call | [7] calling end() on HTTP/2 stream

And here is one where I add an open ref so that it succeeds:

D 2023-01-25T19:27:10.385Z | index | Loading @grpc/grpc-js version 1.8.6
1674674830451
D 2023-01-25T19:27:10.453Z | resolving_load_balancer | dns:127.0.0.1:7233 IDLE -> IDLE
D 2023-01-25T19:27:10.453Z | connectivity_state | (1) dns:127.0.0.1:7233 IDLE -> IDLE
D 2023-01-25T19:27:10.453Z | dns_resolver | Resolver constructed for target dns:127.0.0.1:7233
D 2023-01-25T19:27:10.454Z | channel | (1) dns:127.0.0.1:7233 Channel constructed with options {
  "grpc.keepalive_permit_without_calls": 1,
  "grpc.keepalive_time_ms": 30000,
  "grpc.keepalive_timeout_ms": 15000
}
D 2023-01-25T19:27:10.461Z | channel_stacktrace | (1) Channel constructed
    at new InternalChannel (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/internal-channel.ts:287:19)
    at new ChannelImplementation (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/channel.ts:116:28)
    at new Client (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/client.ts:158:30)
    at new ServiceClientImpl (/home/yamen/my-temporal/node_modules/@grpc/grpc-js/src/make-client.ts:129:3)
    at Function.createCtorOptions (/home/yamen/my-temporal/node_modules/@temporalio/client/src/connection.ts:218:20)
    at Function.lazy (/home/yamen/my-temporal/node_modules/@temporalio/client/src/connection.ts:296:26)
    at Function.connect (/home/yamen/my-temporal/node_modules/@temporalio/client/src/connection.ts:306:23)
    at run (/home/yamen/my-temporal/src/client.ts:8:39)
    at Object.<anonymous> (/home/yamen/my-temporal/src/client.ts:33:1)
    at Module._compile (node:internal/modules/cjs/loader:1218:14)
D 2023-01-25T19:27:10.502Z | dns_resolver | Returning IP address for target dns:127.0.0.1:7233
D 2023-01-25T19:27:10.503Z | resolving_load_balancer | dns:127.0.0.1:7233 IDLE -> CONNECTING
D 2023-01-25T19:27:10.503Z | connectivity_state | (1) dns:127.0.0.1:7233 IDLE -> CONNECTING
D 2023-01-25T19:27:10.504Z | pick_first | Connect to address list 127.0.0.1:7233
D 2023-01-25T19:27:10.504Z | subchannel | (2) 127.0.0.1:7233 Subchannel constructed with options {
  "grpc.keepalive_permit_without_calls": 1,
  "grpc.keepalive_time_ms": 30000,
  "grpc.keepalive_timeout_ms": 15000
}
D 2023-01-25T19:27:10.504Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 0 -> 1
D 2023-01-25T19:27:10.505Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 1 -> 2
D 2023-01-25T19:27:10.505Z | pick_first | Start connecting to subchannel with address 127.0.0.1:7233
D 2023-01-25T19:27:10.505Z | pick_first | IDLE -> CONNECTING
D 2023-01-25T19:27:10.505Z | resolving_load_balancer | dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:27:10.505Z | connectivity_state | (1) dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:27:10.505Z | subchannel | (2) 127.0.0.1:7233 IDLE -> CONNECTING
D 2023-01-25T19:27:10.506Z | pick_first | CONNECTING -> CONNECTING
D 2023-01-25T19:27:10.506Z | resolving_load_balancer | dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:27:10.506Z | connectivity_state | (1) dns:127.0.0.1:7233 CONNECTING -> CONNECTING
D 2023-01-25T19:27:10.509Z | subchannel | (2) 127.0.0.1:7233 CONNECTING -> READY
D 2023-01-25T19:27:10.509Z | pick_first | Pick subchannel with address 127.0.0.1:7233
D 2023-01-25T19:27:10.509Z | pick_first | CONNECTING -> READY
D 2023-01-25T19:27:10.509Z | resolving_load_balancer | dns:127.0.0.1:7233 CONNECTING -> READY
D 2023-01-25T19:27:10.510Z | connectivity_state | (1) dns:127.0.0.1:7233 CONNECTING -> READY
D 2023-01-25T19:27:10.510Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 2 -> 3
D 2023-01-25T19:27:10.510Z | subchannel_refcount | (2) 127.0.0.1:7233 refcount 3 -> 2
D 2023-01-25T19:27:10.512Z | channel | (1) dns:127.0.0.1:7233 createResolvingCall [0] method="/temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo", deadline=1674674840501
D 2023-01-25T19:27:10.512Z | resolving_call | [0] Created
D 2023-01-25T19:27:10.513Z | resolving_call | [0] Deadline: 1674674840501
D 2023-01-25T19:27:10.513Z | resolving_call | [0] Deadline will be reached in 9988ms
D 2023-01-25T19:27:10.513Z | resolving_call | [0] start called
D 2023-01-25T19:27:10.514Z | resolving_call | [0] startRead called
D 2023-01-25T19:27:10.514Z | resolving_call | [0] write() called with message of length 0
D 2023-01-25T19:27:10.514Z | resolving_call | [0] halfClose called
D 2023-01-25T19:27:10.514Z | channel | (1) dns:127.0.0.1:7233 createRetryingCall [1] method="/temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo"
D 2023-01-25T19:27:10.515Z | retrying_call | [1] start called
D 2023-01-25T19:27:10.515Z | channel | (1) dns:127.0.0.1:7233 createLoadBalancingCall [2] method="/temporal.api.workflowservice.v1.WorkflowService/GetSystemInfo"
D 2023-01-25T19:27:10.515Z | retrying_call | [1] Created child call [2] for attempt 1
D 2023-01-25T19:27:10.515Z | load_balancing_call | [2] start called
D 2023-01-25T19:27:10.515Z | load_balancing_call | [2] Pick result: COMPLETE subchannel: (2) 127.0.0.1:7233 status: undefined undefined
D 2023-01-25T19:27:10.516Z | retrying_call | [1] startRead called
D 2023-01-25T19:27:10.516Z | load_balancing_call | [2] startRead called
D 2023-01-25T19:27:10.517Z | transport_flowctrl | (3) 127.0.0.1:7233 local window size: 65535 remote window size: 65535
D 2023-01-25T19:27:10.518Z | transport_internals | (3) 127.0.0.1:7233 session.closed=false session.destroyed=false session.socket.destroyed=false
D 2023-01-25T19:27:10.518Z | load_balancing_call | [2] Created child call [3]
D 2023-01-25T19:27:10.518Z | retrying_call | [1] write() called with message of length 5
D 2023-01-25T19:27:10.519Z | load_balancing_call | [2] write() called with message of length 5
D 2023-01-25T19:27:10.519Z | subchannel_call | [3] write() called with message of length 5
D 2023-01-25T19:27:10.519Z | subchannel_call | [3] sending data chunk of length 5
D 2023-01-25T19:27:10.519Z | retrying_call | [1] halfClose called
D 2023-01-25T19:27:10.520Z | load_balancing_call | [2] halfClose called
D 2023-01-25T19:27:10.520Z | subchannel_call | [3] end() called
D 2023-01-25T19:27:10.520Z | subchannel_call | [3] calling end() on HTTP/2 stream
D 2023-01-25T19:27:10.520Z | transport | (3) 127.0.0.1:7233 local settings acknowledged by remote: {"headerTableSize":4096,"enablePush":true,"initialWindowSize":65535,"maxFrameSize":16384,"maxConcurrentStreams":4294967295,"maxHeaderListSize":4294967295,"maxHeaderSize":4294967295,"enableConnectProtocol":false}
D 2023-01-25T19:27:10.521Z | subchannel_call | [3] Received server headers:
                :status: 200
                content-type: application/grpc

D 2023-01-25T19:27:10.521Z | load_balancing_call | [2] Received metadata
D 2023-01-25T19:27:10.521Z | retrying_call | [1] Received metadata from child [2]
D 2023-01-25T19:27:10.521Z | retrying_call | [1] Committing call [2] at index 0
D 2023-01-25T19:27:10.522Z | subchannel_call | [3] receive HTTP/2 data frame of length 27
D 2023-01-25T19:27:10.522Z | subchannel_call | [3] parsed message of length 27
D 2023-01-25T19:27:10.523Z | subchannel_call | [3] pushing to reader message of length 27
D 2023-01-25T19:27:10.523Z | load_balancing_call | [2] Received message
D 2023-01-25T19:27:10.523Z | retrying_call | [1] Received message from child [2]
D 2023-01-25T19:27:10.523Z | subchannel_call | [3] Received server trailers:
                grpc-status: 0
                grpc-message:

D 2023-01-25T19:27:10.524Z | subchannel_call | [3] received status code 0 from server
D 2023-01-25T19:27:10.524Z | subchannel_call | [3] received status details string "" from server
D 2023-01-25T19:27:10.524Z | subchannel_call | [3] close http2 stream with code 0
D 2023-01-25T19:27:10.524Z | subchannel_call | [3] ended with status: code=0 details=""
D 2023-01-25T19:27:10.525Z | load_balancing_call | [2] Received status
D 2023-01-25T19:27:10.525Z | load_balancing_call | [2] ended with status: code=0 details=""
D 2023-01-25T19:27:10.525Z | retrying_call | [1] Received status from child [2]
D 2023-01-25T19:27:10.525Z | retrying_call | [1] state=COMMITTED handling status with progress PROCESSED from child [2] in state ACTIVE
D 2023-01-25T19:27:10.525Z | retrying_call | [1] ended with status: code=0 details=""
D 2023-01-25T19:27:10.526Z | resolving_call | [0] ended with status: code=0 details=""
D 2023-01-25T19:27:10.526Z | subchannel_call | [3] HTTP/2 stream closed with code 0
1674674830528
D 2023-01-25T19:27:10.533Z | channel | (1) dns:127.0.0.1:7233 createResolvingCall [4] method="/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution", deadline=Infinity
D 2023-01-25T19:27:10.534Z | resolving_call | [4] Created
D 2023-01-25T19:27:10.534Z | resolving_call | [4] Deadline: Infinity
D 2023-01-25T19:27:10.534Z | resolving_call | [4] start called
D 2023-01-25T19:27:10.534Z | resolving_call | [4] startRead called
D 2023-01-25T19:27:10.534Z | resolving_call | [4] write() called with message of length 163
D 2023-01-25T19:27:10.534Z | resolving_call | [4] halfClose called
D 2023-01-25T19:27:10.534Z | channel | (1) dns:127.0.0.1:7233 createRetryingCall [5] method="/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution"
D 2023-01-25T19:27:10.534Z | retrying_call | [5] start called
D 2023-01-25T19:27:10.534Z | channel | (1) dns:127.0.0.1:7233 createLoadBalancingCall [6] method="/temporal.api.workflowservice.v1.WorkflowService/StartWorkflowExecution"
D 2023-01-25T19:27:10.534Z | retrying_call | [5] Created child call [6] for attempt 1
D 2023-01-25T19:27:10.534Z | load_balancing_call | [6] start called
D 2023-01-25T19:27:10.534Z | load_balancing_call | [6] Pick result: COMPLETE subchannel: (2) 127.0.0.1:7233 status: undefined undefined
D 2023-01-25T19:27:10.534Z | retrying_call | [5] startRead called
D 2023-01-25T19:27:10.534Z | load_balancing_call | [6] startRead called
D 2023-01-25T19:27:10.535Z | transport_flowctrl | (3) 127.0.0.1:7233 local window size: 65508 remote window size: 65535
D 2023-01-25T19:27:10.535Z | transport_internals | (3) 127.0.0.1:7233 session.closed=false session.destroyed=false session.socket.destroyed=false
D 2023-01-25T19:27:10.535Z | load_balancing_call | [6] Created child call [7]
D 2023-01-25T19:27:10.535Z | retrying_call | [5] write() called with message of length 168
D 2023-01-25T19:27:10.535Z | load_balancing_call | [6] write() called with message of length 168
D 2023-01-25T19:27:10.535Z | subchannel_call | [7] write() called with message of length 168
D 2023-01-25T19:27:10.535Z | subchannel_call | [7] sending data chunk of length 168
D 2023-01-25T19:27:10.535Z | retrying_call | [5] halfClose called
D 2023-01-25T19:27:10.535Z | load_balancing_call | [6] halfClose called
D 2023-01-25T19:27:10.536Z | subchannel_call | [7] end() called
D 2023-01-25T19:27:10.536Z | subchannel_call | [7] calling end() on HTTP/2 stream
D 2023-01-25T19:27:10.552Z | subchannel_call | [7] Received server headers:
                :status: 200
                content-type: application/grpc

D 2023-01-25T19:27:10.552Z | load_balancing_call | [6] Received metadata
D 2023-01-25T19:27:10.553Z | retrying_call | [5] Received metadata from child [6]
D 2023-01-25T19:27:10.553Z | retrying_call | [5] Committing call [6] at index 0
D 2023-01-25T19:27:10.553Z | subchannel_call | [7] receive HTTP/2 data frame of length 43
D 2023-01-25T19:27:10.553Z | subchannel_call | [7] parsed message of length 43
D 2023-01-25T19:27:10.553Z | subchannel_call | [7] pushing to reader message of length 43
D 2023-01-25T19:27:10.553Z | load_balancing_call | [6] Received message
D 2023-01-25T19:27:10.553Z | retrying_call | [5] Received message from child [6]
D 2023-01-25T19:27:10.553Z | subchannel_call | [7] Received server trailers:
                grpc-status: 0
                grpc-message:

D 2023-01-25T19:27:10.553Z | subchannel_call | [7] received status code 0 from server
D 2023-01-25T19:27:10.553Z | subchannel_call | [7] received status details string "" from server
D 2023-01-25T19:27:10.553Z | subchannel_call | [7] close http2 stream with code 0
D 2023-01-25T19:27:10.554Z | subchannel_call | [7] ended with status: code=0 details=""
D 2023-01-25T19:27:10.554Z | load_balancing_call | [6] Received status
D 2023-01-25T19:27:10.554Z | load_balancing_call | [6] ended with status: code=0 details=""
D 2023-01-25T19:27:10.554Z | retrying_call | [5] Received status from child [6]
D 2023-01-25T19:27:10.554Z | retrying_call | [5] state=COMMITTED handling status with progress PROCESSED from child [6] in state ACTIVE
D 2023-01-25T19:27:10.554Z | retrying_call | [5] ended with status: code=0 details=""
D 2023-01-25T19:27:10.554Z | resolving_call | [4] ended with status: code=0 details=""
D 2023-01-25T19:27:10.554Z | subchannel_call | [7] HTTP/2 stream closed with code 0

I've kept the console.log(Date.now()) entries in there (2 of them in the non-working code, 3 of them in the working code) to help synchronise the logs.

Extra notes:

  • 1.7.3 worked, and my script does finish when it reaches the end
  • 1.8.4 worked, but my script doesn't finish when it reaches the end (open connection or reference presumably)

@murgatroid99
Member

I would bet that the issue in 1.8.4 where the script wouldn't finish was caused by the bug that was fixed in 1.8.5. That bug was also coincidentally shadowing this bug that you are experiencing now. Specifically, the deadline timer holds a ref, and prior to 1.8.5, it would continue running after the call ended. Your first call has a deadline, and it was long enough that it would keep the process alive through the rest of your script.
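
To illustrate the mechanism (a hypothetical helper; `getSystemInfo` stands in for any generated unary client method): in grpc-js a per-call deadline is passed in the call options, and it is the internal timer backing that deadline which held the ref.

```javascript
// Hypothetical sketch of a unary call with a 10-second deadline, like the
// GetSystemInfo call in the traces above. Prior to 1.8.5 the timer backing
// this deadline kept holding a ref (keeping the process alive) even after
// the call had already ended.
function callWithDeadline(client, request, timeoutMs, callback) {
  const deadline = new Date(Date.now() + timeoutMs);
  return client.getSystemInfo(request, { deadline }, callback);
}
```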

@murgatroid99
Member

OK, another attempt, please try version 1.8.7.

@yamen

yamen commented Jan 25, 2023

Success! Awesome job
