
Lots of error messages like: Cancelling subscription, and marking self as failed with Invalid StartingSequenceNumber since upgrading to 2.0 #391

Closed · ryangardner opened this issue Sep 4, 2018 · 110 comments
Labels: investigation, v2.x (Issues related to the 2.x version)

@ryangardner

java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000234, while it was used in a call to a shard with shardId-000000000309 (Service: Kinesis, Status Code: 400, Request ID: eab0b990-9136-1aad-beb8-145829c423c5)
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java:155) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:121) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$Completable.lambda$complete$0(MakeAsyncHttpRequestStage.java:200) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) [na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [na:na]
	at java.base/java.lang.Thread.run(Thread.java:844) [na:na]
Caused by: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000234, while it was used in a call to a shard with shardId-000000000309 (Service: Kinesis, Status Code: 400, Request ID: eab0b990-9136-1aad-beb8-145829c423c5)
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:104) ~[kinesis-2.0.1.jar!/:na]
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:64) ~[kinesis-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:46) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:30) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.complete(SyncResponseHandlerAdapter.java:92) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.client.handler.BaseAsyncClientHandler$InterceptorCallingHttpResponseHandler.complete(BaseAsyncClientHandler.java:225) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.handleResponse(MakeAsyncHttpRequestStage.java:185) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:171) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:122) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:255) ~[netty-nio-client-2.0.1.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.publishMessage(HandlerPublisher.java:362) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.flushBuffer(HandlerPublisher.java:304) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.receivedDemand(HandlerPublisher.java:258) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.access$200(HandlerPublisher.java:41) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher$ChannelSubscription$1.run(HandlerPublisher.java:452) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	... 1 common frames omitted

In the last three hours, I've had these errors/warnings with the following combinations of encoded shard IDs and the shard IDs they were reportedly used with:

(All shard IDs have the form shardId-000000000301, so I'm omitting the common prefix.)

Encoded shard ID   Used-with shard ID   Count
301                349                   2507
190                205                   6385
222                196                   2589
302                238                   2582
270                258                   2503
236                198                   2512
253                229                    356
266                342                   7336
234                309                   4710
260                229                   2634
333                278                   2519
308                318                   2669

My stream has not been resharded recently and has 80 shards.

Question: Is this something that happens normally as part of the KCL client behavior, or does it indicate something I should be concerned about? (And if it is considered normal behavior, could the logs be changed to something below warn level, and the full stack trace not be logged?)

If this is something I should be worried about - what might be causing this?

@pfifer
Copy link
Contributor
Contributor

pfifer commented Sep 4, 2018

That should not normally be happening. It indicates that the sequence number being used doesn't belong to that shard.

Is it consistently reporting the sequence as belonging to the other shard, e.g. is 301 always reported with 349?
Can you check the sequence numbers in your lease table? You can use the AWS CLI with something like this:

aws kinesis get-shard-iterator --stream-name <stream name> --shard-id <shard-id> --shard-iterator-type AFTER_SEQUENCE_NUMBER --starting-sequence-number <sequence number>

If it fails, you have invalid sequence numbers in your lease table. If it doesn't fail, that would indicate the KCL is somehow getting sequence numbers mixed up internally.
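The check above can be run over every lease in the table. Here is a minimal sketch (not KCL code) of that audit loop, where the `BiPredicate` stands in for the real call — e.g. invoking `get-shard-iterator` with `AFTER_SEQUENCE_NUMBER` and treating an `InvalidArgumentException` as a failure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

public class LeaseCheckpointAudit {

    /**
     * Walk lease-table entries (shardId -> checkpointed sequence number) and
     * return the shard IDs whose checkpoint fails the supplied iterator check.
     * The check is a stand-in for calling GetShardIterator with that shard and
     * sequence number and seeing whether the service rejects it.
     */
    public static List<String> invalidLeases(Map<String, String> leases,
                                             BiPredicate<String, String> iteratorCheck) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, String> e : leases.entrySet()) {
            if (!iteratorCheck.test(e.getKey(), e.getValue())) {
                bad.add(e.getKey()); // checkpoint rejected for this shard
            }
        }
        return bad;
    }
}
```

Any shard ID returned here would point at a corrupt checkpoint in the lease table rather than an internal KCL mix-up.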

@ryangardner
Author

ryangardner commented Sep 4, 2018

If I run aws kinesis get-shard-iterator --stream-name <stream name> --shard-id <shard-id> --shard-iterator-type AFTER_SEQUENCE_NUMBER --starting-sequence-number <sequence number>, it returns a shard iterator for those sequence numbers.

The mismatch is consistent for a specific Kinesis stream across multiple application restarts - but it's not consistent between environments. We have another 80-shard Kinesis stream in a different AWS account running the same code, and it has mismatches as well.

I dug through our log aggregation a bit more to get the count per unique ECS instance ID and also the VPC IP address of the ECS host (though the IP address part shouldn't matter at all).

It shows that some of the time the lease complaint happens on just one specific instance of the application, but other times multiple instances have the same complaint.

Here are the results from scanning our logs for the past 7 days on two different AWS accounts running the same code against two similar Kinesis streams (both 80 shards).

From our first environment / aws account

Encoded shard ID   Used in call with   Host IP           ECS instance ID   Count
190                205                 ip-172-22-3-78    6a2e220011d4       11541
222                196                 ip-172-22-1-23    fbf546269d26      161264
234                309                 ip-172-22-3-219   2d0e9eaddadd        9869
236                198                 ip-172-22-3-219   28975e005306      338416
245                234                 ip-172-22-5-39    9b76adc85be9        4011
253                229                 ip-172-22-1-23    a4aea1accd2c          17
253                229                 ip-172-22-5-247   bc9c12864689         339
260                229                 ip-172-22-1-167   04455b78fa22      297165
266                342                 ip-172-22-1-23    a4aea1accd2c         926
266                342                 ip-172-22-5-39    9b76adc85be9        3974
266                342                 ip-172-22-5-194   a40d444baf06        9806
266                342                 ip-172-22-1-23    8a00e48f440a       11570
266                342                 ip-172-22-1-23    0cbb3b128a6b       64131
270                258                 ip-172-22-3-219   310d05603251       50207
301                349                 ip-172-22-3-78    24ee540d4235      333078
302                238                 ip-172-22-5-194   a40d444baf06      119851
308                318                 ip-172-22-5-247   2eb66729d2cd      301885
333                278                 ip-172-22-3-219   28975e005306      338421

From our second environment / aws account

Encoded shard ID   Used in call with   Host IP           ECS instance ID   Count
177                281                 ip-172-22-3-59    fa58468448f0       46438
200                302                 ip-172-22-3-182   774623fa7d60      465526
302                192                 ip-172-22-3-154   235ee0e3676b       30105
298                264                 ip-172-22-1-13    e4600c447ba4      190447
288                242                 ip-172-22-1-205   5f4c61573241      196259
288                242                 ip-172-22-1-172   04c509806a3f       11826
280                281                 ip-172-22-5-180   e9623dbadb82      405401

One setting I have enabled in our scheduler is:

```java
configsBuilder.coordinatorConfig()
        .skipShardSyncAtWorkerInitializationIfLeasesExist(true)
```

I can turn that off if you think it might be involved - though it seems like it should only come into play at startup.

@pfifer
Contributor

pfifer commented Sep 4, 2018

You're not using the same application name for different streams?

@ryangardner
Author

No, we have only one application using the KCL, and it's only reading from the one stream. (Eventually we may add more applications processing this stream.)

We have this same setup running in two different accounts, with a different application name (and Kinesis stream) in each of the AWS accounts running this.

@pfifer
Contributor

pfifer commented Sep 4, 2018

Skip shard sync won't cause this sort of issue. What version of Java are you using? The stack traces look like a newer JVM. A newer version of Java shouldn't cause something like this, but it's at least one dimension to check.

Does the same behavior happen if you run the 1.9.1 version of the KCL against the stream?

@ryangardner
Author

We're using Java 1.10. I could try swapping it out for 1.9 or 1.8 if that would help.

We saw some issues on 1.9.1 where the client seemed to be crashing/restarting at odd points, which is part of what motivated us to move to 2.0 - the error messages we saw on 1.9.1 were more along the lines of:

Caught shutdown exception, skipping checkpoint. com.amazonaws.services.kinesis.clientlibrary.exceptions.ShutdownException: Can't update checkpoint - instance doesn't hold the lease for this shard

but they weren't nearly as frequent. I think there were some other errors in our logs on clients prior to 2.0 about the KCL processor shutting down. I'll have to dig around more to find the specific log messages.

@pfifer
Contributor

pfifer commented Sep 4, 2018

If trying out Java 1.8 is easy, that would be best. I don't know why Java 1.10 would be causing issues, but I can look at testing it in the future.

The error you're seeing can occur on 2.0 as well. It means the worker/scheduler no longer holds the lease. The 1.x version of the KCL unfortunately overloaded the shutdown(...) method with two different possible entry points:

  • Lease loss
    On lease loss you absolutely must not checkpoint
  • Shard end
    On shard end you absolutely must checkpoint

This was confusing, so 2.0 broke it out into two different methods:

  • leaseLost(...)
  • shardEnded(...)
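That contract can be sketched as follows. This is a minimal illustration with stand-in types, not the real KCL interfaces or checkpointer: lease loss must leave the checkpoint untouched (another worker owns the lease now), while shard end must write a final checkpoint so processing can move on to the child shards.

```java
import java.util.concurrent.atomic.AtomicReference;

public class ShardLifecycleSketch {

    /** Stand-in for the KCL checkpointer; records the last committed sequence. */
    public static class Checkpointer {
        private final AtomicReference<String> committed = new AtomicReference<>();
        public void checkpoint(String sequenceNumber) { committed.set(sequenceNumber); }
        public String last() { return committed.get(); }
    }

    private final Checkpointer checkpointer;
    private String latestProcessedSequence;

    public ShardLifecycleSketch(Checkpointer checkpointer) {
        this.checkpointer = checkpointer;
    }

    /** Track the most recent record we processed on this shard. */
    public void processRecord(String sequenceNumber) {
        latestProcessedSequence = sequenceNumber;
    }

    /** Mirrors 2.x leaseLost(...): another worker now owns the lease, so
     *  checkpointing here could clobber its progress. Deliberately do nothing. */
    public void leaseLost() {
        // must NOT checkpoint
    }

    /** Mirrors 2.x shardEnded(...): the shard is closed (e.g. after a reshard);
     *  a final checkpoint is required before the child shards can be consumed. */
    public void shardEnded() {
        checkpointer.checkpoint(latestProcessedSequence);
    }
}
```

Splitting the two paths into separate methods makes the "must checkpoint" vs. "must not checkpoint" rule impossible to confuse, which was the problem with the overloaded 1.x shutdown(...).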

@pfifer added the investigation and v1.x (Issues related to the 1.x version) labels Sep 4, 2018
@ryangardner
Author

An update on this issue. We haven't had a chance to deploy with Java 1.8 yet, but yesterday we did reshard one of the streams. The error was gone this morning, but looking at the logs, it went away around six or seven hours after we re-sharded, so it seems unlikely that resharding was what made the error go away.

I checked the logs on our other account, where we did not reshard, and on that account the error also went away without any changes on our side. On the account that we didn't reshard, it went away a few hours before it did on the other account.

Maybe something changed on the AWS side with regard to those streams? We certainly didn't change anything on our side yesterday.

@ryangardner
Author

Another update: I re-sharded the stream in the account we hadn't resharded yesterday, and the errors came back on that account but not on the other one.

We're investigating some issues with messages not getting through our pipeline and this particular issue is our prime suspect.

Looking at the shard-level metrics in CloudWatch, for the shard pair being complained about ("It encodes X, while it was used in a call to a shard with Y"), shard Y shows incoming bytes but no outgoing bytes.

Every other shard has both incoming and outgoing bytes.

At the point this error has happened, what mitigation strategy do we have? I'm assuming it wouldn't have advanced any checkpoints - if we kill/restart an instance that shows these errors, would another one pick up the lease and read those records?

@pfifer
Contributor

pfifer commented Sep 11, 2018

Are there any messages emitted from software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher? There should be warning messages about the exception.

@athielen2

Seeing the same thing in our application since updating to KCL 2.0. We have multiple shards with data coming in but none going out. When we redeploy to our nodes and the application restarts, some of those shards begin reading again from the last checkpoint. However, the issue often reappears on different shards.

Here is a graph showing one of the shards not reading:

[screenshot: shard-level graph showing one shard with incoming bytes but no reads]

Below is the error we are seeing from the FanOutRecordsPublisher:

java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000027, while it was used in a call to a shard with shardId-000000000008 (Service: Kinesis, Status Code: 400, Request ID: f6ab9909-d399-21e0-a2ba-74dd3fafa6ff)
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:593) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977) ~[?:1.8.0_181]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java:155) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:121) ~[measurement-snapshot-captor-stream-reader.jar:?]
at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_181]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[?:1.8.0_181]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$Completable.lambda$complete$0(MakeAsyncHttpRequestStage.java:200) ~[measurement-snapshot-captor-stream-reader.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000027, while it was used in a call to a shard with shardId-000000000008 (Service: Kinesis, Status Code: 400, Request ID: f6ab9909-d399-21e0-a2ba-74dd3fafa6ff)
at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:104) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:64) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:46) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:30) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.complete(SyncResponseHandlerAdapter.java:92) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.client.handler.BaseAsyncClientHandler$InterceptorCallingHttpResponseHandler.complete(BaseAsyncClientHandler.java:225) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.handleResponse(MakeAsyncHttpRequestStage.java:185) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:171) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:122) ~[measurement-snapshot-captor-stream-reader.jar:?]
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:255) ~[measurement-snapshot-captor-stream-reader.jar:?]
at com.typesafe.netty.HandlerPublisher.publishMessage(HandlerPublisher.java:362) ~[measurement-snapshot-captor-stream-reader.jar:?]
at com.typesafe.netty.HandlerPublisher.flushBuffer(HandlerPublisher.java:304) ~[measurement-snapshot-captor-stream-reader.jar:?]
at com.typesafe.netty.HandlerPublisher.receivedDemand(HandlerPublisher.java:258) ~[measurement-snapshot-captor-stream-reader.jar:?]
at com.typesafe.netty.HandlerPublisher.access$200(HandlerPublisher.java:41) ~[measurement-snapshot-captor-stream-reader.jar:?]
at com.typesafe.netty.HandlerPublisher$ChannelSubscription$1.run(HandlerPublisher.java:452) ~[measurement-snapshot-captor-stream-reader.jar:?]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[measurement-snapshot-captor-stream-reader.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) ~[measurement-snapshot-captor-stream-reader.jar:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464) ~[measurement-snapshot-captor-stream-reader.jar:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884) ~[measurement-snapshot-captor-stream-reader.jar:?]
... 1 more

@pfifer
Contributor

pfifer commented Sep 11, 2018

Just opened a PR that will add some logging around the initialization phase. I'll merge the PR in a bit; if possible, could you try out the new version?

If you do, you'll need to enable debug logging for the following classes:

  • software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher
  • software.amazon.kinesis.checkpoint.dynamodb.DynamoDBCheckpointer
  • software.amazon.kinesis.lifecycle.InitializeTask

This will log the sequence number returned from DynamoDB, handled by the InitializeTask, and provided to the FanOutRecordsPublisher for initialization.
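With Logback, for example (a sketch only — adjust to whatever logging framework you actually use), a configuration that enables DEBUG for just those classes and includes the thread name in the output pattern might look like:

```xml
<!-- Sketch: DEBUG only for the KCL classes named above; everything else at INFO -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %thread included because thread IDs help correlate the log lines -->
      <pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <logger name="software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher" level="DEBUG"/>
  <logger name="software.amazon.kinesis.checkpoint.dynamodb.DynamoDBCheckpointer" level="DEBUG"/>
  <logger name="software.amazon.kinesis.lifecycle.InitializeTask" level="DEBUG"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```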

@ryangardner
Author

My short-term workaround was going to be adding a System.exit() in my shard consumer when that error pops up - but I can push out a version with this logging turned on to get that information to you.

@pfifer
Contributor

pfifer commented Sep 11, 2018

Are you comfortable building from the source or do you need a release to be available in Central?

@ryangardner
Author

I can build from source and put it into my internal artifact server, no big deal.

@pfifer
Contributor

pfifer commented Sep 11, 2018

The change is now in master if you'd like to build it. You'll need to enable debug logging for the classes I listed above. This should log the sequence number retrieved from DynamoDB and used to initialize the FanOutRecordsPublisher. It would also be helpful to have thread IDs in the logs.

You should see something like:

2018-09-11 11:10:29,974 [ShardRecordProcessor-0000] DEBUG s.a.k.c.d.DynamoDBCheckpointer [NONE] [NONE] - [shardId-000000000000] Retrieved lease => Lease(leaseKey=shardId-000000000000, leaseOwner=d25ed3d6-7f0f-4b4a-9ca7-1bbb90430afc, leaseCounter=146, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49587497311274533994574834252742144236107130636007899138,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[]) 
2018-09-11 11:10:29,975 [ShardRecordProcessor-0000] DEBUG s.a.kinesis.lifecycle.InitializeTask [NONE] [NONE] - [shardId-000000000000]: Checkpoint: {SequenceNumber: 49587497311274533994574834252742144236107130636007899138,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null) 
2018-09-11 11:10:29,975 [ShardRecordProcessor-0000] DEBUG s.a.k.r.f.FanOutRecordsPublisher [NONE] [NONE] - [shardId-000000000000] Initializing Publisher @ Sequence: {SequenceNumber: 49587497311274533994574834252742144236107130636007899138,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null) 

when the lease is first acquired. If the problem is occurring at that point, you should get the error messages shortly afterwards.

Thanks for your help in tracking this down.

@athielen2

@pfifer would it be possible to get this available through Central?

@athielen2

athielen2 commented Sep 11, 2018

Don't know if this is helpful or not, but we also noticed that for each shard that is not reading, the ownerSwitchesSinceCheckpoint column in the checkpoint table is non-zero:

[screenshot: lease table rows with non-zero ownerSwitchesSinceCheckpoint]

@pfifer
Contributor

pfifer commented Sep 11, 2018

We normally don't publish snapshots to Central, but I can investigate doing so tomorrow. Otherwise it would need to be a full release, and that would take a little more time.

@ryangardner
Author

Here are some logs:

(Note that the stack traces might be in a weird order - I exported the logs from my log aggregation tool, which puts them in reverse chronological order, so I ran tail -r on the exported logs to get them back into chronological order, but that probably did strange things to multi-line messages like stack traces.)

2018-09-11 22:27:39.506  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Sleeping ...
2018-09-11 22:28:23.665  INFO 7 --- [oordinator-0000] s.a.k.l.dynamodb.DynamoDBLeaseTaker      : Worker fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc saw 162 total leases, 52 available leases, 10 workers. Target is 17 leases, I have 0 leases, I will take 17 leases
2018-09-11 22:28:23.807  INFO 7 --- [oordinator-0000] s.a.k.l.dynamodb.DynamoDBLeaseTaker      : Worker fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc successfully took 14 leases: shardId-000000000427, shardId-000000000436, shardId-000000000401, shardId-000000000488, shardId-000000000498, shardId-000000000377, shardId-000000000497, shardId-000000000374, shardId-000000000363, shardId-000000000450, shardId-000000000384, shardId-000000000492, shardId-000000000490, shardId-000000000507
2018-09-11 22:28:23.807  INFO 7 --- [oordinator-0000] s.a.k.l.dynamodb.DynamoDBLeaseTaker      : Worker fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc failed to take 3 leases: shardId-000000000448, shardId-000000000359, shardId-000000000462
2018-09-11 22:28:24.774  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000427, concurrencyToken=e0e36f55-e9bf-4175-80a4-bc10af24e7d0, parentShardIds=[shardId-000000000269], checkpoint={SequenceNumber: 49588160275566154496829720989890449304599334072021424818,SubsequenceNumber: 0})
2018-09-11 22:28:24.776  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000436, concurrencyToken=6bfc7a02-85dc-4aeb-bfaa-78bee1f1f612, parentShardIds=[shardId-000000000276], checkpoint={SequenceNumber: 49588160294677893131970461433826345716085141884089932610,SubsequenceNumber: 0})
2018-09-11 22:28:24.777  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000401, concurrencyToken=b695f3bb-1229-413c-ab29-fb1579322db5, parentShardIds=[shardId-000000000242], checkpoint={SequenceNumber: 49588160223248606261076875455806734688981848970781268242,SubsequenceNumber: 0})
2018-09-11 22:28:24.779  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000488, concurrencyToken=f1dc693a-be84-4252-848e-0d083d3646a9, parentShardIds=[shardId-000000000330], checkpoint={SequenceNumber: 49588160296506554238249968084872546083315933702822828898,SubsequenceNumber: 0})
2018-09-11 22:28:24.780  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000377, concurrencyToken=2521875e-7ca7-4f20-84a8-98c3981dbb5b, parentShardIds=[shardId-000000000218], checkpoint={SequenceNumber: 49588160182750452980545263886462202483121859579256772498,SubsequenceNumber: 0})
2018-09-11 22:28:24.780  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000498, concurrencyToken=31f7976f-c846-4181-83a3-b0a81963d9fa, parentShardIds=[shardId-000000000341], checkpoint={SequenceNumber: 49588160412024414366638603992952345118213116918692323106,SubsequenceNumber: 0})
2018-09-11 22:28:24.783  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000497, concurrencyToken=02c65d17-682d-4f7c-a172-2ebd9466cab8, parentShardIds=[shardId-000000000338], checkpoint={SequenceNumber: 49588160410574865928734109925154736845483584532875583250,SubsequenceNumber: 0})
2018-09-11 22:28:24.783  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000363, concurrencyToken=c1a4e7fc-55d7-4d78-9ccf-56a4bb4a60c1, parentShardIds=[shardId-000000000205], checkpoint={SequenceNumber: 49588160153179664847293657824269075993370095812807235250,SubsequenceNumber: 0})
2018-09-11 22:28:24.784  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000374, concurrencyToken=0020a1f2-bdf8-4c5a-84c0-8eb0c84ca02d, parentShardIds=[shardId-000000000214], checkpoint={SequenceNumber: 49588160179829055359537752244981235300533442529063868258,SubsequenceNumber: 0})
2018-09-11 22:28:24.784  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000384, concurrencyToken=321aedaa-310a-4f9b-8a87-9f24e6079425, parentShardIds=[shardId-000000000226], checkpoint={SequenceNumber: 49588160198249470893524046979762061216566254160974977026,SubsequenceNumber: 0})
2018-09-11 22:28:24.785  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000450, concurrencyToken=b96eb115-c179-4803-87a5-2bdc5cbb6c75, parentShardIds=[shardId-000000000293], checkpoint={SequenceNumber: 49588160317469254724868758295573788814794062754687753250,SubsequenceNumber: 0})
2018-09-11 22:28:24.788  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000492, concurrencyToken=ccbd9153-f1a2-4f73-9806-abb5d5ca7561, parentShardIds=[shardId-000000000332], checkpoint={SequenceNumber: 49588160406181619124623577364989381944926527427488784066,SubsequenceNumber: 0})
2018-09-11 22:28:24.789  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000490, concurrencyToken=fa5f73e5-4174-463e-a354-87c568510674, parentShardIds=[shardId-000000000333], checkpoint={SequenceNumber: 49588160394719036092578829266126988628492334706561392290,SubsequenceNumber: 0})
2018-09-11 22:28:24.790  INFO 7 --- [      Thread-22] s.amazon.kinesis.coordinator.Scheduler   : Created new shardConsumer for : ShardInfo(shardId=shardId-000000000507, concurrencyToken=abc794bc-9a09-494b-b78e-1371f601ee05, parentShardIds=[shardId-000000000349], checkpoint={SequenceNumber: 49588160429708905309073384494638966274635349735443013554,SubsequenceNumber: 0})
2018-09-11 22:28:24.820  INFO 7 --- [dProcessor-0008] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000214. Not blocking on completion of this shard.
2018-09-11 22:28:24.821  INFO 7 --- [dProcessor-0008] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000214] of shard shardId-000000000374
2018-09-11 22:28:24.842  INFO 7 --- [dProcessor-0002] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000242. Not blocking on completion of this shard.
2018-09-11 22:28:24.842  INFO 7 --- [dProcessor-0002] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000242] of shard shardId-000000000401
2018-09-11 22:28:24.848  INFO 7 --- [dProcessor-0005] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000341. Not blocking on completion of this shard.
2018-09-11 22:28:24.848  INFO 7 --- [dProcessor-0005] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000341] of shard shardId-000000000498
2018-09-11 22:28:24.854  INFO 7 --- [dProcessor-0010] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000293. Not blocking on completion of this shard.
2018-09-11 22:28:24.854  INFO 7 --- [dProcessor-0010] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000293] of shard shardId-000000000450
2018-09-11 22:28:24.858  INFO 7 --- [dProcessor-0001] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000276. Not blocking on completion of this shard.
2018-09-11 22:28:24.858  INFO 7 --- [dProcessor-0001] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000276] of shard shardId-000000000436
2018-09-11 22:28:24.860  INFO 7 --- [dProcessor-0006] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000338. Not blocking on completion of this shard.
2018-09-11 22:28:24.860  INFO 7 --- [dProcessor-0006] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000338] of shard shardId-000000000497
2018-09-11 22:28:24.860  INFO 7 --- [dProcessor-0000] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000269. Not blocking on completion of this shard.
2018-09-11 22:28:24.861  INFO 7 --- [dProcessor-0000] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000269] of shard shardId-000000000427
2018-09-11 22:28:24.861  INFO 7 --- [dProcessor-0009] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000226. Not blocking on completion of this shard.
2018-09-11 22:28:24.861  INFO 7 --- [dProcessor-0009] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000226] of shard shardId-000000000384
2018-09-11 22:28:24.862  INFO 7 --- [dProcessor-0004] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000218. Not blocking on completion of this shard.
2018-09-11 22:28:24.862  INFO 7 --- [dProcessor-0004] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000218] of shard shardId-000000000377
2018-09-11 22:28:24.863  INFO 7 --- [dProcessor-0007] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000205. Not blocking on completion of this shard.
2018-09-11 22:28:24.863  INFO 7 --- [dProcessor-0007] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000205] of shard shardId-000000000363
2018-09-11 22:28:24.867  INFO 7 --- [dProcessor-0011] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000332. Not blocking on completion of this shard.
2018-09-11 22:28:24.867  INFO 7 --- [dProcessor-0011] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000332] of shard shardId-000000000492
2018-09-11 22:28:24.868  INFO 7 --- [dProcessor-0012] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000333. Not blocking on completion of this shard.
2018-09-11 22:28:24.868  INFO 7 --- [dProcessor-0012] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000333] of shard shardId-000000000490
2018-09-11 22:28:24.873  INFO 7 --- [dProcessor-0003] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000330. Not blocking on completion of this shard.
2018-09-11 22:28:24.873  INFO 7 --- [dProcessor-0003] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000330] of shard shardId-000000000488
2018-09-11 22:28:24.905  INFO 7 --- [dProcessor-0013] s.a.k.lifecycle.BlockOnParentShardTask   : No lease found for shard shardId-000000000349. Not blocking on completion of this shard.
2018-09-11 22:28:24.905  INFO 7 --- [dProcessor-0013] s.a.k.lifecycle.BlockOnParentShardTask   : No need to block on parents [shardId-000000000349] of shard shardId-000000000507
2018-09-11 22:28:25.794 DEBUG 7 --- [dProcessor-0013] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000427, concurrencyToken=e0e36f55-e9bf-4175-80a4-bc10af24e7d0, parentShardIds=[shardId-000000000269], checkpoint={SequenceNumber: 49588160275566154496829720989890449304599334072021424818,SubsequenceNumber: 0})
2018-09-11 22:28:25.794 DEBUG 7 --- [dProcessor-0010] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000450, concurrencyToken=b96eb115-c179-4803-87a5-2bdc5cbb6c75, parentShardIds=[shardId-000000000293], checkpoint={SequenceNumber: 49588160317469254724868758295573788814794062754687753250,SubsequenceNumber: 0})
2018-09-11 22:28:25.794 DEBUG 7 --- [dProcessor-0008] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000507, concurrencyToken=abc794bc-9a09-494b-b78e-1371f601ee05, parentShardIds=[shardId-000000000349], checkpoint={SequenceNumber: 49588160429708905309073384494638966274635349735443013554,SubsequenceNumber: 0})
2018-09-11 22:28:25.794 DEBUG 7 --- [dProcessor-0002] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000490, concurrencyToken=fa5f73e5-4174-463e-a354-87c568510674, parentShardIds=[shardId-000000000333], checkpoint={SequenceNumber: 49588160394719036092578829266126988628492334706561392290,SubsequenceNumber: 0})
2018-09-11 22:28:25.794 DEBUG 7 --- [dProcessor-0003] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000436, concurrencyToken=6bfc7a02-85dc-4aeb-bfaa-78bee1f1f612, parentShardIds=[shardId-000000000276], checkpoint={SequenceNumber: 49588160294677893131970461433826345716085141884089932610,SubsequenceNumber: 0})
2018-09-11 22:28:25.794 DEBUG 7 --- [dProcessor-0009] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000497, concurrencyToken=02c65d17-682d-4f7c-a172-2ebd9466cab8, parentShardIds=[shardId-000000000338], checkpoint={SequenceNumber: 49588160410574865928734109925154736845483584532875583250,SubsequenceNumber: 0})
2018-09-11 22:28:25.795 DEBUG 7 --- [dProcessor-0001] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000384, concurrencyToken=321aedaa-310a-4f9b-8a87-9f24e6079425, parentShardIds=[shardId-000000000226], checkpoint={SequenceNumber: 49588160198249470893524046979762061216566254160974977026,SubsequenceNumber: 0})
2018-09-11 22:28:25.795 DEBUG 7 --- [dProcessor-0000] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000363, concurrencyToken=c1a4e7fc-55d7-4d78-9ccf-56a4bb4a60c1, parentShardIds=[shardId-000000000205], checkpoint={SequenceNumber: 49588160153179664847293657824269075993370095812807235250,SubsequenceNumber: 0})
2018-09-11 22:28:25.795 DEBUG 7 --- [dProcessor-0005] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000492, concurrencyToken=ccbd9153-f1a2-4f73-9806-abb5d5ca7561, parentShardIds=[shardId-000000000332], checkpoint={SequenceNumber: 49588160406181619124623577364989381944926527427488784066,SubsequenceNumber: 0})
2018-09-11 22:28:25.795 DEBUG 7 --- [dProcessor-0007] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000377, concurrencyToken=2521875e-7ca7-4f20-84a8-98c3981dbb5b, parentShardIds=[shardId-000000000218], checkpoint={SequenceNumber: 49588160182750452980545263886462202483121859579256772498,SubsequenceNumber: 0})
2018-09-11 22:28:25.795 DEBUG 7 --- [dProcessor-0004] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000498, concurrencyToken=31f7976f-c846-4181-83a3-b0a81963d9fa, parentShardIds=[shardId-000000000341], checkpoint={SequenceNumber: 49588160412024414366638603992952345118213116918692323106,SubsequenceNumber: 0})
2018-09-11 22:28:25.795 DEBUG 7 --- [dProcessor-0011] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000488, concurrencyToken=f1dc693a-be84-4252-848e-0d083d3646a9, parentShardIds=[shardId-000000000330], checkpoint={SequenceNumber: 49588160296506554238249968084872546083315933702822828898,SubsequenceNumber: 0})
2018-09-11 22:28:25.795 DEBUG 7 --- [dProcessor-0006] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000374, concurrencyToken=0020a1f2-bdf8-4c5a-84c0-8eb0c84ca02d, parentShardIds=[shardId-000000000214], checkpoint={SequenceNumber: 49588160179829055359537752244981235300533442529063868258,SubsequenceNumber: 0})
2018-09-11 22:28:25.796 DEBUG 7 --- [dProcessor-0012] s.a.kinesis.lifecycle.InitializeTask     : Initializing ShardId ShardInfo(shardId=shardId-000000000401, concurrencyToken=b695f3bb-1229-413c-ab29-fb1579322db5, parentShardIds=[shardId-000000000242], checkpoint={SequenceNumber: 49588160223248606261076875455806734688981848970781268242,SubsequenceNumber: 0})
2018-09-11 22:28:25.804 DEBUG 7 --- [dProcessor-0000] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000363] Retrieved lease => Lease(leaseKey=shardId-000000000363, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=535, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160153179664847293657824269075993370095812807235250,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000205])
2018-09-11 22:28:25.804 DEBUG 7 --- [dProcessor-0003] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000436] Retrieved lease => Lease(leaseKey=shardId-000000000436, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=537, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160294677893131970461433826345716085141884089932610,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000276])
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0000] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000363]: Checkpoint: {SequenceNumber: 49588160153179664847293657824269075993370095812807235250,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0003] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000436]: Checkpoint: {SequenceNumber: 49588160294677893131970461433826345716085141884089932610,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0003] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000436] Initializing Publisher @ Sequence: {SequenceNumber: 49588160294677893131970461433826345716085141884089932610,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0011] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000488] Retrieved lease => Lease(leaseKey=shardId-000000000488, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=421, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160296506554238249968084872546083315933702822828898,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=5, parentShardIds=[shardId-000000000330])
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0003] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0000] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000363] Initializing Publisher @ Sequence: {SequenceNumber: 49588160153179664847293657824269075993370095812807235250,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0002] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000490] Retrieved lease => Lease(leaseKey=shardId-000000000490, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=438, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160394719036092578829266126988628492334706561392290,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000333])
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0000] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.805 DEBUG 7 --- [dProcessor-0011] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000488]: Checkpoint: {SequenceNumber: 49588160296506554238249968084872546083315933702822828898,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0002] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000490]: Checkpoint: {SequenceNumber: 49588160394719036092578829266126988628492334706561392290,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0002] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000490] Initializing Publisher @ Sequence: {SequenceNumber: 49588160394719036092578829266126988628492334706561392290,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0002] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0009] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000497] Retrieved lease => Lease(leaseKey=shardId-000000000497, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=555, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160410574865928734109925154736845483584532875583250,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000338])
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0008] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000507] Retrieved lease => Lease(leaseKey=shardId-000000000507, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=539, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160429708905309073384494638966274635349735443013554,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000349])
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0009] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000497]: Checkpoint: {SequenceNumber: 49588160410574865928734109925154736845483584532875583250,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0009] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000497] Initializing Publisher @ Sequence: {SequenceNumber: 49588160410574865928734109925154736845483584532875583250,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0008] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000507]: Checkpoint: {SequenceNumber: 49588160429708905309073384494638966274635349735443013554,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0009] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0008] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000507] Initializing Publisher @ Sequence: {SequenceNumber: 49588160429708905309073384494638966274635349735443013554,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0008] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0013] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000427] Retrieved lease => Lease(leaseKey=shardId-000000000427, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=515, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160275566154496829720989890449304599334072021424818,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000269])
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0012] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000401] Retrieved lease => Lease(leaseKey=shardId-000000000401, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=529, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160223248606261076875455806734688981848970781268242,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000242])
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0013] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000427]: Checkpoint: {SequenceNumber: 49588160275566154496829720989890449304599334072021424818,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0012] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000401]: Checkpoint: {SequenceNumber: 49588160223248606261076875455806734688981848970781268242,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0013] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000427] Initializing Publisher @ Sequence: {SequenceNumber: 49588160275566154496829720989890449304599334072021424818,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.806 DEBUG 7 --- [dProcessor-0012] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000401] Initializing Publisher @ Sequence: {SequenceNumber: 49588160223248606261076875455806734688981848970781268242,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0012] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0011] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000488] Initializing Publisher @ Sequence: {SequenceNumber: 49588160296506554238249968084872546083315933702822828898,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0013] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0011] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0006] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000374] Retrieved lease => Lease(leaseKey=shardId-000000000374, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=587, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160179829055359537752244981235300533442529063868258,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000214])
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0004] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000498] Retrieved lease => Lease(leaseKey=shardId-000000000498, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=520, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160412024414366638603992952345118213116918692323106,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000341])
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0006] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000374]: Checkpoint: {SequenceNumber: 49588160179829055359537752244981235300533442529063868258,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0007] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000377] Retrieved lease => Lease(leaseKey=shardId-000000000377, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=537, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160182750452980545263886462202483121859579256772498,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000218])
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0006] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000374] Initializing Publisher @ Sequence: {SequenceNumber: 49588160179829055359537752244981235300533442529063868258,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0007] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000377]: Checkpoint: {SequenceNumber: 49588160182750452980545263886462202483121859579256772498,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0004] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000498]: Checkpoint: {SequenceNumber: 49588160412024414366638603992952345118213116918692323106,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0006] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0004] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000498] Initializing Publisher @ Sequence: {SequenceNumber: 49588160412024414366638603992952345118213116918692323106,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0007] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000377] Initializing Publisher @ Sequence: {SequenceNumber: 49588160182750452980545263886462202483121859579256772498,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0007] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0004] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.807  INFO 7 --- [dProcessor-0007] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000377, extended sequence number: {SequenceNumber: 49588160182750452980545263886462202483121859579256772498,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.807  INFO 7 --- [dProcessor-0006] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000374, extended sequence number: {SequenceNumber: 49588160179829055359537752244981235300533442529063868258,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.807  INFO 7 --- [dProcessor-0003] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000436, extended sequence number: {SequenceNumber: 49588160294677893131970461433826345716085141884089932610,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0007] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0006] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.807 DEBUG 7 --- [dProcessor-0003] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.807  INFO 7 --- [dProcessor-0004] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000498, extended sequence number: {SequenceNumber: 49588160412024414366638603992952345118213116918692323106,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0001] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000384] Retrieved lease => Lease(leaseKey=shardId-000000000384, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=525, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160198249470893524046979762061216566254160974977026,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000226])
2018-09-11 22:28:25.808  INFO 7 --- [dProcessor-0002] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000490, extended sequence number: {SequenceNumber: 49588160394719036092578829266126988628492334706561392290,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0002] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.808  INFO 7 --- [dProcessor-0012] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000401, extended sequence number: {SequenceNumber: 49588160223248606261076875455806734688981848970781268242,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808  INFO 7 --- [dProcessor-0009] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000497, extended sequence number: {SequenceNumber: 49588160410574865928734109925154736845483584532875583250,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0012] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.808  INFO 7 --- [dProcessor-0013] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000427, extended sequence number: {SequenceNumber: 49588160275566154496829720989890449304599334072021424818,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0013] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.808  INFO 7 --- [dProcessor-0000] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000363, extended sequence number: {SequenceNumber: 49588160153179664847293657824269075993370095812807235250,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0000] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0001] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000384]: Checkpoint: {SequenceNumber: 49588160198249470893524046979762061216566254160974977026,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0010] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000450] Retrieved lease => Lease(leaseKey=shardId-000000000450, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=543, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160317469254724868758295573788814794062754687753250,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000293])
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0001] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000384] Initializing Publisher @ Sequence: {SequenceNumber: 49588160198249470893524046979762061216566254160974977026,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0001] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0009] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.808  INFO 7 --- [dProcessor-0008] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000507, extended sequence number: {SequenceNumber: 49588160429708905309073384494638966274635349735443013554,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808  INFO 7 --- [dProcessor-0011] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000488, extended sequence number: {SequenceNumber: 49588160296506554238249968084872546083315933702822828898,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.808 DEBUG 7 --- [dProcessor-0010] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000450]: Checkpoint: {SequenceNumber: 49588160317469254724868758295573788814794062754687753250,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0004] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0008] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0010] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000450] Initializing Publisher @ Sequence: {SequenceNumber: 49588160317469254724868758295573788814794062754687753250,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0010] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.809  INFO 7 --- [dProcessor-0001] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000384, extended sequence number: {SequenceNumber: 49588160198249470893524046979762061216566254160974977026,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.809  INFO 7 --- [dProcessor-0010] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000450, extended sequence number: {SequenceNumber: 49588160317469254724868758295573788814794062754687753250,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0001] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0010] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0005] s.a.k.c.dynamodb.DynamoDBCheckpointer    : [shardId-000000000492] Retrieved lease => Lease(leaseKey=shardId-000000000492, leaseOwner=fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc, leaseCounter=521, concurrencyToken=null, lastCounterIncrementNanos=null, checkpoint={SequenceNumber: 49588160406181619124623577364989381944926527427488784066,SubsequenceNumber: 0}, pendingCheckpoint=null, ownerSwitchesSinceCheckpoint=1, parentShardIds=[shardId-000000000332])
2018-09-11 22:28:25.809 DEBUG 7 --- [dProcessor-0005] s.a.kinesis.lifecycle.InitializeTask     : [shardId-000000000492]: Checkpoint: {SequenceNumber: 49588160406181619124623577364989381944926527427488784066,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.810 DEBUG 7 --- [dProcessor-0005] s.a.k.r.fanout.FanOutRecordsPublisher    : [shardId-000000000492] Initializing Publisher @ Sequence: {SequenceNumber: 49588160406181619124623577364989381944926527427488784066,SubsequenceNumber: 0} -- Initial Position: InitialPositionInStreamExtended(position=LATEST, timestamp=null)
2018-09-11 22:28:25.810 DEBUG 7 --- [dProcessor-0011] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:25.810 DEBUG 7 --- [dProcessor-0005] s.a.kinesis.lifecycle.InitializeTask     : Calling the record processor initialize().
2018-09-11 22:28:25.810  INFO 7 --- [dProcessor-0005] c.c.r.rdsloader.RdsProcessor             : InitializationInput received was shardId: shardId-000000000492, extended sequence number: {SequenceNumber: 49588160406181619124623577364989381944926527427488784066,SubsequenceNumber: 0}, pending checkpoint sequence number: null
2018-09-11 22:28:25.810 DEBUG 7 --- [dProcessor-0005] s.a.kinesis.lifecycle.InitializeTask     : Record processor initialize() completed.
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0001] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000436: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000436-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0012] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000401: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819250Z id: shardId-000000000401-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0004] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000498: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000498-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0009] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000497: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000497-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0005] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000488: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819355Z id: shardId-000000000488-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0007] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000490: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819277Z id: shardId-000000000490-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0011] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000427: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000427-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0013] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000374: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819318Z id: shardId-000000000374-1 -- Starting subscribe to shard
2018-09-11 22:28:27.819 DEBUG 7 --- [dProcessor-0003] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000492: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.819846Z id: shardId-000000000492-1 -- Starting subscribe to shard
2018-09-11 22:28:27.821 DEBUG 7 --- [dProcessor-0000] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000363: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.821576Z id: shardId-000000000363-1 -- Starting subscribe to shard
2018-09-11 22:28:27.821 DEBUG 7 --- [dProcessor-0014] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000507: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.821628Z id: shardId-000000000507-1 -- Starting subscribe to shard
2018-09-11 22:28:27.821 DEBUG 7 --- [dProcessor-0006] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000450: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.821688Z id: shardId-000000000450-1 -- Starting subscribe to shard
2018-09-11 22:28:27.821 DEBUG 7 --- [dProcessor-0002] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000384: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.821640Z id: shardId-000000000384-1 -- Starting subscribe to shard
2018-09-11 22:28:27.821 DEBUG 7 --- [dProcessor-0010] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000377: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:27.821710Z id: shardId-000000000377-1 -- Starting subscribe to shard
2018-09-11 22:28:27.865 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000507: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.821628Z id: shardId-000000000507-1 -- Subscribe
2018-09-11 22:28:27.865 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000507: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.821628Z id: shardId-000000000507-1 -- creating record subscription
2018-09-11 22:28:27.869 DEBUG 7 --- [tyEventLoop-0-1] anOutRecordsPublisher$RecordSubscription : shardId-000000000507: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.821628Z id: shardId-000000000507-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.880 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000401: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819250Z id: shardId-000000000401-1 -- Subscribe
2018-09-11 22:28:27.880 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000401: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819250Z id: shardId-000000000401-1 -- creating record subscription
2018-09-11 22:28:27.880 DEBUG 7 --- [tyEventLoop-0-0] anOutRecordsPublisher$RecordSubscription : shardId-000000000401: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819250Z id: shardId-000000000401-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.881 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000497: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819217Z id: shardId-000000000497-1 -- Subscribe
2018-09-11 22:28:27.881 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000497: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000497-1 -- creating record subscription
2018-09-11 22:28:27.881 DEBUG 7 --- [tyEventLoop-0-3] anOutRecordsPublisher$RecordSubscription : shardId-000000000497: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000497-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.883 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000427: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819217Z id: shardId-000000000427-1 -- Subscribe
2018-09-11 22:28:27.883 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000427: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000427-1 -- creating record subscription
2018-09-11 22:28:27.883 DEBUG 7 --- [tyEventLoop-0-2] anOutRecordsPublisher$RecordSubscription : shardId-000000000427: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000427-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.884 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000427: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000427-1 -- Response received
2018-09-11 22:28:27.884 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000507: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.821628Z id: shardId-000000000507-1 -- Response received
2018-09-11 22:28:27.884 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000497: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000497-1 -- Response received
2018-09-11 22:28:27.884 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000401: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819250Z id: shardId-000000000401-1 -- Response received
2018-09-11 22:28:27.885 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000498: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819217Z id: shardId-000000000498-1 -- Subscribe
2018-09-11 22:28:27.885 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000498: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000498-1 -- creating record subscription
2018-09-11 22:28:27.885 DEBUG 7 --- [tyEventLoop-0-0] anOutRecordsPublisher$RecordSubscription : shardId-000000000498: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000498-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.885 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000498: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000498-1 -- Response received
2018-09-11 22:28:27.889 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000490: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819277Z id: shardId-000000000490-1 -- Subscribe
2018-09-11 22:28:27.890 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000490: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819277Z id: shardId-000000000490-1 -- creating record subscription
2018-09-11 22:28:27.890 DEBUG 7 --- [tyEventLoop-0-0] anOutRecordsPublisher$RecordSubscription : shardId-000000000490: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819277Z id: shardId-000000000490-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.890 DEBUG 7 --- [tyEventLoop-0-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000490: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819277Z id: shardId-000000000490-1 -- Response received
2018-09-11 22:28:27.891 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000377: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.821710Z id: shardId-000000000377-1 -- Subscribe
2018-09-11 22:28:27.891 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000377: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.821710Z id: shardId-000000000377-1 -- creating record subscription
2018-09-11 22:28:27.891 DEBUG 7 --- [tyEventLoop-0-2] anOutRecordsPublisher$RecordSubscription : shardId-000000000377: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.821710Z id: shardId-000000000377-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.892 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000377: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.821710Z id: shardId-000000000377-1 -- Response received
2018-09-11 22:28:27.893 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000363: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.821576Z id: shardId-000000000363-1 -- Subscribe
2018-09-11 22:28:27.894 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000363: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.821576Z id: shardId-000000000363-1 -- creating record subscription
2018-09-11 22:28:27.894 DEBUG 7 --- [tyEventLoop-0-3] anOutRecordsPublisher$RecordSubscription : shardId-000000000363: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.821576Z id: shardId-000000000363-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.894 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000363: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.821576Z id: shardId-000000000363-1 -- Response received
2018-09-11 22:28:27.902 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000492: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819846Z id: shardId-000000000492-1 -- Subscribe
2018-09-11 22:28:27.902 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000492: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819846Z id: shardId-000000000492-1 -- creating record subscription
2018-09-11 22:28:27.903 DEBUG 7 --- [tyEventLoop-0-3] anOutRecordsPublisher$RecordSubscription : shardId-000000000492: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819846Z id: shardId-000000000492-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.903 DEBUG 7 --- [tyEventLoop-0-3] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000492: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819846Z id: shardId-000000000492-1 -- Response received
2018-09-11 22:28:27.911 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000374: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819318Z id: shardId-000000000374-1 -- Subscribe
2018-09-11 22:28:27.911 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000374: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819318Z id: shardId-000000000374-1 -- creating record subscription
2018-09-11 22:28:27.911 DEBUG 7 --- [tyEventLoop-0-1] anOutRecordsPublisher$RecordSubscription : shardId-000000000374: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819318Z id: shardId-000000000374-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.911 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000374: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819318Z id: shardId-000000000374-1 -- Response received
2018-09-11 22:28:27.916 DEBUG 7 --- [nc-response-1-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000488: [SubscriptionLifetime]: (RecordFlow#exceptionOccurred) @ 2018-09-11T22:28:27.819355Z id: shardId-000000000488-1 -- java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java:155) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:121) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$Completable.lambda$complete$0(MakeAsyncHttpRequestStage.java:200) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
	at java.base/java.lang.Thread.run(Thread.java:844) ~[na:na]
Caused by: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:104) ~[kinesis-2.0.1.jar!/:na]
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:64) ~[kinesis-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:46) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:30) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.complete(SyncResponseHandlerAdapter.java:92) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.client.handler.BaseAsyncClientHandler$InterceptorCallingHttpResponseHandler.complete(BaseAsyncClientHandler.java:225) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.handleResponse(MakeAsyncHttpRequestStage.java:185) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:171) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:122) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:255) ~[netty-nio-client-2.0.1.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.complete(HandlerPublisher.java:408) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.handlerRemoved(HandlerPublisher.java:395) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:670) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:505) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:451) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at com.typesafe.netty.http.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:328) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:189) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:165) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:148) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1407) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1177) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1221) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	... 1 common frames omitted
2018-09-11 22:28:27.917  WARN 7 --- [nc-response-1-0] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000488: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2018-09-11T22:28:27.819355Z id: shardId-000000000488-1 -- CompletionException/software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
2018-09-11 22:28:27.918 DEBUG 7 --- [nc-response-1-0] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000488: availableQueueSpace zeroing from 8
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:64) ~[kinesis-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:46) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:30) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.complete(SyncResponseHandlerAdapter.java:92) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.client.handler.BaseAsyncClientHandler$InterceptorCallingHttpResponseHandler.complete(BaseAsyncClientHandler.java:225) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.handleResponse(MakeAsyncHttpRequestStage.java:185) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:171) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:122) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:255) ~[netty-nio-client-2.0.1.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.complete(HandlerPublisher.java:408) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.handlerRemoved(HandlerPublisher.java:395) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:670) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:505) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:451) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at com.typesafe.netty.http.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:328) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:189) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:165) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:148) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1407) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1177) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1221) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	... 1 common frames omitted
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:104) ~[kinesis-2.0.1.jar!/:na]
Caused by: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
	at java.base/java.lang.Thread.run(Thread.java:844) [na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) [na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$Completable.lambda$complete$0(MakeAsyncHttpRequestStage.java:200) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:121) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java:155) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[na:na]
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
2018-09-11 22:28:27.921  WARN 7 --- [dProcessor-0014] s.a.kinesis.lifecycle.ShardConsumer      : shardId-000000000488: onError().  Cancelling subscription, and marking self as failed.
2018-09-11 22:28:27.921 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000436: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.819217Z id: shardId-000000000436-1 -- Subscribe
2018-09-11 22:28:27.922 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000436: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000436-1 -- creating record subscription
2018-09-11 22:28:27.922 DEBUG 7 --- [tyEventLoop-0-1] anOutRecordsPublisher$RecordSubscription : shardId-000000000436: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000436-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.922 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000436: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.819217Z id: shardId-000000000436-1 -- Response received
2018-09-11 22:28:27.947 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000384: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.821640Z id: shardId-000000000384-1 -- Subscribe
2018-09-11 22:28:27.947 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000384: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.821640Z id: shardId-000000000384-1 -- creating record subscription
2018-09-11 22:28:27.947 DEBUG 7 --- [tyEventLoop-0-1] anOutRecordsPublisher$RecordSubscription : shardId-000000000384: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.821640Z id: shardId-000000000384-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:27.947 DEBUG 7 --- [tyEventLoop-0-1] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000384: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.821640Z id: shardId-000000000384-1 -- Response received
2018-09-11 22:28:28.063 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000450: [SubscriptionLifetime]: (RecordFlow#onEventStream)  @ 2018-09-11T22:28:27.821688Z id: shardId-000000000450-1 -- Subscribe
2018-09-11 22:28:28.063 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000450: [SubscriptionLifetime]: (RecordFlow#onEventStream) @ 2018-09-11T22:28:27.821688Z id: shardId-000000000450-1 -- creating record subscription
2018-09-11 22:28:28.064 DEBUG 7 --- [tyEventLoop-0-2] anOutRecordsPublisher$RecordSubscription : shardId-000000000450: [SubscriptionLifetime]: (RecordSubscription#onSubscribe) @ 2018-09-11T22:28:27.821688Z id: shardId-000000000450-1 -- Outstanding: 8 items so requesting an item
2018-09-11 22:28:28.064 DEBUG 7 --- [tyEventLoop-0-2] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000450: [SubscriptionLifetime]: (RecordFlow#responseReceived) @ 2018-09-11T22:28:27.821688Z id: shardId-000000000450-1 -- Response received
2018-09-11 22:28:28.179  INFO 7 --- [dProcessor-0004] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.000084S ms in cache
2018-09-11 22:28:28.269  INFO 7 --- [dProcessor-0002] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.000726S ms in cache
2018-09-11 22:28:28.434  INFO 7 --- [dProcessor-0014] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.000706S ms in cache
2018-09-11 22:28:28.449  INFO 7 --- [dProcessor-0001] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.000076S ms in cache
2018-09-11 22:28:28.457  INFO 7 --- [dProcessor-0000] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.003505S ms in cache
2018-09-11 22:28:28.593  INFO 7 --- [dProcessor-0013] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.000119S ms in cache
2018-09-11 22:28:28.634  INFO 7 --- [dProcessor-0010] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.004575S ms in cache
2018-09-11 22:28:28.638  INFO 7 --- [dProcessor-0011] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.003528S ms in cache
2018-09-11 22:28:28.671  INFO 7 --- [dProcessor-0003] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.003506S ms in cache
2018-09-11 22:28:28.671  INFO 7 --- [dProcessor-0006] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.00009S ms in cache
2018-09-11 22:28:28.675  INFO 7 --- [dProcessor-0009] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.00013S ms in cache
2018-09-11 22:28:28.678  INFO 7 --- [dProcessor-0007] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.000075S ms in cache
2018-09-11 22:28:28.680  INFO 7 --- [dProcessor-0012] c.c.r.rdsloader.RdsProcessor             : Processing record that was PT0S ( 0 ms) behind latest and spent PT0.001022S ms in cache
2018-09-11 22:28:28.809  INFO 7 --- [dProcessor-0011] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000377 at sequenceNumber 49588160182750452980545263886462202483121859579256772498, subSequenceNumber 1
2018-09-11 22:28:28.809  INFO 7 --- [dProcessor-0010] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000384 at sequenceNumber 49588160198249470893524046979762061216566254160974977026, subSequenceNumber 1
2018-09-11 22:28:28.811  INFO 7 --- [dProcessor-0001] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000436 at sequenceNumber 49588160294677893131970461433826345716085141884089932610, subSequenceNumber 1
2018-09-11 22:28:28.811  INFO 7 --- [dProcessor-0004] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000427 at sequenceNumber 49588160275566154496829720989890449304599334072021424818, subSequenceNumber 1
	... 1 common frames omitted
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1221) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1177) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1407) ~[netty-handler-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) ~[netty-codec-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at com.typesafe.netty.http.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:148) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:165) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:189) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at com.typesafe.netty.http.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:328) ~[netty-reactive-streams-http-2.0.0.jar!/:na]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:451) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:505) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:670) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at com.typesafe.netty.HandlerPublisher.handlerRemoved(HandlerPublisher.java:395) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.complete(HandlerPublisher.java:408) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:255) ~[netty-nio-client-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:122) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:171) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.handleResponse(MakeAsyncHttpRequestStage.java:185) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.client.handler.BaseAsyncClientHandler$InterceptorCallingHttpResponseHandler.complete(BaseAsyncClientHandler.java:225) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.complete(SyncResponseHandlerAdapter.java:92) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:30) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:46) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:64) ~[kinesis-2.0.1.jar!/:na]
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:104) ~[kinesis-2.0.1.jar!/:na]
Caused by: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
	at java.base/java.lang.Thread.run(Thread.java:844) [na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$Completable.lambda$complete$0(MakeAsyncHttpRequestStage.java:200) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:121) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java:155) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[na:na]
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: c194b77c-af7f-6d69-9586-f4b7597e980e)
2018-09-11 22:28:28.815  WARN 7 --- [      Thread-22] s.a.kinesis.lifecycle.ShardConsumer      : shardId-000000000488: Failure occurred in retrieval.  Restarting data requests
2018-09-11 22:28:28.816 DEBUG 7 --- [dProcessor-0008] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000488: [SubscriptionLifetime]: (FanOutRecordsPublisher#subscribeToShard) @ 2018-09-11T22:28:28.816030Z id: shardId-000000000488-2 -- Starting subscribe to shard
2018-09-11 22:28:28.816  INFO 7 --- [dProcessor-0014] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000507 at sequenceNumber 49588160429708905309073384494655891236109954956205760434, subSequenceNumber 0
2018-09-11 22:28:28.817  INFO 7 --- [dProcessor-0009] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000374 at sequenceNumber 49588160179829055359537752244981235300533442529063868258, subSequenceNumber 1
2018-09-11 22:28:28.826  INFO 7 --- [dProcessor-0012] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000498 at sequenceNumber 49588160412024414366638603992952345118213116918692323106, subSequenceNumber 1
2018-09-11 22:28:28.837  INFO 7 --- [dProcessor-0002] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000497 at sequenceNumber 49588160410574865928734109925154736845483584532875583250, subSequenceNumber 1
2018-09-11 22:28:28.837  INFO 7 --- [dProcessor-0007] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000363 at sequenceNumber 49588160153179664847293657824269075993370095812807235250, subSequenceNumber 1
2018-09-11 22:28:28.848  INFO 7 --- [dProcessor-0013] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000490 at sequenceNumber 49588160394719036092578829266126988628492334706561392290, subSequenceNumber 1
2018-09-11 22:28:28.848  INFO 7 --- [dProcessor-0003] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000401 at sequenceNumber 49588160223248606261076875455806734688981848970781268242, subSequenceNumber 1
2018-09-11 22:28:28.851  INFO 7 --- [dProcessor-0006] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000450 at sequenceNumber 49588160317469254724868758295577415592252907054528732194, subSequenceNumber 0
2018-09-11 22:28:28.868  INFO 7 --- [dProcessor-0000] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000492 at sequenceNumber 49588160406181619124623577364989381944926527427488784066, subSequenceNumber 1
2018-09-11 22:28:28.885 DEBUG 7 --- [nc-response-1-0] .k.r.f.FanOutRecordsPublisher$RecordFlow : shardId-000000000488: [SubscriptionLifetime]: (RecordFlow#exceptionOccurred) @ 2018-09-11T22:28:28.816030Z id: shardId-000000000488-2 -- java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: d14e06d2-9d4e-623e-855c-451efe8bd3b9)
	... 1 common frames omitted
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463) ~[netty-transport-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) ~[netty-common-4.1.23.Final.jar!/:4.1.23.Final]
	at com.typesafe.netty.HandlerPublisher$ChannelSubscription$1.run(HandlerPublisher.java:452) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.access$200(HandlerPublisher.java:41) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.receivedDemand(HandlerPublisher.java:258) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.flushBuffer(HandlerPublisher.java:304) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at com.typesafe.netty.HandlerPublisher.publishMessage(HandlerPublisher.java:362) ~[netty-reactive-streams-2.0.0.jar!/:na]
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:255) ~[netty-nio-client-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:122) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.complete(MakeAsyncHttpRequestStage.java:171) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$ResponseHandler.handleResponse(MakeAsyncHttpRequestStage.java:185) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.client.handler.BaseAsyncClientHandler$InterceptorCallingHttpResponseHandler.complete(BaseAsyncClientHandler.java:225) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.complete(SyncResponseHandlerAdapter.java:92) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:30) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.response.SdkErrorResponseHandler.handle(SdkErrorResponseHandler.java:46) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:64) ~[kinesis-2.0.1.jar!/:na]
	at software.amazon.awssdk.services.kinesis.model.InvalidArgumentException$BuilderImpl.build(InvalidArgumentException.java:104) ~[kinesis-2.0.1.jar!/:na]
Caused by: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: d14e06d2-9d4e-623e-855c-451efe8bd3b9)
	at java.base/java.lang.Thread.run(Thread.java:844) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) ~[na:na]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$Completable.lambda$complete$0(MakeAsyncHttpRequestStage.java:200) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930) ~[na:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java:121) ~[sdk-core-2.0.1.jar!/:na]
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java:155) ~[sdk-core-2.0.1.jar!/:na]
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) ~[na:na]
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[na:na]
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: d14e06d2-9d4e-623e-855c-451efe8bd3b9)
2018-09-11 22:28:28.886  WARN 7 --- [nc-response-1-0] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000488: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2018-09-11T22:28:28.816030Z id: shardId-000000000488-2 -- CompletionException/software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000438, while it was used in a call to a shard with shardId-000000000488 (Service: Kinesis, Status Code: 400, Request ID: d14e06d2-9d4e-623e-855c-451efe8bd3b9)
2018-09-11 22:28:28.886 DEBUG 7 --- [nc-response-1-0] s.a.k.r.fanout.FanOutRecordsPublisher    : shardId-000000000488: availableQueueSpace zeroing from 8

@pfifer
Contributor

pfifer commented Sep 12, 2018

The value stored in the lease table is a sequence number for shard 438, and not for 488. We would need to find the last successful checkpoint that stored the sequence number to figure out who stored it.

Do you run the checkpoint on the processing thread, or do you execute the checkpoint on another thread?

I'm looking to see if we can add some additional validation to checkpointing.
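
As an illustration of what such validation might look like on the user side (this is a hypothetical sketch, not an existing KCL API): the processor could track the sequence numbers it actually received for its shard and refuse to checkpoint one it never saw, which would catch a sequence number that encodes a different shard before it reaches the lease table.

```java
import java.util.HashSet;
import java.util.Set;

public class CheckpointGuardDemo {
    // Hypothetical guard: remember every sequence number delivered to this
    // processor's shard, and only allow checkpointing a value that was seen here.
    static class CheckpointGuard {
        private final String shardId;
        private final Set<String> seen = new HashSet<>();

        CheckpointGuard(String shardId) {
            this.shardId = shardId;
        }

        void recordReceived(String sequenceNumber) {
            seen.add(sequenceNumber);
        }

        boolean safeToCheckpoint(String sequenceNumber) {
            return seen.contains(sequenceNumber);
        }
    }

    public static void main(String[] args) {
        CheckpointGuard guard = new CheckpointGuard("shardId-000000000488");
        guard.recordReceived("49588160394719036092578829266126988628492334706561392290");
        // A sequence number this shard actually delivered is safe to checkpoint.
        System.out.println(guard.safeToCheckpoint("49588160394719036092578829266126988628492334706561392290"));
        // One never delivered here (e.g. one belonging to shard 438) is not.
        System.out.println(guard.safeToCheckpoint("49588160182750452980545263886462202483121859579256772498"));
    }
}
```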

@pfifer
Contributor

pfifer commented Sep 12, 2018

If you can, check whether that is the sequence number in the DynamoDB table. If it isn't, it makes me think the lease state in the KCL is getting mixed up.

@ryangardner
Author

I'm not explicitly calling it on another thread. Inside my ShardRecordProcessor's

public void processRecords(ProcessRecordsInput processRecordsInput) { method, I call a method after each userRecord is processed. It is based largely on the way checkpointing is done in this sample: https://github.com/aws/aws-sdk-java/blob/master/src/samples/AmazonKinesis/AmazonKinesisApplicationSampleRecordProcessor.java

it calls:

   checkpointIfNecessary(processRecordsInput.checkpointer(), record);

passing in the checkpointer from the supplied ProcessRecordsInput

into this method:

    private void checkpointIfNecessary(RecordProcessorCheckpointer checkpointer, KinesisClientRecord record) {
        if (System.currentTimeMillis() > nextCheckpointTimeInMillis) {
            checkpoint(checkpointer, record);
            nextCheckpointTimeInMillis = System.currentTimeMillis() + CHECKPOINT_INTERVAL_MILLIS;
        }
    }
    /**
     * Checkpoint with retries.
     *
     * @param checkpointer the checkpointer supplied in the ProcessRecordsInput
     * @param record       the record whose sequence number to checkpoint
     */
    private void checkpoint(RecordProcessorCheckpointer checkpointer, KinesisClientRecord record) {
        log.info("Checkpointing shard {} at sequenceNumber {}, subSequenceNumber {}", kinesisShardId, record.sequenceNumber(), record.subSequenceNumber());
        for (int i = 0; i < NUM_RETRIES; i++) {
            try {
                checkpointer.checkpoint(record.sequenceNumber(), record.subSequenceNumber());
                break;
            } catch (ShutdownException se) {
                // Ignore checkpoint if the processor instance has been shutdown (fail over).
                log.info("Caught shutdown exception, skipping checkpoint.", se);
                break;
            } catch (ThrottlingException e) {
                // Backoff and re-attempt checkpoint upon transient failures
                if (i >= (NUM_RETRIES - 1)) {
                    log.error("Checkpoint failed after " + (i + 1) + " attempts.", e);
                    break;
                } else {
                    log.info("Transient issue when checkpointing - attempt " + (i + 1) + " of "
                            + NUM_RETRIES, e);
                }
            } catch (InvalidStateException e) {
                // This usually indicates an issue with the DynamoDB table (check for table, provisioned IOPS).
                log.error("Cannot save checkpoint to the DynamoDB table used by the Amazon Kinesis Client Library.", e);
                break;
            }
            try {
                Thread.sleep(BACKOFF_TIME_IN_MILLIS);
            } catch (InterruptedException e) {
                log.debug("Interrupted sleep", e);
            }
        }
    }

The biggest difference between my checkpointing and the sample is that I'm explicitly passing a sequence number to the checkpointer - but the checkpointer I'm using came from ProcessRecordsInput, and the sequence / subsequence I'm checkpointing with also came from a record out of the list of KinesisClientRecords in processRecordsInput.records()

The code also does checkpoints at the shardEnd / shutdownRequested points:

    @Override
    public void shardEnded(ShardEndedInput shardEndedInput) {
        try {
            log.info("Shard ended");
            shardEndedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            log.error("Error checkpointing after shard ended input", e);
        }
    }

    @Override
    public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {
        try {
            shutdownRequestedInput.checkpointer().checkpoint();
        } catch (ShutdownException | InvalidStateException e) {
            log.error("Error checkpointing after shutdown was requested", e);
        }
    }

@ryangardner
Author

In dynamo for that shardId-000000000488 I have:

{
  "checkpoint": {
    "S": "49588160296506554238249968084872546083315933702822828898"
  },
  "checkpointSubSequenceNumber": {
    "N": "0"
  },
  "leaseCounter": {
    "N": "1637"
  },
  "leaseKey": {
    "S": "shardId-000000000488"
  },
  "leaseOwner": {
    "S": "fa7c2ba8-8eca-4a6b-b7fc-d132afa3e4cc"
  },
  "ownerSwitchesSinceCheckpoint": {
    "N": "5"
  },
  "parentShardId": {
    "SS": [
      "shardId-000000000330"
    ]
  }
}

which matches what was in that line in the logs earlier today:

Initializing ShardId ShardInfo(shardId=shardId-000000000488, concurrencyToken=f1dc693a-be84-4252-848e-0d083d3646a9, parentShardIds=[shardId-000000000330], checkpoint={SequenceNumber: 49588160296506554238249968084872546083315933702822828898,SubsequenceNumber: 0})

(nothing has checkpointed it to a newer value even though the app is still running / processing things)

@pfifer
Contributor

pfifer commented Sep 12, 2018

That sequence number is for shard 438, which means it did checkpoint with the wrong sequence number. I'm not yet sure how that happened. I'm working on a change that will validate sequence numbers before allowing a checkpoint to occur.

If you happened to log the sequence number when you checkpoint, that would help track this down. Concurrently, I'm looking to add a validator that should prevent/detect checkpointing with a sequence number that isn't part of the shard.

@ryangardner
Author

I do have logs when it was checkpointing - first showing it checkpointing 488 with that sequence number and then showing it checkpointing 438 with the same sequence number:

2018-09-11 13:35:30.315  INFO 7 --- [dProcessor-0065] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000488 at sequenceNumber 49588160296506554238249968084872546083315933702822828898, subSequenceNumber 0

then later:

2018-09-11 13:36:02.675  WARN 7 --- [dProcessor-0065] s.a.kinesis.lifecycle.ShardConsumer      : shardId-000000000488: onError().  Cancelling subscription, and marking self as failed.
software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: 120985645295:dms-l1-ingest-nonprod:shardId-000000000438:kinesis-rds-loader (Service: kinesis, Status Code: 500, Request ID: d1392888-ea8c-6ab6-8528-ee577bd34331)

followed by:

2018-09-11 13:36:02.693  INFO 7 --- [dProcessor-0065] c.c.r.rdsloader.RdsProcessor             : Checkpointing shard shardId-000000000438 at sequenceNumber 49588160296506554238249968084872546083315933702822828898, subSequenceNumber 0

It seems that's the same thread (shardProcessor-0065) but with different shardIds. At first this seemed odd to me, but looking at the logs, that thread appears to be shared by several different ShardProcessors, and it checkpoints other shards like:

2018-09-11 13:35:44.664 INFO 7 --- [dProcessor-0065] c.c.r.rdsloader.RdsProcessor : Checkpointing shard shardId-000000000352 at sequenceNumber 49588160132952888952226382505947315896701337430961165826, subSequenceNumber 0

@athielen2

athielen2 commented Sep 12, 2018

FWIW, we recreated our stack last night and restarted the KCL 2.0 application for the stream. Everything was working as expected until we performed another deployment and the application was restarted again. Shortly after restart, we saw warnings and errors similar to what @ryangardner is seeing:

2018-09-12T19:59:10,331 [WARN] software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher:128 - shardId-000000000006: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2018-09-12T19:58:34.998Z id: shardId-000000000006-3 -- software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: 308120811573:measnap-MeasurementSnapshotCaptorStreamV2:shardId-000000000007:measnap-MeasurementSnapshotCaptorV2 (Service: kinesis, Status Code: 500, Request ID: f67b65de-03a9-760e-a268-17f5af2738f0)
software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: 308120811573:measnap-MeasurementSnapshotCaptorStreamV2:shardId-000000000007:measnap-MeasurementSnapshotCaptorV2 (Service: kinesis, Status Code: 500, Request ID: f67b65de-03a9-760e-a268-17f5af2738f0) 
2018-09-12T19:59:10,334 [WARN] software.amazon.kinesis.lifecycle.ShardConsumer$InternalSubscriber:165 - shardId-000000000006: onError(). Cancelling subscription, and marking self as failed.
software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: 308120811573:measnap-MeasurementSnapshotCaptorStreamV2:shardId-000000000007:measnap-MeasurementSnapshotCaptorV2 (Service: kinesis, Status Code: 500, Request ID: f67b65de-03a9-760e-a268-17f5af2738f0)
2018-09-12T19:59:11,213 [WARN] software.amazon.kinesis.lifecycle.ShardConsumer:244 - shardId-000000000006: Failure occurred in retrieval. Restarting data requests
software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: 308120811573:measnap-MeasurementSnapshotCaptorStreamV2:shardId-000000000007:measnap-MeasurementSnapshotCaptorV2 (Service: kinesis, Status Code: 500, Request ID: f67b65de-03a9-760e-a268-17f5af2738f0)
2018-09-12T19:59:12,214 [WARN] software.amazon.kinesis.lifecycle.ShardConsumer:244 - shardId-000000000006: Failure occurred in retrieval. Restarting data requests
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.InvalidArgumentException: Invalid StartingSequenceNumber. It encodes shardId-000000000007, while it was used in a call to a shard with shardId-000000000006 (Service: Kinesis, Status Code: 400, Request ID: f145195f-a870-67e7-a556-6b10d7b51775)

@pfifer
Contributor

pfifer commented Oct 2, 2018

Yes, I'm going to start looking at it right now.

Edit: Adjusted grammar, since apparently I'm still tired.

@pfifer
Contributor

pfifer commented Oct 4, 2018

The SDK team has released 2.0.6 of the SDK, which I've been testing since last night. We will be doing some additional validation today, and a release of the KCL should be coming soon.

@pfifer pfifer added this to the v2.0.3 milestone Oct 5, 2018
@pfifer
Contributor

pfifer commented Oct 8, 2018

We've just released version 2.0.3 of the Amazon Kinesis Client, which should solve this issue. That release includes AWS SDK 2.0.6, along with other related fixes.

Can you try it out, and let us know if you're still seeing issues?

@athielen2

athielen2 commented Oct 9, 2018

@pfifer I will push out a release tonight or tomorrow. Is the following configuration still needed/recommended for the Kinesis Async Client?

.httpClient(NettyNioAsyncHttpClient.builder().maxHttp2Streams(1).maxConcurrency(Integer.MAX_VALUE).build())

@pfifer
Contributor

pfifer commented Oct 9, 2018

@athielen2 Using httpClient(...) with a built client is the original source of the bug, though this should be fixed in AWS SDK 2.0.6. It was causing the SDK to use HTTP 1.1 to call SubscribeToShard, which isn't really supported. You should be able to use KinesisClientUtil.createKinesisAsyncClient(...) now, which will handle setting maxConcurrency.

If you need to configure the httpClient, I would recommend not calling .build() on the NettyNioAsyncHttpClient.builder(), and instead passing the builder to the httpClient(...) method. The httpClient(...) method has two overloads: one that takes a built client, and one that takes a builder. If you pass in a built client, that is what the SDK will use, and it won't adjust the client to any specific settings required by the service client. (This may no longer be the case as of version 2.0.6 of the AWS SDK.)
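As a minimal configuration sketch of the recommended setup (assuming KCL 2.0.3+ and AWS SDK 2.0.6+ on the classpath; the region is an arbitrary example):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.kinesis.common.KinesisClientUtil;

// Pass the *builder* (not a built client) so KinesisClientUtil can apply
// the settings SubscribeToShard needs, such as HTTP/2 and maxConcurrency.
KinesisAsyncClient kinesisClient = KinesisClientUtil.createKinesisAsyncClient(
        KinesisAsyncClient.builder().region(Region.US_EAST_1));
```

The same principle applies if you must customize the transport: hand the SDK a NettyNioAsyncHttpClient.builder() rather than a pre-built client, so the SDK can still adjust it for the service.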

@athielen2

@pfifer been running around 36 hours and have not seen any errors dealing with invalid sequence numbers. Thank you for your work on the issue

@vanhoale

I'm still seeing this message with version 2.0.3:

{"level":"ERROR","logger_name":"software.amazon.kinesis.lifecycle.ShardConsumer","message":"shardId-000000000000: Last request was dispatched at 2018-10-11T16:24:46.671Z, but no response as of 2018-10-11T16:25:21.902Z (PT35.231S). Cancelling subscription, and restarting.","logger_timestamp":"2018-10-11T16:25:21.902Z"}

@pfifer
Contributor

pfifer commented Oct 11, 2018

@vanhoale That indicates that no data arrived for 35 seconds, so the next question is whether something went wrong ahead of it.

Couple of questions:

  • Are there any messages from the FanOutRecordsPublisher for that shard ahead of this message?
  • Can you also post the versions of the KCL and AWS SDK you're using?

@vanhoale

@pfifer there is no message from FanOutRecordsPublisher before the message I posted, and I'm running with KCL 2.0.3 and AWS SDK 2.0.6

@vanhoale

I also got those messages:

{"level":"ERROR","logger_name":"com.amazonaws.services.kinesis.producer.LogInputStreamReader","message":"[2018-10-11 12:57:59.222613] [0x00001336][0x000070000c266000] [error] AWS Log: ERRORCurl returned error code 28","logger_timestamp":"2018-10-11T17:57:59.222Z"}
{"level":"ERROR","logger_name":"com.amazonaws.services.kinesis.producer.LogInputStreamReader","message":"[2018-10-11 12:57:59.225583] [0x00001336][0x000070000c2e9000] [error] AWS Log: ERRORCurl returned error code 28","logger_timestamp":"2018-10-11T17:57:59.225Z"}
{"level":"ERROR","logger_name":"com.amazonaws.services.kinesis.producer.LogInputStreamReader","message":"[2018-10-11 12:57:59.225602] [0x00001336][0x000070000c1e3000] [error] AWS Log: ERRORCurl returned error code 28","logger_timestamp":"2018-10-11T17:57:59.225Z"}

@vanhoale

never mind, that is kinesis producer

@cdfleischmann

Hi folks, interestingly I stumbled onto this post myself, as I am also experiencing a similar issue to the one described above. Pulling down 2.0.5 of the KCL to test tonight. The issue occurs several hours into running.

Unlike what was described above, I use the enhanced fan-out of the KCL with only 1 shard...

Throughout its run, I get these errors sporadically:

18/11/15 23:26:38 WARN channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Connection reset by peer

But, about 8 hours in to running I get a series of errors like so;

18/11/15 23:26:38 WARN channel.DefaultChannelPipeline: An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.io.IOException: Connection reset by peer
18/11/16 08:51:12 WARN fanout.FanOutRecordsPublisher: shardId-000000000018: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2018-11-15T21:46:40.848Z id: shardId-000000000018-117 -- software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal
Service Error (Service: kinesis, Status Code: 500, Request ID: ec79819c-7d93-afea-b81e-6a1c96170973)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: ec79819c-7d93-afea-b81e-6a1c96170973)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:100)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:60)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: ec79819c-7d93-afea-b81e-6a1c96170973)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:100)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:60)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: ec79819c-7d93-afea-b81e-6a1c96170973)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:100)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:60)
18/11/16 08:51:13 WARN fanout.FanOutRecordsPublisher: shardId-000000000018: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2018-11-15T21:51:12.753Z id: shardId-000000000018-118 -- software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal
Service Error (Service: kinesis, Status Code: 500, Request ID: f1dd016a-e432-c116-a5ba-edfa0fb6678f)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: f1dd016a-e432-c116-a5ba-edfa0fb6678f)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:100)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:60)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: f1dd016a-e432-c116-a5ba-edfa0fb6678f)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:100)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:60)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: f1dd016a-e432-c116-a5ba-edfa0fb6678f)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:100)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:60)
18/11/16 08:51:13 WARN fanout.FanOutRecordsPublisher: shardId-000000000018: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2018-11-15T21:51:13.754Z id: shardId-000000000018-119 -- CompletionException/software.amazon.awssdk.services.kinesis.model.ResourceInUseE
xception: Another active subscription exists for this consumer: ACCOUNT_ID:SHARD_NAME:shardId-000000000018:APPLICATION_NAME-01b34944-323d-4d1c-b61a-a30f0a3a79eb1 (Service: Kinesis, Status Code: 400, Request ID: e6bec218-1cab-1ab3-b2d9-2e89f72fbc2a)
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: ACCOUNT_ID:SHARD_NAME:shardId-000000000018:APPLICATION_NAME-01b34944-323d-4d1c-b61a-a30f0a3a79eb1 (S
ervice: Kinesis, Status Code: 400, Request ID: e6bec218-1cab-1ab3-b2d9-2e89f72fbc2a)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
Caused by: software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: ACCOUNT_ID:SHARD_NAME:shardId-000000000018:APPLICATION_NAME-01b34944-323d-4d1c-b61a-a30f0a3a79eb1 (Service: Kinesis, Status Code: 4
00, Request ID: e6bec218-1cab-1ab3-b2d9-2e89f72fbc2a)
at software.amazon.awssdk.services.kinesis.model.ResourceInUseException$BuilderImpl.build(ResourceInUseException.java:104)
at software.amazon.awssdk.services.kinesis.model.ResourceInUseException$BuilderImpl.build(ResourceInUseException.java:64)
java.util.concurrent.CompletionException: software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: ACCOUNT_ID:SHARD_NAME:shardId-000000000018:APPLICATION_NAME-01b34944-323d-4d1c-b61a-a30f0a3a79eb1 (S
ervice: Kinesis, Status Code: 400, Request ID: e6bec218-1cab-1ab3-b2d9-2e89f72fbc2a)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
Caused by: software.amazon.awssdk.services.kinesis.model.ResourceInUseException: Another active subscription exists for this consumer: ACCOUNT_ID:SHARD_NAME:shardId-000000000018:APPLICATION_NAME-01b34944-323d-4d1c-b61a-a30f0a3a79eb1 (Service: Kinesis, Status Code: 4
00, Request ID: e6bec218-1cab-1ab3-b2d9-2e89f72fbc2a)
at software.amazon.awssdk.services.kinesis.model.ResourceInUseException$BuilderImpl.build(ResourceInUseException.java:104)
at software.amazon.awssdk.services.kinesis.model.ResourceInUseException$BuilderImpl.build(ResourceInUseException.java:64)

Going to try 2.0.5 tonight as I need to plan for an outage.

My KCL configuration is rather simple, it just follows the KCL 1.0 to 2.0 migration guide in that it has;

new Scheduler(
configsBuilder.checkpointConfig(),
configsBuilder.coordinatorConfig(),
configsBuilder.leaseManagementConfig()
.initialLeaseTableReadCapacity(5)
.initialLeaseTableWriteCapacity(5)
.cleanupLeasesUponShardCompletion(true)
.failoverTimeMillis(60000),
configsBuilder.lifecycleConfig(),
configsBuilder.metricsConfig()
.metricsLevel(MetricsLevel.NONE),
configsBuilder.processorConfig()
.callProcessRecordsEvenForEmptyRecordList(false),
configsBuilder.retrievalConfig());

Any tips or tricks much appreciated.

Chris

@majidfn

majidfn commented Nov 16, 2018

Hi @pfifer , I'm on 2.0.5 and I still get lots of those warnings in my tests.
I have 4 shards in my stream and my application is running in 2 ECS instances benefiting from enhanced fanout.

{
    "timestamp": "2018-11-16T21:15:11.016Z",
    "logger": "software.amazon.kinesis.lifecycle.ShardConsumer",
    "exceptionClass": "software.amazon.kinesis.retrieval.RetryableRetrievalException",
    "stackTrace": "software.amazon.kinesis.retrieval.RetryableRetrievalException: ReadTimeout
	at software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher.errorOccurred(FanOutRecordsPublisher.java: 142)
	at software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher.access$700(FanOutRecordsPublisher.java: 51)
	at software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher$RecordFlow.exceptionOccurred(FanOutRecordsPublisher.java: 516)
	at software.amazon.awssdk.services.kinesis.DefaultKinesisAsyncClient.lambda$subscribeToShard$1(DefaultKinesisAsyncClient.java: 2102)
	at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java: 760)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java: 736)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java: 474)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java: 1977)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java: 155)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.lambda$execute$0(AsyncRetryableStage.java: 121)
	at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java: 822)
	at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java: 797)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java: 474)
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java: 1977)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage$Completable.lambda$completeExceptionally$1(MakeAsyncHttpRequestStage.java: 208)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java: 1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java: 624)
	at java.lang.Thread.run(Thread.java: 748)
Caused by: software.amazon.awssdk.core.exception.SdkClientException
	at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java: 97)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryExecutor.handle(AsyncRetryableStage.java: 143)
	... 9 more
Caused by: io.netty.handler.timeout.ReadTimeoutException",
    "thread": "ShardRecordProcessor-0002",
    "exceptionMessage": "ReadTimeout",
    "message": "ShardConsumer: shardId-000000000003: onError().  Cancelling subscription, and marking self as failed.",
    "logLevel": "WARN"
}

and also

{
    "timestamp": "2018-11-16T21:19:03.243Z",
    "logger": "software.amazon.awssdk.http.nio.netty.internal.RunnableRequest",
    "exceptionClass": "io.netty.channel.ConnectTimeoutException",
    "stackTrace": "io.netty.channel.ConnectTimeoutException: connection timed out: monitoring.us-east-1.amazonaws.com/ip.ip.ip.ip:443
	at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
	at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:464)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
	at java.lang.Thread.run(Thread.java:748)",
    "thread": "aws-java-sdk-NettyEventLoop-2-0",
    "exceptionMessage": "connection timed out: monitoring.us-east-1.amazonaws.com/ip.ip.ip.ip:443",
    "message": "RunnableRequest: Failed to create connection to https://monitoring.us-east-1.amazonaws.com",
    "logLevel": "ERROR"
}

and finally:

{
	"timestamp": "2018-11-16T21:16:08.397Z",
	"logger": "software.amazon.kinesis.retrieval.fanout.FanOutRecordsPublisher",
	"thread": "ShardRecordProcessor-0001",
	"message": "FanOutRecordsPublisher: shardId-000000000003: (FanOutRecordsPublisher/Subscription#request) - Rejected an attempt to request(6), because subscribers don't match.",
	"logLevel": "WARN"
}

@ryangardner
Author

ryangardner commented Nov 17, 2018 via email

@cdfleischmann

Upgrading the KCL to 2.0.5 and the AWS libraries (to preview 13) seems to have removed most of the errors since rebooting...

<!-- https://mvnrepository.com/artifact/software.amazon.awssdk/core -->
		<dependency>
			<groupId>software.amazon.awssdk</groupId>
			<artifactId>core</artifactId>
			<version>2.0.0-preview-13</version>
			<type>pom</type>
		</dependency>

Except for about 10 or so of the following errors:

18/11/17 15:29:28 WARN fanout.FanOutRecordsPublisher: shardId-000000000018: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2018-11-17T04:28:36.019Z id: shardId-000000000018-210 -- software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: c0508580-7580-44fc-9439-1eb463d61773)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: c0508580-7580-44fc-9439-1eb463d61773)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:100)
at software.amazon.awssdk.services.kinesis.model.InternalFailureException$BuilderImpl.build(InternalFailureException.java:60)
at software.amazon.awssdk.awscore.http.response.AwsJsonErrorResponseHandler.handle(AwsJsonErrorResponseHandler.java:86)
at software.amazon.awssdk.awscore.http.response.AwsJsonErrorResponseHandler.handle(AwsJsonErrorResponseHandler.java:37)
at software.amazon.awssdk.awscore.eventstream.EventStreamAsyncResponseTransformer.handleMessage(EventStreamAsyncResponseTransformer.java:292)
at software.amazon.eventstream.MessageDecoder.feed(MessageDecoder.java:74)
at software.amazon.eventstream.MessageDecoder.feed(MessageDecoder.java:34)
at software.amazon.awssdk.awscore.eventstream.EventStreamAsyncResponseTransformer$ByteSubscriber.onNext(EventStreamAsyncResponseTransformer.java:376)
at software.amazon.awssdk.awscore.eventstream.EventStreamAsyncResponseTransformer$ByteSubscriber.onNext(EventStreamAsyncResponseTransformer.java:352)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onNext(ResponseHandler.java:233)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onNext(ResponseHandler.java:193)
at com.typesafe.netty.HandlerPublisher.publishMessage(HandlerPublisher.java:367)
at com.typesafe.netty.HandlerPublisher.channelRead(HandlerPublisher.java:345)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at com.typesafe.netty.http.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:194)
at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:165)
at com.typesafe.netty.http.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:148)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)

@majidfn

majidfn commented Nov 17, 2018

@ryangardner That is correct. I'm using AWS SDK Bundle 2.0.6:

"software.amazon.awssdk" % "bundle" % "2.0.6"

@oripwk

oripwk commented Jan 28, 2019

@cdfleischmann, I'm experiencing the same issue here: aws/aws-sdk-java-v2#1039

@ShibaBandit

ShibaBandit commented Feb 6, 2019

I'm getting cancelled subscriptions from 500 errors, followed seconds later by rate limit errors; then the Scheduler thread dies and I stop consuming.

Setup:
Ubuntu Cloud Server AMI v18
Open JDK 1.8
t3.medium instance

Dependencies:

dependencies {
        implementation 'org.slf4j:slf4j-api:1.7.25'
        implementation 'software.amazon.awssdk:s3:2.4.0'
        implementation 'software.amazon.kinesis:amazon-kinesis-client:2.1.0'
        implementation 'joda-time:joda-time:2.10.1'
        implementation 'io.projectreactor:reactor-core'
        implementation 'ch.qos.logback:logback-classic:1.2.3'

    dependencyManagement {
        imports {
            mavenBom 'software.amazon.awssdk:bom:2.4.0'
            mavenBom 'io.projectreactor:reactor-bom:Californium-SR4'
        }
    }
}

Logged Errors/Warnings:

21:18:44.260 [aws-java-sdk-NettyEventLoop-1-2] WARN s.a.k.r.f.FanOutRecordsPublisher - shardId-000000000000: [SubscriptionLifetime] - (FanOutRecordsPublisher#errorOccurred) @ 2019-02-05T21:15:46.446Z id: shardId-000000000000-12 -- software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: e2b1fe5b-c084-8615-b662-0119421a89d8)
software.amazon.awssdk.services.kinesis.model.InternalFailureException: Internal Service Error (Service: kinesis, Status Code: 500, Request ID: e2b1fe5b-c084-8615-b662-0119421a89d8)
21:18:45.834 [ShardRecordProcessor-0002] WARN s.a.k.lifecycle.ShardConsumer - shardId-000000000000: onError(). Cancelling subscription, and marking self as failed.
21:18:46.781 [ShardRecordProcessor-0001] WARN s.a.k.lifecycle.ShardConsumer - shardId-000000000000: onError(). Cancelling subscription, and marking self as failed.
Caused by: software.amazon.awssdk.services.kinesis.model.LimitExceededException: Rate exceeded for consumer arn:<ARN REDACTED> and shard shardId-000000000000 (Service: Kinesis, Status Code: 400, Request ID: d8c8448d-855f-b67b-8c1b-bb7b07c1b9b6)
at software.amazon.awssdk.services.kinesis.model.LimitExceededException$BuilderImpl.build(LimitExceededException.java:118)
at software.amazon.awssdk.services.kinesis.model.LimitExceededException$BuilderImpl.build(LimitExceededException.java:78)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:86)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:62)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:41)
at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.lambda$prepare$0(SyncResponseHandlerAdapter.java:85)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter$BaosSubscriber.onComplete(SyncResponseHandlerAdapter.java:127)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:164)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$700(ResponseHandler.java:64)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:274)
.... truncated ...
Caused by: software.amazon.awssdk.services.kinesis.model.LimitExceededException: Rate exceeded for consumer arn:<ARN REDACTED> and shard shardId-000000000000 (Service: Kinesis, Status Code: 400, Request ID: d8c8448d-855f-b67b-8c1b-bb7b07c1b9b6)
at software.amazon.awssdk.services.kinesis.model.LimitExceededException$BuilderImpl.build(LimitExceededException.java:118)
at software.amazon.awssdk.services.kinesis.model.LimitExceededException$BuilderImpl.build(LimitExceededException.java:78)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:86)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:62)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:41)
at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter.lambda$prepare$0(SyncResponseHandlerAdapter.java:85)
at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
at software.amazon.awssdk.core.internal.http.async.SyncResponseHandlerAdapter$BaosSubscriber.onComplete(SyncResponseHandlerAdapter.java:127)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:164)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$700(ResponseHandler.java:64)
at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:274)
... truncated ...

What is the recommended way to reliably handle issues like this in production code? Currently I have to restart my service when this happens, and this is still in a testing environment. I never had issues like this before trying enhanced fan-out with the new AWS SDK 2.0.

@sahilpalvia
Contributor

The original issue has been resolved in version 2.0.3 of the KCL, so I am closing this issue. Please feel free to open new issues for any bugs seen in later versions.

@ashwing
Contributor

ashwing commented Aug 22, 2019

Release 2.2.2 has a fix for the Invalid StartingSequenceNumber error. Continue reading for other fixes in this release.

What to expect from KCL release 2.2.2?

Case 1: If you are using a KCL 2.x version (prior to KCL 2.2.2) and you replaced or 
modified the thread/queue configuration of the SchedulerThreadPoolExecutor, and faced issues 
like data loss, shards getting stuck, or intermittent pauses in data consumption, please 
continue reading; otherwise go to the next case. These issues are potentially due to undelivered 
executor service events.
 	a. 	This release ensures data loss does not happen, by preventing subsequent events 
 		from being delivered after an event delivery failure and later restarting the subscription. 
 		This may, however, lead to intermittent pauses in data consumption, as KCL checks for 
 		exceptionally completed subscriptions every 35 seconds. Refer to section d for mitigation. 
 		This scenario is identified by the following log messages:
 			- ERROR s.a.k.r.f.FanOutRecordsPublisher [NONE] - shardId-000000000035: Received unexpected ack for the active subscription shardId-000000000077-005. Throwing.
 			- ERROR s.a.k.l.ShardConsumerSubscriber [NONE] - shardId-000000000035: Last request was dispatched at 2019-08-21T01:16:22.340Z, but no response as of 2019-08-21T01:16:57.902Z (PT35.562S).  Cancelling subscription, and restarting.
 	b. 	This release prevents the shard-stuck situations that happened due to undelivered 
 		control plane messages. [NOTE: Control plane messages are responsible for fetching 
 		further events from the KDS service. This includes the subscribeToShard() API call and 
 		the reactive layer's request(N) calls.]
 	c. 	This release has improved logging that captures undelivered control plane 
 		messages for troubleshooting. Any undelivered control plane message might still lead 
 		to a temporary pause in data consumption. Refer to section d for mitigation. This is 
 		identified by the following log message:
 			- ERROR s.a.k.l.ShardConsumerSubscriber [NONE] - shardId-000000000035: Last request was dispatched at 2019-08-21T01:16:22.340Z, but no response as of 2019-08-21T01:16:57.902Z (PT35.562S).  Cancelling subscription, and restarting.
 	d. 	Mitigation for undelivered messages: The undelivered messages are primarily due to 
 		reduced SchedulerThreadPoolExecutor capacity. Customers are expected to assess 
 		the state of the SchedulerThreadPoolExecutor using the following diagnostic log messages 
 		and take appropriate actions, like reducing the time spent in RecordProcessor.processRecords, 
 		scaling out the application, or increasing the number of threads in the executor.
 			i. 	ExecutorStateEvent INFO log emitted every 30 seconds to check if activeThreads and 
 				queueSize are consistently high.
 					- INFO s.a.k.c.DiagnosticEventLogger - Current thread pool executor state: ExecutorStateEvent(executorName=SchedulerThreadPoolExecutor, currentQueueSize=0, activeThreads=100, coreThreads=0, leasesOwned=40, largestPoolSize=2, maximumPoolSize=2147483647)
 			ii.	RejectedTaskEvent ERROR log emitted when SchedulerThreadPoolExecutor fails to 
 				execute any event
 					- ERROR s.a.k.c.DiagnosticEventLogger [NONE] - Review your thread configuration to prevent task rejections. Task rejections will slow down your application and some shards may stop processing. Current thread pool executor state: ExecutorStateEvent(executorName=SchedulerThreadPoolExecutor, currentQueueSize=0, activeThreads=0, coreThreads=0, leasesOwned=0, largestPoolSize=0, maximumPoolSize=2147483647) - io.reactivex.exceptions.UndeliverableException: java.util.concurrent.RejectedExecutionException: Test exception.
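The executor state that section d asks you to assess can also be probed directly. Below is a minimal, stdlib-only sketch (independent of any KCL API) that builds a cached-style pool similar in shape to the numbers shown in the ExecutorStateEvent log above, and reads the same counters (active threads, queue size, largest pool size) straight from java.util.concurrent.ThreadPoolExecutor:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorStateProbe {
    public static void main(String[] args) throws Exception {
        // A cached-style pool: zero core threads, an effectively unbounded
        // maximum, and a SynchronousQueue (so currentQueueSize stays 0, as in
        // the ExecutorStateEvent log lines above).
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());

        // Submit a few short-lived tasks to put the pool under light load.
        for (int i = 0; i < 4; i++) {
            executor.submit(() -> {
                try {
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // The counters reported by ExecutorStateEvent can be read directly:
        System.out.println("activeThreads=" + executor.getActiveCount()
                + " queueSize=" + executor.getQueue().size()
                + " largestPoolSize=" + executor.getLargestPoolSize());

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Consistently high activeThreads relative to leases owned, or a growing queue, points at the capacity problem section d describes.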

Case 2: If you are using a KCL 2.x version (prior to KCL 2.2.2) and you did NOT modify 
or replace the SchedulerThreadPoolExecutor, and faced shard-stuck situations or frequent 
intermittent pauses in data consumption, please continue reading; otherwise go to the next case. 
These issues are potentially due to submitting more tasks to the SchedulerThreadPoolExecutor 
than it can handle, leading to delayed execution of submitted tasks.
	a. 	The common symptom of this situation is the following log message:
			- ERROR s.a.k.l.ShardConsumerSubscriber [NONE] - shardId-000000000077: Last request was dispatched at 2019-08-21T01:16:22.340Z, but no response as of 2019-08-21T01:16:57.902Z (PT35.562S).  Cancelling subscription, and restarting.
	b. 	This release has more diagnostic log messages to identify the issues around the 
		congested SchedulerThreadPoolExecutor
			i. 	FanOutRecordsPublisher WARN log indicating a high time (over 11 seconds) taken 
				by the SchedulerThreadPoolExecutor to deliver an event to the ShardConsumer. 
				Ideally the delivery time should be less than a second. If this is consistently 
				high, refer to section c.
					- WARN  s.a.k.r.f.FanOutRecordsPublisher [NONE] - shardId-000000000077: Record delivery time to shard consumer is high at 14534 millis. Check the ExecutorStateEvent logs to see the state of the executor service. Also check if the RecordProcessor's processing time is high.
			ii. FanOutRecordsPublisher DEBUG log to check the current event delivery time 
				of the SchedulerThreadPoolExecutor. Ideally this should be less than a second. 
				If this is consistently high, refer to section c.
					- DEBUG  s.a.k.r.f.FanOutRecordsPublisher [NONE] - shardId-000000000077: Record delivery time to shard consumer is at 401 millis
			iii. ExecutorStateEvent INFO log emitted every 30 seconds to check whether activeThreads 
				is consistently high. activeThreads is considered high if it is more than 2X 
				the number of worker leases. If this is consistently high, refer to section c.
 					- INFO s.a.k.c.DiagnosticEventLogger - Current thread pool executor state: ExecutorStateEvent(executorName=SchedulerThreadPoolExecutor, currentQueueSize=0, activeThreads=450, coreThreads=0, leasesOwned=64, largestPoolSize=520, maximumPoolSize=2147483647)
	c. 	Customers are expected to assess the state of the SchedulerThreadPoolExecutor using 
		the above diagnostic log messages and take appropriate mitigations, like reducing the 
		time spent in RecordProcessor.processRecords or scaling out the application.
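One common way to reduce the time spent in processRecords is to hand each batch off to a dedicated worker thread so the KCL scheduler thread returns quickly. Below is a minimal, stdlib-only sketch of that pattern; OffloadingProcessor, handleSlowly, and the use of String as the record type are illustrative names and not KCL APIs, and checkpointing semantics (only checkpoint what the worker has actually finished) are deliberately out of scope:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: hands record batches from the KCL thread to a single
// worker thread, so the equivalent of processRecords() returns quickly.
// The bounded queue preserves backpressure: if the worker falls behind,
// the caller blocks instead of buffering unbounded memory.
public class OffloadingProcessor implements AutoCloseable {
    private final BlockingQueue<List<String>> queue = new ArrayBlockingQueue<>(16);
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    public final AtomicInteger processed = new AtomicInteger(); // for observability

    public OffloadingProcessor() {
        worker.submit(() -> {
            try {
                while (true) {
                    List<String> batch = queue.take();
                    handleSlowly(batch); // expensive work happens off the KCL thread
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // close() was called
            }
        });
    }

    // Called from the KCL's SchedulerThreadPoolExecutor thread; returns fast.
    public void processRecords(List<String> batch) throws InterruptedException {
        queue.put(batch); // blocks only when the worker is 16 batches behind
    }

    private void handleSlowly(List<String> batch) {
        // placeholder for per-record processing
        processed.addAndGet(batch.size());
    }

    @Override
    public void close() {
        worker.shutdownNow();
    }
}
```

Note the trade-off: offloading hides processing latency from the KCL, so the delivery-time warnings above go away, but a persistently full queue means your worker is the bottleneck and scaling out is still required.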
		

Case 3: Prior to the 2.2.2 release, all KCL 2.x customers were blind to throttling and any 
other exceptions from CloudWatch metrics publish calls. These exceptions are now made visible, 
and we expect customers to take appropriate actions, like increasing the CloudWatch PutMetricData 
TPS limit to fix throttling, or increasing the concurrent connections of the CloudWatch client 
to fix connection-limit issues.
	a. 	Increasing the concurrency of the client - CloudWatchAsyncClient.builder().region(region).httpClientBuilder(NettyNioAsyncHttpClient.builder().maxConcurrency(Integer.MAX_VALUE)).build();
	b. 	CloudWatch limit increase - https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html
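For readability, the one-liner from item a above can be written out as a small factory. This is client configuration only (it requires the AWS SDK v2 CloudWatch and Netty NIO client artifacts on the classpath); MetricsClientFactory and highConcurrencyClient are illustrative names, and the region is whatever your application runs in. The resulting client is what you would hand to the KCL's ConfigsBuilder:

```java
import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;

public class MetricsClientFactory {
    // Builds a CloudWatch client whose Netty HTTP layer allows effectively
    // unlimited concurrent connections, so metrics publishing is not starved
    // of connections by the rest of the application.
    public static CloudWatchAsyncClient highConcurrencyClient(Region region) {
        return CloudWatchAsyncClient.builder()
                .region(region)
                .httpClientBuilder(NettyNioAsyncHttpClient.builder()
                        .maxConcurrency(Integer.MAX_VALUE))
                .build();
    }
}
```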
