This repository has been archived by the owner on Jan 19, 2023. It is now read-only.

Unable to switch to more than 1 context without crashing #3350

Open
sarman-tftsr opened this issue Oct 7, 2022 · 0 comments

sarman-tftsr commented Oct 7, 2022

What steps did you take and what happened:
I need to administer multiple Kubernetes clusters. If I switch contexts more than one or two times, Octant either crashes (when launched from the CLI) or simply hangs (when launched from the GUI). When this happens, my only recourse is to close the application (if launched from the GUI) and relaunch it to get back to the context I need.

What did you expect to happen:
The application should be able to switch contexts without locking up like this.
Here is the console output when launched via the CLI (partial, due to the character limit):

2022-10-07T08:51:30.909-0500	ERROR	api/content_manager.go:159	generate content	{"client-id": "cdbb5ed8-4646-11ed-a414-f01898e82da4", "err": "generate content: preferred version for StreamTemplate.jetstream.nats.io: unknown version for StreamTemplate.jetstream.nats.io", "content-path": "overview/namespace/apollo-dev"}
github.com/vmware-tanzu/octant/internal/api.(*ContentManager).runUpdate.func1
	github.com/vmware-tanzu/octant/internal/api/content_manager.go:159
github.com/vmware-tanzu/octant/internal/api.(*InterruptiblePoller).Run.func1
	github.com/vmware-tanzu/octant/internal/api/poller.go:86
github.com/vmware-tanzu/octant/internal/api.(*InterruptiblePoller).Run
	github.com/vmware-tanzu/octant/internal/api/poller.go:95
github.com/vmware-tanzu/octant/internal/api.(*ContentManager).Start
	github.com/vmware-tanzu/octant/internal/api/content_manager.go:133
E1007 08:51:31.155137   25950 runtime.go:78] Observed a panic: "close of closed channel" (close of closed channel)
goroutine 73582 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x58a9c40, 0xb692470})
	k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
	k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x75
panic({0x58a9c40, 0xb692470})
	runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
	k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
E1007 08:51:31.155132   25950 runtime.go:78] Observed a panic: "close of closed channel" (close of closed channel)
goroutine 73556 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x58a9c40, 0xb692470})
	k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
	k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x75
panic({0x58a9c40, 0xb692470})
	runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
	k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
panic: close of closed channel [recovered]
	panic: close of closed channel

goroutine 73582 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
	k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x58a9c40, 0xb692470})
	runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
	k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88
panic: close of closed channel [recovered]
	panic: close of closed channel

goroutine 73556 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x404a3d1})
	k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x58a9c40, 0xb692470})
	runtime/panic.go:1038 +0x215
k8s.io/client-go/tools/cache.(*processorListener).pop(0xc002381500)
	k8s.io/[email protected]/tools/cache/shared_informer.go:752 +0x287
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	k8s.io/[email protected]/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	k8s.io/[email protected]/pkg/util/wait/wait.go:71 +0x88

Anything else you would like to add:
Console output is posted above.

Environment:
CLI version of Octant:

octant version
Version:  0.25.1
Git commit:  f16cbb951905f1f8549469dfc116ca16cf679d46
Built:  2022-02-24T21:59:56Z

GUI application of Octant:

Version:  (dev-version)
Git commit:  f16cbb9
Built:  2022-02-24T22:39:43Z

kubectl version:

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-13T14:30:46Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6+k3s1", GitCommit:"418c3fa858b69b12b9cefbcff0526f666a6236b9", GitTreeState:"clean", BuildDate:"2022-04-28T22:16:18Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

OS:

macOS 12.6