Can't safely remove the pool when the tenant shows green. #2251

Open
jiuker opened this issue Aug 1, 2024 · 0 comments

jiuker (Contributor) commented Aug 1, 2024

The health check runs in a separate syncer. Pool expansion or removal can't rely on this potentially outdated status.
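
To illustrate the race, here is a minimal sketch (all names are hypothetical, not the operator's actual code): the health syncer refreshes a cached status on its own interval, while the pool-removal path reads whatever was cached last, which can still say green after the pods have started restarting.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Hypothetical sketch of the race described above; none of these
// names come from the operator's source.
type healthCache struct {
	mu    sync.RWMutex
	value string // "green", "yellow", "red"
}

func (c *healthCache) set(v string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value = v
}

func (c *healthCache) get() string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.value
}

// healthSyncer refreshes the cached health on its own interval,
// independently of any pool expand/remove reconciliation.
func healthSyncer(c *healthCache, probe func() string, every time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			c.set(probe())
		case <-stop:
			return
		}
	}
}

var errNotHealthy = errors.New("tenant not healthy")

// removePool trusts the cached value, which can be a full sync
// interval old: the tenant may already be restarting while the
// cache still says green.
func removePool(c *healthCache) error {
	if c.get() != "green" {
		return errNotHealthy
	}
	// ...proceed to decommission the pool against an unknown real state
	return nil
}

func main() {
	cache := &healthCache{value: "green"} // last value written by the syncer, possibly stale
	fmt.Println(removePool(cache))        // prints <nil>: removal is allowed on stale data
}
```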

Expected Behavior

Removing pool-2 after the tenant reports Initialized & green should either succeed cleanly or be refused by the operator; a pool change should not be allowed on the basis of a green status while the pods are still restarting.

Current Behavior


pool-1 log:

INFO: Unable to use the drive https://myminio-pool-2-0.myminio-hl.minio-operator.svc.cluster.local:9000/export/data: drive not found, will be retried
INFO: Unable to use the drive https://myminio-pool-2-1.myminio-hl.minio-operator.svc.cluster.local:9000/export/data: drive not found, will be retried
INFO: Unable to use the drive https://myminio-pool-2-2.myminio-hl.minio-operator.svc.cluster.local:9000/export/data: drive not found, will be retried
INFO: Unable to use the drive https://myminio-pool-2-3.myminio-hl.minio-operator.svc.cluster.local:9000/export/data: drive not found, will be retried
INFO: Waiting for a minimum of 2 drives to come online (elapsed 13m41s)

pool-3 log:

Error: Unable to resolve DNS for https://myminio-pool-2-3.myminio-hl.minio-operator.svc.cluster.local/export/data: lookup myminio-pool-2-3.myminio-hl.minio-operator.svc.cluster.local on 10.96.0.10:53: no such host (*fmt.wrapError)
       host="myminio-pool-2-3.myminio-hl.minio-operator.svc.cluster.local", elapsedTime="1 second elapsed"
      10: internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()
       9: internal/logger/logonce.go:149:logger.LogOnceIf()
       8: cmd/logging.go:104:cmd.bootLogOnceIf()
       7: cmd/endpoint.go:877:cmd.PoolEndpointList.UpdateIsLocal()
       6: cmd/endpoint.go:1015:cmd.CreatePoolEndpoints()
       5: cmd/endpoint-ellipses.go:506:cmd.createServerEndpoints()

Possible Solution
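
One direction, sketched below (not a tested fix): have the expand/remove path re-probe tenant health at decision time, rather than trusting the last value written by the health-check syncer, and refuse the pool change while the probe fails. The helper name and URL below are hypothetical; GET /minio/health/cluster is MinIO's documented cluster health check, which returns 200 only when the cluster has quorum.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// probeClusterHealth checks MinIO's cluster health endpoint directly
// at decision time, instead of reading the operator's cached status.
// The function name is hypothetical; /minio/health/cluster is MinIO's
// documented cluster health check (200 only with quorum).
func probeClusterHealth(ctx context.Context, baseURL string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, baseURL+"/minio/health/cluster", nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return fmt.Errorf("health probe failed, refusing pool change: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("cluster unhealthy (HTTP %d), refusing pool change", resp.StatusCode)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Example tenant service URL; substitute the real one.
	if err := probeClusterHealth(ctx, "https://myminio-hl.minio-operator.svc.cluster.local:9000"); err != nil {
		fmt.Println("would block pool removal:", err)
	}
}
```

Additionally (or instead), the operator could refuse pool changes while any tenant pod is not Ready, which would cover the restart window shown below.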

Steps to Reproduce (for bugs)

Use kubectl get tenant -A -w to watch the status.

  1. Deploy the tenant with pool-1 and pool-2, and wait for Initialized & green.
  2. Add a new pool-3.
  3. When the status changes back to Initialized & green, remove pool-2 quickly.
$ kubectl get tenant -A -w
NAMESPACE        NAME      STATE         HEALTH   AGE
minio-operator   myminio   Initialized   yellow   6m5s
minio-operator   myminio   Initialized   yellow   6m28s
minio-operator   myminio                          0s
minio-operator   myminio                          10s
minio-operator   myminio                          10s
minio-operator   myminio                          15s
minio-operator   myminio   Waiting for MinIO TLS Certificate            15s
minio-operator   myminio   Waiting for MinIO TLS Certificate            20s
minio-operator   myminio   Provisioning MinIO Cluster IP Service            20s
minio-operator   myminio   Provisioning Console Service                     20s
minio-operator   myminio   Provisioning MinIO Headless Service              20s
minio-operator   myminio   Provisioning MinIO Headless Service              20s
minio-operator   myminio   Provisioning MinIO Statefulset                   21s
minio-operator   myminio   Provisioning MinIO Statefulset                   21s
minio-operator   myminio   Provisioning MinIO Statefulset                   21s
minio-operator   myminio   Provisioning MinIO Statefulset                   22s
minio-operator   myminio   Provisioning MinIO Statefulset                   22s
minio-operator   myminio   Provisioning MinIO Statefulset                   23s
minio-operator   myminio   Waiting for Tenant to be healthy                 23s
minio-operator   myminio   Waiting for Tenant to be healthy                 25s
minio-operator   myminio   Waiting for Tenant to be healthy                 30s
minio-operator   myminio   Waiting for Tenant to be healthy                 34s
minio-operator   myminio   Waiting for Tenant to be healthy                 39s
minio-operator   myminio   Waiting for Tenant to be healthy                 44s
minio-operator   myminio   Waiting for Tenant to be healthy                 49s
minio-operator   myminio   Waiting for Tenant to be healthy                 54s
minio-operator   myminio   Waiting for Tenant to be healthy        green    57s
minio-operator   myminio   Waiting for Tenant to be healthy        green    57s
minio-operator   myminio   Waiting for Tenant to be healthy        green    58s
minio-operator   myminio   Waiting for Tenant to be healthy        green    58s
minio-operator   myminio   Waiting for Tenant to be healthy        green    59s
minio-operator   myminio   Initialized                             green    59s
minio-operator   myminio   Initialized                             green    68s
minio-operator   myminio   Initialized                             green    73s
minio-operator   myminio   Initialized                             green    73s
minio-operator   myminio   Provisioning MinIO Statefulset          green    73s
minio-operator   myminio   Provisioning MinIO Statefulset          green    73s
minio-operator   myminio   Restarting MinIO                        green    73s
minio-operator   myminio   Restarting MinIO                        green    73s
minio-operator   myminio   Restarting MinIO                        green    83s
minio-operator   myminio   Initialized                             green    83s          <----------- pool-2 removed here
minio-operator   myminio   Initialized                             green    87s
minio-operator   myminio   Initialized                             red      106s

In fact, the pods are restarting at that time.

Context

Regression

Your Environment

  • Version used (minio-operator):
  • Environment name and version (e.g. kubernetes v1.17.2):
  • Server type and version:
  • Operating System and version (uname -a):
  • Link to your deployment file: