
kubernetes.io/ingress-bandwidth doesn't seem to work reliably #142

Open
mattwing opened this issue Nov 6, 2023 · 1 comment

mattwing commented Nov 6, 2023

Expected Behavior

Setting kubernetes.io/ingress-bandwidth to e.g. 35M should limit the pod's ingress bandwidth to 35 Mbps.

Current Behavior

Ingress bandwidth is sometimes limited to that amount and sometimes not.

Possible Solution

Steps to Reproduce (for bugs)

Here's my networking config:

/etc/cni/net.d/10-canal.conflist

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "<mynodehost>",
      "mtu": 1450,
      "ipam": {
          "type": "host-local",
          "ranges": [
              [
                  {
                      "subnet": "usePodCidr"
                  }
              ]
          ]
      },
      "policy": {
          "type": "k8s"
      },
      "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {"portMappings": true}
    },
    {
      "type": "bandwidth",
      "capabilities": {"bandwidth": true}
    }
  ]
}
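If I understand the CNI bandwidth plugin correctly, an applied ingress limit should show up as a tbf qdisc on the pod's host-side veth. A rough way to check on the node (the interface discovery below assumes Calico's cali* veth naming and is purely illustrative):

```shell
# Illustrative check, run on the node hosting the pod.
# Assumes Calico-style cali* veth names; adjust for your environment.
VETH=$(ip -o link show 2>/dev/null | awk -F': ' '/cali/ {print $2; exit}' | cut -d@ -f1)
if [ -n "$VETH" ]; then
  # A working 35M limit should show a tbf qdisc with rate 35Mbit here.
  tc qdisc show dev "$VETH"
else
  echo "no cali* veth found on this host"
fi
```

If the qdisc is missing on the runs where the pod exceeds 35 Mbps, that would point at the plugin not being invoked on those pod sandboxes rather than at tbf shaping accuracy.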

My pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: curl-client-35m
  annotations:
    kubernetes.io/ingress-bandwidth: 35M
spec:
  securityContext:
    runAsNonRoot: true
  containers:
  - name: curl-client
    image: curlimages/curl:7.78.0
    command: ["sh", "-c", "curl -sSL -w 'Download speed: %{speed_download} bytes/sec\n'  https://a-large-file-i-can-download-from-my-pod -o /dev/null"]
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
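As a sanity check, it's worth confirming the annotation actually landed on the running pod (pod name from this report; note the dots in the annotation key must be escaped in jsonpath):

```shell
# Print the pod's ingress-bandwidth annotation; empty output would mean
# the limit was never requested. Falls back gracefully off-cluster.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pod curl-client-35m \
    -o jsonpath='{.metadata.annotations.kubernetes\.io/ingress-bandwidth}'
  echo
else
  echo "kubectl not available on this machine"
fi
```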

But here's the output I'm getting (from kubectl logs pod/curl-client-35m):

[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 10257544 bytes/sec -> 82.060352 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 4171448 bytes/sec -> 33.371584 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 4417035 bytes/sec -> 35.33628 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 4808109 bytes/sec -> 38.464872 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 5672415 bytes/sec -> 45.37932 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 10938176 bytes/sec -> 87.505408 Mbps
[root@my-node ~]# k logs --tail=10 -f pod/curl-client-35m
Download speed: 5281018 bytes/sec -> 42.248144 Mbps

It looks like this sometimes works as expected, since the middle speeds are all roughly 35 Mbps. But the first and last runs are well above 35 Mbps, and I'm not sure why.
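For reference, the Mbps figures above are just curl's bytes/sec times 8, divided by 10^6, and the annotation's 35M means 35 megabits/s, i.e. 4,375,000 bytes/s. A small awk helper (the function name is mine) reproduces the conversion:

```shell
# Convert curl's bytes/sec to decimal Mbps, matching the numbers above.
to_mbps() { awk -v b="$1" 'BEGIN { printf "%.2f\n", b * 8 / 1000000 }'; }

to_mbps 4375000    # the 35M limit expressed in bytes/sec -> prints 35.00
to_mbps 10257544   # first run above -> prints 82.06
```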

Context

I'm trying to limit the amount of bandwidth that can be used by my pods in a network-constrained environment.

Your Environment

  • Calico version: rancher/hardened-calico:v3.26.1-build20231009
  • Flannel version: rancher/hardened-flannel:v0.22.1-build20231009
  • Orchestrator version:
  • Operating System and version: Rocky 8
  • Link to your project (optional):

mattwing commented Nov 6, 2023

It's possible this is a Calico issue rather than a Canal issue, so I filed projectcalico/calico#8187 as well.

@mattwing mattwing closed this as completed Nov 6, 2023
@mattwing mattwing reopened this Nov 6, 2023