
Split controller RBAC into cluster-wide and tenant roles #2346

Merged
merged 1 commit into tektoncd:master on Apr 15, 2020

Conversation

@ghost commented Apr 7, 2020

Changes

The controller currently operates with a single ClusterRole that spans a very broad set of access permissions. In multi-tenant scenarios this kind of RBAC configuration can be quite dangerous. In order to better support potential multi-tenant configurations, this PR splits the role that the controller receives into two:

  1. tekton-pipelines-controller-cluster-access: those permissions needed cluster-wide by the controller.
  2. tekton-pipelines-controller-tenant-access: those permissions needed on a namespace-by-namespace basis.

This PR does not actually change the level of access afforded to the controller. Instead, the roles are split but remain cluster-scoped by default. There should be no noticeable change in behaviour from the existing RBAC configuration in master.

If a team wanted to start running a multi-tenant service, they would be able to bind tekton-pipelines-controller-tenant-access using a RoleBinding instead of a ClusterRoleBinding, thereby limiting the access that the controller has to specific tenant namespaces.

Hat-tip to @eddie4941 for designing these changes.

Submitter Checklist

These are the criteria that every PR should meet; please check them off as you review them:

@googlebot added the cla: yes label Apr 7, 2020
@tekton-robot added the size/L label Apr 7, 2020
@dibyom (Member) left a comment

A doc on how a cluster operator could use this would make this more discoverable! (It could be just the details that are currently in the commit message.)
Also: looks like one of the commits is basically #2321?

@ghost (Author) commented Apr 8, 2020

Good catch! This PR was branched from the webhook one. I'll rebase on top of master now that it's merged.

Also, good call on the doc, I'll add some supporting information.

The controller currently operates with a single ClusterRole that
spans a very broad set of access permissions. In multi-tenant
scenarios this kind of RBAC configuration can be quite dangerous.

In order to better support potential multi-tenant configurations
this PR splits the roles that the controller receives into two.

This PR does not actually change the level of access afforded to
the controller. Instead, the roles are split but remain
cluster-scoped by default. There should be no noticeable change
in behaviour from the existing RBAC configuration in master.

If a team wanted to start running a multi-tenant service they
would be able to bind tekton-pipelines-controller-tenant-access
using a RoleBinding instead of a ClusterRoleBinding, thereby
limiting the access that the controller has to specific tenant
namespaces.

Full credit goes to @eddie4941 for designing these changes!
@ghost (Author) commented Apr 8, 2020

I've added a short description to the docs/developers/README.md just to document the existence of these two roles and their intended usage. I think there's an opportunity to provide much stronger guidelines on building multi-tenant services with Tekton but I'm not knowledgeable enough to do that right now.

I've also updated this PR to be based off of master, which has removed a couple of files from the diff since they were already merged in an earlier commit.

@dibyom (Member) left a comment

Nice. I'm sure we can improve on the multi-tenancy best practices in the future but thanks for getting it started!
/approve

- apiGroups: [""]
  resources: ["pods", "pods/log", "namespaces", "secrets", "events", "serviceaccounts", "configmaps", "persistentvolumeclaims", "limitranges"]
  verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
# Unclear if this access is actually required. Simply a hold-over from the previous
Member

Hmm, yeah this one is suspect...maybe we can get rid of it in a followup

@tekton-robot added the approved label Apr 8, 2020
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
  resourceNames: ["config-logging", "config-observability", "config-artifact-bucket", "config-artifact-pvc", "feature-flags"]
Author

Note to self: these configmap names are configurable and can be overridden with environment variables in the controller pod. We should probably document that they are named in the Role and tell users to make sure they change them here as well if they modify the defaults.

Contributor

Yup, that's correct. The names are listed out here: https://github.com/tektoncd/pipeline/blob/master/config/controller.yaml#L66-L75

Alternatively, if we are worried about keeping these in sync, we can have the default role that's included in release.yaml not list resourceNames. We can then add a line in the docs saying that it's possible to limit the names further if we want by adding resourceNames that match those that are passed to the controller via env vars. Though being explicit is nicer from an RBAC perspective, as you know exactly what the controller is doing.
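For illustration, the correspondence between the two looks roughly like the fragment below. The env var names are an assumption based on the linked controller.yaml (treat that file as authoritative); the point is that each value must stay in sync with the Role's resourceNames above.

```yaml
# Sketch of the controller Deployment's env block (assumed variable names).
# Each value must match an entry in the Role's resourceNames, otherwise the
# controller loses "get" access to the renamed configmap.
containers:
- name: tekton-pipelines-controller
  env:
  - name: CONFIG_LOGGING_NAME
    value: config-logging
  - name: CONFIG_OBSERVABILITY_NAME
    value: config-observability
  - name: CONFIG_ARTIFACT_BUCKET_NAME
    value: config-artifact-bucket
  - name: CONFIG_ARTIFACT_PVC_NAME
    value: config-artifact-pvc
  - name: CONFIG_FEATURE_FLAGS_NAME
    value: feature-flags
```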

@vdemeester (Member) left a comment

/lgtm

@tekton-robot added the lgtm label Apr 15, 2020
@tekton-robot (Collaborator) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dibyom, vdemeester

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tekton-robot merged commit 5d9c881 into tektoncd:master Apr 15, 2020
  name: tekton-pipelines-controller-tenant-access
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log", "namespaces", "secrets", "events", "serviceaccounts", "configmaps", "persistentvolumeclaims", "limitranges"]
Contributor

Just noticed this here. We're listing namespaces even though Namespace is not a namespaced resource, so we can drop it from here. This is already covered in the tekton-pipelines-controller-cluster-access ClusterRole anyway.
This shouldn't cause any issues and should be treated as a no-op, but it would be nice to clean it up.
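For reference, the tenant-scoped rule after that cleanup would read something like the sketch below (a suggested follow-up, not part of this PR; the verbs are taken from the rule quoted earlier in this thread):

```yaml
# Suggested follow-up sketch: "namespaces" dropped, since Namespace is a
# cluster-scoped resource already covered by tekton-pipelines-controller-cluster-access.
- apiGroups: [""]
  resources: ["pods", "pods/log", "secrets", "events", "serviceaccounts", "configmaps", "persistentvolumeclaims", "limitranges"]
  verbs: ["get", "list", "create", "update", "delete", "patch", "watch"]
```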

@ghost mentioned this pull request Apr 15, 2020
@afrittoli added the kind/cleanup label Apr 30, 2020
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files.
cla: yes Trying to make the CLA bot happy with ppl from different companies work on one commit
kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt.
lgtm Indicates that a PR is ready to be merged.
size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
None yet

6 participants