This repository has been archived by the owner on May 22, 2020. It is now read-only.

create a chroot aci for docker and kubelet #34

Closed
wants to merge 1 commit into from

Conversation

mikedanese
Contributor

@mikedanese mikedanese commented May 19, 2016

./unpack creates a chroot with docker and kubelet suitable for use as the RootDirectory (chroot env) of a systemd unit.
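A unit consuming such a chroot might look roughly like the sketch below. This is a hypothetical config fragment for illustration only; the unit name, rootfs path, and kubelet invocation are assumptions, not taken from this PR:

```ini
# /etc/systemd/system/kubelet.service (hypothetical)
[Unit]
Description=Kubelet running inside an unpacked node ACI chroot

[Service]
# Point systemd at the unpacked ACI rootfs; ExecStart resolves inside it.
RootDirectory=/opt/kubelet/rootfs
ExecStart=/usr/bin/kubelet
Restart=always

[Install]
WantedBy=multi-user.target
```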

@mikedanese mikedanese force-pushed the node-aci branch 3 times, most recently from c0c8634 to 27e4c5f Compare May 19, 2016 22:26
@mikedanese
Contributor Author

cc @vishh @aaronlevy @roberthbailey

@mikedanese mikedanese force-pushed the node-aci branch 3 times, most recently from 6839b27 to 76ac6c3 Compare May 19, 2016 22:40
@aaronlevy

I like the bash rkt-fly! Overall this lgtm.

Interested in your thoughts on the fixed chroot location vs. treating it more like a container. For example, if the node "container" is run with rkt, the rootfs would end up under a separate UUID each time (/var/lib/rkt/pods/run/<UUID>).

mkdir -p /opt/kubelet
tar xzvf node.aci -C /opt/kubelet
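The fixed-location unpack above can be exercised end-to-end with a throwaway tarball standing in for node.aci. The dummy payload, temp directory, and file contents below are ours, not from the PR:

```shell
#!/bin/sh
# Sketch: unpack a (dummy) ACI tarball into a fixed chroot location.
set -eu
work=$(mktemp -d)
cd "$work"
# Build a stand-in for node.aci containing a rootfs/ tree.
mkdir -p rootfs/usr/bin
echo fake-kubelet > rootfs/usr/bin/kubelet
tar czf node.aci rootfs
# Fixed-location unpack, as in the comment above (rooted at $work
# instead of / so the sketch runs unprivileged).
mkdir -p "$work/opt/kubelet"
tar xzf node.aci -C "$work/opt/kubelet"
cat "$work/opt/kubelet/rootfs/usr/bin/kubelet"
```

With a real node.aci the same two commands against /opt/kubelet give the fixed RootDirectory the unit expects.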

mount_in /proc
Contributor


Instead, why not mount in root, followed by a chroot?

mount_in / true
chroot $ROOTFS
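A rough expansion of that suggestion, shown dry-run since real mounts need root. `mount_in` is this PR's helper; what it expands to here is a guess, and ROOTFS is an assumed path:

```shell
#!/bin/sh
# Dry-run sketch: bind-mount from the host root once, then chroot,
# instead of mounting /proc (and friends) into the chroot one by one.
set -eu
ROOTFS=${ROOTFS:-/opt/kubelet/rootfs}   # assumed fixed chroot location
run() { echo "+ $*"; }                  # drop the echo to actually execute (as root)
run mount --rbind / "$ROOTFS/host"      # rough stand-in for `mount_in / true`
run chroot "$ROOTFS" /usr/bin/kubelet   # then enter the chroot
```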

@vishh
Contributor

vishh commented May 20, 2016

Added my comments.

@aaronlevy

Thinking about this a bit more, I kind of like the idea of running from a fixed location. That should skirt issues like: rkt/rkt#2553

@mikedanese
Contributor Author

This is blocked by moby/moby#22846

@philips

philips commented May 20, 2016

Hrm, I have played with running Docker under rkt before and I remember it working. Let me try it again. Here is the gist: https://gist.github.com/philips/4ba6f9888499266b0ab09d95991e6784

@mikedanese
Contributor Author

@philips Thanks for the tip. I will attempt to debug this some more...


acbuild run -- \
curl -sSL --fail \
"https://get.docker.com/builds/Linux/x86_64/docker-1.11.1.tgz" \
Contributor


Is it too much to assume that docker will already be installed?

The version of docker to pull is very OS specific, and I struggle to think users will not already have it if they plan to use it.


My increasing concern here is "kubernetes supports docker vX". This has already become a little fragile with the CoreOS release cycle, where we have shipped docker too soon from the k8s perspective (k8s v1.1 & docker v1.10 iirc); then with the upcoming k8s v1.3 release it was unclear if v1.11 was going to be the blessed version (while even CoreOS alpha still had v1.10).

Decoupling this from the underlying host would (hopefully) make this a little easier to reason about / ship updates as atomic units.

Contributor Author


Agree with @aaronlevy that I am more motivated by coupling kubelet/docker versions than by making it easy to install docker.

Contributor


The community itself doesn't agree on the version of docker to use in kube 1.3, and I think this blesses a particular version moving forward. I'd prefer it be removed. Maybe I just don't understand the expectation for this PR generally.

On Thursday, June 23, 2016, Mike Danese [email protected] wrote:

In node-aci/build
#34 (comment):

+set -o xtrace
+
+rm -f node.aci
+
+docker2aci docker://debian:jessie
+
+acbuild begin ./library-debian-jessie.aci
+
+acbuild run -- apt-get update
+acbuild run -- apt-get install -y -q apparmor curl iptables
+acbuild run -- apt-get autoremove
+acbuild run -- apt-get clean
+
+acbuild run -- \


@vishh
Contributor

vishh commented Jun 23, 2016

@mikedanese As discussed offline today, can we instead use rkt fly to run the kubelet and not depend on docker for bootstrapping it? The rkt dependency is only for the kubelet, and users can choose to use any runtime.
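Launching the kubelet ACI under rkt's fly stage1 might look like the dry-run sketch below. The stage1 name, volume mount, and image name are assumptions for illustration; `run` only echoes, so the sketch works without rkt installed:

```shell
#!/bin/sh
# Dry-run sketch of running node.aci under rkt's fly stage1.
set -eu
run() { echo "+ $*"; }   # drop the echo to actually invoke rkt
run rkt run \
    --stage1-name=coreos.com/rkt/stage1-fly \
    --volume var-lib-kubelet,kind=host,source=/var/lib/kubelet \
    node.aci
```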

@mikedanese
Contributor Author

@vishh Yes I would like to use rkt to run this in kubernetes-anywhere if we decide to do this, but I also want it to be possible to use this without the rkt dependency. We can't use the upstream stage1_fly aci until rkt/rkt#2567 is fixed. I have a custom stage1 that solves this for now.

@derekwaynecarr
Contributor

@mikedanese - is there a rough flow where this is anticipated to be used? I thought the baseline was that users come with a machine that conforms to a node spec (container runtime present, networking configured), and then we do something. I am not sure how to view this PR in a broader context.

@mikedanese mikedanese closed this Jan 11, 2017
@k8s-reviewable

This change is Reviewable
