adventures into the land of the command line

what happens if you stop a k8s pod with docker stop or kill

(k8s on azure with flannel for the container network)

if you stop/delete the container in a non-kubelet way (with docker stop or even with kill), kubelet doesn’t know anything about it, and then the real fun begins… at this point, you will not be able to spin up new pods on this node anymore, and the kubelet logs will start filling up with the following output:

"Failed to setup network for pod \"myapp-3ty5d(997a82ce-8c15-11e6-bbb0-42010af00002)\" using network plugins \"kubenet\": Error adding container to network: no IP addresses available in network: kubenet; Skipping pod"
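before wiping anything, you can check whether the IP pool really is exhausted. kubenet uses the host-local IPAM plugin, which (as far as i know) keeps one file per allocated IP under /var/lib/cni/networks/kubenet — and containers killed behind kubelet’s back never get their file cleaned up. a small sketch to count them (the function name and the state-dir path are my assumptions):

```shell
# count the IP reservation files host-local leaves behind.
# assumption: kubenet's IPAM state lives in /var/lib/cni/networks/kubenet
count_cni_ips() {
    dir=${1:-/var/lib/cni/networks/kubenet}
    # no state dir -> nothing allocated (or you're on the wrong node)
    [ -d "$dir" ] || { echo 0; return; }
    # reservation files are named after the IP, so they start with a digit
    ls "$dir" | grep -c '^[0-9]' || true
}

count_cni_ips    # prints the number of currently reserved pod IPs
```

if the number is close to the node’s pod subnet size (~254 for a /24) while docker ps shows far fewer running containers, you are hitting exactly this leak.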

this is a known problem with k8s and flannel. the fix is the following (i didn’t try it personally):

stop kubelet and the docker daemon on the affected node:

systemctl stop kubelet
systemctl stop docker

delete the CNI and kubelet state directories:

rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/

bring down and delete the virtual network interfaces (the CNI bridge, the flannel VXLAN device, and the docker bridge):

ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1

start the docker daemon and kubelet once more:

systemctl start docker
systemctl start kubelet
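for convenience, here is the whole sequence above as one script. the run wrapper and the DRY_RUN flag are my additions, so you can preview the (pretty destructive) commands before letting them loose on the node:

```shell
#!/bin/sh
# reset flannel/kubenet networking state on a node (sketch of the steps above)

# print the command instead of running it unless DRY_RUN=0
run() { [ "${DRY_RUN:-1}" = "1" ] && echo "+ $*" || "$@"; }

recover_node() {
    run systemctl stop kubelet
    run systemctl stop docker

    run rm -rf /var/lib/cni/
    run sh -c 'rm -rf /var/lib/kubelet/*'   # quoted so the glob expands at run time
    run rm -rf /etc/cni/

    run ifconfig cni0 down
    run ifconfig flannel.1 down
    run ifconfig docker0 down
    run ip link delete cni0
    run ip link delete flannel.1

    run systemctl start docker
    run systemctl start kubelet
}

recover_node    # dry run by default; only prints what would happen
```

once you have read through the preview, run it for real as root with DRY_RUN=0.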