kubelet fails to start after renewing the expired certificates of a kubespray 1.14.3 cluster
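What follows is the raw `journalctl -u kubelet` output from the failing node. In a restart loop like this, the interesting entries are the klog warning/error/fatal lines (severity prefix `W`/`E`/`F`). A minimal filter sketch, run against an inline sample rather than the live journal (the `/tmp` path and two sample lines are illustrative, not taken from the node):

```shell
#!/bin/sh
# Filter a saved kubelet journal dump down to klog W/E/F severity lines.
# In practice you would feed `journalctl -u kubelet` into the same grep.
cat > /tmp/kubelet-sample.log <<'EOF'
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.371222 18749 server.go:417] Version: v1.18.4
Aug 26 09:44:43 k8s-testn2 kubelet[18749]: F0826 09:44:43.510289 18749 csi_plugin.go:291] Failed to initialize CSINode after retrying: timed out waiting for the condition
EOF
# klog lines look like "W0826 09:43:54.494578 ..." - match on the severity letter.
grep -E 'kubelet\[[0-9]+\]: [WEF][0-9]{4} ' /tmp/kubelet-sample.log
```

This drops the informational `I` lines and leaves only the entries worth reading first.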

Aug 26 09:43:44 k8s-testn2 systemd[1]: kubelet.service failed.
Aug 26 09:43:54 k8s-testn2 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Aug 26 09:43:54 k8s-testn2 systemd[1]: Stopped Kubernetes Kubelet Server.
Aug 26 09:43:54 k8s-testn2 systemd[1]: Starting Kubernetes Kubelet Server...
Aug 26 09:43:54 k8s-testn2 systemd[1]: Started Kubernetes Kubelet Server.
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.371222 18749 server.go:417] Version: v1.18.4
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.371721 18749 plugins.go:100] No cloud provider specified.
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.371751 18749 server.go:838] Client rotation is on, will bootstrap in background
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.386525 18749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.387646 18749 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.480616 18749 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481062 18749 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481079 18749 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481180 18749 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481188 18749 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481193 18749 container_manager_linux.go:306] Creating device plugin manager: true
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481276 18749 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481288 18749 client.go:92] Start docker client with request timeout=2m0s
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: W0826 09:43:54.494578 18749 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.494603 18749 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.521279 18749 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.534510 18749 docker_service.go:258] Docker Info: &{ID:E2WL:6TPH:MEEW:UH6L:DZKZ:3SL2:5IKV:53YI:GTYJ:GQUJ:JGB6:WMS5 Containers:12 ContainersRunning:10 ContainersPaused:0 ContainersStopped:2 Images:27 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:82 SystemTime:2020-08-26T09:43:54.522838067+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1127.10.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000183490 NCPU:4 MemTotal:8200974336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:k8s-testn2 Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.534634 18749 docker_service.go:271] Setting cgroupDriver to cgroupfs
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550325 18749 remote_runtime.go:59] parsed scheme: ""
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550353 18749 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550390 18749 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550406 18749 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550465 18749 remote_image.go:50] parsed scheme: ""
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550473 18749 remote_image.go:50] scheme "" not registered, fallback to default scheme
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550483 18749 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550489 18749 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550525 18749 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550555 18749 kubelet.go:317] Watching apiserver
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.567792 18749 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.578556 18749 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.12, apiVersion: 1.40.0
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.579010 18749 server.go:1126] Started kubelet
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.579168 18749 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.580659 18749 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.580850 18749 server.go:145] Starting to listen on 0.0.0.0:10250
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.582788 18749 volume_manager.go:265] Starting Kubelet Volume Manager
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.587907 18749 server.go:393] Adding debug handlers to kubelet server.
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.593435 18749 desired_state_of_world_populator.go:139] Desired state populator starts to run
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.598626 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.616392 18749 status_manager.go:158] Starting to sync pod status with apiserver
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.616433 18749 kubelet.go:1821] Starting kubelet main sync loop.
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.616474 18749 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: W0826 09:44:00.639281 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.682854 18749 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.717432 18749 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.734979 18749 kubelet_node_status.go:70] Attempting to register node k8s-testn2
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793184 18749 cpu_manager.go:184] [cpumanager] starting with none policy
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793204 18749 cpu_manager.go:185] [cpumanager] reconciling every 10s
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793237 18749 state_mem.go:36] [cpumanager] initializing new in-memory state store
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793579 18749 state_mem.go:88] [cpumanager] updated default cpuset: ""
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793593 18749 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793611 18749 policy_none.go:43] [cpumanager] none policy: Start
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.795922 18749 plugin_manager.go:114] Starting Kubelet Plugin Manager
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.917859 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.919267 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.920352 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.921848 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.923316 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.997672 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1512623d-9a38-11ea-95cb-525400c76348-kube-proxy") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097893 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/6ad69eac-9a38-11ea-95cb-525400c76348-xtables-lock") pod "nodelocaldns-tdldf" (UID: "6ad69eac-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097929 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sys" (UniqueName: "kubernetes.io/host-path/d847ce13-9a38-11ea-95cb-525400c76348-sys") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097951 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-cni-bin" (UniqueName: "kubernetes.io/host-path/ec73a9b7-9a38-11ea-95cb-525400c76348-host-cni-bin") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097978 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-volume-provisioner" (UniqueName: "kubernetes.io/configmap/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner") pod "local-volume-provisioner-7kwt5" (UID: "1e531955-9a39-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098014 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-volume-provisioner-hostpath-local-storage" (UniqueName: "kubernetes.io/host-path/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-hostpath-local-storage") pod "local-volume-provisioner-7kwt5" (UID: "1e531955-9a39-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098036 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-nx4mq" (UniqueName: "kubernetes.io/secret/1512623d-9a38-11ea-95cb-525400c76348-kube-proxy-token-nx4mq") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098054 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ad69eac-9a38-11ea-95cb-525400c76348-config-volume") pod "nodelocaldns-tdldf" (UID: "6ad69eac-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098071 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/ec73a9b7-9a38-11ea-95cb-525400c76348-cni") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098092 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-q9pgj" (UniqueName: "kubernetes.io/secret/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-token-q9pgj") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098111 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "nodelocaldns-token-cmjsd" (UniqueName: "kubernetes.io/secret/6ad69eac-9a38-11ea-95cb-525400c76348-nodelocaldns-token-cmjsd") pod "nodelocaldns-tdldf" (UID: "6ad69eac-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098128 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/d847ce13-9a38-11ea-95cb-525400c76348-root") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098147 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-exporter-token-2wqvq" (UniqueName: "kubernetes.io/secret/d847ce13-9a38-11ea-95cb-525400c76348-node-exporter-token-2wqvq") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098164 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/ec73a9b7-9a38-11ea-95cb-525400c76348-run") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098182 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/1512623d-9a38-11ea-95cb-525400c76348-xtables-lock") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098201 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/1512623d-9a38-11ea-95cb-525400c76348-lib-modules") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098222 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-cfg") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098254 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-volume-provisioner-token-k94qb" (UniqueName: "kubernetes.io/secret/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-token-k94qb") pod "local-volume-provisioner-7kwt5" (UID: "1e531955-9a39-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098300 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "proc" (UniqueName: "kubernetes.io/host-path/d847ce13-9a38-11ea-95cb-525400c76348-proc") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098312 18749 reconciler.go:157] Reconciler: start to sync state
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.979564 18749 request.go:621] Throttling request took 1.0598077s, request: GET:https://10.11.37.61:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dnodelocaldns&limit=500&resourceVersion=0
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.199974 18749 configmap.go:200] Couldn't get configMap kube-system/local-volume-provisioner: failed to sync configmap cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200110 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner podName:1e531955-9a39-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700058846 +0800 CST m=+8.398976615 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"local-volume-provisioner\" (UniqueName: \"kubernetes.io/configmap/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner\") pod \"local-volume-provisioner-7kwt5\" (UID: \"1e531955-9a39-11ea-95cb-525400c76348\") : failed to sync configmap cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200440 18749 configmap.go:200] Couldn't get configMap kube-system/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200518 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-cfg podName:ec73a9b7-9a38-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700491755 +0800 CST m=+8.399409526 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-cfg\") pod \"kube-flannel-rq79q\" (UID: \"ec73a9b7-9a38-11ea-95cb-525400c76348\") : failed to sync configmap cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200551 18749 secret.go:195] Couldn't get secret kube-system/local-volume-provisioner-token-k94qb: failed to sync secret cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200619 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-token-k94qb podName:1e531955-9a39-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700591078 +0800 CST m=+8.399508880 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"local-volume-provisioner-token-k94qb\" (UniqueName: \"kubernetes.io/secret/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-token-k94qb\") pod \"local-volume-provisioner-7kwt5\" (UID: \"1e531955-9a39-11ea-95cb-525400c76348\") : failed to sync secret cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200643 18749 secret.go:195] Couldn't get secret kube-system/flannel-token-q9pgj: failed to sync secret cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200694 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-token-q9pgj podName:ec73a9b7-9a38-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700669926 +0800 CST m=+8.399587698 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"flannel-token-q9pgj\" (UniqueName: \"kubernetes.io/secret/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-token-q9pgj\") pod \"kube-flannel-rq79q\" (UID: \"ec73a9b7-9a38-11ea-95cb-525400c76348\") : failed to sync secret cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200717 18749 secret.go:195] Couldn't get secret kubesphere-monitoring-system/node-exporter-token-2wqvq: failed to sync secret cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200790 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d847ce13-9a38-11ea-95cb-525400c76348-node-exporter-token-2wqvq podName:d847ce13-9a38-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700765179 +0800 CST m=+8.399682940 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"node-exporter-token-2wqvq\" (UniqueName: \"kubernetes.io/secret/d847ce13-9a38-11ea-95cb-525400c76348-node-exporter-token-2wqvq\") pod \"node-exporter-bj6b4\" (UID: \"d847ce13-9a38-11ea-95cb-525400c76348\") : failed to sync secret cache: timed out waiting for the condition"
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.023888 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ef42d1e9d35dad7cf3d609668c944b130c70cc31c794d752c4144bc862e1d15e
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.583725 18749 kubelet_node_status.go:112] Node k8s-testn2 was previously registered
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.583840 18749 kubelet_node_status.go:73] Successfully registered node k8s-testn2
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.624431 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71d52d1f329a093302b80820c5deffa9f7f5c6685c28bf260457e26cb12c0a80
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: W0826 09:44:03.699283 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: E0826 09:44:03.981146 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:04 k8s-testn2 kubelet[18749]: W0826 09:44:04.764535 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:04 k8s-testn2 kubelet[18749]: E0826 09:44:04.780818 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:06 k8s-testn2 kubelet[18749]: E0826 09:44:06.780725 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:09 k8s-testn2 kubelet[18749]: E0826 09:44:09.980945 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:11 k8s-testn2 kubelet[18749]: E0826 09:44:11.180978 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:14 k8s-testn2 kubelet[18749]: E0826 09:44:14.380959 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:16 k8s-testn2 kubelet[18749]: E0826 09:44:16.981057 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:18 k8s-testn2 kubelet[18749]: E0826 09:44:18.969833 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:21 k8s-testn2 kubelet[18749]: E0826 09:44:21.937083 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:33 k8s-testn2 kubelet[18749]: I0826 09:44:33.941619 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ef42d1e9d35dad7cf3d609668c944b130c70cc31c794d752c4144bc862e1d15e
Aug 26 09:44:33 k8s-testn2 kubelet[18749]: I0826 09:44:33.942071 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 99ec916d3d03e1637742d8fac962d63a7f73c6493f1587796ee8b45c1ce5512e
Aug 26 09:44:33 k8s-testn2 kubelet[18749]: E0826 09:44:33.943175 18749 pod_workers.go:191] Error syncing pod ec73a9b7-9a38-11ea-95cb-525400c76348 ("kube-flannel-rq79q_kube-system(ec73a9b7-9a38-11ea-95cb-525400c76348)"), skipping: failed to "StartContainer" for "kube-flannel" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-flannel pod=kube-flannel-rq79q_kube-system(ec73a9b7-9a38-11ea-95cb-525400c76348)"
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: W0826 09:44:34.948762 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: I0826 09:44:34.954769 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71d52d1f329a093302b80820c5deffa9f7f5c6685c28bf260457e26cb12c0a80
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: I0826 09:44:34.955191 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 99f4d3118b09754f2523c80f7148bc1b0fb72152d43ba820344511de2937658b
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: E0826 09:44:34.955730 18749 pod_workers.go:191] Error syncing pod 1e531955-9a39-11ea-95cb-525400c76348 ("local-volume-provisioner-7kwt5_kube-system(1e531955-9a39-11ea-95cb-525400c76348)"), skipping: failed to "StartContainer" for "provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=provisioner pod=local-volume-provisioner-7kwt5_kube-system(1e531955-9a39-11ea-95cb-525400c76348)"
Aug 26 09:44:35 k8s-testn2 kubelet[18749]: W0826 09:44:35.971212 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:43 k8s-testn2 kubelet[18749]: E0826 09:44:43.510270 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:43 k8s-testn2 kubelet[18749]: F0826 09:44:43.510289 18749 csi_plugin.go:291] Failed to initialize CSINode after retrying: timed out waiting for the condition
Aug 26 09:44:43 k8s-testn2 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 26 09:44:43 k8s-testn2 systemd[1]: Unit kubelet.service entered failed state.
Aug 26 09:44:43 k8s-testn2 systemd[1]: kubelet.service failed.
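The line that actually terminates the process is the `F0826 ... csi_plugin.go:291` fatal. Certificate loading itself succeeded earlier (`Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem"`), but the v1.18.4 kubelet cannot initialize its CSINode object because the apiserver keeps answering `the server could not find the requested resource`. One plausible reading, given the 1.14.3 cluster in the title, is kubelet/apiserver version skew rather than the renewed certificates: `storage.k8s.io/v1` CSINode/CSIDriver only exist on apiservers from v1.17 on, so a v1.18 kubelet binary pointed at a 1.14-era control plane fails exactly like this. A small triage sketch (the `/tmp` file stands in for a saved copy of the journal above; its one sample line is quoted from the log):

```shell
#!/bin/sh
# If the F-level csi_plugin.go:291 line is present, the kubelet process
# itself exited (systemd reports status=255), so the restart loop will
# continue until the apiserver actually serves the CSINode resource.
cat > /tmp/kubelet-journal.log <<'EOF'
Aug 26 09:44:43 k8s-testn2 kubelet[18749]: F0826 09:44:43.510289 18749 csi_plugin.go:291] Failed to initialize CSINode after retrying: timed out waiting for the condition
EOF
if grep -q 'csi_plugin.go:291] Failed to initialize CSINode after retrying' /tmp/kubelet-journal.log; then
    echo "kubelet died in CSINode init - suspect kubelet/apiserver version skew"
fi
```

If the check fires, a reasonable next step is to compare the server version reported by `kubectl version` against the kubelet binary on the node, then either upgrade the control plane or roll the kubelet back to the cluster's version.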