*
* ==> Audit <==
* |-----------|------|----------|------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|-----------|------|----------|------------|---------|---------------------|---------------------|
| start | | minikube | ericrennie | v1.31.0 | 18 Jul 23 16:50 EDT | 18 Jul 23 16:52 EDT |
| start | | minikube | ericrennie | v1.31.0 | 18 Jul 23 19:18 EDT | 18 Jul 23 19:19 EDT |
| dashboard | | minikube | ericrennie | v1.31.0 | 18 Jul 23 20:20 EDT | |
| start | | minikube | ericrennie | v1.31.0 | 18 Jul 23 20:27 EDT | 18 Jul 23 20:27 EDT |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 15:06 EDT | 19 Jul 23 15:08 EDT |
| stop | | minikube | ericrennie | v1.31.0 | 19 Jul 23 16:25 EDT | 19 Jul 23 16:25 EDT |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 16:32 EDT | 19 Jul 23 16:34 EDT |
| stop | | minikube | ericrennie | v1.31.0 | 19 Jul 23 16:39 EDT | 19 Jul 23 16:40 EDT |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 16:40 EDT | 19 Jul 23 16:42 EDT |
| stop | | minikube | ericrennie | v1.31.0 | 19 Jul 23 16:54 EDT | 19 Jul 23 16:54 EDT |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 16:54 EDT | 19 Jul 23 16:56 EDT |
| stop | | minikube | ericrennie | v1.31.0 | 19 Jul 23 17:00 EDT | 19 Jul 23 17:00 EDT |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 17:01 EDT | |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 17:02 EDT | |
| delete | | minikube | ericrennie | v1.31.0 | 19 Jul 23 17:03 EDT | 19 Jul 23 17:03 EDT |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 17:03 EDT | 19 Jul 23 17:05 EDT |
| stop | | minikube | ericrennie | v1.31.0 | 19 Jul 23 17:19 EDT | 19 Jul 23 17:19 EDT |
| start | | minikube | ericrennie | v1.31.0 | 19 Jul 23 17:19 EDT | |
|-----------|------|----------|------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/07/19 17:19:56
Running on machine: Erics-iMac
Binary: Built with gc go1.20.6 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0719 17:19:56.014679 16258 out.go:296] Setting OutFile to fd 1 ...
I0719 17:19:56.014858 16258 out.go:348] isatty.IsTerminal(1) = true
I0719 17:19:56.014861 16258 out.go:309] Setting ErrFile to fd 2...
I0719 17:19:56.014867 16258 out.go:348] isatty.IsTerminal(2) = true
I0719 17:19:56.015081 16258 root.go:338] Updating PATH: /Users/ericrennie/.minikube/bin
I0719 17:19:56.017228 16258 out.go:303] Setting JSON to false
I0719 17:19:56.049697 16258 start.go:128] hostinfo: {"hostname":"Erics-iMac.lan","uptime":25015,"bootTime":1689776581,"procs":555,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6.7","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b10012f9-1ba0-51de-99ec-413095744af1"}
W0719 17:19:56.049799 16258 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0719 17:19:56.055443 16258 out.go:177] 😄 minikube v1.31.0 on Darwin 12.6.7
I0719 17:19:56.059486 16258 notify.go:220] Checking for updates...
I0719 17:19:56.059708 16258 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 17:19:56.059915 16258 driver.go:373] Setting default libvirt URI to qemu:///system
I0719 17:19:56.063564 16258 out.go:177] ✨ Using the qemu2 driver based on existing profile
I0719 17:19:56.067286 16258 start.go:298] selected driver: qemu2
I0719 17:19:56.067310 16258 start.go:880] validating driver "qemu2" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54044 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0719 17:19:56.067407 16258 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0719 17:19:56.067690 16258 cni.go:84] Creating CNI manager for ""
I0719 17:19:56.067701 16258 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
I0719 17:19:56.067711 16258 start_flags.go:319] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:54044 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:builtin Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
I0719 17:19:56.067909 16258 iso.go:125] acquiring lock: {Name:mkd195d5ea264f22cfc7949f980e5e336df2d339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0719 17:19:56.071569 16258 out.go:177] 👍 Starting control plane node minikube in cluster minikube
I0719 17:19:56.078034 16258 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
I0719 17:19:56.078080 16258 preload.go:148] Found local preload: /Users/ericrennie/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
I0719 17:19:56.078097 16258 cache.go:57] Caching tarball of preloaded images
I0719 17:19:56.078241 16258 preload.go:174] Found /Users/ericrennie/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0719 17:19:56.078250 16258 cache.go:60] Finished verifying existence of preloaded tar for v1.27.3 on docker
I0719 17:19:56.078337 16258 profile.go:148] Saving config to /Users/ericrennie/.minikube/profiles/minikube/config.json ...
I0719 17:19:56.080695 16258 start.go:365] acquiring machines lock for minikube: {Name:mk3b1eefe0d7779fd4a04d8ca1e85343de109856 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 17:19:56.080769 16258 start.go:369] acquired machines lock for "minikube" in 60.195µs
I0719 17:19:56.080789 16258 start.go:96] Skipping create...Using existing machine configuration
I0719 17:19:56.080796 16258 fix.go:54] fixHost starting:
I0719 17:19:56.080976 16258 fix.go:102] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0719 17:19:56.080987 16258 fix.go:128] unexpected machine state, will restart: <nil>
I0719 17:19:56.084417 16258 out.go:177] 🔄 Restarting existing qemu2 VM for "minikube" ...
I0719 17:19:56.088221 16258 main.go:141] libmachine: executing: qemu-system-x86_64 -display none -accel hvf -m 6000 -smp 2 -boot d -cdrom /Users/ericrennie/.minikube/machines/minikube/boot2docker.iso -qmp unix:/Users/ericrennie/.minikube/machines/minikube/monitor,server,nowait -pidfile /Users/ericrennie/.minikube/machines/minikube/qemu.pid -nic user,model=virtio,hostfwd=tcp::53653-:22,hostfwd=tcp::53654-:2376,hostname=minikube -daemonize /Users/ericrennie/.minikube/machines/minikube/disk.qcow2
I0719 17:19:56.163235 16258 main.go:141] libmachine: STDOUT:
W0719 17:19:56.163253 16258 main.go:139] libmachine: STDERR: qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
I0719 17:19:56.163269 16258 main.go:141] libmachine: Waiting for VM to start (ssh -p 53653 [email protected])...
I0719 17:21:11.271750 16258 profile.go:148] Saving config to /Users/ericrennie/.minikube/profiles/minikube/config.json ...
I0719 17:21:11.272071 16258 machine.go:88] provisioning docker machine ...
I0719 17:21:11.272088 16258 buildroot.go:166] provisioning hostname "minikube"
I0719 17:21:11.272319 16258 main.go:141] libmachine: Using SSH client type: native
I0719 17:21:11.272723 16258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100411840] 0x1004148e0 <nil> [] 0s} localhost 53653 <nil> <nil>}
I0719 17:21:11.272732 16258 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0719 17:21:11.366209 16258 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0719 17:21:11.366285 16258 main.go:141] libmachine: Using SSH client type: native
I0719 17:21:11.366667 16258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100411840] 0x1004148e0 <nil> [] 0s} localhost 53653 <nil> <nil>}
I0719 17:21:11.366678 16258 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
else
echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
fi
fi
I0719 17:21:11.447594 16258 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0719 17:21:11.447611 16258 buildroot.go:172] set auth options {CertDir:/Users/ericrennie/.minikube CaCertPath:/Users/ericrennie/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/ericrennie/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/ericrennie/.minikube/machines/server.pem ServerKeyPath:/Users/ericrennie/.minikube/machines/server-key.pem ClientKeyPath:/Users/ericrennie/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/ericrennie/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/ericrennie/.minikube}
I0719 17:21:11.447654 16258 buildroot.go:174] setting up certificates
I0719 17:21:11.447666 16258 provision.go:83] configureAuth start
I0719 17:21:11.447674 16258 provision.go:138] copyHostCerts
I0719 17:21:11.447752 16258 exec_runner.go:144] found /Users/ericrennie/.minikube/ca.pem, removing ...
I0719 17:21:11.447759 16258 exec_runner.go:203] rm: /Users/ericrennie/.minikube/ca.pem
I0719 17:21:11.447921 16258 exec_runner.go:151] cp: /Users/ericrennie/.minikube/certs/ca.pem --> /Users/ericrennie/.minikube/ca.pem (1090 bytes)
I0719 17:21:11.448206 16258 exec_runner.go:144] found /Users/ericrennie/.minikube/cert.pem, removing ...
I0719 17:21:11.448211 16258 exec_runner.go:203] rm: /Users/ericrennie/.minikube/cert.pem
I0719 17:21:11.448316 16258 exec_runner.go:151] cp: /Users/ericrennie/.minikube/certs/cert.pem --> /Users/ericrennie/.minikube/cert.pem (1131 bytes)
I0719 17:21:11.448592 16258 exec_runner.go:144] found /Users/ericrennie/.minikube/key.pem, removing ...
I0719 17:21:11.448596 16258 exec_runner.go:203] rm: /Users/ericrennie/.minikube/key.pem
I0719 17:21:11.448720 16258 exec_runner.go:151] cp: /Users/ericrennie/.minikube/certs/key.pem --> /Users/ericrennie/.minikube/key.pem (1675 bytes)
I0719 17:21:11.448886 16258 provision.go:112] generating server cert: /Users/ericrennie/.minikube/machines/server.pem ca-key=/Users/ericrennie/.minikube/certs/ca.pem private-key=/Users/ericrennie/.minikube/certs/ca-key.pem org=ericrennie.minikube san=[127.0.0.1 localhost localhost 127.0.0.1 minikube minikube]
I0719 17:21:11.703221 16258 provision.go:172] copyRemoteCerts
I0719 17:21:11.703460 16258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0719 17:21:11.703478 16258 sshutil.go:53] new ssh client: &{IP:localhost Port:53653 SSHKeyPath:/Users/ericrennie/.minikube/machines/minikube/id_rsa Username:docker}
I0719 17:21:11.759285 16258 ssh_runner.go:362] scp /Users/ericrennie/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1090 bytes)
I0719 17:21:11.791783 16258 ssh_runner.go:362] scp /Users/ericrennie/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I0719 17:21:11.825897 16258 ssh_runner.go:362] scp /Users/ericrennie/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0719 17:21:11.857892 16258 provision.go:86] duration metric: configureAuth took 410.204249ms
I0719 17:21:11.857903 16258 buildroot.go:189] setting minikube options for container-runtime
I0719 17:21:11.858055 16258 config.go:182] Loaded profile config "minikube": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 17:21:11.858113 16258 main.go:141] libmachine: Using SSH client type: native
I0719 17:21:11.858460 16258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100411840] 0x1004148e0 <nil> [] 0s} localhost 53653 <nil> <nil>}
I0719 17:21:11.858466 16258 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0719 17:21:11.940415 16258 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0719 17:21:11.940422 16258 buildroot.go:70] root file system type: tmpfs
I0719 17:21:11.940522 16258 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0719 17:21:11.940614 16258 main.go:141] libmachine: Using SSH client type: native
I0719 17:21:11.940992 16258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100411840] 0x1004148e0 <nil> [] 0s} localhost 53653 <nil> <nil>}
I0719 17:21:11.941039 16258 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0719 17:21:12.037118 16258 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0719 17:21:12.037222 16258 main.go:141] libmachine: Using SSH client type: native
I0719 17:21:12.037592 16258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100411840] 0x1004148e0 <nil> [] 0s} localhost 53653 <nil> <nil>}
I0719 17:21:12.037604 16258 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0719 17:21:13.718254 16258 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0719 17:21:13.718267 16258 machine.go:91] provisioned docker machine in 2.446130381s
I0719 17:21:13.718281 16258 start.go:300] post-start starting for "minikube" (driver="qemu2")
I0719 17:21:13.718293 16258 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0719 17:21:13.718385 16258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0719 17:21:13.718395 16258 sshutil.go:53] new ssh client: &{IP:localhost Port:53653 SSHKeyPath:/Users/ericrennie/.minikube/machines/minikube/id_rsa Username:docker}
I0719 17:21:13.773100 16258 ssh_runner.go:195] Run: cat /etc/os-release
I0719 17:21:13.778114 16258 info.go:137] Remote host: Buildroot 2021.02.12
I0719 17:21:13.778127 16258 filesync.go:126] Scanning /Users/ericrennie/.minikube/addons for local assets ...
I0719 17:21:13.778210 16258 filesync.go:126] Scanning /Users/ericrennie/.minikube/files for local assets ...
I0719 17:21:13.778251 16258 start.go:303] post-start completed in 59.965097ms
I0719 17:21:13.778259 16258 fix.go:56] fixHost completed within 1m17.695536266s
I0719 17:21:13.778308 16258 main.go:141] libmachine: Using SSH client type: native
I0719 17:21:13.778651 16258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100411840] 0x1004148e0 <nil> [] 0s} localhost 53653 <nil> <nil>}
I0719 17:21:13.778659 16258 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0719 17:21:13.857150 16258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689801673.385383946
I0719 17:21:13.857157 16258 fix.go:206] guest clock: 1689801673.385383946
I0719 17:21:13.857163 16258 fix.go:219] Guest: 2023-07-19 17:21:13.385383946 -0400 EDT Remote: 2023-07-19 17:21:13.77826 -0400 EDT m=+77.801732760 (delta=-392.876054ms)
I0719 17:21:13.857185 16258 fix.go:190] guest clock delta is within tolerance: -392.876054ms
I0719 17:21:13.857188 16258 start.go:83] releasing machines lock for "minikube", held for 1m17.774483632s
I0719 17:21:13.857563 16258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0719 17:21:13.857647 16258 sshutil.go:53] new ssh client: &{IP:localhost Port:53653 SSHKeyPath:/Users/ericrennie/.minikube/machines/minikube/id_rsa Username:docker}
I0719 17:21:13.857972 16258 ssh_runner.go:195] Run: cat /version.json
I0719 17:21:13.857981 16258 sshutil.go:53] new ssh client: &{IP:localhost Port:53653 SSHKeyPath:/Users/ericrennie/.minikube/machines/minikube/id_rsa Username:docker}
I0719 17:21:13.901778 16258 ssh_runner.go:195] Run: systemctl --version
I0719 17:21:14.317451 16258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0719 17:21:14.323478 16258 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0719 17:21:14.323550 16258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0719 17:21:14.343275 16258 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0719 17:21:14.343284 16258 start.go:466] detecting cgroup driver to use...
I0719 17:21:14.343390 16258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0719 17:21:14.366009 16258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0719 17:21:14.379967 16258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0719 17:21:14.393746 16258 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0719 17:21:14.393815 16258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0719 17:21:14.406632 16258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0719 17:21:14.419698 16258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0719 17:21:14.432174 16258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0719 17:21:14.446142 16258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0719 17:21:14.459907 16258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0719 17:21:14.474483 16258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0719 17:21:14.486590 16258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0719 17:21:14.498552 16258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0719 17:21:14.671455 16258 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0719 17:21:14.700218 16258 start.go:466] detecting cgroup driver to use...
I0719 17:21:14.700298 16258 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0719 17:21:14.727384 16258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0719 17:21:14.750532 16258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0719 17:21:14.778730 16258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0719 17:21:14.795356 16258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0719 17:21:14.811520 16258 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0719 17:21:14.854298 16258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0719 17:21:14.871925 16258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0719 17:21:14.896666 16258 ssh_runner.go:195] Run: which cri-dockerd
I0719 17:21:14.900501 16258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0719 17:21:14.912903 16258 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0719 17:21:14.934862 16258 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0719 17:21:15.089430 16258 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0719 17:21:15.245429 16258 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0719 17:21:15.245441 16258 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0719 17:21:15.269680 16258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0719 17:21:15.426554 16258 ssh_runner.go:195] Run: sudo systemctl restart docker
I0719 17:21:17.133397 16258 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.706785161s)
I0719 17:21:17.133465 16258 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0719 17:21:17.294527 16258 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0719 17:21:17.457571 16258 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0719 17:21:17.616609 16258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0719 17:21:17.784955 16258 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0719 17:21:17.809436 16258 out.go:177]
W0719 17:21:17.812391 16258 out.go:239] ❌ Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:
stderr:
Job failed. See "journalctl -xe" for details.
W0719 17:21:17.812408 16258 out.go:239]
W0719 17:21:17.813365   16258 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    😿  If the above advice does not help, please let us know:                               │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                             │
│                                                                                             │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 17:21:17.823138 16258 out.go:177]
*
* ==> Docker <==
* -- Journal begins at Wed 2023-07-19 21:21:02 UTC, ends at Wed 2023-07-19 21:22:46 UTC. --
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106241758Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106255909Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106267922Z" level=info msg="NRI interface is disabled by configuration."
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106284106Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106334273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106675162Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106766898Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106786458Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106802218Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106819401Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106833180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106846472Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106861453Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106874865Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106937747Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.106960534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107017788Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107402166Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107481431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107501139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107525017Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107578446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107596819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107612585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107676560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107861842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107883913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107898120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107910827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107924533Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.107959333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108025222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108041763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108055536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108068547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108083331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108096444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108108896Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108123393Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108135250Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108146989Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108434402Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108555303Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108604550Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 19 21:21:16 minikube dockerd[823]: time="2023-07-19T21:21:16.108752160Z" level=info msg="containerd successfully booted in 0.029345s"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.132817816Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.174043911Z" level=info msg="Loading containers: start."
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.514789633Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.573354112Z" level=info msg="Loading containers: done."
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.633331334Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.633401800Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.633412857Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.633419497Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.633436982Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.633490881Z" level=info msg="Daemon has completed initialization"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.661014479Z" level=info msg="API listen on /var/run/docker.sock"
Jul 19 21:21:16 minikube dockerd[817]: time="2023-07-19T21:21:16.661209029Z" level=info msg="API listen on [::]:2376"
Jul 19 21:21:16 minikube systemd[1]: Started Docker Application Container Engine.
Jul 19 21:21:17 minikube systemd[1]: Dependency failed for CRI Interface for Docker Application Container Engine.
Jul 19 21:21:17 minikube systemd[1]: cri-docker.service: Job cri-docker.service/start failed with result 'dependency'.
*
* ==> container status <==
* CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41db9460c068 d547262b5fbc "/bin/prometheus-con…" 7 minutes ago Exited (0) 3 minutes ago k8s_config-reloader_prometheus-prometheus-kube-prometheus-prometheus-0_default_29f6f92e-44e8-45fa-8d6d-c5a134c45d04_0
02549f428343 quay.io/prometheus/prometheus "/bin/prometheus --w…" 7 minutes ago Exited (0) 3 minutes ago k8s_prometheus_prometheus-prometheus-kube-prometheus-prometheus-0_default_29f6f92e-44e8-45fa-8d6d-c5a134c45d04_0
823c23d63962 d547262b5fbc "/bin/prometheus-con…" 7 minutes ago Exited (0) 3 minutes ago k8s_config-reloader_alertmanager-prometheus-kube-prometheus-alertmanager-0_default_8dec83a9-c1dd-4d6a-9acf-15bbd064e19e_0
409dc34cf056 quay.io/prometheus/alertmanager "/bin/alertmanager -…" 7 minutes ago Exited (0) 3 minutes ago k8s_alertmanager_alertmanager-prometheus-kube-prometheus-alertmanager-0_default_8dec83a9-c1dd-4d6a-9acf-15bbd064e19e_0
83f396e5b714 quay.io/prometheus-operator/prometheus-config-reloader "/bin/prometheus-con…" 7 minutes ago Exited (0) 7 minutes ago k8s_init-config-reloader_prometheus-prometheus-kube-prometheus-prometheus-0_default_29f6f92e-44e8-45fa-8d6d-c5a134c45d04_0
42cfeefc2089 quay.io/prometheus-operator/prometheus-config-reloader "/bin/prometheus-con…" 7 minutes ago Exited (0) 7 minutes ago k8s_init-config-reloader_alertmanager-prometheus-kube-prometheus-alertmanager-0_default_8dec83a9-c1dd-4d6a-9acf-15bbd064e19e_0
587e0c9b9e85 grafana/grafana "/run.sh" 7 minutes ago Exited (0) 3 minutes ago k8s_grafana_prometheus-grafana-7478bfcd9-dmfxf_default_77a04024-6778-4362-8c3a-b6bf1e4168b9_0
a96c8165d95e registry.k8s.io/pause:3.9 "/pause" 8 minutes ago Exited (0) 3 minutes ago k8s_POD_alertmanager-prometheus-kube-prometheus-alertmanager-0_default_8dec83a9-c1dd-4d6a-9acf-15bbd064e19e_0
bc830d0e31c8 registry.k8s.io/pause:3.9 "/pause" 8 minutes ago Exited (0) 3 minutes ago k8s_POD_prometheus-prometheus-kube-prometheus-prometheus-0_default_29f6f92e-44e8-45fa-8d6d-c5a134c45d04_0
63d053d52979 quay.io/prometheus-operator/prometheus-operator "/bin/operator --kub…" 8 minutes ago Exited (0) 3 minutes ago k8s_kube-prometheus-stack_prometheus-kube-prometheus-operator-7dfb8b9b74-r29vp_default_bee49d54-52c2-4bdc-afe9-d5461ad5282f_0
90ee6ce1e548 registry.k8s.io/kube-state-metrics/kube-state-metrics "/kube-state-metrics…" 8 minutes ago Exited (2) 3 minutes ago k8s_kube-state-metrics_prometheus-kube-state-metrics-5b49fbfdf9-t9v4b_default_6de91caf-d065-49c0-a308-907f702a7d69_0
e3e6950e4adf 12bfdf6df7b0 "python -u /app/side…" 8 minutes ago Exited (0) 3 minutes ago k8s_grafana-sc-datasources_prometheus-grafana-7478bfcd9-dmfxf_default_77a04024-6778-4362-8c3a-b6bf1e4168b9_0
6e9e33dd1465 quay.io/kiwigrid/k8s-sidecar "python -u /app/side…" 8 minutes ago Exited (0) 3 minutes ago k8s_grafana-sc-dashboard_prometheus-grafana-7478bfcd9-dmfxf_default_77a04024-6778-4362-8c3a-b6bf1e4168b9_0
8e3df3620bab quay.io/prometheus/node-exporter "/bin/node_exporter …" 8 minutes ago Exited (143) 3 minutes ago k8s_node-exporter_prometheus-prometheus-node-exporter-5q7lk_default_a3a160cc-9e6b-45a2-ad17-3db29d2eb8a8_0
5dfea1c17326 registry.k8s.io/pause:3.9 "/pause" 8 minutes ago Exited (0) 3 minutes ago k8s_POD_prometheus-grafana-7478bfcd9-dmfxf_default_77a04024-6778-4362-8c3a-b6bf1e4168b9_0
5677acd52261 registry.k8s.io/pause:3.9 "/pause" 8 minutes ago Exited (0) 3 minutes ago k8s_POD_prometheus-kube-prometheus-operator-7dfb8b9b74-r29vp_default_bee49d54-52c2-4bdc-afe9-d5461ad5282f_0
996f83722b5e registry.k8s.io/pause:3.9 "/pause" 8 minutes ago Exited (0) 3 minutes ago k8s_POD_prometheus-kube-state-metrics-5b49fbfdf9-t9v4b_default_6de91caf-d065-49c0-a308-907f702a7d69_0
74b9b08ed33c registry.k8s.io/pause:3.9 "/pause" 8 minutes ago Exited (0) 3 minutes ago k8s_POD_prometheus-prometheus-node-exporter-5q7lk_default_a3a160cc-9e6b-45a2-ad17-3db29d2eb8a8_0
6d6a3db438eb 6e38f40d628d "/storage-provisioner" 16 minutes ago Exited (2) 3 minutes ago k8s_storage-provisioner_storage-provisioner_kube-system_b40c045c-836c-498b-a216-776cb9c3256b_1
f9c671b654f1 ead0a4a53df8 "/coredns -conf /etc…" 16 minutes ago Exited (0) 3 minutes ago k8s_coredns_coredns-5d78c9869d-ztzvg_kube-system_f386a107-09ac-400c-b9f7-326dcf9b9c27_0
5afe4a2fedee 5780543258cf "/usr/local/bin/kube…" 16 minutes ago Exited (2) 3 minutes ago k8s_kube-proxy_kube-proxy-6mnjz_kube-system_bb532329-2abd-461f-a1ea-e847f47ed3b7_0
af50bcb176c1 6e38f40d628d "/storage-provisioner" 16 minutes ago Exited (1) 16 minutes ago k8s_storage-provisioner_storage-provisioner_kube-system_b40c045c-836c-498b-a216-776cb9c3256b_0
1b3525175acf registry.k8s.io/pause:3.9 "/pause" 16 minutes ago Exited (0) 3 minutes ago k8s_POD_kube-proxy-6mnjz_kube-system_bb532329-2abd-461f-a1ea-e847f47ed3b7_0
26604668ca4f registry.k8s.io/pause:3.9 "/pause" 16 minutes ago Exited (0) 3 minutes ago k8s_POD_coredns-5d78c9869d-ztzvg_kube-system_f386a107-09ac-400c-b9f7-326dcf9b9c27_0
088776015b16 registry.k8s.io/pause:3.9 "/pause" 16 minutes ago Exited (0) 3 minutes ago k8s_POD_storage-provisioner_kube-system_b40c045c-836c-498b-a216-776cb9c3256b_0
1fb237e4ff7d 41697ceeb70b "kube-scheduler --au…" 17 minutes ago Exited (1) 3 minutes ago k8s_kube-scheduler_kube-scheduler-minikube_kube-system_e14e2f92c469337ac62a252dad99dcc5_0
bd52e743c4b7 08a0c939e61b "kube-apiserver --ad…" 17 minutes ago Exited (137) 3 minutes ago k8s_kube-apiserver_kube-apiserver-minikube_kube-system_52c7827b668dacfc820671bdb03eb45e_0
103a768da68c 7cffc01dba0e "kube-controller-man…" 17 minutes ago Exited (2) 3 minutes ago k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_8fd976984f81182c650e1a7b3a5489b7_0
df98d07ae628 86b6af7dd652 "etcd --advertise-cl…" 17 minutes ago Exited (0) 3 minutes ago k8s_etcd_etcd-minikube_kube-system_d3bb814607b61262cc219c8020a27fc0_0
f5480abf50df registry.k8s.io/pause:3.9 "/pause" 17 minutes ago Exited (0) 3 minutes ago k8s_POD_kube-apiserver-minikube_kube-system_52c7827b668dacfc820671bdb03eb45e_0
f3d966f5f577 registry.k8s.io/pause:3.9 "/pause" 17 minutes ago Exited (0) 3 minutes ago k8s_POD_kube-controller-manager-minikube_kube-system_8fd976984f81182c650e1a7b3a5489b7_0
ee998efeae08 registry.k8s.io/pause:3.9 "/pause" 17 minutes ago Exited (0) 3 minutes ago k8s_POD_etcd-minikube_kube-system_d3bb814607b61262cc219c8020a27fc0_0
1e9379b76e0d registry.k8s.io/pause:3.9 "/pause" 17 minutes ago Exited (0) 3 minutes ago k8s_POD_kube-scheduler-minikube_kube-system_e14e2f92c469337ac62a252dad99dcc5_0
time="2023-07-19T21:22:48Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/cri-dockerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
*
* ==> coredns [f9c671b654f1] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 4369d49e705690634e66dc4876ba448687add67b4e702a1c8bd9cbe26bf81de42209d08c6b52f2167c69004abbe79b233480d7bb5830c218d455f30e7efd3686
CoreDNS-1.10.1
linux/amd64, go1.20, 055b2c3
[INFO] 127.0.0.1:43466 - 25021 "HINFO IN 5719781401643078827.5804491486235365698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.093599811s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] 10.244.0.4:59117 - 45942 "AAAA IN raw.githubusercontent.com.default.svc.cluster.local. udp 80 false 1232" NXDOMAIN qr,aa,rd 162 0.000352956s
[INFO] 10.244.0.4:58700 - 52199 "AAAA IN grafana.com.default.svc.cluster.local. udp 66 false 1232" NXDOMAIN qr,aa,rd 148 0.00053333s
[INFO] 10.244.0.4:52387 - 32734 "A IN raw.githubusercontent.com.default.svc.cluster.local. udp 80 false 1232" NXDOMAIN qr,aa,rd 162 0.000516818s
[INFO] 10.244.0.4:49507 - 26904 "AAAA IN raw.githubusercontent.com.svc.cluster.local. udp 72 false 1232" NXDOMAIN qr,aa,rd 154 0.00011105s
[INFO] 10.244.0.4:53595 - 46861 "A IN grafana.com.default.svc.cluster.local. udp 66 false 1232" NXDOMAIN qr,aa,rd 148 0.000141093s
[INFO] 10.244.0.4:52880 - 37327 "AAAA IN grafana.com.svc.cluster.local. udp 58 false 1232" NXDOMAIN qr,aa,rd 140 0.000112958s
[INFO] 10.244.0.4:55577 - 31469 "A IN raw.githubusercontent.com.svc.cluster.local. udp 72 false 1232" NXDOMAIN qr,aa,rd 154 0.000062898s
[INFO] 10.244.0.4:37234 - 38151 "AAAA IN raw.githubusercontent.com.cluster.local. udp 68 false 1232" NXDOMAIN qr,aa,rd 150 0.000102087s
[INFO] 10.244.0.4:55469 - 56959 "A IN raw.githubusercontent.com.cluster.local. udp 68 false 1232" NXDOMAIN qr,aa,rd 150 0.000193867s
[INFO] 10.244.0.4:42661 - 58682 "A IN grafana.com.svc.cluster.local. udp 58 false 1232" NXDOMAIN qr,aa,rd 140 0.000111768s
[INFO] 10.244.0.4:47840 - 4522 "A IN grafana.com.cluster.local. udp 54 false 1232" NXDOMAIN qr,aa,rd 136 0.000147251s
[INFO] 10.244.0.4:52919 - 16026 "AAAA IN grafana.com.cluster.local. udp 54 false 1232" NXDOMAIN qr,aa,rd 136 0.000056851s
[INFO] 10.244.0.4:46374 - 40673 "AAAA IN grafana.com. udp 40 false 1232" NOERROR qr,rd,ra 68 0.009529108s
[INFO] 10.244.0.4:42452 - 16837 "A IN grafana.com. udp 40 false 1232" NOERROR qr,rd,ra 56 0.01009878s
[INFO] 10.244.0.4:54915 - 57131 "A IN raw.githubusercontent.com. udp 54 false 1232" NOERROR qr,rd,ra 207 0.042802626s
[INFO] 10.244.0.4:42081 - 29683 "AAAA IN raw.githubusercontent.com. udp 54 false 1232" NOERROR qr,rd,ra 255 0.043445605s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
*
* ==> dmesg <==
* [Jul19 21:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +6.292379] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[Jul19 21:21] systemd-fstab-generator[100]: Ignoring "noauto" for root device
[ +0.123201] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +4.153339] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +5.562867] systemd-fstab-generator[501]: Ignoring "noauto" for root device
[ +0.162740] systemd-fstab-generator[512]: Ignoring "noauto" for root device
[ +2.191640] systemd-fstab-generator[744]: Ignoring "noauto" for root device
[ +0.430632] systemd-fstab-generator[783]: Ignoring "noauto" for root device
[ +0.146333] systemd-fstab-generator[794]: Ignoring "noauto" for root device
[ +0.189436] systemd-fstab-generator[807]: Ignoring "noauto" for root device
[ +1.863616] systemd-fstab-generator[990]: Ignoring "noauto" for root device
[ +0.169118] systemd-fstab-generator[1001]: Ignoring "noauto" for root device
[ +0.152715] systemd-fstab-generator[1012]: Ignoring "noauto" for root device
[ +0.165361] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
*
* ==> etcd [df98d07ae628] <==
* {"level":"info","ts":"2023-07-19T21:05:35.064Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://10.0.2.15:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://10.0.2.15:2380","--initial-cluster=minikube=https://10.0.2.15:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://10.0.2.15:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://10.0.2.15:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2023-07-19T21:05:35.068Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://10.0.2.15:2380"]}
{"level":"info","ts":"2023-07-19T21:05:35.068Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-07-19T21:05:35.069Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"]}
{"level":"info","ts":"2023-07-19T21:05:35.073Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.7","git-sha":"215b53cf3","go-version":"go1.17.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://10.0.2.15:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-07-19T21:05:35.092Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"9.858068ms"}
{"level":"info","ts":"2023-07-19T21:05:35.105Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"f074a195de705325","cluster-id":"ef296cf39f5d9d66"}
{"level":"info","ts":"2023-07-19T21:05:35.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=()"}
{"level":"info","ts":"2023-07-19T21:05:35.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became follower at term 0"}
{"level":"info","ts":"2023-07-19T21:05:35.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f074a195de705325 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2023-07-19T21:05:35.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became follower at term 1"}
{"level":"info","ts":"2023-07-19T21:05:35.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
{"level":"warn","ts":"2023-07-19T21:05:35.118Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-07-19T21:05:35.128Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2023-07-19T21:05:35.154Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-07-19T21:05:35.178Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"f074a195de705325","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-07-19T21:05:35.182Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-07-19T21:05:35.182Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2023-07-19T21:05:35.182Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-07-19T21:05:35.182Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-07-19T21:05:35.198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
{"level":"info","ts":"2023-07-19T21:05:35.198Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
{"level":"info","ts":"2023-07-19T21:05:35.202Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-07-19T21:05:35.202Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-07-19T21:05:35.202Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"10.0.2.15:2380"}
{"level":"info","ts":"2023-07-19T21:05:35.222Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"10.0.2.15:2380"}
{"level":"info","ts":"2023-07-19T21:05:35.246Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-19T21:05:35.260Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:minikube ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
{"level":"info","ts":"2023-07-19T21:05:35.261Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-07-19T21:05:35.269Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
{"level":"info","ts":"2023-07-19T21:05:35.272Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-19T21:05:35.273Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-19T21:05:35.273Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-07-19T21:05:35.278Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-07-19T21:05:35.287Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-07-19T21:05:35.317Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-07-19T21:05:35.329Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-07-19T21:05:46.171Z","caller":"traceutil/trace.go:171","msg":"trace[1641790361] transaction","detail":"{read_only:false; response_revision:304; number_of_response:1; }","duration":"106.241951ms","start":"2023-07-19T21:05:46.065Z","end":"2023-07-19T21:05:46.171Z","steps":["trace[1641790361] 'process raft request' (duration: 32.845785ms)","trace[1641790361] 'compare' (duration: 73.065651ms)"],"step_count":2}
{"level":"info","ts":"2023-07-19T21:05:54.436Z","caller":"traceutil/trace.go:171","msg":"trace[179649855] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"207.615881ms","start":"2023-07-19T21:05:54.229Z","end":"2023-07-19T21:05:54.436Z","steps":["trace[179649855] 'process raft request' (duration: 207.507908ms)"],"step_count":1}
{"level":"info","ts":"2023-07-19T21:06:31.038Z","caller":"traceutil/trace.go:171","msg":"trace[2131474467] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"284.005009ms","start":"2023-07-19T21:06:30.753Z","end":"2023-07-19T21:06:31.038Z","steps":["trace[2131474467] 'process raft request' (duration: 283.908944ms)"],"step_count":1}
{"level":"info","ts":"2023-07-19T21:15:37.590Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":633}
{"level":"info","ts":"2023-07-19T21:15:37.591Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":633,"took":"1.011199ms","hash":720540028}
{"level":"info","ts":"2023-07-19T21:15:37.591Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":720540028,"revision":633,"compact-revision":-1}
{"level":"info","ts":"2023-07-19T21:19:33.895Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-07-19T21:19:33.895Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"]}
{"level":"info","ts":"2023-07-19T21:19:33.949Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f074a195de705325","current-leader-member-id":"f074a195de705325"}
{"level":"info","ts":"2023-07-19T21:19:33.952Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"10.0.2.15:2380"}
{"level":"info","ts":"2023-07-19T21:19:33.953Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"10.0.2.15:2380"}
{"level":"info","ts":"2023-07-19T21:19:33.953Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"minikube","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"]}
*
* ==> kernel <==
* 21:22:48 up 1 min, 0 users, load average: 0.09, 0.07, 0.02
Linux minikube 5.10.57 #1 SMP Sat Jul 15 01:42:36 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [bd52e743c4b7] <==
* "BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0719 21:19:43.692994 1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0719 21:19:43.703148 1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0719 21:19:43.773871 1 logging.go:59] [core] [Channel #202 SubChannel #203] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0719 21:19:43.866251 1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0719 21:19:43.870792 1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0719 21:19:44.015740 1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0719 21:19:44.043267 1 logging.go:59] [core] [Channel #211 SubChannel #212] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-controller-manager [103a768da68c] <==
* I0719 21:05:55.926229 1 shared_informer.go:318] Caches are synced for PVC protection
I0719 21:05:55.931823 1 shared_informer.go:318] Caches are synced for ephemeral
I0719 21:05:55.984600 1 shared_informer.go:318] Caches are synced for attach detach
I0719 21:05:56.331749 1 shared_informer.go:318] Caches are synced for garbage collector
I0719 21:05:56.331831 1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
I0719 21:05:56.353703 1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6mnjz"
I0719 21:05:56.359430 1 shared_informer.go:318] Caches are synced for garbage collector
I0719 21:05:56.474099 1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1"
I0719 21:05:56.796354 1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-ztzvg"
I0719 21:10:55.973355 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="prometheuses.monitoring.coreos.com"
I0719 21:10:55.973439 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="alertmanagers.monitoring.coreos.com"
I0719 21:10:55.973462 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="alertmanagerconfigs.monitoring.coreos.com"
I0719 21:10:55.973485 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="probes.monitoring.coreos.com"
I0719 21:10:55.973506 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="prometheusrules.monitoring.coreos.com"
I0719 21:10:55.973526 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="prometheusagents.monitoring.coreos.com"
I0719 21:10:55.973556 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="podmonitors.monitoring.coreos.com"
I0719 21:10:55.973584 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="servicemonitors.monitoring.coreos.com"
I0719 21:10:55.973674 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="thanosrulers.monitoring.coreos.com"
I0719 21:10:55.973957 1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="scrapeconfigs.monitoring.coreos.com"
I0719 21:10:55.974121 1 shared_informer.go:311] Waiting for caches to sync for resource quota
I0719 21:10:56.274905 1 shared_informer.go:318] Caches are synced for resource quota
I0719 21:10:56.424378 1 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0719 21:10:56.424455 1 shared_informer.go:318] Caches are synced for garbage collector
I0719 21:13:36.393382 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:36.424442 1 event.go:307] "Event occurred" object="default/prometheus-kube-prometheus-admission-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-kube-prometheus-admission-create-7knp5"
I0719 21:13:36.427664 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:36.443991 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:36.446407 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:36.466311 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:51.635961 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:52.814602 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:53.660814 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:53.825122 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:53.835170 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:53.846958 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:53.847464 1 event.go:307] "Event occurred" object="default/prometheus-kube-prometheus-admission-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0719 21:13:53.915429 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-create
I0719 21:13:54.431314 1 event.go:307] "Event occurred" object="default/prometheus-prometheus-node-exporter" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-prometheus-node-exporter-5q7lk"
I0719 21:13:54.431425 1 event.go:307] "Event occurred" object="default/prometheus-kube-state-metrics" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set prometheus-kube-state-metrics-5b49fbfdf9 to 1"
I0719 21:13:54.437650 1 event.go:307] "Event occurred" object="default/prometheus-kube-prometheus-operator" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set prometheus-kube-prometheus-operator-7dfb8b9b74 to 1"
I0719 21:13:54.438837 1 event.go:307] "Event occurred" object="default/prometheus-kube-state-metrics-5b49fbfdf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-kube-state-metrics-5b49fbfdf9-t9v4b"
I0719 21:13:54.490870 1 event.go:307] "Event occurred" object="default/prometheus-kube-prometheus-operator-7dfb8b9b74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-kube-prometheus-operator-7dfb8b9b74-r29vp"
I0719 21:13:54.495071 1 event.go:307] "Event occurred" object="default/prometheus-grafana" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set prometheus-grafana-7478bfcd9 to 1"
I0719 21:13:54.548691 1 event.go:307] "Event occurred" object="default/prometheus-grafana-7478bfcd9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-grafana-7478bfcd9-dmfxf"
I0719 21:13:57.374790 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:13:57.425627 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:13:57.427839 1 event.go:307] "Event occurred" object="default/prometheus-kube-prometheus-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: prometheus-kube-prometheus-admission-patch-6qjcn"
I0719 21:13:57.452313 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:13:57.474019 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:13:57.484363 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:02.125367 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:03.478416 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:04.138288 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:04.488425 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:04.500606 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:04.506815 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:04.507921 1 event.go:307] "Event occurred" object="default/prometheus-kube-prometheus-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0719 21:14:04.580645 1 job_controller.go:523] enqueueing job default/prometheus-kube-prometheus-admission-patch
I0719 21:14:28.400732 1 event.go:307] "Event occurred" object="default/alertmanager-prometheus-kube-prometheus-alertmanager" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod alertmanager-prometheus-kube-prometheus-alertmanager-0 in StatefulSet alertmanager-prometheus-kube-prometheus-alertmanager successful"
I0719 21:14:29.150136 1 event.go:307] "Event occurred" object="default/prometheus-prometheus-kube-prometheus-prometheus" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod prometheus-prometheus-kube-prometheus-prometheus-0 in StatefulSet prometheus-prometheus-kube-prometheus-prometheus successful"
*
* ==> kube-proxy [5afe4a2fedee] <==
* I0719 21:05:58.572843 1 node.go:141] Successfully retrieved node IP: 10.0.2.15
I0719 21:05:58.573399 1 server_others.go:110] "Detected node IP" address="10.0.2.15"
I0719 21:05:58.573542 1 server_others.go:554] "Using iptables proxy"
I0719 21:05:58.622209 1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
I0719 21:05:58.622232 1 server_others.go:192] "Using iptables Proxier"
I0719 21:05:58.622325 1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0719 21:05:58.623092 1 server.go:658] "Version info" version="v1.27.3"
I0719 21:05:58.623258 1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0719 21:05:58.640824 1 config.go:188] "Starting service config controller"
I0719 21:05:58.641381 1 shared_informer.go:311] Waiting for caches to sync for service config
I0719 21:05:58.641420 1 config.go:97] "Starting endpoint slice config controller"
I0719 21:05:58.641428 1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0719 21:05:58.642121 1 config.go:315] "Starting node config controller"
I0719 21:05:58.642133 1 shared_informer.go:311] Waiting for caches to sync for node config
I0719 21:05:58.742435 1 shared_informer.go:318] Caches are synced for node config
I0719 21:05:58.742532 1 shared_informer.go:318] Caches are synced for service config
I0719 21:05:58.742587 1 shared_informer.go:318] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [1fb237e4ff7d] <==
* I0719 21:05:39.531691 1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0719 21:05:39.531773 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0719 21:05:39.580156 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0719 21:05:39.581053 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0719 21:05:39.580730 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0719 21:05:39.581312 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0719 21:05:39.580965 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0719 21:05:39.581456 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0719 21:05:39.581009 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0719 21:05:39.581563 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0719 21:05:39.585044 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0719 21:05:39.585262 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0719 21:05:39.585379 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:39.585449 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0719 21:05:39.585580 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0719 21:05:39.585653 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0719 21:05:39.585770 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:39.585854 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0719 21:05:39.586241 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0719 21:05:39.586827 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0719 21:05:39.586429 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:39.586977 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0719 21:05:39.586471 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0719 21:05:39.587766 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0719 21:05:39.586520 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0719 21:05:39.588205 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0719 21:05:39.586547 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:39.588325 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0719 21:05:39.586592 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0719 21:05:39.588439 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0719 21:05:39.586637 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0719 21:05:39.588542 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0719 21:05:40.437958 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0719 21:05:40.438163 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0719 21:05:40.514709 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0719 21:05:40.514759 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0719 21:05:40.538532 1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0719 21:05:40.538608 1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0719 21:05:40.549236 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0719 21:05:40.549305 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0719 21:05:40.702501 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0719 21:05:40.702529 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0719 21:05:40.711201 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0719 21:05:40.711274 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0719 21:05:40.726986 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:40.727075 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0719 21:05:40.821635 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0719 21:05:40.822255 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0719 21:05:40.827371 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:40.827518 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0719 21:05:40.845636 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:40.845706 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0719 21:05:40.861530 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0719 21:05:40.861604 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0719 21:05:40.906351 1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0719 21:05:40.906382 1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0719 21:05:42.432817 1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0719 21:19:33.969209 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0719 21:19:33.969262 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0719 21:19:33.969467 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Wed 2023-07-19 21:21:02 UTC, ends at Wed 2023-07-19 21:22:49 UTC. --
-- No entries --
*
* ==> storage-provisioner [6d6a3db438eb] <==
* I0719 21:06:28.678120 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0719 21:06:28.712204 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0719 21:06:28.712490 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0719 21:06:28.723718 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0719 21:06:28.724157 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_8a939d51-81d8-4b9d-a322-80dc7ba9543e!
I0719 21:06:28.726693 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fdf2a9e6-14bf-4ca4-ad9e-be889290c776", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_8a939d51-81d8-4b9d-a322-80dc7ba9543e became leader
I0719 21:06:28.824819 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_8a939d51-81d8-4b9d-a322-80dc7ba9543e!
*
* ==> storage-provisioner [af50bcb176c1] <==
* I0719 21:05:58.159824 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0719 21:06:28.163779 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout